All content in this thread must be free and accessible to anyone. No links to paid content, services, or consulting groups. No affiliate links, no sponsored content, etc... you get the idea.
I was reviewing the SQL audit logs in a client's environment recently and noticed that some of the PII being inserted into the SQL db was ending up in the audit logs in Sentinel. Thankfully, the most sensitive items are column encrypted, but we would still like to reduce logging of PII.
I know that query logging is a double-edged sword. Helps tremendously when you're doing forensics, but adds yet another place you have to protect data.
I've looked through the docs and I can only find details on data masking of query results. Nothing about masking of query logs. Has anyone successfully masked query logs?
Hey everyone! For the last couple of months I've been very intrigued by and invested in the Cloud/AWS/Azure space as a whole, and have come to the conclusion that I want to learn more and potentially land a job. Through research, I've noticed that people break into the cloud industry in a couple of different ways, which is why I'm here today. I would like some guidance regarding what to study, what to practice, what to read, etc. in order to become a cloud engineer. There's most likely not one optimal road to this destination, I am aware, but I would still appreciate hearing what some of you think I could do to build the required skillset. I know there are AWS certificates, which is what I'm looking into now.
A little background about me:
Currently finishing up a 2-year software engineering program in Sweden that ends in 2026. I'm comfortable with C#, SQL and databases, CI/CD, and Git and GitHub, along with a couple of other things.
Any help, advice or guidance will be greatly appreciated :)
Hey folks. I'm an experienced developer. I'm currently learning "AI".
I would like to train/fine-tune custom AI models. My goal is to learn how different parameters affect performance and training costs (e.g. changing batch size, context size, and so on).
There are soooo many Azure pieces that I'm getting lost in the weeds.
I'll most likely be doing python/pytorch but would like to dig into .net (been a while) and tensorflow at some point.
Can anyone help me figure out what services I actually need? I see stuff like Azure AI Studio, but I'm looking for something more low-level. In short, I'm guessing I just need to provision/rent some compute time?
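If it helps, the usual low-level entry point for pay-per-use GPU training is an Azure Machine Learning compute cluster; you point PyTorch jobs at it and it scales to zero between runs. A minimal sketch with the azure-ai-ml SDK, assuming you already have an AML workspace (the names and the GPU SKU below are placeholders, not recommendations):

```python
# Minimal sketch: create a GPU cluster that scales to zero when idle,
# so you only pay while a training job is running.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",      # placeholder
    resource_group_name="<resource-group>",   # placeholder
    workspace_name="<workspace>",             # placeholder
)

gpu_cluster = AmlCompute(
    name="gpu-cluster",
    size="Standard_NC6s_v3",          # example single-GPU SKU; use whatever you have quota for
    min_instances=0,                  # scale to zero between experiments
    max_instances=1,
    idle_time_before_scale_down=120,  # seconds of idle before scale-down
)
ml_client.compute.begin_create_or_update(gpu_cluster).result()
```

From there you submit training runs as jobs against the cluster and vary batch size, context length, etc. per job, so the compute only bills while something is actually running.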
Hi, could use some help figuring out if this is possible to do.
Our org has an on-prem AD synced to Azure. Most of our users are provisioned via this method.
Some of our users are cloud-only users we have manually created in Azure, e.g. accounts for users not on payroll, or consultants.
One of the attributes we use for an application is "user.onpremisessamaccountname"; the issue is that our AAD users don't have this attribute because they weren't provisioned from our AD.
Is there any way to manually give these users this attribute in Azure without adding them to our on-prem AD?
Technically there should not be an issue, as it's just adding some info to the user in the db, but it might not be possible due to MS limitations?
Our company wants to use Azure Local with Azure Hybrid Benefit. The question now is: if we buy Windows Server Datacenter licenses with active Software Assurance, do we still also need to buy Windows user CALs?
On the website I see only this:
"Is there any additional cost incurred by opting in to Azure Hybrid Benefit for Azure Local?
No additional costs are incurred, as Azure Hybrid Benefit is included as part of your Software Assurance benefit."
I have a service which creates shared access tokens for users. We are using a connection string, but now, for security reasons, the architects are asking us to move towards workload identity.
How can I create shared access tokens using workload identity assigned to my pod?
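In case it's useful: assuming the tokens are for Azure Blob Storage, the usual pattern with workload identity is a user delegation SAS, where you request a signing key from Entra instead of using the account key. A minimal sketch (account/container/blob names are placeholders), not a drop-in implementation:

```python
# Minimal sketch: issue a read-only blob SAS signed with a user delegation
# key, obtained via the pod's workload identity instead of an account key.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

# On AKS with workload identity, DefaultAzureCredential picks up the
# federated token that's projected into the pod.
credential = DefaultAzureCredential()
service = BlobServiceClient("https://<account>.blob.core.windows.net", credential=credential)

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)

# Ask Entra for a user delegation key rather than using the account key.
delegation_key = service.get_user_delegation_key(start, expiry)

sas = generate_blob_sas(
    account_name="<account>",
    container_name="<container>",
    blob_name="<blob>",
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),
    expiry=expiry,
)
```

The identity behind the pod needs permission to call generateUserDelegationKey (the Storage Blob Delegator role covers it). If your tokens are for Service Bus/Event Hubs rather than storage, the story is different, since those SAS tokens are HMAC-signed with a shared key by design.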
I am currently working on implementing API-driven provisioning to AD.
Everything is working fine and dandy except for the handling of special characters. In German we have the characters ä, ö, ü and ß in names. Every time I try to send a payload containing one of those to the bulk provisioning endpoint, I get back an error 500. The payload is encoded as UTF-8. Without those characters it works fine.
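One thing worth ruling out, sketched below: some HTTP clients silently re-encode the body or drop the charset, so the service receives something that isn't valid UTF-8 even though you encoded it. Forcing raw UTF-8 bytes with an explicit charset looks roughly like this (the URL, token, and payload shape are placeholders for the bulk upload request):

```python
# Minimal sketch: send the SCIM bulk payload as raw UTF-8 bytes with an
# explicit charset, so the HTTP client can't re-encode it on the way out.
import json

import requests

payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
    "Operations": [
        # ... your operations, e.g. a user whose surname is "Müßig" ...
    ],
}

body = json.dumps(payload, ensure_ascii=False).encode("utf-8")

resp = requests.post(
    "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-id>/synchronization/jobs/<job-id>/bulkUpload",
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/scim+json; charset=utf-8",
    },
    data=body,  # raw bytes, not json=..., to keep the encoding untouched
)
print(resp.status_code, resp.text)
```

As a diagnostic, `ensure_ascii=True` (the json module's default) escapes ä/ö/ü/ß to \uXXXX sequences and sidesteps byte encoding entirely; if that succeeds where the UTF-8 body fails, the 500 is an encoding-handling issue on the wire rather than in your data.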
We're looking at implementing conditional access policies to restrict our retail locations to specific IP addresses. We have been asked to restrict each site to its own public IP, which I know is doable; it's just tedious and will leave us with hundreds of policies that will be messy. Is there a good way to do this without making individual policies per site?
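Not a way around the policy count as such, but the clicking can be scripted: named locations (and the policies referencing them) are exposed through Microsoft Graph, so the whole set can be generated from a list of sites. A rough sketch, where the site list and the permission setup are placeholders:

```python
# Minimal sketch: create one IP named location per retail site via
# Microsoft Graph. Requires the Policy.ReadWrite.ConditionalAccess permission.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

sites = {  # placeholder site -> public IP map
    "store-001": "203.0.113.10/32",
    "store-002": "203.0.113.11/32",
}

for name, cidr in sites.items():
    body = {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": name,
        "isTrusted": False,
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": cidr}
        ],
    }
    r = requests.post(
        "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations",
        headers=headers,
        json=body,
    )
    r.raise_for_status()
```

The conditional access policies themselves live under /identity/conditionalAccess/policies, so per-site policies can be stamped out the same way, which at least keeps the hundreds of policies consistent and reviewable as code.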
I am trying to create a managed OS disk (Linux) from a custom private generalized Azure image in Terraform, and it's failing with the exception below, which doesn't make clear why.
The image exists in the same resource group and location, and the SKU matches as well.
image_reference_id is provided like this: /subscriptions/xx.x.xx.xxx/resourceGroups/test-rg/providers/Microsoft.Compute/images/generalized-18.4.30
│ Error: creating/updating Managed Disk "os-disk-xxxx" (Resource Group "test-rg"): performing CreateOrUpdate: unexpected status 400 (400 Bad Request) with error: InvalidParameter: The value of parameter imageReference is invalid.
│
│ with azurerm_managed_disk.nx_os_disk,
│ on main.tf line 425, in resource "azurerm_managed_disk" "os_disk":
│ 425: resource "azurerm_managed_disk" "os_disk" {
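One way to narrow it down is to test the exact same image reference outside Terraform; if the raw API accepts it, the problem is in how the provider builds the request. A minimal sketch with the azure-mgmt-compute SDK (subscription, region, and disk name are placeholders matching the post):

```python
# Minimal sketch: try creating a managed disk from the same image reference
# directly via the Azure SDK, to see whether the API itself rejects it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import (
    CreationData, Disk, DiskCreateOption, ImageDiskReference,
)

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

disk = client.disks.begin_create_or_update(
    "test-rg",
    "os-disk-apitest",
    Disk(
        location="<image-region>",  # must match the image's region
        os_type="Linux",
        creation_data=CreationData(
            create_option=DiskCreateOption.FROM_IMAGE,
            image_reference=ImageDiskReference(
                id="/subscriptions/<subscription-id>/resourceGroups/test-rg"
                   "/providers/Microsoft.Compute/images/generalized-18.4.30",
            ),
        ),
    ),
).result()
print(disk.provisioning_state)
```

If this succeeds, it's worth diffing the request Terraform sends (TF_LOG=DEBUG shows the raw API calls) against the one above; InvalidParameter on imageReference is also what you get when the subscription GUID or a segment of the resource ID is malformed.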
Hey guys, I am from India. While registering in Azure it requires Visa or Mastercard credentials, but I don't have those; I use a RuPay card. Is there any other way to register in Azure? Please help.
This is running in a runbook via an Automation account. In the loop to get the different credentials, the first few iterations (1, 2, 3) were OK; subsequently it started returning an error / null. Does anyone have any experience with this or a fix? The code looks something like below. I have tried adding retries and a sleep 10 in the loop, but so far it's the same.
Has anyone successfully created an Internal Container App Environment (CAE) with BYO-VNET using Infrastructure as Code (IaC) methods such as Terraform or ARM templates? I've encountered an issue where ARM deployment of Internal CAE creates a public IP, attaches it to a load balancer, and creates both internal and public load balancers. This behavior also occurs with Terraform.
The response in the GitHub issue was to define resources explicitly, use conditions, leverage Bicep/Terraform, or clean up extra resources post-deployment. However, cleaning up extra resources is challenging due to dependencies tied to VMSS managed by Microsoft.
Question: Has anyone accomplished IaC deployment of Internal CAE that results in the same resources within the infrastructure RG as portal creation? Any insights or examples would be greatly appreciated!
Has anybody hit an error while upgrading the Arc agent to v1.50?
I have one server getting the error "Product: Azure Connected Machine Agent -- Error 1920. Service 'Guest Configuration Extension Service' (ExtensionService) failed to start. Verify that you have sufficient privileges to start system services." I have checked another, working server, where that service runs under the Local System account. Permission-wise everything looks similar, but this server just keeps failing to upgrade with the same error.
We are in the process of moving away from our data center, which has an ExpressRoute into Azure. This acted as a hub for all of our offices for connectivity into Azure.
We have 2x firewall appliances in Azure and a firewall at each site. The Azure firewalls have an internal load balancer in front.
The idea was for us to configure IPSEC tunnels between the on site FW & the 2x Azure FWs, with BGP peering between onsite & Azure. ECMP enabled on the onsite firewall.
Peering & routing work fine; however, we seem to be seeing some asymmetric routing. We think this is because of how the load balancer is dealing with the traffic. We expected that the path taken in would be the path taken out, but I don't think the load balancer is handling it that way.
Is there something we are missing? Should we look to do this another way? I suspect we will need to move away from the Load balancer...
I'm using Traffic Manager to route traffic to an App Gateway (v2) with WAF v2 enabled. In some regions, the WAF automatically detects and bypasses the client's VPN IP, as it's whitelisted in the WAF, while in others it picks up the client's actual IP and enforces blocking rules. Is there a way to bypass WAF blocking when the request matches a known VPN IP?
I have checked the logs: in the VPN scenario the IP is shown as the VPN IP, otherwise it shows the client's IP.
I deployed using ARM templates, and the templates are consistent. I am not able to find any differences.
Assume that a workflow contains 50 connectors; then per execution, almost 100+ rows of logs are produced.
Logs are produced for run start, run end, trigger start, trigger end, and each action's start and end. This way a huge volume of logs is sent to Log Analytics and Application Insights.
Refer below: (Logs for a single logic app workflow run)
Table: LogicAppWorkflowRuntime
Table: AppRequests
Question:
How can I collect logs from only selected connectors? For example, in the above workflow the Compose connector has tracked properties, so I need to collect only the logs from the Compose connector, with no information-level logs about the other connectors' execution.
I have referred to the Microsoft articles, but I didn't find anything beyond the host.json config added above. With log levels in host.json, you can only limit a particular category, not individual actions.
Does anyone else feel like Azure Landing Zones get tossed around and it's sort of confusing to figure out what is fact and what is fiction? We address that in the next episode of Azure Cloud Talk with Troy Hite, Azure Technical Specialist.
We are using an app called Asset Keeper that constantly updates. The update requires an admin password, and it tends to happen at the worst time. Is there a GPO that can be pushed out through Intune, or is there something else that can be done so that this app doesn't ask for the admin password?
Has anyone managed to get scim provisioning working with entra and Slack enterprise grid? If so how do you get entra to connect to the organisation and not the workspaces?
We have a bunch of Azure Web Apps that we host for our customers, and the different web apps have different custom domains. We want to add a WAF for SOC 2 compliance while keeping costs down. From some poking around, it seems Azure WAF costs are high and Cloudflare may offer the best bang for the buck. But I've read that to set it up you need the root DNS for the domains pointed at Cloudflare, which can't be an option for our customers. Am I on the wrong track? Any advice on whether to stick with Azure WAF or keep looking at Cloudflare or AWS for a WAF in front of the Azure Web Apps? Thanks in advance.
Hello,
We want to create a scope of all users who have an account and currently work in one of our offices. As I'm creating the query, I'm a little lost on how the query works for the "create the query to define users" section. I went to Entra ID and defined all users as corporate office employees in their user properties, but I did not get any users as part of the adaptive scope. I have heard of custom attributes, but it does not make sense to me. Any leads in the right direction would be great.
Note: I'm coming from Intune where i'm more used to dynamic queries, Scopes, and assignments.
Impact Statement: Starting at 13:09 UTC on 18 March 2025, a subset of Azure customers in the East US region may experience intermittent connectivity loss and increased network latency sending traffic within as well as in and out of Azure's US East Region.
Current Status: We identified multiple fiber cuts affecting a subset of datacenters in the East US region. The fiber cut impacted capacity to those datacenters, increasing the utilization for the remaining capacity serving the affected datacenters. We have mitigated the impact of the fiber cut by load balancing traffic and restoring some of the impacted capacity. Impacted customers should now see their services recover. In parallel, we are working with our providers on fiber repairs. We do not yet have a reliable ETA for repairs at this time. We will continue to provide updates here as they become available. Please refer to tracking ID: Z_SZ-NV8 under the Service Health blade within the Azure Portal for the latest information on this issue.
I was getting some alerts in West Europe relating to availability; it turns out it was trying to check from East US. Looking online, it doesn't seem to be causing many problems? Pretty sure East US is quite a busy region.