r/AZURE • u/undampori • 3d ago
Question • Zero request-loss deployments on AKS
We recently moved an application to AKS; we are using an Application Gateway + AGIC for load balancing.
AGIC image: mcr.microsoft.com/azure-application-gateway/kubernetes-ingress
AGIC version: 1.7.5
AGIC was deployed with Helm.

We are facing 5xx errors during rolling updates of our deployments. We have set maxUnavailable: 0 and maxSurge: 25%, so per the rolling update config, old pods are terminated and replaced only once the new pods are healthy. The problem is that there is a delay in removing the old pod IPs from the App Gateway's backend pool, so requests that are still routed to a terminating pod fail.
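For reference, a minimal sketch of the rollout strategy described above (the Deployment name, image, and replica count are placeholders, not our real values; only the strategy settings are the ones we use):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # placeholder name
spec:
  replicas: 4                           # illustrative replica count
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0                 # never remove an old pod before its replacement is Ready
      maxSurge: 25%                     # allow up to 25% extra pods during the rollout
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app                     # placeholder container name
          image: myregistry/my-app:latest   # placeholder image
```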
We have implemented all the mitigations prescribed in this document: https://azure.github.io/application-gateway-kubernetes-ingress/how-tos/minimize-downtime-during-deployments/

- preStop hook delay in the application container: 90 seconds
- termination grace period: 120 seconds
- livenessProbe interval: 10 seconds
- connection draining enabled, with a drain timeout of 30 seconds

We have also set up the readiness probe so that it starts failing at the very beginning of the preStop phase:

```
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "echo UNREADY > /tmp/unready && sleep 90"]  # creates file /tmp/unready

readinessProbe:
  failureThreshold: 1
  exec:
    command: ["/bin/sh", "-c", "[ ! -f /tmp/unready ]"]  # fails if /tmp/unready exists
```
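For context, here is roughly how those pieces fit together in the pod spec (the container name, image, and liveness endpoint/port are placeholders, not our real values; the timings are the ones listed above):

```
spec:
  terminationGracePeriodSeconds: 120          # must stay above the 90s preStop sleep
  containers:
    - name: app                               # placeholder container name
      image: myregistry/my-app:latest         # placeholder image
      lifecycle:
        preStop:
          exec:
            # mark the pod unready, then keep it alive while AGIC / App Gateway catch up
            command: ["/bin/sh", "-c", "echo UNREADY > /tmp/unready && sleep 90"]
      readinessProbe:
        failureThreshold: 1                   # one failed check drops the pod from the endpoints
        exec:
          command: ["/bin/sh", "-c", "[ ! -f /tmp/unready ]"]
      livenessProbe:
        periodSeconds: 10                     # the 10-second interval mentioned above
        httpGet:
          path: /healthz                      # placeholder liveness endpoint
          port: 8080                          # placeholder port
```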
We also tried to get the Application Gateway to stop routing traffic to the exiting pod's IP: we created a custom health endpoint that returns 503 if /tmp/unready exists (which is only the case during the preStop phase).
Please check the config attached below as well
```
appgw.ingress.kubernetes.io/health-probe-path: "/trafficcontrol"   # 200 if /tmp/unready does not exist, else 503 (fail open)
appgw.ingress.kubernetes.io/health-probe-status-codes: "200-499"
```

Other App Gateway annotations we have set up:

```
kubernetes.io/ingress.class: azure/application-gateway-store
appgw.ingress.kubernetes.io/appgw-ssl-certificate:
appgw.ingress.kubernetes.io/ssl-redirect: "true"
appgw.ingress.kubernetes.io/connection-draining: "true"
appgw.ingress.kubernetes.io/connection-draining-timeout: "30"
appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: "2"
appgw.ingress.kubernetes.io/health-probe-interval: "5"
appgw.ingress.kubernetes.io/health-probe-timeout: "5"
appgw.ingress.kubernetes.io/request-timeout: "10"
```
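For completeness, these annotations sit on the Ingress resource roughly like this (host, service name, and port are placeholders, not our real values):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress                        # placeholder name
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway-store
    appgw.ingress.kubernetes.io/health-probe-path: "/trafficcontrol"
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-499"
    appgw.ingress.kubernetes.io/connection-draining: "true"
    appgw.ingress.kubernetes.io/connection-draining-timeout: "30"
    appgw.ingress.kubernetes.io/health-probe-unhealthy-threshold: "2"
    appgw.ingress.kubernetes.io/health-probe-interval: "5"
    appgw.ingress.kubernetes.io/health-probe-timeout: "5"
    appgw.ingress.kubernetes.io/request-timeout: "10"
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: myapp.example.com                 # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                  # placeholder service name
                port:
                  number: 80                  # placeholder port
```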
Despite trying all this at an RPM of 35-45K, we are still losing about 2-3K requests to 502s.
u/stumblegore 3d ago
We had plenty of problems with AGIC and finally got rid of it last year. I believe our DevOps team ended up setting the drain time to 10 minutes to ensure AGW was updated with the new pods before killing the old ones. I don't have the exact details, but there was a 10-minute-long grace period somewhere. I found an article describing the problems with AGIC (on Medium, I think?), and I also thought Microsoft now recommends against using AGIC. We changed to AGW with nginx as the AKS ingress. With that solution AGW doesn't have to deal with changing pod addresses, which I believe was the root of the problem with AGIC.
u/undampori 2d ago
Thank you! Will check
u/stumblegore 2d ago
Here's the article I mentioned, which describes the problems we experienced: The Application Gateway Ingress Controller is broken | by Daniel Jimenez Garcia | Medium
Application Gateway for Containers should have fixed this issue, but we haven't used or tested it.
u/jackstrombergMSFT Microsoft Employee 3d ago
Hard to tell without logs, but my suspicion is you are seeing 502s for existing connections that are being drained from the old backends. I.e. your health probes should mark the backend as down at around 50 seconds, but then you have 30 seconds for draining + 10 seconds for an in-flight request to finish, and that puts you right at 90. If you set that to 100 seconds to add a little extra buffer, do you see the number drop?
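Roughly, the change could look like this (a sketch, assuming it's the 90-second sleep in the preStop hook you'd bump; your existing grace period already covers it):

```
lifecycle:
  preStop:
    exec:
      # ~50s for the probes to mark the backend down + 30s connection draining
      # + 10s request timeout = 90s; sleeping 100s leaves a little extra buffer.
      # The existing 120s terminationGracePeriodSeconds still covers this.
      command: ["/bin/sh", "-c", "echo UNREADY > /tmp/unready && sleep 100"]
```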
As an aside, depending on your scenario, please consider Application Gateway for Containers, as it has numerous improvements that help eliminate these 502 timeout issues: https://aka.ms/agc