The HAProxy Ingress Behavior That Broke Our Google SSO in Production
Google OAuth was broken in production. Not intermittently. Completely. Every user who clicked “Login with Google” hit an error page. The callback URL was wrong, NextAuth was rejecting it, and nothing in our recent deployments explained why.
The logs showed a clean 308 redirect. HAProxy was adding a trailing slash to our OAuth callback path. One character, silently appended, was enough to invalidate the entire flow.
What made it worse was that we had not touched the Ingress responsible for that route. No config changes, no deployments to that service, nothing. The Ingress looked exactly as it should.
What Actually Happened
OAuth flow depends on exact URL matching. Google redirects the user back to a pre-registered callback URL. If anything changes that URL, even a trailing slash, the provider rejects it.
Our flow was:
1. User clicks “Login with Google”
2. Google redirects back to /api/auth/callback/google?state=...&code=...
3. HAProxy intercepts and issues a 308 to /api/auth/callback/google/
4. NextAuth receives the modified callback URL and fails with an OAuthCallback error
Step three is where everything went wrong. HAProxy was redirecting the OAuth callback with a trailing slash, and the redirect was also dropping the query string. The state and code parameters, the ones Google sends back to complete the OAuth handshake, were gone.
The redirect rule responsible was in a backend-config-snippet on an Ingress we had recently updated. But not the Ingress serving the OAuth route. A completely different Ingress, for a completely different domain.
The Behavior Nobody Documents Clearly
HAProxy Ingress Controller merges backend-config-snippet annotations per backend service, not per Ingress resource.
This is the behavior that catches teams off guard. When two or more Ingress resources point to the same Kubernetes Service, every backend-config-snippet from every one of those Ingresses gets injected into the same backend block in the generated haproxy.cfg.
To make it concrete:
# Ingress A - domain: staging.example.com
backend-config-snippet: |
http-request redirect code 308 location %[path]/ if !{ path_end / }
# Ingress B - domain: www.example.com
backend-config-snippet: |
http-request redirect code 308 location %[path]/ if !{ path_end / }
# Both point to: my-app-service
What ends up in haproxy.cfg:
backend my-app-service
### injected from Ingress A ###
http-request redirect code 308 location %[path]/ if !{ path_end / }
### injected from Ingress B ###
http-request redirect code 308 location %[path]/ if !{ path_end / }
Both rules are active. Both apply to all traffic hitting that backend, regardless of which domain it came from. Changing one Ingress changes the behavior for traffic from every Ingress that shares the same service.
In our case, a redirect rule on a non-production Ingress was quietly being evaluated against OAuth callbacks on our main domain.
Four Things We Got Wrong
1. Redirect rules do not belong in backend-config-snippet
http-request redirect rules are meant for the frontend. In HAProxy’s architecture, the frontend evaluates the request before routing it to a backend. That is the right place for redirect logic, before you have committed to a backend, and before the rules from unrelated Ingresses can interfere.
Backend rules are evaluated after the routing decision. Putting redirects there works, but it means the rule runs in a context shared by every Ingress pointing at that service. Use haproxy.org/frontend-config-snippet for redirect rules instead.
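As a sketch of the safer shape (the Ingress name, host, and service are hypothetical, and frontend snippet support varies by controller version, so check your controller's documentation for where frontend snippets may be set), the rule moves to the frontend and is guarded by host, since the frontend sees traffic for every domain:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: www-redirects                 # hypothetical
  annotations:
    # Frontend rules run before backend selection; the host guard keeps
    # the redirect from leaking onto other domains behind the controller.
    haproxy.org/frontend-config-snippet: |
      http-request redirect code 308 location %[path]/?%[query] if { req.hdr(host) -i www.example.com } !{ path_end / } !{ path_beg /api }
spec:
  ingressClassName: haproxy
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service  # hypothetical
                port:
                  number: 80
```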
2. Every trailing slash redirect needs an /api exclusion
A redirect rule that catches all paths without exceptions will catch OAuth callbacks, webhook endpoints, and any other API path that cannot tolerate a modified URL.
# This will redirect /api/auth/callback/google to /api/auth/callback/google/
http-request redirect code 308 location %[path]/ if !{ path_end / } !{ path_reg \.(js|css|png|jpg|ico|svg|woff2)$ }
# This will not
http-request redirect code 308 location %[path]/?%[query] if !{ path_end / } !{ path_beg /api } !{ path_reg \.(js|css|png|jpg|ico|svg|woff2)$ }
Any path under /api should be excluded from trailing slash redirects. These paths are consumed by code, not browsers. They do not need normalization, and they cannot tolerate unexpected redirects.
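The ACL logic above is easy to sanity-check by modeling it in a few lines of shell. This is a hypothetical stand-in for HAProxy's matching, not HAProxy itself, but it mirrors the three conditions in the fixed rule:

```shell
# Model of the fixed rule's conditions: redirect only if the path does not
# already end in /, is not under /api, and is not a static asset.
would_redirect() {
  path="$1"
  case "$path" in
    */) return 1 ;;                                          # path_end /
    /api*) return 1 ;;                                       # path_beg /api
    *.js|*.css|*.png|*.jpg|*.ico|*.svg|*.woff2) return 1 ;;  # path_reg asset match
  esac
  return 0
}

would_redirect /api/auth/callback/google && echo redirect || echo pass  # pass: /api is excluded
would_redirect /docs && echo redirect || echo pass                      # redirect: plain page path
would_redirect /logo.svg && echo redirect || echo pass                  # pass: static asset
```

Running hypothetical paths through a model like this before shipping a rule is a cheap way to catch an OAuth callback or webhook endpoint hiding in the match.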
3. Never redirect without %[query]
location %[path]/ drops the query string. location %[path]/?%[query] preserves it.
For OAuth, the query string is the entire payload: state, code, session_state. Losing it means the authorization flow is dead on arrival. This is a one-character difference that breaks authentication completely.
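The difference is easy to see by expanding both forms by hand (the state and code values here are made up):

```shell
path="/api/auth/callback/google"
query="state=abc123&code=4-0Axyz"

# What `location %[path]/` emits: the query string is gone
broken="${path}/"
echo "$broken"    # /api/auth/callback/google/

# What `location %[path]/?%[query]` emits: state and code survive
fixed="${path}/?${query}"
echo "$fixed"     # /api/auth/callback/google/?state=abc123&code=4-0Axyz
```

One caveat: %[path]/?%[query] appends a bare ? when the query string is empty. If that matters for your application, split the rule and guard the query-carrying variant with a condition like { query -m found }.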
4. The Ingress manifest is not the source of truth
Reading the Ingress YAML tells you what you configured. It does not tell you what HAProxy is actually running. The generated haproxy.cfg is the real source of truth, and it reflects the merged output of every Ingress touching your backend.
When debugging HAProxy behavior, inspect the generated config directly:
kubectl exec -n <haproxy-namespace> <haproxy-pod> -- cat /etc/haproxy/haproxy.cfg
Search for every snippet injected into the backend you care about, not just the ones from the Ingress you edited. That is the only way to see the full picture of what rules are actually evaluating your traffic.
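Once you have the config, isolate the one backend and count what was merged into it. This sketch runs against a trimmed sample file rather than a live pod (the backend names are hypothetical):

```shell
# Trimmed stand-in for the haproxy.cfg you would pull with kubectl exec
cat > /tmp/haproxy-sample.cfg <<'EOF'
backend my-app-service
  ### injected from Ingress A ###
  http-request redirect code 308 location %[path]/ if !{ path_end / }
  ### injected from Ingress B ###
  http-request redirect code 308 location %[path]/ if !{ path_end / }
backend other-service
  server s1 10.0.0.1:80
EOF

# Print only the backend block you care about
awk '/^backend /{in_block=($2=="my-app-service")} in_block' /tmp/haproxy-sample.cfg

# Count how many redirect rules were merged into that one backend
awk '/^backend /{in_block=($2=="my-app-service")} in_block && /http-request redirect/' \
  /tmp/haproxy-sample.cfg | wc -l    # 2
```

If the count is higher than the number of redirect rules in the Ingress you edited, another Ingress is injecting into the same backend.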
The Blast Radius of Shared Backends
The deeper issue here is not the specific redirect rule. It is the implicit coupling that shared backends create in HAProxy.
In NGINX Ingress, each Ingress resource is relatively self-contained. An annotation on one Ingress does not bleed into another. In HAProxy, backend-config-snippet is pooled. The moment two Ingresses share a backend service, they share a config space. A change on one is a change on both, whether you intended it or not.
This is not a bug; it is a consequence of how HAProxy generates a single unified config file. But it means the operational model is different. You cannot reason about an Ingress in isolation. You have to reason about it in the context of every other Ingress sharing its backend.
Teams that come from NGINX tend to underestimate this. The annotation names look familiar, the concepts seem equivalent, and the differences only show up in production when something unrelated breaks.
HAProxy Controller Is Dynamic, and That Is the Real Risk
Everything described so far assumes you made a deliberate change and then something broke. But HAProxy Ingress Controller introduces a subtler risk that deserves its own attention: the controller is fully dynamic.
Every time any Ingress resource in the cluster changes, HAProxy regenerates its entire haproxy.cfg from scratch and reloads. This includes changes you did not make. A teammate applies an unrelated Ingress update in a different namespace. The controller pod gets restarted due to a node eviction or an OOM kill. A Helm chart upgrade touches a ConfigMap annotation. Any of these events trigger a full config reload, and the resulting config reflects the current state of every Ingress in the cluster at that exact moment.
This means the blast radius of any single change is not limited to the moment it is applied. It persists silently in the config, waiting for the next reload to activate in combination with something else.
The failure pattern is disorienting because there is often no obvious trigger. A service that was working at 2pm stops working at 3pm. No deployment happened to that service. No one touched its Ingress. The on-call engineer checks the usual suspects and finds nothing. What actually happened is that a controller restart re-evaluated the config, and the merged result of some unrelated snippet finally hit the right condition.
This is not hypothetical. It is a real operational pattern that teams running HAProxy Ingress at any meaningful scale will encounter eventually. The controller pod itself becomes a silent failure surface. You cannot reason about what is running in HAProxy without knowing the current merged state of all Ingresses, and that state can change without any intentional action on your part.
The checklist below is not just for when you are about to make a change. It applies any time HAProxy behavior changes unexpectedly. The first question is always: what changed in the generated config, not what changed in the Ingress you are looking at.
Checklist Before Applying Ingress Changes in HAProxy
Before applying any Ingress change that touches backend-config-snippet:
- Find every other Ingress sharing the same backend service: kubectl get ingress -A -o yaml | grep <service-name>
- Inspect the current generated haproxy.cfg for the affected backend before and after your change
- Confirm every redirect rule excludes /api paths
- Confirm every redirect rule preserves the query string with %[query]
- Test OAuth and SSO flows in staging before touching production
- Prefer frontend-config-snippet for redirect rules; backend snippets are for backend-specific behavior only
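The first checklist item can be scripted. This sketch runs jq against a saved sample of kubectl get ingress -A -o json output (namespaces, names, and services are hypothetical) instead of a live cluster; it lists every Ingress that routes to a given Service:

```shell
# Sample mimicking `kubectl get ingress -A -o json` (requires jq)
cat > /tmp/ingresses.json <<'EOF'
{"items": [
  {"metadata": {"namespace": "prod", "name": "app-main"},
   "spec": {"rules": [{"http": {"paths": [{"backend": {"service": {"name": "my-app-service"}}}]}}]}},
  {"metadata": {"namespace": "staging", "name": "app-staging"},
   "spec": {"rules": [{"http": {"paths": [{"backend": {"service": {"name": "my-app-service"}}}]}}]}},
  {"metadata": {"namespace": "prod", "name": "unrelated"},
   "spec": {"rules": [{"http": {"paths": [{"backend": {"service": {"name": "other-service"}}}]}}]}}
]}
EOF

# Every Ingress whose rules route to the named Service, as namespace/name
jq -r --arg svc my-app-service '
  .items[]
  | select([.spec.rules[]?.http.paths[]?.backend.service.name] | index($svc))
  | "\(.metadata.namespace)/\(.metadata.name)"
' /tmp/ingresses.json
# prod/app-main
# staging/app-staging
```

Two results for one service means two Ingresses share a backend config space, and every backend-config-snippet on either one applies to both.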
The rule that broke our OAuth was not malicious, not careless, and not written by someone who did not know what they were doing. It was written without knowing that HAProxy merges backend snippets across Ingress boundaries. Once you know that, the rest follows.