What NGINX to HAProxy Migration Taught Us About Config Blast Radius

Switching ingress controllers is not a lift-and-shift operation. NGINX and HAProxy are built on different architectural assumptions, and those differences compound at every layer — from how configuration is loaded to how certificates are selected to how the system behaves when a single rule is malformed.
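One concrete example of blast-radius control is validating a candidate config before reloading. Both proxies ship a syntax check that exits non-zero on a fatal parse error, so a reload can be gated on it; the paths below are illustrative, not from our setup.

```shell
# Validate the candidate config; both commands exit non-zero
# on a fatal parse error.
nginx -t -c /etc/nginx/nginx.conf
haproxy -c -f /etc/haproxy/haproxy.cfg

# Gate the reload on a passing check (illustrative)
nginx -t -c /etc/nginx/nginx.conf && nginx -s reload
```

What each proxy treats as fatal versus a warning differs, which is exactly where the blast-radius assumptions diverge.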

This is a post-migration review of what we found, what broke, and what needs to be in place before any team runs this in production.


The Open Source Bait and Switch Nobody Talks About

We needed an API gateway. Kong was $30k/year; AWS API Gateway had its own cost trap. Tyk’s open source gateway looked like the answer: free, performant, and written in Go.

The problem was route management. Tyk uses imperative API calls by default, but our infrastructure is fully declarative. Everything lives in Git, deployed with kubectl apply. We needed an operator.

Tyk has one. It’s called Tyk Operator and it’s exactly what we needed: declarative, GitOps-ready.
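To make the declarative model concrete, here is roughly what an API looks like as a Tyk Operator ApiDefinition custom resource. The names, namespace, and target URL are illustrative, not from our setup.

```yaml
# A keyless HTTP API declared as a Kubernetes custom resource,
# applied with kubectl apply like any other manifest.
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: httpbin
spec:
  name: httpbin
  protocol: http
  active: true
  use_keyless: true
  proxy:
    target_url: http://httpbin.default.svc:8000
    listen_path: /httpbin
    strip_listen_path: true
```

Because the route is just a manifest in Git, it goes through the same review and rollback flow as the rest of the infrastructure.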


Why Auto-Upgrade is Playing Russian Roulette With Your Uptime

The alert sound is burned into my brain now. That specific PagerDuty tone that means something is really wrong. Not “a pod restarted” wrong. Not “latency spike” wrong. The kind of wrong that makes your stomach drop before you even look at your phone.

Late Sunday night. I’d finally convinced myself to stop checking Slack every five minutes and actually relax. Big mistake.


Deploy Apache APISIX on Kubernetes

Running an API gateway in Kubernetes isn’t straightforward. Most documentation glosses over real issues like the etcd image registry problem post-VMware acquisition, CRD-based configuration patterns, and plugin troubleshooting. This guide covers deploying APISIX with local chart customization to handle these issues, and implementing traffic management patterns (rate limiting, circuit breakers, caching) through Kubernetes CRDs.
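As a taste of the CRD-based pattern, rate limiting on a route can be declared with the APISIX ingress controller’s ApisixRoute resource and the limit-count plugin. Names, paths, and thresholds below are examples, not from the guide.

```yaml
# Illustrative ApisixRoute: 10 requests per 60s window per route,
# rejecting excess traffic with HTTP 429.
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: httpbin-route
spec:
  http:
    - name: httpbin
      match:
        paths:
          - /api/*
      backends:
        - serviceName: httpbin
          servicePort: 80
      plugins:
        - name: limit-count
          enable: true
          config:
            count: 10
            time_window: 60
            rejected_code: 429
```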


Production-Ready Keycloak on Kubernetes with Auto-Clustering

The Challenge

Running Keycloak in production is notoriously challenging. Session loss during scaling, complex external cache configurations, and maintaining high availability while ensuring session persistence across multiple replicas are common pain points. Traditional approaches often require external Infinispan clusters or Redis, adding operational complexity and potential failure points.

Solution Overview

Instead of managing external caching systems, we can leverage Keycloak’s built-in clustering capabilities with Kubernetes-native service discovery. This approach uses JGroups with DNS-based discovery through headless services.
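A minimal sketch of the DNS-based discovery wiring, assuming a headless service named keycloak-headless and the Quarkus-based Keycloak distribution (the service name and namespace placeholder are illustrative):

```yaml
# Headless service: clusterIP None lets JGroups DNS_PING resolve
# every pod IP for cluster membership.
apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
spec:
  clusterIP: None
  selector:
    app: keycloak
  ports:
    - name: jgroups
      port: 7800
---
# Relevant Keycloak container settings (shown as comments):
# --cache-stack=kubernetes selects the DNS_PING-based JGroups stack,
# and jgroups.dns.query points it at the headless service.
#
# args: ["start", "--cache=ispn", "--cache-stack=kubernetes"]
# env:
#   - name: JAVA_OPTS_APPEND
#     value: "-Djgroups.dns.query=keycloak-headless.<namespace>.svc.cluster.local"
```

With this in place, replicas discover each other through DNS and form an embedded Infinispan cluster, with no external cache to operate.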
