The Open Source Bait and Switch Nobody Talks About
We needed an API gateway. Enterprise options like Kong ($30k/year) and AWS API Gateway were expensive. Tyk’s open source gateway looked perfect—free, performant, written in Go.
The alert sound is burned into my brain now. That specific PagerDuty tone that means something is really wrong. Not “a pod restarted” wrong. Not “latency spike” wrong. The kind of wrong that makes your stomach drop before you even look at your phone.
Late Sunday night. I’d finally convinced myself to stop checking Slack every five minutes and actually relax. Big mistake.
Running an API gateway in Kubernetes isn’t straightforward. Most documentation glosses over real issues like the etcd image registry problem after the VMware acquisition, CRD-based configuration patterns, and plugin troubleshooting. This guide covers deploying APISIX with local chart customization to handle these issues, implementing traffic management patterns (rate limiting, circuit breaker, caching) through Kubernetes CRDs, and troubleshooting common plugin problems along the way.
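To give a sense of the CRD-based pattern, here is a minimal sketch of rate limiting declared through an ApisixRoute resource. The host, namespace, and the `orders` Service are placeholders, and the exact fields can vary with your apisix-ingress-controller version.

```yaml
# Hypothetical ApisixRoute: routes /api/* traffic for a backend Service named
# "orders" and applies the limit-count plugin (100 requests per client IP per minute).
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: orders-route
  namespace: default
spec:
  http:
    - name: orders-rule
      match:
        hosts:
          - api.example.com
        paths:
          - /api/*
      backends:
        - serviceName: orders
          servicePort: 80
      plugins:
        - name: limit-count          # APISIX rate-limiting plugin
          enable: true
          config:
            count: 100               # allowed requests per window
            time_window: 60          # window length in seconds
            rejected_code: 429       # status code returned when the limit is hit
            key: remote_addr         # track the limit per client IP
```

The appeal of this approach is that rate limits live next to the route definition and go through the same GitOps review flow as the rest of the cluster config, instead of being configured through the gateway’s admin API by hand.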
Running Keycloak in production is notoriously challenging. Session loss during scaling, complex external cache configurations, and maintaining high availability while ensuring session persistence across multiple replicas are common pain points. Traditional approaches often require external Infinispan clusters or Redis, adding operational complexity and potential failure points.
Instead of managing external caching systems, we can leverage Keycloak’s built-in clustering capabilities with Kubernetes-native service discovery. This approach uses JGroups with DNS-based discovery through headless services, so replicas find each other and replicate sessions without any external cache to operate.
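As a rough sketch of what that looks like in practice (the service name, namespace, and port layout are assumptions, not values from the original setup), a headless Service plus a few environment variables on the Keycloak container is typically all the Quarkus distribution needs for DNS-based JGroups discovery:

```yaml
# Hypothetical headless Service for JGroups discovery; clusterIP: None makes the
# cluster DNS return every Keycloak pod IP, which the DNS_PING protocol uses
# to form the Infinispan cluster.
apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
  namespace: auth
spec:
  clusterIP: None
  publishNotReadyAddresses: true   # let pods join the cluster before they pass readiness
  selector:
    app: keycloak
  ports:
    - name: jgroups
      port: 7800
---
# Environment variables for the Keycloak container (Quarkus distribution, 17+);
# this fragment belongs in the Deployment/StatefulSet pod template, not applied on its own.
env:
  - name: KC_CACHE
    value: "ispn"                  # embedded Infinispan cache
  - name: KC_CACHE_STACK
    value: "kubernetes"            # JGroups stack that discovers members via DNS_PING
  - name: JAVA_OPTS_APPEND
    value: "-Djgroups.dns.query=keycloak-headless.auth.svc.cluster.local"
```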
Yesterday’s daily standup was supposed to be 15 minutes. It turned into a 2-hour debugging session instead. A feature we’d already demoed to the client last week suddenly wasn’t showing up in production. The app team kept saying ‘it works fine in staging,’ while DevOps insisted ‘infrastructure looks good on our end.’ Meanwhile, the client kept asking when it would go live.
What made it frustrating was that all our monitoring was green. Database connections healthy, API response times normal, zero error rate. But somehow the new feature just wasn’t there. No error logs, no exceptions, nothing crashed.