Deploy Apache APISIX on Kubernetes
Running an API gateway in Kubernetes isn't straightforward. Most documentation glosses over real issues such as the etcd image registry problem after the VMware acquisition, CRD-based configuration patterns, and plugin troubleshooting. This guide covers deploying APISIX with local chart customization to handle these issues, and implementing traffic management patterns (rate limiting, circuit breaking, caching) through Kubernetes CRDs.
Setup Context
This deployment targets APISIX as an internal gateway. SSL/TLS termination happens at the ingress layer
Traffic flow: External → Ingress/LB (SSL) → APISIX (HTTP) → Services
Prerequisites
- Kubernetes 1.24+
- Helm 3.x
- kubectl configured
- Available StorageClass
Minimum cluster resources: 2 CPU cores, 4GB RAM, 10GB storage
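A quick way to sanity-check these prerequisites before starting (assumes kubectl and helm are already on your PATH):
# confirm client/server versions, Helm, and an available StorageClass
kubectl version
helm version
kubectl get storageclass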
Download and Customize Chart
Local chart customization is necessary to fix the etcd image issue and enable gateway proxy creation
helm repo add apisix https://charts.apiseven.com
helm repo update
helm pull apisix/apisix --version 2.11.6
tar -xvzf apisix-2.11.6.tgz
cd apisix/
Fix etcd image registry issue
Edit charts/etcd/values.yaml
image:
  registry: docker.io
  repository: bitnamilegacy/etcd
  tag: 3.5.10-debian-11-r2
  digest: ""
The default bitnami/etcd image became unavailable after the Bitnami catalog changes that followed the VMware acquisition, so the chart's default etcd image fails to pull. Switch to the bitnamilegacy repository.
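If you want to confirm the replacement image is reachable from your network before installing, a quick pull test works (assumes Docker is available on your workstation; any OCI client will do):
# pull the legacy etcd image referenced in charts/etcd/values.yaml
docker pull docker.io/bitnamilegacy/etcd:3.5.10-debian-11-r2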
Enable gateway proxy
Edit charts/apisix-ingress-controller/values.yaml
gatewayProxy:
  createDefault: true
Fix service template
The default service template includes externalTrafficPolicy without type checking. This field only applies to LoadBalancer and NodePort services. Edit templates/service-gateway.yaml:
# Find this line:
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
# Replace with:
{{- if or (eq .Values.service.type "LoadBalancer") (eq .Values.service.type "NodePort") }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy | default "Cluster" }}
{{- end }}
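After the edit, rendering the chart locally is a quick way to confirm the conditional behaves as expected; for a ClusterIP service the field should no longer appear in the output (a sketch, assuming the template path shown above):
# render only the gateway service template with a ClusterIP service type
helm template apisix ./apisix -n apisix --set service.type=ClusterIP \
  -s templates/service-gateway.yaml | grep externalTrafficPolicy || echo "field omitted for ClusterIP"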
Configuration
Create values-dev.yaml
global:
  imagePullSecrets: []

replicaCount: 1

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 256Mi

livenessProbe:
  enabled: true
  httpGet:
    path: /healthz
    port: 9080
  initialDelaySeconds: 30
  periodSeconds: 30

readinessProbe:
  enabled: true
  httpGet:
    path: /healthz
    port: 9080
  initialDelaySeconds: 10
  periodSeconds: 10

service:
  type: ClusterIP
  http:
    enabled: true
    servicePort: 80
    containerPort: 9080
  tls:
    enabled: true
    servicePort: 443
    containerPort: 9443

apisix:
  fullCustomConfig:
    enabled: true
    config:
      apisix:
        node_listen:
          - 9080
        enable_heartbeat: true
        enable_admin: true
        enable_admin_cors: true
        enable_debug: false
        enable_control: true
        control:
          ip: 127.0.0.1
          port: 9090
        enable_dev_mode: false
        enable_reuseport: true
        enable_ipv6: false
        enable_http2: true
        enable_server_tokens: true
        proxy_cache:
          cache_ttl: 30s
          zones:
            - name: disk_cache_one
              memory_size: 128m
              disk_size: 5G
              disk_path: "/tmp/disk_cache_one"
              cache_levels: "1:2"
            - name: memory_cache
              memory_size: 512m
        router:
          http: radixtree_host_uri
          ssl: 'radixtree_sni'
        proxy_mode: http
        stream_proxy:
          tcp:
            - 9100
          udp:
            - 9200
        dns_resolver_valid: 30
        resolver_timeout: 5
        ssl:
          enable: true
          listen:
            - port: 9443
              enable_http3: false
          ssl_protocols: "TLSv1.2 TLSv1.3"
          ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
      nginx_config:
        error_log: "/dev/stderr"
        error_log_level: "warn"
        worker_processes: "auto"
        enable_cpu_affinity: true
        worker_rlimit_nofile: 20480
        event:
          worker_connections: 10620
        http:
          enable_access_log: true
          access_log: "/dev/stdout"
          access_log_format: '$remote_addr - $remote_user [$time_local] $http_host \"$request\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time \"$upstream_scheme://$upstream_host$upstream_uri\"'
          access_log_format_escape: default
          keepalive_timeout: "60s"
          client_header_timeout: 60s
          client_body_timeout: 60s
          send_timeout: 10s
          underscores_in_headers: "on"
          real_ip_header: "X-Real-IP"
          real_ip_from:
            - 127.0.0.1
            - 'unix:'
        http_configuration_snippet: |
          client_header_buffer_size 32k;
          large_client_header_buffers 8 128k;
      plugins:
        - real-ip
        - client-control
        - proxy-control
        - request-id
        - cors
        - ip-restriction
        - ua-restriction
        - referer-restriction
        - uri-blocker
        - request-validation
        - basic-auth
        - jwt-auth
        - key-auth
        - consumer-restriction
        - proxy-cache
        - proxy-mirror
        - proxy-rewrite
        - api-breaker
        - limit-conn
        - limit-count
        - limit-req
        - gzip
        - traffic-split
        - redirect
        - response-rewrite
        - prometheus
        - http-logger
        - file-logger
      stream_plugins:
        - ip-restriction
        - limit-conn
      deployment:
        role: traditional
        role_traditional:
          config_provider: etcd
        admin:
          enable_admin_ui: true
          allow_admin:
            - 0.0.0.0/0
          admin_listen:
            ip: 0.0.0.0
            port: 9180
          admin_key:
            - name: "admin"
              key: edd1c9f034335f136f87ad84b625c8f1
              role: admin
            - name: "viewer"
              key: 4054f7cf07e344346cd3f287985e76a2
              role: viewer
        etcd:
          host:
            - "http://apisix-etcd.apisix.svc.cluster.local:2379"
          prefix: "/apisix"
          timeout: 30

etcd:
  enabled: true
  replicaCount: 3
  auth:
    rbac:
      create: false
      allowNoneAuthentication: true
  persistence:
    enabled: true
    storageClass: "local-path"
    size: 5Gi
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 50m
      memory: 64Mi

rbac:
  create: true

serviceAccount:
  create: true

ingress:
  enabled: false

autoscaling:
  enabled: false

ingress-controller:
  enabled: true
  replicaCount: 1
  resources:
    limits:
      cpu: 200m
      memory: 256Mi
    requests:
      cpu: 50m
      memory: 64Mi
  config:
    apisix:
      serviceName: apisix-admin
      serviceNamespace: apisix
      servicePort: 9180
    ingressClass: apisix
    logLevel: "info"
Configuration Notes
Critical values to adjust for your environment
# etcd host must match the target namespace
etcd:
  host:
    - "http://apisix-etcd.YOUR_NAMESPACE.svc.cluster.local:2379"

# match your cluster's StorageClass
etcd:
  persistence:
    storageClass: "your-storage-class"

# generate a secure admin key for non-dev environments
admin_key:
  - name: "admin"
    key: "your-secure-key"
    role: admin

# ingress controller namespace must match the deployment namespace
config:
  apisix:
    serviceNamespace: apisix
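Before installing, it can help to lint the chart and do a dry render with your values to catch indentation or template errors early (a sketch using the chart directory and values file from this guide):
helm lint ./apisix -f values-dev.yaml
helm template apisix ./apisix -n apisix -f values-dev.yaml > /dev/null \
  && echo "values-dev.yaml renders cleanly"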
Deploy
kubectl create namespace apisix
helm install apisix ./apisix \
-n apisix \
-f values-dev.yaml
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/name=apisix \
-n apisix \
--timeout=300s
Verify deployment
kubectl get pods -n apisix
# expected: 3 pods running (apisix, etcd, ingress-controller)
kubectl exec -n apisix deploy/apisix -- \
curl -s http://localhost:9180/apisix/admin/routes \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'
# should return: {"list":[],"total":0}
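You can also probe the data-plane port; with no routes configured, APISIX answers with its 404 route-not-found response, which confirms the proxy itself is serving traffic:
kubectl exec -n apisix deploy/apisix -- \
  curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9080/anything
# expect 404 until routes are created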
Backend Application
Deploy test backend for routing demonstration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
  namespace: default
  labels:
    app: echo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: ealen/echo-server:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  namespace: default
spec:
  selector:
    app: echo-server
  ports:
    - protocol: TCP
      port: 1980
      targetPort: 80
kubectl apply -f echo-server.yaml
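Before wiring the route, it is worth confirming the Service actually reaches the pod. A throwaway curl pod works for this (curlimages/curl is just an example image; any image with curl will do):
kubectl get pods -l app=echo-server
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://echo-server.default.svc.cluster.local:1980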
Routing Configuration
APISIX uses CRDs for declarative configuration. Create an ApisixRoute with multiple plugins
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: echo-server
  namespace: default
spec:
  ingressClassName: apisix
  http:
    - name: echo
      match:
        paths:
          - /api/echo/*
          - /api/echo
        methods:
          - GET
          - POST
      backends:
        - serviceName: echo-server
          servicePort: 1980
      plugins:
        - name: api-breaker
          enable: true
          config:
            break_response_code: 502
            unhealthy:
              http_statuses: [500, 503, 502]
              failures: 3
            healthy:
              http_statuses: [200]
              successes: 1
        - name: proxy-cache
          enable: true
          config:
            cache_strategy: "disk"
            cache_method: ["GET", "HEAD"]
        - name: proxy-rewrite
          enable: true
          config:
            uri: /echo
        - name: limit-count
          enable: true
          config:
            count: 10
            time_window: 10
            key: remote_addr
            reject_code: 429
Plugin configuration breakdown
- api-breaker: Circuit breaker, opens after 3 consecutive failures (500/502/503), returns 502
- proxy-cache: Disk-based caching for GET/HEAD requests
- proxy-rewrite: Rewrites /api/echo to /echo at the backend
- limit-count: Rate limiting, 10 requests per 10 seconds per IP, returns 429
kubectl apply -f apisix-route.yaml
kubectl get apisixroute
Testing
Expose APISIX for testing (adjust IP based on your setup)
# option 1: Port-forward
kubectl port-forward -n apisix svc/apisix-gateway 9080:80
# option 2: Direct cluster IP access
# use the apisix-gateway ClusterIP directly if your machine can reach the cluster network
Rate Limiting
The route allows 10 requests per 10 seconds. Send 15 requests:
for i in {1..15}; do
echo "Request $i: $(curl -s -o /dev/null -w '%{http_code}' http://localhost:9080/api/echo)"
done
Result: First 10 requests return 200, requests 11-15 return 429 (rate limited). Counter resets after 10 seconds
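The limit-count plugin also exposes the remaining quota in response headers by default, which is handy for verifying the window without counting status codes (header names may vary slightly by APISIX version):
curl -s -D - -o /dev/null http://localhost:9080/api/echo | grep -i x-ratelimit
# e.g. X-RateLimit-Limit: 10 and X-RateLimit-Remaining: 9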
Proxy Cache
Check the Apisix-Cache-Status response header:
curl -v http://localhost:9080/api/echo
Cache behavior
- First request: Apisix-Cache-Status: MISS (fetched from backend)
- Second request: Apisix-Cache-Status: HIT with an Age header (e.g. Age: 2), served from cache
- After TTL (30s): Apisix-Cache-Status: EXPIRED (refetched from backend)
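One way to watch these transitions is to repeat the request and print only the cache header (a sketch):
for i in 1 2 3; do
  curl -s -D - -o /dev/null http://localhost:9080/api/echo | grep -i apisix-cache-status
  sleep 1
done
# typically prints MISS, then HIT, then HIT (and EXPIRED once the 30s TTL passes)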
Proxy Rewrite
The external path /api/echo is rewritten to the internal path /echo at the backend. Useful for API versioning, path standardization, or hiding internal structure.
Route configuration
match:
  paths: ["/api/echo/*", "/api/echo"]
plugins:
  - name: proxy-rewrite
    config:
      uri: /echo
The client requests /api/echo; the backend receives /echo.
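Because echo-server reflects the request it received, the response body shows which path actually arrived at the backend; the exact JSON field names depend on the echo-server version, so inspect the whole payload:
curl -s http://localhost:9080/api/echo | jq .
# the reflected request path should be /echo, not /api/echo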
Circuit Breaker
Deploy error-generating backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: error-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: error-server
  template:
    metadata:
      labels:
        app: error-server
    spec:
      containers:
        - name: http-echo
          image: hashicorp/http-echo
          args:
            - "-text=Service Unavailable"
            - "-status-code=503"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: error-server
  namespace: default
spec:
  selector:
    app: error-server
  ports:
    - port: 1980
      targetPort: 5678
kubectl apply -f error-server.yaml
Create route with circuit breaker
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: echo-route-error
  namespace: default
spec:
  ingressClassName: apisix
  http:
    - name: echo-error
      match:
        paths:
          - /echo-error/*
          - /echo-error
        methods:
          - GET
          - POST
      backends:
        - serviceName: error-server
          servicePort: 1980
      plugins:
        - name: api-breaker
          enable: true
          config:
            max_breaker_sec: 30
            break_response_code: 502
            unhealthy:
              http_statuses: [500, 503, 502]
              failures: 3
            healthy:
              http_statuses: [200]
              successes: 1
kubectl apply -f error-route.yaml
Test circuit breaker behavior
for i in {1..15}; do
echo "Request $i: $(curl -s -o /dev/null -w '%{http_code}' http://localhost:9080/echo-error)"
done
Result:
- Requests 1-3: 503 (backend errors, passed through)
- Requests 4+: 502 (circuit open, APISIX returns immediately without backend call)
The circuit stays open for 30 seconds (max_breaker_sec), then attempts the backend again. A success closes the circuit; a failure reopens it.
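To see the recovery path, wait out the window and probe once more; since error-server still returns 503, repeated failures re-open the circuit:
sleep 30
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9080/echo-error
# expect 503 passed through from the backend; further failures trip the breaker again and 502 returns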
Connection Limiting
Limits concurrent connections (different from rate limiting which counts requests over time)
plugins:
  - name: limit-conn
    enable: true
    config:
      conn: 5
      burst: 2
      default_conn_delay: 0.1
      rejected_code: 503
      key: remote_addr
Allows 5 concurrent connections + 2 burst, returns 503 when exceeded
Test:
for i in {1..10}; do
curl -s -o /dev/null -w "Request $i: %{http_code}\n" http://localhost:9080/api/echo &
done
wait
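Note that very fast curls may finish before five connections are ever open at once. A tool that sustains concurrency shows the limit more clearly (assuming ApacheBench is installed; any load generator works):
ab -n 200 -c 10 http://localhost:9080/api/echo
# with conn: 5 and burst: 2, a portion of requests should come back 503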
Troubleshooting
Pod CrashLoopBackOff
etcd connection failed
kubectl logs -n apisix deploy/apisix | grep etcd
kubectl get pods -n apisix | grep etcd
Check etcd host URL matches namespace in values. Test connectivity
kubectl exec -n apisix deploy/apisix -- \
curl -v http://apisix-etcd.apisix.svc.cluster.local:2379/health
Insufficient resources
kubectl describe pod -n apisix <pod-name> | grep -A 5 "Events"
Look for OOMKilled. Increase memory limits in values
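The termination reason is also recorded on the pod status, which is quicker to check than scanning events (same <pod-name> as above):
kubectl get pod -n apisix <pod-name> \
  -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
# prints OOMKilled if the container was killed for exceeding its memory limit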
Image pull errors
Verify the etcd image uses bitnamilegacy/etcd in charts/etcd/values.yaml
Route Not Working (404)
Check route exists and ingress controller synced
kubectl get apisixroute
kubectl logs -n apisix -l app.kubernetes.io/name=apisix-ingress-controller --tail=50
Verify ingressClassName and check route registration
kubectl exec -n apisix deploy/apisix -- \
curl -s http://localhost:9180/apisix/admin/routes \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' | jq
Recreate route or restart ingress controller
kubectl delete apisixroute <name> && kubectl apply -f route.yaml
kubectl rollout restart deployment -n apisix apisix-ingress-controller
502 Bad Gateway
Backend unreachable (not circuit breaker)
kubectl get pods | grep echo-server
Check service selector matches pod labels
kubectl get svc echo-server -o yaml | grep -A 3 selector
kubectl get pods -l app=echo-server --show-labels
Verify service port configuration. Test backend directly
kubectl port-forward svc/echo-server 8080:1980
curl http://localhost:8080
Plugin Not Applied
Check that the plugin is enabled
kubectl exec -n apisix deploy/apisix -- \
curl -s http://localhost:9180/apisix/admin/plugins/list \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'
kubectl logs -n apisix deploy/apisix | grep -i "plugin\|error"
Verify YAML syntax and plugin name. Force reload
kubectl delete apisixroute <name> && kubectl apply -f route.yaml
etcd Persistence
PVC not bound:
kubectl get pvc -n apisix # STATUS should be "Bound"
kubectl get storageclass
kubectl logs -n apisix apisix-etcd-0 | grep -i error
High Memory Usage
Pods OOMKilled or high memory consumption
kubectl top pods -n apisix
Reduce cache sizes in values and upgrade
proxy_cache:
  zones:
    - name: disk_cache_one
      memory_size: 64m
helm upgrade apisix ./apisix -n apisix -f values-dev.yaml
Quick Diagnostics
# status check
kubectl get all -n apisix
# recent events
kubectl get events -n apisix --sort-by='.lastTimestamp'
# component logs
kubectl logs -n apisix deploy/apisix --tail=100
kubectl logs -n apisix -l app.kubernetes.io/name=apisix-ingress-controller --tail=100
# etcd health
kubectl exec -n apisix apisix-etcd-0 -- etcdctl endpoint health
# dump routes
kubectl exec -n apisix deploy/apisix -- \
curl -s http://localhost:9180/apisix/admin/routes \
-H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' | jq '.' > routes.json