With less than 2 months until Ingress NGINX loses security support, I wanted to share our migration planning process. This is a significant undertaking for any organization with substantial Kubernetes infrastructure.
Our Starting Point
Current State:
- 15 Kubernetes clusters (6 production, 9 dev/staging)
- 340+ Ingress resources
- 8 teams affected
- Heavy use of NGINX-specific annotations
- Custom snippets for rate limiting and auth
This isn’t a weekend project.
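To give a sense of what those last two bullets mean in practice, here's a representative (hypothetical) example of the kind of Ingress we have to translate. The names are made up; the annotations are real ingress-nginx ones, and the configuration-snippet is the part with no one-to-one equivalent in other controllers:

```yaml
# Hypothetical example of an annotation-heavy Ingress in our fleet
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api                     # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/auth-url: "http://auth.internal/validate"
    # Free-form NGINX config injected into the location block --
    # no other controller accepts raw NGINX directives like this
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-Id: $req_id";
spec:
  ingressClassName: nginx
  rules:
  - host: orders.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: orders
            port:
              number: 8080
```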
Evaluation Criteria
We evaluated alternatives against these criteria:
| Criterion | Weight | Why It Matters |
|---|---|---|
| Long-term viability | 30% | Don’t want another migration in 2 years |
| Feature parity | 25% | Need to support current use cases |
| Migration complexity | 20% | Engineering time cost |
| Community/Support | 15% | Help when things break |
| Performance | 10% | Traffic-critical workloads |
The Candidates
1. Gateway API (with Envoy-based implementations)
Pros:
- Kubernetes-native, official successor to Ingress
- Expressive routing model (HTTPRoute, GRPCRoute)
- Multiple implementations available (Envoy Gateway, Istio, Cilium)
- Active development, growing ecosystem
Cons:
- Newer, less battle-tested at scale
- Different mental model from Ingress
- Some NGINX-specific features need workarounds
Our Assessment: Best long-term choice, but steeper learning curve.
2. Kong Ingress Controller
Pros:
- Feature-rich, enterprise-ready
- Plugin ecosystem for auth, rate limiting, etc.
- Good migration tooling from NGINX
- Commercial support available
Cons:
- Vendor lock-in concerns
- Cost for enterprise features
- Heavier resource footprint
Our Assessment: Good option if you need enterprise support, but introduces new vendor dependency.
3. Traefik
Pros:
- Auto-discovery of services
- Built-in Let’s Encrypt integration
- Good documentation
- Active open source community
Cons:
- Different configuration model
- Some performance concerns at high traffic
- Less enterprise-focused
Our Assessment: Great for smaller deployments, but open questions at our scale.
4. F5/NGINX Ingress Controller (Commercial)
Pros:
- Familiar NGINX configuration model
- Commercial support from F5
- Easier migration from community NGINX
Cons:
- Expensive licensing
- Still NGINX (similar architectural concerns)
- Vendor lock-in
Our Assessment: Path of least resistance, but doesn’t solve underlying issues.
Our Decision: Gateway API
We chose Gateway API for these reasons:
- Future-proof: It’s the Kubernetes-endorsed direction
- Implementation choice: Multiple backends (we chose Envoy Gateway)
- No vendor lock-in: Standard API means we can switch implementations
- Better model: Role-based configuration aligns with our team structure
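For anyone starting down the same path: with Envoy Gateway installed, the GatewayClass that the manifests below reference looks roughly like this (treat it as a sketch; the controllerName is Envoy Gateway's published controller identifier):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  # Controller identifier handled by Envoy Gateway
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
```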
Migration Approach
Phase 1: Parallel Deployment (Weeks 1-2)
```yaml
# Deploy Gateway API alongside existing Ingress
# Both route to same backends
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: api-gateway
  namespace: gateway-system
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: http
    port: 80
    protocol: HTTP
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-cert
```
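One detail the Gateway above glosses over: by default a listener only accepts routes from its own namespace. In a multi-team setup like ours, where HTTPRoutes typically live in each application's namespace, the listener usually needs an allowedRoutes block along these lines (a sketch; adjust to your own namespace model):

```yaml
# Listener fragment: allow HTTPRoutes from other namespaces to attach
- name: https
  port: 443
  protocol: HTTPS
  tls:
    mode: Terminate
    certificateRefs:
    - name: wildcard-cert
  allowedRoutes:
    namespaces:
      from: All   # or "Selector" with a namespaceSelector to restrict who can attach
```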
Phase 2: Traffic Splitting (Weeks 3-4)
- Route 10% traffic through Gateway API
- Monitor for errors, latency differences
- Gradually increase to 50%, then 90%
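How the controller-level split is implemented depends on what sits in front of your clusters (DNS or load balancer weighting), and I won't cover that here. But once traffic is on the new data plane, Gateway API also gives you weighted backendRefs for canarying individual services with the same gradual-shift idea. A sketch with hypothetical service and host names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-canary            # hypothetical route
spec:
  parentRefs:
  - name: api-gateway
    namespace: gateway-system
  hostnames:
  - checkout.example.com           # hypothetical host
  rules:
  - backendRefs:
    # ~90% of requests to the current version, ~10% to the canary
    - name: checkout-v1
      port: 80
      weight: 90
    - name: checkout-v2
      port: 80
      weight: 10
```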
Phase 3: Full Cutover (Weeks 5-6)
- Route 100% through Gateway API
- Keep Ingress NGINX as fallback (1 week)
- Remove Ingress NGINX completely
Configuration Translation Examples
Rate Limiting
Before (NGINX annotation):
```yaml
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "10"
  nginx.ingress.kubernetes.io/limit-connections: "5"
```
After (Gateway API + BackendTrafficPolicy):
```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: rate-limit-policy
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: my-route
  rateLimit:
    type: Local
    local:
      rules:
      # 10 requests per second, enforced locally by each proxy instance
      - limit:
          requests: 10
          unit: Second
```
Path-Based Routing
Before (Ingress):
```yaml
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: api-v1
            port:
              number: 80
```
After (HTTPRoute):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-routes
spec:
  parentRefs:
  - name: api-gateway
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v1
    backendRefs:
    - name: api-v1
      port: 80
```
Lessons Learned So Far
- Audit annotations first: We found 47 unique NGINX annotations in use. Many were obsolete.
- Start with simple services: Don’t migrate your most complex routing first.
- Monitoring is critical: We added extensive logging during parallel run.
- Team training matters: Gateway API concepts are different enough to require training.
Questions for the Community
- Anyone else migrating to Gateway API? What implementation did you choose?
- How are you handling custom NGINX configurations (snippets, lua)?
- What’s your rollback strategy if issues are found post-migration?
Would love to hear from others going through this process.