Dynamic Traffic Routing with SwitchProxy
October 27, 2025 · Eden Team
Tags: release
Zero-downtime migrations require gradually shifting traffic from old to new databases. Eden v0.8.0 introduces SwitchProxy, a traffic routing layer that makes this seamless.
The Old Way Sucked
Traditional database migrations look something like this:
- Stop application traffic
- Export data from old database
- Import data to new database
- Update application config
- Restart application
- Hope nothing broke
This means downtime, stale data, and no easy rollback if something goes wrong. SwitchProxy fixes all of that.
How SwitchProxy Works
SwitchProxy sits between your application and Eden's database proxies:
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ Application │────▶│ SwitchProxy  │────▶│  Old Redis  │
└─────────────┘     └─────────────┘     └─────────────┘
                           │
                           └─▶┌─────────────┐
                              │  New Redis  │
                              └─────────────┘

Traffic routing is controlled dynamically, with no restarts required:
# Route all traffic to old database
curl -X POST http://switchproxy:8009/route/1
# Route all traffic to new database
curl -X POST http://switchproxy:8009/route/2

One API call to switch. That's it.
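If you're scripting a migration, the two endpoints above are easy to wrap. Here's a minimal Python sketch using only the standard library; the hostname and the /route and /stats paths are the ones shown in this post, and everything else is illustrative:

import json
import urllib.request

SWITCHPROXY = "http://switchproxy:8009"  # adjust to your deployment

def set_route(route):
    """POST to /route/<n>: 1 = old database, 2 = new database."""
    req = urllib.request.Request(f"{SWITCHPROXY}/route/{route}", method="POST")
    urllib.request.urlopen(req)

def current_route():
    """Read the active route back from /stats."""
    with urllib.request.urlopen(f"{SWITCHPROXY}/stats") as resp:
        return json.load(resp)["current_route"]

set_route(2)                 # cut over to the new database
assert current_route() == 2  # confirm the switch took effect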
Graceful Connection Draining
When you switch routes, SwitchProxy doesn't just yank connections away. It:
- Stops accepting new connections to the old target
- Waits for in-flight requests to complete
- Gracefully closes idle connections
- Starts routing to the new target
No requests dropped. No half-finished operations left hanging.
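To make the sequence concrete, here's a simplified Python sketch of a drain-then-switch step. It's an illustration of the idea, not SwitchProxy's actual internals; the Backend class and its fields are made up for the example:

import time

class Backend:
    """Minimal stand-in for a proxied backend and its connection pool."""
    def __init__(self, name):
        self.name = name
        self.accepting = False
        self.in_flight = 0        # requests currently being served
        self.idle_connections = []

def drain_and_switch(old, new, poll_interval=0.05):
    # 1. Stop accepting new connections on the old target.
    old.accepting = False
    # 2. Wait for in-flight requests to complete.
    while old.in_flight > 0:
        time.sleep(poll_interval)
    # 3. Gracefully close idle connections.
    for conn in old.idle_connections:
        conn.close()
    old.idle_connections.clear()
    # 4. Start routing to the new target.
    new.accepting = True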
Automatic Failback
If the new database becomes unavailable, SwitchProxy can automatically failback:
failback:
  enabled: true
  threshold: 3   # Consecutive failures before failback
  cooldown: 30s  # Wait before retrying new database

Three failures in a row? Automatically switch back to the old database, log an alert, wait for the cooldown, and try again. Your users don't even notice.
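Conceptually, the failback loop is a small state machine: count consecutive health-check failures, flip back to the old route at the threshold, then retry once the cooldown elapses. A rough Python sketch of that logic, with hypothetical callbacks standing in for the real health check, route switches, and alerting:

import time

THRESHOLD = 3          # consecutive failures before failback
COOLDOWN_SECONDS = 30  # wait before retrying the new database

def run_with_failback(new_db_healthy, switch_to_old, switch_to_new, alert):
    """Fail back after THRESHOLD consecutive failures, retry after cooldown."""
    failures = 0
    on_new = True
    retry_at = 0.0
    while True:
        if on_new:
            if new_db_healthy():
                failures = 0
            else:
                failures += 1
                if failures >= THRESHOLD:
                    switch_to_old()  # failback to the old database
                    alert("failback: new database unhealthy")
                    on_new = False
                    retry_at = time.time() + COOLDOWN_SECONDS
        elif time.time() >= retry_at:
            switch_to_new()          # cooldown elapsed, try the new database again
            on_new = True
            failures = 0
        time.sleep(1)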
Canary Deployments with Traffic Splitting
Not ready to go all-in? Route a percentage of traffic to test the waters:
# Send 10% of traffic to new database
curl -X POST http://switchproxy:8009/split -d '{"ratio": 0.1}'
# Gradually increase
curl -X POST http://switchproxy:8009/split -d '{"ratio": 0.25}'
curl -X POST http://switchproxy:8009/split -d '{"ratio": 0.50}'
curl -X POST http://switchproxy:8009/split -d '{"ratio": 1.0}'  # Full cutover

Traffic splitting uses consistent hashing, so the same client always hits the same backend. Important for session consistency.
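The stickiness comes from how the split is evaluated: hash a stable client identifier into [0, 1) and compare it against the ratio. Here's an illustrative Python sketch of that idea; the exact hashing scheme SwitchProxy uses may differ:

import hashlib

def pick_backend(client_id, ratio):
    """Deterministically map a client to the old or new backend."""
    digest = hashlib.sha256(client_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # value in [0, 1)
    return "new" if bucket < ratio else "old"

# The same client always hashes to the same bucket, so it stays on one
# backend, and raising the ratio only ever moves clients from old to new.
assert pick_backend("client-42", 0.25) == pick_backend("client-42", 0.25)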
Real-Time Stats
See exactly where your traffic is going:
curl http://switchproxy:8009/stats

{
  "current_route": 2,
  "split_ratio": 0.25,
  "connections": {
    "server_1": 142,
    "server_2": 47
  },
  "latency_p99_ms": {
    "server_1": 2.3,
    "server_2": 1.8
  }
}

Connection counts, request counts, error rates, and latency percentiles, all in real time.
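During a canary you'll typically watch these numbers while the split ramps up. Here's a small polling loop against the /stats endpoint shown above; it's Python standard library only, the fields come from the sample response, and the interval and output format are just an example:

import json
import time
import urllib.request

STATS_URL = "http://switchproxy:8009/stats"

# Print a one-line summary every five seconds while the canary ramps up.
while True:
    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.load(resp)
    print(
        f"route={stats['current_route']} "
        f"split={stats['split_ratio']:.2f} "
        f"conns={stats['connections']} "
        f"p99_ms={stats['latency_p99_ms']}"
    )
    time.sleep(5)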