Dynamic Traffic Routing with SwitchProxy

October 27, 2025 · Eden Team

Tags: release


Zero-downtime migrations require gradually shifting traffic from old to new databases. Eden v0.8.0 introduces SwitchProxy, a traffic routing layer that makes this seamless.

The Old Way Sucked

Traditional database migrations look something like this:

  1. Stop application traffic
  2. Export data from old database
  3. Import data to new database
  4. Update application config
  5. Restart application
  6. Hope nothing broke

This means downtime, stale data, and no easy rollback if something goes wrong. SwitchProxy fixes all of that.

How SwitchProxy Works

SwitchProxy sits between your application and Eden's database proxies:

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ Application │────▶│ SwitchProxy │────▶│ Old Redis   │
└─────────────┘     └─────────────┘  │  └─────────────┘
                                     │
                                     └─▶┌─────────────┐
                                        │ New Redis   │
                                        └─────────────┘

Traffic routing is controlled dynamically—no restarts required:

```bash
# Route all traffic to old database
curl -X POST http://switchproxy:8009/route/1

# Route all traffic to new database
curl -X POST http://switchproxy:8009/route/2
```

One API call to switch. That's it.
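If you're scripting cutovers, that call is easy to wrap. A minimal Python sketch of a client for the `/route` endpoint — the helper names and error handling here are ours, not part of SwitchProxy:

```python
# Hypothetical wrapper around SwitchProxy's /route endpoint.
import urllib.request

SWITCHPROXY = "http://switchproxy:8009"  # assumed base URL from the examples

def route_url(route_id: int) -> str:
    """Build the URL that switches all traffic to the given route."""
    return f"{SWITCHPROXY}/route/{route_id}"

def switch_route(route_id: int) -> int:
    """POST to SwitchProxy to switch routes; returns the HTTP status code."""
    req = urllib.request.Request(route_url(route_id), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

From a deploy script, `switch_route(2)` performs the same cutover as the second curl command above.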

Graceful Connection Draining

When you switch routes, SwitchProxy doesn't just yank connections away. It:

  1. Stops accepting new connections to the old target
  2. Waits for in-flight requests to complete
  3. Gracefully closes idle connections
  4. Starts routing to the new target

No requests dropped. No half-finished operations left hanging.
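The drain sequence above can be sketched in a few lines. This is an illustrative simulation of the ordering, not SwitchProxy's actual implementation:

```python
def drain(conns):
    """Drain old-target connections in the order described above.

    `conns` maps a connection id to its in-flight request count
    (an illustrative model, not SwitchProxy's real data structure).
    """
    accepting_new = False          # 1. stop accepting new connections
    for cid in list(conns):
        while conns[cid] > 0:      # 2. wait for in-flight requests to finish
            conns[cid] -= 1        #    (simulated: each tick completes one)
    closed = sorted(conns)         # 3. close the now-idle connections
    return accepting_new, closed   # 4. caller can now route to the new target
```

The key property is the ordering: no new work is accepted before the wait, and nothing is closed while a request is still in flight.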

Automatic Failback

If the new database becomes unavailable, SwitchProxy can automatically fail back:

```yaml
failback:
  enabled: true
  threshold: 3          # Consecutive failures before failback
  cooldown: 30s         # Wait before retrying new database
```

Three failures in a row? Automatically switch back to the old database, log an alert, wait for the cooldown, and try again. Your users don't even notice.
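A failback policy like this is, at heart, a small state machine. Here's an illustrative Python model of the `threshold`/`cooldown` behavior — our own sketch, not SwitchProxy's code:

```python
class Failback:
    """Illustrative failback state machine (threshold=3, cooldown=30s)."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0          # consecutive failures against the new DB
        self.failed_back_at = None # time we last failed back, or None

    def record(self, ok: bool, now: float) -> str:
        """Feed one health-check result; return the active target."""
        if self.failed_back_at is not None:
            if now - self.failed_back_at < self.cooldown:
                return "old"            # still in cooldown: stay on old DB
            self.failed_back_at = None  # cooldown over: retry the new DB
            self.failures = 0
        if ok:
            self.failures = 0
            return "new"
        self.failures += 1
        if self.failures >= self.threshold:
            self.failed_back_at = now   # N in a row: fail back (and alert)
            return "old"
        return "new"
```

Three consecutive failures flip traffic back to the old database; once the cooldown elapses, the new database gets another chance.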

Canary Deployments with Traffic Splitting

Not ready to go all-in? Route a percentage of traffic to test the waters:

```bash
# Send 10% of traffic to new database
curl -X POST http://switchproxy:8009/split -d '{"ratio": 0.1}'

# Gradually increase
curl -X POST http://switchproxy:8009/split -d '{"ratio": 0.25}'
curl -X POST http://switchproxy:8009/split -d '{"ratio": 0.50}'
curl -X POST http://switchproxy:8009/split -d '{"ratio": 1.0}'  # Full cutover
```

Traffic splitting uses consistent hashing, so the same client always hits the same backend. Important for session consistency.
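A hash-based split is easy to picture. Here's a sketch of one possible scheme, assuming each client is mapped to a stable bucket by hashing its ID — the exact hash SwitchProxy uses isn't specified here:

```python
import hashlib

def pick_backend(client_id: str, ratio: float) -> str:
    """Route roughly `ratio` of clients to the new backend, deterministically.

    The same client_id always hashes to the same bucket, so a given
    client sticks to one backend for a fixed ratio (illustrative scheme).
    """
    h = int.from_bytes(hashlib.sha256(client_id.encode()).digest()[:8], "big")
    bucket = h / 2**64          # stable value in [0, 1) per client
    return "new" if bucket < ratio else "old"
```

Because the bucket only depends on the client ID, raising the ratio moves new clients over without reshuffling the ones already assigned.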

Real-Time Stats

See exactly where your traffic is going:

```bash
curl http://switchproxy:8009/stats
```

```json
{
  "current_route": 2,
  "split_ratio": 0.25,
  "connections": {
    "server_1": 142,
    "server_2": 47
  },
  "latency_p99_ms": {
    "server_1": 2.3,
    "server_2": 1.8
  }
}
```

Connection counts, request counts, error rates, and latency percentiles, all in real time.
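Since the stats payload is plain JSON, it folds straight into your own tooling. A small illustrative parser — the `summarize` helper is ours, not part of SwitchProxy:

```python
import json

def summarize(stats_json: str) -> str:
    """Condense a SwitchProxy stats payload into a one-line summary."""
    s = json.loads(stats_json)
    total = sum(s["connections"].values())
    # Share of live connections on the new backend (server_2 assumed new).
    new_share = s["connections"].get("server_2", 0) / total if total else 0.0
    return f"route={s['current_route']} split={s['split_ratio']} new_share={new_share:.0%}"
```

Fed the payload above, this prints `route=2 split=0.25 new_share=25%`: handy for a dashboard or a cutover checklist.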

SwitchProxy configuration guide →