High-Throughput Redis Migrations: 150K Keys Per Second
September 21, 2025 · Eden Team
Tags: release
Eden v0.6.0 achieves Redis-to-Redis migration throughput of ~10MB/s or approximately 150,000 keys per second on typical workloads. That's fast enough to migrate a million keys in under 7 seconds.
Why Live Migration Is Harder Than It Sounds
Migrating a Redis database seems simple on paper: read keys from source, write to target. In practice? It's a minefield of edge cases that can corrupt your data or take down production.
Here's what not to do:
```shell
# Seriously, don't do this
redis-cli KEYS "*" | xargs -I {} redis-cli GET {} | ...
```

This naive approach fails in spectacular ways:
- KEYS blocks Redis — On a 10M key database, this can freeze your database for 30+ seconds
- Memory explosion — Loading all keys into memory can crash your migration script
- Lost updates — Keys modified during migration may be lost or duplicated
- Type blindness — GET only works for strings; hashes, lists, and sets fail silently
- TTL loss — Expiration times aren't preserved
- Throughput — Sequential operations achieve maybe 1,000 keys/second (not great)
Real-world Redis databases have millions of keys, multiple data types, active writes during migration, and zero tolerance for data loss. Eden handles all of this.
How Eden Does It
Eden's migration engine uses a three-stage pipeline designed for maximum throughput without blocking either database:
```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Scanner    │────▶│   Pipeline   │────▶│    Writer    │
│ (Source DB)  │     │  (Batching)  │     │ (Target DB)  │
└──────────────┘     └──────────────┘     └──────────────┘
        │                    │                    │
   SCAN-based           Type-aware            Pipelined
  non-blocking           batching            async writes
```

Scanner — Uses SCAN (not KEYS) to iterate without blocking your production database.
Pipeline — Groups keys by type, batches for efficiency, and applies backpressure when the writer falls behind.
Writer — Pipelined async writes with retries and conflict resolution.
We track seen keys with a bloom filter to handle duplicates (SCAN can return the same key twice), and checkpoint progress so you can resume if something goes wrong.
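SCAN only guarantees each key once when the keyspace is stable; under concurrent writes, duplicates are expected, so the scanner must deduplicate. A minimal sketch of that step, using a plain Python set where Eden uses a bloom filter, and with hypothetical names (`dedupe`, `scan_source`) that are not Eden's internals — `src` is assumed to be a redis-py client:

```python
def dedupe(keys, seen):
    """Yield each key at most once; SCAN may return duplicates.

    `seen` is a plain set standing in for Eden's bloom filter; a real
    migration of millions of keys would use a bloom filter to bound memory
    (at the cost of a small false-positive rate, i.e. rare skipped keys).
    """
    for key in keys:
        if key not in seen:
            seen.add(key)
            yield key

def scan_source(src, seen):
    """src: a redis-py client (assumption). scan_iter pages through the
    keyspace with repeated cursor-based SCAN calls, never blocking the DB."""
    yield from dedupe(src.scan_iter(count=1000), seen)
```

Each yielded key is then handed to the batching stage.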
The Speed Tricks
Batching
Instead of migrating keys one at a time (150,000 round trips), Eden batches them:
```
MGET key1 key2 key3 ... key100
→ MSET key1 v1 key2 v2 ... key100 v100
```

That's 100x fewer network round trips. At 1 ms of network latency, batching saves over two minutes per 150K keys: 150,000 single-key round trips take 150 seconds, while 1,500 batched round trips take about 1.5 seconds.
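The string path can be sketched in a few lines. This is an illustration, not Eden's code: `chunks` and `copy_strings` are hypothetical names, and `src`/`dst` are assumed to be redis-py clients:

```python
def chunks(keys, size=100):
    """Group a key stream into fixed-size batches for MGET/MSET."""
    batch = []
    for key in keys:
        batch.append(key)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def copy_strings(src, dst, keys, size=100):
    """One MGET round trip on the source and one MSET on the target
    per batch of `size` keys, instead of 2 round trips per key."""
    for batch in chunks(keys, size):
        values = src.mget(batch)
        # Keys deleted between SCAN and MGET come back as None; skip them.
        dst.mset({k: v for k, v in zip(batch, values) if v is not None})
```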
Pipelining
Even with batching, waiting for each response wastes time. Eden sends multiple batches before waiting for responses:
```
Client: MSET batch1...
Client: MSET batch2...
Client: MSET batch3...
Server: OK (batch1)
Server: OK (batch2)
Server: OK (batch3)
```

This keeps the network saturated instead of sitting idle waiting for ACKs.
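With a redis-py-style client, this pattern falls out of a non-transactional pipeline: commands are buffered locally and the client only stops to read responses on `execute()`. A sketch under that assumption (`pipelined_write` and `round_trips` are illustrative names, not Eden's API):

```python
import math

def round_trips(n_keys, batch_size=100, pipeline_depth=10):
    """Number of synchronous waits: batches are flushed to the server
    pipeline_depth at a time before the client reads responses."""
    n_batches = math.ceil(n_keys / batch_size)
    return math.ceil(n_batches / pipeline_depth)

def pipelined_write(dst, batches, depth=10):
    """dst: a redis-py client (assumption). transaction=False gives plain
    pipelining with no MULTI/EXEC; responses are read once per `depth`
    batches, keeping the write path saturated between flushes."""
    pipe = dst.pipeline(transaction=False)
    for i, batch in enumerate(batches, start=1):
        pipe.mset(batch)
        if i % depth == 0:
            pipe.execute()
    pipe.execute()  # flush any trailing partial window
```

At 150K keys, 100-key batches, and a depth of 10, the client waits on the network only 150 times.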
Type-Aware Migration
Here's where most migration tools fall apart. Redis has 10+ data types, and each needs different handling:
| Type | How Eden Handles It |
|------|---------------------|
| String | Batched MGET/MSET |
| Hash | HSCAN to avoid blocking on huge hashes |
| List | Chunked reads to bound memory |
| Set | Batched SADD |
| Sorted Set | Preserves scores exactly |
| Stream | Keeps message IDs for consumer groups |
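The table above amounts to a dispatch on the reply of the `TYPE` command. A sketch of that dispatch — the handler names are illustrative labels for the strategies in the table, not Eden's internals, and `src.type(key)` assumes a redis-py client configured with `decode_responses=True`:

```python
def handler_for(redis_type):
    """Map a Redis TYPE reply to a migration strategy."""
    return {
        "string": "batched_mget_mset",   # MGET/MSET batches
        "hash":   "hscan_chunks",        # HSCAN avoids blocking on huge hashes
        "list":   "lrange_chunks",       # chunked LRANGE bounds memory
        "set":    "sscan_sadd",          # SSCAN reads, batched SADD writes
        "zset":   "zrange_withscores",   # scores preserved exactly
        "stream": "xrange_with_ids",     # message IDs kept for consumer groups
    }.get(redis_type, "dump_restore_fallback")

def plan_keys(src, keys):
    """src: redis-py client (assumption). Pair each key with its strategy."""
    return [(key, handler_for(src.type(key))) for key in keys]
```

Unknown types (e.g. from modules) fall through to a generic DUMP/RESTORE-style fallback in this sketch.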
TTL Preservation
Expiration times are preserved with millisecond precision. Keys that expire during migration are detected and skipped — we don't create zombie data.
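The decision hinges on the `PTTL` reply, which distinguishes all three cases. A sketch (hypothetical helper names; `src`/`dst` assumed to be redis-py clients):

```python
def ttl_action(pttl_ms):
    """Decide what to do with a key based on its PTTL reply:
    -2 → key no longer exists (expired mid-migration): skip, no zombie data;
    -1 → key exists with no expiration: copy as-is;
    >0 → copy, then re-apply the remaining TTL in milliseconds."""
    if pttl_ms == -2:
        return ("skip", None)
    if pttl_ms == -1:
        return ("copy", None)
    return ("copy", pttl_ms)

def copy_string_with_ttl(src, dst, key):
    action, ttl = ttl_action(src.pttl(key))
    if action == "copy":
        dst.set(key, src.get(key))
        if ttl is not None:
            dst.pexpire(key, ttl)  # millisecond-precision expiry
```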
Real-World Numbers
Tested on AWS with two cache.r6g.large ElastiCache instances in the same AZ:
| Workload | Keys/sec | MB/sec |
|----------|----------|--------|
| Strings (100B avg) | 180,000 | 18 MB/s |
| Mixed types | 150,000 | 10 MB/s |
| Large values (10KB avg) | 12,000 | 120 MB/s |
| Hashes (100 fields) | 45,000 | 8 MB/s |
Network bandwidth limits large values; CPU limits small values. Eden auto-tunes batch sizes based on your value distribution.
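One plausible shape for that auto-tuning — targeting a roughly constant number of bytes per batch, clamped to sane bounds — looks like this. This is a guess at the idea, not Eden's actual algorithm, and all names and defaults here are assumptions:

```python
def batch_size(avg_value_bytes, target_batch_bytes=1_000_000,
               min_size=10, max_size=1000):
    """Pick a batch size so each MGET/MSET moves ~target_batch_bytes:
    large batches for small values (where per-command CPU dominates),
    small batches for large values (where bandwidth dominates)."""
    size = target_batch_bytes // max(avg_value_bytes, 1)
    return max(min_size, min(max_size, size))
```

With these illustrative defaults, 100-byte values get 1,000-key batches while 10 KB values get 100-key batches.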
How We Stack Up
| Tool | Keys/sec | Live Traffic? | Type-Aware? |
|------|----------|---------------|-------------|
| redis-cli --pipe | ~50,000 | No | No |
| RIOT | ~80,000 | Limited | Partial |
| AWS DMS | ~30,000 | Yes | No |
| Eden | 150,000 | Yes | Yes |
Throttling (Be Nice to Your Database)
If your source is serving production traffic, you probably don't want to max out its CPU during a migration. Eden lets you throttle:
```yaml
migration:
  throttle:
    enabled: true
    max_keys_per_second: 100000
    max_bytes_per_second: 50_000_000
```

Throttle during business hours. Go full speed during maintenance windows or when migrating from a read replica.
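A rate limit like `max_keys_per_second` is commonly implemented as a token bucket; here is a minimal sketch of that technique (an illustration, not necessarily how Eden implements throttling), with an injectable clock so it can be tested without waiting on wall time:

```python
import time

class TokenBucket:
    """Token-bucket limiter: tokens refill at `rate` per second up to
    `burst`; each migrated key (or byte) consumes one token."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate      # tokens added per second
        self.burst = burst    # bucket capacity (max burst size)
        self.tokens = burst
        self.clock = clock
        self.last = clock()

    def try_acquire(self, n=1):
        """Consume n tokens if available; return False to signal backoff."""
        now = self.clock()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

The migration loop would call `try_acquire(len(batch))` before each write and sleep briefly whenever it returns False.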
Watch It Happen
```shell
curl http://eden:8000/api/v1/migrations/my-migration
```

```json
{
  "status": "success",
  "data": {
    "id": "my-migration",
    "status": "running",
    "progress": {
      "keys_migrated": 8234567,
      "keys_total": 15000000,
      "keys_failed": 12
    }
  }
}
```

Real-time progress and error counts. No more staring at logs wondering if anything is happening.