Upstash had one genuinely brilliant idea: charge Redis by the request, not by the hour. For a tiny side project that fires 20 commands a day, paying $0 instead of $15 feels like magic. For a serverless function at the edge that needs a cache without provisioning anything, Upstash is a near-perfect fit.
But somewhere between the free tier and production scale, the math flips. A single real-time feature — a live leaderboard, a rate limiter, a presence indicator — can push request counts from thousands per day into millions per hour. And when that happens, a per-request Redis bill stops looking clever and starts looking terrifying.
This guide is for European teams who picked Upstash for the free tier, grew into a real product, and now need to decide: stay on per-request pricing, or move to a predictable flat-rate managed Redis in the EU?
We'll cover:
- What Upstash actually is and where it wins
- The exact pricing breakpoint where serverless Redis gets expensive
- Six EU-friendly alternatives with real numbers
- Why Valkey (post-Redis license change) and Dragonfly (25x throughput) matter in 2026
- How to migrate in under an hour: connection strings, RDB imports, client library swaps
- GDPR, CLOUD Act, and why the location of your parent company still matters
What is Upstash, and why did everyone pick it?
Upstash is a US-based (Delaware) serverless data platform. Its flagship product is Upstash Redis — a Redis-compatible cache that you pay for per request rather than per hour. Three key design decisions made it explode in popularity around 2022-2023:
- Per-request billing. $0.2 per 100,000 commands. You pay for what you actually use.
- REST API on top of RESP. You can talk to Redis over HTTPS, which means it works inside Vercel Edge Functions, Cloudflare Workers, and anywhere you can't open a long-lived TCP connection.
- Global edge replication. Your data lives in multiple regions with automatic replication, so a Cloudflare Worker in Frankfurt reads from Frankfurt, not Virginia.
That combo — especially the REST API — solved a real problem. Serverless platforms like Vercel and Cloudflare Workers can't hold TCP connections the way a long-running Node.js server can, and traditional Redis requires RESP over TCP. Upstash said: "fine, we'll wrap it in HTTP for you." Genius.
Upstash pricing: what it actually costs
Upstash has two core plans: Pay as You Go and Pro.
| Cost item | Free tier | Pay as You Go | Pro (fixed) |
|---|---|---|---|
| Commands | 10,000/day | $0.2 per 100K | Included in plan |
| Storage | 256 MB | $0.25/GB/month | Up to 100 GB |
| Bandwidth | Limited | $0.03/GB (varies) | Included |
| Global replication | No | Add-on | Included |
| Monthly minimum | $0 | $0 (usage-based) | $280+ |
That $0.2 per 100,000 commands looks tiny. And at 10,000 commands a day — roughly 300,000 a month — you're inside the free tier forever. This is the headline that sells Upstash.
Now do the real math.
The pricing breakpoint: when serverless Redis gets expensive fast
Let's walk through three real workloads and what they'd cost on Upstash versus a flat-rate managed Redis plan.
Workload 1: A small SaaS with rate limiting
You use Redis to enforce "100 requests per minute per user." Every API call does one `INCR` and one `EXPIRE` — two Redis commands.
- 10,000 daily active users × 500 API calls/day × 2 commands = 10,000,000 commands/day
- Monthly: 300,000,000 commands
- Upstash cost: 300M / 100K × $0.2 = $600/month
For a SaaS doing €10-30K MRR, $600/mo on rate limiting alone is ridiculous. A DanubeData Redis Small plan at €9.99/mo handles this without breaking a sweat — and has no per-request cost at all.
Workload 2: Session store for a medium web app
Classic Rails-style session store. Every authenticated page load reads and writes a session key.
- 50,000 monthly active users, average 30 sessions/month, 15 page views per session
- 50,000 × 30 × 15 × 2 commands = 45,000,000 commands/month
- Upstash cost: 45M / 100K × $0.2 = $90/month
That $90/month covers commands alone — storage is billed on top. Compare to DanubeData Medium Redis at €19.99/mo with 2GB RAM and unlimited requests. You're spending roughly 4x more on Upstash for the same job — and you don't own your infrastructure.
Workload 3: Real-time leaderboard for a game
Every score update is one `ZADD`, and every time the player opens the leaderboard you run one `ZREVRANGE`.
- 100,000 daily players × (20 score updates + 50 leaderboard views) = 70 commands/player/day
- 7,000,000 commands/day = 210,000,000 commands/month
- Upstash cost: 210M / 100K × $0.2 = $420/month
DanubeData Redis Large 4GB: €39.99/mo. That's a 10x difference, every month, forever, with no upside because you're not actually using the edge features.
The real rule: Upstash wins on spiky, unpredictable, low-volume workloads. It loses badly on anything sustained.
Here's the breakpoint in one table.
| Commands/month | Upstash cost | DanubeData equivalent | Winner |
|---|---|---|---|
| < 300K (free tier) | $0 | €4.99 (Micro 256MB) | Upstash |
| 1M | $2 | €4.99 | Upstash (barely) |
| 10M | $20 | €9.99 (Small 1GB) | DanubeData |
| 30M | $60 | €9.99 (Small 1GB) | DanubeData (6x cheaper) |
| 100M | $200 | €19.99 (Medium 2GB) | DanubeData (10x cheaper) |
| 500M+ | $1,000+ | €39.99 (Large 4GB) | DanubeData (25x cheaper) |
The crossover is around 5 million commands per month. Below that, Upstash's free tier and micro-billing are genuinely a great deal. Above it, you are paying a tax for elasticity you're not using — because at that scale, your workload is no longer intermittent.
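To find your own breakpoint, the arithmetic in the table reduces to a few lines. These helpers (`upstash_cost`, `breakeven_commands`) are illustrative, and the EUR/USD rate is an assumption you should adjust:

```python
def upstash_cost(commands_per_month: int, rate_per_100k: float = 0.2) -> float:
    """Monthly Upstash command cost in USD (storage and bandwidth billed separately)."""
    return round(commands_per_month / 100_000 * rate_per_100k, 2)

def breakeven_commands(flat_eur_per_month: float, eur_usd: float = 1.1,
                       rate_per_100k: float = 0.2) -> int:
    """Commands/month at which per-request billing matches a flat-rate plan.

    eur_usd is an assumed exchange rate, not a live quote.
    """
    return int(flat_eur_per_month * eur_usd / rate_per_100k * 100_000)
```

Plugging in the €9.99 Small plan gives a breakeven around 5.5M commands/month — which is where the "crossover around 5 million" above comes from.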
The CLOUD Act question that nobody asks
Upstash is a Delaware C-Corp. That means, regardless of which region you pick for your data, the company itself is subject to the US CLOUD Act. Under the CLOUD Act, US law enforcement can compel a US-headquartered company to hand over customer data, even if that data is physically stored in an EU data center.
For a hobby project, this is academic. For a regulated business — fintech, healthtech, government contractors, anyone processing personal data of EU residents under GDPR — it's a real problem. The EDPB's ongoing concerns about US data transfers mean that "region: Frankfurt" isn't enough on its own.
This is why EU-parented infrastructure matters. DanubeData is operated from the EU, runs on Hetzner dedicated hardware in Falkenstein, Germany, and has no US parent. A CLOUD Act warrant has no legal force to compel data disclosure from us.
Six Upstash alternatives for European teams
1. DanubeData Managed Redis, Valkey, and Dragonfly
This is us. We run managed in-memory databases on Kubernetes with KubeVirt and TopoLVM storage in Falkenstein, Germany. Every plan includes SSL, daily snapshots, offsite backups, and optional read replicas. Billing is flat monthly — no per-request surprises.
| Plan | RAM | Redis / Valkey | Dragonfly |
|---|---|---|---|
| Micro | 256 MB | €4.99/mo | €6.99/mo |
| Small | 1 GB | €9.99/mo | €14.99/mo |
| Medium | 2 GB | €19.99/mo | €29.99/mo |
| Large | 4 GB | €39.99/mo | €59.99/mo |
Who it's for: EU teams with sustained workloads who want predictable bills, full RESP access, GDPR peace of mind, and the option to use Valkey (the Linux Foundation fork) or Dragonfly (25x Redis throughput).
2. Redis Cloud (from Redis Ltd.)
The official commercial Redis offering. High quality, enterprise features (RedisBloom, RedisSearch, RedisTimeSeries modules), multiple EU regions. Has a free 30MB tier and paid plans that start around €7/month but climb quickly. Redis Ltd. is an Israeli-American company; EU data residency is supported but governance sits outside the EU.
Who it's for: Teams needing official Redis support contracts and the commercial Redis modules.
3. Aiven Redis / Valkey
Aiven is a Finnish (EU!) managed database company offering Redis/Valkey across AWS, GCP, Azure, and DigitalOcean. Pricing starts around €80/mo for a 1GB single-node Valkey plan. Strong HA, good observability, multi-cloud portability.
Who it's for: Larger orgs already on Aiven Postgres/Kafka that want everything on one platform, and can afford the premium.
4. Scaleway Managed Redis
Scaleway is a French cloud provider. Managed Redis starts around €12/mo for a small node in Paris or Amsterdam. Clean EU jurisdiction, native VPC peering, decent console. Fewer bells and whistles than the enterprise offerings, but well-priced.
Who it's for: French-speaking teams or anyone who prefers a mid-sized EU cloud with first-party managed services.
5. AWS ElastiCache / Memorystore / Azure Cache
The big three hyperscalers all offer managed Redis. ElastiCache starts around $12/mo for cache.t4g.micro in eu-central-1, but the real cost is network egress and the fact that you're buying everything in dollars from a US parent. GCP Memorystore and Azure Cache for Redis are similar stories.
Who it's for: Teams already fully committed to a hyperscaler ecosystem, where same-VPC latency and IAM integration outweigh the jurisdictional concerns.
6. Self-host Redis on a VPS (with Sentinel, or Dragonfly single-node)
The old-school option. Rent a VPS for €4.49/mo (DanubeData Starter: 1 vCPU, 2GB RAM), install Redis, set `appendonly yes`, set a password, and open port 6379 only inside your VPC. For HA, run three Redis Sentinel instances. Or run a single Dragonfly container and get 25x the throughput of Redis on the same hardware.
Who it's for: Teams who want full control and can handle patching, monitoring, and backups themselves.
If you're interested in this route, see our guide: Self-Host Redis on a VPS.
Head-to-head comparison
| Feature | Upstash | DanubeData | Aiven | Scaleway | Self-host |
|---|---|---|---|---|---|
| Pricing model | Per request | Flat monthly | Flat monthly | Flat monthly | VPS only |
| Starting price (1GB) | ~$20-60 (usage) | €9.99 | ~€80 | ~€12 | €4.49 |
| EU jurisdiction | US parent | DE / EU | FI / EU | FR / EU | You choose |
| RESP (TCP) protocol | Yes | Yes (standard) | Yes | Yes | Yes |
| REST API | Yes (native) | Via proxy | No | No | DIY |
| Global edge replication | Yes | EU regions | Multi-cloud | EU only | DIY |
| Valkey support | No | Yes | Yes | No (yet) | Yes |
| Dragonfly support | No | Yes | No | No | Yes |
| Read replicas | Pro plan | All plans | All plans | Most plans | DIY |
| Daily snapshots | Pro plan | Included | Included | Included | DIY |
| Latency (EU client) | ~5-20ms | < 1ms (same VPC) | 1-5ms | 1-5ms | < 1ms |
Why Valkey matters in 2026
In March 2024, Redis Ltd. changed the Redis license from BSD-3 to a dual SSPL/RSAL model. This meant cloud providers like AWS, Google, and Oracle could no longer offer Redis as a managed service without a commercial agreement.
The Linux Foundation immediately forked Redis at its last open version and created Valkey — fully open-source, drop-in compatible, maintained by AWS, Google, Oracle, Alibaba, and independent contributors. Valkey 7.2 is wire-compatible with Redis 7.2, so any Redis client library works unchanged.
DanubeData supports both. If you're starting fresh in 2026, we recommend Valkey unless you have a specific need for a Redis Ltd.-only feature (mostly enterprise modules like RedisSearch). The ecosystem is bigger, the license is unambiguous, and the performance is identical.
Why Dragonfly is the dark horse
Dragonfly is a Redis-compatible in-memory store written from scratch in C++ to be multi-threaded. Single-node Redis is single-threaded by design — it can only use one CPU core. Dragonfly uses all your cores.
Benchmarks consistently show Dragonfly doing 25x the throughput of Redis on the same hardware, while using less memory (due to a more efficient hash table). If you're currently running a Redis cluster just to scale CPU, a single Dragonfly node can often replace the whole cluster.
It speaks the RESP protocol. Your existing redis-py, ioredis, or go-redis client works unchanged. The only gotchas: Dragonfly doesn't yet support the full Redis Streams API, and its clustering model is different (flat, not hash-slot-based).
On DanubeData, Dragonfly plans cost about 40-50% more than Redis plans — and that's because the underlying instance has more CPU cores allocated. For a cache-heavy workload, it's often cheaper than scaling Redis horizontally.
Migrating off Upstash: the 60-minute playbook
Step 1: Provision your new Redis
Sign up at danubedata.ro, use the €50 signup credit, and create a Redis/Valkey instance sized to your current Upstash memory usage. You'll get a connection string like:
```
redis://default:PASSWORD@redis-abc123.falkenstein.danubedata.ro:6379
```
Step 2: Export from Upstash
Upstash gives you an RDB snapshot through the console ("Backup → Download"). If your Upstash region supports the --rdb endpoint, you can also do it from the command line:
```bash
# Create a snapshot via the Upstash REST API
curl -X POST https://YOUR_UPSTASH_REST_URL/bgsave \
  -H "Authorization: Bearer YOUR_UPSTASH_TOKEN"

# Download via the Upstash dashboard: Databases → Backups → Download RDB
```
Step 3: Import into DanubeData using redis-cli MIGRATE or DUMP/RESTORE
For small datasets (< 1GB), the easiest path is `redis-cli` with `MIGRATE`:

```bash
# Connect to your OLD Upstash instance
redis-cli -u redis://default:TOKEN@old-host.upstash.io:PORT --tls

# MIGRATE each key (or use --scan for bulk)
# Or simpler: use a dump/restore script
```
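The dump/restore script mentioned above can be sketched with redis-py's `scan_iter`, `DUMP`, and `RESTORE` commands, which preserve per-key TTLs. A minimal version — `copy_keys` is a name we made up, and error handling is left out:

```python
def copy_keys(src, dst, scan_count: int = 1000) -> int:
    """Copy all keys from src to dst via DUMP/RESTORE, preserving TTLs.

    src and dst are redis-py-style clients (scan_iter, dump, pttl, restore).
    RESTORE with replace=True overwrites keys that already exist on dst.
    """
    copied = 0
    for key in src.scan_iter(count=scan_count):
        payload = src.dump(key)
        if payload is None:
            continue                          # key expired between SCAN and DUMP
        ttl_ms = src.pttl(key)
        ttl_ms = ttl_ms if ttl_ms > 0 else 0  # 0 means "no expiry" to RESTORE
        dst.restore(key, ttl_ms, payload, replace=True)
        copied += 1
    return copied
```

With real clients you'd call it as `copy_keys(redis.from_url(UPSTASH_URL), redis.from_url(NEW_URL))`, assuming both endpoints are reachable from wherever you run the script.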
For larger datasets, use `redis-dump-go` or the official `redis-shake` tool from Alibaba:

```bash
# Install redis-shake
go install github.com/alibaba/RedisShake/cmd/redis-shake@latest

# Sync from Upstash to DanubeData
redis-shake sync \
  --source.address=old-host.upstash.io:PORT \
  --source.password=YOUR_UPSTASH_TOKEN \
  --source.tls_enable=true \
  --target.address=redis-abc123.falkenstein.danubedata.ro:6379 \
  --target.password=NEW_PASSWORD \
  --target.tls_enable=true
```
Or, if you have the RDB file:
```bash
# Stop the new Redis, drop the RDB into the data dir, restart
# On managed Redis you can upload via the dashboard's "Restore from snapshot" button
```
Step 4: Swap your client library
The big gotcha: Upstash's `@upstash/redis` is an HTTP client, not a RESP client. To move to standard Redis, you swap it for `ioredis` or `redis` (node-redis).
Before (Upstash HTTP client):

```typescript
import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
});

await redis.set('user:42', JSON.stringify({ name: 'Jane' }));
const user = await redis.get('user:42');
```
After (standard ioredis, same API for basic ops):

```typescript
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL, {
  tls: {}, // DanubeData Redis uses TLS by default
});

await redis.set('user:42', JSON.stringify({ name: 'Jane' }));
const user = await redis.get('user:42');
```
For Python, swap `upstash-redis` for standard `redis-py`:

```python
import json

import redis

r = redis.from_url(
    "rediss://default:PASSWORD@redis-abc123.falkenstein.danubedata.ro:6379",
    decode_responses=True,
)

r.set("user:42", json.dumps({"name": "Jane"}))
user = r.get("user:42")
```
Step 5: What if I need the REST API?
If you have Cloudflare Workers or Vercel Edge Functions that literally cannot open a TCP connection, you have two options:
- Run a tiny HTTP-to-RESP proxy. A 20-line Node.js or Go service, deployed as a Knative serverless container on DanubeData (€5/mo), that accepts `POST /pipeline` requests and forwards them to Redis. We have a reference implementation at github.com/danubedata/redis-http-proxy.
- Use the built-in REST endpoint that DanubeData exposes on Pro plans (ask support to enable).
Most teams find that once they move out of Vercel Edge Functions for their data-heavy paths, they don't actually need the REST API.
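If you do build such a proxy, the critical piece is validating incoming pipeline bodies before forwarding them — otherwise you've exposed `FLUSHALL` to the internet. Here's a Python sketch of that validation step (the reference implementation is Node/Go; the `[["SET","k","v"], ...]` body shape, the `parse_pipeline` name, and the allowlist contents are all assumptions to adapt):

```python
import json

# Commands the proxy is willing to forward — tighten to your app's needs.
ALLOWED = {"GET", "SET", "DEL", "INCR", "EXPIRE", "ZADD", "ZREVRANGE"}

def parse_pipeline(body: bytes) -> list[list[str]]:
    """Validate a JSON pipeline body like [["SET","k","v"],["GET","k"]].

    Raises ValueError for anything that isn't a list of string-argument
    commands, or that uses a command outside the allowlist.
    """
    cmds = json.loads(body)
    if not isinstance(cmds, list):
        raise ValueError("body must be a JSON array of commands")
    out = []
    for cmd in cmds:
        if not (isinstance(cmd, list) and cmd
                and all(isinstance(a, str) for a in cmd)):
            raise ValueError(f"malformed command: {cmd!r}")
        if cmd[0].upper() not in ALLOWED:
            raise ValueError(f"command not allowed: {cmd[0]}")
        out.append(cmd)
    return out
```

Each validated command can then be forwarded to Redis with redis-py's `execute_command(*cmd)` and the replies serialized back to JSON for the edge caller.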
Step 6: Point your app at the new URL and monitor
Change your REDIS_URL environment variable, deploy, and watch for errors. Keep Upstash running in parallel for 24-48 hours so you can roll back if something goes wrong. Once your error rates are stable, delete the Upstash database and stop the bleeding.
Common Redis use cases and DanubeData sizing
Caching
Cache database query results, HTML fragments, API responses. Size your Redis to fit your hot working set, not your entire dataset. Typical ratio: 10-20% of your Postgres DB size. For most apps, Small (1GB) is plenty.
Session store
Store user sessions with `SETEX` or `EXPIRE`. Each session is typically 1-4KB. 1GB Redis holds ~250K-1M active sessions. Small (1GB) handles most SaaS products.
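That ~250K-1M range comes from simple division. A hedged estimator — the 25% overhead factor (key names, expiry metadata, allocator fragmentation) is our rough assumption, not a measured constant:

```python
def session_capacity(ram_gb: float, session_kb: float = 4.0,
                     overhead: float = 0.25) -> int:
    """Rough count of sessions that fit in a plan's RAM.

    overhead reserves a fraction of RAM for Redis bookkeeping and
    fragmentation — an estimate, tune it against INFO memory in practice.
    """
    usable_kb = ram_gb * (1 - overhead) * 1024 * 1024  # GB -> KB, minus overhead
    return int(usable_kb / session_kb)
```

With 4KB sessions on a 1GB plan this lands near 200K sessions; with 1KB sessions, closer to 800K — hence the quoted range.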
Rate limiting
Token bucket or sliding window with `INCR` + `EXPIRE`. Memory footprint is tiny (~100 bytes per active rate-limit key). Even Micro (256MB) handles millions of rate-limit counters.
Pub/sub and Streams
Redis Streams for durable message queues, pub/sub for fire-and-forget. Memory depends on message volume and retention. Use `XADD` with the `MAXLEN` option to cap stream size.
Queues (rq, Bull, Sidekiq)
Job queues like Sidekiq (Ruby), rq (Python), or Bull (Node). Typically low memory unless you have millions of stuck jobs. Small (1GB) is usually sufficient; add a read replica if you have heavy monitoring queries.
Frequently Asked Questions
Is serverless Redis actually cheaper than managed Redis?
Only in two specific cases: you're on the free tier (under 10K requests/day), or your workload is truly bursty — days of zero traffic punctuated by short spikes. For any sustained production workload — more than ~5M commands per month — flat-rate managed Redis wins, often by 5-10x.
What's the real difference between REST and RESP?
RESP (Redis Serialization Protocol) is Redis's native TCP protocol. It's extremely efficient — a `GET user:42` is about 26 bytes on the wire. REST wraps the same command in HTTP — now it's 400+ bytes of headers, JSON body, and TLS handshake overhead. REST is slower (5-20ms vs < 1ms) and more expensive to run, but it works in environments that can't hold TCP connections. RESP is the default for any real backend.
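RESP's compactness is easy to verify by encoding a command yourself. A minimal encoder for RESP's array-of-bulk-strings request format — just enough to count bytes, not a full client:

```python
def resp_encode(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings (Redis's request format).

    Example shape: *<argc>\r\n then $<len>\r\n<arg>\r\n for each argument.
    """
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        raw = arg.encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(raw), raw))
    return b"".join(parts)
```

Encoding `GET user:42` this way yields 26 bytes on the wire; an equivalent HTTPS request carries hundreds of bytes of headers before the payload even starts.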
Do DanubeData Redis plans include HA and replicas?
All plans support adding read replicas (€X/mo extra per replica depending on size). For HA, we can deploy Redis Sentinel or Redis Cluster configurations — contact support for enterprise HA topologies. Daily snapshots are included on every plan, stored offsite on Hetzner Object Storage.
What about persistence? Is my data safe if the server restarts?
Yes. All DanubeData Redis instances run with AOF (append-only file) persistence by default, plus RDB snapshots every 60 seconds when there's write activity. After a restart, Redis replays the AOF log — you'll lose at most the last second of writes (equivalent to `appendfsync everysec`). If you need stricter durability, we can configure `appendfsync always` on request.
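Expressed as a redis.conf fragment, the persistence posture described above looks roughly like this (illustrative only — managed plans set these for you, and the exact snapshot rule is an assumption):

```
# AOF: log every write, fsync once per second (lose at most ~1s on a crash)
appendonly yes
appendfsync everysec

# RDB: snapshot if at least 1 write happened in the last 60 seconds
save 60 1
```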
Is DanubeData Redis GDPR compliant?
Yes. Data is stored in Falkenstein, Germany, on Hetzner dedicated hardware. DanubeData is an EU-operated company with no US parent, so we're not subject to the US CLOUD Act. We offer a Data Processing Agreement (DPA) — downloadable directly from your team settings — and a Schrems II-compliant infrastructure stack.
How do I migrate if I'm using Upstash's global edge replication?
Most teams using edge replication don't actually need it — they added it because it was free on the Pro plan. Start by measuring your actual cross-region read percentage. If it's under 10%, a single EU instance is fine. If it's truly global, we can set up read replicas in multiple DanubeData regions, or you can run a CDN-cache layer (Cloudflare Workers KV for read-mostly data) in front of a single Redis origin.
What happens to my data if DanubeData goes out of business?
You can download an RDB snapshot at any time from the dashboard. We also offer an escrow option on enterprise plans where a third party holds encrypted snapshots of your data. And because we run standard Redis/Valkey/Dragonfly — not a proprietary fork — you can migrate to any other provider or self-host with zero code changes.
Can I run Redis and Valkey side by side?
Absolutely. Some teams use Redis for existing workloads (to avoid retesting everything) and spin up Valkey for new projects. The client libraries are identical, so you can point different env vars at different instances with no code difference.
Get started
If you're paying more than $20/month on Upstash today, you're almost certainly overpaying. Here's the fastest path to switching:
- Sign up at danubedata.ro and claim your €50 credit.
- Provision a Redis, Valkey, or Dragonfly instance sized to your Upstash memory.
- Run `redis-shake` or `DUMP`/`RESTORE` to copy your data over.
- Swap your `@upstash/redis` import for `ioredis` or `redis-py`.
- Update your `REDIS_URL` environment variable.
- Monitor for 24 hours, then decommission Upstash.
DanubeData Redis pricing recap:
- Micro 256MB: €4.99/mo — free tier replacement, no command limits
- Small 1GB: €9.99/mo — handles most SaaS workloads
- Medium 2GB: €19.99/mo — serious production, read replicas available
- Large 4GB: €39.99/mo — high-throughput caching and session stores
- Dragonfly: +40-50% for multi-threaded 25x performance
All plans include SSL, daily offsite backups, one-click read replicas, and EU data residency.
👉 Launch your first managed Redis
Questions about migrating a specific Upstash workload? Talk to our team — we've seen most of the gotchas and are happy to help plan your move.