For almost a decade, ElephantSQL was the quiet workhorse behind thousands of side projects, Heroku migrations, indie SaaS apps, and university assignments. A Swedish-run PostgreSQL-as-a-service with a famously generous free "Tiny Turtle" tier (20 MB, 5 concurrent connections), it had the kind of reputation that meant you could recommend it without a second thought.
That era is over. ElephantSQL was acquired by 84codes, slowly de-emphasized, and then officially sunset in January 2024: no new signups, and existing customers given until January 2025 to migrate. If you are reading this in 2026, the deadline is already in the rear-view mirror. Any instances still reachable are on borrowed time, and any new project you spin up needs a different home.
This guide is the migration plan we wish we had handed to every ElephantSQL customer on the day the shutdown email landed. We will cover:
- A short history of what ElephantSQL was and why its departure matters
- A side-by-side comparison of the serious 2026 alternatives
- A tested pg_dump/pg_restore playbook
- How to do a zero-downtime cutover with logical replication
- Application-side changes: DATABASE_URL, PgBouncer, TLS, extensions
- An opinionated recommendation for small, medium, and EU-regulated workloads
- An FAQ for the questions we get every week on support tickets
If you just want the short answer: for most small and medium EU-based apps, DanubeData Managed PostgreSQL at 19.99 EUR/month is the closest same-region successor to ElephantSQL, with Germany-based residency, read replicas, and automated snapshots. For truly zero-cost experiments, Neon still has the best free tier, with the usual US-CLOUD-Act caveats.
What ElephantSQL Was (and Why Its Shutdown Is a Big Deal)
ElephantSQL launched in 2012 as one of the first truly serious Postgres-as-a-service offerings, long before AWS RDS was considered a sensible default. Over time it became the European answer to Heroku Postgres, with three things that made it sticky:
- The Tiny Turtle free tier: 20 MB, 5 connections, one shared database. Small, but enough for hackathons, prototypes, and student projects.
- Swedish data residency: Hosted in Stockholm, inside the EU, with predictable GDPR semantics.
- Simple pricing: Turtle, Hippo, Elephant, Puffin plans. No surprise billing.
It was acquired by 84codes (the company behind CloudAMQP and CloudKarafka) in the late 2010s, and for a few years nothing changed. Then in January 2024, 84codes announced ElephantSQL would be sunset: no new signups, existing customers given twelve months to export and move, and all shared instances scheduled for deletion in early 2025. A blog post linked on the login page framed it as "focusing on our messaging products." No resale, no acquirer stepping in.
If you are reading this post, one of three things is probably true:
- You already migrated in 2024 and want a sanity check that your new provider still looks sensible in 2026.
- You are cleaning up an old repo and just noticed an ec2-54-75-123-45.eu-west-1.compute.amazonaws.com or a tiny-turtle.db.elephantsql.com string in a config file.
- You inherited a project and discovered, to your horror, that the database is still pointed at an ElephantSQL URL that went dark months ago.
Whichever bucket you are in, the rest of this guide will move you forward.
Where to Migrate: The 2026 Shortlist
We have evaluated the main managed Postgres providers that an ElephantSQL refugee would realistically consider. Here is the shortlist, with each one summarized honestly (including the awkward bits):
1. DanubeData Managed PostgreSQL (Germany, EU)
Disclosure first: this is our own service, so calibrate accordingly. DanubeData runs managed Postgres on dedicated Hetzner hardware in Falkenstein, Germany, on NVMe storage, with Kubernetes-managed StatefulSets under the hood. The entry plan is 19.99 EUR/month for 1 vCPU, 2 GB RAM, 25 GB NVMe, with Medium (2 vCPU / 4 GB / 50 GB) at 39.99 EUR/month and Large (4 vCPU / 8 GB / 100 GB) at 79.99 EUR/month.
- Postgres versions: 15, 16, 17
- Read replicas: yes, with a reader endpoint for load-balanced queries
- EU residency: Germany, single jurisdiction, GDPR-native
- CLOUD Act exposure: none; no US parent company
- Snapshots: automated (TopoLVM-based), with retention you control
- TLS: yes, with CA cert available for pinning
- Free tier: no, but 50 EUR signup credit covers roughly 2.5 months of the Small plan
For former ElephantSQL customers in Europe, this is the closest "same region, same simplicity" option. Stockholm to Falkenstein adds roughly 20 ms of latency; most apps will not notice.
2. Neon (United States, serverless Postgres)
Neon is the most interesting technical reinvention of Postgres in a decade: storage and compute are separated, compute scales to zero after idle, and branching is git-like. The free tier is genuinely generous (multiple branches, a few GB of storage), which makes it the natural spiritual successor to Tiny Turtle.
- Postgres versions: 15, 16, 17
- Free tier: yes (best-in-class)
- EU region: yes (Frankfurt), but the company is US-based
- CLOUD Act exposure: yes (US-headquartered), relevant for Schrems II compliance
- Cold starts: roughly 300-600 ms when compute wakes from idle; some web apps feel this
- Extensions: pgvector, pg_trgm, PostGIS supported
Honest take: for a hobby project or prototype, Neon is excellent. For anything touching EU personal data under a strict GDPR interpretation, you need a data processing addendum and a clear-eyed view of US government access risk.
3. Supabase (US/EU regions, broader BaaS)
Supabase is more than just Postgres: you get auth, realtime, storage, and edge functions on top. That is either an advantage or extra surface area depending on taste. Free tier exists. EU region (Frankfurt) available, but again the company is US-headquartered.
- Postgres versions: 15, with newer in pipeline
- Free tier: yes (pauses after 7 days inactivity)
- CLOUD Act exposure: yes
- Extensions: excellent (pgvector, PostGIS, pg_cron, and the full Supabase extension set)
- Best for: apps that want auth + DB + realtime in one stack
4. Aiven (Finland, multi-cloud, EU)
Aiven is Finnish, enterprise-focused, and polished. Managed Postgres lives on top of whichever underlying cloud you pick (AWS, GCP, Azure, DigitalOcean, UpCloud). That gives you region flexibility but also means your GDPR posture depends on which cloud you chose.
- Postgres versions: 13 through 17
- Free tier: limited trial only
- EU HQ: yes (Finland)
- CLOUD Act exposure: depends on the underlying cloud (AWS/GCP = yes)
- Pricing: starts higher than DanubeData (roughly 60-80 EUR/month for comparable Small)
- Best for: teams that want enterprise features and are already multi-cloud
5. Scaleway Managed Database (France, EU)
French sovereign-friendly, and a solid option if you are bound by FR/EU-only procurement rules. Paris and Amsterdam regions. Reasonable pricing, though the entry tier has historically been less competitive than DanubeData or Hetzner-native options.
- Postgres versions: 14, 15, 16
- EU HQ: France
- CLOUD Act exposure: none
- Free tier: no
- Best for: French public sector and companies with sovereignty procurement rules
6. Crunchy Bridge (US-based company, multi-region)
Crunchy Data has employed core Postgres contributors for years and runs Crunchy Bridge with that engineering credibility. Regions include US, EU (Ireland, Frankfurt), and AU. US-incorporated, so CLOUD Act applies.
- Postgres versions: 15, 16, 17
- Free tier: no
- CLOUD Act exposure: yes
- Best for: teams that want very serious Postgres operators and do not need EU-sovereign posture
7. ElestIO (Netherlands)
ElestIO is an "open-source apps, fully managed" provider headquartered in the Netherlands. They run managed Postgres across several European data centers (Hetzner, OVH, others). The product is broader than DBaaS; think of it as Heroku-style hosting for open-source software.
- Postgres versions: 13 through 17
- EU HQ: Netherlands
- CLOUD Act exposure: none
- Best for: teams that also want managed Redis, RabbitMQ, or other FOSS services from one vendor
8. Self-host on a DanubeData VPS
For full control and lowest cost, you can always run Postgres yourself on a VPS. DanubeData VPS starts at 4.49 EUR/month; an 8.99 EUR Standard plan is enough for a small production Postgres with daily pg_dump backups.
- Pros: cheapest, full control over extensions, tuning, version
- Cons: you own patching, HA, backups, and 3 AM pages
- Best for: teams with ops discipline and an existing infrastructure-as-code setup
Comparison Table: The 2026 Alternatives
| Provider | HQ | Entry Plan | Free Tier | EU Residency | CLOUD Act | PG Versions | Replicas |
|---|---|---|---|---|---|---|---|
| ElephantSQL | RIP (Sweden) | Shut down | RIP | Was: yes | No | Was: 11-15 | Was: yes |
| DanubeData | Germany | 19.99 EUR/mo (1 vCPU, 2 GB, 25 GB) | 50 EUR credit | Yes (DE) | No | 15, 16, 17 | Yes |
| Neon | United States | 0 USD (free) / 19 USD+ | Yes (best) | EU region available | Yes | 15, 16, 17 | Branches |
| Supabase | United States | 0 USD / 25 USD+ | Yes (pauses) | EU region available | Yes | 15 | Yes (paid) |
| Aiven | Finland | ~60-80 EUR/mo | Trial only | Yes (EU) | Depends on cloud | 13-17 | Yes |
| Scaleway | France | ~15-20 EUR/mo | No | Yes (FR/NL) | No | 14-16 | Yes |
| Crunchy Bridge | United States | ~35 USD/mo | No | EU region available | Yes | 15, 16, 17 | Yes |
| ElestIO | Netherlands | ~15-20 EUR/mo | Trial | Yes | No | 13-17 | Yes |
The table is deliberately opinionated: if you had 30 seconds to pick a provider for an EU app, the rows with "No" in the CLOUD Act column are where to look first. If you care most about free tier and are okay with US-law exposure, Neon is the obvious pick.
Migration Playbook: From ElephantSQL (or Any Postgres) to a New Home
Below is the battle-tested migration path. It applies equally well to DanubeData, Neon, Supabase, Aiven, or any destination that speaks real Postgres. We will use DanubeData as the example target; substitute your own URL where obvious.
Step 0: Freeze Schema Changes and Snapshot
Before touching anything, commit a git tag that matches the production schema. If you are still able to log into ElephantSQL, trigger one final backup from the dashboard and download the dump file. Do not rely on provider snapshots you cannot see.
Step 1: Collect the Connection Strings
You need two:
```shell
# Source (old ElephantSQL, or whatever you are leaving)
SOURCE_URL="postgres://user:password@tiny-turtle.db.elephantsql.com:5432/olddbname"

# Destination (DanubeData)
DEST_URL="postgres://danubeuser:newpassword@db-xxx.danubedata.ro:5432/newdbname?sslmode=require"
```
Two gotchas that bite people every time:
- Append ?sslmode=require to the destination URL. DanubeData (and every serious managed Postgres) enforces TLS. If you omit it, some client libraries silently fall back to plaintext and the connection fails.
- On the destination, the database name is usually not postgres; it is whatever you set at create time. Check the UI.
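Both gotchas are easy to catch mechanically before you deploy. A minimal pre-flight check, sketched in Python with only the standard library (the URLs and the "default database name" heuristic are assumptions of this sketch, not provider rules):

```python
from urllib.parse import urlparse, parse_qs

def check_dest_url(url: str) -> list[str]:
    """Return a list of problems with a destination Postgres URL."""
    problems = []
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    # Gotcha 1: TLS must be explicit, or some drivers fall back to plaintext
    if params.get("sslmode", ["disable"])[0] not in ("require", "verify-ca", "verify-full"):
        problems.append("sslmode missing or too weak; append ?sslmode=require")
    # Gotcha 2: the destination database is usually not the default name
    if parsed.path in ("", "/", "/postgres"):
        problems.append("database name looks like a default; check what you set at create time")
    return problems

# A URL missing sslmode and using the default db name triggers both warnings
print(check_dest_url("postgres://danubeuser:pw@db-xxx.danubedata.ro:5432/postgres"))
```

Run it in CI against the new `DATABASE_URL` and the silent-plaintext failure mode never reaches production.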
Step 2: Schema-Only Dump (Dry Run)
Before you copy gigabytes of data, make sure the schema lands cleanly. Run a schema-only dump and restore first:
```shell
# Dump just the schema from the source
pg_dump "$SOURCE_URL" \
  --schema-only \
  --no-owner \
  --no-privileges \
  --format=plain \
  --file=schema.sql

# Inspect the file for anything suspicious: extensions, roles, tablespaces
less schema.sql

# Apply to the destination
psql "$DEST_URL" --file=schema.sql
```
Why --no-owner and --no-privileges? On a new managed Postgres, the role names will be different (your old tiny-turtle-user does not exist on DanubeData). Letting Postgres skip the ownership/GRANT lines saves you a pile of "role does not exist" errors and you can reapply them manually afterwards if needed.
Step 3: Install Extensions on the Destination
This is the step that trips up people moving from ElephantSQL: managed Postgres providers only expose a curated list of extensions. Before you restore data, verify every extension your app relies on is available:
```sql
-- Connect to the destination first: psql "$DEST_URL"

-- List extensions your app uses
SELECT extname, extversion FROM pg_extension;

-- Install the ones you need, e.g.:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS vector;  -- pgvector
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```
DanubeData Managed Postgres currently supports pg_trgm, pgcrypto, uuid-ossp, pgvector, PostGIS, pg_stat_statements, hstore, citext, ltree, and a dozen more. If your app uses something exotic (pg_cron, timescaledb), open a support ticket before you migrate; we may enable it or recommend self-hosting for that specific need.
Step 4: Full Data Dump and Restore
Now the main event. There are two modes: dump-and-restore (with downtime) and logical replication (without). For most small apps, dump-and-restore at 3 AM is totally fine.
Option A: Dump and Restore (with a small maintenance window)
```shell
# Put the app in maintenance mode (stops writes)
# Example for Heroku-style: heroku maintenance:on
# Or for a VPS-hosted app: flip a feature flag or return 503 from nginx

# Full dump, custom format (compressed, parallel-restorable)
pg_dump "$SOURCE_URL" \
  --format=custom \
  --no-owner \
  --no-privileges \
  --verbose \
  --file=fulldb.dump

# Check the size
ls -lh fulldb.dump

# Restore into the destination (parallelized with -j)
pg_restore \
  --dbname="$DEST_URL" \
  --no-owner \
  --no-privileges \
  --verbose \
  --jobs=4 \
  fulldb.dump

# Update your application DATABASE_URL to the new destination
# Take the app out of maintenance mode
```
A 20 MB ElephantSQL Tiny Turtle database restores in well under a minute. A 5 GB Hippo database takes 3-8 minutes. For anything bigger than that, or for apps that cannot tolerate even 5 minutes of downtime, use Option B.
Option B: Logical Replication (Near-Zero Downtime)
Postgres logical replication streams changes from a publisher to a subscriber. You copy the schema, start replication, let it catch up, and then cut over DNS/config. Downtime collapses to the time it takes to flip a connection string.
```sql
-- On the SOURCE (if the provider allows it; ElephantSQL did not on free tiers):
CREATE PUBLICATION migrate_pub FOR ALL TABLES;

-- On the DESTINATION (DanubeData Managed Postgres):
CREATE SUBSCRIPTION migrate_sub
  CONNECTION 'host=source-host port=5432 dbname=olddb user=replicator password=secret sslmode=require'
  PUBLICATION migrate_pub
  WITH (copy_data = true, create_slot = true);

-- Monitor replication lag
SELECT * FROM pg_stat_subscription;
SELECT * FROM pg_replication_slots;
```
Once pg_stat_subscription.last_msg_receipt_time is within a few hundred milliseconds of now, you are synchronized. Freeze writes for a beat, verify row counts match, update your app config, and disable the subscription.
Notes:
- Logical replication requires wal_level = logical on the source. Most managed providers allow this; some free tiers do not.
- Sequences do not replicate automatically. After cutover, run SELECT setval(...) to catch sequences up on the destination.
- DDL (schema changes) does not replicate. Freeze migrations during the window.
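The sequence catch-up can be scripted rather than typed by hand: fetch current values from the source (for example with `SELECT schemaname, sequencename, last_value FROM pg_sequences;`) and emit one setval() statement per row to run on the destination. A sketch in Python; the rows below are illustrative placeholders, not real output:

```python
# Rows as fetched from pg_sequences on the source (placeholder values)
rows = [
    ("public", "users_id_seq", 4182),
    ("public", "orders_id_seq", 977),
]

# Emit one setval() per sequence; pipe the output into psql on the destination
stmts = [
    f"SELECT setval('{schema}.{name}', {last_value});"
    for schema, name, last_value in rows
]
for s in stmts:
    print(s)
```

Skipping this step is the classic post-cutover failure: the first INSERT after migration collides with an existing primary key.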
Step 5: Verify Row Counts and Checksums
Before you switch production traffic, verify the two databases agree on the truth:
```shell
# Dump row counts from the source
psql "$SOURCE_URL" -c "
  SELECT schemaname, relname, n_live_tup
  FROM pg_stat_user_tables
  ORDER BY schemaname, relname;
"

# Same on the destination
psql "$DEST_URL" -c "
  SELECT schemaname, relname, n_live_tup
  FROM pg_stat_user_tables
  ORDER BY schemaname, relname;
"

# Diff them in your favorite tool; numbers should match per table
```
Keep in mind that n_live_tup is a statistics estimate, not an exact count; for critical tables, run SELECT count(*) on both sides. For extra paranoia, pick 3-5 critical tables and compare md5(string_agg(t::text, ',' ORDER BY id)) on each (the ORDER BY matters, or the aggregate is non-deterministic) to ensure bytes match bytes.
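Eyeballing two row-count listings gets error-prone past a dozen tables. A small diff helper, sketched in Python (the counts below are placeholders standing in for the two psql outputs):

```python
def diff_counts(source: dict[str, int], dest: dict[str, int]) -> list[str]:
    """Report tables whose row counts disagree, or that exist on one side only."""
    issues = []
    for table in sorted(set(source) | set(dest)):
        s, d = source.get(table), dest.get(table)
        if s != d:
            issues.append(f"{table}: source={s} dest={d}")
    return issues

source = {"public.users": 4182, "public.orders": 977}
dest = {"public.users": 4182, "public.orders": 975}  # two rows behind
print(diff_counts(source, dest))
```

An empty list means the two sides agree; anything else is a table to investigate before you flip traffic.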
Step 6: Update DATABASE_URL and Deploy
The moment of truth. In your application configuration (Heroku config vars, Kubernetes Secret, .env file, Rails credentials, Laravel .env, Node process.env.DATABASE_URL), replace the ElephantSQL URL with the new one.
```shell
# Before
DATABASE_URL=postgres://user:pass@tiny-turtle.db.elephantsql.com:5432/olddbname

# After
DATABASE_URL=postgres://danubeuser:newpass@db-xxx.danubedata.ro:5432/newdbname?sslmode=require
```
Deploy. Watch your application logs for connection errors. Check that writes succeed (a test user signup, a test comment, etc.). If everything looks good, drop the ElephantSQL database from your records, cancel the subscription, and save the dump file to a cold S3 bucket for 90 days in case you need to rollback.
Step 7: Set Up Connection Pooling (PgBouncer)
ElephantSQL veterans will remember that even the Hippo plan only gave you 50 concurrent connections. Managed Postgres in general is allergic to lots of short-lived connections: each Postgres backend process costs 5-10 MB of RAM and a fork syscall.
If your app is Node/Python/Ruby and spawns many workers, you need PgBouncer in transaction mode in front of Postgres. DanubeData Managed Postgres ships PgBouncer optionally (same endpoint, different port); if your provider does not, run it as a sidecar on the app host:
```ini
; pgbouncer.ini (minimal example)
[databases]
appdb = host=db-xxx.danubedata.ro port=5432 dbname=newdbname

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 25
server_tls_sslmode = require
```
Your app then connects to 127.0.0.1:6432 instead of the database directly. 500 app connections fan into 25 real Postgres connections. No more "too many clients" errors at traffic spikes.
Caveat: transaction mode breaks session-level state such as prepared statements and LISTEN/NOTIFY. If your ORM uses prepared statements heavily (looking at you, Django and older Rails), either switch to session mode, disable prepared statements in the driver, or use PgBouncer 1.21+, which can track protocol-level prepared statements when max_prepared_statements is set.
Step 8: TLS and Certificate Pinning
ElephantSQL used a shared CA. Your new provider almost certainly uses a different one. Many client libraries default to sslmode=prefer, which accepts any cert without verification; that is insecure and will bite you in an audit.
For production, use sslmode=verify-full and pin the provider's CA certificate:
```shell
# DanubeData example (quote the value: the & would otherwise be interpreted by the shell)
DATABASE_URL="postgres://user:pass@db-xxx.danubedata.ro:5432/newdbname?sslmode=verify-full&sslrootcert=/etc/ssl/danubedata-ca.crt"
```
Download the CA file from the provider dashboard (DanubeData exposes it on every database detail page), ship it with your container image, and point sslrootcert at it. Your connection is now cryptographically bound to that one CA.
Extension Parity: The Quiet Migration Killer
The single biggest surprise during ElephantSQL migrations was extensions. Apps that "just worked" on ElephantSQL sometimes failed silently on a new provider because one extension was missing or at a different version.
Here is a quick parity matrix for the extensions we see most often:
| Extension | DanubeData | Neon | Supabase | Aiven | Scaleway |
|---|---|---|---|---|---|
| pg_trgm | Yes | Yes | Yes | Yes | Yes |
| pgcrypto | Yes | Yes | Yes | Yes | Yes |
| uuid-ossp | Yes | Yes | Yes | Yes | Yes |
| pgvector | Yes | Yes | Yes | Yes | Check region |
| PostGIS | Yes | Yes | Yes | Yes | Yes |
| pg_stat_statements | Yes | Yes | Yes | Yes | Yes |
| pg_cron | On request | No | Yes | Depends | No |
| timescaledb | On request | No | No | On some clouds | No |
Check your current extension list before anything else:
```sql
SELECT extname, extversion FROM pg_extension ORDER BY extname;
```
Cross-reference against your destination provider. If anything is missing, decide now whether to drop the feature, self-host on a VPS, or pick a different provider.
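The cross-reference itself is just a set difference. Sketched in Python; both lists are illustrative placeholders, not real provider output:

```python
# Extensions your app uses (from pg_extension on the source)
source_exts = {"pg_trgm", "pgcrypto", "uuid-ossp", "pg_cron"}

# Extensions the destination advertises (from its docs or pg_available_extensions)
dest_exts = {"pg_trgm", "pgcrypto", "uuid-ossp", "vector", "postgis"}

missing = sorted(source_exts - dest_exts)
print(missing)  # anything listed here needs a decision before you migrate
```

Here the diff would surface pg_cron, exactly the kind of quiet dependency that otherwise fails weeks after cutover.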
Backups and Point-in-Time Recovery
ElephantSQL offered periodic backups; what it did not offer (on smaller tiers) was continuous point-in-time recovery. Most customers relied on nightly dumps.
On modern providers, check two things:
- Snapshot frequency and retention: DanubeData runs automated snapshots with configurable retention (default 7 days).
- PITR window: Some providers (Neon, Crunchy Bridge) offer continuous WAL archiving and let you restore to any second within 7-30 days. Ask your provider explicitly.
Regardless of provider, always run your own off-provider nightly pg_dump to cold storage (S3 or similar). If the provider disappears the way ElephantSQL did, a dump you control is what saves you. DanubeData Object Storage at 3.99 EUR/month gives you an S3-compatible bucket in the same country for exactly this use case.
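The nightly-dump habit is easier to keep when the mechanics are scripted. A sketch in Python of the two decisions a backup job makes each night: what command to run, and which old dumps to prune. The file-naming convention (db-YYYY-MM-DD.dump), paths, and 90-day retention are assumptions of this sketch, not provider behavior:

```python
from datetime import date, timedelta

def nightly_backup_plan(dsn: str, today: date, existing: list[str], keep_days: int = 90):
    """Build tonight's pg_dump command and list dated dump files old enough to prune.
    Naming (db-YYYY-MM-DD.dump) and the /var/backups path are assumptions."""
    dump_cmd = ["pg_dump", dsn, "--format=custom",
                f"--file=/var/backups/db-{today.isoformat()}.dump"]
    cutoff = today - timedelta(days=keep_days)
    prune = [f for f in existing
             if date.fromisoformat(f.removeprefix("db-").removesuffix(".dump")) < cutoff]
    return dump_cmd, prune

cmd, prune = nightly_backup_plan("postgres://user:pw@host/db", date(2026, 4, 1),
                                 ["db-2025-12-01.dump", "db-2026-03-30.dump"])
print(prune)
```

Run the returned command from cron, copy the dump file to the off-provider bucket, then delete the pruned names; the point is that all three steps live in code you own, not in a provider dashboard.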
Read Replicas: When and How
Read replicas were an ElephantSQL Elephant-tier feature. They are now table stakes on DanubeData, Supabase Pro, Neon, Aiven, Scaleway, and Crunchy Bridge.
When you actually need a replica:
- Read-heavy workloads: dashboards, reports, aggregations that do not need NOW()-fresh data
- Failover readiness: so you can promote the replica if the primary has a bad day
- Geographic latency: reads near users, writes to a single primary
When you do not:
- Your app does 10 QPS. Save the money.
- You need read-your-own-writes consistency everywhere. Replication lag will bite you.
On DanubeData, you get a dedicated reader endpoint that round-robins across replicas; set it in your app as a second DATABASE_READ_URL and route read-only queries there.
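Routing can stay simple at the application layer. A minimal sketch in Python; the reader hostname is a placeholder, and falling back to the primary when DATABASE_READ_URL is unset is a choice of this sketch, not DanubeData behavior:

```python
import os

def pick_dsn(readonly: bool) -> str:
    """Send read-only work to the reader endpoint when one is configured;
    everything else (and all writes) goes to the primary."""
    primary = os.environ["DATABASE_URL"]
    reader = os.environ.get("DATABASE_READ_URL", primary)  # graceful fallback
    return reader if readonly else primary

# Example: report queries use the reader, signups use the primary
# (both hostnames below are placeholders)
os.environ["DATABASE_URL"] = "postgres://user:pw@db-xxx.danubedata.ro:5432/appdb"
os.environ["DATABASE_READ_URL"] = "postgres://user:pw@db-xxx-reader.danubedata.ro:5432/appdb"
print(pick_dsn(readonly=True))
```

Because of replication lag, only route queries through the reader when stale-by-a-few-seconds data is acceptable; anything with read-your-own-writes semantics stays on the primary.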
GDPR, CLOUD Act, and Why Residency Is Not Enough
One reason ElephantSQL was so popular in Europe was that it was quiet, simple, and Swedish. Its replacement deserves the same scrutiny.
The key distinction in 2026:
- Data residency: where the bytes are stored. Frankfurt, Paris, Stockholm, Falkenstein, Amsterdam are all inside the EU.
- Provider jurisdiction: where the company is incorporated. This is what determines exposure to foreign subpoena laws like the US CLOUD Act.
A US-headquartered provider storing your bytes in Frankfurt is still reachable by a US subpoena. That is the essence of the Schrems II ruling. If you process personal data of EU residents under a strict legal interpretation (healthcare, finance, public sector, anything touching sensitive categories), you need a provider whose parent company is EU-based.
In our table, the "CLOUD Act: No" rows are DanubeData, Scaleway, and ElestIO. Aiven is nuanced (Finnish company, but if you put the workload on AWS Frankfurt, the underlying cloud is US-owned).
This is not legal advice; talk to your DPO. But if you were happy with ElephantSQL Sweden from a compliance angle, DanubeData Germany is the closest equivalent.
Our Recommendation (Honest and Specific)
- Small EU app, 10 MB - 25 GB database, needs to be online in 15 minutes: DanubeData Managed PostgreSQL Small at 19.99 EUR/month. Same-region-ish as Stockholm, GDPR-native, 50 EUR signup credit covers the first 2-3 months.
- Medium EU app with growth ahead, needs read replicas: DanubeData Medium (39.99 EUR/month) with one replica, or Aiven if you need multi-cloud and are okay with the price.
- Truly free hobby project, okay with US exposure: Neon free tier. Best product in the space for that use case.
- You also want auth + realtime + storage in the same stack: Supabase. Accept the US-HQ tradeoff.
- French public sector or sovereignty procurement: Scaleway.
- You have ops people and a 4.49 EUR VPS budget: Self-host Postgres on DanubeData VPS. Most control, lowest cost, most responsibility.
FAQ
1. ElephantSQL is already offline. Can I still get my data out?
Probably not from the platform itself. If you saved a pg_dump before the sunset, restore it into your new provider. If you did not, check whether your app itself has any data replicas (app cache, search index, read-only mirror, analytics warehouse) that can rebuild the core tables. Reach out to 84codes support; they occasionally retain deep archives for legal or contractual reasons, though we would not bet a project on it.
2. Which extensions are supported on DanubeData Managed Postgres?
The common set: pg_trgm, pgcrypto, uuid-ossp, pgvector, PostGIS, pg_stat_statements, hstore, citext, ltree, tablefunc, btree_gin, btree_gist, intarray, unaccent. Niche extensions (pg_cron, timescaledb) are available on request. Open a support ticket with your use case.
3. Do I need PgBouncer if I am on a managed provider?
If your app opens fewer than ~30 concurrent connections, no. If it is a Node/Python/Ruby app with multiple workers per pod and a Kubernetes HPA, yes. Without pooling, a traffic spike opens a flood of new Postgres backends and your RAM gets eaten. DanubeData offers an optional managed PgBouncer endpoint; otherwise, run it as a sidecar.
4. Can I have read replicas for under 40 EUR/month?
Yes. DanubeData Managed Postgres bills replicas separately; Small (19.99 EUR) + one Small replica = 39.98 EUR/month. Most small apps do not need one yet, but the option is there when load grows.
5. How do I do point-in-time recovery?
DanubeData takes automated snapshots with configurable retention. For PITR to an arbitrary second, the current Managed Postgres tier uses snapshot-based recovery with a typical granularity of one snapshot per day. Continuous-WAL PITR is on the roadmap. If that is a hard requirement today, Crunchy Bridge and Neon both offer it. For most apps, daily snapshot + off-provider nightly pg_dump is enough.
6. Is DanubeData Managed Postgres really GDPR-compliant for EU personal data?
Yes. Data lives in Falkenstein, Germany, on dedicated Hetzner hardware. DanubeData is a European company; there is no US parent and no CLOUD Act exposure. We sign a DPA on request, and the processing terms are published openly. That said, GDPR compliance is a shared responsibility: we provide the infrastructure and contractual guarantees, you provide the application-layer controls (access, encryption-at-rest keys, deletion workflows).
7. What about connection limits? ElephantSQL Tiny Turtle was only 5.
DanubeData Small tier allows 100 concurrent connections out of the box; Medium 200; Large 400. Add PgBouncer in front and you can serve thousands of application-side connections with no extra infrastructure cost.
8. Can I migrate incrementally without downtime?
Yes, with logical replication (see Step 4 Option B above). Some providers also offer paid managed migration tools (AWS DMS, Azure DMS) that automate parts of this, but for a Postgres-to-Postgres hop inside the EU, native logical replication is simpler and free.
Next Steps
- Pull the most recent pg_dump from whatever source you have. Put it in cold storage today.
- Decide which row of the comparison table fits your profile.
- If that row is DanubeData, create a Managed PostgreSQL instance and claim the 50 EUR signup credit.
- Walk through the 8-step playbook in this post. Small Tiny Turtle databases are usually fully migrated in under an hour.
- Update DATABASE_URL in your app. Deploy. Monitor.
- Keep the old dump file for 90 days in case you need a rollback.
ElephantSQL taught a generation of developers to trust managed Postgres. Its sunset is a reminder that any managed service can disappear, and the best defense is a boring, reproducible migration path plus a dump file you own. The providers in this guide all speak standard Postgres, so once you have the playbook, switching again in the future costs you an afternoon.
For small EU-based apps that just want a quiet, GDPR-native Postgres at a sane price, DanubeData Managed PostgreSQL starts at 19.99 EUR/month with 50 EUR signup credit. Germany-based, no CLOUD Act exposure, Postgres 15/16/17, read replicas, automated snapshots, 20 TB included traffic.
Questions about a specific migration? Contact us with your current provider, approximate database size, and extension list; we will give you a concrete plan and, if useful, help run the dump-and-restore for you.