AWS RDS is convenient but expensive. Many teams spend $200-500/month for modest database workloads that would cost $20-50/month on DanubeData. This comprehensive guide shows you how to migrate PostgreSQL, MySQL, or MariaDB from AWS RDS to DanubeData with minimal risk and zero downtime.
Why Migrate from AWS RDS?
Cost Savings
The most common reason to leave RDS is cost. Here is a real-world comparison:
| Spec | AWS RDS (db.t4g.medium) | DanubeData | Savings |
|---|---|---|---|
| 2 vCPU, 4GB RAM | $60.74/mo (on-demand) | $17.99/mo | 70% ($43/mo) |
| 50GB Storage | $11.50/mo (gp3) | Included | 100% ($11.50/mo) |
| Automated Backups | $5/mo (50GB) | Included | 100% ($5/mo) |
| Read Replica | +$60.74/mo | +$8/mo | 87% ($52.74/mo) |
| Total | $137.98/mo | $25.99/mo | $112/mo (81%) |
Annual savings: $1,344 for a single database with one replica.
Other Reasons
- Complexity: AWS VPCs, security groups, parameter groups, and IAM policies are complicated
- Vendor lock-in: Easier to migrate now than after years of AWS integration
- Unpredictable costs: RDS pricing includes hidden charges (I/O requests, cross-AZ traffic, snapshots)
- Performance: Burstable instances (t3/t4g) throttle under load
- Support: AWS's free Basic tier is minimal; paid support starts at $29/mo (Developer tier, billed at the greater of $29 or 3% of monthly usage)
Pre-Migration Checklist
Before starting migration, gather this information:
Current RDS Configuration
# Get RDS instance details
aws rds describe-db-instances \
--db-instance-identifier mydb \
--query 'DBInstances[0].[DBInstanceClass,Engine,EngineVersion,AllocatedStorage]'
# Output: ["db.t4g.medium", "postgres", "15.4", 50]
- Database engine (PostgreSQL, MySQL, MariaDB)
- Engine version (e.g., PostgreSQL 15.4)
- Database size (run SELECT pg_size_pretty(pg_database_size('dbname')); see the sketch after this list)
- Average connections (check CloudWatch metrics)
- Extensions/plugins in use (PostgreSQL extensions, MySQL plugins)
- Connection strings used by applications
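For PostgreSQL sources, most of these details can be gathered in one pass. A minimal sketch, assuming the example endpoint and database names used throughout this guide:
# Gather database size, current connections, and installed extensions
RDS_HOST=mydb.abc123.us-east-1.rds.amazonaws.com
psql -h "$RDS_HOST" -U postgres -d production -c \
  "SELECT pg_size_pretty(pg_database_size('production'));"
psql -h "$RDS_HOST" -U postgres -d production -c \
  "SELECT count(*) FROM pg_stat_activity WHERE datname = 'production';"
psql -h "$RDS_HOST" -U postgres -d production -c \
  "SELECT extname, extversion FROM pg_extension ORDER BY extname;"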
Security and Access
- List of allowed IP addresses (security group rules)
- SSL/TLS requirements
- Database users and their permissions
- Password authentication method
Application Dependencies
- Peak traffic times (for scheduling migration cutover)
- Acceptable downtime window (or zero-downtime requirement)
- Health check endpoints to verify application after migration
- Rollback plan in case of issues
Migration Methods
Choose the method based on your downtime tolerance and database size:
| Method | Downtime | Complexity | Best For |
|---|---|---|---|
| pg_dump / mysqldump | 30min - 2 hours | Low | Databases < 10GB, maintenance windows available |
| Streaming Replication | < 30 seconds | Medium | Self-managed PostgreSQL sources (RDS does not allow external physical replicas) |
| Logical Replication | < 30 seconds | High | PostgreSQL 10+, zero downtime from RDS, selective table migration |
| MySQL Replication | < 1 minute | Medium | MySQL/MariaDB, zero-downtime required |
Method 1: pg_dump / mysqldump (Simple)
Best for small databases or when you have a maintenance window.
PostgreSQL Migration
# Step 1: Create DanubeData PostgreSQL instance
# From DanubeData dashboard:
# - Create PostgreSQL 15.4 instance
# - Match or exceed RDS storage size
# - Get connection details
# Step 2: Dump RDS database
pg_dump -h mydb.abc123.us-east-1.rds.amazonaws.com \
-U postgres \
-d production \
-F c \
-f production_dump.backup
# -F c = custom format (compressed, faster restore)
# Add --no-owner --no-acl if you want DanubeData to manage permissions
# Step 3: Restore to DanubeData
pg_restore -h your-db.danubedata.ro \
-U postgres \
-d production \
-j 4 \
production_dump.backup
# -j 4 = 4 parallel jobs (faster restore)
# Step 4: Verify data
psql -h your-db.danubedata.ro -U postgres -d production -c "SELECT COUNT(*) FROM users;"
psql -h mydb.abc123.us-east-1.rds.amazonaws.com -U postgres -d production -c "SELECT COUNT(*) FROM users;"
# Step 5: Update application connection string
# Old: postgresql://user:pass@mydb.abc123.us-east-1.rds.amazonaws.com:5432/production
# New: postgresql://user:pass@your-db.danubedata.ro:5432/production
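Counting one table is a useful smoke test, but it is easy to miss a table that failed to restore. A sketch that compares every table in the public schema, using the same placeholder hostnames as above:
# Compare exact row counts for every public table on both servers
for t in $(psql -h mydb.abc123.us-east-1.rds.amazonaws.com -U postgres -d production -At \
    -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"); do
  src=$(psql -h mydb.abc123.us-east-1.rds.amazonaws.com -U postgres -d production -At \
    -c "SELECT count(*) FROM public.\"$t\"")
  dst=$(psql -h your-db.danubedata.ro -U postgres -d production -At \
    -c "SELECT count(*) FROM public.\"$t\"")
  echo "$t: rds=$src danubedata=$dst"
done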
MySQL/MariaDB Migration
# Step 1: Dump RDS database
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com \
-u admin \
-p \
--single-transaction \
--routines \
--triggers \
production > production_dump.sql
# --single-transaction = consistent backup without locking tables
# Step 2: Restore to DanubeData
mysql -h your-db.danubedata.ro \
-u root \
-p \
production < production_dump.sql
# Step 3: Verify
mysql -h your-db.danubedata.ro -u root -p -e "SELECT COUNT(*) FROM production.users;"
# Step 4: Update application connection string
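Before rolling the new connection string out, a quick smoke test from the application host catches firewall and credential problems early. A sketch with a hypothetical application user:
# Confirm the app host can reach, authenticate against, and query the new server
mysql -h your-db.danubedata.ro -u app_user -p -e "SELECT VERSION(), NOW();"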
Optimization Tips
# Compress dump for faster transfer
pg_dump ... | gzip > production_dump.sql.gz
gunzip -c production_dump.sql.gz | psql ...
# Use parallel dump/restore (PostgreSQL directory format)
pg_dump -F d -j 4 -f dump_directory/ production  # 4 parallel dump jobs
pg_restore -j 4 -d production dump_directory/    # 4 parallel restore jobs
# Exclude large tables you do not need
pg_dump --exclude-table=audit_logs --exclude-table=analytics_events ...
Method 2: Streaming Replication (Zero Downtime)
For PostgreSQL databases that require zero downtime. One important caveat: RDS does not allow physical streaming replication to servers outside AWS, so this method applies when your source is self-managed PostgreSQL (for example, on EC2). If your source is RDS itself, use logical replication (Method 3) for a zero-downtime cutover. The hostnames below are placeholders for your source server.
Step-by-Step Process
# Step 1: Enable replication on the source
# In postgresql.conf (or the relevant parameter group):
# - wal_level = replica
# - max_wal_senders = 5
# - max_replication_slots = 5
# Restart the source instance
# Step 2: Create a replication user on the source
psql -h mydb.abc123.us-east-1.rds.amazonaws.com -U postgres -c \
"CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secure-password';"
# Step 3: Create base backup
pg_basebackup -h mydb.abc123.us-east-1.rds.amazonaws.com \
-U replicator \
-D /var/lib/postgresql/15/main \
-P \
-Xs \
-R
# -P = progress
# -Xs = stream WAL during backup
# -R = create standby.signal and recovery config
# Step 4: Configure DanubeData as replica
# pg_basebackup -R already writes primary_conninfo to postgresql.auto.conf; verify it reads:
primary_conninfo = 'host=mydb.abc123.us-east-1.rds.amazonaws.com port=5432 user=replicator password=secure-password'
# Start PostgreSQL
sudo systemctl start postgresql
# Step 5: Monitor replication lag
psql -h your-db.danubedata.ro -U postgres -c \
"SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"
# Wait until lag is < 1MB (a few seconds)
# Step 6: Promote DanubeData to primary
psql -h your-db.danubedata.ro -U postgres -c "SELECT pg_promote();"
# Step 7: Switch application immediately
# Update connection string to DanubeData
# Downtime: < 30 seconds
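Raw LSNs are awkward to compare by eye; the lag check in step 5 can be expressed in bytes instead. A sketch that polls the source's current write position against the replica's replay position (same placeholder hosts as above):
# Poll replication lag in bytes until it falls below 1MB
while true; do
  PRIMARY_LSN=$(psql -h mydb.abc123.us-east-1.rds.amazonaws.com -U postgres -At \
    -c "SELECT pg_current_wal_lsn();")
  LAG=$(psql -h your-db.danubedata.ro -U postgres -At \
    -c "SELECT pg_wal_lsn_diff('$PRIMARY_LSN', pg_last_wal_replay_lsn());")
  echo "lag: ${LAG} bytes"
  [ "${LAG:-9999999}" -lt 1048576 ] && break
  sleep 5
done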
Method 3: Logical Replication (PostgreSQL 10+)
Allows selective table replication and version upgrades.
Configuration
-- On RDS: Enable logical replication
-- Modify parameter group:
-- rds.logical_replication = 1
-- Reboot instance
-- On RDS: Create publication
CREATE PUBLICATION migration_pub FOR ALL TABLES;
-- On DanubeData: load the schema first (logical replication copies rows, not DDL):
--   pg_dump --schema-only ... | psql ...
-- Then create the subscription
CREATE SUBSCRIPTION migration_sub
CONNECTION 'host=mydb.abc123.us-east-1.rds.amazonaws.com dbname=production user=postgres password=xxx'
PUBLICATION migration_pub;
-- Monitor replication
SELECT * FROM pg_stat_subscription;
-- When lag is minimal, switch application to DanubeData
-- Drop subscription to stop replication
DROP SUBSCRIPTION migration_sub;
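One known gap: logical replication does not carry sequence values, so serial and identity columns on DanubeData would hand out stale IDs after cutover. A sketch of the fix, run after writes stop on RDS and before the application switches (users and orders are the example tables used in this guide):
# Reset each serial sequence past its table's current maximum
psql -h your-db.danubedata.ro -U postgres -d production -c \
  "SELECT setval(pg_get_serial_sequence('users', 'id'), (SELECT COALESCE(MAX(id), 1) FROM users));"
psql -h your-db.danubedata.ro -U postgres -d production -c \
  "SELECT setval(pg_get_serial_sequence('orders', 'id'), (SELECT COALESCE(MAX(id), 1) FROM orders));"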
Selective Table Migration
-- Only replicate specific tables
CREATE PUBLICATION migration_pub FOR TABLE users, orders, products;
-- PostgreSQL publications have no EXCEPT clause; to leave tables out,
-- list only the ones you want (or use FOR TABLES IN SCHEMA on PostgreSQL 15+)
Method 4: MySQL Replication
-- Step 1: Enable binary logging on RDS
-- (binary logging is on whenever automated backups are enabled, i.e. backup retention > 0)
-- Modify parameter group:
-- binlog_format = ROW
-- Reboot instance
-- Step 2: Create replication user on RDS
CREATE USER 'replicator'@'%' IDENTIFIED BY 'secure-password';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
-- Step 3: Get binary log position
SHOW MASTER STATUS;
-- Note: File and Position values
# Step 4: Dump database with master info
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com \
-u admin -p \
--single-transaction \
--master-data=2 \
production > dump.sql
# Step 5: Restore to DanubeData
mysql -h your-db.danubedata.ro -u root -p production < dump.sql
-- Step 6: Configure replication on DanubeData
CHANGE MASTER TO
MASTER_HOST='mydb.abc123.us-east-1.rds.amazonaws.com',
MASTER_USER='replicator',
MASTER_PASSWORD='secure-password',
MASTER_LOG_FILE='mysql-bin.000123',
MASTER_LOG_POS=456789;
START SLAVE;
-- Step 7: Monitor replication
SHOW SLAVE STATUS\G
-- When Seconds_Behind_Master is 0, switch application
STOP SLAVE;
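The wait in step 7 can be scripted rather than eyeballed. A sketch (credentials are best supplied via ~/.my.cnf rather than on the command line):
# Poll until the replica catches up, then STOP SLAVE and cut over
while true; do
  LAG=$(mysql -h your-db.danubedata.ro -u root -e "SHOW SLAVE STATUS\G" \
    | awk '/Seconds_Behind_Master:/ {print $2}')
  echo "seconds behind master: ${LAG:-unknown}"
  [ "$LAG" = "0" ] && break
  sleep 5
done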
Post-Migration Validation
Data Integrity Checks
-- PostgreSQL: Compare row counts
-- (n_live_tup is an estimate; run ANALYZE first, or use COUNT(*) for exact figures)
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY relname;
-- Run on both RDS and DanubeData and compare results
-- MySQL: Compare table checksums (run on both servers and compare)
CHECKSUM TABLE production.users, production.orders, production.products;
-- Verify specific critical data
SELECT COUNT(*), MAX(created_at) FROM orders;
SELECT COUNT(*), SUM(balance) FROM accounts;
Performance Testing
-- Run read-only queries against DanubeData and compare response times with RDS
-- PostgreSQL example:
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
-- Check that indexes were recreated
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'users';
Application Health Checks
- Verify the application can connect to the new database (see the smoke test after this list)
- Test authentication for all database users
- Run integration tests against DanubeData
- Check application logs for database errors
- Monitor query performance in production traffic
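The first two checks reduce to a one-liner per database user. A minimal sketch, with hypothetical credentials and the sslmode your application actually uses:
# Smoke-test connectivity, authentication, and TLS from the application host
psql "postgresql://app_user:app_pass@your-db.danubedata.ro:5432/production?sslmode=require" \
  -c "SELECT current_user, version();"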
Common Migration Issues
| Issue | Cause | Solution |
|---|---|---|
| Extension not found | PostgreSQL extension not installed | Install via CREATE EXTENSION or contact support |
| Slow restore | Large database, single-threaded | Use parallel restore (-j 4) or replication |
| Permission denied | User lacks necessary privileges | Grant permissions: GRANT ALL ON DATABASE dbname TO username |
| Replication lag | High write traffic during migration | Reduce traffic or use larger instance |
| SSL required | Application requires SSL connection | Add sslmode=require to connection string |
| Timezone issues | Different default timezone | SET timezone = 'UTC' or match RDS timezone |
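The first and third rows come up often enough to spell out. A sketch with an assumed extension name and application user:
# Install a missing PostgreSQL extension on the new server
psql -h your-db.danubedata.ro -U postgres -d production -c \
  "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
# Grant an application user full access to the migrated database
psql -h your-db.danubedata.ro -U postgres -d production -c \
  "GRANT ALL ON DATABASE production TO app_user;"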
Rollback Plan
Always have a rollback strategy in case of issues:
Immediate Rollback (During Cutover)
# If issues detected within minutes of cutover:
# 1. Switch connection string back to RDS
# 2. Restart application
# 3. Verify RDS is still receiving writes (if using replication)
Parallel Running (Recommended)
# Keep RDS running for 7 days after migration
# Monitor both databases for consistency
# If issues arise, switch back to RDS immediately
# After 7 days of stable DanubeData operation:
# 1. Stop RDS instance (do not delete yet)
# 2. Wait 7 more days
# 3. Delete RDS instance to stop charges
DanubeData Features vs RDS
| Feature | AWS RDS | DanubeData |
|---|---|---|
| Automated Backups | ✓ (extra cost) | ✓ Included |
| Point-in-Time Recovery | ✓ (extra cost) | ✓ Included |
| Read Replicas | ✓ (2x cost) | ✓ (+$8/mo) |
| Encryption at Rest | ✓ | ✓ |
| Encryption in Transit | ✓ | ✓ |
| Monitoring Dashboard | CloudWatch (complex) | Built-in (simple) |
| Firewall Rules | Security Groups (complex) | Simple IP whitelist |
| Connection Pooling | RDS Proxy (+$15/mo) | PgBouncer included |
| Setup Complexity | High (VPC, subnets, IAM) | Low (1-click deploy) |
| Support | Paid plans from $29/mo | Email support included |
Real Migration Example
A SaaS company migrated their 25GB PostgreSQL database from RDS to DanubeData:
- RDS cost: $147/month (db.t3.medium + storage + backups + replica)
- DanubeData cost: $35.99/month (4GB RAM, 100GB storage, replica included)
- Annual savings: $1,332
- Migration time: 45 minutes using pg_dump/restore
- Downtime: 3 minutes (DNS switch)
- Issues encountered: None
Get Started with DanubeData
Ready to cut your database costs by 70-80%? Here is how to start:
- Create your free DanubeData account
- Deploy a PostgreSQL, MySQL, or MariaDB instance matching your RDS specs
- Follow this migration guide to transfer your data
- Update your application connection string
- Monitor for 7 days, then terminate RDS to stop charges
DanubeData Migration Support:
- Free migration assistance for all customers
- Pre-migration planning and sizing recommendations
- Real-time support during cutover window
- Post-migration monitoring and optimization
- 30-day money-back guarantee if you are not satisfied
Start saving today: Create account or contact sales for enterprise migrations.