DigitalOcean has been a reliable cloud provider for years. But as your infrastructure grows, you might find yourself paying more than necessary—especially for managed databases. This guide shows you how to migrate from DigitalOcean to DanubeData and potentially save around 30% on your monthly bill.
Why Teams Are Leaving DigitalOcean
We've helped dozens of teams migrate from DigitalOcean. Here are the most common reasons:
1. Pricing Has Increased
DigitalOcean used to be the budget-friendly option. Not anymore. Their managed database pricing now rivals AWS in many cases, without the feature parity.
2. Data Residency Concerns
For European companies, GDPR compliance requires careful consideration of where data is processed. DigitalOcean's Frankfurt datacenter is an option, but their parent company is US-based, which creates CLOUD Act concerns for some organizations.
3. Limited Feature Set
DigitalOcean's managed databases lack features like read replica auto-scaling, detailed query analytics, and comprehensive backup options that other providers offer.
4. Support Limitations
Business-critical database issues require premium support plans that add significant cost.
Cost Comparison: DigitalOcean vs DanubeData
Let's compare actual pricing for equivalent database configurations:
PostgreSQL Pricing
| Specification | DigitalOcean | DanubeData | Savings |
|---|---|---|---|
| 1 vCPU / 1 GB / 10 GB | $15/mo | €19.99/mo (~$21) | Similar* |
| 1 vCPU / 2 GB / 25 GB | $30/mo | €19.99/mo (~$21) | 30% |
| 2 vCPU / 4 GB / 38 GB | $60/mo | €39.99/mo (~$43) | 28% |
| 4 vCPU / 8 GB / 115 GB | $120/mo | €79.99/mo (~$86) | 28% |
| + Standby node (HA) | 2x price | Read replicas available | - |
* DanubeData Small plan includes more storage and automated backups
MySQL Pricing
| Specification | DigitalOcean | DanubeData | Savings |
|---|---|---|---|
| 1 vCPU / 2 GB / 25 GB | $30/mo | €19.99/mo (~$21) | 30% |
| 2 vCPU / 4 GB / 38 GB | $60/mo | €39.99/mo (~$43) | 28% |
| 4 vCPU / 8 GB / 115 GB | $120/mo | €79.99/mo (~$86) | 28% |
Redis Pricing
| Specification | DigitalOcean | DanubeData | Savings |
|---|---|---|---|
| 1 GB Redis | $15/mo | €9.99/mo (~$11) | 27% |
| 2 GB Redis | $30/mo | €19.99/mo (~$21) | 30% |
| 4 GB Redis | $60/mo | €39.99/mo (~$43) | 28% |
Feature Comparison
| Feature | DigitalOcean | DanubeData |
|---|---|---|
| Automated daily backups | ✅ 7 days | ✅ Configurable retention |
| Point-in-time recovery | ✅ Limited | ✅ Full PITR |
| SSL/TLS encryption | ✅ | ✅ Included |
| Read replicas | ✅ Manual | ✅ Easy setup |
| Automatic failover | ✅ With standby | ✅ |
| Firewall rules | ✅ | ✅ |
| Monitoring/Metrics | ✅ Basic | ✅ Detailed (Prometheus) |
| Connection pooling | ✅ Built-in | ✅ |
| Data center location | Multiple (US-based company) | Germany (EU company) |
| Priority support | Extra cost | ✅ Included |
| Valkey/Dragonfly options | ❌ | ✅ |
Migration Overview
The migration process varies by database type. We'll cover:
- PostgreSQL migration (pg_dump/pg_restore or logical replication)
- MySQL migration (mysqldump or replication)
- Redis migration (RDB export or MIGRATE command)
Choose your approach based on:
- Database size: Smaller databases (under ~50 GB) can use dump/restore
- Downtime tolerance: Zero-downtime requires replication
- Complexity preference: Dump/restore is simpler but requires maintenance window
PostgreSQL Migration
Option 1: pg_dump/pg_restore (Simple, With Downtime)
Best for databases under 50GB or when a maintenance window is acceptable.
Step 1: Create DanubeData PostgreSQL Instance
- Log into DanubeData dashboard
- Navigate to Databases → Create Database
- Select PostgreSQL and choose your plan
- Note the connection details
Step 2: Prepare Source Database
```bash
# Get your DigitalOcean connection details from their dashboard
# Format: postgresql://user:password@host:port/database?sslmode=require

# Test connection
psql "postgresql://doadmin:password@db-postgresql-fra1-12345-do-user-12345678-0.b.db.ondigitalocean.com:25060/defaultdb?sslmode=require"
```
Step 3: Create Backup
```bash
# Full database dump with custom format (recommended)
pg_dump \
  "postgresql://doadmin:YOUR_PASSWORD@your-do-host:25060/your_database?sslmode=require" \
  -Fc \
  -v \
  -f backup.dump

# Check dump size
ls -lh backup.dump
```
Step 4: Configure Target Firewall
In DanubeData dashboard, add your current IP to the database firewall rules.
Step 5: Restore to DanubeData
```bash
# Restore to DanubeData
pg_restore \
  -h your-db.danubedata.com \
  -U your_user \
  -d your_database \
  -v \
  --no-owner \
  --no-acl \
  backup.dump
# Enter the password when prompted
```
Step 6: Verify Data
```bash
# Connect to DanubeData
psql -h your-db.danubedata.com -U your_user -d your_database
```

```sql
-- Check tables
\dt

-- Verify row counts
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
```
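To make the verification concrete, you can capture the counts from that query on both source and target and diff them. A minimal sketch (table names and counts are illustrative; `n_live_tup` is an estimate, so confirm any mismatch with an exact `SELECT count(*)`):

```javascript
// Compare per-table row counts captured from source and target.
// Returns a list of human-readable mismatches (empty means counts agree).
function diffRowCounts(source, target) {
  const mismatches = [];
  for (const [table, rows] of Object.entries(source)) {
    if (!(table in target)) {
      mismatches.push(`${table}: missing on target`);
    } else if (target[table] !== rows) {
      mismatches.push(`${table}: source=${rows} target=${target[table]}`);
    }
  }
  return mismatches;
}

// Illustrative counts, as if read from pg_stat_user_tables on each side
const sourceCounts = { users: 1500, orders: 4200 };
const targetCounts = { users: 1500, orders: 4100 };
console.log(diffRowCounts(sourceCounts, targetCounts));
// → [ 'orders: source=4200 target=4100' ]
```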
Option 2: Logical Replication (Zero Downtime)
For larger databases or when you can't afford downtime.
Step 1: Enable Logical Replication on Source
DigitalOcean managed databases don't support logical replication out of the box. Contact their support to enable it, or use the dump/restore method.
If you have a self-managed PostgreSQL on a DigitalOcean Droplet:
```ini
# postgresql.conf
wal_level = logical
max_replication_slots = 4
max_wal_senders = 4
```

```bash
# Restart PostgreSQL
sudo systemctl restart postgresql
```
Step 2: Create Publication on Source
```sql
-- On the source database
CREATE PUBLICATION migration_pub FOR ALL TABLES;
```
Step 3: Create Schema on Target
```bash
# Dump schema only from source
pg_dump \
  "postgresql://user:pass@source:25060/db?sslmode=require" \
  --schema-only \
  -f schema.sql

# Apply to target
psql -h your-db.danubedata.com -U your_user -d your_database -f schema.sql
```
Step 4: Create Subscription on Target
```sql
-- On the DanubeData database
CREATE SUBSCRIPTION migration_sub
  CONNECTION 'host=source-host port=25060 dbname=source_db user=repl_user password=repl_pass sslmode=require'
  PUBLICATION migration_pub;
```
Step 5: Monitor Replication
```sql
-- Check subscription status
SELECT * FROM pg_stat_subscription;

-- Check replication lag
SELECT
  slot_name,
  confirmed_flush_lsn,
  pg_current_wal_lsn(),
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS lag
FROM pg_replication_slots;
```
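`pg_wal_lsn_diff` does the byte arithmetic server-side. If you are scripting the lag check client-side instead, an LSN such as `0/16B3748` is just two hex words (high word × 2^32 + low word). A small sketch:

```javascript
// Convert a PostgreSQL LSN ('high/low' in hex) to an absolute byte position.
function lsnToBytes(lsn) {
  const [hi, lo] = lsn.split('/').map((part) => BigInt(`0x${part}`));
  return hi * 4294967296n + lo; // high word * 2^32 + low word
}

// Replication lag in bytes between the server's current WAL position and
// the subscriber's confirmed flush position.
function lagBytes(currentLsn, confirmedLsn) {
  return lsnToBytes(currentLsn) - lsnToBytes(confirmedLsn);
}

console.log(lagBytes('0/16B3748', '0/16B3700').toString()); // → 72
```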
Step 6: Cutover
```sql
-- When lag reaches zero:
-- 1. Stop writes from the source application
-- 2. Verify the final sync
-- 3. Update application connection strings
-- 4. Start the application against the new database
-- 5. Drop the subscription once everything is confirmed
DROP SUBSCRIPTION migration_sub;
```
MySQL Migration
Option 1: mysqldump (Simple, With Downtime)
Step 1: Create DanubeData MySQL Instance
- Log into DanubeData dashboard
- Navigate to Databases → Create Database
- Select MySQL and choose your plan
- Note the connection details
Step 2: Export from DigitalOcean
```bash
# Full database dump
mysqldump \
  -h your-do-host.db.ondigitalocean.com \
  -P 25060 \
  -u doadmin \
  -p \
  --ssl-mode=REQUIRED \
  --single-transaction \
  --routines \
  --triggers \
  --set-gtid-purged=OFF \
  your_database > backup.sql

# Compress for faster transfer
gzip backup.sql
```
Step 3: Import to DanubeData
```bash
# Decompress
gunzip backup.sql.gz

# Import to DanubeData
mysql \
  -h your-db.danubedata.com \
  -u your_user \
  -p \
  --ssl-mode=REQUIRED \
  your_database < backup.sql
```
Step 4: Verify
```bash
# Connect to DanubeData
mysql -h your-db.danubedata.com -u your_user -p your_database
```

```sql
-- Check tables
SHOW TABLES;

-- Verify row counts (TABLE_ROWS is an estimate for InnoDB tables;
-- use SELECT COUNT(*) on critical tables for exact numbers)
SELECT
  TABLE_NAME,
  TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
ORDER BY TABLE_ROWS DESC;
```
Option 2: MySQL Replication (Zero Downtime)
DigitalOcean managed MySQL doesn't support external replication. For zero-downtime migration from managed MySQL, consider:
- Using a tool like pt-online-schema-change for large tables
- Application-level dual-write during migration
- Scheduling a brief maintenance window during low traffic
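The dual-write option above can be sketched as a thin wrapper around two database clients. This is an illustrative pattern, not a library API: `oldDb` and `newDb` stand in for real clients (for example, two `mysql2` pools), and the old database stays authoritative until cutover:

```javascript
// Dual-write wrapper: every write goes to both databases, reads stay on the
// old one until cutover. A failed mirror write is logged rather than fatal,
// so the new database must be reconciled (e.g. re-dumped) if mirrors fail.
class DualWriter {
  constructor(oldDb, newDb) {
    this.oldDb = oldDb; // source of truth during migration
    this.newDb = newDb; // being kept in sync
  }

  async write(sql, params) {
    await this.oldDb.query(sql, params);   // must succeed
    try {
      await this.newDb.query(sql, params); // best-effort mirror
    } catch (err) {
      console.error('dual-write to new DB failed:', err.message);
    }
  }

  read(sql, params) {
    return this.oldDb.query(sql, params);  // reads from old until cutover
  }
}
```

Because mirror failures are only logged, run a final comparison (or a fresh dump of recently changed tables) before switching reads to the new database.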
Redis Migration
Option 1: RDB Export/Import
DigitalOcean doesn't expose RDB files directly. Use the MIGRATE command or dump/restore approach:
Step 1: Create DanubeData Redis Instance
- Log into DanubeData dashboard
- Navigate to Cache → Create Cache Instance
- Select Redis and choose your plan
Step 2: Export Keys Using redis-dump
```bash
# Install redis-dump (requires Ruby)
gem install redis-dump

# Export all keys
redis-dump -u redis://default:PASSWORD@your-do-redis:25061 --tls > redis-backup.json
```
Step 3: Import to DanubeData
```bash
# Import using redis-load
cat redis-backup.json | redis-load -u redis://default:PASSWORD@your-cache.danubedata.com:6379
```
Option 2: MIGRATE Command (For Small Datasets)
```bash
# For each key or pattern
redis-cli -h source-host -p 25061 --tls -a PASSWORD \
  MIGRATE target-host 6379 "" 0 5000 COPY REPLACE AUTH PASSWORD KEYS key1 key2 key3
```
Option 3: Application-Level Migration
For cache data, the simplest approach is often:
- Deploy application pointing to new Redis
- Let cache warm up naturally
- Old cache data expires on its own
This works well for session data and computed caches that will repopulate automatically.
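If a fully cold cache is too abrupt, a read-through fallback is a small addition on top of this approach: check the new cache first, fall back to the old one, and backfill hits. A minimal sketch, with plain `Map`s standing in for real Redis clients (real clients return `null` on a miss, `Map` returns `undefined`; the check below handles both):

```javascript
// Read-through fallback: serve from the new cache, fall back to the old one,
// and copy hits forward so the new cache warms up as traffic flows.
async function getWithFallback(newCache, oldCache, key) {
  let value = await newCache.get(key);
  if (value !== undefined && value !== null) return value;

  value = await oldCache.get(key);
  if (value !== undefined && value !== null) {
    await newCache.set(key, value); // backfill: next read hits the new cache
  }
  return value;
}

// Demo with Maps standing in for Redis clients
const oldCache = new Map([['session:42', 'alice']]);
const newCache = new Map();
getWithFallback(newCache, oldCache, 'session:42')
  .then((v) => console.log(v, '| backfilled:', newCache.get('session:42')));
// → alice | backfilled: alice
```

Once hit rates on the old cache drop near zero, remove the fallback and decommission the old instance.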
Application Connection Update
Update Environment Variables
```bash
# Before (DigitalOcean)
DATABASE_URL=postgresql://doadmin:PASSWORD@db-postgresql-fra1-12345.db.ondigitalocean.com:25060/defaultdb?sslmode=require
REDIS_URL=rediss://default:PASSWORD@db-redis-fra1-12345.db.ondigitalocean.com:25061

# After (DanubeData)
DATABASE_URL=postgresql://your_user:PASSWORD@your-db.danubedata.com:5432/your_database?sslmode=require
REDIS_URL=redis://default:PASSWORD@your-cache.danubedata.com:6379
```
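Before deploying, it's worth checking that the new connection string parses the way you expect, since a wrong port or a missing `sslmode` is an easy cutover mistake. Node's built-in WHATWG `URL` class can do this; the host and database below are this guide's placeholders:

```javascript
// Parse the new DATABASE_URL and print the pieces the driver will actually use.
const dbUrl = new URL(
  'postgresql://your_user:PASSWORD@your-db.danubedata.com:5432/your_database?sslmode=require'
);

console.log(dbUrl.hostname);                    // → your-db.danubedata.com
console.log(dbUrl.port);                        // → 5432
console.log(dbUrl.pathname.slice(1));           // → your_database
console.log(dbUrl.searchParams.get('sslmode')); // → require
```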
For Laravel Applications
```ini
# .env
DB_CONNECTION=pgsql
DB_HOST=your-db.danubedata.com
DB_PORT=5432
DB_DATABASE=your_database
DB_USERNAME=your_user
DB_PASSWORD=your_password

REDIS_HOST=your-cache.danubedata.com
REDIS_PASSWORD=your_redis_password
REDIS_PORT=6379
```
For Node.js Applications
```javascript
// database.js
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: true
  }
});

module.exports = pool;
```
DNS Cutover Strategy
For a migration that doesn't require changing application code, point a CNAME you control at the database host. Two caveats: create matching credentials on the new database first, and note that `sslmode=verify-full` would reject the swapped host (the server certificate is issued for the provider's hostname, not yours); the `sslmode=require` used in this guide does not verify hostnames.
Step 1: Lower TTL (1 Week Before)
```
; Reduce the TTL to 60 seconds
db.yourdomain.com. 60 IN CNAME db-postgresql-fra1-12345.db.ondigitalocean.com.
```
Step 2: Update DNS Record
```
; Point to DanubeData
db.yourdomain.com. 60 IN CNAME your-db.danubedata.com.
```
Step 3: Verify Propagation
```bash
dig db.yourdomain.com +short
```
Step 4: Restore TTL
```
; After confirming everything works
db.yourdomain.com. 3600 IN CNAME your-db.danubedata.com.
```
Post-Migration Checklist
| Task | Done |
|---|---|
| Verify all data migrated correctly | ☐ |
| Application connects successfully | ☐ |
| All queries execute without errors | ☐ |
| Performance is acceptable | ☐ |
| Automated backups are configured | ☐ |
| Monitoring/alerting is set up | ☐ |
| Firewall rules are properly configured | ☐ |
| SSL/TLS connections verified | ☐ |
| Old DigitalOcean resources terminated | ☐ |
Rollback Plan
Always have a rollback plan. Keep your DigitalOcean database running for at least 48 hours after migration:
If Issues Occur
- Update application to point back to DigitalOcean
- If DNS-based: revert DNS records
- Investigate the issue
- Plan for re-migration
DigitalOcean Cleanup
After confirming successful migration (recommended: wait 1 week):
- Take a final backup of DigitalOcean database (for archive)
- Destroy the managed database cluster
- Remove any related firewall rules
Don't Rush Cleanup:
Keep the old database running until you're 100% confident in the migration. The cost of an extra week of database hosting is negligible compared to the stress of needing to roll back without a working source.
Troubleshooting
Connection Refused
```bash
# Check firewall rules in the DanubeData dashboard
# Add your application server's IP address

# Verify connectivity
telnet your-db.danubedata.com 5432
```
SSL Certificate Issues
```bash
# Force SSL in the connection string
?sslmode=require
```

```javascript
// For Node.js with self-signed certs (if needed)
ssl: { rejectUnauthorized: false }
```
Permission Denied
```sql
-- Check user privileges (psql)
\du

-- Grant necessary permissions
GRANT ALL PRIVILEGES ON DATABASE your_database TO your_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO your_user;
```
Encoding Issues
```sql
-- Check database encoding
SELECT pg_encoding_to_char(encoding) FROM pg_database WHERE datname = 'your_database';
```

Ensure UTF-8 is used consistently on both source and target.
Summary: Migration Timeline
| Phase | Duration | Tasks |
|---|---|---|
| Preparation | 1-2 days | Create DanubeData account, provision resources, configure firewall |
| Testing | 1-3 days | Test migration in staging, verify application compatibility |
| Migration | 1-4 hours | Execute migration, verify data, update connections |
| Monitoring | 1 week | Watch for issues, keep old database as fallback |
| Cleanup | 1 day | Archive old backups, terminate DigitalOcean resources |
Ready to Migrate?
Migrating from DigitalOcean to DanubeData typically takes less than a day of actual work, with the potential to save roughly 30% on your monthly database costs.
👉 Create your free DanubeData account
Then provision your databases and follow this guide. Our support team is available if you run into any issues during migration.
What you'll get:
- ✅ Same reliability with better pricing
- ✅ European data residency (Germany)
- ✅ Priority support included at no extra cost
- ✅ More cache options (Redis, Valkey, Dragonfly)
- ✅ Detailed monitoring and metrics
Need help with your migration? Contact our team—we've migrated dozens of databases from DigitalOcean and can guide you through the process.