Today we are launching database cloning for PostgreSQL, MySQL, and MariaDB instances. Create exact copies of your databases from any snapshot, making it easier than ever to build test environments and implement disaster recovery workflows.
The Challenge of Test Data
Every development team faces the same challenge: how do you test with realistic data without risking production?
The traditional approaches all have problems:
- Manual pg_dump/mysqldump: Slow, error-prone, requires downtime or impacts performance
- Synthetic data generators: Never quite match real-world complexity and edge cases
- Shared test databases: Tests interfere with each other, hard to reproduce bugs
- Production replicas: Expensive, complex to manage, security concerns
Database cloning solves this by letting you create instant copies from production snapshots. Your test database has real schema, real indexes, and realistic data volumes - all without touching production.
Use Cases
Development Testing
Give each developer their own database clone to test schema migrations, query optimizations, and new features safely. No more "it worked on my machine" because everyone tests against real data structures.
```bash
# Each developer gets their own clone
curl -X POST ".../clone" -d '{"name": "db-dev-sarah"}'
curl -X POST ".../clone" -d '{"name": "db-dev-mike"}'

# Developers run migrations safely
php artisan migrate --database=testing
```
CI/CD Pipelines
Spin up fresh database clones for each test run. Your integration tests run against real data structures, catching issues before they reach production.
```yaml
# GitHub Actions workflow
jobs:
  test:
    steps:
      - name: Create test database
        run: |
          CLONE_RESPONSE=$(curl -X POST \
            "https://api.danubedata.com/api/v1/snapshots/database/$SNAPSHOT_ID/clone" \
            -H "Authorization: Bearer $API_TOKEN" \
            -H "Content-Type: application/json" \
            -d '{"name": "test-db-${{ github.run_id }}", "source_type": "volume_snapshot"}')
          # Extract the clone ID for the cleanup step
          echo "CLONE_ID=$(echo "$CLONE_RESPONSE" | jq -r '.instance.id')" >> "$GITHUB_ENV"
      - name: Run tests
        run: npm test
      - name: Cleanup
        if: always()
        run: curl -X DELETE ".../instances/$CLONE_ID"
```
Staging Environments
Keep staging in sync with production by cloning from recent snapshots. Validate deployments against production-like data before going live.
```bash
# Weekly staging refresh (crontab entry; % is escaped because cron treats it
# as a newline, and double quotes let $(date) expand)
0 2 * * 0 curl -X POST ".../clone" -d "{\"name\": \"staging-db-$(date +\%Y\%m\%d)\"}"
```
Disaster Recovery Testing
Clone from offsite Velero backups to verify your disaster recovery process works. Run quarterly DR drills without affecting production.
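A DR drill uses the same clone endpoint with the offsite source type (the full request format is shown in the API section below; the snapshot ID and instance name here are placeholders):

```bash
# Quarterly DR drill: clone from the offsite Velero backup instead of local storage
curl -X POST \
  "https://api.danubedata.com/api/v1/snapshots/database/$SNAPSHOT_ID/clone" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "dr-drill-q1", "source_type": "velero_backup"}'
```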
Performance Testing
Clone production to benchmark query changes against real data volumes. Your EXPLAIN ANALYZE results will closely match production behavior.
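As a sketch, a benchmarking pass against a PostgreSQL clone might look like this (the hostname, credentials, and query are placeholders for your own):

```bash
# Compare query plans against production-sized data on the clone
# (connection string and query are placeholders)
psql "host=my-test-database.example.com dbname=app user=app" \
  -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE created_at > now() - interval '30 days'"
```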
Clone Sources
Local VolumeSnapshot (Recommended)
Uses TopoLVM CSI snapshots for near-instant cloning. The fastest option for day-to-day development workflows.
- Speed: Clones complete in seconds to minutes
- Storage: Copy-on-write means minimal additional disk usage
- Best for: Development, testing, CI/CD pipelines
Velero Backup (Offsite)
Restores from S3-stored offsite backups. Essential for disaster recovery when local storage is unavailable.
- Speed: Depends on backup size and network (minutes to hours)
- Reliability: Works even if primary datacenter is down
- Best for: Disaster recovery, compliance testing, cross-region restoration
What Gets Cloned
Everything. The cloned database inherits all configuration from the source:
| Property | Inherited |
|---|---|
| All data at snapshot time | Yes |
| Database provider (PostgreSQL/MySQL/MariaDB) | Yes |
| Version (e.g., PostgreSQL 16, MySQL 8.0) | Yes |
| Memory size | Yes |
| Storage size | Yes |
| Parameter group settings | Yes |
| SSL/TLS certificates | Yes |
| Datacenter location | Yes |
How to Clone a Database
Via Dashboard
- Navigate to your database instance
- Go to the Snapshots tab
- Find the snapshot you want to clone from
- Click the "Clone" button
- Enter a name for the new instance (must be DNS-compatible: lowercase, alphanumeric, hyphens)
- Select clone source: Local Snapshot or Offsite Backup
- Click "Create Clone"
The new instance will appear in your database list with "Cloning" status, then transition to "Running" when complete.
Via API
```http
POST /api/v1/snapshots/database/{snapshotId}/clone
Content-Type: application/json
Authorization: Bearer YOUR_API_TOKEN

{
  "name": "my-test-database",
  "source_type": "volume_snapshot"
}
```
Response (201 Created):
```json
{
  "message": "Database clone initiated",
  "instance": {
    "id": "db_abc123xyz",
    "name": "my-test-database",
    "status": "pending",
    "team_id": "team_456",
    "clone_status": "pending",
    "cloned_from_snapshot_id": "snap_789"
  }
}
```
Available `source_type` values:
- `volume_snapshot` - Clone from local TopoLVM snapshot (fast)
- `velero_backup` - Clone from offsite S3 backup (disaster recovery)
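Cloning is asynchronous: the instance starts in pending and transitions to running when the clone completes. A minimal polling sketch (the GET instance endpoint and response field path shown here are assumptions; check the API reference for the exact path):

```bash
# Wait for the clone to come up (GET endpoint and field path are assumptions)
CLONE_ID=$(echo "$CLONE_RESPONSE" | jq -r '.instance.id')
until [ "$(curl -s -H "Authorization: Bearer $API_TOKEN" \
    "https://api.danubedata.com/api/v1/instances/$CLONE_ID" \
    | jq -r '.instance.status')" = "running" ]; do
  sleep 10
done
```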
Clone Workflow Details
Local VolumeSnapshot Clone
1. Create new DatabaseInstance record (status: Pending)
2. Deploy StatefulSet via GitOps
3. Wait for empty PVC creation
4. Scale StatefulSet to 0 replicas
5. Delete empty PVC
6. Create new PVC from VolumeSnapshot dataSource (see the manifest sketch after this list)
7. Wait for PVC to bind
8. Scale StatefulSet back to 1
9. Wait for pod ready
10. Update status to Running
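Step 6 is where the data arrives: the replacement PVC names the snapshot as its dataSource, so TopoLVM provisions a copy-on-write volume from it. In plain Kubernetes terms it looks roughly like this (PVC name, snapshot name, storage class, and size are illustrative):

```bash
# Roughly what step 6 creates: a PVC seeded from the VolumeSnapshot
# (names, storage class, and size are illustrative)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-test-database-0
spec:
  storageClassName: topolvm-provisioner
  accessModes: ["ReadWriteOnce"]
  dataSource:
    name: snap-789
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 20Gi
EOF
```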
Velero Backup Clone
1. Create Velero Restore from offsite backup (see the sketch after this list)
2. Wait for restore completion
3. Create temporary VolumeSnapshot from restored PVC
4. Delete Velero-restored PVC (cleanup)
5. Continue with local clone flow using temp snapshot
6. Delete temporary VolumeSnapshot after completion
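The Restore in step 1 is a standard Velero custom resource. A minimal sketch, with the backup name and namespace as placeholders:

```bash
# Roughly what step 1 submits: a Velero Restore of the offsite backup
# (backup name and namespace are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-snap-789
  namespace: velero
spec:
  backupName: db-offsite-snap-789
  includedNamespaces: ["team-456"]
  restorePVs: true
EOF
```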
Handling Clone Failures
If a clone fails, DanubeData automatically:
- Marks the instance with Error status
- Records the error message for debugging
- Attempts cleanup of partial resources
- Removes GitOps manifests
- Soft-deletes the instance record
You can retry by creating a new clone from the same snapshot.
Billing
Cloned instances are billed at the same hourly rate as the source instance's plan. Since billing is hourly:
- Short-lived clones: Create for CI/CD, delete after tests - pay only for hours used
- Long-lived clones: Staging environments billed like any other instance
Example: A DB Small clone (€19.99/month = ~€0.028/hour) used for 2 hours of testing costs less than €0.06.
Security Considerations
When cloning databases with sensitive data:
- Access Controls: Clones are created in your team's namespace with the same access controls
- Data Masking: Consider using parameter groups or post-clone scripts to mask PII in test environments (a sketch follows below)
- Cleanup: Delete clones promptly after testing to minimize data exposure window
- Audit Trail: Clone creation is logged for compliance
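A post-clone masking script can be a handful of UPDATE statements run against the clone before developers get access; the table and column names here are hypothetical:

```bash
# Post-clone PII masking sketch (table and column names are hypothetical)
psql "host=my-test-database.example.com dbname=app user=admin" <<'SQL'
UPDATE users SET
  email     = 'user_' || id || '@example.test',
  full_name = 'Test User ' || id,
  phone     = NULL;
SQL
```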
Limitations
- Clones must be in the same datacenter as the source snapshot (for local clones)
- Instance names must be unique within your team
- Account limits apply to cloned instances
- Velero clones require the snapshot to have a completed offsite backup
Get Started
Database cloning is available now for all PostgreSQL, MySQL, and MariaDB instances. To try it:
- View your database instances
- Ensure you have at least one snapshot (or enable automated snapshots)
- Click "Clone" on any ready snapshot
Or use the API to integrate cloning into your CI/CD pipeline.
Need help setting up database cloning for your workflow? Contact our team for guidance.