Containerized applications store data in volumes that need backing up just like traditional servers. Whether you're running Docker Compose on a single server or Kubernetes clusters in production, this guide shows you how to back up your container data to S3 storage.
We'll cover Docker volume backups with scripts and Restic, Kubernetes backups with Velero, and database container backups with proper consistency.
Why Back Up Container Data to S3?
| Challenge | Without Backup | With S3 Backup |
|---|---|---|
| Server Failure | Data lost | Restore to new server |
| Accidental docker system prune | Volumes deleted | Restore from backup |
| Bad Deployment | Corrupted data | Roll back to snapshot |
| Ransomware | Encrypted volumes | Restore clean version |
| Migrate to New Cluster | Manual, error-prone | Restore to new cluster |
| DR/Multi-Region | Not possible | Restore in any region |
Part 1: Docker Volume Backups
Understanding Docker Volumes
Docker data persists in volumes, which are directories on the host managed by Docker:
# List all volumes
docker volume ls
# Inspect a volume
docker volume inspect my-app-data
# Volumes are stored at:
# Linux: /var/lib/docker/volumes/
# Mac/Windows: Inside Docker Desktop VM
Method 1: Simple Script Backup
Create a backup script that exports volumes to tarballs and uploads to S3:
#!/bin/bash
# /usr/local/bin/docker-backup.sh
# Docker Volume Backup to S3
set -e
# Configuration
BACKUP_DIR="/tmp/docker-backups"
BUCKET="your-bucket"
REMOTE="danubedata" # rclone remote name
DATE=$(date +%Y-%m-%d_%H-%M)
HOSTNAME=$(hostname)
# Ensure rclone is configured
if ! rclone listremotes | grep -q "$REMOTE:"; then
echo "Error: rclone remote '$REMOTE' not configured"
exit 1
fi
# Create backup directory
mkdir -p "$BACKUP_DIR"
echo "=== Docker Volume Backup Started: $(date) ==="
# Get list of all volumes
VOLUMES=$(docker volume ls -q)
for VOLUME in $VOLUMES; do
echo "Backing up volume: $VOLUME"
# Skip certain volumes
if [[ "$VOLUME" == *"_cache"* ]] || [[ "$VOLUME" == *"_tmp"* ]]; then
echo " Skipping cache/tmp volume"
continue
fi
# Create backup using temporary container
docker run --rm \
  -v "$VOLUME":/source:ro \
  -v "$BACKUP_DIR":/backup \
  alpine \
  tar -czf "/backup/${VOLUME}-${DATE}.tar.gz" -C /source .
BACKUP_SIZE=$(du -h "$BACKUP_DIR/${VOLUME}-${DATE}.tar.gz" | cut -f1)
echo " Created: ${VOLUME}-${DATE}.tar.gz ($BACKUP_SIZE)"
done
# Upload to S3
echo "Uploading to S3..."
rclone sync "$BACKUP_DIR" "$REMOTE:$BUCKET/docker/$HOSTNAME/$DATE/" --progress
# Verify upload
echo "Verifying upload..."
REMOTE_COUNT=$(rclone ls "$REMOTE:$BUCKET/docker/$HOSTNAME/$DATE/" | wc -l)
LOCAL_COUNT=$(ls -1 "$BACKUP_DIR"/*.tar.gz 2>/dev/null | wc -l)
if [ "$REMOTE_COUNT" -eq "$LOCAL_COUNT" ]; then
echo "Upload verified: $REMOTE_COUNT files"
else
echo "WARNING: Upload verification failed!"
exit 1
fi
# Cleanup local backups
rm -rf "$BACKUP_DIR"
# Delete old backups from S3 (keep 14 days)
echo "Cleaning old backups..."
rclone delete "$REMOTE:$BUCKET/docker/$HOSTNAME/" --min-age 14d
echo "=== Backup Complete: $(date) ==="
Method 2: Restic for Docker Volumes
Restic adds encryption, deduplication, and versioning on top of plain tarballs. The script below sources its repository settings and credentials from /root/.restic-env; a minimal sketch of that file, assuming an S3 repository on the endpoint used elsewhere in this guide (bucket, keys, and password are placeholders):
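# /root/.restic-env (values are placeholders)
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
export RESTIC_REPOSITORY="s3:https://s3.danubedata.com/your-bucket/restic"
export RESTIC_PASSWORD="a-strong-encryption-password"
# Initialize the repository once before the first backup
source /root/.restic-env
restic init
With the repository initialized, the backup script iterates over the mountpoint of every Docker volume: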
#!/bin/bash
# /usr/local/bin/docker-restic-backup.sh
set -e
# Load Restic credentials
source /root/.restic-env
echo "=== Docker Restic Backup Started: $(date) ==="
# Backup each volume
for VOLUME in $(docker volume ls -q); do
echo "Processing volume: $VOLUME"
# Skip temporary volumes
[[ "$VOLUME" == *"_tmp"* ]] && continue
[[ "$VOLUME" == *"_cache"* ]] && continue
# Locate the volume's mountpoint on the host
VOLUME_PATH=$(docker volume inspect "$VOLUME" --format '{{ .Mountpoint }}')
if [ -d "$VOLUME_PATH" ]; then
echo " Backing up $VOLUME_PATH"
restic backup "$VOLUME_PATH" --tag "docker-volume" --tag "$VOLUME"
fi
done
# Apply retention policy
restic forget \
  --keep-hourly 24 \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune
echo "=== Backup Complete: $(date) ==="
Method 3: Docker Compose Aware Backup
For Docker Compose projects, back up with application awareness by dumping databases first, then capturing the named volumes and the compose configuration:
#!/bin/bash
# /usr/local/bin/compose-backup.sh
# Usage: compose-backup.sh /path/to/docker-compose.yml
set -e
COMPOSE_FILE="${1:-docker-compose.yml}"
PROJECT_DIR=$(dirname "$COMPOSE_FILE")
PROJECT_NAME=$(basename "$PROJECT_DIR")
BACKUP_DIR="/tmp/compose-backup/$PROJECT_NAME"
DATE=$(date +%Y-%m-%d_%H-%M)
BUCKET="your-bucket"
mkdir -p "$BACKUP_DIR"
echo "=== Backing up Docker Compose project: $PROJECT_NAME ==="
# Change to project directory
cd "$PROJECT_DIR"
# 1. Export database if present (MySQL/PostgreSQL)
if docker compose ps | grep -qE "mysql|mariadb"; then
echo "Dumping MySQL database..."
docker compose exec -T db mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --all-databases \
  > "$BACKUP_DIR/database.sql"
fi
if docker compose ps | grep -q "postgres"; then
echo "Dumping PostgreSQL database..."
docker compose exec -T db pg_dumpall -U postgres > "$BACKUP_DIR/database.sql"
fi
# 2. Stop containers for consistent backup (optional but recommended)
# docker compose stop
# 3. Backup all volumes
VOLUMES=$(docker compose config --volumes 2>/dev/null | tr -d ' ')
for VOLUME in $VOLUMES; do
FULL_VOLUME="${PROJECT_NAME}_${VOLUME}"
echo "Backing up volume: $FULL_VOLUME"
docker run --rm \
  -v "${FULL_VOLUME}":/source:ro \
  -v "$BACKUP_DIR":/backup \
  alpine \
  tar -czf "/backup/${VOLUME}.tar.gz" -C /source .
done
# 4. Backup compose file and env
cp "$COMPOSE_FILE" "$BACKUP_DIR/"
[ -f .env ] && cp .env "$BACKUP_DIR/"
# 5. Restart containers if stopped
# docker compose start
# 6. Create final archive
ARCHIVE="/tmp/${PROJECT_NAME}-${DATE}.tar.gz"
tar -czf "$ARCHIVE" -C "$BACKUP_DIR" .
# 7. Upload to S3
echo "Uploading to S3..."
rclone copy "$ARCHIVE" "danubedata:$BUCKET/docker-compose/$PROJECT_NAME/"
# 8. Cleanup
rm -rf "$BACKUP_DIR" "$ARCHIVE"
echo "=== Backup Complete ==="
Database Container Backups
Databases need special handling to ensure consistency:
MySQL/MariaDB Container:
#!/bin/bash
# mysql-container-backup.sh
CONTAINER="mysql-container-name"
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_FILE="/tmp/mysql-$DATE.sql.gz"
# Dump database (no container stop needed)
docker exec "$CONTAINER" mysqldump -u root -p"$MYSQL_ROOT_PASSWORD"
--all-databases --single-transaction --routines --triggers
| gzip > "$BACKUP_FILE"
# Upload to S3
rclone copy "$BACKUP_FILE" danubedata:your-bucket/databases/mysql/
# Cleanup
rm "$BACKUP_FILE"
PostgreSQL Container:
#!/bin/bash
# postgres-container-backup.sh
CONTAINER="postgres-container-name"
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_FILE="/tmp/postgres-$DATE.sql.gz"
# Dump all databases
docker exec "$CONTAINER" pg_dumpall -U postgres | gzip > "$BACKUP_FILE"
# Upload to S3
rclone copy "$BACKUP_FILE" danubedata:your-bucket/databases/postgres/
# Cleanup
rm "$BACKUP_FILE"
MongoDB Container:
#!/bin/bash
# mongo-container-backup.sh
CONTAINER="mongo-container-name"
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_DIR="/tmp/mongo-$DATE"
# Dump all databases
docker exec "$CONTAINER" mongodump --out /dump
docker cp "$CONTAINER":/dump "$BACKUP_DIR"
# Archive and upload
tar -czf "${BACKUP_DIR}.tar.gz" -C "$BACKUP_DIR" .
rclone copy "${BACKUP_DIR}.tar.gz" danubedata:your-bucket/databases/mongo/
# Cleanup
rm -rf "$BACKUP_DIR" "${BACKUP_DIR}.tar.gz"
docker exec "$CONTAINER" rm -rf /dump
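Restoring a dump is the reverse: stream it back into the database client inside the running container. A sketch for MySQL and PostgreSQL, using the example container names above and example timestamps:
# MySQL: download the dump and pipe it into the container
rclone copy danubedata:your-bucket/databases/mysql/mysql-2025-01-05_03-00.sql.gz /tmp/
gunzip < /tmp/mysql-2025-01-05_03-00.sql.gz | \
  docker exec -i mysql-container-name mysql -u root -p"$MYSQL_ROOT_PASSWORD"
# PostgreSQL: pg_dumpall output restores through psql
gunzip < /tmp/postgres-2025-01-05_03-00.sql.gz | \
  docker exec -i postgres-container-name psql -U postgres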
Part 2: Kubernetes Backups with Velero
Velero is the industry standard for Kubernetes backup. It backs up both Kubernetes resources (deployments, services, etc.) and persistent volumes.
Step 1: Install Velero CLI
Note: Check Velero releases for the latest version.
# macOS
brew install velero
# Linux (the commands below fetch the latest release automatically)
VELERO_VERSION=$(curl -s https://api.github.com/repos/vmware-tanzu/velero/releases/latest | grep tag_name | cut -d '"' -f 4)
wget "https://github.com/vmware-tanzu/velero/releases/download/${VELERO_VERSION}/velero-${VELERO_VERSION}-linux-amd64.tar.gz"
tar -xzf "velero-${VELERO_VERSION}-linux-amd64.tar.gz"
mv "velero-${VELERO_VERSION}-linux-amd64/velero" /usr/local/bin/
# Verify
velero version
Step 2: Create S3 Credentials File
# Create credentials file
cat > credentials-velero << EOF
[default]
aws_access_key_id=YOUR_ACCESS_KEY_ID
aws_secret_access_key=YOUR_SECRET_ACCESS_KEY
EOF
Step 3: Install Velero in Cluster
# Install Velero with S3 backend
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket your-backup-bucket \
  --secret-file ./credentials-velero \
  --backup-location-config region=eu-central-1,s3ForcePathStyle=true,s3Url=https://s3.danubedata.com \
  --snapshot-location-config region=eu-central-1 \
  --use-node-agent \
  --default-volumes-to-fs-backup
# Check installation
kubectl get pods -n velero
velero backup-location get
Step 4: Create Backups
# Backup entire cluster
velero backup create full-cluster-backup
# Backup specific namespace
velero backup create app-backup --include-namespaces my-app
# Backup with label selector
velero backup create prod-backup --selector environment=production
# Backup excluding certain resources
velero backup create backup-no-secrets \
  --exclude-resources secrets
# Check backup status
velero backup describe full-cluster-backup
velero backup logs full-cluster-backup
Step 5: Schedule Automated Backups
# Daily backup of entire cluster (keep 30 days)
velero schedule create daily-full \
  --schedule="0 2 * * *" \
  --ttl 720h
# Hourly backup of production namespace (keep 48 hours)
velero schedule create hourly-prod \
  --schedule="0 * * * *" \
  --include-namespaces production \
  --ttl 48h
# Weekly backup with longer retention (keep 1 year)
velero schedule create weekly-full \
  --schedule="0 3 * * 0" \
  --ttl 8760h
# List schedules
velero schedule get
# Check scheduled backup status
velero backup get
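If you manage the cluster declaratively, the same schedules can be created as Velero Schedule resources instead of CLI calls; a sketch equivalent to the hourly production job above:
cat << EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: hourly-prod
  namespace: velero
spec:
  schedule: "0 * * * *"
  template:
    includedNamespaces:
      - production
    ttl: 48h0m0s
EOF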
Step 6: Restore from Backup
# List available backups
velero backup get
# Restore entire backup
velero restore create --from-backup full-cluster-backup
# Restore to different namespace
velero restore create --from-backup app-backup \
  --namespace-mappings old-namespace:new-namespace
# Restore specific resources only
velero restore create --from-backup full-cluster-backup \
  --include-resources deployments,services,configmaps
# Restore excluding certain resources
velero restore create --from-backup full-cluster-backup \
  --exclude-resources persistentvolumeclaims
# Check restore status
velero restore describe my-restore
velero restore logs my-restore
Velero with CSI Snapshots
For volume snapshots using CSI drivers:
# Install CSI plugin
velero install \
  --features=EnableCSI \
  --plugins velero/velero-plugin-for-aws:v1.9.0,velero/velero-plugin-for-csi:v0.7.0 \
...
# Create VolumeSnapshotClass
cat << EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: velero-snapshot-class
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: your-csi-driver
deletionPolicy: Retain
EOF
# Backup with CSI snapshots
velero backup create csi-backup --snapshot-volumes
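Once the backup completes, you can confirm that snapshots were actually taken (assuming the external-snapshotter CRDs are installed in the cluster):
# Per-volume snapshot details for the backup
velero backup describe csi-backup --details
# CSI snapshots show up as VolumeSnapshot objects
kubectl get volumesnapshots --all-namespaces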
Disaster Recovery Testing
#!/bin/bash
# dr-test.sh - Disaster Recovery Test
# Create test namespace
kubectl create namespace dr-test
# Restore production to test namespace
velero restore create dr-test-restore \
  --from-backup daily-full-latest \
  --include-namespaces production \
  --namespace-mappings production:dr-test
# Wait for restore to complete
while ! velero restore describe dr-test-restore 2>/dev/null | grep -q "Phase:.*Completed"; do
echo "Waiting for restore to complete..."
sleep 10
done
# Verify pods are running
kubectl get pods -n dr-test
# Run smoke tests
kubectl exec -n dr-test deploy/my-app -- /app/healthcheck.sh
# Cleanup
kubectl delete namespace dr-test
velero restore delete dr-test-restore
echo "DR test completed successfully!"
Backup Schedule Recommendations
| Environment | Frequency | Retention | Method |
|---|---|---|---|
| Production Database | Hourly | 48 hours | Database dump + Restic |
| Production Apps | Daily | 30 days | Velero/Volume backup |
| Staging | Daily | 7 days | Velero |
| Development | Weekly | 4 weeks | Script/Velero |
| Docker Compose (Single Server) | Daily | 14 days | Script + rclone |
Restoring Docker Volumes
#!/bin/bash
# docker-restore.sh
# Usage: docker-restore.sh volume-name 2025-01-05_03-00
VOLUME_NAME="$1"
BACKUP_DATE="$2"
BUCKET="your-bucket"
HOSTNAME=$(hostname)
if [ -z "$VOLUME_NAME" ] || [ -z "$BACKUP_DATE" ]; then
echo "Usage: $0 "
echo "Example: $0 myapp_data 2025-01-05_03-00"
exit 1
fi
BACKUP_FILE="${VOLUME_NAME}-${BACKUP_DATE}.tar.gz"
TEMP_DIR="/tmp/docker-restore"
mkdir -p "$TEMP_DIR"
# Download from S3
echo "Downloading backup..."
rclone copy "danubedata:$BUCKET/docker/$HOSTNAME/$BACKUP_DATE/$BACKUP_FILE" "$TEMP_DIR/"
if [ ! -f "$TEMP_DIR/$BACKUP_FILE" ]; then
echo "Error: Backup file not found"
exit 1
fi
# Create volume if it doesn't exist
docker volume create "$VOLUME_NAME" 2>/dev/null || true
# Restore data
echo "Restoring volume..."
docker run --rm \
  -v "$VOLUME_NAME":/target \
  -v "$TEMP_DIR":/backup:ro \
  alpine \
  sh -c "rm -rf /target/* && tar -xzf /backup/$BACKUP_FILE -C /target"
# Cleanup
rm -rf "$TEMP_DIR"
echo "Restore complete: $VOLUME_NAME"
Monitoring and Alerts
Velero Prometheus Metrics
# Velero exposes metrics at :8085/metrics
# Add to Prometheus scrape config:
- job_name: velero
  static_configs:
    - targets: ['velero.velero.svc:8085']
# Useful metrics:
# velero_backup_success_total
# velero_backup_failure_total
# velero_backup_duration_seconds
# velero_restore_success_total
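Building on these metrics, a Prometheus alerting rule can flag failed backups; a sketch (the rules file path and thresholds are assumptions):
cat > /etc/prometheus/rules/velero-alerts.yml << 'EOF'
groups:
  - name: velero
    rules:
      - alert: VeleroBackupFailed
        expr: increase(velero_backup_failure_total[1h]) > 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "A Velero backup has failed within the last hour"
EOF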
Backup Failure Alerting
# Add to backup scripts
notify_failure() {
curl -X POST "https://hooks.slack.com/your-webhook" \
  -H "Content-Type: application/json" \
  -d "{\"text\": \"BACKUP FAILED: $1 at $(date)\"}"
}
# Wrap backup commands (--wait makes the exit code reflect the backup outcome)
if ! velero backup create daily-backup --wait; then
notify_failure "Velero daily backup"
fi
Get Started
Protect your containerized applications with S3 backup:
- Create a DanubeData account
- Create a storage bucket for backups
- Generate access keys
- Set up Docker volume scripts or install Velero
- Configure automated schedules
👉 Create Your Backup Bucket - €3.99/month includes 1TB storage
Containers are ephemeral, but your data shouldn't be. Set up proper backup today.
Need help with container backup strategy? Contact our team—we run Kubernetes in production and understand the challenges.