Linux servers are the backbone of modern infrastructure. Whether you're running web applications, databases, or microservices, reliable backups are non-negotiable. S3-compatible storage offers the perfect destination: durable, affordable, and accessible from anywhere.
This guide covers three battle-tested methods: Restic (modern, encrypted, deduplicated), rclone (simple sync), and Duplicity (GPG-encrypted incremental backups).
Why S3 for Server Backups?
| Feature | Local/NAS | Rsync to Server | S3 Storage |
|---|---|---|---|
| Geographic Redundancy | No | Manual setup | Built-in |
| Durability | RAID dependent | Server dependent | 99.999999999% |
| Maintenance | Hardware, RAID | Second server | Zero |
| Scalability | Buy more drives | Limited | Unlimited |
| Cost per TB | $50-100 drives | $5-10/mo server | €3.99/mo |
| API Access | No | SSH only | Standard S3 API |
What You'll Need
- Linux server (Ubuntu, Debian, RHEL, CentOS, etc.)
- Root or sudo access
- S3-compatible storage credentials:
  - Endpoint: s3.danubedata.com
  - Access Key ID
  - Secret Access Key
  - Bucket name
Method 1: Restic (Recommended)
Restic is a modern backup program designed for security and efficiency. It provides encryption, deduplication, and integrity verification out of the box.
Why Choose Restic?
- AES-256 encryption (client-side, password-protected)
- Content-defined chunking and deduplication
- Fast incremental backups
- Easy restore (mount backups as filesystem)
- Cross-platform (Linux, Mac, Windows)
- Active development and community
Step 1: Install Restic
# Ubuntu/Debian
apt update && apt install restic -y
# RHEL/CentOS/Rocky
dnf install restic -y
# Or download a specific release directly (0.17.0 shown; check the releases page for the current version)
wget https://github.com/restic/restic/releases/download/v0.17.0/restic_0.17.0_linux_amd64.bz2
bunzip2 restic_0.17.0_linux_amd64.bz2
chmod +x restic_0.17.0_linux_amd64
mv restic_0.17.0_linux_amd64 /usr/local/bin/restic
# Verify installation
restic version
Step 2: Configure Environment
Create a secure credentials file:
# Create credentials file
cat > /root/.restic-env << 'EOF'
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
export RESTIC_REPOSITORY="s3:https://s3.danubedata.com/your-bucket/server-backup"
export RESTIC_PASSWORD="your-strong-encryption-password-here"
EOF
# Secure the file
chmod 600 /root/.restic-env
# Load credentials
source /root/.restic-env
Step 3: Initialize Repository
# Source credentials
source /root/.restic-env
# Initialize repository (first time only)
restic init
# Expected output:
# created restic repository at s3:https://s3.danubedata.com/your-bucket/server-backup
Step 4: Run First Backup
# Backup important directories
# (note: copying /var/lib/mysql while MySQL is running can yield an inconsistent
#  backup; prefer the database dump scripts later in this guide)
restic backup /etc /home /var/www /var/lib/mysql --verbose
# Backup with exclusions
restic backup /etc /home /var/www \
    --exclude="*.log" \
    --exclude="*.tmp" \
    --exclude="node_modules" \
    --exclude="vendor" \
    --exclude=".cache" \
    --verbose
# Backup with tags for organization
restic backup /etc --tag config --tag production
restic backup /var/www --tag webapps --tag production
restic backup /home --tag users --tag production
Step 5: View and Manage Snapshots
# List all snapshots
restic snapshots
# List snapshots with specific tag
restic snapshots --tag webapps
# Show repository statistics (across all snapshots, or only the latest)
restic stats
restic stats latest
# Find files in backups
restic find "*.conf"
restic find --snapshot latest "nginx.conf"
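Once you have two or more snapshots, restic diff shows exactly what changed between them. A quick sketch; the two IDs below are placeholders, copy real ones from the restic snapshots output:
# Compare two snapshots (IDs are placeholders from `restic snapshots`)
restic diff 4bba301e 9f417309
# Output lists added, removed, and modified paths between the two snapshots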
Step 6: Create Automated Backup Script
#!/bin/bash
# /usr/local/bin/restic-backup.sh
# Automated Restic backup script for Linux servers
set -e
# Load credentials
source /root/.restic-env
# Logging
LOG_FILE="/var/log/restic-backup.log"
exec 1> >(tee -a "$LOG_FILE") 2>&1
echo "=========================================="
echo "Restic Backup Started: $(date)"
echo "=========================================="
# Define what to backup
BACKUP_PATHS=(
"/etc"
"/home"
"/var/www"
"/var/lib/mysql"
"/root"
"/opt"
)
# Define exclusions
EXCLUDE_PATTERNS=(
"*.log"
"*.tmp"
"*.cache"
"*.pid"
"*.sock"
"node_modules"
"vendor"
".git"
"__pycache__"
"*.pyc"
"/var/lib/mysql/*.sock"
)
# Build exclude arguments as an array (safer than string splitting)
EXCLUDES=()
for pattern in "${EXCLUDE_PATTERNS[@]}"; do
    EXCLUDES+=(--exclude="$pattern")
done
# Run backup
echo "Starting backup..."
restic backup "${BACKUP_PATHS[@]}" \
    "${EXCLUDES[@]}" \
    --tag "$(hostname)" \
    --tag daily \
    --verbose
# Check backup integrity (weekly on Sundays)
if [ $(date +%u) -eq 7 ]; then
echo "Running integrity check..."
restic check --read-data-subset=10%
fi
# Apply retention policy
echo "Applying retention policy..."
restic forget \
    --keep-hourly 24 \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12 \
    --keep-yearly 2 \
    --prune
# Show stats
echo "Backup statistics:"
restic stats
echo "=========================================="
echo "Backup Completed: $(date)"
echo "=========================================="
Make executable:
chmod +x /usr/local/bin/restic-backup.sh
Step 7: Schedule with Cron
# Edit crontab
crontab -e
# Daily backup at 3 AM
0 3 * * * /usr/local/bin/restic-backup.sh
# Or use systemd timer (recommended for modern systems)
cat > /etc/systemd/system/restic-backup.service << 'EOF'
[Unit]
Description=Restic Backup
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/restic-backup.sh
User=root
EOF
cat > /etc/systemd/system/restic-backup.timer << 'EOF'
[Unit]
Description=Daily Restic Backup
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
RandomizedDelaySec=1800
[Install]
WantedBy=timers.target
EOF
# Enable timer
systemctl daemon-reload
systemctl enable restic-backup.timer
systemctl start restic-backup.timer
# Check timer status
systemctl list-timers restic-backup.timer
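If a backup occasionally runs long, a second invocation could start while the first is still working. One way to guard against overlap, assuming the util-linux flock tool is present (the lock file path is arbitrary):
# Cron variant: -n makes flock skip the run instead of queuing a second one
0 3 * * * flock -n /var/lock/restic-backup.lock /usr/local/bin/restic-backup.sh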
Restoring from Restic
# List snapshots
restic snapshots
# Restore entire snapshot to specific location
restic restore latest --target /restore
# Restore specific directory
restic restore latest --target /restore --include "/etc/nginx"
# Restore specific file
restic restore latest --target /tmp --include "/etc/nginx/nginx.conf"
# Mount backup as filesystem (requires FUSE; runs in the foreground, so use a second shell)
mkdir /mnt/restic
restic mount /mnt/restic
# Now browse /mnt/restic/snapshots/ from another terminal
# When done (Ctrl-C in the mount terminal, or):
umount /mnt/restic
# Restore to original location (careful!)
restic restore latest --target / --include "/etc/nginx"
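For a single file there is also restic dump, which streams a file from a snapshot to stdout, no restore target needed:
# Write one file from the latest snapshot to stdout
restic dump latest /etc/nginx/nginx.conf > /tmp/nginx.conf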
Method 2: Rclone (Simple Sync)
Rclone provides simple file synchronization without deduplication; it does not encrypt by default, though a crypt overlay is available (see the sketch after the configuration steps below).
Install Rclone
# Install rclone
curl https://rclone.org/install.sh | sudo bash
# Configure S3 remote
rclone config
# n) New remote
# name> s3backup
# Storage> s3
# provider> Other
# access_key_id> YOUR_ACCESS_KEY
# secret_access_key> YOUR_SECRET_KEY
# region> eu-central-1
# endpoint> https://s3.danubedata.com
# (accept defaults for rest)
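If you want client-side encryption with rclone, you can layer a crypt remote on top of the S3 remote. A minimal sketch, assuming the s3backup remote above; the remote name and bucket path are illustrative, and rclone prompts for (and obscures) the passwords itself:
# Add a crypt remote that wraps the S3 remote
rclone config
# n) New remote
# name> s3crypt
# Storage> crypt
# remote> s3backup:your-bucket/encrypted
# (set the passwords when prompted; accept defaults for the rest)
# Back up through the overlay - contents and filenames are encrypted before upload
rclone sync /etc s3crypt:etc --log-level INFO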
Basic Rclone Backup Script
#!/bin/bash
# /usr/local/bin/rclone-backup.sh
BUCKET="your-bucket"
REMOTE="s3backup"
HOSTNAME=$(hostname)
DATE=$(date +%Y-%m-%d)
echo "Starting rclone backup: $(date)"
# Sync /etc (configurations)
rclone sync /etc "$REMOTE:$BUCKET/$HOSTNAME/etc" \
    --exclude "*.log" \
    --exclude "*.pid" \
    --log-level INFO
# Sync /var/www (web apps)
rclone sync /var/www "$REMOTE:$BUCKET/$HOSTNAME/www" \
    --exclude "node_modules/**" \
    --exclude "vendor/**" \
    --exclude ".git/**" \
    --log-level INFO
# Sync /home (user data)
rclone sync /home "$REMOTE:$BUCKET/$HOSTNAME/home" \
    --exclude ".cache/**" \
    --exclude "*.log" \
    --log-level INFO
echo "Backup completed: $(date)"
Method 3: Duplicity (GPG Encrypted)
Duplicity creates encrypted incremental backups using GPG encryption.
Install Duplicity
# Ubuntu/Debian
apt install duplicity python3-boto3 -y
# RHEL/CentOS
dnf install duplicity python3-boto3 -y
Configure and Run
# Set credentials
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
export PASSPHRASE="your-gpg-passphrase"
# Full backup (first time or monthly)
duplicity full /etc \
    --s3-endpoint-url https://s3.danubedata.com \
    boto3+s3://your-bucket/duplicity/etc
# Incremental backup (daily)
duplicity incr /etc \
    --s3-endpoint-url https://s3.danubedata.com \
    boto3+s3://your-bucket/duplicity/etc
# Full backup if older than 30 days, else incremental
duplicity --full-if-older-than 30D /etc \
    --s3-endpoint-url https://s3.danubedata.com \
    boto3+s3://your-bucket/duplicity/etc
# Restore
duplicity restore \
    --s3-endpoint-url https://s3.danubedata.com \
    boto3+s3://your-bucket/duplicity/etc \
    /tmp/restore
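Duplicity does not prune old backups on its own, so chains accumulate until you remove them. Two cleanup commands (the retention values are just examples):
# Keep the two most recent full chains, delete everything older
duplicity remove-all-but-n-full 2 --force \
    --s3-endpoint-url https://s3.danubedata.com \
    boto3+s3://your-bucket/duplicity/etc
# Or delete anything older than six months
duplicity remove-older-than 6M --force \
    --s3-endpoint-url https://s3.danubedata.com \
    boto3+s3://your-bucket/duplicity/etc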
Database-Specific Backups
MySQL/MariaDB
#!/bin/bash
# /usr/local/bin/mysql-backup.sh
source /root/.restic-env
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_DIR="/tmp/mysql-backup"
mkdir -p "$BACKUP_DIR"
# Dump all databases
mysqldump -u root --all-databases --single-transaction \
    --routines --triggers --events | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"
# Or dump individual databases
for DB in $(mysql -u root -e "SHOW DATABASES" -s --skip-column-names | grep -v -E "^(information_schema|performance_schema|sys)$"); do
mysqldump -u root --single-transaction "$DB" | gzip > "$BACKUP_DIR/$DB-$DATE.sql.gz"
done
# Upload to S3 with restic
restic backup "$BACKUP_DIR" --tag mysql --tag database
# Cleanup
rm -rf "$BACKUP_DIR"
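The dump commands above assume mysqldump can authenticate as root without an interactive password prompt. A common approach for unattended runs is a client options file (the password is a placeholder):
# /root/.my.cnf lets mysql/mysqldump authenticate non-interactively
cat > /root/.my.cnf << 'EOF'
[client]
user=root
password=your-mysql-root-password
EOF
chmod 600 /root/.my.cnf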
PostgreSQL
#!/bin/bash
# /usr/local/bin/postgres-backup.sh
source /root/.restic-env
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_DIR="/tmp/postgres-backup"
mkdir -p "$BACKUP_DIR"
# Dump all databases
pg_dumpall -U postgres | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"
# Or dump individual databases
for DB in $(psql -U postgres -t -c "SELECT datname FROM pg_database WHERE datistemplate = false"); do
pg_dump -U postgres "$DB" | gzip > "$BACKUP_DIR/$DB-$DATE.sql.gz"
done
# Upload to S3 with restic
restic backup "$BACKUP_DIR" --tag postgres --tag database
# Cleanup
rm -rf "$BACKUP_DIR"
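Depending on pg_hba.conf, psql -U postgres may be rejected when the script runs as root. With the default peer authentication, running the dumps as the postgres system user usually works:
# Run the dump as the postgres OS user (peer authentication)
sudo -u postgres pg_dumpall | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"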
MongoDB
#!/bin/bash
# /usr/local/bin/mongo-backup.sh
source /root/.restic-env
DATE=$(date +%Y-%m-%d_%H-%M)
BACKUP_DIR="/tmp/mongo-backup-$DATE"
# Dump all databases
mongodump --out "$BACKUP_DIR"
# Upload to S3 with restic
restic backup "$BACKUP_DIR" --tag mongodb --tag database
# Cleanup
rm -rf "$BACKUP_DIR"
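mongodump can also write a single compressed archive instead of a directory tree, which is convenient to hand to restic as one file (restore it later with mongorestore --archive --gzip):
# Alternative: one gzipped archive file instead of a directory per database
mongodump --archive="/tmp/mongo-$DATE.archive.gz" --gzip
restic backup "/tmp/mongo-$DATE.archive.gz" --tag mongodb --tag database
rm -f "/tmp/mongo-$DATE.archive.gz"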
What to Back Up on Linux Servers
| Directory | Priority | Contents |
|---|---|---|
| /etc | Critical | System and app configurations |
| /var/www | Critical | Web applications |
| /home | High | User data |
| /var/lib/mysql | Critical | MySQL data (prefer dumps) |
| /var/lib/postgresql | Critical | PostgreSQL data (prefer dumps) |
| /opt | Medium | Third-party applications |
| /root | High | Root user files, scripts |
| /var/log | Medium | Logs (optional, often large) |
Best Practices
1. Use Encryption
Always encrypt backups, especially if they contain sensitive data. Restic encrypts by default.
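If you would rather not keep the repository password in the environment file, restic can read it from a root-only file via RESTIC_PASSWORD_FILE instead:
# Store the password in a separate file and reference it from /root/.restic-env
echo "your-strong-encryption-password-here" > /root/.restic-password
chmod 600 /root/.restic-password
export RESTIC_PASSWORD_FILE=/root/.restic-password   # use instead of RESTIC_PASSWORD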
2. Test Restores
# Monthly restore test
restic restore latest --target /tmp/restore-test --include "/etc/nginx"
diff -r /etc/nginx /tmp/restore-test/etc/nginx
rm -rf /tmp/restore-test
3. Monitor Backup Health
# Add to the backup script after the restic command
# (note: with `set -e` the script aborts on failure before reaching this check,
#  so capture the exit status explicitly or append `|| true` to the backup command)
if [ $? -ne 0 ]; then
    curl -X POST "https://hooks.slack.com/your-webhook" \
        -H "Content-Type: application/json" \
        -d "{\"text\": \"BACKUP FAILED on $(hostname) at $(date)\"}"
fi
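A dead man's switch complements failure webhooks: the monitoring service alerts you when an expected ping never arrives, which also catches the case where the script did not run at all. A sketch against a healthchecks.io-style endpoint (the URL is a placeholder):
# Ping only on success; the service alerts if no ping arrives on schedule
/usr/local/bin/restic-backup.sh && curl -fsS --retry 3 https://hc-ping.com/your-check-uuid > /dev/null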
4. Set Appropriate Retention
# Production servers - aggressive retention
restic forget \
    --keep-hourly 48 \
    --keep-daily 30 \
    --keep-weekly 12 \
    --keep-monthly 24 \
    --keep-yearly 5 \
    --prune
5. Verify Backup Integrity
# Weekly integrity check (reads 10% of data)
restic check --read-data-subset=10%
# Monthly full check
restic check --read-data
Comparison: Restic vs Rclone vs Duplicity
| Feature | Restic | Rclone | Duplicity |
|---|---|---|---|
| Encryption | Built-in AES-256 | Via crypt overlay | GPG |
| Deduplication | Yes | No | Yes |
| Incremental | Always | Sync only | Yes |
| Versioning | Built-in | No (S3 versioning) | Built-in |
| Restore Speed | Fast | Fast | Slower (chain) |
| Best For | Most use cases | Simple sync | GPG users |
Get Started
Protect your Linux servers with enterprise-grade S3 backup:
- Create a DanubeData account
- Create a storage bucket
- Generate access keys
- Install Restic and follow this guide
👉 Create Your Backup Bucket - €3.99/month includes 1TB storage
Your servers deserve better than local backups that can fail alongside them.
Questions about server backup strategy? Contact our team—we manage Linux servers ourselves and understand the challenges.