
rclone with S3-Compatible Storage: The Complete Guide (2026)

Adrian Silaghi
March 19, 2026
16 min read
#rclone #s3 #object storage #sync #backup #migration #cloud storage #linux

rclone is the Swiss Army knife of cloud storage. It supports over 70 backends — including every S3-compatible provider — and gives you powerful commands for syncing, copying, mounting, and encrypting your data. Think of it as rsync for cloud storage.

This guide covers everything from installation to advanced topics like encrypted backups, scheduled cron jobs, server-side copies between providers, and performance tuning — all configured for S3-compatible storage like DanubeData.

What is rclone and Why Use It?

rclone is a command-line program for managing files on cloud storage. It's open source, cross-platform, and incredibly versatile.

Key Features

  • Sync — Make a destination identical to a source (one-way or two-way)
  • Copy — Copy files between locations, skipping already-transferred files
  • Mount — Mount any cloud storage as a local filesystem (FUSE)
  • Crypt — Encrypt/decrypt files on the fly with client-side encryption
  • Serve — Serve remote files over HTTP, WebDAV, FTP, or SFTP
  • Bandwidth control — Limit upload/download speed
  • Filtering — Include/exclude files by pattern, size, or age
  • Checksums — Verify data integrity after transfers

| Use Case | Why rclone Excels |
|---|---|
| Server backups | Efficient sync with checksums, bandwidth limiting, cron integration |
| Data migration | Server-side copy between S3 providers — no local download needed |
| Local filesystem mount | Access S3 buckets as local directories with FUSE mount |
| Encrypted backups | Built-in client-side encryption (rclone crypt) with zero-knowledge design |
| CI/CD artifact storage | Fast, scriptable, works in Docker containers and pipelines |
| Media management | Handles millions of files with filtering and parallel transfers |

Installing rclone

Linux (Recommended: Official Script)

# Install the latest version with one command
curl https://rclone.org/install.sh | sudo bash

# Or install the latest beta version
curl https://rclone.org/install.sh | sudo bash -s beta

Linux (Package Manager)

# Debian/Ubuntu
sudo apt install rclone

# Fedora/RHEL
sudo dnf install rclone

# Arch Linux
sudo pacman -S rclone

# Alpine
sudo apk add rclone

macOS

# Using Homebrew (recommended)
brew install rclone

# Or using the install script
curl https://rclone.org/install.sh | sudo bash

Windows

# Using winget
winget install Rclone.Rclone

# Using Chocolatey
choco install rclone

# Using Scoop
scoop install rclone

Or download the binary directly from rclone.org/downloads and add it to your PATH.

Docker

# Run rclone in Docker
docker run --rm -v ~/.config/rclone:/config/rclone -v /data:/data rclone/rclone ls danubedata:my-bucket

Verify Installation

rclone version
# Output: rclone v1.68.2 (or later)

Configuring rclone for S3-Compatible Storage

Interactive Configuration

rclone config

Follow the prompts:

n) New remote
name> danubedata
Storage> s3
provider> Other
env_auth> false
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region> fsn1
endpoint> s3.danubedata.ro
location_constraint> (leave blank)
acl> private
Edit advanced config?> n
y) Yes this is OK

Manual Configuration (Recommended)

Edit the rclone config file directly. This is faster and scriptable:

# Find the config file location
rclone config file
# Usually: ~/.config/rclone/rclone.conf

Add this to ~/.config/rclone/rclone.conf:

[danubedata]
type = s3
provider = Other
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = fsn1
endpoint = s3.danubedata.ro
acl = private
force_path_style = true

Using Environment Variables

For CI/CD pipelines or containers, use environment variables instead of a config file:

export RCLONE_CONFIG_DANUBEDATA_TYPE=s3
export RCLONE_CONFIG_DANUBEDATA_PROVIDER=Other
export RCLONE_CONFIG_DANUBEDATA_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export RCLONE_CONFIG_DANUBEDATA_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
export RCLONE_CONFIG_DANUBEDATA_REGION=fsn1
export RCLONE_CONFIG_DANUBEDATA_ENDPOINT=s3.danubedata.ro
export RCLONE_CONFIG_DANUBEDATA_FORCE_PATH_STYLE=true

# Now use "danubedata:" as the remote name
rclone ls danubedata:my-bucket

Verify the Configuration

# List all configured remotes
rclone listremotes

# Test the connection by listing buckets
rclone lsd danubedata:

# List contents of a bucket
rclone ls danubedata:my-bucket

Core Commands

ls — List Files

# List all files in a bucket (with sizes)
rclone ls danubedata:my-bucket

# List files with modification times
rclone lsl danubedata:my-bucket

# List only files in a directory (not recursive)
rclone lsf danubedata:my-bucket/uploads/

# List directories only
rclone lsd danubedata:my-bucket

# List with JSON output (great for scripting)
rclone lsjson danubedata:my-bucket/uploads/

# Show total size of a path
rclone size danubedata:my-bucket
# Output: Total objects: 1,523 (4.2 GiB)
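The lsjson output is plain JSON, so it pipes cleanly into other tools. A minimal sketch: the SAMPLE array below stands in for real `rclone lsjson` output (Path and Size are fields rclone emits), summing file sizes with python3:

```shell
# Sum file sizes from lsjson output with python3.
# SAMPLE stands in for: rclone lsjson danubedata:my-bucket/uploads/
SAMPLE='[{"Path":"a.jpg","Size":1024},{"Path":"b.jpg","Size":2048}]'
echo "$SAMPLE" | python3 -c '
import sys, json
items = json.load(sys.stdin)
print("total bytes:", sum(i["Size"] for i in items))
'
```

The same pattern works for filtering by name, finding the newest object, and so on.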

copy — Copy Files

# Copy a single file to S3
rclone copy /path/to/file.txt danubedata:my-bucket/

# Copy a directory to S3
rclone copy /var/www/uploads danubedata:my-bucket/uploads/

# Copy from S3 to local
rclone copy danubedata:my-bucket/backups/ /tmp/restore/

# Copy with progress display
rclone copy /data danubedata:my-bucket/data/ --progress

# Copy between two S3 remotes (server-side if same provider)
rclone copy danubedata:source-bucket/data aws-s3:dest-bucket/data

sync — Make Destination Match Source

Warning: sync deletes files in the destination that don't exist in the source. Use --dry-run first!

# Preview what sync would do (ALWAYS do this first)
rclone sync /var/www/assets danubedata:my-bucket/assets/ --dry-run

# Sync a local directory to S3
rclone sync /var/www/assets danubedata:my-bucket/assets/ --progress

# Sync with verbose output
rclone sync /data danubedata:my-bucket/data/ -v

# Sync and log changes
rclone sync /data danubedata:my-bucket/data/ --log-file=/var/log/rclone-sync.log --log-level INFO
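If a full dry run is impractical, rclone's standard `--backup-dir` flag offers a safety net: anything sync would delete or overwrite is moved into a side directory on the destination remote instead of being removed. A sketch (bucket paths are placeholders; the trash prefix must not overlap the sync destination):

```shell
# Build a dated trash prefix; anything sync would delete or overwrite
# is moved there instead of being removed (paths are placeholders).
TRASH="danubedata:my-bucket/.trash/$(date +%Y-%m-%d)"
echo "backup-dir: $TRASH"
# rclone sync /data danubedata:my-bucket/data/ --backup-dir "$TRASH"
```

Old trash prefixes can later be cleaned up with `rclone purge` once you are sure nothing needs restoring.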

move — Move Files

# Move files from local to S3 (deletes source after successful transfer)
rclone move /tmp/processed danubedata:my-bucket/archive/

# Move files within S3
rclone move danubedata:my-bucket/temp/ danubedata:my-bucket/archive/

# Move and delete empty source directories
rclone move /data/exports danubedata:my-bucket/exports/ --delete-empty-src-dirs

delete — Remove Files

# Delete all files in a path (keeps directory structure)
rclone delete danubedata:my-bucket/temp/

# Delete files older than 30 days
rclone delete danubedata:my-bucket/logs/ --min-age 30d

# Delete a specific file
rclone deletefile danubedata:my-bucket/old-report.pdf

# Remove an empty bucket
rclone rmdir danubedata:my-bucket

# Delete a path and all of its contents
rclone purge danubedata:my-bucket/test-data/

mount — Mount S3 as Local Filesystem

# Mount a bucket to a local directory
mkdir -p /mnt/s3-data
rclone mount danubedata:my-bucket /mnt/s3-data --daemon

# Mount with caching for better performance
rclone mount danubedata:my-bucket /mnt/s3-data \
    --vfs-cache-mode full \
    --vfs-cache-max-age 1h \
    --vfs-cache-max-size 5G \
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit 256M \
    --daemon

# Mount read-only
rclone mount danubedata:my-bucket /mnt/s3-data --read-only --daemon

# Unmount
fusermount -u /mnt/s3-data   # Linux
umount /mnt/s3-data           # macOS

Persistent Mount with systemd

Create a systemd service for automatic mounting on boot:

# /etc/systemd/system/rclone-s3.service
[Unit]
Description=rclone S3 mount
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount danubedata:my-bucket /mnt/s3-data \
    --vfs-cache-mode full \
    --vfs-cache-max-age 1h \
    --vfs-cache-max-size 5G \
    --allow-other \
    --config /root/.config/rclone/rclone.conf
ExecStop=/bin/fusermount -u /mnt/s3-data
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

# Reload systemd, then enable and start the mount
sudo systemctl daemon-reload
sudo systemctl enable rclone-s3
sudo systemctl start rclone-s3
sudo systemctl status rclone-s3
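FUSE mounts can drop silently after network blips, so a periodic healthcheck from cron is worth having. A sketch using mountpoint from util-linux (check_mount is a hypothetical helper; the restart line assumes the systemd unit name used above):

```shell
# check_mount: report whether a path is an active mount point
# (hypothetical helper built on util-linux's mountpoint command).
check_mount() {
    if mountpoint -q "$1"; then
        echo "mounted"
    else
        echo "not mounted"
        return 1
    fi
}

# Cron usage (placeholder paths): remount if the FUSE mount has dropped
# */5 * * * * check_mount /mnt/s3-data || systemctl restart rclone-s3
```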

Syncing Local Directories to S3

The most common use case: keeping a local directory in sync with S3 storage.

One-Way Sync (Local to S3)

# Sync website assets
rclone sync /var/www/mysite/uploads danubedata:mysite-bucket/uploads/ \
    --progress \
    --transfers 16 \
    --checkers 8

# Sync with size+modtime check (default)
rclone sync /data danubedata:my-bucket/data/

# Sync with checksum verification (slower but more reliable)
rclone sync /data danubedata:my-bucket/data/ --checksum

One-Way Sync (S3 to Local)

# Pull latest data from S3
rclone sync danubedata:my-bucket/shared-data/ /data/shared/ --progress

Two-Way Sync (Bisync)

# First run: establish baseline (required)
rclone bisync /data/project danubedata:my-bucket/project --resync

# Subsequent runs: two-way sync
rclone bisync /data/project danubedata:my-bucket/project

# With conflict resolution (newer file wins)
rclone bisync /data/project danubedata:my-bucket/project --conflict-resolve newer

Encrypting Data with rclone crypt

rclone crypt provides zero-knowledge encryption: your cloud provider never sees unencrypted data or filenames.

Setting Up an Encrypted Remote

Add this to your ~/.config/rclone/rclone.conf:

[danubedata-encrypted]
type = crypt
remote = danubedata:my-encrypted-bucket
password = YOUR_ENCRYPTION_PASSWORD_OBSCURED
password2 = YOUR_SALT_PASSWORD_OBSCURED
filename_encryption = standard
directory_name_encryption = true

Generate obscured passwords (rclone stores passwords obscured, not securely encrypted — protect the config file itself):

# Generate obscured password
rclone obscure "your-strong-encryption-password"
# Output: a long obscured string — use this in the config

# Or configure interactively
rclone config
# Choose: New remote -> crypt -> set underlying remote to danubedata:my-encrypted-bucket

Using the Encrypted Remote

# Upload files (automatically encrypted)
rclone copy /data/sensitive danubedata-encrypted:backups/

# List files (filenames are decrypted for you)
rclone ls danubedata-encrypted:backups/

# Download and decrypt
rclone copy danubedata-encrypted:backups/ /tmp/restored/

# The underlying S3 bucket contains encrypted filenames and data
rclone ls danubedata:my-encrypted-bucket
# Shows: encrypted gibberish filenames

What Gets Encrypted?

| Component | Encrypted? | Details |
|---|---|---|
| File contents | Yes | XSalsa20-Poly1305 (NaCl SecretBox) |
| File names | Yes (configurable) | EME encryption using AES, or simpler obfuscation |
| Directory names | Yes (configurable) | Same as file names |
| File sizes | No | Encrypted size reveals the approximate original size |
| Modification times | No | Timestamps are preserved in the clear |

Critical: Back up your encryption passwords! If you lose them, your data is irrecoverable.

Scheduling Backups with Cron

Basic Cron Backup

# Edit your crontab
crontab -e
# Backup /data every night at 2:00 AM
0 2 * * * /usr/bin/rclone sync /data danubedata:my-backups/daily/ --log-file=/var/log/rclone-backup.log --log-level INFO

# Backup with encrypted storage every night at 3:00 AM
0 3 * * * /usr/bin/rclone sync /data/sensitive danubedata-encrypted:backups/ --log-file=/var/log/rclone-encrypted-backup.log --log-level INFO
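Cron entries like these can overlap if a run takes longer than its interval, and two concurrent syncs to the same destination are wasteful at best. A common guard is flock from util-linux; a sketch (run_exclusive is a hypothetical helper name):

```shell
# run_exclusive: run a command under a lock file; if another instance still
# holds the lock, skip this run instead of stacking up (-n = non-blocking).
run_exclusive() {
    lock="$1"; shift
    flock -n "$lock" "$@"
}

# Crontab usage (placeholders):
# 0 2 * * * flock -n /var/lock/rclone-backup.lock /usr/bin/rclone sync /data danubedata:my-backups/daily/
```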

Advanced Backup Script

#!/bin/bash
# /usr/local/bin/s3-backup.sh
# Comprehensive backup script with rotation and notifications

set -euo pipefail

# Configuration
REMOTE="danubedata"
BUCKET="my-backups"
SOURCE="/data"
LOG_DIR="/var/log/rclone"
DATE=$(date +%Y-%m-%d)
RETENTION_DAYS=30

mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/backup-${DATE}.log"

echo "=== Backup started: $(date) ===" | tee -a "$LOG_FILE"

# Create dated snapshot
echo "Creating backup snapshot..." | tee -a "$LOG_FILE"
rclone sync "$SOURCE" "${REMOTE}:${BUCKET}/snapshots/${DATE}/" \
    --transfers 16 \
    --checkers 8 \
    --log-file="$LOG_FILE" \
    --log-level INFO \
    --stats 60s \
    --stats-log-level NOTICE

# Update "latest" copy (fast — server-side copy)
echo "Updating latest pointer..." | tee -a "$LOG_FILE"
rclone sync "${REMOTE}:${BUCKET}/snapshots/${DATE}/" "${REMOTE}:${BUCKET}/latest/" \
    --log-file="$LOG_FILE" \
    --log-level INFO

# Clean up old snapshots
echo "Cleaning up snapshots older than ${RETENTION_DAYS} days..." | tee -a "$LOG_FILE"
rclone delete "${REMOTE}:${BUCKET}/snapshots/" \
    --min-age "${RETENTION_DAYS}d" \
    --log-file="$LOG_FILE" \
    --log-level INFO

# Remove empty directories
rclone rmdirs "${REMOTE}:${BUCKET}/snapshots/" \
    --log-file="$LOG_FILE" \
    --log-level INFO 2>/dev/null || true

# Report
TOTAL_SIZE=$(rclone size "${REMOTE}:${BUCKET}/snapshots/${DATE}/" --json 2>/dev/null | python3 -c 'import sys, json; d = json.load(sys.stdin); print("%.2f GiB" % (d["bytes"] / 2**30))' 2>/dev/null || echo "unknown")
echo "=== Backup complete: $(date) | Size: ${TOTAL_SIZE} ===" | tee -a "$LOG_FILE"
# Make executable and schedule
chmod +x /usr/local/bin/s3-backup.sh

# Add to crontab: run at 2:00 AM daily (preserves existing entries)
(crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/s3-backup.sh") | crontab -

Database Backup with rclone

#!/bin/bash
# /usr/local/bin/db-backup-s3.sh

set -euo pipefail

DATE=$(date +%Y-%m-%d_%H%M)
BACKUP_DIR="/tmp/db-backups"
REMOTE="danubedata"
BUCKET="my-db-backups"

mkdir -p "$BACKUP_DIR"

# MySQL backup
mysqldump --all-databases --single-transaction --quick \
    | gzip > "${BACKUP_DIR}/mysql-${DATE}.sql.gz"

# PostgreSQL backup
pg_dumpall | gzip > "${BACKUP_DIR}/postgresql-${DATE}.sql.gz"

# Upload to S3
rclone copy "$BACKUP_DIR" "${REMOTE}:${BUCKET}/databases/${DATE}/" --progress

# Clean up local temp files
rm -rf "$BACKUP_DIR"

# Delete remote backups older than 14 days
rclone delete "${REMOTE}:${BUCKET}/databases/" --min-age 14d
rclone rmdirs "${REMOTE}:${BUCKET}/databases/" 2>/dev/null || true

echo "Database backup complete: ${DATE}"

Bandwidth Limiting and Throttling

# Limit bandwidth to 10 MB/s
rclone sync /data danubedata:my-bucket/ --bwlimit 10M

# Different limits for upload and download
rclone sync /data danubedata:my-bucket/ --bwlimit "10M:5M"
# 10 MB/s upload, 5 MB/s download

# Time-based bandwidth limits (great for shared connections)
rclone sync /data danubedata:my-bucket/ --bwlimit "08:00,512k 00:00,10M"
# 512 KB/s during business hours, 10 MB/s overnight

# Limit during the work week only (timetable format: DAY-HH:MM,BANDWIDTH)
rclone sync /data danubedata:my-bucket/ --bwlimit "Mon-08:00,512k Fri-18:00,10M"
# 512 KB/s from Monday 08:00 until Friday 18:00, then 10 MB/s until Monday morning

Filtering Files and Directories

Include/Exclude Patterns

# Only sync specific file types
rclone sync /data danubedata:my-bucket/ --include "*.jpg" --include "*.png" --include "*.webp"

# Exclude certain patterns
rclone sync /project danubedata:my-bucket/project/ \
    --exclude "node_modules/**" \
    --exclude ".git/**" \
    --exclude "__pycache__/**" \
    --exclude "*.pyc" \
    --exclude ".env"

# Exclude by size (skip files larger than 100MB)
rclone sync /data danubedata:my-bucket/ --max-size 100M

# Only sync files smaller than 10KB
rclone sync /data danubedata:my-bucket/ --max-size 10k

# Only sync files modified in the last 7 days
rclone sync /data danubedata:my-bucket/ --max-age 7d

# Only sync files older than 30 days (archival)
rclone sync /data danubedata:my-bucket/archive/ --min-age 30d

Filter File

For complex filtering, use a filter file:

# /etc/rclone/backup-filters.txt
# Always-include patterns (rules are evaluated top to bottom and the first
# match wins, so these take priority over the excludes below)
+ *.jpg
+ *.png
+ *.pdf
+ *.docx
+ *.sql.gz

# Exclude patterns
- node_modules/**
- .git/**
- __pycache__/**
- *.tmp
- *.log
- .DS_Store
- Thumbs.db

# Exclude hidden files
- .*

# Include everything else
+ **

rclone sync /data danubedata:my-bucket/ --filter-from /etc/rclone/backup-filters.txt

Server-Side Copy Between Remotes

When both source and destination are S3-compatible, rclone can perform server-side copies — the data never touches your local machine.

Same Provider

# Copy between buckets on the same provider (fastest — server-side)
rclone copy danubedata:source-bucket/ danubedata:dest-bucket/ --progress

# Sync between buckets
rclone sync danubedata:production-assets/ danubedata:backup-assets/ --progress

Between Different S3 Providers

# Configure both remotes in rclone.conf, then:
rclone copy aws-s3:my-aws-bucket/ danubedata:my-danubedata-bucket/ \
    --progress \
    --transfers 16 \
    --s3-chunk-size 64M

Migrating from One S3 Provider to Another

rclone makes migration between S3 providers straightforward. Here's a step-by-step process:

Step 1: Configure Both Remotes

# ~/.config/rclone/rclone.conf

[old-provider]
type = s3
provider = AWS
access_key_id = OLD_KEY
secret_access_key = OLD_SECRET
region = eu-central-1
endpoint = s3.eu-central-1.amazonaws.com

[danubedata]
type = s3
provider = Other
access_key_id = NEW_KEY
secret_access_key = NEW_SECRET
region = fsn1
endpoint = s3.danubedata.ro
force_path_style = true

Step 2: Assess the Migration

# Check how much data needs to be migrated
rclone size old-provider:my-bucket
# Total objects: 45,231 (128.4 GiB)

# List the largest files
rclone ls old-provider:my-bucket --max-depth 1 | sort -k1 -n -r | head -20

# Dry run the migration
rclone copy old-provider:my-bucket/ danubedata:my-bucket/ --dry-run -v

Step 3: Run the Migration

# Initial copy (may take hours for large datasets)
rclone copy old-provider:my-bucket/ danubedata:my-bucket/ \
    --progress \
    --transfers 32 \
    --checkers 16 \
    --s3-chunk-size 64M \
    --log-file=/var/log/migration.log \
    --log-level INFO

# Verify with checksums
rclone check old-provider:my-bucket/ danubedata:my-bucket/ \
    --one-way \
    --log-file=/var/log/migration-verify.log
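Before cutting over, it is worth gating the next step on the exit code of rclone check, which is non-zero when differences are found. A sketch using a hypothetical helper, check_ok:

```shell
# check_ok: run a verification command and report pass/fail,
# propagating failure via the return code (hypothetical helper).
check_ok() {
    if "$@"; then
        echo "verification passed"
    else
        echo "verification FAILED" >&2
        return 1
    fi
}

# Usage (placeholders):
# check_ok rclone check old-provider:my-bucket/ danubedata:my-bucket/ --one-way
```

In a migration script with `set -e`, a failed check then stops the run before any DNS or config switch.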

Step 4: Final Sync (After Switching DNS/Config)

# Catch any files that changed during migration
rclone sync old-provider:my-bucket/ danubedata:my-bucket/ \
    --progress \
    --transfers 32

Performance Tuning

| Parameter | Default | Recommended | Description |
|---|---|---|---|
| --transfers | 4 | 16-32 | Number of parallel file transfers |
| --checkers | 8 | 16-32 | Number of parallel file checkers (for comparing) |
| --s3-chunk-size | 5 MiB | 64-128 MiB | Chunk size for multipart uploads |
| --s3-upload-concurrency | 4 | 8-16 | Parallel chunks within a single multipart upload |
| --buffer-size | 16 MiB | 64-128 MiB | In-memory buffer per transfer |
| --s3-disable-checksum | false | true (for speed) | Skip MD5 checksum on upload (faster but less safe) |
| --fast-list | false | true | Use fewer API calls to list files (uses more memory) |
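Any of these flags can also be set once as environment variables — rclone maps every flag to `RCLONE_<FLAG>`, uppercased with dashes replaced by underscores — which keeps long cron lines readable:

```shell
# Equivalent to passing --transfers 16 --checkers 16 --s3-chunk-size 64M
# on every invocation in this shell session
export RCLONE_TRANSFERS=16
export RCLONE_CHECKERS=16
export RCLONE_S3_CHUNK_SIZE=64M
```

Command-line flags still win if both are set, so one-off overrides remain easy.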

Optimized Transfer Profiles

# For many small files (< 1 MB each)
rclone sync /data danubedata:my-bucket/ \
    --transfers 32 \
    --checkers 32 \
    --fast-list \
    --s3-chunk-size 16M

# For large files (> 100 MB each)
rclone sync /data danubedata:my-bucket/ \
    --transfers 8 \
    --s3-chunk-size 128M \
    --s3-upload-concurrency 16 \
    --buffer-size 128M

# Maximum speed (high bandwidth, lots of RAM)
rclone sync /data danubedata:my-bucket/ \
    --transfers 32 \
    --checkers 32 \
    --s3-chunk-size 64M \
    --s3-upload-concurrency 16 \
    --buffer-size 64M \
    --fast-list \
    --s3-disable-checksum

# Low-resource server (limited RAM and bandwidth)
rclone sync /data danubedata:my-bucket/ \
    --transfers 4 \
    --checkers 4 \
    --s3-chunk-size 8M \
    --buffer-size 8M \
    --bwlimit 5M

Troubleshooting Common Errors

| Error | Cause | Fix |
|---|---|---|
| AccessDenied | Wrong credentials or insufficient permissions | Verify access_key_id and secret_access_key in config |
| NoSuchBucket | Bucket does not exist | Create the bucket first: rclone mkdir danubedata:my-bucket |
| SignatureDoesNotMatch | Wrong secret key or clock skew | Verify secret key; sync system clock with ntpdate |
| RequestTimeTooSkewed | System clock is off by more than 15 minutes | Run sudo ntpdate pool.ntp.org or enable NTP |
| connection refused | Wrong endpoint URL or network issue | Verify endpoint in config; check DNS and firewall |
| x509: certificate | SSL/TLS certificate error | Update CA certificates: sudo apt install ca-certificates |
| directory not found on mount | Mount point does not exist | Create it first: mkdir -p /mnt/s3 |
| FUSE not found on mount | FUSE not installed | sudo apt install fuse3 or brew install macfuse |
| Slow transfers | Default settings are conservative | Increase --transfers, --checkers, and --s3-chunk-size |
| Out of memory | Too many parallel transfers or large buffers | Reduce --transfers, --buffer-size, and --s3-chunk-size |

Debugging Tips

# Verbose output (see every operation)
rclone ls danubedata:my-bucket -vv

# Debug HTTP requests (see raw API calls)
rclone ls danubedata:my-bucket --dump headers

# Full debug (headers + bodies — WARNING: exposes credentials)
rclone ls danubedata:my-bucket --dump headers,bodies

# Show transfer stats every 10 seconds
rclone sync /data danubedata:my-bucket/ --stats 10s --stats-log-level NOTICE

# Write a detailed log file
rclone sync /data danubedata:my-bucket/ --log-file=/tmp/rclone-debug.log --log-level DEBUG

rclone vs aws cli vs s3cmd

How does rclone compare to other popular S3 command-line tools?

| Feature | rclone | aws cli (s3) | s3cmd |
|---|---|---|---|
| S3-compatible support | Excellent (70+ backends) | Yes (--endpoint-url) | Good (built for this) |
| Sync command | sync, bisync, copy | aws s3 sync | s3cmd sync |
| FUSE mount | Built-in (rclone mount) | No (needs s3fs-fuse) | No |
| Client-side encryption | Built-in (rclone crypt) | No (use SSE only) | GPG encryption |
| Parallel transfers | Configurable (--transfers) | Limited (max_concurrent_requests) | Basic |
| Bandwidth limiting | Advanced (time-based schedules) | No built-in | Basic (--limit-rate) |
| File filtering | Advanced (filter files, size, age) | --exclude, --include | --exclude, --include |
| Cross-provider copy | Native (server-side when possible) | No | No |
| Web UI | rclone rcd --rc-web-gui | No | No |
| Serve over HTTP/FTP | Yes (serve http/webdav/ftp/sftp) | No | No |
| Written in | Go (single binary) | Python | Python |
| Install size | ~50 MB (no dependencies) | ~100 MB (needs Python) | ~1 MB (needs Python) |
| Best for | Everything — most versatile | AWS-centric workflows | Simple S3 tasks, scripting |

Verdict: rclone is the best all-around choice for S3-compatible storage. It's a single binary with zero dependencies, supports more backends than any other tool, and includes unique features like FUSE mounting, client-side encryption, and time-based bandwidth scheduling. Use aws cli if your workflow is exclusively AWS. Use s3cmd if you want something lightweight and simple.

Quick Reference: rclone Command Cheat Sheet

| Task | Command |
|---|---|
| List buckets | rclone lsd danubedata: |
| List files | rclone ls danubedata:bucket |
| Create bucket | rclone mkdir danubedata:new-bucket |
| Upload file | rclone copy file.txt danubedata:bucket/ |
| Download file | rclone copy danubedata:bucket/file.txt ./ |
| Sync directory | rclone sync /local danubedata:bucket/ |
| Dry run sync | rclone sync /local danubedata:bucket/ --dry-run |
| Delete file | rclone deletefile danubedata:bucket/file.txt |
| Check bucket size | rclone size danubedata:bucket |
| Mount bucket | rclone mount danubedata:bucket /mnt/s3 --daemon |
| Verify integrity | rclone check /local danubedata:bucket/ --one-way |
| Show config | rclone config show |

Ready to Get Started?

DanubeData Object Storage gives you S3-compatible storage that works perfectly with rclone — European data sovereignty at €3.99/month including 1TB of storage and 1TB of egress traffic. No surprise bills, no hidden fees.

  • S3-compatible API at s3.danubedata.ro — works with rclone out of the box
  • European data centers (Falkenstein, Germany) for GDPR compliance
  • Sync, mount, encrypt, and backup with rclone in minutes
  • Create access keys and start transferring data in under 60 seconds

Create Your Free Account or Create a Storage Bucket
