Amazon S3 pioneered cloud object storage, but in 2026, there are compelling reasons to move your data to a European S3-compatible provider. Whether you're driven by GDPR compliance, cost savings, data sovereignty, or simply lower latency for European users, this guide walks you through a complete migration—from planning to post-migration validation.
The good news: because the S3 API is a de facto industry standard, migrating between providers is straightforward. Your application code needs minimal changes (usually just an endpoint URL), and tools like rclone handle the heavy lifting of data transfer.
Why Migrate Away from AWS S3?
1. Cost Savings
AWS S3's pricing model includes several hidden costs that add up fast:
- Egress fees: $0.09/GB ($90/TB) for data leaving AWS—the single biggest cost driver
- Request fees: $0.0004 per 1,000 GET requests, $0.005 per 1,000 PUT requests
- Storage classes: Choosing the wrong tier leads to retrieval fees (Glacier: $0.01/GB retrieval)
- Cross-region transfer: $0.02/GB between AWS regions
European providers typically offer flat-rate pricing that includes storage and a generous traffic allowance, making costs predictable and often 60-80% lower.
2. GDPR and Data Sovereignty
Even when using AWS's eu-central-1 region, your data is controlled by a US company subject to:
- CLOUD Act: US government can compel AWS to hand over data stored anywhere in the world
- FISA Section 702: Allows surveillance of non-US persons' data held by US companies
- Schrems II implications: EU-US Data Privacy Framework exists but remains legally fragile
Hosting with a European provider under European jurisdiction takes your data out of this legal reach.
3. Latency for European Users
If your users and applications are in Europe, a European storage provider delivers:
- 5-15ms latency vs 20-50ms for AWS eu-central-1 (depending on your location)
- Better peering within European internet exchanges (DE-CIX, AMS-IX)
- No transatlantic hops for intra-European traffic
4. Vendor Lock-In Reduction
AWS makes it expensive to leave by design. Egress fees are essentially an exit tax. Moving to a provider with free or cheap egress gives you future flexibility.
Cost Comparison: AWS S3 vs European Alternatives
Let's compare the real costs for a common workload: 500GB stored, 200GB egress/month.
| Provider | Storage Cost | Egress Cost | Total/Month | Data Location |
|---|---|---|---|---|
| AWS S3 (eu-central-1) | $11.50 | $18.00 | $29.50 | Frankfurt, DE (US company) |
| Google Cloud Storage | $10.00 | $24.00 | $34.00 | EU (US company) |
| Azure Blob Storage | $10.40 | $17.42 | $27.82 | EU (US company) |
| Backblaze B2 | $3.00 | $5.00 | $8.00 | EU (US company) |
| Cloudflare R2 | $7.50 | $0.00 | $7.50 | EU (US company) |
| Hetzner Object Storage | $2.85 | $0.54 | $3.39 | Germany (EU company) |
| DanubeData | Included | Included | $3.99 (1TB+1TB incl.) | Germany (EU company) |
Key takeaway: For this workload, DanubeData costs $3.99/month vs AWS S3's $29.50/month—an 86% savings. And your data is held by a European company under EU jurisdiction.
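The arithmetic behind that comparison is easy to verify. A quick sketch using the per-GB rates from the table above (AWS S3 Standard storage and internet egress):

```python
# Back-of-the-envelope check of the workload above: 500GB stored, 200GB egress.
AWS_STORAGE_PER_GB = 0.023   # USD per GB-month, S3 Standard
AWS_EGRESS_PER_GB = 0.09     # USD per GB transferred out

storage_gb, egress_gb = 500, 200
aws_total = storage_gb * AWS_STORAGE_PER_GB + egress_gb * AWS_EGRESS_PER_GB

danubedata_total = 3.99      # flat rate; storage and traffic included

savings = (aws_total - danubedata_total) / aws_total
print(f"AWS: ${aws_total:.2f}/mo, savings: {savings:.0%}")  # AWS: $29.50/mo, savings: 86%
```

(Request fees are omitted here; for this workload they add well under a dollar on the AWS side.)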
Pre-Migration Checklist
Before touching any data, work through this checklist to ensure a smooth migration:
Inventory Your S3 Usage
# Count total objects and size in your AWS bucket
aws s3 ls s3://my-bucket --recursive --summarize | tail -2
# Example output:
# Total Objects: 847,293
# Total Size: 483.2 GiB
# List all buckets you need to migrate
aws s3 ls
# Check bucket versioning status
aws s3api get-bucket-versioning --bucket my-bucket
# Export bucket policy for reference
aws s3api get-bucket-policy --bucket my-bucket --query Policy --output text > bucket-policy.json
# Check lifecycle rules
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket
# Check CORS configuration
aws s3api get-bucket-cors --bucket my-bucket
Identify All S3 Consumers
Search your codebase and infrastructure for every place that references your S3 bucket:
- Application code: SDK calls, presigned URL generation, direct uploads
- CI/CD pipelines: Build artifacts, deployment assets
- Backup scripts: Automated backup jobs writing to S3
- CDN origins: CloudFront or other CDNs pulling from the bucket
- Lambda functions: Event-driven processing triggered by S3
- IAM policies: Which services and users have access
- Third-party integrations: Services that read/write to your bucket
# Search your codebase for S3 references
grep -rn "s3://" --include="*.py" --include="*.js" --include="*.php" --include="*.yaml" .
grep -rn "amazonaws.com" --include="*.py" --include="*.js" --include="*.php" --include="*.env" .
grep -rnE "AWS_BUCKET|S3_BUCKET|STORAGE_BUCKET" --include="*.env*" .
Document Your Requirements
| Requirement | AWS S3 Feature | S3-Compatible Equivalent |
|---|---|---|
| Versioning | S3 Versioning | Supported (enable per-bucket) |
| Lifecycle rules | S3 Lifecycle Configuration | Supported (expiration, transitions) |
| CORS | Bucket CORS config | Supported |
| Presigned URLs | SDK presigning | Supported (update endpoint) |
| Event notifications | S3 Event Notifications | Webhook-based (varies by provider) |
| Server-side encryption | SSE-S3, SSE-KMS | SSE-S3 supported, KMS varies |
| Multipart uploads | Native | Supported (critical for large files) |
Migration Strategy Options
Choose the right strategy based on your dataset size, downtime tolerance, and complexity:
Strategy 1: Big Bang Migration (Simple, Some Downtime)
- Best for: Small datasets (<100GB), internal tools, non-critical applications
- Process: Copy all data, switch endpoint, verify
- Downtime: Minutes to hours depending on data size
Strategy 2: Dual-Write Migration (Zero Downtime)
- Best for: Production applications, SaaS platforms, user-facing services
- Process: Write to both destinations, copy historical data, switch reads, stop old writes
- Downtime: Zero
Strategy 3: Gradual Migration (Low Risk)
- Best for: Large datasets (>1TB), mixed workloads, complex applications
- Process: Migrate bucket-by-bucket or prefix-by-prefix over days/weeks
- Downtime: Zero per component
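The decision criteria above can be encoded in a few lines. This is a rough helper, not a rule book; the thresholds are the ones suggested in this section:

```python
def pick_strategy(dataset_gb: float, downtime_ok: bool) -> str:
    """Rough decision helper mirroring the three strategies above."""
    if dataset_gb < 100 and downtime_ok:
        return "big-bang"    # small dataset, downtime tolerated: copy, switch, verify
    if dataset_gb > 1000:
        return "gradual"     # >1TB: migrate bucket-by-bucket or prefix-by-prefix
    return "dual-write"      # production/user-facing: zero downtime

print(pick_strategy(50, True))     # internal tool, 50GB
print(pick_strategy(5000, False))  # 5TB data lake
print(pick_strategy(300, False))   # production SaaS
```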
Setting Up Your Destination Storage
Create Your DanubeData Storage Bucket
First, create an account and set up your bucket:
- Sign up at danubedata.ro/register
- Navigate to Storage in the dashboard
- Click Create Bucket
- Note your access key and secret key from the Access Keys section
Your S3 credentials:
# DanubeData S3 Configuration
S3_ENDPOINT=https://s3.danubedata.ro
S3_REGION=fsn1
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
Verify Connectivity
# Test with aws CLI (works with any S3-compatible endpoint)
aws s3 ls \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1
# Create a test file
echo "migration test" > test.txt
aws s3 cp test.txt s3://your-bucket/test.txt \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1
# Verify it's there
aws s3 ls s3://your-bucket/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1
Step-by-Step Migration with rclone
rclone is the best tool for S3-to-S3 migration. It supports parallel transfers, bandwidth limiting, checksum verification, and can resume interrupted transfers.
Install rclone
# Linux / macOS
curl https://rclone.org/install.sh | sudo bash
# macOS with Homebrew
brew install rclone
# Verify installation
rclone version
Configure Source (AWS S3)
# Create rclone config for AWS S3
rclone config
# Or edit ~/.config/rclone/rclone.conf directly:
cat > ~/.config/rclone/rclone.conf <<'EOF'
[aws]
type = s3
provider = AWS
access_key_id = YOUR_AWS_ACCESS_KEY
secret_access_key = YOUR_AWS_SECRET_KEY
region = eu-central-1
acl = private
[danubedata]
type = s3
provider = Other
access_key_id = YOUR_DANUBEDATA_ACCESS_KEY
secret_access_key = YOUR_DANUBEDATA_SECRET_KEY
endpoint = https://s3.danubedata.ro
region = fsn1
acl = private
force_path_style = true
EOF
Dry Run First
Always perform a dry run to see what would be transferred without actually moving data:
# Dry run - see what would be copied
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --dry-run \
  --progress \
  -v
# Check sizes match
rclone size aws:my-aws-bucket
rclone size danubedata:my-new-bucket
Execute the Migration
# Full migration with optimal settings
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 16 \
  --checkers 32 \
  --multi-thread-streams 4 \
  --s3-upload-concurrency 4 \
  --buffer-size 64M \
  --s3-chunk-size 64M \
  --progress \
  --log-file migration.log \
  --log-level INFO \
  --stats 30s \
  --stats-log-level NOTICE
# For very large files (multi-GB), increase chunk size:
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 8 \
  --s3-chunk-size 128M \
  --s3-upload-concurrency 8 \
  --progress
Understanding the rclone Parameters
| Parameter | Default | Recommended | Purpose |
|---|---|---|---|
| --transfers | 4 | 8-32 | Number of parallel file transfers |
| --checkers | 8 | 16-64 | Number of parallel existence checks |
| --s3-chunk-size | 5M | 64-128M | Multipart upload chunk size |
| --s3-upload-concurrency | 4 | 4-8 | Parallel chunks per file upload |
| --buffer-size | 16M | 64-128M | In-memory buffer per transfer |
| --bwlimit | off | 100M-1G | Bandwidth limit (useful to avoid saturating your network) |
Bandwidth Management
If you need to limit bandwidth to avoid impacting production traffic:
# Limit to 100 MB/s
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 16 \
  --bwlimit 100M \
  --progress
# Time-based bandwidth limits (migrate faster at night)
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 16 \
  --bwlimit "08:00,50M 20:00,500M" \
  --progress
Resuming Interrupted Migrations
rclone's copy command is idempotent—it only copies files that don't exist or differ at the destination. If your migration is interrupted, simply re-run the same command:
# This will skip already-copied files and resume where it left off
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 16 \
  --progress \
  --log-file migration-resume.log
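The skip decision behind that idempotency can be modelled in a few lines. This is a simplified sketch, not rclone's actual implementation: an object is re-copied only when it is missing at the destination or its size/checksum differs:

```python
from typing import Optional

def needs_copy(src: dict, dst: Optional[dict]) -> bool:
    """Simplified model of rclone's skip logic: transfer when the object
    is missing at the destination or size/checksum mismatch."""
    if dst is None:
        return True
    return src["size"] != dst["size"] or src.get("md5") != dst.get("md5")

# Toy listings: a.txt already arrived before the interruption, b.txt did not
source = {"a.txt": {"size": 10, "md5": "x"}, "b.txt": {"size": 20, "md5": "y"}}
dest   = {"a.txt": {"size": 10, "md5": "x"}}

to_copy = [k for k, meta in source.items() if needs_copy(meta, dest.get(k))]
print(to_copy)  # only b.txt is transferred on the re-run
```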
Alternative Migration Tools
Using AWS CLI with Custom Endpoint
# Configure a named profile for DanubeData
aws configure --profile danubedata
# Access Key: your-danubedata-key
# Secret Key: your-danubedata-secret
# Region: fsn1
# Output: json
# Sync from AWS to DanubeData
# Note: the AWS CLI applies --endpoint-url to BOTH source and destination,
# so a direct bucket-to-bucket sync across two providers does not work in
# one command. Stage through a local directory instead:
aws s3 sync s3://my-aws-bucket ./staging --region eu-central-1
aws s3 sync ./staging s3://my-new-bucket \
  --profile danubedata \
  --endpoint-url https://s3.danubedata.ro
# With --delete (mirror exactly - removes destination objects absent locally)
aws s3 sync ./staging s3://my-new-bucket \
  --profile danubedata \
  --endpoint-url https://s3.danubedata.ro \
  --delete
Using s3cmd
# Configure s3cmd for DanubeData
cat > ~/.s3cfg-danubedata <<'EOF'
[default]
access_key = YOUR_DANUBEDATA_ACCESS_KEY
secret_key = YOUR_DANUBEDATA_SECRET_KEY
host_base = s3.danubedata.ro
host_bucket = %(bucket)s.s3.danubedata.ro
use_https = True
signature_v2 = False
EOF
# Sync using s3cmd
# Note: a single s3cmd config points at one endpoint, so cross-provider
# copies need a local staging step (or use rclone, which handles two remotes)
s3cmd sync s3://my-aws-bucket/ s3://my-new-bucket/ \
  -c ~/.s3cfg-danubedata \
  --no-preserve \
  --progress
Verifying Data Integrity
After migration, verify that all data arrived correctly:
Object Count Verification
# Count objects in source
aws s3 ls s3://my-aws-bucket --recursive | wc -l
# Count objects in destination
aws s3 ls s3://my-new-bucket --recursive \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 | wc -l
# Compare counts (should match)
echo "Source: $(aws s3 ls s3://my-aws-bucket --recursive | wc -l)"
echo "Dest: $(aws s3 ls s3://my-new-bucket --recursive --endpoint-url https://s3.danubedata.ro --region fsn1 | wc -l)"
Checksum Verification with rclone
# Full checksum comparison (compares MD5/ETag of every object)
rclone check aws:my-aws-bucket danubedata:my-new-bucket \
  --one-way \
  --log-file check-results.log \
  --log-level INFO
# If differences are found, re-copy only mismatched files
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 16 \
  --progress
# Size comparison
rclone size aws:my-aws-bucket
rclone size danubedata:my-new-bucket
Spot-Check Critical Files
# Download a sample file from both and compare
aws s3 cp s3://my-aws-bucket/important/data.json /tmp/source-data.json
aws s3 cp s3://my-new-bucket/important/data.json /tmp/dest-data.json \
  --endpoint-url https://s3.danubedata.ro --region fsn1
# Compare checksums
md5sum /tmp/source-data.json /tmp/dest-data.json
# Binary comparison
diff /tmp/source-data.json /tmp/dest-data.json
Updating Application Code
The beauty of S3 compatibility is that code changes are minimal. Here are examples for popular languages and frameworks:
Python (boto3)
import boto3
# Before (AWS)
s3 = boto3.client('s3',
region_name='eu-central-1',
aws_access_key_id='AWS_KEY',
aws_secret_access_key='AWS_SECRET'
)
# After (DanubeData)
s3 = boto3.client('s3',
region_name='fsn1',
endpoint_url='https://s3.danubedata.ro',
aws_access_key_id='DANUBEDATA_KEY',
aws_secret_access_key='DANUBEDATA_SECRET'
)
# All operations remain identical
s3.upload_file('local-file.txt', 'my-bucket', 'remote-file.txt')
s3.download_file('my-bucket', 'remote-file.txt', 'downloaded.txt')
response = s3.list_objects_v2(Bucket='my-bucket', Prefix='uploads/')
Node.js (AWS SDK v3)
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
// Before (AWS)
const s3 = new S3Client({
region: 'eu-central-1',
credentials: {
accessKeyId: 'AWS_KEY',
secretAccessKey: 'AWS_SECRET',
},
});
// After (DanubeData)
const s3 = new S3Client({
region: 'fsn1',
endpoint: 'https://s3.danubedata.ro',
credentials: {
accessKeyId: 'DANUBEDATA_KEY',
secretAccessKey: 'DANUBEDATA_SECRET',
},
forcePathStyle: true,
});
// All operations remain identical
await s3.send(new PutObjectCommand({
Bucket: 'my-bucket',
Key: 'uploads/photo.jpg',
Body: fileBuffer,
ContentType: 'image/jpeg',
}));
PHP (Laravel / AWS SDK)
// config/filesystems.php
'disks' => [
// Before (AWS)
's3' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION', 'eu-central-1'),
'bucket' => env('AWS_BUCKET'),
],
// After (DanubeData)
's3' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION', 'fsn1'),
'bucket' => env('AWS_BUCKET'),
'endpoint' => env('AWS_ENDPOINT', 'https://s3.danubedata.ro'),
'use_path_style_endpoint' => true,
],
],
// .env changes
AWS_ACCESS_KEY_ID=your-danubedata-key
AWS_SECRET_ACCESS_KEY=your-danubedata-secret
AWS_DEFAULT_REGION=fsn1
AWS_BUCKET=your-bucket-name
AWS_ENDPOINT=https://s3.danubedata.ro
// Application code remains unchanged
Storage::disk('s3')->put('uploads/file.pdf', $contents);
$url = Storage::disk('s3')->temporaryUrl('uploads/file.pdf', now()->addMinutes(30));
Go
package main
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
// DanubeData S3 configuration
cfg, _ := config.LoadDefaultConfig(context.TODO(),
config.WithRegion("fsn1"),
config.WithCredentialsProvider(
credentials.NewStaticCredentialsProvider(
"DANUBEDATA_KEY",
"DANUBEDATA_SECRET",
"",
),
),
)
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("https://s3.danubedata.ro")
o.UsePathStyle = true
})
// All operations use standard S3 API
_, err := client.PutObject(context.TODO(), &s3.PutObjectInput{
Bucket: aws.String("my-bucket"),
Key: aws.String("data/report.csv"),
Body: file,
})
}
DNS and CDN Considerations
If You're Using CloudFront with S3
When migrating away from AWS S3, you'll need to update your CDN configuration:
# Option 1: Switch CloudFront origin to new endpoint
# (Works but you're still paying AWS for CloudFront)
# Option 2: Switch to Cloudflare CDN (recommended)
# 1. Add your domain to Cloudflare
# 2. Create a CNAME record pointing to your S3 endpoint
# 3. Enable Cloudflare's caching
# DNS records for custom domain with Cloudflare
# Type: CNAME
# Name: assets (or cdn, static, etc.)
# Target: s3.danubedata.ro
# Proxy: Enabled (orange cloud)
Custom Domain for S3 Endpoint
# If you were using a custom domain like assets.example.com
# pointing to your-bucket.s3.amazonaws.com, update it:
# Old DNS record
assets.example.com CNAME your-bucket.s3.amazonaws.com
# New DNS record (through Cloudflare for HTTPS + caching)
assets.example.com CNAME s3.danubedata.ro
Zero-Downtime Migration Strategy
For production applications, here's a detailed zero-downtime migration plan:
Phase 1: Dual-Write Setup
Modify your application to write to both the old and new storage simultaneously:
# Python example: dual-write wrapper
import logging

logger = logging.getLogger(__name__)

class DualS3Writer:
    def __init__(self, primary_client, secondary_client, primary_bucket, secondary_bucket):
        self.primary = primary_client
        self.secondary = secondary_client
        self.primary_bucket = primary_bucket
        self.secondary_bucket = secondary_bucket

    def upload_file(self, local_path, key, **kwargs):
        # Write to primary (AWS - still serving reads)
        self.primary.upload_file(local_path, self.primary_bucket, key, **kwargs)
        # Write to secondary (DanubeData - future primary)
        try:
            self.secondary.upload_file(local_path, self.secondary_bucket, key, **kwargs)
        except Exception as e:
            # Log but don't fail - secondary is not yet serving reads
            logger.warning(f"Secondary write failed for {key}: {e}")

    def delete_object(self, key):
        self.primary.delete_object(Bucket=self.primary_bucket, Key=key)
        try:
            self.secondary.delete_object(Bucket=self.secondary_bucket, Key=key)
        except Exception:
            pass
Phase 2: Copy Historical Data
# While dual-write is active, copy all historical data
rclone copy aws:my-aws-bucket danubedata:my-new-bucket \
  --transfers 16 \
  --progress \
  --log-file historical-copy.log
# Verify completeness
rclone check aws:my-aws-bucket danubedata:my-new-bucket \
  --one-way
Phase 3: Switch Reads
# Update your application to read from the new storage
# Keep dual-writes active as a safety net
# Environment variable approach
S3_READ_ENDPOINT=https://s3.danubedata.ro
S3_READ_REGION=fsn1
S3_WRITE_ENDPOINTS=https://s3.amazonaws.com,https://s3.danubedata.ro
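A small helper can turn those environment variables into routing decisions. The variable names (`S3_READ_ENDPOINT`, `S3_WRITE_ENDPOINTS`) are the ones from the example above, not a standard; this is a sketch of the idea:

```python
import os

# Defaults mirror the dual-write window: read from the new storage,
# write to both until the migration is confirmed.
os.environ.setdefault("S3_READ_ENDPOINT", "https://s3.danubedata.ro")
os.environ.setdefault(
    "S3_WRITE_ENDPOINTS",
    "https://s3.amazonaws.com,https://s3.danubedata.ro",
)

def read_endpoint() -> str:
    """The single endpoint all reads go to."""
    return os.environ["S3_READ_ENDPOINT"]

def write_endpoints() -> list:
    """Every endpoint that still receives writes (comma-separated list)."""
    return [e.strip() for e in os.environ["S3_WRITE_ENDPOINTS"].split(",") if e.strip()]

print(read_endpoint())
print(write_endpoints())
```

Moving to Phase 4 then becomes a config change: set `S3_WRITE_ENDPOINTS` to the single new endpoint, no code deploy needed.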
Phase 4: Stop Writing to AWS
# After confirming reads work from DanubeData:
# 1. Remove dual-write logic
# 2. Point all writes to DanubeData
# 3. Keep AWS bucket read-only for 1-2 weeks (safety net)
# 4. Delete AWS bucket when confident
Phase 5: Cleanup
# Final verification
rclone check aws:my-aws-bucket danubedata:my-new-bucket --one-way
# Remove the old AWS bucket (after sufficient observation period)
aws s3 rb s3://my-aws-bucket --force
# Revoke old IAM credentials
aws iam delete-access-key --user-name s3-user --access-key-id OLD_KEY
Handling Presigned URLs During Transition
Presigned URLs are a common challenge during migration because they're generated with a specific endpoint and signature:
The Problem
Presigned URLs generated for AWS S3 won't work with a different endpoint. Any URLs already shared with users or embedded in emails will break.
Solutions
# Strategy 1: Use short expiry times during migration
# Generate presigned URLs with max 1-hour expiry during transition
# Python - generate presigned URL for new endpoint
url = s3_danubedata.generate_presigned_url(
'get_object',
Params={'Bucket': 'my-bucket', 'Key': 'reports/q4.pdf'},
ExpiresIn=3600 # 1 hour
)
# Result: https://s3.danubedata.ro/my-bucket/reports/q4.pdf?X-Amz-Algorithm=...
# Strategy 2: Proxy approach during transition
# Set up a reverse proxy that routes based on URL signature
# Old signatures -> AWS, new signatures -> DanubeData
# Strategy 3: Application-level redirect
# When a presigned URL 404s on new storage, generate a new one
# and redirect the user
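The proxy approach (Strategy 2) hinges on one observation: URLs signed before the switch still carry the AWS hostname. A toy routing rule, keyed on the URL's host rather than on the signature itself:

```python
from urllib.parse import urlparse

def route_presigned(url: str) -> str:
    """Toy router for the transition proxy: presigned URLs issued before
    the switch point at amazonaws.com, newer ones at the new endpoint."""
    host = urlparse(url).hostname or ""
    return "aws" if host.endswith("amazonaws.com") else "danubedata"

old = "https://my-bucket.s3.eu-central-1.amazonaws.com/reports/q4.pdf?X-Amz-Signature=abc"
new = "https://s3.danubedata.ro/my-bucket/reports/q4.pdf?X-Amz-Signature=def"
print(route_presigned(old), route_presigned(new))  # aws danubedata
```

In a real proxy you would forward the request unmodified to the chosen backend, since the signature only validates against the endpoint that issued it.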
PHP / Laravel Presigned URL Update
// Presigned URLs automatically use the configured endpoint
// No code change needed beyond the .env update
// Generate a presigned URL (works with DanubeData)
$url = Storage::disk('s3')->temporaryUrl(
'documents/contract.pdf',
now()->addMinutes(60)
);
// Before: https://my-bucket.s3.eu-central-1.amazonaws.com/documents/contract.pdf?...
// After: https://s3.danubedata.ro/my-bucket/documents/contract.pdf?...
Common Pitfalls and How to Avoid Them
1. Region Name Differences
AWS uses region names like eu-central-1. Other providers use their own naming:
| Provider | Region Value | Location |
|---|---|---|
| AWS S3 | eu-central-1 | Frankfurt, Germany |
| DanubeData | fsn1 | Falkenstein, Germany |
| Hetzner | fsn1 / nbg1 | Falkenstein / Nuremberg |
| Cloudflare R2 | auto | Auto-selected |
| Backblaze B2 | eu-central-003 | Amsterdam, Netherlands |
2. Bucket Naming Conventions
# AWS allows bucket names like:
my-bucket
my.bucket.name
mybucket123
# Some S3-compatible providers have additional rules:
# - DanubeData prefixes with dd-{team_id}- automatically
# - Some providers don't support dots in bucket names
# - Maximum length varies (AWS: 63 chars, others may differ)
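A quick validator catches most naming problems before you create buckets on the new provider. This is a simplified version of the AWS rules (3-63 characters, lowercase letters, digits, hyphens, optional dots, alphanumeric at both ends); the `allow_dots=False` mode models providers that reject dots:

```python
import re

def valid_bucket_name(name: str, allow_dots: bool = True) -> bool:
    """Simplified S3 bucket-name check. Real AWS rules add more cases
    (no IP-address format, no consecutive dots); this covers the basics."""
    if not 3 <= len(name) <= 63:
        return False
    middle = r"a-z0-9.\-" if allow_dots else r"a-z0-9\-"
    return re.fullmatch(rf"[a-z0-9][{middle}]*[a-z0-9]", name) is not None

print(valid_bucket_name("my-bucket"))                         # True
print(valid_bucket_name("my.bucket.name", allow_dots=False))  # False
print(valid_bucket_name("My_Bucket"))                         # False
```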
3. Path-Style vs Virtual-Hosted Style
# AWS uses virtual-hosted style by default:
# https://my-bucket.s3.eu-central-1.amazonaws.com/key
# Most S3-compatible providers use path-style:
# https://s3.danubedata.ro/my-bucket/key
# Always set forcePathStyle/use_path_style_endpoint = true
# when using S3-compatible providers
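The difference between the two styles is purely in how the bucket is placed in the URL, which a few lines make concrete:

```python
def object_url(endpoint: str, bucket: str, key: str, path_style: bool = True) -> str:
    """Build both S3 addressing styles. Path-style keeps the bucket in the
    URL path; virtual-hosted style moves it into the hostname."""
    scheme, host = endpoint.split("://", 1)
    if path_style:
        return f"{scheme}://{host}/{bucket}/{key}"
    return f"{scheme}://{bucket}.{host}/{key}"

print(object_url("https://s3.danubedata.ro", "my-bucket", "photo.jpg"))
print(object_url("https://s3.eu-central-1.amazonaws.com", "my-bucket", "photo.jpg",
                 path_style=False))
```

Virtual-hosted style requires DNS and TLS certificates for every bucket subdomain, which is why most S3-compatible providers standardize on path-style.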
4. IAM vs Simple Access Keys
AWS uses IAM with fine-grained policies. Most S3-compatible providers use simpler access key pairs:
- No IAM policies: Access keys have full access to all buckets in your account
- No STS: No temporary credentials or role assumption
- No bucket policies: Access control is typically per-key or per-bucket in the provider dashboard
5. Multipart Upload Differences
# AWS default part size: 8MB
# Some providers have different minimum part sizes
# Always set explicit part sizes in your rclone/SDK config
# rclone: set chunk size explicitly
rclone copy aws:bucket danubedata:bucket --s3-chunk-size 64M
# boto3: configure transfer settings
from boto3.s3.transfer import TransferConfig
config = TransferConfig(
multipart_threshold=64 * 1024 * 1024, # 64MB
multipart_chunksize=64 * 1024 * 1024, # 64MB
max_concurrency=4
)
s3.upload_file('large-file.zip', 'bucket', 'key', Config=config)
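Chunk size also interacts with the S3 multipart limit of 10,000 parts per upload: for very large objects, the chunk size must grow or the upload fails partway. A small calculator for the minimum viable chunk size (assuming the standard 5 MB minimum part size):

```python
import math

MAX_PARTS = 10_000            # S3 multipart upload part limit
MIN_PART = 5 * 1024 * 1024    # 5 MB minimum part size (all but the last part)

def min_chunk_size(object_size: int) -> int:
    """Smallest chunk size (bytes) that keeps a multipart upload of
    object_size under the 10,000-part limit."""
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

# A 1 TiB object needs parts of at least ~100 MB
one_tib = 1024**4
print(min_chunk_size(one_tib) // (1024 * 1024), "MB")
```

This is why the rclone examples earlier bump `--s3-chunk-size` to 128M for multi-GB files: the 5M default caps a single upload at roughly 48 GB.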
6. ListObjects Pagination
# AWS returns up to 1,000 objects per ListObjects call
# S3-compatible providers may have different defaults
# Always use pagination to be safe
import boto3
s3 = boto3.client('s3',
endpoint_url='https://s3.danubedata.ro',
region_name='fsn1'
)
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket', Prefix='uploads/'):
    for obj in page.get('Contents', []):
        print(f"{obj['Key']} - {obj['Size']} bytes")
Real Cost Savings Example
Let's calculate savings for a real-world SaaS application:
Scenario
- Application: SaaS platform with user file uploads
- Storage: 800GB of user files (growing 50GB/month)
- Egress: 500GB/month (API downloads, CDN origin pulls)
- Requests: 2M GET + 500K PUT per month
AWS S3 Monthly Cost
| Component | Usage | Rate | Cost |
|---|---|---|---|
| Storage | 800 GB | $0.023/GB | $18.40 |
| Egress | 500 GB | $0.09/GB | $45.00 |
| GET Requests | 2,000,000 | $0.0004/1K | $0.80 |
| PUT Requests | 500,000 | $0.005/1K | $2.50 |
| Total | | | $66.70/month |
DanubeData Monthly Cost
| Component | Usage | Rate | Cost |
|---|---|---|---|
| Base Plan (1TB storage + 1TB traffic) | 800 GB storage, 500 GB egress | Flat rate | $3.99 |
| Requests | 2.5M total | Included | $0.00 |
| Total | | | $3.99/month |
Annual Savings Summary
| Period | AWS S3 | DanubeData | Savings |
|---|---|---|---|
| Monthly | $66.70 | $3.99 | $62.71 (94%) |
| Annually | $800.40 | $47.88 | $752.52 (94%) |
| 3 Years | $2,401.20 | $143.64 | $2,257.56 (94%) |
Bottom line: This SaaS application saves over $750 per year by migrating from AWS S3 to DanubeData—and gains GDPR compliance with European data hosting.
Post-Migration Validation Checklist
After completing the migration, verify everything works:
- Object count matches: Source and destination have the same number of objects
- Total size matches: Aggregate storage size is identical
- Checksum verification: rclone check passes with zero differences
- Application uploads: New files upload successfully to the new storage
- Application downloads: Existing files download correctly
- Presigned URLs: Generated URLs work and expire correctly
- CDN cache: CDN is pulling from the new origin successfully
- Backup jobs: Automated backups write to the new storage
- CI/CD pipelines: Build artifacts deploy to the correct bucket
- Monitoring: Storage metrics and alerts are configured for the new provider
- Access keys: Old AWS credentials are revoked
- DNS: Custom domain records updated and propagated
Monitoring Script
#!/bin/bash
# post-migration-check.sh
# Run this for a few days after migration to catch issues
ENDPOINT="https://s3.danubedata.ro"
BUCKET="my-bucket"
REGION="fsn1"
echo "=== Post-Migration Health Check ==="
echo "Date: $(date)"
# 1. Test upload
echo "test-$(date +%s)" > /tmp/health-check.txt
aws s3 cp /tmp/health-check.txt s3://$BUCKET/_health/check.txt \
  --endpoint-url $ENDPOINT --region $REGION
echo "[OK] Upload test passed"
# 2. Test download
aws s3 cp s3://$BUCKET/_health/check.txt /tmp/health-download.txt \
  --endpoint-url $ENDPOINT --region $REGION
diff /tmp/health-check.txt /tmp/health-download.txt && echo "[OK] Download test passed"
# 3. Test listing
COUNT=$(aws s3 ls s3://$BUCKET/ --recursive \
  --endpoint-url $ENDPOINT --region $REGION | wc -l)
echo "[OK] Bucket contains $COUNT objects"
# 4. Test presigned URL
URL=$(aws s3 presign s3://$BUCKET/_health/check.txt \
  --endpoint-url $ENDPOINT --region $REGION \
  --expires-in 60)
curl -s -o /dev/null -w "%{http_code}" "$URL" | grep -q "200" && \
  echo "[OK] Presigned URL test passed"
# 5. Cleanup
aws s3 rm s3://$BUCKET/_health/check.txt \
  --endpoint-url $ENDPOINT --region $REGION
rm /tmp/health-check.txt /tmp/health-download.txt
echo "=== All checks passed ==="
Migration Timeline Template
| Day | Task | Risk |
|---|---|---|
| Day 1 | Inventory S3 usage, create destination bucket, test connectivity | None |
| Day 2 | Deploy dual-write code to staging, test upload/download | Low |
| Day 3 | Deploy dual-write to production | Low |
| Day 3-5 | rclone copy of historical data (in background) | None |
| Day 5 | Verify data integrity (rclone check) | None |
| Day 6 | Switch reads to new storage, keep dual-writes | Medium |
| Day 7-9 | Monitor, fix any issues, validate presigned URLs | Low |
| Day 10 | Stop dual-writes, point all traffic to new storage | Low |
| Day 10-24 | Keep old bucket read-only as safety net | None |
| Day 25 | Delete old AWS bucket, revoke credentials | None |
Conclusion
Migrating from AWS S3 to a European S3-compatible provider is one of the highest-ROI infrastructure changes you can make in 2026. The S3 API's universality means your application code barely changes, while you gain:
- 60-94% cost savings on storage and egress
- True GDPR compliance with European data sovereignty
- Lower latency for European users
- Predictable pricing without surprise egress bills
- Vendor flexibility with providers that don't charge exit fees
The migration process is well-understood, tools like rclone make data transfer painless, and a dual-write strategy ensures zero downtime for your users.
Ready to start saving? Create your DanubeData account and get started with S3-compatible storage at EUR 3.99/month—including 1TB storage and 1TB traffic. Your first bucket is just a few clicks away at danubedata.ro/storage/create.