Wasabi built a loyal following with a simple pitch: hot cloud storage at US$6.99 per TB/month, no egress fees, no API call fees, S3-compatible. For backup-heavy workloads that occasionally need to be read, it looked like a no-brainer versus AWS S3 or Azure Blob.
But in 2026, a lot of European teams are re-evaluating. The reasons are almost never headline price — they're the fine print: a 90-day minimum storage duration penalty that catches people off guard during migrations and cleanup, an egress cap equal to 1× your monthly storage (exceed it and you start paying), a US parent company exposed to the CLOUD Act regardless of which region you pick, and sporadic but real reports of uploader throughput variability and regional availability incidents.
Good news: the European market has quietly turned into the cheapest place on earth to store bytes. This guide compares seven concrete Wasabi alternatives with European data residency — DanubeData, Hetzner Object Storage, OVHcloud, Scaleway, IDrive e2, Storj, and self-hosted MinIO — with real pricing at 1TB, 10TB and 100TB, including the egress you're actually going to use.
TL;DR: for EU-only backups bundled with compute, DanubeData wins on simplicity and bundling. For a pure price-per-TB battle with German residency, Hetzner is the cheapest credible name on the list. For paranoid Schrems II workloads, Scaleway and OVH are the big EU-sovereign names. If you want the theoretically cheapest option and you trust yourself to operate it, self-host MinIO on a DanubeData VPS.
Table of contents
- What Wasabi actually offers in 2026
- Why European teams are leaving Wasabi
- The seven alternatives at a glance
- Side-by-side comparison table
- Real cost at 1TB, 10TB and 100TB
- Which provider wins for which use case
- Migrating away from Wasabi with rclone
- Use cases: backups, media, log archival
- Frequently asked questions
What Wasabi actually offers in 2026
Before we leave, let's be fair about what Wasabi gets right. The headline spec sheet is genuinely compelling:
- Storage: US$6.99 per TB/month (≈ €6.40 depending on EUR/USD), single tier — no "infrequent access" or "archive" tricks
- Egress: no per-GB fee up to 1× your monthly storage (more on this below)
- API calls: no per-request fees, unlike S3 and Azure
- Protocol: fully S3-compatible API — Veeam, Restic, Duplicacy, rclone, s3cmd, CyberDuck, Arq all work out of the box
- EU regions: London (eu-west-1), Paris (eu-west-2), Amsterdam (eu-central-1), Frankfurt (eu-central-2)
- Object lock: S3 Object Lock for immutable backups (ransomware protection)
- Hot tier only: no restore times, no rehydration — everything is always online
If your workload is "write once, read rarely, want it online for disaster recovery" and fits neatly inside the 1× egress cap, Wasabi is fine. The issue is that a lot of workloads don't actually fit neatly, and a lot of European buyers are newly allergic to US data processors.
Why European teams are leaving Wasabi
1. The 90-day minimum storage duration
This is the single most common complaint and it surprises almost every new user. Wasabi bills each object for a minimum of 90 days even if you delete it sooner. Upload 10TB, realize on day 5 you uploaded the wrong dataset, delete it — you're still billed for the remaining 85 days of those 10TB.
This bites hardest in exactly the moments you'd want flexibility: cutting over from a previous provider, doing a restore test, rotating short-lived log archives, or migrating away from Wasabi. Migration in particular is brutal: if your backup rotation churns through objects faster than 90 days, you're paying for ghost data.
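The arithmetic is easy to underestimate, so here is a small illustrative sketch of how a 90-day minimum plays out. The US$6.99/TB price and the flat 30-day month are simplifying assumptions; check Wasabi's current terms for the exact proration rules.

```python
# Illustrative sketch of minimum-duration billing (assumed US$6.99/TB/month
# and a 90-day minimum; real invoices prorate differently in detail).
def minimum_duration_charge(tb_stored: float, days_kept: int,
                            price_per_tb_month: float = 6.99,
                            minimum_days: int = 90) -> float:
    """Total USD charged for an object, even if it was deleted early."""
    billed_days = max(days_kept, minimum_days)
    return tb_stored * price_per_tb_month * (billed_days / 30)

# Upload 10TB, delete on day 5: you are still billed as if it sat for 90 days.
print(round(minimum_duration_charge(10, 5), 2))
# Keep it 120 days: billed normally for the full 120 days.
print(round(minimum_duration_charge(10, 120), 2))
```

Deleting on day 5 or day 90 costs exactly the same, which is why restore tests and aborted migrations feel so punitive.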
2. The egress cap nobody reads about
Wasabi markets "no egress fees," and it's technically true — up to a point. Your free egress allowance equals 1× your average monthly storage. Store 10TB, download up to 10TB/month free. Exceed that and egress becomes a paid overage (or your account gets rate-limited depending on terms). For most backup workloads this is fine. For anything read-heavy (media delivery, dataset distribution, content origins) it's a trap.
3. CLOUD Act exposure
Wasabi is a US-incorporated company. Under the US CLOUD Act (2018), US law enforcement can compel US providers to hand over data regardless of where the data physically sits — including EU regions. Post-Schrems II, this is a real problem for any EU controller processing personal data. A Frankfurt or Amsterdam region does not, by itself, insulate you from US subpoenas. For truly sovereign storage, the processor needs to be an EU legal entity with EU-controlled infrastructure.
4. Reliability and throughput variability
Wasabi has had several notable regional outages and user-reported throughput issues, especially during peak hours in European regions. It's not catastrophic — no provider is immune — but it's worth honestly comparing against alternatives whose infrastructure is on Tier-3+ European data centers with transparent SLAs and status pages.
5. Hidden retrieval and request limits under "unlimited"
The "no API fees" claim has quiet caveats: if your application hammers the API at very high request rates, Wasabi reserves the right to throttle or charge. For the overwhelming majority of backup workloads this is a non-issue. For media CDN-style delivery or high-QPS app workloads, it can be.
6. EUR billing and price predictability
Wasabi prices in USD. If you're a European company, your actual monthly cost fluctuates with EUR/USD. Providers that price natively in EUR give you a flat, predictable line item for your P&L.
The seven alternatives at a glance
Here are the seven options we'll compare. Each has a different sweet spot — there is no single "best."
1. DanubeData Object Storage (Falkenstein, Germany)
Pitch: €3.99/month base bundle that includes 1TB of storage and 1TB of egress traffic, transparent per-GB overage beyond that, S3-compatible endpoint at s3.danubedata.ro, versioning / CORS / lifecycle rules supported. No minimum storage duration penalty, no egress tricks inside the bundled quota, and no egress charges for traffic that stays on the DanubeData platform (e.g. a VPS or Kubernetes workload reading from the same bucket). German data center, EU legal entity, signed DPA, €50 signup credit.
Best for: teams that also run compute (VPS, databases, serverless) on DanubeData and want storage bundled in; small-to-medium backup targets; anyone who wants one invoice in euros for compute + storage.
2. Hetzner Object Storage (Nuremberg / Falkenstein, Germany)
Pitch: €4.99/TB/month storage, €1 per additional TB of egress, S3-compatible, three EU locations, German data center, German legal entity. Cheapest credible per-TB price on this list.
Best for: pure price optimization, large cold-ish storage pools, customers already on Hetzner Cloud compute.
3. OVHcloud Object Storage (France / multi-EU)
Pitch: French sovereign provider with multiple tiers (Standard, High Performance, Cold Archive), Swift and S3 APIs, strong SecNumCloud-adjacent story in France. Pricing typically lands around €7–12/TB/month for Standard depending on region and tier.
Best for: French public sector / regulated customers; teams already on OVH Public Cloud; anyone who needs an archive tier at €2–3/TB for truly cold data.
4. Scaleway Object Storage (Paris / Amsterdam / Warsaw)
Pitch: French sovereign provider, friendly developer UX, three tiers (Standard, One Zone IA, Glacier). Standard around €11–14/TB/month, Glacier around €2/TB/month. Includes 75GB/month free tier. Very developer-centric tooling.
Best for: developer workflows, mixed hot+cold with tiering, teams already on Scaleway Elements / Kubernetes.
5. IDrive e2 (US-HQ, EU regions available)
Pitch: Direct Wasabi competitor at US$4/TB/month with no egress fees. EU regions in Frankfurt and Dublin. S3-compatible. Essentially "cheaper Wasabi."
Best for: pure price-per-TB chasers who don't care about the US legal-entity issue. Same CLOUD Act exposure as Wasabi.
6. Storj (decentralized, EU satellite available)
Pitch: Decentralized network where your data is erasure-coded across independent storage nodes globally. US$4/TB/month storage, US$7/TB egress. EU satellite operated by Storj Labs for compliance-sensitive workloads. S3-compatible gateway.
Best for: teams interested in the decentralization story, geographically distributed redundancy, or paying egress for genuinely fast downloads from many edge nodes at once.
7. Self-hosted MinIO on a DanubeData VPS
Pitch: Run MinIO yourself on a VPS with attached NVMe storage. Pay for the VPS (from €4.49/month) plus any additional volumes; storage cost becomes roughly "whatever the VPS disk costs you." For a dedicated 1TB pool, this can land around €10–20/month all-in with excellent throughput. Full S3 API, full control, zero egress fees within the platform.
Best for: operators comfortable running a service, teams needing very high throughput, or backup destinations where you want zero per-GB accounting.
Side-by-side comparison table
| Provider | Storage (EU) | Egress policy | Min. duration | EU residency | Free tier / credit | Schrems II safe |
|---|---|---|---|---|---|---|
| Wasabi | US$6.99/TB/mo | Free up to 1× storage; overage thereafter | 90 days | London, Frankfurt, Amsterdam, Paris | 30-day / 1TB trial | No (US entity, CLOUD Act) |
| DanubeData | €3.99/mo incl. 1TB storage + 1TB egress | 1TB included; transparent per-GB overage; free within platform | None | Falkenstein, DE | €50 signup credit | Yes (EU entity, DE data center) |
| Hetzner Object Storage | €4.99/TB/mo (1TB included) | 1TB/mo included; €1/TB beyond | None | Nuremberg, Falkenstein, Helsinki | — | Yes (EU entity, DE/FI data center) |
| OVHcloud | €7–12/TB/mo (Standard) | Egress billed per GB (region-dependent) | None (Standard) | Multiple FR/EU regions | Cold archive tier ~€2/TB | Yes (FR entity) |
| Scaleway | €11–14/TB/mo (Standard), ~€2/TB (Glacier) | Free egress up to monthly quota, then per-GB | None (Standard); 90d (Glacier) | Paris, Amsterdam, Warsaw | 75GB/mo free | Yes (FR entity) |
| IDrive e2 | US$4/TB/mo | No egress fees (with usage caps in terms) | None (as of 2026 terms) | Frankfurt, Dublin | 10GB free, occasional promos | No (US entity) |
| Storj | US$4/TB/mo | US$7/TB egress (not "free") | None | EU satellite available | 25GB free | Partial (depends on satellite + node geography) |
| Self-hosted MinIO on DanubeData | ~€10–20/mo effective for 1TB | Free within platform; 20TB VPS egress included | None | Falkenstein, DE | €50 signup credit | Yes |
Note: EUR figures for USD-priced providers are approximate and depend on the exchange rate. Exact pricing may shift year-to-year; always check the provider's current page before committing to a migration.
Real cost at 1TB, 10TB and 100TB
Headline per-TB numbers are misleading because egress, minimum-duration penalties, and included quotas radically change the effective bill. Here are realistic scenarios.
Scenario A: 1TB stored, 200GB egress/month (typical small backup)
This is what a small business running nightly Veeam/Restic/Duplicacy backups to the cloud typically looks like.
| Provider | Storage | Egress | Total / month |
|---|---|---|---|
| Wasabi | ~€6.40 | €0 (within 1× cap) | ~€6.40 |
| DanubeData | €3.99 (bundle) | €0 (within 1TB bundle) | €3.99 |
| Hetzner | €4.99 (incl. 1TB egress) | €0 | €4.99 |
| OVHcloud Standard | ~€9 | ~€1–2 | ~€10–11 |
| Scaleway Standard | ~€13 | €0 (within quota) | ~€13 |
| IDrive e2 | ~€3.70 | €0 | ~€3.70 |
| Storj | ~€3.70 | ~€1.30 | ~€5.00 |
Verdict: for 1TB with light egress, DanubeData and IDrive e2 are the cheapest, but DanubeData is the only EU-sovereign option among them. If you want Schrems II compliance, DanubeData at €3.99 is the best deal on this table.
Scenario B: 10TB stored, 2TB egress/month (mid-size backup + occasional restores)
| Provider | Storage | Egress | Total / month |
|---|---|---|---|
| Wasabi | ~€64 | €0 (within 10TB cap) | ~€64 |
| DanubeData | €3.99 base + 9TB × overage | 1TB free + 1TB overage | ~€45–55 (check current per-GB rate) |
| Hetzner | €49.90 | €1 (1TB overage beyond 10TB included) | ~€50.90 |
| OVHcloud Standard | ~€90 | ~€15–20 | ~€105–110 |
| Scaleway Standard | ~€130 | ~€0–20 | ~€130–150 |
| IDrive e2 | ~€37 | €0 | ~€37 (but US entity) |
| Storj | ~€37 | ~€13 | ~€50 |
Verdict: Hetzner becomes the clear per-TB winner among EU-sovereign options at this scale. DanubeData is competitive and keeps the "one invoice" simplicity if you also run VPS/database compute on the platform — very often the bundled egress inside the platform is worth more than the raw €/TB difference.
Scenario C: 100TB stored, 10TB egress/month (large backup pool / media archive)
| Provider | Storage | Egress | Total / month |
|---|---|---|---|
| Wasabi | ~€640 | €0 (within 100TB cap) | ~€640 |
| Hetzner | €499 (100 × €4.99) | ~€9 (1TB included, €1/TB beyond) | ~€508 |
| DanubeData | €3.99 base + 99TB overage | 1TB included + 9TB overage | Enterprise quote recommended at this scale |
| OVHcloud Standard | ~€900 | ~€80–100 | ~€980–1000 |
| OVHcloud Cold Archive | ~€200 | Retrieval fees apply | ~€200+ (cold only) |
| Scaleway Standard | ~€1300 | ~€100–150 | ~€1400–1450 |
| Scaleway Glacier | ~€200 | Retrieval fees apply | ~€200+ (cold only) |
| IDrive e2 | ~€370 | €0 | ~€370 (US entity) |
| Self-host MinIO | Dedicated VPS with 100TB+ storage | Bundled 20TB+ VPS traffic | ~€200–400 at scale (depends on plan) |
Verdict: at 100TB, self-hosted MinIO on dedicated infrastructure becomes meaningfully cheaper than any managed option — if you have the operational appetite. Among managed EU-sovereign services, Hetzner Object Storage at ~€500 is the pure-price winner. OVHcloud Cold Archive and Scaleway Glacier become attractive for genuinely cold data you'll rarely read.
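All three scenario tables reduce to the same simple model: storage price times terabytes, plus egress beyond whatever quota is included. A small sketch, using the approximate list prices quoted in this article rather than authoritative figures:

```python
def monthly_cost(tb_stored: float, tb_egress: float, price_per_tb: float,
                 egress_included_tb: float, egress_price_per_tb: float,
                 base_fee: float = 0.0) -> float:
    """Approximate monthly bill: storage plus egress beyond the included quota."""
    storage = base_fee + tb_stored * price_per_tb
    egress_overage = max(0.0, tb_egress - egress_included_tb) * egress_price_per_tb
    return storage + egress_overage

# Scenario B, Hetzner-style pricing: €4.99/TB/month, 1TB egress included
# per month, €1/TB beyond. 10TB stored, 2TB egress.
print(round(monthly_cost(10, 2, 4.99, 1, 1.0), 2))  # matches the ~€50.90 in the table

# Scenario A, DanubeData-style bundle: €3.99 flat covers 1TB storage + 1TB egress.
print(monthly_cost(1, 0.2, 0, 1, 0, base_fee=3.99))
```

Plugging your own storage and egress numbers into this shape is the fastest way to see which column of the comparison actually dominates your bill.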
Which provider wins for which use case
"I want EU-only backups bundled with my compute, zero friction"
Winner: DanubeData. You get one EU entity, one EUR invoice, €3.99 covers the first terabyte including egress, and any VPS / database / Kubernetes workload you already run on DanubeData reads/writes the bucket for free inside the platform. That last point matters more than people expect: if your backup agent and your storage live on the same provider, you pay zero egress for the restore operation, which is usually when egress bites.
"I just want the cheapest per-TB with German residency"
Winner: Hetzner Object Storage. €4.99/TB/month with 1TB of monthly egress included and €1/TB beyond. Hard to beat on pure price among credible EU providers. No bundling with a fancy dashboard — just cheap, reliable S3.
"I'm a French public-sector / regulated entity and need stronger sovereignty"
Winner: OVHcloud. Multiple tiers including SecNumCloud-adjacent options, French entity, French data centers. More expensive, but matches the compliance narrative that regulated buyers need.
"I need a hot+cold tiering story for mixed workloads"
Winner: Scaleway. Their Standard / One Zone / Glacier tiering is developer-friendly and well documented. Great if you have clearly different access patterns for different buckets.
"I'm a home-lab / small-team cheapskate and don't care about Schrems II"
Winner: IDrive e2. Cheapest per-TB overall. Same legal-entity caveat as Wasabi.
"I'm philosophically interested in decentralization"
Winner: Storj. Erasure coding across independent nodes, EU satellite available, strong throughput for parallel downloads. Note the egress is not free — it's US$7/TB.
"I want theoretical zero marginal cost per GB and I know how to operate a service"
Winner: Self-hosted MinIO on a DanubeData VPS. For dedicated throughput and large storage pools (say, 10TB+), rolling your own MinIO on NVMe-backed VPS plans is often half the price of managed. Full S3 API, versioning, object lock, erasure coding — all available in MinIO Community Edition. The trade-off is you're the one on-call when a disk fails, and you need to plan your own DR.
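As a sketch of what "operating it yourself" means in practice, a single-node MinIO can be brought up with a compose file like the one below. The credentials, volume path, and ports are placeholders, and a production deployment wants TLS plus a multi-drive erasure-coded layout rather than one volume:

```yaml
# docker-compose.yml — minimal single-node MinIO sketch (placeholder
# credentials and paths; not a production-hardened layout).
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: change-me
      MINIO_ROOT_PASSWORD: change-me-long-random
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    volumes:
      - /mnt/nvme/minio:/data
    restart: unless-stopped
```

Once it's up, any S3 client in this article (rclone, Restic, Veeam) talks to port 9000 exactly as it would to a managed endpoint.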
Migrating away from Wasabi with rclone
Here's the practical part. The tool of choice is rclone — it speaks S3 natively and supports parallel transfers, server-side copy where available, and resumable syncs.
Before you start: plan for the 90-day minimum
This is the single most important migration gotcha. Wasabi will keep billing you for any object deleted before it's been stored 90 days. Three options:
- Migrate, then wait. Copy everything to the new provider, flip your backup targets, and leave Wasabi untouched for 90 days before cancelling. You'll pay Wasabi for those 90 days, but you won't eat surprise penalties.
- Gradual cutover. Only upload new backups to the new provider. Let Wasabi's existing objects age past 90 days naturally. After ~3 months, the old objects can be deleted cleanly.
- Accept the penalty. If you need the switch to be immediate and the bill is small enough, just pay the ghost-month cost.
For most teams, option 2 is the right answer: your backup rotation is already pushing new data daily, and in ~3 months the old generation on Wasabi naturally reaches its minimum duration.
Install rclone
```
# Linux / WSL
curl https://rclone.org/install.sh | sudo bash

# macOS
brew install rclone

# Windows: download the installer from rclone.org
```
Configure the Wasabi source
```
rclone config
# n for new remote
# name: wasabi
# storage: s3
# provider: Wasabi
# access_key_id: <your Wasabi key>
# secret_access_key: <your Wasabi secret>
# region: eu-central-1 (or your region)
# endpoint: s3.eu-central-1.wasabisys.com
```
Configure the DanubeData destination
```
rclone config
# n for new remote
# name: danubedata
# storage: s3
# provider: Other
# access_key_id: <your DanubeData access key>
# secret_access_key: <your DanubeData secret>
# region: fsn1
# endpoint: https://s3.danubedata.ro
# acl: private
```
You can generate DanubeData S3 credentials from the storage section of the DanubeData dashboard. Each team has its own access keys scoped to its buckets.
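If you prefer skipping the interactive prompts, the two sessions above end up writing an `rclone.conf` along these lines (keys and region values are placeholders):

```ini
# ~/.config/rclone/rclone.conf — equivalent of the two interactive
# sessions above (all key values are placeholders).
[wasabi]
type = s3
provider = Wasabi
access_key_id = YOUR_WASABI_KEY
secret_access_key = YOUR_WASABI_SECRET
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com

[danubedata]
type = s3
provider = Other
access_key_id = YOUR_DANUBEDATA_KEY
secret_access_key = YOUR_DANUBEDATA_SECRET
region = fsn1
endpoint = https://s3.danubedata.ro
acl = private
```

Dropping this file onto a migration box is also the easiest way to run the copy from a server with better bandwidth than your laptop.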
Dry-run the copy
Always dry-run first to sanity-check what'll move:
```
rclone copy wasabi:my-backup-bucket danubedata:my-backup-bucket \
  --dry-run \
  --progress \
  --transfers=32 \
  --checkers=32
```
Run the real sync
```
# Transfer with lots of parallelism
rclone copy wasabi:my-backup-bucket danubedata:my-backup-bucket \
  --progress \
  --transfers=32 \
  --checkers=64 \
  --multi-thread-streams=4 \
  --s3-chunk-size=64M \
  --s3-upload-concurrency=8 \
  --retries=5 \
  --low-level-retries=10 \
  --log-file=wasabi-to-danubedata.log \
  --log-level INFO
```
Key flags:
- `--transfers=32`: number of parallel file transfers. Raise or lower based on your bandwidth.
- `--multi-thread-streams=4`: splits large files into 4 parallel streams. Great for big backup archives.
- `--s3-chunk-size=64M`: multipart upload chunk size. 64M is a good default for multi-GB objects.
- `--retries` / `--low-level-retries`: eat transient network blips without restarting the whole job.
Verify after the copy
```
# Check object count and total bytes
rclone size wasabi:my-backup-bucket
rclone size danubedata:my-backup-bucket

# Detailed diff (hashes)
rclone check wasabi:my-backup-bucket danubedata:my-backup-bucket
```
Flip your clients
Once `rclone check` shows parity, update your backup tools' endpoint configuration:
- Restic: change `RESTIC_REPOSITORY` or the `-r` flag from Wasabi's endpoint to `s3:https://s3.danubedata.ro/my-backup-bucket`.
- Duplicacy: edit the storage URL in `.duplicacy/preferences`.
- Veeam: edit the Object Storage Repository configuration and point it at the new endpoint and credentials. Re-run a full active backup first to avoid dependency on Wasabi generations.
- Arq: add a new Generic S3 destination pointing at `s3.danubedata.ro`; keep the old Wasabi destination around until you trust the new one.
- rclone cron jobs / application code: update the remote name everywhere, then grep your shell history and CI configs to make sure nothing still points at Wasabi.
After cutover: don't immediately delete Wasabi
Keep the Wasabi bucket read-only for at least one full backup cycle. If something breaks on the new provider, you still have a working fallback. Once you've done a successful restore test from DanubeData, and the 90-day clock has run on old Wasabi objects, then cancel.
Use cases: backups, media, log archival
Veeam Backup & Replication with S3 Object Lock (immutability)
Veeam's Hardened Repositories and Object Lock on S3-compatible endpoints give you ransomware-resistant backups. DanubeData supports S3 Object Lock, so you can configure a Veeam Scale-Out Backup Repository with immutability enabled.
1. Create a bucket on DanubeData with versioning and object lock enabled.
2. In Veeam, Backup Infrastructure → Backup Repositories → Add Repository.
3. Object Storage → S3 Compatible.
4. Service point: https://s3.danubedata.ro
5. Region: fsn1 (or as provided).
6. Paste your DanubeData access key / secret key.
7. Choose the bucket, enable "Make recent backups immutable for X days."
Restic for Linux / server backups
```
export RESTIC_REPOSITORY="s3:https://s3.danubedata.ro/my-backup-bucket"
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export RESTIC_PASSWORD="strong-repo-password"

# First time
restic init

# Nightly cron
restic backup /etc /home /var/www --exclude-caches
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```
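To actually schedule the nightly run, a cron fragment along these lines works; the paths, times, and the env file holding the exports above are illustrative assumptions, not a fixed convention:

```
# /etc/cron.d/restic-backup — backup nightly at 02:30, prune Sundays at 04:00.
# /etc/restic.env is assumed to contain the four export lines shown above.
30 2 * * * root . /etc/restic.env && restic backup /etc /home /var/www --exclude-caches >> /var/log/restic.log 2>&1
0 4 * * 0 root . /etc/restic.env && restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune >> /var/log/restic.log 2>&1
```

Keep the env file readable by root only, since it holds the repository password and S3 keys.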
Duplicacy for workstations and servers
Duplicacy's deduplication is the killer feature for users backing up many machines: shared chunks are stored once across the whole repository. Just point the storage URL at DanubeData's endpoint and you're done.
Arq for Mac / Windows users
Arq supports Generic S3 — add DanubeData as a destination with the endpoint and credentials. Arq handles scheduling, encryption, and retention entirely client-side.
Media archival (photography, video, podcasts)
For sizable media collections (raw photography, finished video masters, podcast archives), the 1TB base bundle is a natural fit for finished-master storage, with transparent per-GB overage as the archive grows. Pair DanubeData Object Storage with versioning to keep edit history.
Log archival (compliance-driven)
For logs you need to keep for 1–7 years for compliance but rarely read, S3 + lifecycle rules is the textbook pattern. Enable lifecycle rules to transition or expire objects automatically. If you need cold tiering to save money on multi-year retention, Scaleway Glacier or OVHcloud Cold Archive are worth a look.
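Assuming the endpoint honors the standard S3 lifecycle API (the article notes lifecycle rules are supported), a retention rule can be applied with aws-cli. The bucket name, `logs/` prefix, and seven-year window here are placeholders:

```
# Expire objects under logs/ after ~7 years (2,557 days) using the
# standard S3 lifecycle API; bucket and prefix are placeholders.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-logs-after-7y",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 2557 }
    }
  ]
}
EOF

aws --endpoint-url https://s3.danubedata.ro \
    s3api put-bucket-lifecycle-configuration \
    --bucket my-log-archive \
    --lifecycle-configuration file://lifecycle.json
```

Once applied, expiry happens server-side with no cron job or client involvement.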
Static assets for websites / apps
DanubeData's S3 endpoint supports CORS, public-read ACLs, and custom-domain CNAMEs, which makes it suitable for serving static assets (images, JS bundles, downloads). For true CDN edge delivery you'd layer a CDN in front, but for lower-traffic origins this is fine as-is.
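For the CORS piece, the standard S3 configuration format applies. A minimal example that lets one site origin fetch assets (the origin and bucket name are placeholders):

```
# cors.json — allow a single website origin to GET assets from the bucket.
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3600
    }
  ]
}
```

Apply it with `aws --endpoint-url https://s3.danubedata.ro s3api put-bucket-cors --bucket my-assets --cors-configuration file://cors.json`, then verify from the browser console that preflight requests succeed.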
Frequently asked questions
What exactly is Wasabi's 90-day minimum storage duration?
Every object you upload is billed for a minimum of 90 days, even if you delete it sooner. If you upload 1TB and delete it after 10 days, you still pay for the remaining 80 days of 1TB storage. It's designed to discourage short-term churn, but it routinely surprises migrators and customers doing restore testing. Alternatives like DanubeData, Hetzner, Scaleway Standard, and IDrive e2 have no such minimum.
How fast are downloads from European object storage compared to Wasabi?
For single-stream downloads, modern EU providers (DanubeData, Hetzner, OVHcloud) typically saturate a 1 Gbps link to another European peer without issues, often matching or exceeding Wasabi's European regions. Multi-stream / multipart downloads are where S3-compatible providers really shine — rclone with --multi-thread-streams=4 will effectively saturate most customer uplinks. The single biggest factor is your own client bandwidth, not the provider.
Are there multipart upload limits I should know about?
S3 spec: up to 10,000 parts per object, 5MB minimum per part (except the last), 5TB max object size. DanubeData, Hetzner, OVH, and Scaleway all honor these limits. Practical advice: set --s3-chunk-size in rclone to 64M–256M for large files — you'll get better throughput, and at 256M chunks the 10,000-part ceiling still allows objects up to ~2.5TB each.
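The part-count arithmetic is easy to sanity-check. This small helper (purely illustrative, sizes in GiB) computes the smallest whole-MiB chunk size that keeps an upload under the 10,000-part ceiling:

```python
import math

S3_MAX_PARTS = 10_000   # parts per multipart upload (S3 spec)
MIN_PART_MIB = 5        # minimum part size, except the last part (S3 spec)

def min_chunk_mib(object_size_gib: float) -> int:
    """Smallest whole-MiB chunk size keeping the upload under 10,000 parts."""
    size_mib = object_size_gib * 1024
    return max(MIN_PART_MIB, math.ceil(size_mib / S3_MAX_PARTS))

# 64MiB chunks cover objects up to 10,000 × 64MiB = 625GiB;
# 256MiB chunks cover up to ~2.5TiB, matching the guidance above.
print(min_chunk_mib(625))    # 64
print(min_chunk_mib(2500))   # 256
```

In other words: the 64M default is fine until individual objects approach ~600GiB, after which you bump the chunk size rather than the part count.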
Can I use DanubeData object storage as a Veeam immutable repository?
Yes. DanubeData supports S3 Object Lock and versioning, which are the two features Veeam requires for its "Make recent backups immutable" option on Object Storage repositories. You configure it exactly like any other S3-compatible immutable target: enable versioning + object lock on the bucket, point Veeam at https://s3.danubedata.ro, enable immutability in the repository settings.
Does the CLOUD Act actually affect me if I pick Wasabi's EU region?
Yes, legally. The CLOUD Act applies to the provider regardless of data location — if the provider is a US entity, US authorities can compel disclosure of data held anywhere in the world. A Frankfurt region run by a US company does not insulate you. Whether this matters practically depends on your risk model, your legal team's Schrems II position, and whether you process personal data about EU data subjects. For a regulated EU entity, the safest answer is "use an EU-controlled provider with an EU legal entity." DanubeData, Hetzner, OVHcloud, and Scaleway all fit.
Do you offer a Data Processing Agreement (DPA) and sign it without drama?
Yes. DanubeData provides a standard GDPR-compliant DPA that covers Article 28 requirements, sub-processor transparency, and security measures. You can request it through your account dashboard or by contacting support. No custom-negotiation fees, no "enterprise plan" gating.
What about egress between DanubeData object storage and DanubeData VPS / databases?
Free. Traffic that stays inside the DanubeData platform — for example, a VPS or Kubernetes pod reading from a bucket on s3.danubedata.ro — doesn't count against your object-storage egress or your VPS traffic quota. This is a significant benefit if your application and storage live on the same provider.
Can I still use AWS SDKs and tools like s3cmd, CyberDuck, Mountain Duck, s5cmd, Terraform?
Yes. DanubeData's endpoint is standard S3 over HTTPS. Every major tool that speaks S3 works by pointing it at https://s3.danubedata.ro and providing your access keys:
- aws-cli: `aws --endpoint-url https://s3.danubedata.ro s3 ls`
- s3cmd: set `host_base = s3.danubedata.ro` and `host_bucket = %(bucket)s.s3.danubedata.ro` in `~/.s3cfg`
- Terraform: use the AWS provider with a custom `endpoints` block and `s3_use_path_style` as needed
- CyberDuck / Mountain Duck: "S3 (HTTPS)" profile pointing at the endpoint
- Boto3 / AWS SDK for JavaScript / Go: pass `endpoint_url` / `endpoint` when creating the client
Getting started with DanubeData Object Storage
If you've read this far, the short version is: in 2026, European data residency is the cheapest and simplest it has ever been, and the only reason most teams are still on Wasabi is inertia.
For a migration away from Wasabi, the playbook is boring on purpose:
- Sign up at danubedata.ro and claim your €50 signup credit — that covers roughly a year of the €3.99 base plan.
- Create a bucket and generate S3 access keys from the storage section.
- Configure rclone with both Wasabi (source) and DanubeData (destination) remotes.
- Start copying new backups to DanubeData; leave Wasabi read-only.
- Run `rclone check` for parity, then a full restore test.
- After ~90 days, cancel Wasabi.
Total effort for most small-to-midsize teams: an afternoon, plus a 90-day clock running in the background.
Summary: which alternative should you pick?
- DanubeData — best bundled simplicity, €3.99/mo covers 1TB with 1TB egress, EU entity, free egress inside the platform, €50 signup credit.
- Hetzner Object Storage — cheapest per-TB among credible EU-sovereign providers at scale.
- OVHcloud — French sovereignty story, regulated-entity friendly, archive tier for cold data.
- Scaleway — developer UX, native Standard/Glacier tiering.
- IDrive e2 — cheapest headline price, but same CLOUD Act exposure as Wasabi.
- Storj — decentralized, interesting for geo-distributed redundancy, egress is not free.
- Self-hosted MinIO on DanubeData VPS — cheapest at scale if you're comfortable operating it.
For most European teams migrating from Wasabi in 2026, the default answer is DanubeData for small-to-medium / bundled-compute workloads and Hetzner Object Storage for large pure-storage pools.
Ready to switch? Create a DanubeData account, claim your €50 credit, spin up a bucket, and point rclone at s3.danubedata.ro. The rest writes itself.