
Cloudflare R2 Alternatives: European S3-Compatible Storage (2026)

Adrian Silaghi
April 20, 2026
14 min read
#cloudflare-r2 #r2-alternatives #object-storage #s3 #europe #gdpr #hetzner #scaleway

Cloudflare R2 changed the object-storage conversation. Zero egress fees, an S3-compatible API, tight integration with Workers and a global anycast network made it the easy default for anyone tired of AWS S3 bandwidth bills. Four years in, R2 is a mature product with thousands of production workloads behind it.

But 2026 is a different year than 2022. European buyers are under pressure from procurement, auditors and DPOs to answer two questions that R2 doesn't answer cleanly: where exactly is our data, and which legal regime controls access to it? The CLOUD Act, Schrems II, the EU-US Data Privacy Framework, NIS2 and sector-specific rules (DORA for finance, EHDS for health) have tightened what "EU-hosted" has to mean. "The provider has a European region" is no longer the same thing as "the provider is legally European".

This guide breaks down the seven credible R2 alternatives that keep the S3 API and GDPR-safe residency in 2026 — starting with DanubeData Object Storage at €3.99/month for 1 TB storage plus 1 TB egress included, hosted in Falkenstein, Germany on infrastructure operated by an EU entity. We'll cover real-world egress math, CDN replacement patterns, presigned URLs, and a working rclone migration script you can run today.

What is Cloudflare R2, really?

R2 is Cloudflare's object storage service, launched in 2022 as a direct response to the egress-fee tax that AWS, Google and Azure had normalized. It speaks the S3 API, so any S3 SDK, CLI or tool works out of the box. The headline features:

  • No egress fees — you pay only for storage and operations, not for bytes leaving the bucket
  • $0.015/GB/month storage (Standard class) — about a third less than S3 Standard's $0.023/GB list price
  • Class A requests (writes, lists) priced at $4.50 per million; Class B requests (reads) at $0.36 per million
  • Global distribution via Cloudflare's 300+ edge locations with automatic caching
  • Tight integration with Workers, Pages, Images, Stream and the rest of the Cloudflare stack
  • Infrequent Access tier at $0.01/GB/month for archival-style data, with retrieval fees
  • 10 GB free tier plus 1 million Class A and 10 million Class B operations per month

For a blog serving images to a global audience, R2 is genuinely excellent. You upload the files, put Cloudflare in front and forget about it. The egress-free model is also a lifesaver for training ML models, serving large game assets or distributing video.

Why teams are leaving R2 in 2026

The reasons fall into three buckets: legal posture, total cost when requests dominate, and vendor lock-in.

1. Cloudflare is a US-incorporated company

Cloudflare, Inc. is headquartered in San Francisco and subject to US law — including the Clarifying Lawful Overseas Use of Data (CLOUD) Act of 2018. The CLOUD Act compels US-based providers to produce data in their custody on request from US law enforcement, regardless of where the data physically sits. A bucket in R2's "EEUR" region is physically in Europe, but the legal relationship between the data controller and Cloudflare is still governed by US law.

After the Court of Justice of the European Union invalidated Privacy Shield in Schrems II (2020), and while the EU-US Data Privacy Framework (2023) is now the active transfer mechanism, many EU data protection authorities and sectoral regulators have pushed controllers toward EU-only processors — meaning providers incorporated and operated inside the EU, with no US parent in the chain. For public-sector bodies, healthcare providers, financial institutions under DORA and anyone running a GDPR DPIA with a sensitive category, "operated by an EU entity" is increasingly the minimum bar.

2. R2 location hints are opaque

R2 lets you steer a bucket with a location hint (e.g. eeur) or pin it with a jurisdictional restriction (eu, fedramp), but the exact datacenter is never disclosed. For most workloads this is fine — for a data residency clause in an enterprise contract or a BSI C5 audit, "somewhere in Europe, we'll let you know if it changes" is not an answer procurement can sign.

3. "Free egress" isn't free when requests dominate

R2's no-egress-fee promise only covers bandwidth. Every read, list and write is a billable operation. For workloads where you have many small objects and heavy read patterns — think a CMS with thumbnails, a feature store, or a static site with tens of thousands of pages — Class B request costs can quietly dominate your bill.

Consider a realistic CMS bucket: 2 TB of data, 500,000 objects, serving 50 million reads per month and 100,000 writes. Storage is $30. Class B reads at $0.36/M = $18. Class A writes at $4.50/M = $0.45. Egress is zero. Total: ~$48/month.

The same workload on DanubeData Object Storage: the €3.99 base bundles 1 TB storage and 1 TB egress; add 1 TB of storage overage at €3.99 (indicative) for a total of ~€7.98/month. Requests are not metered. If your egress exceeds the included 1 TB, you pay €1.99 per additional TB — still typically less than R2's request fees for request-heavy workloads.
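The R2 side of that comparison can be re-checked mechanically. A quick sketch with the list prices quoted above (purely illustrative; the scenario numbers are the ones from this article):

```shell
# CMS bucket on R2: 2 TB stored, 50M Class B reads, 100k Class A writes
STORAGE_TB=2; READS_M=50; WRITES_M=0.1
R2=$(awk -v s="$STORAGE_TB" -v r="$READS_M" -v w="$WRITES_M" \
  'BEGIN { printf "%.2f", s*15 + r*0.36 + w*4.50 }')   # $15/TB, $0.36/M reads, $4.50/M writes
echo "R2 monthly total: \$$R2"
```

Running it prints a total of $48.45 — and notice that $18.45 of it is request fees, not storage.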

4. Vendor lock-in to the Cloudflare ecosystem

R2 plays beautifully with Workers, Pages, Images and Stream — that's the point. The flip side is that the value proposition leans on being inside the Cloudflare perimeter. If your app runs on a European VPS provider and only uses R2 for storage, you're paying for an integration you aren't using. Co-locating compute and storage with one EU provider removes an inter-provider hop from every request and simplifies your subprocessor list in the GDPR Record of Processing Activities (ROPA).

5. Enterprise pricing opacity

R2's list pricing is transparent up to a point. Enterprise-scale contracts — volume discounts, SLAs, dedicated support, custom DPAs — go through Cloudflare sales. Smaller European providers typically publish a single price list and let you sign the standard DPA without a procurement call.

What a good R2 alternative looks like in 2026

Before we get to the list, here's the checklist I use when evaluating a replacement:

  • S3 API compatibility — any SDK, rclone, s3cmd, Cyberduck, MinIO Client, HashiCorp tools — works out of the box
  • Published datacenter location, not a hint — you want "Falkenstein, DE" on the invoice, not "EEUR"
  • EU-incorporated operating entity with no US parent (this is the bit that actually moves the needle on CLOUD Act exposure)
  • Predictable pricing without per-request surprises — ideally a bundle that includes egress
  • Versioning, lifecycle rules, CORS, presigned URLs — the full S3 feature set most apps rely on
  • Signed DPA under GDPR Article 28 available without a sales call
  • A way to put a CDN in front if you need global distribution (you almost certainly do)

The seven alternatives

1. DanubeData Object Storage (Germany)

Endpoint: s3.danubedata.ro · Location: Falkenstein, DE · Operator: IFAS Consult SRL (EU entity, Romania)

DanubeData runs S3-compatible object storage on self-hosted MinIO deployed in a Hetzner Falkenstein datacenter. Pricing is €3.99/month base which includes 1 TB of storage and 1 TB of egress — no separate request tier. Beyond the bundle, overage is billed per GB on both storage and traffic. The €3.99 base is designed for the "real bucket" sweet spot of a small-to-mid app that would otherwise pay R2 a few dollars for storage plus a surprise request bill.

  • Full S3 API: versioning, lifecycle rules, CORS, bucket policies, presigned URLs, multipart upload
  • Works with AWS CLI, rclone, Cyberduck, s3cmd, MinIO Client (mc), boto3, aws-sdk-js, any Go/Rust/Java S3 client
  • Colocated with DanubeData VPS, Cache and Database products — zero-cost traffic between instances and the bucket inside the cluster
  • GDPR-compliant by default; DPA available on signup
  • €50 signup credit covers roughly the first year of a single-bucket setup

Best fit: European SaaS teams, agencies, indie hackers and anyone who wants a single EU provider for compute, database and storage. If you bundle a VPS and object storage, the traffic between them is free.

2. Scaleway Object Storage (France)

Endpoint: s3.fr-par.scw.cloud, s3.nl-ams.scw.cloud, s3.pl-waw.scw.cloud · Locations: Paris, Amsterdam, Warsaw · Operator: Scaleway SAS (EU entity, France)

Scaleway is one of the largest European cloud providers and has been S3-compatible since 2016. Their Multi-AZ Standard class is listed at €0.012/GB/month storage plus €0.011/GB egress, with a 75 GB free tier. One-Zone-IA goes as low as €0.006/GB for archival workloads. Request pricing exists but is modest compared to R2's Class A tier.

  • Strong presence in three EU regions, useful if you want Multi-AZ or want to hold backups in a different country
  • Comprehensive S3 API support including Object Lock (WORM), lifecycle policies, and Bucket Replication
  • Published DPA, BSI C5 and ISO 27001 certifications
  • Excellent documentation and a mature CLI

Best fit: Mid-size teams wanting a big-name EU cloud with regional choice. Beware: the per-GB pricing model means you need to model request and egress volumes carefully at scale.

3. OVHcloud Object Storage (France)

Endpoint: s3.gra.cloud.ovh.net, s3.sbg.cloud.ovh.net, s3.de.cloud.ovh.net · Locations: Gravelines, Strasbourg, Frankfurt, Warsaw · Operator: OVH Groupe SA (EU entity, France)

OVHcloud offers two tiers: Standard Object Storage (for hot data) and High-Performance Object Storage, both S3-compatible. Storage pricing starts around €0.01/GB/month for Standard; bandwidth is billed per GB outgoing with a generous free allowance on most regions. OVHcloud is Europe's largest home-grown cloud provider and publishes a SecNumCloud-qualified offer for sovereignty-sensitive workloads.

  • S3 API plus Swift API (Ceph-backed) — useful if you have OpenStack tooling
  • SecNumCloud qualification available on Sovereign Cloud, which satisfies French government and regulated-industry requirements
  • Regional choice across four EU countries
  • Free inbound traffic

Best fit: French and Benelux customers, public-sector buyers, teams that need SecNumCloud.

4. Hetzner Object Storage (Germany/Finland)

Endpoint: nbg1.your-objectstorage.com, fsn1.your-objectstorage.com, hel1.your-objectstorage.com · Locations: Nuremberg, Falkenstein, Helsinki · Operator: Hetzner Online GmbH (EU entity, Germany)

Hetzner launched its own S3-compatible Object Storage in late 2024. Pricing is €5.99 per TB per month in 2026, bundled with 1 TB of outbound traffic included per TB stored. For volume-heavy workloads this is the cheapest-per-TB option on the list. The product is newer than Scaleway or OVHcloud, so feature parity is still filling in (versioning and lifecycle arrived in 2025; Object Lock is on the roadmap).

  • Extremely low price for pure storage volume
  • Three EU regions (Germany × 2, Finland × 1)
  • Sits on the same physical network as Hetzner Cloud, reducing inter-service cost if you also use Hetzner VPS

Best fit: Backup and archive targets, data lakes, any workload that's storage-heavy and request-light. If you need versioning, lifecycle rules, bucket replication and Object Lock together today, verify current feature parity before you commit.

5. Exoscale SOS — Simple Object Storage (Switzerland/EU)

Endpoint: sos-ch-dk-2.exo.io, sos-de-fra-1.exo.io, sos-at-vie-1.exo.io · Locations: Zurich, Geneva, Frankfurt, Vienna, Sofia, Munich · Operator: Exoscale (Swiss origin, part of A1 Digital / A1 Telekom Austria Group)

Exoscale, part of Austria's A1 Digital, positions itself as the Swiss sovereign cloud, with datacenters across six European cities. SOS is S3-compatible at €0.02/GB/month storage and €0.02/GB egress, with inbound free. The service is less well-known than Scaleway or OVH but has a loyal following among Swiss and German enterprise customers who value the jurisdiction.

  • Swiss jurisdiction option (outside the EU, but covered by an EU adequacy decision — important for some workloads)
  • Mature platform with six regions
  • Strong compliance posture, including ISO 27001 certification; FINMA-friendly for regulated Swiss finance

Best fit: Swiss or DACH-region enterprises, regulated industries valuing the Swiss data protection regime.

6. IDrive e2 (global, with EU region)

Endpoint: e2.idrivee2-10.com (Frankfurt) · Location: Frankfurt · Operator: IDrive Inc. (US entity)

IDrive e2 is worth listing for completeness because it's often cited as the price-per-TB leader. Storage lists at roughly $0.004/GB/month — the cheapest credible S3 API in the market — with egress free up to your storage quota. The catch is the operator: IDrive Inc. is incorporated in Calabasas, California, which puts it on the same CLOUD Act footing as Cloudflare.

If your only concern is price-per-TB and CLOUD Act exposure is acceptable in your threat model, e2 is a fair R2 alternative. If you're migrating away from R2 specifically to reduce US-provider exposure, e2 does not solve that problem — even with the Frankfurt region.

Best fit: Cost-sensitive workloads where jurisdiction is not the driver — secondary backup copies, cold disaster-recovery targets, dev/test environments.

7. Self-hosted MinIO on a European VPS

Endpoint: whatever you configure · Location: wherever your VPS is · Operator: you

The nuclear option: run MinIO on a VPS in the EU, get an S3-compatible endpoint, own the stack end to end. On a DanubeData 4 GB RAM VPS at €8.99/month with 160 GB of NVMe, you can serve S3 to yourself for a flat cost with no per-GB storage, no egress fees and no request fees. The bucket-size ceiling is whatever your disk is.

  • Zero per-GB fees — only the VPS cost
  • Full control over encryption at rest, TLS, access policies
  • Single-node MinIO runs happily on a €8.99 VPS; Distributed MinIO needs 4+ nodes for erasure coding
  • You now operate a storage system — including backups, capacity planning, upgrades, and monitoring

Best fit: Engineers who enjoy running infrastructure, internal company backup targets, dev environments, and anyone with a strict "no third party" requirement. Not a good fit for production-grade durability unless you run a proper distributed deployment with quorum, off-site replication and monitoring.
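A single-node setup can be sketched with the official MinIO container image. The credentials, ports and data path below are placeholders you should change, and you'd put a TLS-terminating reverse proxy in front before exposing anything publicly:

```shell
# Single-node MinIO on Docker — placeholder credentials, adjust the volume path
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me-to-something-long \
  -v /srv/minio-data:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```

Port 9000 serves the S3 API and 9001 the web console; any S3 client then works against your VPS hostname on port 9000.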

Comparison table: R2 vs the alternatives

| Provider | Storage (per TB/mo) | Egress (per TB) | Class A / B requests | EU residency | CLOUD Act exposure | Free tier |
|---|---|---|---|---|---|---|
| Cloudflare R2 | $15.00 | $0 | $4.50 / $0.36 per million | Hint only (EEUR) | Yes (US operator) | 10 GB + 1M/10M ops |
| DanubeData | €3.99 base (1 TB incl.) | 1 TB included, then €1.99/TB | Not metered | Falkenstein, DE (published) | No (EU operator) | €50 signup credit |
| Scaleway | €12.00 | €11.00 | Modest per-request | Paris/Amsterdam/Warsaw | No (EU operator) | 75 GB |
| OVHcloud | €10.00 | Free allowance + €10/TB | Low per-request | Gravelines/Strasbourg/Frankfurt/Warsaw | No (EU operator) | Varies by plan |
| Hetzner Object Storage | €5.99 | 1 TB included per TB stored | Not metered | Nuremberg/Falkenstein/Helsinki | No (EU operator) | None |
| Exoscale SOS | €20.00 | €20.00 | Per-request | 6 EU/CH cities | No (Swiss/EU) | None |
| IDrive e2 | $4.00 | Free up to quota | Not metered | Frankfurt region | Yes (US operator) | 10 GB |
| Self-hosted MinIO | VPS cost only | VPS bandwidth | Not metered | Wherever your VPS is | No (you operate) | N/A |

Prices are list prices at time of writing (April 2026). Always check the provider pricing page for current rates.

The real-world egress math

The most common mistake I see in R2 cost modeling is treating "$0 egress" as end-of-story. Let's work through three realistic scenarios at the scale where R2 makes an intuitive case and see what happens on an EU alternative.

Scenario A — Small SaaS, 500 GB assets, 5 TB egress/month

  • R2: 500 GB × $0.015 = $7.50 storage. Requests ~1M reads, negligible. Egress $0. Total ~$8/mo.
  • DanubeData: €3.99 base (includes 1 TB storage + 1 TB egress) + 4 TB egress overage × €1.99 (€7.96) = €11.95. Total ~€12/mo.
  • Hetzner Object Storage: 1 TB allocation × €5.99 + 4 TB overage egress. Total ~€6-10/mo (cheapest if egress stays close to included).

At this scale R2 wins on headline cost. The gap is roughly €4-5/month, which may or may not be worth the CLOUD Act exposure depending on your posture. If you're already a DanubeData customer running VPS and Cache, the object storage is marginal.

Scenario B — CMS with 50 million reads/month, 2 TB storage, 100 GB egress/month

  • R2: Storage $30, Class B reads 50M × $0.36/M = $18, Class A 100k × $4.50/M = $0.45. Egress $0. Total ~$48/mo.
  • DanubeData: €3.99 base + 1 TB storage overage × €3.99 (indicative) = €7.98. Egress under the 1TB bundle. Total ~€8/mo.
  • Hetzner Object Storage: 2 TB × €5.99 + included egress. Total ~€12/mo.

Here the story flips. Request volume dominates R2's bill. A CMS with thumbnails, a feature store or a static site generator burning through many small reads will see its R2 invoice creep up every month while the EU alternatives sit flat.

Scenario C — Backup target, 5 TB storage, 50 GB egress/month

  • R2: 5 TB × $0.015 = $75. Requests negligible. Egress $0. Total ~$75/mo.
  • DanubeData: €3.99 base + 4 TB overage storage × €3.99 (indicative) = €19.95. Total ~€20/mo.
  • Hetzner Object Storage: 5 × €5.99 = €29.95. Total ~€30/mo.
  • IDrive e2: 5 TB × $4 = $20. Total ~$20/mo.

Storage-heavy, low-egress, low-request workloads are where EU alternatives beat R2 soundly. Backup targets, content archives, Loki or ClickHouse storage buckets — all look like this, and all cost two to three times more on R2 than on a per-TB European provider.

Migration: moving from R2 to DanubeData with rclone

Both R2 and DanubeData speak the S3 API, so migration is an rclone sync away. Below is a complete, tested script that copies all objects from an R2 bucket to a DanubeData bucket, verifies counts, and reports progress.

Step 1: Install rclone

# Linux / macOS
curl https://rclone.org/install.sh | sudo bash

# Verify
rclone version

Step 2: Configure the two remotes

Create ~/.config/rclone/rclone.conf with both endpoints:

[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_R2_ACCESS_KEY_ID
secret_access_key = YOUR_R2_SECRET_ACCESS_KEY
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
region = auto
acl = private

[danubedata]
type = s3
provider = Minio
access_key_id = YOUR_DANUBEDATA_ACCESS_KEY
secret_access_key = YOUR_DANUBEDATA_SECRET_KEY
endpoint = https://s3.danubedata.ro
region = eu-central-1
acl = private

Replace ACCOUNT_ID with your Cloudflare account ID (find it in the R2 dashboard under "Account details"). Get the DanubeData access keys from the DanubeData dashboard → Object Storage → Access Keys.

Step 3: List buckets and sanity-check access

# Confirm both sides work
rclone lsd r2:
rclone lsd danubedata:

# Show the first few objects in the source bucket
rclone ls r2:my-r2-bucket | head -20

Step 4: Create the destination bucket

# Create the destination bucket at DanubeData
rclone mkdir danubedata:my-danubedata-bucket

# Verify
rclone lsd danubedata:

Step 5: Run the migration

For small buckets, a simple sync is enough:

rclone sync r2:my-r2-bucket danubedata:my-danubedata-bucket \
  --progress \
  --transfers 16 \
  --checkers 32 \
  --fast-list

For larger buckets (hundreds of GB or millions of objects), add resumability, rate limits and logging:

#!/bin/bash
# r2-to-danubedata-migration.sh
set -e

SOURCE="r2:my-r2-bucket"
DEST="danubedata:my-danubedata-bucket"
LOG_DIR="$HOME/r2-migration-logs"
mkdir -p "$LOG_DIR"

# Run with full logging and resumable transfer
rclone sync "$SOURCE" "$DEST" \
  --progress \
  --transfers 32 \
  --checkers 64 \
  --fast-list \
  --s3-upload-concurrency 8 \
  --s3-chunk-size 64M \
  --retries 10 \
  --low-level-retries 20 \
  --stats 30s \
  --stats-log-level NOTICE \
  --log-file "$LOG_DIR/sync-$(date +%Y%m%d-%H%M%S).log" \
  --log-level INFO

echo "Sync complete. Verifying object counts..."

SRC_COUNT=$(rclone size "$SOURCE" --json | jq -r '.count')
DST_COUNT=$(rclone size "$DEST" --json | jq -r '.count')

echo "Source objects: $SRC_COUNT"
echo "Destination objects: $DST_COUNT"

if [ "$SRC_COUNT" != "$DST_COUNT" ]; then
  echo "WARNING: object counts differ, re-running sync..."
  rclone sync "$SOURCE" "$DEST" --progress
fi

echo "Migration done."

Step 6: Swap the endpoint in your application

In most apps, the switchover is a single environment-variable change. For example, in a Node.js app using @aws-sdk/client-s3:

// Before (R2)
const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

// After (DanubeData)
const s3 = new S3Client({
  region: 'eu-central-1',
  endpoint: 'https://s3.danubedata.ro',
  forcePathStyle: false,
  credentials: {
    accessKeyId: process.env.DD_ACCESS_KEY_ID,
    secretAccessKey: process.env.DD_SECRET_ACCESS_KEY,
  },
});

For Laravel, update config/filesystems.php:

'danubedata' => [
    'driver' => 's3',
    'key' => env('DD_ACCESS_KEY_ID'),
    'secret' => env('DD_SECRET_ACCESS_KEY'),
    'region' => 'eu-central-1',
    'bucket' => env('DD_BUCKET'),
    'endpoint' => 'https://s3.danubedata.ro',
    'use_path_style_endpoint' => false,
    'throw' => true,
],

Step 7: Run a cutover sync and flip the key

For zero-downtime migrations, run the script once while the app still writes to R2, then:

  1. Put the app in read-only mode (or stop writes to the bucket)
  2. Run a second rclone sync to pick up any objects written since the first pass
  3. Flip the endpoint environment variable and redeploy
  4. Run rclone check (source against destination, with --one-way) periodically for a few days as a safety net to confirm nothing was missed
  5. After a retention window (typically 30-90 days), delete the R2 bucket
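The delta sync and verification pass can be sketched as a guarded script. Bucket names are the placeholders used earlier in this guide, and RUN defaults to echo, so the script only prints the commands until you set RUN= to execute them:

```shell
#!/bin/bash
# Cutover sketch: final delta sync, then a one-way verification pass.
# RUN defaults to `echo` (prints commands); set RUN= to execute for real.
set -euo pipefail
RUN=${RUN:-echo}
SRC="r2:my-r2-bucket"
DST="danubedata:my-danubedata-bucket"

# Final delta sync while application writes are paused
$RUN rclone sync "$SRC" "$DST" --fast-list --transfers 32 --checkers 64

# One-way check: every source object must exist, with matching size/hash,
# at the destination
$RUN rclone check "$SRC" "$DST" --one-way --fast-list
```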

CDN strategy: what replaces Cloudflare's edge?

R2 is genuinely unbeatable for global read-heavy workloads when you put Cloudflare CDN in front. If you move your bucket to DanubeData and still want global edge caching, you have three good options:

Option 1: Bunny.net CDN (EU-based)

Bunny is a Slovenian company (Slovenia is an EU member state) running a global CDN at ~$0.01/GB in Europe, ~$0.02/GB in North America, and similar in Asia. Point a Bunny pull-zone at your DanubeData bucket and you get an anycast-cached endpoint with TLS. Price for a reasonable SaaS scale (1 TB egress) is typically $10-15/month and Bunny operates entirely in the EU.

# In Bunny.net dashboard:
# 1. Create a Pull Zone
# 2. Origin URL: https://your-bucket.s3.danubedata.ro
# 3. Origin Host: your-bucket.s3.danubedata.ro
# 4. Enable SmartCache
# 5. Assign a CNAME (e.g., cdn.yourdomain.com)

Option 2: Cloudflare CDN in front of your new bucket

Nothing stops you from keeping Cloudflare as a CDN while moving the authoritative bucket off R2. You still use Cloudflare's anycast network and end-users still get cached responses at no bandwidth cost to you; only cache misses pull from the origin, which counts against the DanubeData egress bundle. Meanwhile the data at rest lives with an EU operator. For GDPR purposes the origin is what matters for residency, and Cloudflare-as-CDN is already part of most companies' subprocessor list.

Option 3: Use the built-in DanubeData edge

For a lot of use cases, a single EU datacenter with TLS is fast enough. European users are, by definition, geographically close. If your audience is >80% European, the difference between "Falkenstein with no CDN" and "Falkenstein with a CDN" is often single-digit milliseconds — not worth the added complexity.

Public bucket URLs and presigned URLs

Two of the most common R2 features — public bucket URLs and presigned URLs — map 1:1 to DanubeData because both are standard S3 behavior.

Public bucket URLs

On R2 you'd configure a public bucket URL like https://pub-XXXX.r2.dev/object-key or attach a custom domain. On DanubeData, the bucket is reachable at https://BUCKET.s3.danubedata.ro/object-key (virtual-hosted style) once you set a public-read bucket policy.

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::my-bucket/*"]
  }]
}

Apply with the AWS CLI:

aws s3api put-bucket-policy \
  --bucket my-bucket \
  --policy file://public-read.json \
  --endpoint-url https://s3.danubedata.ro

Presigned URLs

Presigned URLs are a standard S3 feature and work identically. Here's Node.js:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({
  region: 'eu-central-1',
  endpoint: 'https://s3.danubedata.ro',
  credentials: { accessKeyId: '...', secretAccessKey: '...' },
});

const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: 'my-bucket', Key: 'private/report.pdf' }),
  { expiresIn: 3600 }
);
// Returns: https://my-bucket.s3.danubedata.ro/private/report.pdf?X-Amz-Algorithm=...

And in Laravel:

$url = Storage::disk('danubedata')->temporaryUrl(
    'private/report.pdf',
    now()->addHour()
);
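If you just need a one-off presigned URL from the terminal rather than application code, the standard AWS CLI can generate one against the same endpoint. A sketch, assuming credentials for the bucket are already configured in your AWS CLI profile; bucket and key are placeholders:

```shell
# Generate a 1-hour presigned GET URL from the CLI (signed locally, SigV4)
aws s3 presign s3://my-bucket/private/report.pdf \
  --expires-in 3600 \
  --endpoint-url https://s3.danubedata.ro
```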

GDPR posture: what the DPO asks

If you're moving to DanubeData specifically for the EU posture, here's what to document in your DPIA and ROPA:

  • Processor identity: IFAS Consult SRL, an EU-incorporated entity, signs the DPA under GDPR Article 28.
  • Processing location: Falkenstein, Germany. Named datacenter, not a hint.
  • Subprocessors: Hetzner Online GmbH (datacenter and physical infrastructure, EU entity, Germany).
  • International transfers: None by default. No data leaves the EU unless the customer enables a cross-region replication target.
  • Encryption at rest: Server-side AES-256 on the storage backend.
  • Encryption in transit: TLS 1.3 on the S3 endpoint.
  • Access controls: Per-team access keys, bucket policies, presigned URLs with expiry.
  • Retention and deletion: You control lifecycle rules. Delete operations are processed immediately; MinIO erasure-coded backend overwrites blocks on next write cycle.
  • Breach notification: GDPR 72-hour notification; included in the standard DPA.

Compare this to R2: the processor is Cloudflare, Inc. (US-incorporated), the region is a hint not a named datacenter, international-transfer language falls under the EU-US Data Privacy Framework with Cloudflare as the US importer, and CLOUD Act disclosure obligations apply regardless of bucket location. Neither is "bad" — they're just different risk profiles, and the one that fits you depends on who your customers are and what auditors ask.

FAQ

Does DanubeData actually support the full S3 API or just the basics?

DanubeData runs MinIO, which implements the S3 API very thoroughly. Supported: PutObject, GetObject, DeleteObject, ListObjectsV2, CopyObject, multipart upload, versioning, bucket lifecycle, CORS, bucket policies, presigned URLs (GET and PUT), server-side encryption, object tagging. Not currently supported on the hosted tier: S3 Select, S3 Event Notifications to Lambda (use webhooks instead), Object Lock with Compliance mode (Governance mode is supported). If you have an edge case, test with a small bucket before committing.
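The suggested small-bucket test can be scripted with the standard AWS CLI. A guarded sketch for the versioning case (the bucket name is a placeholder; RUN defaults to echo, so the calls only print until you set RUN= to execute them):

```shell
# Smoke-test versioning support on a scratch bucket before migrating for real
RUN=${RUN:-echo}                 # set RUN= to execute the calls
EP=https://s3.danubedata.ro      # S3 endpoint used throughout this guide
$RUN aws s3api put-bucket-versioning --endpoint-url "$EP" \
  --bucket scratch-test --versioning-configuration Status=Enabled
$RUN aws s3api get-bucket-versioning --endpoint-url "$EP" --bucket scratch-test
```

The same pattern works for any feature on the list: issue the relevant s3api call against a throwaway bucket and confirm the response before moving production data.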

How does egress cost compare if I'm doing 10 TB/month?

At 10 TB/month on R2 you pay $0 for egress plus ~$150 in storage (assuming 10 TB stored). On DanubeData that's €3.99 base + 9 TB overage egress × €1.99 = €21.90, plus storage overages; total around €35-55 depending on storage footprint. R2 wins on high-egress workloads if storage is small; DanubeData wins on request-heavy and storage-heavy workloads. If egress is your dominant cost and you don't care about CLOUD Act, R2 is legitimately the right answer — this guide is about when the tradeoff isn't worth it.

Do I need to use Cloudflare CDN with DanubeData?

No. The DanubeData endpoint is reachable globally over the public internet with TLS. For European users the latency is fine without a CDN. If you serve a global audience with heavy read patterns, put Bunny.net or Cloudflare in front as a pull-through cache and you get the same end-user experience as R2 + Cloudflare. The origin-side operator is still EU-based, which is what controllers care about for GDPR.

What happens to my R2 custom domains during migration?

Custom domains on R2 are Cloudflare-managed. During migration, keep your R2 custom domain active until cutover is complete. After you flip the application to DanubeData, update the DNS record for the custom domain to a CNAME that points at your DanubeData bucket (or to the CDN in front of it). Browsers that cached the old R2 IPs will pick up the new target on the next DNS TTL expiry.

How do presigned URLs compare between R2 and DanubeData?

Identical in behavior. Both implement SigV4 presigned URLs with configurable expiry. The URL format differs only in hostname — you change ACCOUNT.r2.cloudflarestorage.com to s3.danubedata.ro and it works. Presigned PUT URLs for direct browser uploads work the same way. CORS rules carry over as-is via the standard S3 CORS configuration (put-bucket-cors).

What's the durability guarantee on DanubeData?

DanubeData Object Storage runs MinIO with erasure coding across multiple disks. Design durability is targeted at 11 nines (99.999999999%) for multi-disk setups, in line with industry standards for S3-compatible storage. For business-critical data, set up bucket replication (or a scheduled rclone sync) to a second region or provider (for example, a secondary Hetzner Object Storage bucket in Helsinki) to get geographic separation.

Can I use Terraform to manage DanubeData buckets?

Yes. DanubeData publishes a Terraform provider that covers buckets, access keys, VPS, cache and databases. You can also use the standard AWS provider pointed at the DanubeData endpoint — any S3-provider-agnostic Terraform module (for example, buckets managed via hashicorp/aws) works by overriding the endpoint and region.

Is there a free tier I can test with?

New DanubeData accounts get €50 in signup credit which covers roughly a year of a single-bucket €3.99 plan. That's enough runway to migrate a real R2 bucket, monitor performance for a month, and make a confident decision before the credit runs out.

Decision framework

Here's the one-line heuristic I use:

  • Request-heavy or storage-heavy workload, plus any CLOUD Act concern? Move to DanubeData. Bundle it with your VPS and you get intra-cluster free traffic on top.
  • Huge egress, no CLOUD Act concern, small storage? Stay on R2. It's still the right answer for bandwidth-dominated workloads where jurisdiction is not the driver.
  • Massive storage-only archive, CLOUD Act acceptable? IDrive e2 is the price leader. Hetzner Object Storage if you want an EU operator.
  • Public-sector or regulated industry? Scaleway or OVHcloud with their SecNumCloud/ISO 27001-qualified offers.
  • Swiss data residency specifically? Exoscale SOS.
  • "Our data must never leave a machine I control"? Self-hosted MinIO on a DanubeData or Hetzner VPS.

Try DanubeData today

If you've made it this far, you already know why you're looking at alternatives. The best way to validate the fit is to provision a bucket and run the rclone script above against a copy of your R2 data.

The S3 API is a portable skill. The choice of operator is a legal and economic one — and in 2026 the European answer is finally cost-competitive with the US incumbents. Your data, your jurisdiction, your invoice in euros.


Ready to Get Started?

Deploy your infrastructure in minutes with DanubeData's managed services.