Object storage breaches are among the most common and damaging security incidents in cloud computing. Misconfigured S3 buckets have exposed billions of records—from customer databases to medical records to classified government documents. The pattern is always the same: a simple misconfiguration turns private data into a public resource, discoverable by anyone with a web browser.
Whether you are using AWS S3, a self-hosted solution like MinIO or Ceph, or a managed European provider like DanubeData, the security principles are the same. This guide covers everything you need to protect your S3 storage in 2026.
Why S3 Security Matters
Real-World Breach Examples
These incidents are composites based on publicly reported breach patterns—they happen regularly across industries:
- Healthcare provider (2024): A misconfigured bucket exposed 3.2 million patient records including names, diagnoses, and insurance details. The bucket was publicly readable for 14 months before discovery. Resulting fines exceeded $4.5 million.
- E-commerce platform (2023): An application deployed with hardcoded access keys in a public GitHub repository. Attackers used the keys to download 800,000 customer payment records within hours of the commit.
- SaaS startup (2024): A developer enabled public-read ACL on a staging bucket for testing and forgot to revert. The bucket contained production database backups with full user credentials. Discovered by a security researcher six months later.
- Government contractor (2023): An S3 bucket used for file sharing between departments was configured with public-read-write. Attackers uploaded malicious files that were subsequently downloaded by employees.
- Media company (2025): Versioning was not enabled. A ransomware attack encrypted all objects in place. Without versioning or external backups, the company lost two years of digital assets permanently.
The Cost of a Breach
| Impact Category | Typical Cost | Notes |
|---|---|---|
| GDPR Fine | Up to 4% of annual revenue | Or EUR 20 million, whichever is greater |
| Breach notification costs | $50,000–$500,000 | Legal, communications, credit monitoring |
| Forensic investigation | $20,000–$200,000 | Identifying scope and root cause |
| Business interruption | $10,000–$1M+/day | Depends on business size |
| Reputation damage | Incalculable | Customer churn, lost contracts |
| Average total cost (IBM 2024) | $4.88 million | Per breach incident globally |
The good news: S3 security is entirely preventable. Every breach listed above could have been avoided with the practices in this guide.
1. Access Key Management
Access keys are the most common attack vector for S3 breaches. A leaked key gives an attacker full access to everything that key can reach.
Use Per-Application Keys
Never share a single access key across multiple applications. Create dedicated keys for each service:
# Bad: One key for everything
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Good: Separate keys per application
# web-app-production
AWS_ACCESS_KEY_ID=AKIAWEBAPP001PROD
AWS_SECRET_ACCESS_KEY=...
# backup-service
AWS_ACCESS_KEY_ID=AKIABACKUP001PROD
AWS_SECRET_ACCESS_KEY=...
# analytics-pipeline
AWS_ACCESS_KEY_ID=AKIAANALYT001PROD
AWS_SECRET_ACCESS_KEY=...
On DanubeData, you create access keys per team. Best practice is to create separate keys for different applications or environments:
# Create a key for your web application
# Dashboard: Storage > Access Keys > Create Access Key
# Name it descriptively: "web-app-production-uploads"
# Create a separate key for backups
# Name: "backup-service-nightly"
# Each key has independent credentials
# Revoking one doesn't affect others
Rotate Keys Regularly
Access keys should be rotated at least every 90 days. Here is a safe rotation process:
# Step 1: Create new access key
# (via dashboard or API)
# Step 2: Update your application with the new key
# Use environment variables, never hardcode
export AWS_ACCESS_KEY_ID="new-key-id"
export AWS_SECRET_ACCESS_KEY="new-secret-key"
# Step 3: Deploy and verify application works
# Step 4: Monitor for errors (wait 24 hours)
# Step 5: Delete the old key
# Only after confirming nothing uses it
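Step 3's verification can be scripted into your deployment pipeline. A minimal sketch (the helper name is ours, not part of any SDK): build a boto3 client with the new key pair and make one cheap authenticated call before deleting the old key.

```python
def verify_new_key(s3_client, bucket):
    """Return True if the client's credentials can access the bucket.

    Pass a boto3 S3 client built with the NEW key pair, e.g.
    boto3.client('s3', endpoint_url=..., aws_access_key_id=...,
    aws_secret_access_key=...). head_bucket is a cheap call that
    fails with 403 when the credentials are invalid or revoked.
    """
    try:
        s3_client.head_bucket(Bucket=bucket)
        return True
    except Exception:
        return False
```

Only proceed to step 5 (deleting the old key) once this returns True for every application that was migrated.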
Never Hardcode Keys
This is the single most important rule. Never put access keys in source code:
# TERRIBLE: Hardcoded in application code
import boto3
s3 = boto3.client('s3',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',      # NEVER DO THIS
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG'  # NEVER DO THIS
)
# GOOD: Load from environment variables
import boto3
import os
s3 = boto3.client('s3',
    endpoint_url=os.environ['S3_ENDPOINT'],
    aws_access_key_id=os.environ['S3_ACCESS_KEY'],
    aws_secret_access_key=os.environ['S3_SECRET_KEY'],
    region_name=os.environ.get('S3_REGION', 'fsn1')
)
Use a Secrets Manager
For production deployments, use a secrets manager instead of environment variables:
# Using HashiCorp Vault
vault kv put secret/myapp/s3 \
    access_key="AKIAIOSFODNN7EXAMPLE" \
    secret_key="wJalrXUtnFEMI/K7MDENG"
# Application reads at runtime
import hvac
client = hvac.Client(url='https://vault.example.com:8200')
secret = client.secrets.kv.v2.read_secret_version(path='myapp/s3')
access_key = secret['data']['data']['access_key']
secret_key = secret['data']['data']['secret_key']
Scan for Leaked Keys
Use automated tools to detect accidentally committed credentials:
# Install git-secrets (pre-commit hook)
git secrets --install
git secrets --register-aws
# Or use trufflehog for scanning existing repos
trufflehog git file://./my-repo --only-verified
# GitHub also scans for leaked secrets automatically
# Enable "Secret scanning" in repository settings
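For a sense of what these scanners match on, here is a toy illustration in Python. It is not a replacement for git-secrets or trufflehog; it only matches the standard AWS access key ID format (a 4-character prefix such as AKIA or ASIA followed by 16 uppercase letters and digits).

```python
import re

# AWS access key IDs are 20 chars: a 4-char prefix (AKIA for long-lived
# user keys, ASIA for temporary STS keys) plus 16 uppercase letters/digits.
KEY_ID_PATTERN = re.compile(r'\b(?:AKIA|ASIA)[0-9A-Z]{16}\b')

def find_key_ids(text):
    """Return any substrings that look like AWS access key IDs."""
    return KEY_ID_PATTERN.findall(text)
```

Secret *keys* are much harder to match reliably (they are 40 characters of mixed base64-like text), which is why the dedicated tools combine patterns with entropy checks and live verification.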
2. Bucket Policies and Access Control
Principle of Least Privilege
Every application should have only the minimum permissions it needs. Common permission patterns:
| Application Role | Required Permissions | Unnecessary Permissions to Deny |
|---|---|---|
| Web app (user uploads) | PutObject, GetObject, DeleteObject | ListBucket, DeleteBucket, policy changes |
| CDN / static assets | GetObject only | All write operations |
| Backup service | PutObject, ListBucket | DeleteObject, GetObject (write-only) |
| Analytics reader | GetObject, ListBucket | All write operations |
| Admin / operations | Full access | Should only be used for emergencies |
Bucket Policy Examples
Restrict access to specific IP addresses or CIDR ranges:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOfficeIPOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "203.0.113.0/24",
            "198.51.100.0/24"
          ]
        }
      }
    }
  ]
}
Enforce HTTPS-only access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonHTTPS",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
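Rather than hand-editing this JSON for each bucket, the deny-HTTP statement can be generated programmatically. A small sketch (the function name is ours; apply the result with `aws s3api put-bucket-policy` or boto3's `put_bucket_policy`, which takes the policy as a JSON string):

```python
import json

def https_only_policy(bucket):
    """Build a bucket policy that denies any request made over plain HTTP."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyNonHTTPS",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            # SecureTransport is "false" exactly when the request used HTTP
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

# put_bucket_policy expects a JSON string, not a dict:
policy_json = json.dumps(https_only_policy("my-bucket"))
```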
Disable ACLs When Possible
ACLs (Access Control Lists) are a legacy access control mechanism. They are harder to audit and easier to misconfigure than bucket policies. Modern S3 deployments should disable ACLs entirely:
# AWS CLI: Disable ACLs on a bucket
aws s3api put-bucket-ownership-controls \
    --bucket my-bucket \
    --ownership-controls '{"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]}'
# With ACLs disabled, ALL access is controlled through:
# 1. Bucket policies
# 2. IAM policies
# 3. Access key permissions
3. Encryption: Protecting Data at Rest
Server-Side Encryption (SSE-S3)
SSE-S3 encrypts objects automatically using AES-256. The storage provider manages all encryption keys:
# Upload with SSE-S3 encryption
aws s3 cp myfile.pdf s3://my-bucket/myfile.pdf \
    --sse AES256 \
    --endpoint-url https://s3.danubedata.ro
# Or enforce encryption on all uploads via bucket policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
Server-Side Encryption with KMS (SSE-KMS)
SSE-KMS uses a dedicated Key Management Service for encryption key control. This provides additional audit trails and key rotation capabilities:
# Upload with SSE-KMS
aws s3 cp myfile.pdf s3://my-bucket/myfile.pdf \
    --sse aws:kms \
    --sse-kms-key-id "arn:aws:kms:eu-central-1:123456789:key/my-key-id"
# Benefits over SSE-S3:
# - Separate key access permissions
# - Automatic key rotation
# - CloudTrail logs every key usage
# - Can disable key to immediately block all access
Client-Side Encryption
For the highest level of security, encrypt data before it reaches the storage provider. The provider never sees unencrypted data:
# Python: Client-side encryption with AES-256-GCM
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os
import boto3

def encrypt_and_upload(file_path, bucket, key, encryption_key):
    """Encrypt file locally, then upload ciphertext to S3."""
    # Generate a random 96-bit nonce
    nonce = os.urandom(12)
    # Read the plaintext file
    with open(file_path, 'rb') as f:
        plaintext = f.read()
    # Encrypt with AES-256-GCM
    aesgcm = AESGCM(encryption_key)
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)
    # Prepend nonce to ciphertext for storage
    encrypted_data = nonce + ciphertext
    # Upload encrypted data to S3
    s3 = boto3.client('s3',
        endpoint_url='https://s3.danubedata.ro',
        region_name='fsn1'
    )
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=encrypted_data,
        Metadata={'x-encryption': 'AES-256-GCM'}
    )

# Generate a 256-bit encryption key (store securely!)
encryption_key = AESGCM.generate_key(bit_length=256)
encrypt_and_upload('sensitive.pdf', 'my-bucket', 'docs/sensitive.pdf', encryption_key)
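The download side reverses these steps: fetch the object (e.g. with get_object), split off the 12-byte nonce that was prepended, and decrypt. A sketch of just the decryption step (the function name is ours):

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_blob(encrypted_data, encryption_key):
    """Reverse encrypt_and_upload's framing: nonce (12 bytes) + ciphertext.

    AES-GCM authenticates the data, so decrypt() raises InvalidTag if the
    ciphertext was tampered with or the wrong key is supplied.
    """
    nonce, ciphertext = encrypted_data[:12], encrypted_data[12:]
    return AESGCM(encryption_key).decrypt(nonce, ciphertext, None)
```

Note that if the encryption key is lost, the data is unrecoverable by anyone, including the storage provider. Back up keys with the same care as the data itself.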
Encryption Decision Matrix
| Method | Key Management | Performance Impact | Best For |
|---|---|---|---|
| SSE-S3 | Provider managed | None | General data, compliance baseline |
| SSE-KMS | KMS managed (you control) | Minimal (API call per request) | Regulated data, audit requirements |
| Client-side | You manage entirely | Moderate (CPU on client) | Highly sensitive data, zero-trust |
| SSE-S3 + Client-side | Both | Moderate | Maximum security (defense in depth) |
4. HTTPS Enforcement (TLS-Only Access)
All S3 traffic should use HTTPS. Unencrypted HTTP transmits data—including access credentials—in plaintext, making it trivial for attackers on the network path to intercept.
# Always use HTTPS endpoints
# Good
aws s3 ls --endpoint-url https://s3.danubedata.ro
# Bad - credentials sent in plaintext
aws s3 ls --endpoint-url http://s3.danubedata.ro
# Enforce HTTPS in your application configuration
S3_ENDPOINT=https://s3.danubedata.ro
S3_USE_SSL=true
DanubeData's S3 endpoint (s3.danubedata.ro) enforces HTTPS by default. HTTP connections are rejected at the load balancer level, so credentials are never transmitted in plaintext.
Verify TLS Configuration
# Check the TLS certificate and protocol
openssl s_client -connect s3.danubedata.ro:443 -servername s3.danubedata.ro 2>/dev/null |
openssl x509 -noout -subject -dates -issuer
# Verify minimum TLS version (should be 1.2+)
curl -v --tlsv1.2 https://s3.danubedata.ro 2>&1 | grep "SSL connection"
# Expected: SSL connection using TLSv1.3
5. CORS Configuration
Cross-Origin Resource Sharing (CORS) controls which web domains can access your bucket from a browser. A misconfigured CORS policy can allow any website to read your objects.
Restrictive CORS Configuration
# cors-config.json - Only allow your domain
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://myapp.example.com"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag", "x-amz-request-id"],
      "MaxAgeSeconds": 3600
    }
  ]
}
# Apply the CORS configuration
aws s3api put-bucket-cors \
    --bucket my-bucket \
    --cors-configuration file://cors-config.json \
    --endpoint-url https://s3.danubedata.ro
Common CORS Mistakes
# DANGEROUS: Allow all origins
"AllowedOrigins": ["*"]
# DANGEROUS: Allow all methods
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"]
# DANGEROUS: Allow all headers including auth
"AllowedHeaders": ["*"]
# SAFE: Specific origins, limited methods
"AllowedOrigins": ["https://myapp.example.com", "https://admin.example.com"]
"AllowedMethods": ["GET", "PUT"]
"AllowedHeaders": ["Content-Type", "Content-Length"]
6. Bucket Naming Security
Bucket names are globally unique identifiers. Predictable names make buckets easy targets for enumeration attacks.
Bad Naming Patterns
# Easily guessable names - attackers will try these
my-company-backups
my-company-uploads
my-company-production
my-company-staging
client-data
customer-files
database-backups
Better Naming Patterns
# Include random suffixes or unique identifiers
myco-uploads-a7f3x9k2
myco-prod-assets-2026-eu-west
proj-b47e2c-media-fsn1
# DanubeData auto-prefixes with team ID for isolation:
# dd-{team_id}-{your-name}
# Example: dd-019cd6e4-uploads
# This provides namespace isolation and prevents guessing
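If your provider does not prefix names for you, the random suffix can be generated programmatically. A sketch using Python's secrets module (the project/purpose naming scheme is just an example):

```python
import secrets
import string

def random_bucket_name(project, purpose, length=8):
    """Build an S3-valid bucket name with an unguessable random suffix.

    S3 bucket names must use lowercase letters, digits, and hyphens,
    so the suffix alphabet is restricted accordingly. secrets (not
    random) is used so suffixes are cryptographically unpredictable.
    """
    alphabet = string.ascii_lowercase + string.digits
    suffix = ''.join(secrets.choice(alphabet) for _ in range(length))
    return f"{project}-{purpose}-{suffix}"

# Produces names like myco-uploads-a7f3x9k2 (suffix differs every call)
print(random_bucket_name("myco", "uploads"))
```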
7. Monitoring and Audit Logging
You cannot secure what you cannot see. Enable comprehensive logging for all S3 operations.
What to Log
- Access logs: Who accessed which objects and when
- Authentication failures: Failed access key attempts
- Permission changes: Bucket policy or ACL modifications
- Object deletions: What was deleted, by whom
- Large downloads: Unusual data transfer volumes
Setting Up S3 Server Access Logging
# Enable access logging (logs go to a separate bucket)
aws s3api put-bucket-logging \
    --bucket my-bucket \
    --bucket-logging-status '{
      "LoggingEnabled": {
        "TargetBucket": "my-bucket-logs",
        "TargetPrefix": "access-logs/"
      }
    }' \
    --endpoint-url https://s3.danubedata.ro
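Once enabled, each request becomes one space-delimited line in the target bucket. A minimal parser for the leading fields of the standard S3 server access log format, as a sketch; real log lines contain further fields after the status code (bytes sent, turnaround time, user agent, and so on) which this ignores:

```python
import re

# Leading fields of an S3 server access log line:
# bucket-owner bucket [time] remote-ip requester request-id operation
# key "request-uri" http-status ...
LOG_PREFIX = re.compile(
    r'(\S+) (\S+) \[([^\]]+)\] (\S+) (\S+) (\S+) (\S+) (\S+) "([^"]*)" (\S+)'
)

def parse_access_log_line(line):
    """Extract the leading fields of one access log entry, or None."""
    m = LOG_PREFIX.match(line)
    if not m:
        return None
    keys = ('owner', 'bucket', 'time', 'remote_ip', 'requester',
            'request_id', 'operation', 'key', 'request_uri', 'status')
    return dict(zip(keys, m.groups()))
```

Feeding parsed entries into the alert rules below turns raw logs into actionable signals.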
Alerting on Suspicious Activity
Set up alerts for these patterns:
- Bulk downloads: More than 1,000 GET requests in 5 minutes from a single IP
- Failed authentication: More than 10 failed attempts in 1 minute
- Policy changes: Any modification to bucket policies or ACLs
- New IP addresses: Access from previously unseen IP ranges
- After-hours access: Operations outside normal business hours
- Large uploads: Objects over a defined size threshold
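The first two rules above are simple sliding-window counters. A self-contained sketch (the class is ours; the thresholds mirror the bullets, and wiring it to your actual log stream is left out):

```python
from collections import defaultdict, deque

class RateAlert:
    """Alert when one client exceeds `threshold` events per `window_seconds`."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.events = defaultdict(deque)  # client_ip -> recent timestamps

    def record(self, client_ip, timestamp):
        """Record one event; return True if this client is over the threshold."""
        q = self.events[client_ip]
        q.append(timestamp)
        # Drop events that fell out of the sliding window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold

# Bulk-download rule: >1,000 GETs in 5 minutes from a single IP
bulk_downloads = RateAlert(threshold=1000, window_seconds=300)
# Failed-auth rule: >10 failures in 1 minute
failed_auth = RateAlert(threshold=10, window_seconds=60)
```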
8. Preventing Public Access Mistakes
The most devastating S3 breaches happen when buckets are accidentally made public. Implement multiple layers of protection.
Block Public Access (Account Level)
# Block ALL public access at the account level
# This overrides any bucket-level settings
aws s3control put-public-access-block \
    --account-id 123456789012 \
    --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
Block Public Access (Bucket Level)
# Block public access on individual bucket
aws s3api put-public-access-block \
    --bucket my-bucket \
    --public-access-block-configuration \
    "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" \
    --endpoint-url https://s3.danubedata.ro
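You can audit the result programmatically: `get-public-access-block` (CLI) or boto3's `get_public_access_block` returns the four flags. A small sketch that checks all of them are enabled (the function name is ours; the response shape matches boto3's, with the flags nested under `PublicAccessBlockConfiguration`):

```python
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_fully_blocked(response):
    """True only if every public-access-block flag is enabled.

    `response` is the dict returned by boto3's get_public_access_block.
    A missing flag counts as NOT blocked, so partial configs fail too.
    """
    config = response.get("PublicAccessBlockConfiguration", {})
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)
```

Running a check like this across all buckets on a schedule catches configuration drift before an attacker does.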
Use Presigned URLs for Temporary Access
Instead of making objects public, generate temporary presigned URLs:
# Python: Generate a presigned URL (expires in 1 hour)
import boto3
s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key',
    region_name='fsn1'
)

url = s3.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': 'my-bucket',
        'Key': 'documents/report.pdf'
    },
    ExpiresIn=3600  # URL valid for 1 hour
)
print(f"Temporary download link: {url}")

# For uploads (allow users to upload without giving them credentials)
upload_url = s3.generate_presigned_url(
    'put_object',
    Params={
        'Bucket': 'my-bucket',
        'Key': 'uploads/user-file.pdf',
        'ContentType': 'application/pdf'
    },
    ExpiresIn=900  # 15 minutes to complete upload
)
9. Data Classification and Retention Policies
Not all data requires the same level of protection. Classify your data and apply appropriate controls:
| Classification | Examples | Encryption | Access Control | Retention |
|---|---|---|---|---|
| Confidential | PII, financial records, health data | SSE-KMS + client-side | Named individuals only | Per regulation (GDPR: purpose-limited) |
| Internal | Source code, internal docs, configs | SSE-S3 minimum | Team-based access | Business need |
| Restricted | Customer-uploaded content | SSE-S3 | Application-level isolation | Account lifecycle |
| Public | Marketing assets, public docs | Optional | Read-only for public | Indefinite |
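Classification labels can travel with the objects themselves as S3 object tags, which bucket policies and lifecycle rules can then match on. boto3's put_object accepts tags as a URL-encoded string in its `Tagging` parameter; a sketch (the tag keys are our convention, not a standard):

```python
from urllib.parse import urlencode

def classification_tags(level, owner_team):
    """Build the URL-encoded tag string S3's put_object expects in `Tagging`."""
    return urlencode({
        "classification": level,   # e.g. confidential / internal / public
        "owner": owner_team,
    })

# Pass the result to put_object, e.g.:
# s3.put_object(Bucket="my-bucket", Key="reports/q1.pdf",
#               Body=data, Tagging=classification_tags("confidential", "finance"))
```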
Implementing Retention with Lifecycle Policies
# lifecycle-policy.json
{
  "Rules": [
    {
      "ID": "DeleteTempUploadsAfter30Days",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "temp-uploads/"
      },
      "Expiration": {
        "Days": 30
      }
    },
    {
      "ID": "DeleteOldLogsAfter90Days",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Expiration": {
        "Days": 90
      }
    },
    {
      "ID": "DeleteOldBackupsAfter365Days",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "backups/"
      },
      "Expiration": {
        "Days": 365
      }
    }
  ]
}
# Apply the lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --lifecycle-configuration file://lifecycle-policy.json \
    --endpoint-url https://s3.danubedata.ro
10. GDPR Considerations for EU Data
If you store personal data of EU residents, GDPR imposes specific obligations on your S3 usage.
Key GDPR Requirements for Object Storage
- Data residency: Know where your data is physically stored. EU data should stay in the EU.
- Right to erasure (Article 17): You must be able to permanently delete a user's data on request—including all versions and backups.
- Data portability (Article 20): Users can request their data in a machine-readable format.
- Encryption: GDPR considers encryption a key safeguard (Article 32).
- Breach notification: You must report data breaches within 72 hours (Article 33).
- Data Processing Agreement (DPA): Required with your storage provider.
Why EU Data Residency Matters
The Schrems II ruling (2020) invalidated the EU-US Privacy Shield. Transferring personal data to US-based providers requires additional safeguards:
- Standard Contractual Clauses (SCCs) with supplementary measures
- Transfer Impact Assessments (TIAs)
- Risk of US government access under FISA Section 702 and the CLOUD Act
DanubeData eliminates this complexity entirely. All data is stored in Germany (Falkenstein datacenter), on European-owned infrastructure. No US entity has jurisdiction over the data. No SCCs or TIAs required.
Implementing Right to Erasure
# Script to fully delete a user's data (GDPR Article 17)
import boto3
def gdpr_delete_user_data(user_id, bucket):
    """Delete ALL objects and versions for a user."""
    s3 = boto3.client('s3',
        endpoint_url='https://s3.danubedata.ro',
        region_name='fsn1'
    )
    prefix = f"users/{user_id}/"
    # List all object versions (including delete markers)
    paginator = s3.get_paginator('list_object_versions')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        # Delete all versions
        for version in page.get('Versions', []):
            s3.delete_object(
                Bucket=bucket,
                Key=version['Key'],
                VersionId=version['VersionId']
            )
        # Delete all delete markers
        for marker in page.get('DeleteMarkers', []):
            s3.delete_object(
                Bucket=bucket,
                Key=marker['Key'],
                VersionId=marker['VersionId']
            )
    print(f"All data for user {user_id} permanently deleted.")

gdpr_delete_user_data('user-12345', 'my-app-data')
11. Backup and Disaster Recovery
S3 security is not just about preventing unauthorized access—it is also about ensuring data survives disasters.
The 3-2-1 Backup Rule
- 3 copies of your data
- 2 different storage types (e.g., S3 + local disk)
- 1 offsite copy (different geographic location)
Cross-Region Replication
# Replicate critical buckets to a second region or provider
# Using rclone for cross-provider replication:
rclone sync danubedata:my-critical-bucket backup-provider:my-critical-bucket \
    --transfers 16 \
    --checkers 32 \
    --log-file /var/log/s3-replication.log \
    --log-level INFO
# Schedule via cron (every 6 hours)
0 */6 * * * rclone sync danubedata:my-critical-bucket backup-provider:my-critical-bucket
Enable Versioning for Recovery
# Enable versioning to protect against accidental deletions
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://s3.danubedata.ro
# Recover a deleted file
aws s3api list-object-versions \
    --bucket my-bucket \
    --prefix "important-file.pdf" \
    --endpoint-url https://s3.danubedata.ro

# Restore the previous version
aws s3api get-object \
    --bucket my-bucket \
    --key "important-file.pdf" \
    --version-id "v1-abc123" \
    restored-file.pdf \
    --endpoint-url https://s3.danubedata.ro
12. Common Security Mistakes and How to Fix Them
| Mistake | Risk Level | Fix |
|---|---|---|
| Hardcoded access keys in source code | Critical | Use environment variables or secrets manager |
| Public-read ACL on sensitive bucket | Critical | Block public access, disable ACLs |
| No encryption at rest | High | Enable SSE-S3 as minimum baseline |
| HTTP access allowed (no TLS) | High | Enforce HTTPS via bucket policy |
| Single access key for all applications | High | Create per-application keys with least privilege |
| No access logging enabled | Medium | Enable server access logging to a separate bucket |
| CORS set to AllowedOrigins: ["*"] | Medium | Restrict to specific domains |
| No key rotation policy | Medium | Rotate every 90 days minimum |
| Predictable bucket names | Medium | Use random suffixes or provider-managed prefixes |
| No versioning on critical data | Medium | Enable versioning + lifecycle policies |
| No backup of S3 data | Medium | Implement 3-2-1 backup strategy |
| Using root/admin credentials for apps | Critical | Create scoped service credentials |
13. Security Comparison: AWS S3 vs Self-Hosted vs DanubeData
| Security Feature | AWS S3 | Self-Hosted (MinIO/Ceph) | DanubeData |
|---|---|---|---|
| Encryption at rest | SSE-S3, SSE-KMS, SSE-C | Manual setup required | SSE-S3 by default |
| TLS enforcement | HTTPS by default | Must configure TLS certs | HTTPS enforced |
| Access key management | IAM + key rotation | Manual management | Dashboard + API |
| Public access blocking | Account + bucket level | Must configure manually | Private by default |
| Audit logging | CloudTrail + S3 logs | Must build logging | Built-in monitoring |
| EU data residency | EU regions available (US company) | You control location | Germany only (EU company) |
| GDPR compliance | Requires SCCs + TIA | Your responsibility entirely | Built-in, no SCCs needed |
| Multi-tenancy | IAM policies | Complex to implement | Automatic tenant isolation |
| Security maintenance | Managed by AWS | You patch everything | Fully managed |
| Pricing | $23/TB + $90/TB egress | Hardware cost only | EUR 3.99/mo (1TB included) |
Complete S3 Security Checklist
Use this checklist to audit your S3 security posture. Every item should be checked for production buckets:
| Category | Check | Priority |
|---|---|---|
| Access Keys | No hardcoded credentials in source code | P0 |
| | Per-application access keys (not shared) | P0 |
| | Key rotation policy (90 days or less) | P1 |
| | Git pre-commit hooks scanning for secrets | P1 |
| | Secrets stored in a secrets manager | P2 |
| Access Control | Public access blocked at account level | P0 |
| | Least privilege permissions per application | P0 |
| | ACLs disabled (BucketOwnerEnforced) | P1 |
| | Presigned URLs used for temporary access | P1 |
| Encryption | Server-side encryption enabled (SSE-S3 minimum) | P0 |
| | HTTPS enforced (deny HTTP in bucket policy) | P0 |
| | Client-side encryption for confidential data | P2 |
| Monitoring | Access logging enabled | P1 |
| | Alerting on suspicious activity | P1 |
| | Regular security audits (quarterly) | P2 |
| Data Protection | Versioning enabled on critical buckets | P1 |
| | Lifecycle policies for data retention | P1 |
| | Cross-region or cross-provider backups | P2 |
| Compliance | Data residency requirements documented | P1 |
| | GDPR erasure procedure implemented | P1 |
| | DPA signed with storage provider | P1 |
| Network | CORS restricted to specific domains | P1 |
| | IP-based access restrictions where possible | P2 |
Secure Your S3 Storage with DanubeData
DanubeData provides S3-compatible object storage with security built in, not bolted on:
- Private by default: All buckets are private. No accidental public exposure.
- HTTPS enforced: TLS encryption in transit with no HTTP fallback.
- Encryption at rest: SSE-S3 encryption on all stored objects.
- Tenant isolation: Automatic namespace isolation between teams. Bucket names are prefixed with team IDs to prevent enumeration.
- EU data residency: All data stored in Germany (Falkenstein). No US jurisdiction. No SCCs required.
- Simple pricing: EUR 3.99/month includes 1 TB storage + 1 TB egress traffic. No surprise charges.
- S3-compatible API: Works with AWS CLI, boto3, any S3 SDK. Endpoint: s3.danubedata.ro, region: fsn1.
# Get started in 60 seconds
# 1. Create your account at https://danubedata.ro/register
# 2. Create a storage bucket from the dashboard
# 3. Generate access keys
# 4. Configure your application:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="fsn1"
export AWS_ENDPOINT_URL="https://s3.danubedata.ro"
# Upload your first file
aws s3 cp myfile.pdf s3://your-bucket/myfile.pdf --endpoint-url https://s3.danubedata.ro
Create your free DanubeData account and start storing data securely in under a minute.
Or go directly to create an S3 storage bucket—all buckets include encryption, private access, and EU data residency by default.
Have questions about S3 security or GDPR compliance? Contact our team for a free security consultation.