
S3-Compatible Object Storage: A Complete Guide for Developers

Adrian Silaghi
December 6, 2025
12 min read
#s3 #object-storage #cloud-storage #aws #storage #file-upload

Object storage has become the standard for storing unstructured data—from user uploads and backups to static assets and data lakes. Amazon S3 pioneered this space, but S3-compatible alternatives offer the same API with better pricing and control.

What is S3-Compatible Object Storage?

Object storage treats files as discrete units (objects) with metadata, accessed via HTTP APIs. Unlike traditional file systems with hierarchical directories, object storage uses a flat namespace with buckets containing objects.

Key Concepts

  • Buckets: Containers for objects, like top-level folders
  • Objects: Individual files with unique keys (paths)
  • Keys: Unique identifiers within a bucket (e.g., uploads/2025/image.jpg)
  • Metadata: Custom key-value pairs attached to objects
  • Versioning: Keep multiple versions of the same object
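
To make these ideas concrete, the sketch below (using the AWS SDK for JavaScript, as in the rest of this guide) stores one object with custom metadata and reads that metadata back; the endpoint, bucket name, and key are illustrative assumptions.

// Sketch: an object is just a key, a body, and optional metadata inside a bucket
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
    endpoint: 'https://s3.danubedata.ro', // any S3-compatible endpoint works here
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
    s3ForcePathStyle: true,
    signatureVersion: 'v4'
});

async function demoObjectBasics() {
    const bucket = 'my-app-uploads';          // bucket: top-level container (assumed name)
    const key = 'uploads/2025/image.jpg';     // key: unique identifier within the bucket

    await s3.putObject({
        Bucket: bucket,
        Key: key,
        Body: Buffer.from('...file bytes...'),       // placeholder content
        Metadata: { 'original-name': 'photo.jpg' }   // custom key-value metadata
    }).promise();

    // headObject returns size, type, and custom metadata without downloading the body
    const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
    console.log(head.ContentLength, head.Metadata);
}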

Why "S3-Compatible"?

S3-compatible means the service implements the same API as Amazon S3, allowing you to:

  • Use existing S3 client libraries (AWS SDK, boto3, s3cmd)
  • Migrate from AWS S3 by changing the endpoint URL
  • Switch providers without rewriting code
  • Use familiar tools and documentation

Object Storage vs Traditional Storage

| Feature | Object Storage | Block Storage | File System |
| --- | --- | --- | --- |
| Access Method | HTTP API | Block-level | POSIX filesystem |
| Scalability | Unlimited | Limited by disk size | Limited by server |
| Durability | 11 nines (99.999999999%) | Depends on RAID | Single server risk |
| Metadata | Rich, custom | None | Limited |
| Performance | High throughput | Low latency | Medium |
| Cost | Very low per GB | Medium | Varies |
| Best For | Static files, backups, archives | Databases, VMs | Application data |

Common Use Cases

1. User-Generated Content

Store uploaded images, videos, documents, and profile pictures:

// Node.js example - Upload user avatar
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
    endpoint: 'https://s3.danubedata.ro',
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
    s3ForcePathStyle: true,
    signatureVersion: 'v4'
});

async function uploadAvatar(userId, fileBuffer, contentType) {
    const key = `avatars/${userId}/profile.jpg`;

    await s3.putObject({
        Bucket: 'my-app-uploads',
        Key: key,
        Body: fileBuffer,
        ContentType: contentType,
        Metadata: {
            'user-id': userId.toString(),
            'uploaded-at': new Date().toISOString()
        }
    }).promise();

    return `https://s3.danubedata.ro/my-app-uploads/${key}`;
}

2. Static Website Hosting

Host your static frontend directly from object storage:

  • React/Vue/Angular build outputs
  • Marketing websites and landing pages
  • Documentation sites
  • No web server needed; the bucket serves HTTP requests directly

# Deploy your static site
aws s3 sync ./dist s3://my-website --endpoint-url https://s3.danubedata.ro

# Configure index and error documents
aws s3 website s3://my-website --index-document index.html --error-document 404.html \
    --endpoint-url https://s3.danubedata.ro

3. Application Backups

Automated backup storage with lifecycle policies:

# Python backup script
import boto3
from datetime import datetime

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

def backup_database():
    timestamp = datetime.now().strftime('%Y-%m-%d-%H%M%S')
    backup_file = f'database-backup-{timestamp}.sql.gz'
    # Assumes the compressed dump was already written to backup_file (e.g. pg_dump | gzip)

    # Upload backup
    s3.upload_file(
        backup_file,
        'my-app-backups',
        f'databases/{backup_file}',
        ExtraArgs={
            'ServerSideEncryption': 'AES256',
            'StorageClass': 'STANDARD'
        }
    )

    print(f'Backup uploaded: {backup_file}')

4. Media Processing Pipelines

Trigger processing when files are uploaded:

  • Image thumbnailing and resizing
  • Video transcoding
  • Document conversion (PDF, Office files)
  • Extract metadata for search indexing
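
A common pattern, sketched below, is to kick off processing right after the original upload succeeds. The sharp image library, the bucket name, and the key layout are assumptions for illustration, and the s3 client is the one configured in the earlier upload example.

// Hypothetical sketch: store an original and a thumbnail in one step
const sharp = require('sharp'); // assumed image-processing dependency

async function storeWithThumbnail(buffer, key) {
    // Store the original upload
    await s3.putObject({
        Bucket: 'my-app-uploads',
        Key: `originals/${key}`,
        Body: buffer,
        ContentType: 'image/jpeg'
    }).promise();

    // Resize in memory and store the derivative under a parallel prefix
    const thumbnail = await sharp(buffer).resize(200, 200, { fit: 'cover' }).jpeg().toBuffer();
    await s3.putObject({
        Bucket: 'my-app-uploads',
        Key: `thumbnails/${key}`,
        Body: thumbnail,
        ContentType: 'image/jpeg'
    }).promise();
}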

5. Data Lakes and Analytics

Store raw data for analysis:

  • Application logs in JSON format
  • Event streams and clickstream data
  • Machine learning training datasets
  • Query with tools like DuckDB or Pandas
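
A simple ingestion approach is to batch records into newline-delimited JSON and write each batch under a date-partitioned key, so analytics tools can prune by prefix. The bucket name and key layout below are illustrative assumptions, reusing the s3 client from the earlier examples.

// Sketch: write a batch of events as newline-delimited JSON under a dated prefix
async function writeEventBatch(events) {
    const now = new Date();
    const day = now.toISOString().slice(0, 10);            // e.g. 2025-01-17
    const key = `events/${day}/${now.getTime()}.ndjson`;   // one object per batch

    await s3.putObject({
        Bucket: 'my-data-lake',                            // assumed bucket name
        Key: key,
        Body: events.map(e => JSON.stringify(e)).join('\n'),
        ContentType: 'application/x-ndjson'
    }).promise();

    return key;
}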

Essential S3 Features

Versioning

Keep multiple versions of objects to protect against accidental deletions:

// Enable versioning on a bucket
await s3.putBucketVersioning({
    Bucket: 'my-app-uploads',
    VersioningConfiguration: {
        Status: 'Enabled'
    }
}).promise();

// List all versions of an object
const versions = await s3.listObjectVersions({
    Bucket: 'my-app-uploads',
    Prefix: 'documents/report.pdf'
}).promise();

Lifecycle Rules

Automatically delete or archive old objects:

| Use Case | Rule | Benefit |
| --- | --- | --- |
| Temporary uploads | Delete after 7 days | Automatic cleanup |
| Old backups | Delete after 90 days | Cost control |
| Log files | Delete logs/ after 30 days | Compliance |
| Old versions | Keep latest 3 versions | Storage efficiency |
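
These rules can also be set through the standard lifecycle API. The sketch below expires objects under a tmp/ prefix after 7 days; the bucket name and prefix are assumptions, and the s3 client is the one configured earlier.

// Sketch: auto-delete temporary uploads after 7 days
await s3.putBucketLifecycleConfiguration({
    Bucket: 'my-app-uploads',
    LifecycleConfiguration: {
        Rules: [{
            ID: 'expire-tmp-uploads',
            Filter: { Prefix: 'tmp/' },
            Status: 'Enabled',
            Expiration: { Days: 7 }
        }]
    }
}).promise();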

CORS Configuration

Allow web browsers to access your objects:

// Enable CORS for your web app
s3.putBucketCors({
    Bucket: 'my-app-uploads',
    CORSConfiguration: {
        CORSRules: [{
            AllowedOrigins: ['https://myapp.com'],
            AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
            AllowedHeaders: ['*'],
            MaxAgeSeconds: 3600
        }]
    }
}).promise();

Server-Side Encryption

Encrypt objects at rest automatically:

  • SSE-S3: Managed encryption keys (easiest)
  • SSE-KMS: Customer-managed keys (most control)
  • Transparent to applications—decryption happens automatically
  • Required for compliance (GDPR, HIPAA, etc.)
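
If the service exposes the standard bucket-encryption call (the feature table later in this guide lists SSE-S3 and SSE-KMS as supported), default encryption can be switched on once per bucket instead of per upload. A sketch, reusing the earlier s3 client:

// Sketch: make SSE-S3 the bucket default so every new object is encrypted at rest
await s3.putBucketEncryption({
    Bucket: 'my-app-uploads',
    ServerSideEncryptionConfiguration: {
        Rules: [{
            ApplyServerSideEncryptionByDefault: { SSEAlgorithm: 'AES256' }
        }]
    }
}).promise();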

Cost Comparison: AWS S3 vs Alternatives

Let's compare costs for a typical application storing 500GB with 1TB monthly transfer:

| Provider | Storage Cost | Transfer Cost | Total/Month |
| --- | --- | --- | --- |
| AWS S3 (US East) | $11.50 | $92.16 | $103.66 |
| Google Cloud Storage | $10.00 | $120.00 | $130.00 |
| DigitalOcean Spaces | $10.00 (250GB) | $10.00 (1TB) | $30.00* |
| Backblaze B2 | $3.00 | $10.00 | $13.00 |
| DanubeData S3 | Included in $3.99 | Included (1TB) | $3.99 + overage |

*DigitalOcean includes 250GB storage and 1TB transfer in base plan. Actual costs depend on usage patterns.
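
For context on the AWS row: the figures correspond to roughly $0.023 per GB-month for standard storage (500 GB × $0.023 ≈ $11.50) and about $0.09 per GB of internet egress (1,024 GB × $0.09 ≈ $92.16). Exact AWS pricing varies by region, storage class, and free-tier allowances.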

DanubeData S3 Pricing Breakdown

  • Base price: $3.99/month per bucket
  • Includes: 1TB storage + 1TB transfer
  • Storage overage: $3.85/TB/month (beyond 1TB)
  • Transfer overage: $0.80/TB (beyond 1TB)
  • API requests: Included (no per-request charges)

Security Best Practices

1. Use IAM-Style Access Keys

Create separate access keys for different applications:

// Create read-only credentials for public website
// Create full-access credentials for admin panel
// Rotate keys regularly (every 90 days)

2. Enable Encryption by Default

Always encrypt sensitive data:

# Python - Upload with encryption
s3.put_object(
    Bucket='my-bucket',
    Key='sensitive-document.pdf',
    Body=file_data,
    ServerSideEncryption='AES256'  # Encrypt at rest
)

3. Set Proper Bucket Policies

Control who can access your buckets:

// Public read for static website
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-website/*"
    }]
}

// Private bucket - authenticated access only
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-private-bucket",
            "arn:aws:s3:::my-private-bucket/*"
        ],
        "Condition": {
            "StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::account:user/admin"
            }
        }
    }]
}

4. Use Presigned URLs for Temporary Access

Generate time-limited URLs for uploads or downloads:

// Generate presigned URL for secure upload
const url = s3.getSignedUrl('putObject', {
    Bucket: 'my-app-uploads',
    Key: `uploads/${userId}/document.pdf`,
    Expires: 3600, // 1 hour
    ContentType: 'application/pdf'
});

// Client uploads directly to S3 without exposing credentials
fetch(url, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': 'application/pdf' }
});

5. Enable Versioning for Critical Data

Protect against accidental deletions:

  • Keep 30 days of versions for user documents
  • Permanent versioning for legal/compliance data
  • Combine with lifecycle rules to manage costs
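
To keep versioning from quietly inflating storage costs, noncurrent versions can be expired with a lifecycle rule. The sketch below (same assumed bucket and client as earlier) removes versions 30 days after they are superseded:

// Sketch: expire old versions 30 days after a newer version replaces them
await s3.putBucketLifecycleConfiguration({
    Bucket: 'my-app-uploads',
    LifecycleConfiguration: {
        Rules: [{
            ID: 'expire-noncurrent-versions',
            Filter: { Prefix: '' },                            // whole bucket
            Status: 'Enabled',
            NoncurrentVersionExpiration: { NoncurrentDays: 30 }
        }]
    }
}).promise();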

Performance Optimization

Multipart Uploads for Large Files

Split large files into chunks for faster, resumable uploads:

// Node.js - Multipart upload for files > 100MB
const fs = require('fs'); // stream the file from disk

const upload = s3.upload({
    Bucket: 'my-bucket',
    Key: 'large-video.mp4',
    Body: fs.createReadStream('/path/to/video.mp4'),
}, {
    partSize: 10 * 1024 * 1024, // 10MB chunks
    queueSize: 4 // Upload 4 chunks in parallel
});

upload.on('httpUploadProgress', (progress) => {
    console.log(`Uploaded: ${progress.loaded}/${progress.total}`);
});

await upload.promise();

CDN Integration

Use a CDN in front of S3 for better performance:

  • Cloudflare (free tier available)
  • BunnyCDN (cost-effective)
  • Amazon CloudFront (if using AWS)
  • Cache static assets closer to users worldwide

Optimize Object Keys

Use partition-friendly key naming. Most S3 implementations partition data by key range, so keys that share a long common prefix (for example, everything uploaded today) tend to land on the same partition. Putting a high-cardinality value such as a user ID early in the key spreads hot write traffic more evenly:

// Distributes load - high-cardinality segment comes first
uploads/user-123/2025-01-17/image.jpg
uploads/user-456/2025-01-17/document.pdf

// Can create hotspots - every object uploaded today shares one prefix
uploads/2025/01/17/user-123/image.jpg
uploads/2025/01/17/user-456/document.pdf

Date-first layouts remain useful when you rely on prefix-based lifecycle rules or day-by-day listings; choose based on your access pattern.

Monitoring and Debugging

Key Metrics to Track

  • Storage used: Total bytes stored
  • Number of objects: Track growth over time
  • Transfer out: Monitor bandwidth usage
  • 4xx/5xx errors: Detect permission or availability issues
  • Request latency: Upload/download performance
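
If your provider's dashboard does not surface these yet, object count and total storage can be computed directly from the API by paging through listObjectsV2. A sketch, reusing the earlier s3 client:

// Sketch: compute object count and total size by paging through the bucket
async function bucketUsage(bucket) {
    let objects = 0;
    let bytes = 0;
    let token;

    do {
        const page = await s3.listObjectsV2({
            Bucket: bucket,
            ContinuationToken: token
        }).promise();

        for (const obj of page.Contents || []) {
            objects += 1;
            bytes += obj.Size;
        }
        token = page.NextContinuationToken;
    } while (token);

    return { objects, gigabytes: bytes / 1024 ** 3 };
}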

Common Issues and Solutions

| Issue | Cause | Solution |
| --- | --- | --- |
| 403 Forbidden | Permission denied | Check bucket policy and access keys |
| CORS errors | Browser blocking request | Configure CORS rules on bucket |
| Slow uploads | Large files, single part | Use multipart upload |
| High costs | Excessive transfer | Add CDN, optimize delivery |
| Lost files | Accidental deletion | Enable versioning |

Getting Started with DanubeData S3

1. Create Your Bucket

  1. Log in to DanubeData dashboard
  2. Navigate to Storage section
  3. Click "Create Bucket"
  4. Choose bucket name and region
  5. Configure encryption and versioning
  6. Deploy in under 60 seconds

2. Generate Access Keys

From the bucket dashboard:

  1. Click "Access Keys"
  2. Create new key pair
  3. Save credentials securely
  4. Use in your application

3. Configure Your Application

Node.js:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    endpoint: 'https://s3.danubedata.ro',
    region: 'fsn1',
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
    s3ForcePathStyle: true,
    signatureVersion: 'v4'
});

Python:

import boto3

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    region_name='fsn1'
)

PHP (Laravel):

// config/filesystems.php
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => 'fsn1',
    'bucket' => env('AWS_BUCKET'),
    'endpoint' => 'https://s3.danubedata.ro',
    'use_path_style_endpoint' => true,
],

4. Upload Your First Object

// Simple upload example
async function uploadFile(fileBuffer, fileName) {
    try {
        const result = await s3.putObject({
            Bucket: 'my-app-bucket',
            Key: fileName,
            Body: fileBuffer,
            ContentType: 'image/jpeg',
            Metadata: {
                'uploaded-by': 'admin',
                'uploaded-at': new Date().toISOString()
            }
        }).promise();

        console.log('File uploaded successfully');
        return `https://s3.danubedata.ro/my-app-bucket/${fileName}`;
    } catch (error) {
        console.error('Upload failed:', error);
        throw error;
    }
}

Migration from AWS S3

Switching from AWS S3 to DanubeData is straightforward:

Option 1: Change Endpoint (Zero Downtime)

# Update environment variable
AWS_ENDPOINT=https://s3.danubedata.ro

# Your code stays the same - just point to new endpoint

Option 2: Use AWS CLI for Bulk Transfer

# The AWS CLI uses one endpoint per command, so stage through a local directory
# Step 1: download from AWS S3
aws s3 sync s3://my-aws-bucket ./migration-data --region us-east-1

# Step 2: upload to DanubeData
aws s3 sync ./migration-data s3://my-danubedata-bucket \
    --endpoint-url https://s3.danubedata.ro

Option 3: Rclone for Large Migrations

# Configure both sources in rclone
rclone copy aws-s3:my-bucket danubedata-s3:my-bucket -P

# Verify sync
rclone check aws-s3:my-bucket danubedata-s3:my-bucket

Advanced Features

Object Tagging

Add metadata for organization and lifecycle management:

// Tag objects for classification
await s3.putObjectTagging({
    Bucket: 'my-bucket',
    Key: 'document.pdf',
    Tagging: {
        TagSet: [
            { Key: 'department', Value: 'engineering' },
            { Key: 'project', Value: 'migration' },
            { Key: 'retention', Value: '90-days' }
        ]
    }
}).promise();

Bucket Analytics

Track usage patterns in the DanubeData dashboard:

  • Storage breakdown by prefix
  • Most accessed objects
  • Upload/download trends
  • Cost projections
  • Bandwidth usage by time period

Event Notifications (Coming Soon)

Trigger webhooks when objects are created or deleted:

// Future capability - webhook on upload
{
    "event": "s3:ObjectCreated:Put",
    "bucket": "my-app-uploads",
    "object": "images/photo.jpg",
    "size": 1524234,
    "timestamp": "2025-01-17T10:30:00Z"
}

DanubeData S3 Features

| Feature | Status | Details |
| --- | --- | --- |
| S3 API Compatibility | ✓ Full | All standard S3 operations |
| TLS/SSL Encryption | ✓ Included | HTTPS by default |
| Server-Side Encryption | ✓ SSE-S3, SSE-KMS | Transparent encryption |
| Versioning | ✓ Available | Keep multiple versions |
| Lifecycle Rules | ✓ Available | Auto-delete old objects |
| CORS Support | ✓ Configurable | Browser upload support |
| Object Tagging | ✓ Available | Custom metadata |
| Multipart Upload | ✓ Supported | Large file uploads |
| Presigned URLs | ✓ Supported | Temporary access |
| Web Dashboard | ✓ Included | Browse and manage files |
| Automated Backups | ✓ Daily | Disaster recovery |
| European Data Residency | ✓ Hetzner Germany | GDPR compliant |

Comparison: Why DanubeData S3?

| Factor | DanubeData | AWS S3 | Self-Hosted MinIO |
| --- | --- | --- | --- |
| Pricing Model | Simple, predictable | Complex, per-request | Infrastructure only |
| Setup Time | < 1 minute | < 5 minutes | Hours to days |
| Maintenance | Fully managed | Fully managed | Your responsibility |
| Data Location | EU (Germany) | Global regions | Your choice |
| Backups | Automated daily | Your responsibility | Your responsibility |
| Support | Email + Dashboard | Paid plans | Community/Paid |
| Cost (500GB + 1TB transfer) | $3.99/mo | ~$104/mo | $50-80/mo + ops |

Frequently Asked Questions

Is DanubeData S3 truly compatible with AWS S3?

Yes. We implement the S3 API standard, so all standard S3 client libraries work without modification. Simply change the endpoint URL.

What happens if I exceed my storage or transfer limits?

You're charged overage rates: $3.85/TB/month for storage and $0.80/TB for transfer. We'll notify you before overages occur.

Can I use a custom domain for my bucket?

Yes. Configure a CNAME record pointing to your bucket, and objects will be accessible via your domain.

How durable is my data?

We store data on Hetzner's enterprise storage with RAID redundancy and automated daily backups to separate infrastructure.

Can I migrate from AWS S3 without downtime?

Yes. Use our migration tools to copy data in the background, then update your application endpoint when ready.

Do you support static website hosting?

Yes. Enable static website hosting on your bucket and configure index/error documents.

Get Started Today

Stop overpaying for object storage. Deploy your S3-compatible bucket in under a minute:

  1. Create your free DanubeData account
  2. Navigate to Storage → Create Bucket
  3. Choose your bucket name and configuration
  4. Get your access keys
  5. Start uploading

First bucket includes:

  • 1TB storage
  • 1TB monthly transfer
  • Automated daily backups
  • Server-side encryption
  • Web dashboard for file management
  • All for $3.99/month

Questions? Contact our team or check our documentation for implementation guides.
