
S3 Versioning and Data Protection: Complete Guide (2026)

Adrian Silaghi
March 19, 2026
11 min read
#s3 #versioning #data protection #object storage #backup #disaster recovery #lifecycle #retention

Every organization eventually loses data. A developer runs a bad script and deletes production assets. A ransomware attack encrypts your files. An application bug overwrites customer uploads with corrupted data. The question is not if it will happen, but whether you can recover when it does.

S3 versioning is the first line of defense. It keeps every version of every object, turning destructive operations into recoverable events. Combined with lifecycle policies, object lock, and backup strategies, it becomes a data protection system that survives nearly any failure scenario.

What Is S3 Versioning?

S3 versioning is a feature that keeps multiple variants of an object in the same bucket. When you enable versioning, every PUT, POST, or DELETE operation creates a new version instead of overwriting or removing the object. Previous versions remain accessible until you explicitly delete them or a lifecycle rule expires them.

How It Works

  • Without versioning: Uploading a file with the same key replaces the previous file permanently. Deleting removes the file forever.
  • With versioning: Uploading a file with the same key creates a new version. The old version remains accessible via its version ID. Deleting adds a “delete marker” but does not remove any data.
# Without versioning:
# PUT report.pdf (v1) -> stored as report.pdf
# PUT report.pdf (v2) -> v1 is GONE FOREVER, replaced by v2
# DELETE report.pdf   -> report.pdf is GONE FOREVER

# With versioning:
# PUT report.pdf (v1) -> stored as report.pdf, version-id: abc111
# PUT report.pdf (v2) -> stored as report.pdf, version-id: def222 (v1 still exists)
# DELETE report.pdf   -> delete marker added (both v1 and v2 still exist)
# GET report.pdf      -> returns 404 (delete marker is "current")
# GET report.pdf?versionId=def222 -> returns v2
# GET report.pdf?versionId=abc111 -> returns v1

Version IDs

Each version of an object gets a unique version ID assigned by the S3 service. These IDs are opaque strings—you cannot predict or control them:

# Example version IDs
abc123def456ghi789jkl012mno345pqr
zyx987wvu654tsr321qpo098nml765kji

# Version IDs are:
# - Unique within the bucket
# - Assigned automatically on upload
# - Immutable (cannot be changed after creation)
# - Required to access non-current versions
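
The version ID is returned in the upload response, so applications can record it at write time. A minimal boto3 sketch (the bucket and key are placeholders; credentials are assumed to come from the environment):

import boto3

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    region_name='fsn1'
)

# On a versioned bucket, every PUT returns the new version's ID
response = s3.put_object(
    Bucket='my-bucket',
    Key='documents/report.pdf',
    Body=b'report contents'
)
print(response['VersionId'])  # opaque, server-assigned string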

Delete Markers

When you delete a versioned object without specifying a version ID, S3 does not actually delete anything. Instead, it inserts a delete marker—a zero-byte object that becomes the “current” version:

# Before delete:
# report.pdf (current) -> version-id: def222
# report.pdf           -> version-id: abc111

# After DELETE report.pdf (no version ID specified):
# report.pdf (current) -> DELETE MARKER (version-id: ghi333)
# report.pdf           -> version-id: def222 (still exists!)
# report.pdf           -> version-id: abc111 (still exists!)

# GET report.pdf -> 404 Not Found (because current version is a delete marker)
# The data is NOT deleted. It is hidden.

Enabling Versioning on Buckets

Using AWS CLI

# Enable versioning on a DanubeData bucket
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://s3.danubedata.ro

# Verify versioning status
aws s3api get-bucket-versioning \
    --bucket my-bucket \
    --endpoint-url https://s3.danubedata.ro

# Expected output:
# {
#     "Status": "Enabled"
# }

Using Python (boto3)

import boto3

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key',
    region_name='fsn1'
)

# Enable versioning
s3.put_bucket_versioning(
    Bucket='my-bucket',
    VersioningConfiguration={
        'Status': 'Enabled'
    }
)

# Check versioning status
response = s3.get_bucket_versioning(Bucket='my-bucket')
print(f"Versioning status: {response.get('Status', 'Not configured')}")

Using JavaScript (AWS SDK v3)

import { S3Client, PutBucketVersioningCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
    endpoint: 'https://s3.danubedata.ro',
    region: 'fsn1',
    credentials: {
        accessKeyId: process.env.S3_ACCESS_KEY,
        secretAccessKey: process.env.S3_SECRET_KEY,
    },
    forcePathStyle: true,
});

await s3.send(new PutBucketVersioningCommand({
    Bucket: 'my-bucket',
    VersioningConfiguration: {
        Status: 'Enabled',
    },
}));

console.log('Versioning enabled successfully');

Versioning States

A bucket can be in one of three versioning states:

| State | Behavior | Can Revert? |
|---|---|---|
| Unversioned (default) | Overwrites and deletes are permanent | N/A (initial state) |
| Enabled | All versions preserved, delete markers used | Can suspend (not disable) |
| Suspended | New uploads get null version ID; existing versions preserved | Can re-enable |

Important: You cannot fully disable versioning once enabled. You can only suspend it. Existing versions remain accessible. This is a safety feature—it prevents accidental data loss from a configuration change.
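
Suspending uses the same API call with a different status. A quick sketch:

# Suspend versioning (existing versions are kept; new uploads get a
# null version ID and overwrite each other)
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Suspended \
    --endpoint-url https://s3.danubedata.ro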

Working with Object Versions

Listing Object Versions

# List all versions of all objects in a bucket
aws s3api list-object-versions \
    --bucket my-bucket \
    --endpoint-url https://s3.danubedata.ro

# List versions of a specific object
aws s3api list-object-versions \
    --bucket my-bucket \
    --prefix "documents/report.pdf" \
    --endpoint-url https://s3.danubedata.ro

# Example output:
# {
#     "Versions": [
#         {
#             "Key": "documents/report.pdf",
#             "VersionId": "def222",
#             "IsLatest": true,
#             "LastModified": "2026-03-15T14:30:00Z",
#             "Size": 1048576
#         },
#         {
#             "Key": "documents/report.pdf",
#             "VersionId": "abc111",
#             "IsLatest": false,
#             "LastModified": "2026-03-10T09:15:00Z",
#             "Size": 982016
#         }
#     ],
#     "DeleteMarkers": []
# }

Listing Versions with Python

import boto3
from datetime import datetime

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-key',
    region_name='fsn1'
)

def list_object_versions(bucket, prefix=''):
    """List all versions of objects matching a prefix."""
    paginator = s3.get_paginator('list_object_versions')

    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        # Print versions
        for version in page.get('Versions', []):
            status = "CURRENT" if version['IsLatest'] else "previous"
            size_kb = version['Size'] / 1024
            modified = version['LastModified'].strftime('%Y-%m-%d %H:%M:%S')
            print(f"  [{status}] {version['Key']} "
                  f"v={version['VersionId'][:12]}... "
                  f"size={size_kb:.1f}KB "
                  f"modified={modified}")

        # Print delete markers
        for marker in page.get('DeleteMarkers', []):
            status = "CURRENT" if marker['IsLatest'] else "previous"
            modified = marker['LastModified'].strftime('%Y-%m-%d %H:%M:%S')
            print(f"  [{status}] {marker['Key']} "
                  f"DELETE MARKER v={marker['VersionId'][:12]}... "
                  f"modified={modified}")

list_object_versions('my-bucket', 'documents/')

Retrieving a Specific Version

# Download a specific version
aws s3api get-object \
    --bucket my-bucket \
    --key "documents/report.pdf" \
    --version-id "abc111" \
    restored-report.pdf \
    --endpoint-url https://s3.danubedata.ro

# Python: Download a specific version
s3.download_file(
    Bucket='my-bucket',
    Key='documents/report.pdf',
    Filename='restored-report.pdf',
    ExtraArgs={'VersionId': 'abc111'}
)

Restoring a Deleted Object

# When an object is "deleted" (has a delete marker), restore it by
# removing the delete marker:

# Step 1: Find the delete marker's version ID
aws s3api list-object-versions \
    --bucket my-bucket \
    --prefix "documents/report.pdf" \
    --endpoint-url https://s3.danubedata.ro

# Step 2: Delete the delete marker (this restores the object)
aws s3api delete-object \
    --bucket my-bucket \
    --key "documents/report.pdf" \
    --version-id "ghi333" \
    --endpoint-url https://s3.danubedata.ro

# The previous version (def222) becomes the current version again
# GET documents/report.pdf now returns the file

Python: Full Restore Script

import boto3

def restore_deleted_object(bucket, key, endpoint='https://s3.danubedata.ro'):
    """Restore a deleted object by removing its delete marker."""
    s3 = boto3.client('s3',
        endpoint_url=endpoint,
        region_name='fsn1'
    )

    # Find the current delete marker. Filter for an exact key match,
    # since Prefix also matches longer keys (e.g. "report.pdf.bak").
    response = s3.list_object_versions(
        Bucket=bucket,
        Prefix=key
    )

    delete_markers = [m for m in response.get('DeleteMarkers', [])
                      if m['Key'] == key]
    if not delete_markers:
        print(f"No delete marker found for {key}")
        return False

    latest_marker = next((m for m in delete_markers if m['IsLatest']), None)
    if latest_marker is None:
        print(f"{key} is not deleted (no current delete marker)")
        return False

    # Remove the delete marker
    s3.delete_object(
        Bucket=bucket,
        Key=key,
        VersionId=latest_marker['VersionId']
    )

    print(f"Restored {key} (removed delete marker {latest_marker['VersionId']})")
    return True

# Usage
restore_deleted_object('my-bucket', 'documents/report.pdf')

Soft Delete vs Hard Delete

Understanding the difference is critical for data protection:

| Operation | Command | Effect | Recoverable? |
|---|---|---|---|
| Soft delete | DELETE key (no version ID) | Adds delete marker; data preserved | Yes |
| Hard delete | DELETE key?versionId=X | Permanently removes that specific version | No |
| Purge all versions | Delete each version ID individually | Permanently removes all data | No |

# Soft delete (recoverable) - DO NOT specify version ID
aws s3 rm s3://my-bucket/documents/report.pdf \
    --endpoint-url https://s3.danubedata.ro
# This adds a delete marker. All versions preserved.

# Hard delete (permanent) - specify the exact version ID
aws s3api delete-object \
    --bucket my-bucket \
    --key "documents/report.pdf" \
    --version-id "abc111" \
    --endpoint-url https://s3.danubedata.ro
# This PERMANENTLY removes version abc111. Cannot be undone.

Lifecycle Policies

Versioning without lifecycle policies will cause your storage costs to grow indefinitely. Lifecycle policies automate the cleanup of old versions.

Expiration Rules

# lifecycle-versions.json
{
    "Rules": [
        {
            "ID": "DeleteOldVersionsAfter90Days",
            "Status": "Enabled",
            "Filter": {},
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 90
            }
        },
        {
            "ID": "CleanupDeleteMarkersAfter30Days",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 30
            }
        },
        {
            "ID": "KeepOnly5VersionsOfConfigs",
            "Status": "Enabled",
            "Filter": {
                "Prefix": "configs/"
            },
            "NoncurrentVersionExpiration": {
                "NewerNoncurrentVersions": 5,
                "NoncurrentDays": 1
            }
        }
    ]
}

# Apply the lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --lifecycle-configuration file://lifecycle-versions.json \
    --endpoint-url https://s3.danubedata.ro

Common Lifecycle Patterns

| Use Case | Policy | Storage Cost Impact |
|---|---|---|
| Keep all versions for 30 days | NoncurrentDays: 30 | Low (max 30 days of history) |
| Keep last 5 versions only | NewerNoncurrentVersions: 5 | Predictable (max 5x object size) |
| Keep versions for 1 year | NoncurrentDays: 365 | Moderate (depends on change frequency) |
| Keep forever (compliance) | No expiration rule | High (grows indefinitely) |
| Delete temp files after 7 days | Prefix: temp/, Days: 7 | Minimal |
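
If you prefer managing lifecycle rules from code rather than JSON files, boto3 exposes the same operation. A minimal sketch applying the 30-day pattern from the table above:

import boto3

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    region_name='fsn1'
)

# Expire non-current versions 30 days after they are superseded
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'KeepVersions30Days',
            'Status': 'Enabled',
            'Filter': {},
            'NoncurrentVersionExpiration': {'NoncurrentDays': 30}
        }]
    }
)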

Cost Implications of Versioning

Versioning stores every version of every object. This directly increases your storage costs. Understanding the cost impact helps you make informed decisions.

Estimating Versioning Storage Overhead

# Calculate total versioned storage with AWS CLI
# List all versions and sum their sizes
aws s3api list-object-versions \
    --bucket my-bucket \
    --endpoint-url https://s3.danubedata.ro \
    --query "sum(Versions[].Size)" \
    --output text

Python: Storage Analysis Script

import boto3
from collections import defaultdict

def analyze_versioning_costs(bucket, endpoint='https://s3.danubedata.ro'):
    """Analyze storage cost of versioned objects."""
    s3 = boto3.client('s3', endpoint_url=endpoint, region_name='fsn1')
    paginator = s3.get_paginator('list_object_versions')

    stats = {
        'current_size': 0,
        'noncurrent_size': 0,
        'current_count': 0,
        'noncurrent_count': 0,
        'delete_markers': 0,
    }
    top_versioned = defaultdict(lambda: {'versions': 0, 'total_size': 0})

    for page in paginator.paginate(Bucket=bucket):
        for version in page.get('Versions', []):
            key = version['Key']
            size = version['Size']
            top_versioned[key]['versions'] += 1
            top_versioned[key]['total_size'] += size

            if version['IsLatest']:
                stats['current_size'] += size
                stats['current_count'] += 1
            else:
                stats['noncurrent_size'] += size
                stats['noncurrent_count'] += 1

        stats['delete_markers'] += len(page.get('DeleteMarkers', []))

    # Print summary
    total = stats['current_size'] + stats['noncurrent_size']
    overhead = (stats['noncurrent_size'] / stats['current_size'] * 100
                if stats['current_size'] > 0 else 0)

    print(f"Current objects:     {stats['current_count']} "
          f"({stats['current_size'] / 1024 / 1024:.1f} MB)")
    print(f"Non-current versions: {stats['noncurrent_count']} "
          f"({stats['noncurrent_size'] / 1024 / 1024:.1f} MB)")
    print(f"Delete markers:      {stats['delete_markers']}")
    print(f"Total storage:       {total / 1024 / 1024:.1f} MB")
    print(f"Versioning overhead: {overhead:.1f}%")

    # Top 5 objects by version count
    sorted_keys = sorted(top_versioned.items(),
                        key=lambda x: x[1]['versions'], reverse=True)
    print(f"
Top 5 most-versioned objects:")
    for key, data in sorted_keys[:5]:
        print(f"  {key}: {data['versions']} versions, "
              f"{data['total_size'] / 1024 / 1024:.1f} MB total")

analyze_versioning_costs('my-bucket')

Cost Example on DanubeData

| Scenario | Current Data | Old Versions | Total Storage | DanubeData Cost |
|---|---|---|---|---|
| Small app, few changes | 50 GB | 10 GB | 60 GB | EUR 3.99/mo (within 1 TB) |
| Active app, daily updates | 200 GB | 150 GB | 350 GB | EUR 3.99/mo (within 1 TB) |
| Media platform, heavy versioning | 500 GB | 800 GB | 1.3 TB | EUR 3.99 + overage |
| No versioning (risky) | 500 GB | 0 GB | 500 GB | EUR 3.99/mo (within 1 TB) |

Key insight: On DanubeData, the EUR 3.99/month plan includes 1 TB of storage. For most applications, versioning fits comfortably within this allowance with no additional cost. The data protection benefit far outweighs the marginal storage increase.
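
A quick way to sanity-check your own numbers is to multiply daily churn by the retention window. A back-of-envelope sketch (all figures are illustrative assumptions that happen to mirror the "active app" row above):

# Rough estimate of versioning overhead under a NoncurrentDays policy
current_gb = 200        # live data
daily_churn_gb = 5      # data overwritten or deleted per day
retention_days = 30     # NoncurrentDays in the lifecycle rule

noncurrent_gb = daily_churn_gb * retention_days  # 150 GB of old versions
total_gb = current_gb + noncurrent_gb            # 350 GB total

print(f"Estimated total: {total_gb} GB "
      f"({noncurrent_gb / current_gb:.0%} versioning overhead)")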

Versioning vs Backups vs Snapshots

These three data protection strategies serve different purposes. Most production systems should use a combination:

| Feature | S3 Versioning | Backups (rclone/restic) | Volume Snapshots |
|---|---|---|---|
| Granularity | Per-object, per-change | Full or incremental bucket copy | Entire volume at a point in time |
| Recovery speed | Instant (version already in bucket) | Minutes to hours (restore from backup) | Fast (create new volume from snapshot) |
| Protection from | Accidental overwrite/delete | Bucket deletion, provider failure | Volume corruption, VM failure |
| Ransomware protection | Partial (attacker with write credentials can also delete versions) | Strong (separate credentials/location) | Strong (separate storage system) |
| Cross-region | Same bucket/region only | Any destination | Depends on provider |
| Setup complexity | One API call | Moderate (cron, credentials, scripts) | Simple (scheduled via dashboard) |
| Cost | Storage for old versions | Storage at backup destination | Snapshot storage (usually cheaper) |
| Best for | Day-to-day protection | Disaster recovery | VM/database recovery |

Recommended strategy: Enable versioning for operational recovery (accidental deletes, overwrites). Add cross-provider backups for disaster recovery. Use snapshots for database and VPS protection.
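
For the cross-provider leg, a scheduled rclone copy running under its own credentials is often enough. A sketch, assuming rclone remotes named "danube" and "offsite" have already been configured with separate access keys:

# Nightly cron job: mirror the bucket to a second provider.
# "danube" and "offsite" are hypothetical rclone remote names.
rclone sync danube:my-bucket offsite:my-bucket-backup \
    --backup-dir offsite:my-bucket-archive/$(date +%F) \
    --fast-list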

Real-World Use Cases

1. Document Management System

A legal firm stores contracts and documents in S3. Versioning provides a complete audit trail of every document change:

# Upload initial contract
aws s3 cp contract-v1.pdf s3://legal-docs/contracts/acme-2026.pdf \
    --endpoint-url https://s3.danubedata.ro

# Client requests changes -> upload revised version
aws s3 cp contract-v2.pdf s3://legal-docs/contracts/acme-2026.pdf \
    --endpoint-url https://s3.danubedata.ro

# Need to review what changed? List all versions:
aws s3api list-object-versions \
    --bucket legal-docs \
    --prefix "contracts/acme-2026.pdf" \
    --endpoint-url https://s3.danubedata.ro

# Download the original version for comparison:
aws s3api get-object \
    --bucket legal-docs \
    --key "contracts/acme-2026.pdf" \
    --version-id "original-version-id" \
    contract-original.pdf \
    --endpoint-url https://s3.danubedata.ro

2. Configuration File Management

Track changes to application configuration files with automatic rollback capability:

import boto3
import json

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    region_name='fsn1'
)

def deploy_config(bucket, config_key, config_data):
    """Deploy a new configuration version."""
    response = s3.put_object(
        Bucket=bucket,
        Key=config_key,
        Body=json.dumps(config_data, indent=2),
        ContentType='application/json'
    )
    version_id = response.get('VersionId', 'unversioned')
    print(f"Deployed config version: {version_id}")
    return version_id

def rollback_config(bucket, config_key, target_version_id):
    """Rollback to a specific configuration version."""
    # Download the old version
    response = s3.get_object(
        Bucket=bucket,
        Key=config_key,
        VersionId=target_version_id
    )
    old_config = response['Body'].read()

    # Upload it as the new current version
    new_response = s3.put_object(
        Bucket=bucket,
        Key=config_key,
        Body=old_config,
        ContentType='application/json'
    )
    print(f"Rolled back to version {target_version_id}")
    print(f"New current version: {new_response.get('VersionId')}")

# Deploy a new config
config = {
    "database_pool_size": 20,
    "cache_ttl": 3600,
    "feature_flags": {"new_ui": True}
}
version = deploy_config('my-configs', 'app/production.json', config)

# Something goes wrong -> rollback
rollback_config('my-configs', 'app/production.json', 'previous-version-id')

3. Media Asset Pipeline

A media company processes and re-processes images. Versioning preserves originals even when optimized versions overwrite the same key:

# Original high-res upload
aws s3 cp photo-original.jpg s3://media-assets/photos/hero-banner.jpg \
    --endpoint-url https://s3.danubedata.ro

# Automated pipeline re-processes and overwrites with optimized version
# (e.g., WebP conversion, resize, compression)
aws s3 cp photo-optimized.webp s3://media-assets/photos/hero-banner.jpg \
    --endpoint-url https://s3.danubedata.ro

# Original is preserved as a previous version
# Can always recover the full-resolution original

Ransomware Protection with Versioning

Ransomware targeting S3 typically encrypts objects in place by overwriting them with encrypted versions. Versioning provides a recovery path:

How Ransomware Attacks S3

  1. Attacker obtains access credentials (phishing, leaked keys, compromised server)
  2. Attacker lists all objects in the bucket
  3. Attacker downloads each object, encrypts it, and uploads the encrypted version with the same key
  4. Attacker demands ransom for the decryption key

Recovery with Versioning

import boto3

def recover_from_ransomware(bucket, after_timestamp,
                             endpoint='https://s3.danubedata.ro'):
    """
    Restore all objects to versions from before the attack.

    after_timestamp: datetime of when the attack started.
    All versions created AFTER this timestamp are assumed compromised.
    """
    s3 = boto3.client('s3', endpoint_url=endpoint, region_name='fsn1')
    paginator = s3.get_paginator('list_object_versions')

    restored = 0
    errors = 0

    # Collect versions across all pages first; a single key's version
    # history can span page boundaries
    versions_by_key = {}
    for page in paginator.paginate(Bucket=bucket):
        for version in page.get('Versions', []):
            versions_by_key.setdefault(version['Key'], []).append(version)

    for key, versions in versions_by_key.items():
        # Sort by last modified (newest first)
        versions.sort(key=lambda v: v['LastModified'], reverse=True)

        # Find the last good version (before the attack)
        good_version = None
        for v in versions:
            if v['LastModified'] < after_timestamp:
                good_version = v
                break

        if good_version and not good_version['IsLatest']:
            try:
                # Copy the good version to become the current version
                s3.copy_object(
                    Bucket=bucket,
                    Key=key,
                    CopySource={
                        'Bucket': bucket,
                        'Key': key,
                        'VersionId': good_version['VersionId']
                    }
                )
                restored += 1
                print(f"Restored: {key} -> version {good_version['VersionId'][:12]}...")
            except Exception as e:
                errors += 1
                print(f"Error restoring {key}: {e}")

    print(f"\nRecovery complete: {restored} restored, {errors} errors")

# Usage: Recover everything to state before March 15, 2026 at 14:00
from datetime import datetime, timezone
attack_time = datetime(2026, 3, 15, 14, 0, 0, tzinfo=timezone.utc)
recover_from_ransomware('my-bucket', attack_time)

Strengthening Ransomware Protection

Versioning alone is not enough—a sophisticated attacker with full credentials can also delete old versions. Additional protections:

  • Object Lock (WORM): Prevents anyone from deleting versions during the retention period
  • MFA Delete: Requires multi-factor authentication to permanently delete versions (see the sketch after this list)
  • Separate backup credentials: Backup system uses different credentials that the main application cannot access
  • Cross-provider replication: Replicate to a second S3 provider with separate credentials
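
As a sketch of the MFA Delete option: the setting lives in the bucket's versioning configuration. Support varies across S3-compatible providers, so check before relying on it; on AWS-style endpoints the call looks like this (the MFA device ARN and token are placeholders):

# Hypothetical values: substitute your MFA device serial and current code
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-device 123456"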

Object Lock and WORM Compliance

Object Lock enforces Write Once Read Many (WORM) protection. A locked object version cannot be deleted or overwritten until its retention period expires; in compliance mode, not even the root account can remove it early.

Retention Modes

| Mode | Can Override? | Use Case |
|---|---|---|
| Governance | Yes, with special permissions | Internal compliance, can be overridden by admin |
| Compliance | No, not even by root | Regulatory compliance (SEC, FINRA, HIPAA) |

Enabling Object Lock

# Object Lock must be enabled at bucket creation time
aws s3api create-bucket \
    --bucket compliance-records \
    --object-lock-enabled-for-bucket \
    --endpoint-url https://s3.danubedata.ro

# Set default retention for the bucket
aws s3api put-object-lock-configuration \
    --bucket compliance-records \
    --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",
                "Days": 2555
            }
        }
    }' \
    --endpoint-url https://s3.danubedata.ro
# 2555 days = ~7 years (common for financial records)

Per-Object Retention

# Upload with per-object retention
aws s3api put-object \
    --bucket compliance-records \
    --key "financial/q1-2026-report.pdf" \
    --body q1-report.pdf \
    --object-lock-mode COMPLIANCE \
    --object-lock-retain-until-date "2033-03-19T00:00:00Z" \
    --endpoint-url https://s3.danubedata.ro

# This object CANNOT be deleted before March 19, 2033
# Even the bucket owner cannot override COMPLIANCE mode

Legal Hold

# Place a legal hold on an object (no expiration date)
aws s3api put-object-legal-hold \
    --bucket compliance-records \
    --key "contracts/disputed-agreement.pdf" \
    --legal-hold '{
        "Status": "ON"
    }' \
    --endpoint-url https://s3.danubedata.ro

# Remove legal hold when litigation concludes
aws s3api put-object-legal-hold \
    --bucket compliance-records \
    --key "contracts/disputed-agreement.pdf" \
    --legal-hold '{
        "Status": "OFF"
    }' \
    --endpoint-url https://s3.danubedata.ro

Implementing a Retention Policy

A practical retention policy balances data protection needs against storage costs. Here is a template:

| Data Type | Versioning | Versions Kept | Retention Period | Object Lock |
|---|---|---|---|---|
| User uploads (images, docs) | Enabled | Last 5 | 30 days for old versions | No |
| Application configs | Enabled | Last 10 | 90 days for old versions | No |
| Database backups | Enabled | All | 90 days | Governance |
| Financial records | Enabled | All | 7 years | Compliance |
| Access logs | Disabled | N/A | 1 year (then delete) | No |
| Temp/staging files | Disabled | N/A | 7 days (then delete) | No |
| Legal/compliance docs | Enabled | All | 10+ years | Compliance + Legal Hold |

Full Lifecycle Configuration for This Policy

{
    "Rules": [
        {
            "ID": "UserUploads-Keep5Versions-30Days",
            "Status": "Enabled",
            "Filter": {"Prefix": "uploads/"},
            "NoncurrentVersionExpiration": {
                "NewerNoncurrentVersions": 5,
                "NoncurrentDays": 30
            }
        },
        {
            "ID": "Configs-Keep10Versions-90Days",
            "Status": "Enabled",
            "Filter": {"Prefix": "configs/"},
            "NoncurrentVersionExpiration": {
                "NewerNoncurrentVersions": 10,
                "NoncurrentDays": 90
            }
        },
        {
            "ID": "Backups-AllVersions-90Days",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 90
            }
        },
        {
            "ID": "Logs-Delete-1Year",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Expiration": {
                "Days": 365
            }
        },
        {
            "ID": "Temp-Delete-7Days",
            "Status": "Enabled",
            "Filter": {"Prefix": "temp/"},
            "Expiration": {
                "Days": 7
            }
        },
        {
            "ID": "CleanupOrphanedDeleteMarkers",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            }
        }
    ]
}

Best Practices Checklist

| Category | Best Practice | Priority |
|---|---|---|
| Versioning Setup | Enable versioning on all production buckets | P0 |
| | Add lifecycle policies to control version retention | P0 |
| | Monitor total storage including old versions | P1 |
| Data Protection | Implement cross-provider backups for critical data | P0 |
| | Use Object Lock for compliance-critical data | P1 |
| | Test restore procedures quarterly | P1 |
| Ransomware Defense | Enable versioning (minimum protection) | P0 |
| | Use separate credentials for backup destination | P0 |
| | Consider Object Lock for immutable backup copies | P1 |
| Cost Management | Set NoncurrentVersionExpiration lifecycle rules | P0 |
| | Clean up orphaned delete markers periodically | P2 |
| | Review and adjust retention periods annually | P2 |
| Operations | Document your versioning and retention policy | P1 |
| | Build restore runbooks and test them | P1 |
| | Alert on unexpected version count growth (see the sketch below) | P2 |
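
The last checklist item is straightforward to automate. A minimal monitoring sketch, assuming a fixed alert threshold you would tune to your bucket's normal churn:

import boto3

# Hypothetical threshold: alert when non-current versions outnumber
# current objects by more than 10x
THRESHOLD_RATIO = 10

s3 = boto3.client('s3',
    endpoint_url='https://s3.danubedata.ro',
    region_name='fsn1'
)
paginator = s3.get_paginator('list_object_versions')

current, noncurrent = 0, 0
for page in paginator.paginate(Bucket='my-bucket'):
    for v in page.get('Versions', []):
        if v['IsLatest']:
            current += 1
        else:
            noncurrent += 1

if current and noncurrent / current > THRESHOLD_RATIO:
    # Wire this into your alerting channel of choice
    print(f"ALERT: {noncurrent} non-current versions vs {current} current")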

Start Protecting Your Data with DanubeData

DanubeData S3-compatible object storage makes data protection straightforward:

  • Versioning support: Enable versioning with a single API call. Recover from accidental deletes and overwrites instantly.
  • Lifecycle policies: Automate version cleanup to control storage costs while maintaining protection.
  • Affordable storage: EUR 3.99/month includes 1 TB storage + 1 TB egress. Versioning overhead fits comfortably within most plans.
  • EU data residency: All data stored in Germany (Falkenstein). GDPR compliant by default.
  • S3-compatible API: Works with AWS CLI, boto3, and any S3 SDK. Endpoint: s3.danubedata.ro, region: fsn1.
  • Private by default: All buckets are private with HTTPS enforced.
# Get started in 3 commands:

# 1. Create a bucket (via dashboard at https://danubedata.ro/storage/create)

# 2. Enable versioning
aws s3api put-bucket-versioning \
    --bucket your-bucket \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://s3.danubedata.ro

# 3. Add a lifecycle policy
aws s3api put-bucket-lifecycle-configuration \
    --bucket your-bucket \
    --lifecycle-configuration '{
        "Rules": [{
            "ID": "CleanOldVersions",
            "Status": "Enabled",
            "Filter": {},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90}
        }]
    }' \
    --endpoint-url https://s3.danubedata.ro

# Done. Your data is now protected against accidental loss.

Create your free DanubeData account and start protecting your data in under a minute.

Or go directly to create an S3 storage bucket with versioning, lifecycle policies, and EU data residency included.

Need help designing a data protection strategy? Contact our team for a free consultation.
