How to Use S3-Compatible Storage as Nextcloud Primary Storage (2026)

Adrian Silaghi
March 19, 2026
14 min read
#nextcloud #s3 #object storage #self-hosted #cloud storage #file sharing #gdpr #primary storage

Nextcloud is the leading self-hosted file sharing and collaboration platform, used by millions of individuals and organizations worldwide. By default, Nextcloud stores files on the local filesystem — but as your data grows past a few hundred gigabytes, local storage becomes a bottleneck.

Configuring S3-compatible object storage as Nextcloud's primary storage separates compute from storage, enabling you to scale each independently. Your Nextcloud server stays lean and fast, while your files live on S3 storage that grows with your data.

In this guide, we'll walk through every step: from initial configuration to performance tuning, data migration, and production deployment with Docker Compose.

Why Use S3 as Nextcloud Primary Storage?

There are compelling reasons to move Nextcloud's file storage from local disk to S3:

  • Separation of compute and storage: Upgrade or replace your Nextcloud server without migrating terabytes of data. Your files stay in S3 regardless of what happens to the server.
  • Scalability: Local disks have fixed capacity. S3 storage grows automatically — no disk swaps, no RAID rebuilds, no capacity planning.
  • Horizontal scaling: Multiple Nextcloud instances can share the same S3 backend, enabling true horizontal scaling for large organizations.
  • Cost efficiency: S3 storage at €3.99/TB/month is far cheaper than provisioning large NVMe drives on every Nextcloud server.
  • Durability: S3 storage provides redundancy across multiple disks. Local disk failures don't affect your data.
  • Simplified backups: S3 versioning provides point-in-time file recovery without complex backup software.
  • GDPR compliance: Store data in a specific EU region (e.g., Germany) to meet data residency requirements.

Nextcloud Storage Architecture Options

Nextcloud offers three ways to use external storage. Understanding the differences is critical:

| Mode | Configuration | User Experience | Best For |
|---|---|---|---|
| Local (default) | datadirectory in config.php | Files stored as-is on disk | Small deployments (<500 GB) |
| S3 Primary Storage | objectstore in config.php | All files stored in S3 (transparent to users) | Medium to large deployments, horizontal scaling |
| S3 External Storage | External Storage app | S3 bucket appears as a folder in user's files | Accessing existing S3 data alongside local files |

Key Difference: Primary vs. External

  • Primary storage (objectstore): Replaces the local filesystem entirely. All files, including user data, are stored in S3. Files are stored with internal object IDs (not human-readable filenames). This is transparent to users — they see their files normally in Nextcloud.
  • External storage: Mounts an S3 bucket as an additional folder. Files keep their original names on S3. Does not replace local storage — it adds to it.

This guide focuses on S3 as primary storage, which is the recommended approach for new deployments.

Prerequisites

Before you begin, you'll need:

  • A DanubeData account with an S3 storage bucket and access keys
  • Nextcloud 28 or later (we recommend the latest stable release)
  • PHP 8.1+ with the following extensions: curl, xml, gd, mbstring, zip
  • A MySQL/MariaDB or PostgreSQL database for Nextcloud metadata

Create Your S3 Bucket

  1. Sign up for DanubeData
  2. Navigate to Object Storage and create a new bucket (e.g., nextcloud-data)
  3. Generate S3 access keys from the Access Keys section
  4. Note your credentials:
    Endpoint: https://s3.danubedata.ro
    Region: fsn1
    Access Key: your-access-key
    Secret Key: your-secret-key
    Bucket: nextcloud-data
    

Configuring S3 as Primary Storage in config.php

This is the core configuration. Add the objectstore array to your Nextcloud config.php:

<?php
$CONFIG = array(
    // ... existing config ...

    'objectstore' => array(
        'class' => '\OC\Files\ObjectStore\S3',
        'arguments' => array(
            'bucket'         => 'nextcloud-data',
            'hostname'       => 's3.danubedata.ro',
            'port'           => 443,
            'region'         => 'fsn1',
            'key'            => 'YOUR_ACCESS_KEY',
            'secret'         => 'YOUR_SECRET_KEY',
            'use_ssl'        => true,
            'use_path_style' => true,
            // Performance tuning (see below)
            'uploadPartSize' => 524288000,  // 500 MB part size
            'concurrency'    => 5,
        ),
    ),
);

Configuration Parameters Explained

| Parameter | Value | Description |
|---|---|---|
| bucket | nextcloud-data | Your S3 bucket name |
| hostname | s3.danubedata.ro | S3 endpoint (without https://) |
| port | 443 | HTTPS port |
| region | fsn1 | DanubeData region (Falkenstein, Germany) |
| key | YOUR_ACCESS_KEY | S3 access key ID |
| secret | YOUR_SECRET_KEY | S3 secret access key |
| use_ssl | true | Always use HTTPS for security |
| use_path_style | true | Required for S3-compatible providers (not AWS) |
| uploadPartSize | 524288000 | Multipart upload chunk size in bytes (500 MB) |
| concurrency | 5 | Number of parallel upload threads |
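
The use_path_style flag controls where the bucket name appears in the request URL. A minimal Python sketch of the two addressing modes (the helper name is mine, for illustration only):

```python
def s3_url(endpoint: str, bucket: str, key: str, path_style: bool) -> str:
    """Build an object URL, illustrating the two S3 addressing modes."""
    if path_style:
        # Path-style: bucket in the URL path — what most S3-compatible
        # providers (including non-AWS endpoints) expect.
        return f"https://{endpoint}/{bucket}/{key}"
    # Virtual-hosted style: bucket as a subdomain — needs wildcard
    # DNS/TLS on the provider side, which is why it is mostly an AWS thing.
    return f"https://{bucket}.{endpoint}/{key}"

print(s3_url("s3.danubedata.ro", "nextcloud-data", "urn:oid:42", path_style=True))
# https://s3.danubedata.ro/nextcloud-data/urn:oid:42
```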

Important: Once you configure objectstore as primary storage and Nextcloud creates objects in S3, do not switch back to local storage without a proper migration. The files stored in S3 use internal object IDs — they are not human-readable filenames on S3.
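
Concretely, each object is keyed by its database file id, in the form urn:oid:&lt;fileid&gt;. A tiny sketch (helper name is mine) of why a raw bucket listing is unreadable without the Nextcloud database:

```python
def object_key(file_id: int) -> str:
    # Primary-storage objects are keyed by the file's id from the
    # oc_filecache table, not by its filename — the database is the only
    # map from human-readable paths back to objects.
    return f"urn:oid:{file_id}"

print(object_key(1234))  # urn:oid:1234
```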

Configuring S3 as External Storage

If you want to keep local primary storage but mount an S3 bucket as an additional folder, use the External Storage app instead.

Step 1: Enable the External Storage App

# Via occ command
php occ app:enable files_external

# Or enable from Admin > Apps > External storage support

Step 2: Configure via Admin Panel

  1. Go to Admin Settings > External Storage
  2. Add a new storage mount:
    • Folder name: S3 Files
    • External storage type: Amazon S3
    • Bucket: nextcloud-external
    • Hostname: s3.danubedata.ro
    • Port: 443
    • Region: fsn1
    • Enable SSL: Yes
    • Enable Path Style: Yes
    • Access Key: YOUR_ACCESS_KEY
    • Secret Key: YOUR_SECRET_KEY
  3. Click the checkmark to save and test the connection

Step 3: Configure via occ Command

php occ files_external:create \
    "S3 Files" \
    amazons3 \
    amazons3::accesskey \
    --config bucket="nextcloud-external" \
    --config hostname="s3.danubedata.ro" \
    --config port="443" \
    --config region="fsn1" \
    --config use_ssl="true" \
    --config use_path_style="true" \
    --config key="YOUR_ACCESS_KEY" \
    --config secret="YOUR_SECRET_KEY"

Performance Tuning

S3 storage introduces network latency compared to local disk. Proper tuning eliminates most of the performance gap.

Multipart Upload Settings

Large files are uploaded in parts. Tuning the part size and concurrency dramatically affects upload speed:

'objectstore' => array(
    'class' => '\OC\Files\ObjectStore\S3',
    'arguments' => array(
        // ... credentials ...

        // Upload tuning
        'uploadPartSize' => 524288000,  // 500 MB (default: 5 MB)
        'concurrency'    => 5,          // Parallel upload threads
        'putSizeLimit'   => 524288000,  // Switch to multipart above 500 MB
    ),
),

Recommended Settings by Connection Speed

| Server Connection | uploadPartSize | concurrency | Expected Upload Speed |
|---|---|---|---|
| 100 Mbps | 52428800 (50 MB) | 3 | ~10 MB/s |
| 1 Gbps | 524288000 (500 MB) | 5 | ~80 MB/s |
| 10 Gbps | 524288000 (500 MB) | 10 | ~500 MB/s |
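
Part size matters for more than throughput: S3 caps a single multipart upload at 10,000 parts, and every part is a separate HTTP request. A quick back-of-the-envelope check in Python:

```python
import math

MAX_PARTS = 10_000  # S3's limit on parts per multipart upload

def part_count(file_size: int, part_size: int) -> int:
    """Number of parts a multipart upload needs for a file of this size."""
    return math.ceil(file_size / part_size)

# With the 5 MB SDK default, a 100 GB file needs more parts than S3 allows:
print(part_count(100 * 1024**3, 5 * 1024**2))   # 20480 — over the limit
# With 500 MB parts, even a 16 GB upload is only a handful of requests:
print(part_count(16 * 1024**3, 500 * 1024**2))  # 33
```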

PHP Configuration for Large Files

; php.ini adjustments for Nextcloud + S3
upload_max_filesize = 16G
post_max_size = 16G
max_execution_time = 3600
max_input_time = 3600
memory_limit = 512M

; OPcache (essential for Nextcloud performance)
opcache.enable = 1
opcache.memory_consumption = 128
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 10000
opcache.revalidate_freq = 1
opcache.save_comments = 1

Nextcloud config.php Performance Settings

<?php
$CONFIG = array(
    // ... existing config ...

    // Enable file caching
    'filelocking.enabled' => true,
    'memcache.locking' => '\OC\Memcache\Redis',
    'memcache.local' => '\OC\Memcache\APCu',
    'memcache.distributed' => '\OC\Memcache\Redis',

    // Redis for file locking and caching
    'redis' => array(
        'host' => 'redis',
        'port' => 6379,
    ),

    // Note: the upload chunk size used by the web client is an app
    // setting, not a config.php key. Set it with:
    //   php occ config:app:set files max_chunk_size --value 524288000
);

Migrating Existing Nextcloud Data to S3

If you have an existing Nextcloud installation with data on local disk, migrating to S3 primary storage requires careful planning.

Migration Overview

  1. Put Nextcloud in maintenance mode
  2. Back up your database and config.php
  3. Add the objectstore configuration
  4. Run the migration command
  5. Verify the migration
  6. Exit maintenance mode

Step-by-Step Migration

# 1. Enable maintenance mode
php occ maintenance:mode --on

# 2. Back up the database
mysqldump -u nextcloud -p nextcloud_db > nextcloud_backup_$(date +%Y%m%d).sql

# 3. Back up config.php
cp /var/www/nextcloud/config/config.php /var/www/nextcloud/config/config.php.backup

# 4. Add objectstore to config.php (see configuration section above)
# Edit config.php and add the 'objectstore' array

# 5. Copy existing files into S3. Nextcloud ships no official
#    local-to-S3 migration command: each local file must be uploaded as
#    an object named urn:oid:<fileid> (its fileid in the oc_filecache
#    table). Community migration scripts automate this; rehearse the
#    process on a staging copy before touching production.

# 6. Scan files to update the database
php occ files:scan --all

# 7. Verify a few users' files
php occ files:scan admin
php occ files:list admin

# 8. Exit maintenance mode
php occ maintenance:mode --off

Important notes on migration:

  • The migration can take hours for large installations. Plan for downtime.
  • Ensure your S3 bucket has enough free space for all existing data.
  • Monitor the migration with tail -f /var/www/nextcloud/data/nextcloud.log.
  • Keep the local data directory intact until you've fully verified the migration.

Using Objectstore Multibucket for Large Deployments

For very large Nextcloud installations (100+ users or 10+ TB), the objectstore_multibucket feature distributes files across multiple S3 buckets for better performance.

<?php
$CONFIG = array(
    // ... existing config ...

    'objectstore_multibucket' => array(
        'class' => '\OC\Files\ObjectStore\S3',
        'arguments' => array(
            'bucket'         => 'nextcloud-data-',  // Prefix - buckets will be nextcloud-data-0 through nextcloud-data-63
            'num_buckets'    => 64,
            'hostname'       => 's3.danubedata.ro',
            'port'           => 443,
            'region'         => 'fsn1',
            'key'            => 'YOUR_ACCESS_KEY',
            'secret'         => 'YOUR_SECRET_KEY',
            'use_ssl'        => true,
            'use_path_style' => true,
        ),
    ),
);

How Multibucket Works

  • Nextcloud creates num_buckets buckets (e.g., nextcloud-data-0 through nextcloud-data-63)
  • Each user's files are assigned to a specific bucket based on their user ID hash
  • This distributes the load across multiple buckets, avoiding S3 performance limitations on single-bucket operations
  • The num_buckets value must be a power of 2 (1, 2, 4, 8, 16, 32, 64, 128, 256)
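
The per-user assignment is a stable hash reduced modulo the bucket count, so the same user always lands in the same bucket. An illustrative sketch in Python — this mirrors the idea, not Nextcloud's exact algorithm:

```python
import hashlib

def bucket_for_user(user_id: str, prefix: str = "nextcloud-data-",
                    num_buckets: int = 64) -> str:
    # Deterministic hash of the user id, reduced modulo the bucket count.
    # Illustrative only — Nextcloud's own mapper may differ in detail,
    # but the stable-hash-modulo-N principle is the same.
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return f"{prefix}{h % num_buckets}"

print(bucket_for_user("alice"))  # always the same bucket for "alice"
```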

Pre-create the buckets:

#!/bin/bash
# create-nextcloud-multibuckets.sh
# Pre-create all buckets for Nextcloud multibucket configuration

BUCKET_PREFIX="nextcloud-data-"
NUM_BUCKETS=64

for i in $(seq 0 $((NUM_BUCKETS - 1))); do
    BUCKET="${BUCKET_PREFIX}${i}"
    echo "Creating bucket: $BUCKET"
    aws s3 mb "s3://$BUCKET" \
        --endpoint-url https://s3.danubedata.ro \
        --profile danubedata
done

echo "Created $NUM_BUCKETS buckets."

Server-Side Encryption with S3

DanubeData S3 supports server-side encryption (SSE) to protect data at rest. You can enable this in Nextcloud:

'objectstore' => array(
    'class' => '\OC\Files\ObjectStore\S3',
    'arguments' => array(
        // ... credentials ...

        // Server-side encryption
        'sse_c_key' => 'your-32-byte-base64-encryption-key',
    ),
),

Generate an Encryption Key

# Generate a 32-byte encryption key (base64 encoded)
openssl rand -base64 32
# Output: k7Jz9H2Qm4xR8Bn3Yp1Wv6Tf5Ae0Cg+Ls4Ud/Mh7XI=
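
If openssl isn't handy, the same key format — 32 random bytes, base64-encoded — can be produced in Python:

```python
import base64
import secrets

# 32 cryptographically random bytes, base64-encoded: the format the
# sse_c_key argument expects. Prints a 44-character base64 string.
key = base64.b64encode(secrets.token_bytes(32)).decode()
print(key)
```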

Critical: Store this encryption key securely. If you lose it, your data is unrecoverable. Use a secrets manager or hardware security module (HSM) in production.

Nextcloud's Built-In Encryption

Alternatively, you can use Nextcloud's built-in server-side encryption, which encrypts files before they leave the Nextcloud server:

# Enable Nextcloud encryption
php occ app:enable encryption
php occ encryption:enable

# Encrypt all existing files
php occ encryption:encrypt-all

This approach is simpler but adds CPU overhead for encryption/decryption on every file operation.

Nextcloud Performance: S3 vs Local Storage

Here's how Nextcloud performance compares between local NVMe storage and S3 object storage:

| Operation | Local NVMe | S3 (Same Region) | Notes |
|---|---|---|---|
| Small file upload (1 MB) | 5 ms | 20-50 ms | Network latency adds ~20 ms per operation |
| Large file upload (1 GB) | 2 seconds | 3-5 seconds | Multipart upload parallelizes the transfer |
| File download (100 MB) | 0.2 seconds | 0.5-1 second | S3 throughput is excellent for large files |
| Directory listing (1000 files) | 10 ms | 15 ms | Metadata served from Nextcloud DB, not S3 |
| File rename | 1 ms | 2 ms | Metadata-only operation in both cases |
| Thumbnail generation | 50 ms | 100-200 ms | File must be downloaded to generate thumbnail |
| Desktop client sync (1000 files) | Fast | Similar | Sync protocol uses metadata; actual transfers are parallelized |

Key takeaways:

  • For typical use (uploading documents, sharing files, syncing folders), S3 performance is virtually indistinguishable from local storage.
  • Operations that involve many small files (e.g., syncing thousands of tiny files) may see slightly higher latency.
  • Large file transfers (videos, ISOs, archives) perform nearly as well as local storage thanks to multipart uploads.
  • Directory listings and file operations (rename, move, delete) are served from the database, not S3, so they are equally fast.
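
The small-file caveat is easy to quantify: the added ~20 ms per request only hurts when requests run serially. A rough latency-only model in Python (numbers taken from the table above; transfer time excluded):

```python
def sync_time_s(n_files: int, per_file_overhead_ms: float,
                parallel_streams: int) -> float:
    # Total added latency when n_files requests share parallel_streams
    # concurrent connections — transfer time itself is excluded.
    return (n_files * per_file_overhead_ms / parallel_streams) / 1000

print(sync_time_s(1000, 20, 1))   # 20.0 — serial: 20 s of pure latency
print(sync_time_s(1000, 20, 10))  # 2.0  — with 10 parallel streams
```

This is why sync clients that parallelize transfers see performance "similar" to local storage despite the per-request overhead.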

Scaling Nextcloud Horizontally with Shared S3 Backend

One of the biggest advantages of S3 primary storage is enabling horizontal scaling. Multiple Nextcloud web servers can share the same S3 backend and database.

Architecture

                  +-------------------+
                  |   Load Balancer   |
                  +----+----+----+----+
                       |    |    |
           +-----------+    |    +-----------+
           |                |                |
     +-----v-----+    +-----v-----+    +-----v-----+
     | Nextcloud |    | Nextcloud |    | Nextcloud |
     |  Web #1   |    |  Web #2   |    |  Web #3   |
     +-----+-----+    +-----+-----+    +-----+-----+
           |                |                |
           +--------+-------+--------+------+
                    |                |
             +------v------+  +------v-----+
             |    MySQL    |  | DanubeData |
             |  (shared)   |  |     S3     |
             +-------------+  +------------+

Requirements for Horizontal Scaling

  • Shared database: All Nextcloud instances connect to the same MySQL/PostgreSQL database
  • Shared S3 storage: All instances use the same objectstore configuration
  • Redis for session/cache: Shared Redis instance for file locking and sessions
  • Shared config.php: Identical configuration on all instances
  • Sticky sessions: Load balancer should use sticky sessions (or shared session storage)

Shared config.php for All Instances

<?php
$CONFIG = array(
    'trusted_domains' => array(
        0 => 'cloud.example.com',
    ),
    'overwrite.cli.url' => 'https://cloud.example.com',
    'overwriteprotocol' => 'https',

    // Shared database
    'dbtype' => 'mysql',
    'dbhost' => 'mysql-server.internal',
    'dbname' => 'nextcloud',
    'dbuser' => 'nextcloud',
    'dbpassword' => 'secure-password',

    // Shared S3 storage
    'objectstore' => array(
        'class' => '\OC\Files\ObjectStore\S3',
        'arguments' => array(
            'bucket'         => 'nextcloud-data',
            'hostname'       => 's3.danubedata.ro',
            'port'           => 443,
            'region'         => 'fsn1',
            'key'            => 'YOUR_ACCESS_KEY',
            'secret'         => 'YOUR_SECRET_KEY',
            'use_ssl'        => true,
            'use_path_style' => true,
        ),
    ),

    // Shared Redis
    'memcache.locking' => '\OC\Memcache\Redis',
    'memcache.distributed' => '\OC\Memcache\Redis',
    'memcache.local' => '\OC\Memcache\APCu',
    'redis' => array(
        'host' => 'redis-server.internal',
        'port' => 6379,
    ),

    // PHP sessions: when running multiple instances without sticky
    // sessions, store sessions in Redis via php.ini
    // (session.save_handler = redis,
    //  session.save_path = "tcp://redis-server.internal:6379")
    // rather than in config.php.
);

Docker Compose Example: Nextcloud + S3

Here's a complete Docker Compose setup for running Nextcloud with DanubeData S3 as primary storage:

# docker-compose.yml
version: "3.9"

services:
  nextcloud:
    image: nextcloud:30-apache
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      # Database
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: nextcloud-db-password

      # S3 Primary Storage
      OBJECTSTORE_S3_HOST: s3.danubedata.ro
      OBJECTSTORE_S3_BUCKET: nextcloud-data
      OBJECTSTORE_S3_KEY: YOUR_ACCESS_KEY
      OBJECTSTORE_S3_SECRET: YOUR_SECRET_KEY
      OBJECTSTORE_S3_PORT: 443
      OBJECTSTORE_S3_SSL: "true"
      OBJECTSTORE_S3_REGION: fsn1
      OBJECTSTORE_S3_USEPATH_STYLE: "true"

      # Redis cache
      REDIS_HOST: redis

      # Admin account
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: secure-admin-password
      NEXTCLOUD_TRUSTED_DOMAINS: cloud.example.com localhost
    volumes:
      - nextcloud_config:/var/www/html/config
      - nextcloud_custom_apps:/var/www/html/custom_apps
      - nextcloud_themes:/var/www/html/themes
    depends_on:
      - db
      - redis

  db:
    image: mariadb:11
    container_name: nextcloud-db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: root-password
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: nextcloud-db-password
    volumes:
      - db_data:/var/lib/mysql
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW

  redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    command: redis-server --save 60 1 --loglevel warning
    volumes:
      - redis_data:/data

  cron:
    image: nextcloud:30-apache
    container_name: nextcloud-cron
    restart: unless-stopped
    entrypoint: /cron.sh
    volumes:
      - nextcloud_config:/var/www/html/config
      - nextcloud_custom_apps:/var/www/html/custom_apps
    depends_on:
      - nextcloud
      - db
      - redis

volumes:
  nextcloud_config:
  nextcloud_custom_apps:
  nextcloud_themes:
  db_data:
  redis_data:

Deploy and Initialize

# Start the stack
docker compose up -d

# Wait for initialization (first start takes 1-2 minutes)
docker compose logs -f nextcloud

# Verify S3 is configured as primary storage
docker compose exec -u www-data nextcloud php occ config:list system | grep objectstore

# Check that files are being stored in S3
docker compose exec -u www-data nextcloud php occ files:scan admin

# Verify with rclone or AWS CLI
aws s3 ls s3://nextcloud-data/ \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

Troubleshooting Common Issues

Error: "Access Denied" or "403 Forbidden"

  • Verify your access key and secret key are correct
  • Ensure the bucket exists and your access key has read/write permissions
  • Check that the bucket name matches exactly (case-sensitive)
  • Verify the endpoint URL: s3.danubedata.ro (no https:// prefix in config.php)
# Test S3 connectivity from the Nextcloud container
docker compose exec nextcloud bash
apt-get update && apt-get install -y awscli
aws s3 ls s3://nextcloud-data/ \
    --endpoint-url https://s3.danubedata.ro \
    --region fsn1

Error: "RequestTimeTooSkewed"

S3 requires the server clock to be within 15 minutes of the actual time. Fix:

# Check server time
date

# Install and sync NTP
apt-get install -y ntp
systemctl enable ntp
systemctl start ntp

# Or use timedatectl
timedatectl set-ntp true

Slow Upload Performance

  • Increase uploadPartSize to 500 MB (524288000 bytes)
  • Increase concurrency to 5 or 10
  • Ensure PHP memory_limit is at least 512M
  • Check network bandwidth between Nextcloud server and S3 endpoint

Large File Upload Timeout

# Increase PHP timeouts
php_value max_execution_time 3600
php_value max_input_time 3600

# If using nginx as reverse proxy
proxy_read_timeout 3600;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
client_max_body_size 16G;

# If using Apache
Timeout 3600
ProxyTimeout 3600
LimitRequestBody 17179869184

Error: "Could not create folder"

This usually means the S3 bucket doesn't exist or the access key lacks PutObject permission:

# Create the bucket manually
aws s3 mb s3://nextcloud-data \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

# Verify write permission
echo "test" > /tmp/test.txt
aws s3 cp /tmp/test.txt s3://nextcloud-data/test.txt \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

# Clean up
aws s3 rm s3://nextcloud-data/test.txt \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

Files Missing After Migration

# Re-scan all files
docker compose exec -u www-data nextcloud php occ files:scan --all

# Check for errors in the log
docker compose exec nextcloud tail -100 /var/www/html/data/nextcloud.log | grep -i error

# Verify objects exist in S3
aws s3 ls s3://nextcloud-data/ --recursive --summarize \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

Cost Comparison: Local Disk vs. S3 for Nextcloud

| Factor | Local Disk (VPS with storage) | DanubeData S3 |
|---|---|---|
| 500 GB storage | €15-30/month (larger VPS plan) | €3.99/month (included in base) |
| 2 TB storage | €40-80/month (dedicated server) | €7.98/month |
| 5 TB storage | €80-150/month (dedicated server) | €19.95/month |
| 10 TB storage | €150-300/month (multiple servers) | €39.90/month |
| Disk redundancy | Extra cost (RAID or replicas) | Included |
| Server migration | Must move all data (hours/days) | Just point new server at S3 |
| Horizontal scaling | Requires shared filesystem (NFS, GlusterFS) | Works out of the box |
| Backup complexity | Full disk snapshots or rsync | S3 versioning handles it |

Based on DanubeData S3 storage pricing: €3.99/month base includes 1TB storage + 1TB traffic. Additional storage €3.99/TB/month.
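
The S3 figures above follow directly from that formula — a €3.99 base covering the first terabyte, plus €3.99 per additional terabyte:

```python
def monthly_cost_eur(storage_tb: float) -> float:
    BASE = 3.99          # includes the first 1 TB storage + 1 TB traffic
    PER_EXTRA_TB = 3.99  # each TB beyond the first
    extra_tb = max(0.0, storage_tb - 1)
    return round(BASE + extra_tb * PER_EXTRA_TB, 2)

print(monthly_cost_eur(2))   # 7.98
print(monthly_cost_eur(5))   # 19.95
print(monthly_cost_eur(10))  # 39.9
```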

Backup Strategy for Nextcloud on S3

Even with S3's built-in redundancy, you should have a backup strategy for Nextcloud:

What to Back Up

  • Database (critical): Contains all metadata, user accounts, shares, and file mappings. Without the database, your S3 objects are useless.
  • config.php (critical): Contains S3 credentials and encryption keys.
  • S3 data (important): The actual file content. Enable versioning for point-in-time recovery.
  • Custom apps (optional): Any non-standard apps you've installed.

Automated Backup Script

#!/bin/bash
# nextcloud-backup.sh - Back up Nextcloud database and config
# Run daily via cron

BACKUP_DIR="/backups/nextcloud"
DATE=$(date +%Y%m%d)
S3_REMOTE="danubedata"
S3_BACKUP_BUCKET="nextcloud-backups"
RETENTION_DAYS=30

mkdir -p "$BACKUP_DIR"

echo "[$(date)] Starting Nextcloud backup..."

# 1. Enable maintenance mode
docker compose exec -u www-data nextcloud php occ maintenance:mode --on

# 2. Back up the database
docker compose exec -T db mysqldump \
    -u nextcloud \
    -pnextcloud-db-password \
    nextcloud > "$BACKUP_DIR/db-$DATE.sql"

# 3. Back up config.php
docker compose cp nextcloud:/var/www/html/config/config.php \
    "$BACKUP_DIR/config-$DATE.php"

# 4. Disable maintenance mode
docker compose exec -u www-data nextcloud php occ maintenance:mode --off

# 5. Compress the backup
tar -czf "$BACKUP_DIR/nextcloud-backup-$DATE.tar.gz" \
    -C "$BACKUP_DIR" \
    "db-$DATE.sql" \
    "config-$DATE.php"

# 6. Upload to a separate S3 bucket
rclone copy "$BACKUP_DIR/nextcloud-backup-$DATE.tar.gz" \
    "$S3_REMOTE:$S3_BACKUP_BUCKET/"

# 7. Clean up old local backups
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.sql" -mtime +1 -delete
find "$BACKUP_DIR" -name "*.php" -mtime +1 -delete

echo "[$(date)] Backup completed: nextcloud-backup-$DATE.tar.gz"

Enable S3 Versioning

S3 versioning keeps previous versions of every file, providing point-in-time recovery:

# Enable versioning on the Nextcloud data bucket
aws s3api put-bucket-versioning \
    --bucket nextcloud-data \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

# Optionally, set a lifecycle policy to clean up old versions after 90 days
cat > /tmp/versioning-lifecycle.json << 'EOF'
{
    "Rules": [
        {
            "ID": "Clean up old versions",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 90
            }
        }
    ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket nextcloud-data \
    --lifecycle-configuration file:///tmp/versioning-lifecycle.json \
    --endpoint-url https://s3.danubedata.ro \
    --profile danubedata

Nextcloud Apps That Work Well with S3

Most Nextcloud apps work transparently with S3 primary storage. Here are some particularly useful ones:

  • Nextcloud Office (Collabora/OnlyOffice): Real-time document editing works perfectly — files are streamed from S3 as needed.
  • Talk: Video calls and chat. Attachments are stored in S3.
  • Photos: Thumbnail generation works but may be slightly slower on first load (thumbnails are cached after generation).
  • Memories: Photo management app. Works well with S3 — enable preview pre-generation for best performance.
  • Contacts and Calendar: Metadata stored in the database — no S3 impact.

Preview Pre-Generation for S3

Photo thumbnails are slower to generate from S3 because the original image must be downloaded. Pre-generate them:

# Install the Preview Generator app
docker compose exec -u www-data nextcloud php occ app:enable previewgenerator

# Generate previews for all existing files
docker compose exec -u www-data nextcloud php occ preview:generate-all

# Set up a cron job to pre-generate previews for new files
# Add to crontab:
# */10 * * * * docker compose exec -u www-data nextcloud php occ preview:pre-generate

Security Best Practices

  • Use a dedicated S3 access key for Nextcloud — don't share it with other applications.
  • Enable Nextcloud server-side encryption if you want zero-knowledge encryption (the S3 provider can't read your files).
  • Use HTTPS for all S3 connections (use_ssl => true).
  • Rotate access keys periodically (at least annually).
  • Back up your encryption keys if using SSE-C or Nextcloud encryption.
  • Restrict S3 bucket access to only the Nextcloud access key — don't make the bucket public.
  • Monitor S3 access logs for unauthorized access attempts.

Get Started with Nextcloud + S3

Moving Nextcloud's storage to S3 is one of the best upgrades you can make for scalability, durability, and cost efficiency. Whether you're running a personal cloud for your family or a collaborative platform for your company, DanubeData S3 provides the reliable foundation your Nextcloud instance needs.

  1. Create a DanubeData account
  2. Create a storage bucket for Nextcloud
  3. Generate S3 access keys
  4. Configure objectstore in your Nextcloud config.php
  5. Enjoy scalable, durable, GDPR-compliant file storage

DanubeData S3 Storage for Nextcloud:

  • €3.99/month includes 1TB storage + 1TB traffic
  • Additional storage just €3.99/TB/month
  • No egress fees for normal usage
  • GDPR compliant (German data center in Falkenstein)
  • S3-compatible API — native Nextcloud support
  • 99.9% uptime SLA

Create Your Nextcloud Storage Bucket Now

Need help configuring Nextcloud with S3? Contact our team — we run Nextcloud on DanubeData S3 ourselves and can guide you through the setup.

Share this article

Ready to Get Started?

Deploy your infrastructure in minutes with DanubeData's managed services.