Static websites are making a massive comeback. With modern static site generators, JAMstack architecture, and S3-compatible object storage, you can host a blazing-fast website for pennies per month—no servers to manage, no scaling to worry about, no security patches to apply.
In this guide, you'll learn how to host a static website on S3-compatible storage from start to finish: uploading files, configuring public access, setting up custom domains with HTTPS, automating deployments with GitHub Actions, and optimizing performance.
What Is Static Website Hosting on S3?
Static website hosting on S3 means storing your HTML, CSS, JavaScript, and image files in an object storage bucket and serving them directly to visitors' browsers. There's no server-side processing—no PHP, no Node.js, no database queries at request time.
How It Works
- You upload your static files (HTML, CSS, JS, images) to an S3 bucket
- You configure the bucket for public read access
- You set an index document (usually index.html) and an error document (404.html)
- Visitors access your site via the bucket's URL or your custom domain
- A CDN like Cloudflare sits in front for HTTPS, caching, and global distribution
What Can You Host?
- Marketing sites and landing pages
- Documentation sites (like this blog!)
- Portfolios and personal websites
- Single-page applications (React, Vue, Svelte builds)
- Blog sites generated by Hugo, Jekyll, Astro, or Gatsby
- E-commerce storefronts with headless CMS backends
- Progressive Web Apps (PWAs)
Benefits of S3 Static Hosting
1. Cost
Static hosting on S3 is dramatically cheaper than running a web server. A typical blog or portfolio site costs under $5/month—and that includes generous storage and bandwidth.
2. Performance
Files are served directly from storage with no server-side processing. Combined with a CDN, your site loads in milliseconds from anywhere in the world. There's no PHP execution time, no database queries, no cold starts.
3. Simplicity
No server to configure, no OS to patch, no runtime to update. Upload your files and you're done. Updates are as simple as uploading new files.
4. Reliability
Object storage is designed for 99.99%+ availability. Your site won't go down because of a crashed process, memory leak, or runaway query.
5. Security
With no server-side code running, the attack surface is minimal. No SQL injection, no remote code execution, no server misconfiguration. Your only concern is keeping your S3 credentials safe.
6. Scalability
Object storage handles thousands of concurrent requests without any configuration. There's no need to set up auto-scaling groups or load balancers.
Setting Up a Bucket for Static Hosting
Step 1: Create Your Bucket
Using the DanubeData dashboard or the AWS CLI with a custom endpoint:
```bash
# Create bucket via the aws CLI
aws s3 mb s3://my-website \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1

# Verify the bucket was created
aws s3 ls \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1
```
Step 2: Create Your Website Files
Let's start with a minimal website structure:
```text
my-website/
├── index.html
├── 404.html
├── css/
│   └── style.css
├── js/
│   └── app.js
└── images/
    ├── logo.svg
    └── hero.jpg
```
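If you're starting from an empty directory, that layout can be scaffolded in one go (file names match the tree above; the actual contents come next):

```shell
# Create the directory layout with empty placeholder files
mkdir -p my-website/css my-website/js my-website/images
touch my-website/index.html my-website/404.html \
      my-website/css/style.css my-website/js/app.js \
      my-website/images/logo.svg my-website/images/hero.jpg
```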
A simple index.html:
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>My Static Website</title>
  <link rel="stylesheet" href="/css/style.css">
</head>
<body>
  <header>
    <h1>Welcome to My Website</h1>
    <nav>
      <a href="/">Home</a>
      <a href="/about.html">About</a>
      <a href="/blog/">Blog</a>
    </nav>
  </header>
  <main>
    <p>This site is hosted on S3-compatible storage.</p>
  </main>
  <script src="/js/app.js"></script>
</body>
</html>
```
A custom 404.html:
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Page Not Found</title>
  <link rel="stylesheet" href="/css/style.css">
</head>
<body>
  <h1>404 - Page Not Found</h1>
  <p>The page you're looking for doesn't exist.</p>
  <a href="/">Go Home</a>
</body>
</html>
```
Uploading Static Files
Upload with AWS CLI
```bash
# Upload entire directory
aws s3 sync ./my-website/ s3://my-website/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --delete

# Force an explicit content type on HTML files only
aws s3 sync ./my-website/ s3://my-website/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --exclude "*" --include "*.html" \
  --content-type "text/html; charset=utf-8"

# Better: let the aws CLI auto-detect content types (default behavior)
aws s3 sync ./my-website/ s3://my-website/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --delete
```
Upload with rclone
```ini
# ~/.config/rclone/rclone.conf (one-time setup)
[danubedata]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://s3.danubedata.ro
region = fsn1
acl = public-read
```

```bash
# Sync website files
rclone sync ./my-website/ danubedata:my-website/ \
  --progress \
  --s3-acl public-read
```
Upload with s3cmd
```bash
# Sync with public-read ACL
s3cmd sync ./my-website/ s3://my-website/ \
  --acl-public \
  --delete-removed \
  --no-mime-magic \
  --guess-mime-type
```
Configuring Public Read Access
For a public website, all objects need to be readable by anyone. There are two approaches:
Option 1: Bucket Policy (Recommended)
Apply a bucket policy that grants public read access to all objects:
```bash
# Create a bucket policy file
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-website/*"
    }
  ]
}
EOF

# Apply the policy
aws s3api put-bucket-policy \
  --bucket my-website \
  --policy file://bucket-policy.json \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1
```
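If you run several site buckets, it can be handy to generate the policy per bucket rather than hand-editing JSON. A small sketch (the `make_policy` helper is ours, not an AWS CLI feature):

```shell
# Emit a public-read bucket policy for the given bucket name
make_policy() {
  local bucket="$1"
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}
EOF
}

# Generate and apply for one bucket
make_policy my-website > bucket-policy.json
```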
Option 2: Per-Object ACL
```bash
# Upload with public-read ACL on each object
aws s3 sync ./my-website/ s3://my-website/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --acl public-read
```
Setting Content Types and Cache Headers
Correct content types and cache headers are essential for performance and proper rendering:
Content Types
The AWS CLI auto-detects most content types, but you should verify critical ones:
| Extension | Content-Type | Notes |
|---|---|---|
| `.html` | `text/html; charset=utf-8` | Include charset for proper encoding |
| `.css` | `text/css` | Must be correct or styles won't load |
| `.js` | `application/javascript` | `text/javascript` is also valid (RFC 9239) |
| `.json` | `application/json` | API responses, config files |
| `.svg` | `image/svg+xml` | Often incorrectly set as `text/xml` |
| `.woff2` | `font/woff2` | Web fonts |
| `.webp` | `image/webp` | Modern image format |
| `.avif` | `image/avif` | Next-gen image format |
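When writing upload scripts, this mapping can be captured in a small helper so every file gets an explicit type. A sketch (the `content_type_for` function is hypothetical, not part of the AWS CLI):

```shell
# Map a file's extension to the Content-Type from the table above
content_type_for() {
  case "${1##*.}" in
    html)  echo "text/html; charset=utf-8" ;;
    css)   echo "text/css" ;;
    js)    echo "application/javascript" ;;
    json)  echo "application/json" ;;
    svg)   echo "image/svg+xml" ;;
    woff2) echo "font/woff2" ;;
    webp)  echo "image/webp" ;;
    avif)  echo "image/avif" ;;
    *)     echo "application/octet-stream" ;;  # safe fallback
  esac
}

content_type_for index.html   # text/html; charset=utf-8
content_type_for images/logo.svg
```

You could then pass `--content-type "$(content_type_for "$file")"` to per-file `aws s3 cp` calls.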
Cache Headers
Set appropriate cache headers to reduce bandwidth and improve load times:
```bash
#!/bin/bash
# deploy.sh - Upload with proper cache headers

ENDPOINT="https://s3.danubedata.ro"
BUCKET="my-website"
REGION="fsn1"

# HTML files: short cache (they change often)
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --exclude "*" --include "*.html" \
  --cache-control "public, max-age=300" \
  --content-type "text/html; charset=utf-8" \
  --acl public-read

# CSS and JS with content hashes: long cache (fingerprinted filenames)
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --exclude "*" --include "*.css" --include "*.js" \
  --cache-control "public, max-age=31536000, immutable" \
  --acl public-read

# Images: long cache
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --exclude "*" --include "*.jpg" --include "*.png" --include "*.webp" \
  --include "*.avif" --include "*.svg" --include "*.gif" --include "*.ico" \
  --cache-control "public, max-age=2592000" \
  --acl public-read

# Fonts: long cache
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --exclude "*" --include "*.woff" --include "*.woff2" --include "*.ttf" --include "*.eot" \
  --cache-control "public, max-age=31536000, immutable" \
  --acl public-read

echo "Deployment complete!"
```
Custom Domain Setup
Option 1: Cloudflare (Recommended)
Cloudflare provides free HTTPS, CDN caching, and DDoS protection:
- Add your domain to Cloudflare and update your nameservers
- Create a CNAME record:
```text
# DNS Configuration in Cloudflare
Type: CNAME
Name: www (or @ for the root domain)
Target: s3.danubedata.ro
Proxy: Enabled (orange cloud icon)
TTL: Auto
```

- Configure Cloudflare Page Rules (or Transform Rules):

```text
# Page Rule 1: Force HTTPS
Match: http://www.example.com/*
Setting: Always Use HTTPS

# Page Rule 2: Redirect root to www (if needed)
Match: http://example.com/*
Setting: Forwarding URL (301) → https://www.example.com/$1
```

- Configure Cloudflare Cache Rules:

```text
# Cache Rule: Cache static assets aggressively
Match: *.css, *.js, *.jpg, *.png, *.webp, *.svg, *.woff2
Edge TTL: 1 month
Browser TTL: 1 year

# Cache Rule: Cache HTML with a shorter TTL
Match: *.html
Edge TTL: 5 minutes
Browser TTL: 5 minutes
```
Option 2: Nginx Reverse Proxy
If you already have a server, use Nginx as a reverse proxy with Let's Encrypt:
```nginx
# /etc/nginx/sites-available/my-website
server {
    listen 80;
    server_name www.example.com example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

    # Proxy to S3
    location / {
        proxy_pass https://s3.danubedata.ro/my-website/;
        proxy_set_header Host s3.danubedata.ro;

        # Serve the custom error page instead of the raw S3 404 response
        proxy_intercept_errors on;
        error_page 404 /404.html;

        # Cache static assets (requires a proxy_cache_path zone in nginx.conf)
        proxy_cache_valid 200 1d;
        proxy_cache_valid 404 1m;
        add_header X-Cache-Status $upstream_cache_status;
    }

    # Custom error page
    location = /404.html {
        proxy_pass https://s3.danubedata.ro/my-website/404.html;
        proxy_set_header Host s3.danubedata.ro;
    }
}
```
Option 3: Caddy (Simplest)
```text
# Caddyfile - automatic HTTPS with Let's Encrypt
www.example.com {
    rewrite * /my-website{uri}
    reverse_proxy https://s3.danubedata.ro {
        header_up Host s3.danubedata.ro
    }
    handle_errors {
        rewrite * /my-website/404.html
        reverse_proxy https://s3.danubedata.ro {
            header_up Host s3.danubedata.ro
        }
    }
}

example.com {
    redir https://www.example.com{uri} permanent
}
```

Note that `rewrite` is a site-level directive in Caddy, so the bucket-prefix rewrite happens before the request reaches `reverse_proxy`.
CI/CD Deployment with GitHub Actions
Automate deployments so every push to main updates your website:
Basic Deployment Workflow
```yaml
# .github/workflows/deploy.yml
name: Deploy Static Site

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS CLI for S3-compatible storage
        run: |
          aws configure set aws_access_key_id ${{ secrets.S3_ACCESS_KEY }}
          aws configure set aws_secret_access_key ${{ secrets.S3_SECRET_KEY }}
          aws configure set default.region fsn1

      - name: Deploy to S3
        run: |
          aws s3 sync ./public/ s3://${{ secrets.S3_BUCKET }}/ \
            --endpoint-url https://s3.danubedata.ro \
            --delete \
            --acl public-read

      - name: Set cache headers for assets
        run: |
          # Long cache for fingerprinted assets
          aws s3 sync ./public/ s3://${{ secrets.S3_BUCKET }}/ \
            --endpoint-url https://s3.danubedata.ro \
            --exclude "*" \
            --include "*.css" --include "*.js" \
            --cache-control "public, max-age=31536000, immutable" \
            --acl public-read

      - name: Purge Cloudflare cache
        if: success()
        run: |
          curl -X POST "https://api.cloudflare.com/client/v4/zones/${{ secrets.CF_ZONE_ID }}/purge_cache" \
            -H "Authorization: Bearer ${{ secrets.CF_API_TOKEN }}" \
            -H "Content-Type: application/json" \
            --data '{"purge_everything":true}'
```
Hugo Build + Deploy Workflow
```yaml
# .github/workflows/hugo-deploy.yml
name: Build and Deploy Hugo Site

on:
  push:
    branches: [main]

jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true   # Hugo themes are often submodules
          fetch-depth: 0     # Needed for .GitInfo and .Lastmod

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: 'latest'
          extended: true

      - name: Build
        run: hugo --minify

      - name: Configure S3
        run: |
          aws configure set aws_access_key_id ${{ secrets.S3_ACCESS_KEY }}
          aws configure set aws_secret_access_key ${{ secrets.S3_SECRET_KEY }}
          aws configure set default.region fsn1

      - name: Deploy
        run: |
          aws s3 sync ./public/ s3://${{ secrets.S3_BUCKET }}/ \
            --endpoint-url https://s3.danubedata.ro \
            --delete \
            --acl public-read \
            --cache-control "public, max-age=300"

      - name: Set immutable cache for hashed assets
        run: |
          aws s3 sync ./public/ s3://${{ secrets.S3_BUCKET }}/ \
            --endpoint-url https://s3.danubedata.ro \
            --exclude "*" \
            --include "*.css" --include "*.js" --include "*.woff2" \
            --cache-control "public, max-age=31536000, immutable" \
            --acl public-read
```
Astro Build + Deploy Workflow
```yaml
# .github/workflows/astro-deploy.yml
name: Build and Deploy Astro Site

on:
  push:
    branches: [main]

jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Configure S3
        run: |
          aws configure set aws_access_key_id ${{ secrets.S3_ACCESS_KEY }}
          aws configure set aws_secret_access_key ${{ secrets.S3_SECRET_KEY }}
          aws configure set default.region fsn1

      - name: Deploy
        run: |
          aws s3 sync ./dist/ s3://${{ secrets.S3_BUCKET }}/ \
            --endpoint-url https://s3.danubedata.ro \
            --delete \
            --acl public-read
```
Static Site Generators: Which One to Choose?
Static site generators (SSGs) transform your content (Markdown, data files) into a complete HTML website. Here's a comparison of the most popular options in 2026:
| Generator | Language | Build Speed | Best For | Learning Curve |
|---|---|---|---|---|
| Hugo | Go | Fastest (<1s for 1K pages) | Blogs, docs, large content sites | Medium |
| Astro | JS/TS | Fast | Content sites with interactive islands | Low-Medium |
| Next.js (export) | JS/TS | Medium | React apps, complex SPAs | Medium |
| Jekyll | Ruby | Slow | Blogs (GitHub Pages native) | Low |
| Gatsby | JS/TS | Slow | React sites with GraphQL data | High |
| Eleventy (11ty) | JS | Fast | Simple sites, blogs | Low |
| Zola | Rust | Very Fast | Blogs, docs (single binary) | Low |
Example: Deploying a Hugo Site to S3
Let's walk through a complete example of setting up a Hugo blog and deploying it to S3-compatible storage.
Step 1: Create a New Hugo Site
```bash
# Install Hugo
brew install hugo             # macOS
# or: sudo apt install hugo   # Ubuntu/Debian
# or: sudo dnf install hugo   # Fedora

# Create a new site
hugo new site my-blog
cd my-blog

# Add a theme
git init
git submodule add https://github.com/adityatelange/hugo-PaperMod.git themes/PaperMod

# Configure Hugo
cat > hugo.toml <<'EOF'
baseURL = "https://www.example.com/"
languageCode = "en-us"
title = "My Blog"
theme = "PaperMod"

[params]
env = "production"
description = "My personal blog hosted on S3"

[outputs]
home = ["HTML", "RSS", "JSON"]
EOF
```
Step 2: Create Content
```bash
# Create your first post
hugo new content posts/hello-world.md

# Edit the post
cat > content/posts/hello-world.md <<'EOF'
---
title: "Hello World"
date: 2026-03-19
draft: false
tags: ["first-post", "hugo"]
description: "My first blog post hosted on S3 storage."
---

Welcome to my blog! This site is built with Hugo and hosted on
S3-compatible storage for maximum performance and minimum cost.

## Why Hugo + S3?

- **Build time**: Hugo generates thousands of pages in under a second
- **Hosting cost**: Under $5/month on S3 storage
- **No servers**: No patching, no scaling, no downtime
- **Global CDN**: Cloudflare caches everything at the edge
EOF
```
Step 3: Build and Deploy
```bash
# Build the site
hugo --minify

# Preview locally first (serves at http://localhost:1313)
hugo server -D

# Deploy to S3
aws s3 sync ./public/ s3://my-blog/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --delete \
  --acl public-read

# Verify deployment
curl -I https://s3.danubedata.ro/my-blog/index.html
```
Step 4: Automate with a Deploy Script
```bash
#!/bin/bash
# deploy.sh - Build and deploy Hugo site
set -euo pipefail

ENDPOINT="https://s3.danubedata.ro"
BUCKET="my-blog"
REGION="fsn1"

echo "Building Hugo site..."
hugo --minify --gc

echo "Deploying to S3..."

# Upload HTML with short cache
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --delete \
  --acl public-read \
  --exclude "*.css" --exclude "*.js" --exclude "*.woff2" \
  --exclude "*.jpg" --exclude "*.png" --exclude "*.webp" \
  --cache-control "public, max-age=300"

# Upload assets with long cache
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --exclude "*" \
  --include "*.css" --include "*.js" --include "*.woff2" \
  --cache-control "public, max-age=31536000, immutable" \
  --acl public-read

# Upload images with medium cache
aws s3 sync ./public/ s3://$BUCKET/ \
  --endpoint-url $ENDPOINT --region $REGION \
  --exclude "*" \
  --include "*.jpg" --include "*.png" --include "*.webp" --include "*.svg" --include "*.gif" \
  --cache-control "public, max-age=2592000" \
  --acl public-read

echo "Deployment complete! Site is live."
echo "URL: https://s3.danubedata.ro/$BUCKET/index.html"
```
Example: Deploying an Astro Site to S3
```bash
# Create a new Astro project
npm create astro@latest my-astro-site
cd my-astro-site

# Configure for static output (astro.config.mjs)
cat > astro.config.mjs <<'EOF'
import { defineConfig } from 'astro/config';

export default defineConfig({
  output: 'static',
  build: {
    assets: '_assets' // fingerprinted assets directory
  },
  site: 'https://www.example.com'
});
EOF

# Build
npm run build

# Deploy (output is in ./dist/)
aws s3 sync ./dist/ s3://my-astro-site/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --delete \
  --acl public-read
```
Example: Deploying a Next.js Static Export to S3
```js
// next.config.js - enable static export
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'export',
  trailingSlash: true, // important for S3 directory-style URLs
  images: {
    unoptimized: true // required for static export
  }
}

module.exports = nextConfig
```

```bash
# Build the static export
npm run build

# Deploy (output is in ./out/)
aws s3 sync ./out/ s3://my-nextjs-site/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --delete \
  --acl public-read
```
Performance Optimization
1. Enable Compression
Pre-compress your files before uploading for the best performance:
```bash
# Pre-compress HTML, CSS, JS with gzip (-k keeps the originals)
find ./public -type f \( -name "*.html" -o -name "*.css" -o -name "*.js" \
  -o -name "*.json" -o -name "*.xml" -o -name "*.svg" \) -exec gzip -9 -k {} \;

# Upload compressed versions with the Content-Encoding header
# Note: objects keep the .gz suffix, so you'd need to strip it on upload
# (or rewrite URLs at the CDN) for browsers to fetch them transparently
aws s3 sync ./public/ s3://my-website/ \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --exclude "*" --include "*.gz" \
  --content-encoding gzip \
  --acl public-read

# Or let Cloudflare handle compression automatically (easier):
# just enable Brotli compression in the Cloudflare dashboard
```
2. Optimize Images
```bash
# Convert images to WebP (much smaller than JPEG/PNG)
for file in ./public/images/*.{jpg,png}; do
  cwebp -q 80 "$file" -o "${file%.*}.webp"
done

# Or use Astro's built-in image optimization
# (automatically generates optimized formats during build)
```
3. Use Asset Fingerprinting
Most SSGs support content hashing in filenames. This lets you set very long cache times:
```
# Hugo: enable fingerprinting in templates
{{ $style := resources.Get "css/main.css" | resources.Minify | resources.Fingerprint "sha256" }}
<link rel="stylesheet" href="{{ $style.RelPermalink }}" integrity="{{ $style.Data.Integrity }}">

# Output: /css/main.abc123def456.css
# Can be cached forever (immutable) because the URL changes when content changes
```
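If your SSG doesn't fingerprint a particular asset, the same trick can be applied by hand. A sketch using `sha256sum` (the `fingerprint` helper and the 12-character hash length are our choices, not a standard tool):

```shell
# Rename a file to embed a short content hash, e.g. main.css -> main.3b6fdb1c4e21.css
fingerprint() {
  local file="$1"
  local hash
  hash=$(sha256sum "$file" | cut -c1-12)   # first 12 hex chars of the digest
  local new="${file%.*}.${hash}.${file##*.}"
  mv "$file" "$new"
  echo "$new"
}

echo "body { margin: 0; }" > main.css
fingerprint main.css
```

Because the name changes whenever the content does, the renamed file can safely be served with `max-age=31536000, immutable`.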
4. Preload Critical Resources
```html
<!-- In your HTML head -->
<link rel="preload" href="/css/style.css" as="style">
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preconnect" href="https://fonts.googleapis.com">
```
Comparison: S3 Hosting vs Netlify vs Vercel vs GitHub Pages
| Feature | S3 + Cloudflare | Netlify | Vercel | GitHub Pages |
|---|---|---|---|---|
| Free tier | No (from $3.99/mo) | Yes (100GB bandwidth) | Yes (100GB bandwidth) | Yes (100GB bandwidth) |
| Custom domain | Unlimited | Yes | Yes | Yes (1 per repo) |
| HTTPS | Via Cloudflare (free) | Automatic | Automatic | Automatic |
| Build system | GitHub Actions (flexible) | Built-in | Built-in | GitHub Actions |
| Deploy previews | Manual setup | Automatic per PR | Automatic per PR | No |
| Bandwidth limit | 1TB included | 100GB free / $20 for 1TB | 100GB free / $20 for 1TB | 100GB soft limit |
| Storage limit | 1TB included | Not specified | Not specified | 1GB repo limit |
| Serverless functions | No (static only) | Yes | Yes | No |
| Forms | External service | Built-in | External service | External service |
| Data sovereignty | EU (Germany) | US company | US company | US company |
| Vendor lock-in | None (standard S3 API) | Medium (Netlify features) | Medium (Vercel features) | Low |
Cost Comparison
Let's compare costs for a medium-traffic blog with 50GB of assets and 200GB bandwidth/month:
| Platform | Free Tier Covers? | Monthly Cost | Annual Cost |
|---|---|---|---|
| DanubeData S3 + Cloudflare | 1TB storage + 1TB traffic | $3.99 | $47.88 |
| GitHub Pages | Yes (under limits) | $0.00 | $0.00 |
| Cloudflare Pages | Yes (unlimited bandwidth) | $0.00 | $0.00 |
| Netlify | No (200GB > 100GB free) | $19.00 | $228.00 |
| Vercel | No (200GB > 100GB free) | $20.00 | $240.00 |
| AWS S3 + CloudFront | No | $19.15 | $229.80 |
Note: GitHub Pages and Cloudflare Pages are free but come with limitations—1GB repo size for GitHub Pages, no server-side redirects, and US-company data handling. If you need GDPR compliance, predictable pricing, and no vendor lock-in, S3 storage is the better choice.
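The AWS row above is simple arithmetic once you assume per-GB rates. A back-of-envelope sketch (the $0.023/GB-month storage and $0.09/GB egress figures are illustrative assumptions; real AWS pricing is tiered and region-dependent):

```shell
# Back-of-envelope AWS estimate: 50 GB stored + 200 GB egress per month
awk 'BEGIN {
  storage_gb = 50; egress_gb = 200
  s3_rate = 0.023; cf_rate = 0.09    # assumed $/GB rates (illustrative)
  printf "$%.2f/month\n", storage_gb * s3_rate + egress_gb * cf_rate
}'
# prints $19.15/month
```

The flat-rate plan wins as soon as bandwidth pushes the metered bill past its fixed price.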
When S3 Hosting Beats the Alternatives
S3 static hosting is the best choice when:
- GDPR compliance matters: European-hosted storage with a European data controller
- You exceed free tier limits: 1TB storage + 1TB traffic for $3.99/mo is hard to beat
- You want zero vendor lock-in: Standard S3 API means you can switch providers in minutes
- You need to store large assets: Videos, high-res images, downloads that blow past free tier limits
- You already use S3 storage: Serve static sites from the same bucket as your app's file uploads
- Custom build pipelines: Full control over your CI/CD without platform constraints
When to Choose Netlify or Vercel Instead
Platform-as-a-service solutions are better when:
- You need serverless functions: API routes, form handlers, authentication
- Deploy previews per PR: Automatic preview deployments for team review
- Built-in forms: Netlify Forms handles form submissions without a backend
- Team collaboration: Built-in role management and approval workflows
- You're under the free tier: For small personal sites, free is free
Troubleshooting Common Issues
Files Download Instead of Displaying
```bash
# Problem: Browser downloads HTML files instead of rendering them
# Cause: incorrect Content-Type header
# Fix: copy the object over itself with the correct content type
aws s3 cp s3://my-website/index.html s3://my-website/index.html \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1 \
  --content-type "text/html; charset=utf-8" \
  --metadata-directive REPLACE \
  --acl public-read
```
CSS/JS Not Loading (CORS Issues)
```bash
# Set CORS configuration on the bucket
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 86400
    }
  ]
}
EOF

aws s3api put-bucket-cors \
  --bucket my-website \
  --cors-configuration file://cors.json \
  --endpoint-url https://s3.danubedata.ro \
  --region fsn1
```
404 Errors for Clean URLs
```nginx
# Problem: /about returns 404 (works as /about.html or /about/index.html)

# Solution 1: Use trailing-slash URLs + index.html in each directory
#   (Hugo does this by default; for Next.js set trailingSlash: true)

# Solution 2: Cloudflare Transform Rule
#   URL Rewrite: If URI path does not end with "/" and does not contain "."
#   Rewrite to: concat(http.request.uri.path, "/index.html")

# Solution 3: Nginx rewrite
location / {
    try_files $uri $uri/ $uri.html =404;
}
```
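For clarity, the rewrite logic behind the Cloudflare rule can be sketched as a tiny function (the `rewrite_path` name is ours; this is an illustration, not CDN configuration):

```shell
# Map a request path to the object key the CDN should fetch
rewrite_path() {
  local p="$1"
  case "$p" in
    */)  echo "${p}index.html" ;;    # directory request: serve its index
    *.*) echo "$p" ;;                # has an extension: pass through
    *)   echo "${p}/index.html" ;;   # clean URL: append /index.html
  esac
}

rewrite_path /about          # /about/index.html
rewrite_path /blog/          # /blog/index.html
rewrite_path /css/style.css  # /css/style.css
```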
Stale Content After Deployment
```bash
# Purge the entire Cloudflare cache after deployment
curl -X POST \
  "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/purge_cache" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'

# Or purge specific URLs
curl -X POST \
  "https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/purge_cache" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"files":["https://www.example.com/index.html","https://www.example.com/css/style.css"]}'
```
Conclusion
Hosting a static website on S3-compatible storage is one of the simplest, most cost-effective, and most reliable ways to put a website on the internet. You get:
- Near-zero maintenance: No servers, no patches, no scaling headaches
- Incredible performance: Static files served through a CDN load in milliseconds
- Predictable costs: Flat-rate pricing that doesn't spike with traffic
- Full control: Your content, your deployment pipeline, no vendor lock-in
- GDPR compliance: European-hosted storage under EU jurisdiction
Whether you're building a personal blog with Hugo, a marketing site with Astro, or a documentation portal with Next.js, S3 storage gives you a solid, affordable foundation that scales effortlessly.
Get started today. Create your DanubeData account, set up a bucket, and deploy your first static site in under 10 minutes. At EUR 3.99/month with 1TB storage and 1TB traffic included, it's the most affordable way to host a fast, reliable website in Europe. Create your bucket now.