The serverless vs server debate has evolved. It's no longer just about Lambda functions vs EC2 instances—now we have serverless containers that blur the line between the two. Understanding when to use serverless containers versus traditional VPS hosting can save you thousands in infrastructure costs.
This guide compares serverless containers with VPS hosting across cost, performance, complexity, and use cases to help you make the right choice.
Understanding the Options
Traditional VPS (Virtual Private Server)
A dedicated virtual machine that runs 24/7. You get full control over the operating system, installed software, and configuration.
Examples: DanubeData VPS, DigitalOcean Droplets, AWS EC2, Linode
Serverless Containers
Containers that run on-demand and scale to zero when not in use. You pay only for the time your code is actually executing.
Examples: AWS Fargate, Google Cloud Run, Azure Container Instances, DanubeData Serverless
Functions as a Service (FaaS)
Event-driven functions that execute in response to triggers. More limited than containers but even more granular scaling.
Examples: AWS Lambda, Google Cloud Functions, Cloudflare Workers
Quick Comparison
| Aspect | VPS | Serverless Containers | FaaS |
|---|---|---|---|
| Pricing Model | Fixed monthly | Per-request + duration | Per-invocation + duration |
| Cold Starts | None | 100ms-2s | 100ms-10s |
| Scaling | Manual/scripted | Automatic | Automatic |
| Max Runtime | Unlimited | Hours (varies) | 15 min max |
| Control | Full | Container only | Code only |
| Stateful | Yes | No (ephemeral) | No |
| WebSockets | Yes | Limited | No |
| Background Jobs | Native | Via triggers | Via triggers |
The Cost Math: When Serverless Gets Expensive
Serverless seems cheap until you do the math at scale.
VPS Cost (Fixed)
DanubeData VPS Small (2 vCPU, 4GB RAM):
- €8.99/month, i.e. roughly €0.012 per hour
- Available 24/7 = 720 hours/month
- Can handle ~1,000 concurrent requests (depending on the app)
- Cost for 1M requests at 100ms average: still €8.99, because the price is fixed regardless of request volume
Serverless Container Cost (Variable)
Google Cloud Run (comparable specs) charges $0.00002400 per vCPU-second plus $0.00000250 per GiB-second.
For 1M requests at 100ms average on 2 vCPU / 4GB:
- vCPU cost: 1M × 0.1s × 2 × $0.000024 = $4.80
- Memory cost: 1M × 0.1s × 4 × $0.0000025 = $1.00
- Total: $5.80/month (~€5.40)
Serverless wins! ...but wait
The Break-Even Point
When does VPS become cheaper?
At ~2M requests/month (100ms avg response time):
- Serverless: $11.60/month
- VPS: €8.99/month
At 10M requests/month:
- Serverless: $58/month
- VPS: €8.99/month (same!)
At 50M requests/month:
- Serverless: $290/month
- VPS: €17.99/month (Medium VPS handles this)
Cost Summary by Traffic Level
| Monthly Requests | Serverless Cost | VPS Cost | Winner |
|---|---|---|---|
| 100K | ~$0.60 | €8.99 | Serverless |
| 500K | ~$2.90 | €8.99 | Serverless |
| 1M | ~$5.80 | €8.99 | Serverless |
| 2M | ~$11.60 | €8.99 | Break-even |
| 5M | ~$29 | €8.99 | VPS |
| 10M | ~$58 | €8.99 | VPS (6x cheaper) |
| 50M | ~$290 | €17.99 | VPS (16x cheaper) |
Key insight: Serverless is economical for sporadic, low-traffic workloads. Once you have consistent traffic above ~2M requests/month, VPS is significantly cheaper.
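The break-even arithmetic above can be sketched in a few lines of Python. The per-second rates are the Cloud Run figures quoted earlier; the EUR/USD conversion factor is an assumption, so treat the output as an estimate, not a quote:

```python
# Break-even sketch: serverless per-request billing vs. a fixed-price VPS.
# Rates are the Cloud Run figures quoted above; usd_per_eur is an assumption.

def serverless_cost_usd(requests: int, avg_seconds: float = 0.1,
                        vcpus: int = 2, gib: int = 4) -> float:
    """Per-request billing: vCPU-seconds plus GiB-seconds."""
    cpu = requests * avg_seconds * vcpus * 0.000024
    mem = requests * avg_seconds * gib * 0.0000025
    return cpu + mem

def break_even_requests(vps_eur: float = 8.99, usd_per_eur: float = 1.08) -> int:
    """Requests/month at which serverless spend matches the VPS price."""
    per_request = serverless_cost_usd(1)
    return int((vps_eur * usd_per_eur) / per_request)

print(serverless_cost_usd(1_000_000))   # ~5.80 USD, matching the table above
print(break_even_requests())            # roughly 1.7M requests/month
```

Plugging in your own average response time and instance size shifts the break-even point considerably: doubling the average duration halves the request count at which the VPS wins.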
The Cold Start Problem
Serverless containers "sleep" when not in use to save costs. Waking them up takes time.
Cold Start Latency
| Platform | Cold Start (min) | Cold Start (typical) |
|---|---|---|
| AWS Lambda (Node.js) | 100ms | 200-500ms |
| AWS Lambda (Java) | 500ms | 3-10 seconds |
| Google Cloud Run | 200ms | 500ms-2s |
| Azure Container Instances | 500ms | 2-5 seconds |
| DanubeData Serverless | 200ms | 500ms-1.5s |
| VPS (always running) | 0ms | 0ms |
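You can observe these numbers for your own service by timing the first request after an idle period against an immediately repeated one. A minimal stdlib-only probe, with the URL as a placeholder:

```python
# Minimal cold-start probe: time the first request after an idle period,
# then an immediate follow-up. The URL below is a placeholder.
import time
import urllib.request

def timed_get(url: str) -> float:
    """Return the wall-clock seconds for one GET request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def probe(url: str) -> tuple[float, float]:
    cold = timed_get(url)   # likely includes container start-up
    warm = timed_get(url)   # the instance should still be warm
    return cold, warm

# cold, warm = probe("https://my-service.example.com/health")
# print(f"cold: {cold*1000:.0f}ms, warm: {warm*1000:.0f}ms")
```

Run it after your platform's idle timeout has elapsed (often 15 minutes) to capture a genuine cold start.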
When Cold Starts Matter
- User-facing APIs: 2-second delays frustrate users
- Real-time applications: Chat, gaming, live updates
- Payment processing: Timeout risks during checkout
- Health checks: Load balancers may mark cold containers as unhealthy
Mitigating Cold Starts
```bash
# Option 1: Minimum instances (Cloud Run)
# Keeps containers warm but adds baseline cost
gcloud run deploy my-service \
  --min-instances=1 \
  --max-instances=100

# Option 2: Provisioned concurrency (Lambda)
# Pre-warms function instances (requires a version or alias qualifier)
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier prod \
  --provisioned-concurrent-executions 10

# Option 3: Scheduled warm-up requests
# Ping your service every few minutes
# (adds complexity and doesn't guarantee warm instances)
```
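Option 3 can be as simple as a small script run from cron or a scheduler. This is a sketch, not a production tool; the URL, interval, and finite iteration count are placeholders:

```python
# Option-3 sketch: a scheduled warm-up pinger. In practice this would be
# driven by cron or a scheduler; url and interval_seconds are placeholders.
import time
import urllib.request

def ping(url: str, timeout: float = 5.0) -> int:
    """Send one keep-warm request and return the HTTP status code."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

def keep_warm(url: str, interval_seconds: int = 300, iterations: int = 3) -> None:
    """Ping the service a few times; cron would make this effectively endless."""
    for _ in range(iterations):
        try:
            print("warm-up ping:", ping(url))
        except OSError as exc:
            print("ping failed:", exc)
        time.sleep(interval_seconds)
```

Note the caveat from the list above still applies: a pinger keeps at most one instance warm and does nothing for concurrent traffic spikes.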
The irony: Once you add minimum instances to avoid cold starts, you're paying for always-on compute—similar to a VPS, but more expensive.
When to Choose Serverless Containers
Ideal Use Cases
- Sporadic traffic: Internal tools used a few times per day
- Batch processing: ETL jobs, report generation, data pipelines
- Webhooks: GitHub/Stripe webhooks with unpredictable timing
- Dev/staging environments: Scale to zero when not testing
- Proof of concepts: Validate ideas without infrastructure commitment
- Event-driven workloads: Process uploads, send notifications
Example: Webhook Handler
```python
# Perfect serverless use case: Stripe webhook handler
# Receives 0-1000 webhooks/day with unpredictable timing

# main.py
import os

import stripe
from fastapi import FastAPI, Request

app = FastAPI()
webhook_secret = os.environ["STRIPE_WEBHOOK_SECRET"]  # signing secret from the Stripe dashboard

@app.post("/webhooks/stripe")
async def stripe_webhook(request: Request):
    payload = await request.body()
    sig_header = request.headers.get("stripe-signature")
    event = stripe.Webhook.construct_event(
        payload, sig_header, webhook_secret
    )
    if event.type == "payment_intent.succeeded":
        # Process payment
        pass
    return {"status": "success"}

# Deploy to DanubeData Serverless:
# - Scales to zero between webhooks
# - Handles traffic spikes during sales
```
When to Choose VPS
Ideal Use Cases
- Consistent traffic: APIs with steady request volume
- Latency-sensitive: User-facing applications where every ms matters
- Stateful applications: In-memory caches, connection pools
- Background workers: Queue consumers, scheduled jobs
- WebSockets: Real-time chat, live updates, gaming
- Large applications: Complex apps with many dependencies
- Cost optimization: Consistent workloads > 2M requests/month
Example: API Server
VPS is better for a production API with consistent traffic:
- Handles 5M requests/month
- Needs sub-100ms latency
- Uses connection pooling to PostgreSQL
- Runs a background job queue

```yaml
# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://...
      - REDIS_URL=redis://...
    restart: always

  worker:
    build: .
    command: celery -A app worker
    environment:
      - DATABASE_URL=postgresql://...
      - REDIS_URL=redis://...
    restart: always
```

Cost: €17.99/month on a VPS Medium, versus roughly $29-145/month on serverless depending on average response time. Latency: a consistent ~50ms, versus 50ms-2000ms once cold starts are factored in.
The Hybrid Approach: Best of Both Worlds
Many production systems combine both approaches:
Architecture Pattern
```
┌─────────────────────────────────────────────┐
│                  Internet                   │
└───────────────────────┬─────────────────────┘
                        │
                        ▼
┌─────────────────────────────────────────────┐
│            VPS: Main Application            │
│                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │ FastAPI  │  │  Redis   │  │ Workers  │   │
│  │  (API)   │  │ (Cache)  │  │ (Celery) │   │
│  └──────────┘  └──────────┘  └──────────┘   │
│                                             │
│  Consistent traffic, low latency, stateful  │
│  Cost: €17.99/month (fixed)                 │
└───────────────────────┬─────────────────────┘
                        │
       ┌────────────────┼────────────────┐
       │                │                │
       ▼                ▼                ▼
┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│  Serverless │  │  Serverless │  │  Serverless │
│ Image Resize│  │ PDF Generate│  │  Email Send │
│             │  │             │  │             │
│   Sporadic  │  │  Batch jobs │  │ Event-driven│
│  ~$2/month  │  │  ~$5/month  │  │  ~$1/month  │
└─────────────┘  └─────────────┘  └─────────────┘
```
Implementation Example
```python
# VPS: Main API (handles user requests)
@app.post("/orders")
async def create_order(order: OrderCreate):
    # Process the order on the VPS for consistent latency
    db_order = await create_order_in_db(order)

    # Trigger serverless functions for the heavy work;
    # these scale independently and run asynchronously

    # Resize product images (serverless)
    await trigger_serverless("resize-images", order.product_ids)

    # Generate invoice PDF (serverless)
    await trigger_serverless("generate-invoice", db_order.id)

    # Send confirmation email (serverless)
    await trigger_serverless("send-email", {
        "to": order.email,
        "template": "order_confirmation",
        "data": db_order,
    })

    return db_order


# Serverless function: image resize
# Only runs when needed, scales to zero
async def resize_images(product_ids: list):
    for product_id in product_ids:
        image = await download_image(product_id)
        resized = await resize(image, sizes=[100, 300, 800])
        await upload_to_s3(resized)
```
DanubeData Options: VPS and Serverless
DanubeData offers both approaches:
VPS Instances
| Plan | Specs | Price | Best For |
|---|---|---|---|
| Micro | 1 vCPU, 2GB RAM | €4.49/mo | Small apps, development |
| Small | 2 vCPU, 4GB RAM | €8.99/mo | Production APIs, web apps |
| Medium | 4 vCPU, 8GB RAM | €17.99/mo | High-traffic, multi-service |
| Large | 8 vCPU, 16GB RAM | €35.99/mo | Heavy workloads, ML inference |
Serverless Containers
DanubeData Serverless uses Knative for scale-to-zero containers:
- Scale to zero: Pay nothing when not running
- Auto-scaling: Handle traffic spikes automatically
- Docker support: Deploy any container image
- Git deployment: Push to deploy from GitHub
- Custom domains: Automatic TLS certificates
Decision Framework
Choose Serverless When:
- ☐ Traffic is sporadic or unpredictable
- ☐ Workload is event-driven (webhooks, triggers)
- ☐ You need automatic scaling for traffic spikes
- ☐ Cost optimization for low-traffic applications
- ☐ Batch processing or background jobs
- ☐ Development/staging environments
Choose VPS When:
- ☐ Traffic is consistent (> 2M requests/month)
- ☐ Latency is critical (need sub-100ms consistently)
- ☐ Application is stateful (connection pools, caches)
- ☐ Need WebSockets or long-running connections
- ☐ Running background workers alongside API
- ☐ Cost optimization for high-traffic applications
Choose Hybrid When:
- ☐ Core API needs consistent performance (VPS)
- ☐ Heavy async tasks can run separately (Serverless)
- ☐ Different parts of system have different traffic patterns
- ☐ Want to optimize costs across workload types
Migration Paths
Serverless to VPS
Common when serverless costs grow or cold starts become problematic:
1. Containerize your serverless function (create a Dockerfile if it isn't containerized already).
2. Set up the VPS with Docker and start the stack: `docker compose up -d`
3. Update DNS/load balancer to point to the VPS.
4. Monitor and verify performance.
5. Disable the serverless functions.
VPS to Serverless
Common for cost optimization of low-traffic services:
1. Ensure the application is stateless: move sessions to Redis, remove local file storage, use an external database.
2. Containerize if not already.
3. Deploy to the serverless platform.
4. Test cold start latency.
5. Update DNS.
6. Decommission the VPS.
Conclusion
There's no universal "better" option—it depends on your specific workload:
Serverless containers excel for sporadic, event-driven workloads where you want to pay only for actual usage. They're perfect for webhooks, batch jobs, and low-traffic applications.
VPS hosting wins for consistent traffic, latency-sensitive applications, and workloads above ~2M requests/month. The fixed cost becomes dramatically cheaper than pay-per-request pricing at scale.
The hybrid approach gives you the best of both: run your core API on a VPS for consistent performance and cost, while offloading sporadic heavy tasks to serverless functions.
Get Started
Ready to deploy? DanubeData offers both options:
👉 Create a VPS - Starting at €4.49/month
👉 Deploy Serverless Container - Pay only for usage
Not sure which to choose? Contact our team for architecture guidance based on your specific workload.