Deploying a Node.js application to the cloud usually means writing a Dockerfile, configuring a base image, installing dependencies, setting up a non-root user, and getting the entrypoint just right. But what if you could skip all of that?
With Cloud Native Buildpacks, you push your code and the platform figures out the rest. It detects your package.json, installs dependencies, configures the runtime, and produces an optimized container image—all without a single line of Docker configuration.
In this guide, you'll deploy a Node.js Express API to DanubeData Rapids (our Knative-based serverless platform) using Git push deployment with Buildpacks. You'll get auto-scaling from zero to 100 replicas, custom domains with automatic TLS, and pay-per-use pricing—all hosted in the EU.
## What Are Buildpacks?
Buildpacks are a Cloud Native Computing Foundation (CNCF) project that automatically transforms your source code into a production-ready container image. Instead of writing a Dockerfile, the buildpack inspects your project, detects the language and framework, and builds an optimized image.
DanubeData Rapids uses Paketo Buildpacks, the most widely adopted buildpack implementation. Here's what happens when you push a Node.js project:
- The buildpack detects `package.json` and identifies it as a Node.js application
- It selects the appropriate Node.js version (from `engines` in `package.json`, or the latest LTS)
- It runs `npm install` (or `yarn install`) with production dependencies
- It executes the `build` script if one is defined
- It configures the start command from your `scripts.start` field
- It produces a minimal, layered OCI image ready for production
The result is a container image that's often smaller and more secure than a hand-written Dockerfile—with automatic security patches for the OS layer and Node.js runtime.
## What You'll Build
We'll deploy a REST API built with Express.js that includes:
- A health check endpoint for the serverless platform's readiness probes
- A JSON API with sample routes
- Environment variable configuration for database connections
- Proper production settings for serverless environments
The full deployment takes about 5 minutes from start to a live URL.
## Prerequisites
- A DanubeData account (free tier available)
- Node.js 18+ installed locally for development
- A GitHub, GitLab, or Bitbucket account for Git deployment
- Basic familiarity with Express.js
## Step 1: Create the Express API
Start by creating a new project and installing dependencies:
```bash
mkdir my-serverless-api
cd my-serverless-api
npm init -y
npm install express cors helmet
```
Create the main application file:
```js
// index.js
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');

const app = express();
const PORT = process.env.PORT || 8080;

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());

// Health check — used by the serverless platform for readiness probes
app.get('/health', (req, res) => {
  res.json({ status: 'ok', timestamp: new Date().toISOString() });
});

// API routes
app.get('/', (req, res) => {
  res.json({
    message: 'Welcome to My Serverless API',
    version: '1.0.0',
    environment: process.env.NODE_ENV || 'development',
  });
});

app.get('/api/items', (req, res) => {
  const items = [
    { id: 1, name: 'Item One', category: 'general' },
    { id: 2, name: 'Item Two', category: 'premium' },
    { id: 3, name: 'Item Three', category: 'general' },
  ];
  const category = req.query.category;
  const filtered = category
    ? items.filter((item) => item.category === category)
    : items;
  res.json({ data: filtered, count: filtered.length });
});

app.get('/api/items/:id', (req, res) => {
  const id = parseInt(req.params.id, 10);
  const items = [
    { id: 1, name: 'Item One', category: 'general', price: 9.99 },
    { id: 2, name: 'Item Two', category: 'premium', price: 29.99 },
    { id: 3, name: 'Item Three', category: 'general', price: 14.99 },
  ];
  const item = items.find((i) => i.id === id);
  if (!item) {
    return res.status(404).json({ error: 'Item not found' });
  }
  res.json({ data: item });
});

// Start server
app.listen(PORT, '0.0.0.0', () => {
  console.log(`Server running on port ${PORT}`);
});
```
## Step 2: Configure package.json for Buildpacks

Buildpacks read your `package.json` to determine how to build and run your application. Two fields matter most: `engines` and `scripts.start`.
```json
{
  "name": "my-serverless-api",
  "version": "1.0.0",
  "description": "Express API deployed to DanubeData Rapids",
  "main": "index.js",
  "engines": {
    "node": "20.x"
  },
  "scripts": {
    "start": "node index.js",
    "dev": "node --watch index.js"
  },
  "dependencies": {
    "cors": "^2.8.5",
    "express": "^4.21.0",
    "helmet": "^8.0.0"
  }
}
```
Key configuration points:
- `engines.node`: Tells the buildpack which Node.js version to use. If omitted, it defaults to the latest LTS release. Pin a major version (e.g., `20.x`) for predictable builds.
- `scripts.start`: The command the buildpack uses to launch your application. This is required.
- No `devDependencies` in production: The buildpack runs `npm ci` with `--production` by default, so dev dependencies are excluded from the final image.
## Step 3: Push to a Git Repository
Initialize a Git repository and push your code:
```bash
git init
git add .
git commit -m "Initial Express API"

# Push to GitHub (or GitLab/Bitbucket)
git remote add origin https://github.com/your-username/my-serverless-api.git
git branch -M main
git push -u origin main
```
Make sure to add a `.gitignore` to exclude `node_modules`:
```
# .gitignore
node_modules/
.env
```
## Step 4: Deploy on DanubeData Rapids
Now for the deployment. Log in to the DanubeData dashboard and follow these steps:
- Navigate to Serverless Containers in the sidebar
- Click Create Container
- Select Git Repository as the deployment source
- Connect your GitHub/GitLab/Bitbucket account and select the repository
- Choose the main branch
- Select a builder — Paketo Base is recommended for Node.js
- Select a resource profile (Micro is fine for getting started)
- Click Deploy
That's it. The platform clones your repository, runs the Paketo Node.js buildpack, produces a container image, and deploys it as a Knative service—all automatically.
Within a couple of minutes, your API is live at:
```
https://my-serverless-api-yourteam.danubedata.run
```
## What Happened Behind the Scenes
When you clicked Deploy, the platform executed this sequence:
- Clone: Pulled the latest code from your Git repository
- Detect: The Paketo buildpack found `package.json` and identified a Node.js project
- Build: Installed Node.js 20.x, ran `npm ci`, and packaged the application
- Image: Produced a minimal OCI container image with your app and its dependencies
- Deploy: Created a Knative service with auto-scaling from 0 to 100 replicas
- Route: Configured HTTPS with automatic TLS certificate provisioning
## Step 5: Configure Environment Variables
Most applications need environment variables for database connections, API keys, and configuration. DanubeData supports up to 100 environment variables per container.
In the dashboard, go to your container's Settings tab and add your variables:
| Variable | Example Value | Description |
|---|---|---|
| `NODE_ENV` | `production` | Enables production optimizations in Express |
| `DATABASE_URL` | `mysql://user:pass@host:3306/db` | Connection string for DanubeData MySQL |
| `REDIS_URL` | `redis://host:6379` | Connection string for DanubeData Redis |
| `API_KEY` | `your-secret-key` | Any custom secrets your app needs |
After saving environment variables, the container automatically redeploys with the new configuration. Your code accesses them through `process.env` as usual—no changes needed.
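A small helper can centralize how these variables are read, with local fallbacks for development. The variable names match the table above; the defaults and the `loadConfig` helper itself are illustrative:

```javascript
// config.js — one place to read platform-injected environment variables.
// Accepting `env` as a parameter keeps the function easy to test.
function loadConfig(env = process.env) {
  return {
    nodeEnv: env.NODE_ENV || 'development',
    // No fallback here: better to fail fast if the database is not configured
    databaseUrl: env.DATABASE_URL,
    redisUrl: env.REDIS_URL || 'redis://localhost:6379',
    apiKey: env.API_KEY,
  };
}

module.exports = { loadConfig };
```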
### Connecting to DanubeData Databases
Serverless containers on DanubeData have internal network access to your team's databases and caches. This means low-latency connections without exposing your database to the public internet.
Here's an example connecting to a DanubeData MySQL instance:
```js
// db.js
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  uri: process.env.DATABASE_URL,
  waitForConnections: true,
  connectionLimit: 5, // Keep low for serverless
  queueLimit: 0,
  enableKeepAlive: true,
  keepAliveInitialDelay: 0,
});

module.exports = pool;
```
A key consideration for serverless: keep connection pool sizes small. Each replica manages its own pool, and with auto-scaling you could have many replicas. A `connectionLimit` of 5-10 per replica prevents overwhelming your database.
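The arithmetic is worth making explicit: the worst case is every replica holding a full pool at once.

```javascript
// Worst-case database connections = per-replica pool size * max replicas.
function maxDbConnections(connectionLimit, maxReplicas) {
  return connectionLimit * maxReplicas;
}

// With connectionLimit: 5 and the platform's default cap of 100 replicas,
// your database must tolerate up to 500 concurrent connections.
console.log(maxDbConnections(5, 100)); // 500
```

If that number exceeds your database's `max_connections`, either lower the pool size, lower the max replica count, or put a connection proxy in front of the database.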
## Step 6: Enable Auto-Deploy with Git Webhooks
DanubeData can automatically redeploy your container every time you push to your repository. In the container's Settings tab:
- Enable Auto-deploy
- Copy the provided webhook URL
- Add the webhook to your repository:
  - GitHub: Settings → Webhooks → Add webhook
  - GitLab: Settings → Webhooks → Add new webhook
  - Bitbucket: Repository settings → Webhooks → Add webhook
- Set the content type to `application/json`
- Select push events for the configured branch
Now every git push to your main branch triggers a new build and deployment. The new version rolls out with zero downtime—the platform keeps the old version running until the new one passes health checks.
```bash
# Make a change, push, and watch it deploy automatically
echo "// updated" >> index.js
git add .
git commit -m "Trigger auto-deploy"
git push origin main
```
## Step 7: Configure Auto-Scaling
DanubeData Rapids scales your container based on traffic. By default, it scales based on concurrent requests per replica. You can customize the scaling behavior in the container settings:
| Setting | Default | Description |
|---|---|---|
| Min replicas | 0 | Scale to zero when idle (set to 1 to avoid cold starts) |
| Max replicas | 100 | Upper limit for auto-scaling |
| Scaling metric | Concurrency | Scale on concurrent requests or requests per second (RPS) |
| Target concurrency | 100 | New replica spawns when concurrency exceeds this threshold |
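The settings in the table interact in a simple way: the desired replica count is roughly the number of in-flight requests divided by the target concurrency, clamped to the min/max bounds. A sketch of that relationship (the real Knative autoscaler also averages over time windows and has a panic mode for spikes, which this ignores):

```javascript
// Simplified concurrency-based autoscaling: ceil(inflight / target),
// clamped to [min, max]. Defaults mirror the table above.
function desiredReplicas(inflight, { target = 100, min = 0, max = 100 } = {}) {
  const wanted = Math.ceil(inflight / target);
  return Math.min(max, Math.max(min, wanted));
}

console.log(desiredReplicas(0));     // 0   (scale to zero when idle)
console.log(desiredReplicas(250));   // 3   (250 concurrent requests / 100 target)
console.log(desiredReplicas(50000)); // 100 (capped at max replicas)
```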
### Scale-to-Zero and Cold Starts
With min replicas set to 0, your container shuts down when there's no traffic—so you pay nothing during idle periods. The tradeoff is a cold start when the first request arrives after a period of inactivity.
Node.js cold starts on DanubeData Rapids are fast—typically 1-3 seconds for a standard Express application. If your use case requires consistent response times, set the minimum replicas to 1.
## Step 8: Add a Custom Domain
Every container gets a default URL like https://my-api-yourteam.danubedata.run, but you can add your own custom domain with automatic TLS.
- Go to your container's Domains tab
- Click Add Domain
- Enter your domain (e.g., `api.yourdomain.com`)
- Add the provided CNAME record to your DNS provider
- The platform automatically provisions and renews a TLS certificate
Once DNS propagates (usually within minutes), your API is reachable at https://api.yourdomain.com with a valid SSL certificate.
## Optimizing for Serverless
A few best practices to get the most out of serverless Node.js deployments:
### 1. Keep the Start Time Fast
Buildpacks produce well-optimized images, but your application code also affects cold start time. Avoid heavy initialization at startup:
```js
// Good: Lazy-load heavy modules
let db;
function getDb() {
  if (!db) {
    const mysql = require('mysql2/promise');
    db = mysql.createPool({ uri: process.env.DATABASE_URL, connectionLimit: 5 });
  }
  return db;
}

app.get('/api/data', async (req, res) => {
  const pool = getDb();
  const [rows] = await pool.query('SELECT * FROM items LIMIT 20');
  res.json({ data: rows });
});
```
### 2. Use the `PORT` Environment Variable

The serverless platform injects a `PORT` environment variable that your application must listen on. Always use `process.env.PORT` with a fallback:
```js
const PORT = process.env.PORT || 8080;

app.listen(PORT, '0.0.0.0', () => {
  console.log(`Listening on port ${PORT}`);
});
```
### 3. Handle Graceful Shutdown

When the platform scales down a replica, it sends a `SIGTERM` signal. Handle it to close connections cleanly:
```js
// `server` is the value returned by app.listen(...)
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    console.log('Server closed');
    process.exit(0);
  });
});
```
### 4. Use a `.node-version` or `engines` Field
Pin your Node.js version so builds are deterministic:
```
# .node-version
20.11.0
```
Or in `package.json`:
```json
"engines": {
  "node": "20.x"
}
```
### 5. Add a Procfile for Custom Start Commands

If your start command differs from `npm start`, you can add a `Procfile`:
```
# Procfile
web: node server.js
```
The buildpack reads the `web` process type and uses it as the container's entrypoint.
## Paketo Builder Options
DanubeData Rapids offers three Paketo builder variants:
| Builder | Best For | Image Size | Includes |
|---|---|---|---|
| Base (recommended) | Most Node.js apps | ~200MB | Ubuntu + common libraries |
| Full | Apps with native dependencies | ~500MB | Ubuntu + build tools, compilers |
| Tiny | Minimal, no-shell environments | ~80MB | Distroless base, smallest footprint |
For most Express.js applications, the Base builder is the right choice. Use Full if your npm packages include native C/C++ addons (e.g., `bcrypt`, `sharp`, or `sqlite3`). Use Tiny for minimal APIs where you want the smallest possible cold start time.
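One way to automate the Base-vs-Full decision is to scan your dependencies for known native addons. Both the heuristic and its addon list below are illustrative, not something the platform provides:

```javascript
// Illustrative heuristic: packages that commonly compile native C/C++ addons.
// The list is an assumption for this example, not exhaustive.
const NATIVE_ADDONS = new Set(['bcrypt', 'sharp', 'sqlite3', 'canvas']);

function suggestBuilder(pkg) {
  const deps = Object.keys(pkg.dependencies || {});
  return deps.some((d) => NATIVE_ADDONS.has(d)) ? 'full' : 'base';
}

console.log(suggestBuilder({ dependencies: { express: '^4.21.0' } })); // 'base'
console.log(suggestBuilder({ dependencies: { sharp: '^0.33.0' } }));   // 'full'
```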
## Buildpacks vs. Dockerfile: When to Use Each
| Aspect | Buildpacks | Dockerfile |
|---|---|---|
| Setup effort | Zero — just push code | Write and maintain a Dockerfile |
| Security patches | Automatic OS/runtime updates | Manual base image updates |
| Customization | Environment variables, Procfile | Full control over every layer |
| Native dependencies | Use Full builder | Install anything you need |
| Best for | Standard Node.js apps, fast iteration | Complex builds, multi-service, custom system deps |
Start with Buildpacks for simplicity and speed. Switch to a Dockerfile only when you need fine-grained control that Buildpacks cannot provide—for example, installing system packages not included in any builder, or running multi-stage builds with custom toolchains.
## Pricing: What It Costs
DanubeData Rapids uses pay-per-use pricing with a generous free tier:
### Free Tier (Every Month)
- 2 million requests
- 250,000 vCPU-seconds
- 500,000 GiB-seconds
- 5 GB egress bandwidth
### Pay-Per-Use (Beyond Free Tier)
| Resource | Price |
|---|---|
| vCPU-second | €0.000012 |
| GiB-second | €0.000002 |
| Per million requests | €0.12 |
### Resource Profiles
If you prefer predictable monthly pricing, choose a resource profile instead of pay-per-use:
| Profile | Monthly Cost |
|---|---|
| Micro | €5/month |
| Small | €10/month |
| Medium | €20/month |
| Large | €40/month |
For a typical Node.js API that handles a few thousand requests per day, the free tier covers everything. A moderately busy API processing 100,000 requests per day stays well under €10/month.
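The request-cost part of that claim can be checked with a few lines of arithmetic, using the rates from the tables above and assuming billing is linear beyond the free tier:

```javascript
// Pay-per-use cost in EUR after subtracting the monthly free tier.
const FREE = { requests: 2_000_000, vcpuSeconds: 250_000, gibSeconds: 500_000 };
const RATE = { perMillionRequests: 0.12, perVcpuSecond: 0.000012, perGibSecond: 0.000002 };

function monthlyCost({ requests, vcpuSeconds, gibSeconds }) {
  const over = (used, free) => Math.max(0, used - free);
  return (
    (over(requests, FREE.requests) / 1_000_000) * RATE.perMillionRequests +
    over(vcpuSeconds, FREE.vcpuSeconds) * RATE.perVcpuSecond +
    over(gibSeconds, FREE.gibSeconds) * RATE.perGibSecond
  );
}

// 100,000 requests/day is about 3 million/month: only 1 million is billable,
// so the request charge alone is 0.12 EUR (compute billed separately once it
// exceeds the free vCPU-second and GiB-second allowances).
console.log(monthlyCost({ requests: 3_000_000, vcpuSeconds: 250_000, gibSeconds: 500_000 }));
```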
## Complete Example: Full-Stack Deployment
Here's a real-world example: an Express API with a DanubeData MySQL database and Redis cache, deployed entirely without a Dockerfile.
### Project Structure

```
my-serverless-api/
  index.js          # Express application
  db.js             # Database connection
  cache.js          # Redis connection
  package.json      # Dependencies and scripts
  .gitignore        # Exclude node_modules
  Procfile          # Optional: custom start command
```
### package.json

```json
{
  "name": "my-serverless-api",
  "version": "1.0.0",
  "engines": {
    "node": "20.x"
  },
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "cors": "^2.8.5",
    "express": "^4.21.0",
    "helmet": "^8.0.0",
    "ioredis": "^5.4.1",
    "mysql2": "^3.11.0"
  }
}
```
### index.js

```js
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
const pool = require('./db');
const redis = require('./cache');

const app = express();
const PORT = process.env.PORT || 8080;

app.use(helmet());
app.use(cors());
app.use(express.json());

app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

app.get('/api/products', async (req, res) => {
  // Check cache first
  const cached = await redis.get('products:all');
  if (cached) {
    return res.json({ data: JSON.parse(cached), source: 'cache' });
  }
  // Query database
  const [rows] = await pool.query('SELECT * FROM products ORDER BY created_at DESC LIMIT 50');
  // Cache for 5 minutes
  await redis.setex('products:all', 300, JSON.stringify(rows));
  res.json({ data: rows, source: 'database' });
});

app.post('/api/products', async (req, res) => {
  const { name, price } = req.body;
  const [result] = await pool.query(
    'INSERT INTO products (name, price) VALUES (?, ?)',
    [name, price]
  );
  // Invalidate cache
  await redis.del('products:all');
  res.status(201).json({ id: result.insertId, name, price });
});

const server = app.listen(PORT, '0.0.0.0', () => {
  console.log(`Server running on port ${PORT}`);
});

process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});
```
### db.js

```js
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  uri: process.env.DATABASE_URL,
  waitForConnections: true,
  connectionLimit: 5,
  queueLimit: 0,
});

module.exports = pool;
```
### cache.js

```js
const Redis = require('ioredis');

const redis = new Redis(process.env.REDIS_URL || 'redis://localhost:6379', {
  maxRetriesPerRequest: 3,
  retryStrategy(times) {
    return Math.min(times * 50, 2000);
  },
});

module.exports = redis;
```
### Environment Variables
Set these in the DanubeData dashboard for your container:
```
NODE_ENV=production
DATABASE_URL=mysql://user:password@db-host.svc.cluster.local:3306/mydb
REDIS_URL=redis://cache-host.svc.cluster.local:6379
```
Push to your repository and the auto-deploy webhook handles the rest. Within minutes, you have a production API with a managed database and cache—no Dockerfile, no server provisioning, no infrastructure to maintain.
## Monitoring Your Deployment
DanubeData provides built-in monitoring for serverless containers:
- Request metrics: Total requests, response times, error rates
- Replica count: How many instances are running at any point
- Resource usage: CPU and memory consumption per replica
- Build history: Status of each build triggered by Git push
All of this is available in the container's Monitoring tab in the dashboard. No additional setup or third-party tools required.
## Summary
Deploying Node.js to serverless containers without a Dockerfile is straightforward with Buildpacks on DanubeData Rapids:
- Write your Express app with a proper `package.json`
- Push to Git — GitHub, GitLab, or Bitbucket
- Create a container in the DanubeData dashboard, selecting your repository
- Configure environment variables for database and cache connections
- Enable auto-deploy for zero-effort continuous deployment
No Dockerfile to write. No base image to choose. No dependency caching to configure. Buildpacks handle everything, and the serverless platform gives you auto-scaling, custom domains, and TLS—all hosted in the EU with GDPR compliance.
## Get Started
Create your free DanubeData account and deploy your first Node.js serverless container in under 5 minutes. The free tier includes 2 million requests per month—enough for most projects to get started without any cost.
Questions about deploying your Node.js application? Contact our team for help.