
Node.js Production Best Practices: The Complete 2025 Guide

Adrian Silaghi
January 8, 2026
15 min read
#nodejs #production #pm2 #express #deployment #best-practices #docker

Node.js powers millions of production applications, from startups to Fortune 500 companies. But running Node.js reliably in production requires more than just node app.js. You need process management, clustering, graceful shutdowns, proper logging, and security hardening.

This guide covers everything you need to run Node.js applications reliably in production.

The Production Checklist

Before deploying, ensure your Node.js application has:

  • ☐ Process manager (PM2 or systemd)
  • ☐ Cluster mode for multi-core utilization
  • ☐ Graceful shutdown handling
  • ☐ Health check endpoints
  • ☐ Structured logging
  • ☐ Error handling and recovery
  • ☐ Environment configuration
  • ☐ Security headers and rate limiting
  • ☐ Monitoring and alerting

1. Process Management with PM2

Never run node app.js directly in production. Use a process manager.

Why PM2?

  • Auto-restart: Restarts crashed processes automatically
  • Cluster mode: Utilizes all CPU cores
  • Zero-downtime reload: Update without dropping connections
  • Log management: Aggregates and rotates logs
  • Monitoring: Built-in CPU/memory monitoring

Installation and Basic Usage

# Install PM2 globally
npm install -g pm2

# Start application
pm2 start app.js --name "my-api"

# Start in cluster mode (use all CPUs)
pm2 start app.js -i max --name "my-api"

# Or specify number of instances
pm2 start app.js -i 4 --name "my-api"

# View running processes
pm2 list

# View logs
pm2 logs my-api

# Restart
pm2 restart my-api

# Reload (zero-downtime)
pm2 reload my-api

# Stop
pm2 stop my-api

# Delete from PM2
pm2 delete my-api

PM2 Configuration File (ecosystem.config.js)

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'my-api',
    script: './dist/server.js',
    instances: 'max',        // Use all CPUs
    exec_mode: 'cluster',    // Enable cluster mode

    // Environment variables
    env: {
      NODE_ENV: 'development',
      PORT: 3000
    },
    env_production: {
      NODE_ENV: 'production',
      PORT: 8000
    },

    // Logging
    log_file: '/var/log/pm2/my-api.log',
    error_file: '/var/log/pm2/my-api-error.log',
    merge_logs: true,
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',

    // Process management
    max_memory_restart: '1G',    // Restart if memory exceeds 1GB
    restart_delay: 4000,           // Wait 4s before restart
    max_restarts: 10,              // Max restarts within min_uptime
    min_uptime: 10000,             // Min uptime to consider started

    // Graceful shutdown
    kill_timeout: 5000,            // Time to wait for graceful shutdown
    listen_timeout: 3000,          // Time to wait for app to listen
    wait_ready: true,              // Wait for process.send('ready')

    // Monitoring
    instance_var: 'INSTANCE_ID',  // Instance ID env var
  }]
};

Start with Config File

# Start with config
pm2 start ecosystem.config.js --env production

# Save process list for auto-start on reboot
pm2 save

# Generate startup script
pm2 startup

# This outputs a command to run, e.g.:
sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u ubuntu --hp /home/ubuntu
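The production checklist also lists systemd as an alternative process manager. For teams that prefer it over PM2, a minimal unit file sketch (the service name, user, and paths here are assumptions — adjust them to your deployment):

```ini
# /etc/systemd/system/my-api.service
[Unit]
Description=my-api Node.js service
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/my-api
ExecStart=/usr/bin/node dist/server.js
Restart=always
RestartSec=4
Environment=NODE_ENV=production
Environment=PORT=8000

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now my-api`. Note that unlike PM2's cluster mode, one systemd unit runs one process; if you need multi-core utilization with systemd, use Node's built-in cluster module or a template unit per core.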

2. Graceful Shutdown

Your application must handle shutdown signals to avoid dropping requests.

Implementation

// server.js
const express = require('express');
const db = require('./db');       // PostgreSQL pool (see section 7)
const redis = require('./redis'); // your Redis client module
const app = express();

// Track server and connections
let server;
const connections = new Set();

// Track connections for graceful shutdown
server = app.listen(process.env.PORT || 3000, () => {
  console.log(`Server started on port ${server.address().port}`);

  // Tell PM2 we're ready
  if (process.send) {
    process.send('ready');
  }
});

// Track all connections
server.on('connection', (connection) => {
  connections.add(connection);
  connection.on('close', () => connections.delete(connection));
});

// Graceful shutdown function
async function gracefulShutdown(signal) {
  console.log(`Received ${signal}, starting graceful shutdown...`);

  // Stop accepting new connections
  server.close(async () => {
    console.log('HTTP server closed');

    // Close database connections
    try {
      await db.end();
      console.log('Database connections closed');
    } catch (err) {
      console.error('Error closing database:', err);
    }

    // Close Redis connections
    try {
      await redis.quit();
      console.log('Redis connection closed');
    } catch (err) {
      console.error('Error closing Redis:', err);
    }

    console.log('Graceful shutdown complete');
    process.exit(0);
  });

  // Force close connections after timeout
  setTimeout(() => {
    console.log('Forcing remaining connections closed...');
    connections.forEach((connection) => connection.destroy());
  }, 10000); // 10 second timeout

  // Final exit if still running
  setTimeout(() => {
    console.error('Shutdown timed out, forcing exit');
    process.exit(1);
  }, 15000);
}

// Handle shutdown signals
process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));

// Handle uncaught exceptions
process.on('uncaughtException', (err) => {
  console.error('Uncaught Exception:', err);
  gracefulShutdown('uncaughtException');
});

// Handle unhandled promise rejections
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled Rejection at:', promise, 'reason:', reason);
  // Note: registering this handler overrides Node's default behavior
  // (Node 15+ crashes on unhandled rejections). Here we log and continue.
});

3. Health Check Endpoints

Health checks enable load balancers and orchestrators to monitor your application.

Implementation

// health.js
const express = require('express');
const db = require('./db');       // PostgreSQL pool (see section 7)
const redis = require('./redis'); // your Redis client module
const router = express.Router();

// Liveness probe - is the process running?
router.get('/health/live', (req, res) => {
  res.status(200).json({ status: 'ok' });
});

// Readiness probe - can the app handle requests?
router.get('/health/ready', async (req, res) => {
  const checks = {
    database: false,
    redis: false,
  };

  try {
    // Check database
    await db.query('SELECT 1');
    checks.database = true;
  } catch (err) {
    console.error('Database health check failed:', err.message);
  }

  try {
    // Check Redis
    await redis.ping();
    checks.redis = true;
  } catch (err) {
    console.error('Redis health check failed:', err.message);
  }

  const isReady = Object.values(checks).every(Boolean);

  res.status(isReady ? 200 : 503).json({
    status: isReady ? 'ready' : 'not_ready',
    checks,
    timestamp: new Date().toISOString(),
  });
});

// Detailed health for monitoring
router.get('/health', async (req, res) => {
  const uptime = process.uptime();
  const memoryUsage = process.memoryUsage();

  res.json({
    status: 'ok',
    version: process.env.npm_package_version || 'unknown',
    uptime: Math.floor(uptime),
    memory: {
      heapUsed: Math.round(memoryUsage.heapUsed / 1024 / 1024),
      heapTotal: Math.round(memoryUsage.heapTotal / 1024 / 1024),
      rss: Math.round(memoryUsage.rss / 1024 / 1024),
    },
    pid: process.pid,
    timestamp: new Date().toISOString(),
  });
});

module.exports = router;

4. Structured Logging

Use structured logging for better debugging and log aggregation.

With Pino (Fastest)

// logger.js
const pino = require('pino');

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  formatters: {
    level: (label) => ({ level: label }),
  },
  timestamp: pino.stdTimeFunctions.isoTime,
  // In production, use JSON format for log aggregation
  // In development, use pino-pretty
  transport: process.env.NODE_ENV === 'development'
    ? { target: 'pino-pretty' }
    : undefined,
});

module.exports = logger;

// Usage
const logger = require('./logger');

logger.info({ userId: 123 }, 'User logged in');
logger.error({ err, requestId: req.id }, 'Request failed');

Request Logging Middleware

// middleware/requestLogger.js
const pinoHttp = require('pino-http');

const requestLogger = pinoHttp({
  logger: require('./logger'),
  customLogLevel: (req, res, err) => {
    if (res.statusCode >= 500 || err) return 'error';
    if (res.statusCode >= 400) return 'warn';
    return 'info';
  },
  customSuccessMessage: (req, res) => {
    return `${req.method} ${req.url} ${res.statusCode}`;
  },
  customErrorMessage: (req, res, err) => {
    return `${req.method} ${req.url} ${res.statusCode} - ${err.message}`;
  },
  // Don't log health check endpoints
  autoLogging: {
    ignore: (req) => req.url.startsWith('/health'),
  },
});

module.exports = requestLogger;

5. Error Handling

Global Error Handler

// middleware/errorHandler.js
const logger = require('./logger');

// Custom error classes
class AppError extends Error {
  constructor(message, statusCode = 500, isOperational = true) {
    super(message);
    this.statusCode = statusCode;
    this.isOperational = isOperational;
    Error.captureStackTrace(this, this.constructor);
  }
}

class NotFoundError extends AppError {
  constructor(message = 'Resource not found') {
    super(message, 404);
  }
}

class ValidationError extends AppError {
  constructor(message = 'Validation failed', errors = []) {
    super(message, 400);
    this.errors = errors;
  }
}

// Error handler middleware
function errorHandler(err, req, res, next) {
  // Default error values
  let statusCode = err.statusCode || 500;
  let message = err.message || 'Internal Server Error';
  let errors = err.errors || undefined;

  // Log error
  if (statusCode >= 500) {
    logger.error({
      err,
      requestId: req.id,
      method: req.method,
      url: req.url,
    }, 'Server error');
  } else {
    logger.warn({
      statusCode,
      message,
      requestId: req.id,
    }, 'Client error');
  }

  // Don't leak error details in production
  if (process.env.NODE_ENV === 'production' && statusCode >= 500) {
    message = 'Internal Server Error';
  }

  res.status(statusCode).json({
    error: {
      message,
      ...(errors && { errors }),
      ...(process.env.NODE_ENV !== 'production' && { stack: err.stack }),
    },
  });
}

// 404 handler
function notFoundHandler(req, res, next) {
  next(new NotFoundError(`Cannot ${req.method} ${req.url}`));
}

module.exports = {
  AppError,
  NotFoundError,
  ValidationError,
  errorHandler,
  notFoundHandler,
};

Async Error Wrapper

// utils/asyncHandler.js
const asyncHandler = (fn) => (req, res, next) => {
  Promise.resolve(fn(req, res, next)).catch(next);
};

module.exports = asyncHandler;

// Usage in routes
const asyncHandler = require('./utils/asyncHandler');
const { NotFoundError } = require('./middleware/errorHandler');

router.get('/users/:id', asyncHandler(async (req, res) => {
  const user = await User.findById(req.params.id);
  if (!user) throw new NotFoundError('User not found');
  res.json(user);
}));

6. Security Best Practices

Security Middleware

// security.js
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const cors = require('cors');

// Exported as a function so server.js can apply it: require('./security')(app)
module.exports = function applySecurity(app) {
  // Helmet - security headers
  app.use(helmet({
    contentSecurityPolicy: {
      directives: {
        defaultSrc: ["'self'"],
        styleSrc: ["'self'", "'unsafe-inline'"],
        scriptSrc: ["'self'"],
        imgSrc: ["'self'", 'data:', 'https:'],
      },
    },
    hsts: {
      maxAge: 31536000,
      includeSubDomains: true,
      preload: true,
    },
  }));

  // CORS configuration
  app.use(cors({
    origin: process.env.ALLOWED_ORIGINS?.split(',') || 'https://yourdomain.com',
    credentials: true,
    optionsSuccessStatus: 200,
  }));

  // Rate limiting
  const limiter = rateLimit({
    windowMs: 15 * 60 * 1000, // 15 minutes
    max: 100, // 100 requests per window
    standardHeaders: true,
    legacyHeaders: false,
    message: { error: { message: 'Too many requests' } },
  });
  app.use('/api', limiter);

  // Strict rate limit for auth endpoints
  const authLimiter = rateLimit({
    windowMs: 60 * 60 * 1000, // 1 hour
    max: 5, // 5 attempts per hour
    message: { error: { message: 'Too many login attempts' } },
  });
  app.use('/api/auth/login', authLimiter);

  // Body parser limits
  app.use(express.json({ limit: '10kb' }));
  app.use(express.urlencoded({ extended: true, limit: '10kb' }));
};

Environment Variables

// config.js
require('dotenv').config();

const config = {
  // Server
  port: parseInt(process.env.PORT, 10) || 3000,
  nodeEnv: process.env.NODE_ENV || 'development',

  // Database
  databaseUrl: process.env.DATABASE_URL,

  // Redis
  redisUrl: process.env.REDIS_URL,

  // Security
  jwtSecret: process.env.JWT_SECRET,
  jwtExpiresIn: process.env.JWT_EXPIRES_IN || '1d',

  // Validate required variables
  validate() {
    const required = ['DATABASE_URL', 'JWT_SECRET'];
    const missing = required.filter((key) => !process.env[key]);

    if (missing.length > 0) {
      throw new Error(`Missing required env vars: ${missing.join(', ')}`);
    }
  },
};

// Validate on startup
config.validate();

module.exports = config;

7. Database Connection Best Practices

Connection Pooling (PostgreSQL)

// db.js
const { Pool } = require('pg');
const config = require('./config');

const pool = new Pool({
  connectionString: config.databaseUrl,
  ssl: config.nodeEnv === 'production' ? { rejectUnauthorized: true } : false,

  // Pool configuration
  max: 20,                      // Max connections
  min: 5,                       // Min connections
  idleTimeoutMillis: 30000,     // Close idle connections after 30s
  connectionTimeoutMillis: 5000, // Fail if can't connect in 5s
  maxUses: 7500,                // Close connection after N queries
});

// Handle pool errors
pool.on('error', (err) => {
  console.error('Unexpected database pool error:', err);
});

// Test connection on startup
pool.query('SELECT NOW()')
  .then(() => console.log('Database connected'))
  .catch((err) => {
    console.error('Database connection failed:', err);
    process.exit(1);
  });

module.exports = pool;
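Several snippets above (the health checks and graceful shutdown) also reference a redis client module that the article never defines. A minimal wiring sketch using the ioredis package — the package choice is an assumption; any client exposing ping() and quit() works the same way:

```javascript
// redis.js
const Redis = require('ioredis');
const config = require('./config');

// lazyConnect defers the TCP connection until the first command,
// so merely requiring this module doesn't block startup
const redis = new Redis(config.redisUrl, {
  lazyConnect: true,
  maxRetriesPerRequest: 3,
  retryStrategy: (times) => Math.min(times * 200, 2000), // backoff, capped at 2s
});

// Log errors instead of letting them crash the process
redis.on('error', (err) => {
  console.error('Redis error:', err.message);
});

module.exports = redis;
```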

8. Deployment Architecture

Docker Configuration

# Dockerfile
FROM node:20-alpine AS base

# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init

WORKDIR /app

# =====================
# Dependencies stage
# =====================
FROM base AS deps

COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

# =====================
# Build stage (if using TypeScript)
# =====================
FROM base AS build

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

# =====================
# Production stage
# =====================
FROM base AS production

ENV NODE_ENV=production

# Copy production dependencies
COPY --from=deps /app/node_modules ./node_modules

# Copy built application
COPY --from=build /app/dist ./dist
COPY package.json ./

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 -G nodejs
USER nodejs

EXPOSE 3000

# Use dumb-init to handle signals
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Docker Compose

# docker-compose.yml
services:
  api:
    build: .
    restart: always
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JWT_SECRET=${JWT_SECRET}
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health/live"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

9. Monitoring

Prometheus Metrics

// metrics.js
const promClient = require('prom-client');

// Create a Registry
const register = new promClient.Registry();

// Add default metrics (CPU, memory, etc.)
promClient.collectDefaultMetrics({ register });

// Custom metrics
const httpRequestDuration = new promClient.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status_code'],
  buckets: [0.01, 0.05, 0.1, 0.5, 1, 5],
});
register.registerMetric(httpRequestDuration);

const httpRequestsTotal = new promClient.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests',
  labelNames: ['method', 'route', 'status_code'],
});
register.registerMetric(httpRequestsTotal);

// Middleware to record metrics
function metricsMiddleware(req, res, next) {
  const start = Date.now();

  res.on('finish', () => {
    const duration = (Date.now() - start) / 1000;
    const route = req.route?.path || req.path;

    httpRequestDuration
      .labels(req.method, route, res.statusCode)
      .observe(duration);

    httpRequestsTotal
      .labels(req.method, route, res.statusCode)
      .inc();
  });

  next();
}

// Metrics endpoint
async function metricsHandler(req, res) {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
}

module.exports = { metricsMiddleware, metricsHandler, register };

Complete Express Setup

// server.js
const express = require('express');
const config = require('./config');
const logger = require('./logger');
const requestLogger = require('./middleware/requestLogger');
const { errorHandler, notFoundHandler } = require('./middleware/errorHandler');
const { metricsMiddleware, metricsHandler } = require('./metrics');
const healthRoutes = require('./routes/health');
const apiRoutes = require('./routes/api');

const app = express();

// Trust proxy (for rate limiting behind reverse proxy)
app.set('trust proxy', 1);

// Security middleware
require('./security')(app);

// Metrics middleware
app.use(metricsMiddleware);

// Request logging
app.use(requestLogger);

// Body parsing
app.use(express.json({ limit: '10kb' }));

// Routes
app.use(healthRoutes);
app.get('/metrics', metricsHandler);
app.use('/api', apiRoutes);

// Error handling
app.use(notFoundHandler);
app.use(errorHandler);

// Start server
const server = app.listen(config.port, () => {
  logger.info({ port: config.port }, 'Server started');
  if (process.send) process.send('ready');
});

// Graceful shutdown
require('./gracefulShutdown')(server);

module.exports = app;
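The setup above requires a ./gracefulShutdown module that the article never defines as a standalone file. A minimal sketch that extracts the section 2 logic into a reusable helper — the optional cleanup callback is an assumption, pass your own db/redis teardown:

```javascript
// gracefulShutdown.js
// Attaches SIGTERM/SIGINT handling to an http.Server. The optional
// cleanup() callback runs once the server stops accepting connections.
function setupGracefulShutdown(server, cleanup) {
  const connections = new Set();

  // Track open sockets so stragglers can be destroyed on timeout
  server.on('connection', (socket) => {
    connections.add(socket);
    socket.on('close', () => connections.delete(socket));
  });

  async function shutdown(signal) {
    console.log(`Received ${signal}, shutting down...`);

    // Stop accepting new connections, then run cleanup and exit
    server.close(async () => {
      if (cleanup) await cleanup(); // e.g. await db.end(); await redis.quit();
      process.exit(0);
    });

    // Destroy sockets that keep the server from closing
    setTimeout(() => connections.forEach((c) => c.destroy()), 10000).unref();
    // Last-resort exit if shutdown stalls
    setTimeout(() => process.exit(1), 15000).unref();
  }

  process.on('SIGTERM', () => shutdown('SIGTERM'));
  process.on('SIGINT', () => shutdown('SIGINT'));
}

module.exports = setupGracefulShutdown;
```

With this in place, `require('./gracefulShutdown')(server, async () => { await db.end(); await redis.quit(); })` wires everything up in one line.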

Cost Summary: Node.js on DanubeData

Component        Service                     Monthly Cost
Node.js Server   VPS Small (2 vCPU, 4GB)     €8.99
Database         PostgreSQL Small            €19.99
Cache/Sessions   Redis Micro                 €4.99
Total                                        €33.97/month

Get Started

Ready to deploy your Node.js application to production?

👉 Create Your DanubeData Account

Deploy a production-ready Node.js stack in minutes:

  • VPS with Docker pre-installed
  • Managed PostgreSQL with automatic backups
  • Redis for caching and sessions
  • 20TB included bandwidth

Need help with your Node.js deployment? Contact our team for guidance.

