
Database Connection Pooling Explained: When and Why You Need It

Adrian Silaghi
December 30, 2025
12 min read
#database #connection-pooling #postgresql #mysql #performance #pgbouncer #sqlalchemy

Every time your application opens a new database connection, it performs a handshake that takes 25-50 milliseconds. For a web application handling 100 requests per second, that adds up to as much as 5 seconds of cumulative connection overhead every second. Connection pooling solves this by maintaining a pool of ready-to-use connections.

This guide explains how connection pooling works, when you need it, and how to configure it properly for your database.

What is Connection Pooling?

Connection pooling is a technique that maintains a cache of database connections that can be reused. Instead of creating a new connection for every request and destroying it after, your application borrows a connection from the pool, uses it, and returns it.
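
The core mechanic is small enough to sketch. Below is a minimal, illustrative pool built on Python's standard library; real pools (SQLAlchemy's QueuePool, pg-pool, HikariCP) add health checks, overflow, and timeouts on top of this idea. The connect_fn parameter is a placeholder for whatever driver call opens a connection:

import queue
from contextlib import contextmanager

class SimplePool:
    """Toy pool: pre-opens N connections and lends them out."""

    def __init__(self, connect_fn, size=5):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):               # pay the connect cost once, at startup
            self._idle.put(connect_fn())

    @contextmanager
    def connection(self, timeout=30):
        conn = self._idle.get(timeout=timeout)  # borrow; blocks if pool exhausted
        try:
            yield conn
        finally:
            self._idle.put(conn)            # always return the connection, even on error

# Usage sketch (connect_fn is whatever your driver provides, e.g. psycopg2.connect):
# pool = SimplePool(lambda: psycopg2.connect(DSN), size=5)
# with pool.connection() as conn:
#     with conn.cursor() as cur:
#         cur.execute("SELECT 1")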

Without Connection Pooling

Request 1: Connect (50ms) → Query (5ms) → Disconnect (5ms) = 60ms
Request 2: Connect (50ms) → Query (5ms) → Disconnect (5ms) = 60ms
Request 3: Connect (50ms) → Query (5ms) → Disconnect (5ms) = 60ms

Total for 3 requests: 180ms

With Connection Pooling

Pool initialized: Connect (50ms × 3) = 150ms (once at startup)

Request 1: Borrow (0.1ms) → Query (5ms) → Return (0.1ms) = 5.2ms
Request 2: Borrow (0.1ms) → Query (5ms) → Return (0.1ms) = 5.2ms
Request 3: Borrow (0.1ms) → Query (5ms) → Return (0.1ms) = 5.2ms

Total for 3 requests: 15.6ms

That's an 11x improvement in response time.

Connection Overhead: The Numbers

Creating a database connection involves multiple steps:

Step | PostgreSQL | MySQL
TCP handshake | ~1-5ms | ~1-5ms
TLS negotiation (if SSL) | ~10-30ms | ~10-30ms
Authentication | ~5-15ms | ~5-10ms
Fork/spawn process (PG) | ~5-20ms | N/A (threaded)
Session setup | ~2-5ms | ~2-5ms
Total | ~25-75ms | ~18-50ms

On a managed database with SSL (like DanubeData), expect 30-50ms per new connection. With pooling, this drops to microseconds.
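
You can measure this yourself. A rough benchmark sketch, assuming psycopg2 and a placeholder DSN (swap in your own driver and credentials):

import time
import psycopg2  # assumed driver; any DB-API driver measures the same way

DSN = "postgresql://user:pass@host:5432/db?sslmode=require"  # placeholder

start = time.perf_counter()
conn = psycopg2.connect(DSN)              # full TCP + TLS + auth handshake
connect_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
with conn.cursor() as cur:
    cur.execute("SELECT 1")               # trivial query on the already-open connection
    cur.fetchone()
query_ms = (time.perf_counter() - start) * 1000

print(f"connect: {connect_ms:.1f}ms, query: {query_ms:.1f}ms")
conn.close()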

When You Need Connection Pooling

You Need Pooling If:

  • Web applications: Multiple concurrent requests to the database
  • API servers: High request throughput
  • Serverless functions: Lambda, Cloud Functions (critical! See the sketch after this list)
  • Microservices: Multiple services connecting to one database
  • Any production application: Basically always
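
Why is serverless critical? Each function instance is short-lived, so an in-process pool never amortizes its setup cost, and hundreds of concurrent instances can each open their own connections and exhaust the database. A common mitigation, sketched below for an AWS Lambda-style runtime with psycopg2 (both assumptions), is to open the connection at module scope so warm invocations reuse it, and to put an external pooler such as PgBouncer in front:

import os
import psycopg2  # assumed driver

# Module scope: runs once per container, so warm invocations reuse the connection
_conn = None

def _get_conn():
    global _conn
    if _conn is None or _conn.closed:
        # Point DATABASE_URL at PgBouncer rather than the database itself,
        # to cap total connections across all concurrent instances
        _conn = psycopg2.connect(os.environ["DATABASE_URL"])
    return _conn

def handler(event, context):  # hypothetical Lambda-style entry point
    with _get_conn().cursor() as cur:
        cur.execute("SELECT count(*) FROM users")  # example query
        (count,) = cur.fetchone()
    return {"user_count": count}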

You Might Skip Pooling If:

  • Single-threaded scripts: Cron jobs, one-off scripts
  • Development: Local testing (though good practice to use it)
  • Very low traffic: < 1 request/second

Types of Connection Pooling

1. Application-Level Pooling (Built-in)

Most modern frameworks and ORMs include built-in connection pooling:

Language/Framework | Pooling Library | Default Pool Size
Python/SQLAlchemy | Built-in QueuePool | 5
Node.js/pg | pg-pool | 10
PHP/Laravel | PDO persistent | Per worker
Java/HikariCP | HikariCP | 10
Go/database/sql | Built-in | Unlimited(!)
Ruby/Rails | ActiveRecord | 5

Pros: Simple, no extra infrastructure
Cons: One pool per application instance, so total database connections grow with the number of instances

2. External Connection Pooler (Middleware)

Dedicated pooling servers that sit between your app and database:

  • PgBouncer: Most popular for PostgreSQL
  • Pgpool-II: PostgreSQL pooling + load balancing
  • ProxySQL: MySQL/MariaDB pooling + query routing

Pros: Central pool for all app instances, more connections per server
Cons: Additional infrastructure to manage

3. Managed Pooling

Cloud providers often include built-in pooling:

  • AWS RDS Proxy: Built-in pooling for RDS
  • DanubeData: Connection pooling via application-level configuration

Pool Sizing: The Formula

The optimal pool size depends on your workload and database capacity.

For Application-Level Pools

# Conservative formula
pool_size = (cpu_cores * 2) + 1

# Example: 4-core VPS
pool_size = (4 * 2) + 1 = 9

# With multiple application instances
total_connections = pool_size * num_instances
# 9 connections * 4 instances = 36 connections

Database Connection Limits

Database | Default Max Connections | Memory per Connection
PostgreSQL | 100 | ~5-10 MB
MySQL | 151 | ~1-5 MB
MariaDB | 151 | ~1-5 MB
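
Before sizing pools, check what your server actually allows and how much is already in use. A quick PostgreSQL check, sketched with psycopg2 and a placeholder DSN:

import psycopg2  # assumed driver

conn = psycopg2.connect("postgresql://user:pass@host:5432/db")  # placeholder DSN
with conn.cursor() as cur:
    cur.execute("SHOW max_connections")                    # server's configured limit
    (max_conn,) = cur.fetchone()
    cur.execute("SELECT count(*) FROM pg_stat_activity")   # connections in use now
    (in_use,) = cur.fetchone()
print(f"{in_use} of {max_conn} connections in use")
conn.close()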

Important: Never use all available connections. Reserve some for:

  • Administrative connections (migrations, monitoring)
  • Background workers
  • Emergency access

Recommended Pool Configuration

# For a typical web application
# With 4 app instances on a database with 100 max connections

Per-instance pool:
  min_size: 2          # Minimum idle connections
  max_size: 15         # Maximum connections per instance
  max_overflow: 5      # Extra connections when pool is exhausted

Total: 4 instances × 20 (15 + 5) = 80 max connections
Reserved: 20 connections for admin/workers
Database max: 100 ✓
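
This budget is easy to break as you scale out, so it is worth encoding as a check. A small sanity-check sketch (the numbers mirror the example above; the reserved count of 20 is an assumption, not a universal default):

def check_connection_budget(instances, max_size, max_overflow,
                            db_max_connections, reserved=20):
    """Verify worst-case application connections fit under the database limit."""
    worst_case = instances * (max_size + max_overflow)  # every instance fully busy
    available = db_max_connections - reserved           # headroom for admin/workers
    ok = worst_case <= available
    print(f"worst case {worst_case} vs available {available}: "
          f"{'OK' if ok else 'OVER BUDGET'}")
    return ok

# The example above: 4 instances × (15 + 5) = 80, against 100 - 20 reserved
check_connection_budget(instances=4, max_size=15, max_overflow=5,
                        db_max_connections=100)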

Configuration Examples

Python with SQLAlchemy

# Synchronous
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:pass@host:5432/db",
    pool_size=10,          # Steady-state connections
    max_overflow=20,       # Additional connections when busy
    pool_timeout=30,       # Seconds to wait for available connection
    pool_recycle=1800,     # Recycle connections after 30 minutes
    pool_pre_ping=True,    # Check connection health before using
)

# Async with asyncpg
from sqlalchemy.ext.asyncio import create_async_engine

async_engine = create_async_engine(
    "postgresql+asyncpg://user:pass@host:5432/db",
    pool_size=10,
    max_overflow=20,
    pool_timeout=30,
    pool_recycle=1800,
    pool_pre_ping=True,
)

Node.js with pg-pool

const { Pool } = require('pg');

const pool = new Pool({
  host: 'your-db.danubedata.com',
  port: 5432,
  database: 'mydb',
  user: 'user',
  password: 'password',
  ssl: { rejectUnauthorized: true },

  // Pool configuration
  max: 20,                    // Maximum connections
  min: 5,                     // Minimum connections
  idleTimeoutMillis: 30000,   // Close idle connections after 30s
  connectionTimeoutMillis: 5000,  // Fail if can't connect in 5s
  maxUses: 7500,              // Close connection after N uses
});

// Query
const result = await pool.query('SELECT * FROM users WHERE id = $1', [1]);

// Important: Release on app shutdown
process.on('SIGTERM', async () => {
  await pool.end();
});

PHP with Laravel

// config/database.php
'pgsql' => [
    'driver' => 'pgsql',
    'host' => env('DB_HOST'),
    'port' => env('DB_PORT', '5432'),
    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),

    // Connection pooling settings
    'options' => [
        PDO::ATTR_PERSISTENT => true,  // Persistent connections
    ],
],

// For Laravel Octane (Swoole/RoadRunner)
// Configure in config/octane.php
'warm' => [
    ...Octane::defaultServicesToWarm(),
],

// Connection pool size is per-worker in PHP-FPM
// workers × 1 connection = total connections

Go with database/sql

import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq"
)

func main() {
    db, err := sql.Open("postgres", "postgres://user:pass@host:5432/db?sslmode=require")
    if err != nil {
        log.Fatal(err)
    }

    // Critical: Go defaults to unlimited connections!
    db.SetMaxOpenConns(25)          // Maximum open connections
    db.SetMaxIdleConns(10)          // Maximum idle connections
    db.SetConnMaxLifetime(5 * time.Minute)  // Max connection lifetime
    db.SetConnMaxIdleTime(1 * time.Minute)  // Max idle time

    // Verify connection
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
}

Java with HikariCP

# application.properties (Spring Boot)
spring.datasource.url=jdbc:postgresql://host:5432/db
spring.datasource.username=user
spring.datasource.password=pass

# HikariCP settings
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.max-lifetime=1800000
spring.datasource.hikari.leak-detection-threshold=60000

// Or programmatic configuration
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://host:5432/db");
config.setUsername("user");
config.setPassword("pass");
config.setMaximumPoolSize(20);
config.setMinimumIdle(5);
config.setIdleTimeout(300000);
config.setConnectionTimeout(30000);

HikariDataSource ds = new HikariDataSource(config);

PgBouncer: External Pooling for PostgreSQL

When you have many application instances or serverless functions, PgBouncer provides centralized pooling.

PgBouncer Pooling Modes

Mode | Description | Best For
Session | Connection held for entire session | Apps using session-level features
Transaction | Connection held for transaction only | Most web applications (recommended)
Statement | Connection held for single statement | Simple queries only (limited use)

PgBouncer Configuration

# pgbouncer.ini
[databases]
mydb = host=your-db.danubedata.com port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

# Pool configuration
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3

# Timeouts
server_idle_timeout = 600
server_lifetime = 3600
query_timeout = 300

# Logging
log_connections = 1
log_disconnections = 1
log_pooler_errors = 1
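
The auth_file referenced above holds quoted username/hash pairs, one per line. For auth_type = md5, the hash is PostgreSQL's md5(password + username) with an "md5" prefix; a small helper sketch to generate an entry:

import hashlib

def userlist_entry(user: str, password: str) -> str:
    """Build a userlist.txt line for auth_type = md5 (PostgreSQL's md5 scheme)."""
    digest = hashlib.md5((password + user).encode()).hexdigest()
    return f'"{user}" "md5{digest}"'

# Prints e.g.: "myuser" "md5<32 hex chars>"
print(userlist_entry("myuser", "s3cret"))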

Deploy PgBouncer with Docker

# docker-compose.yml
services:
  pgbouncer:
    image: edoburu/pgbouncer:latest
    environment:
      - DATABASE_URL=postgres://user:pass@your-db.danubedata.com:5432/mydb
      - POOL_MODE=transaction
      - MAX_CLIENT_CONN=1000
      - DEFAULT_POOL_SIZE=20
    ports:
      - "6432:6432"
    restart: always

  app:
    build: .
    environment:
      # App connects to PgBouncer, not directly to database
      - DATABASE_URL=postgres://user:pass@pgbouncer:6432/mydb
    depends_on:
      - pgbouncer
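
When PgBouncer owns the pooling, a second large pool inside the application adds little and can pin server slots. A common pattern, sketched here with SQLAlchemy (an assumption; the host and port match the compose file above), is to point the app at PgBouncer on port 6432 and disable client-side pooling:

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# Connect to PgBouncer (port 6432), not the database; PgBouncer holds the real pool
engine = create_engine(
    "postgresql://user:pass@pgbouncer:6432/mydb",
    poolclass=NullPool,  # no client-side pool; each checkout is one PgBouncer client
)

A small client-side pool also works with transaction mode; NullPool simply makes PgBouncer the single place where sizing decisions live.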

Common Pitfalls and Solutions

1. Connection Pool Exhaustion

# Symptom: "too many connections" or "connection pool timeout"

# Causes:
# - Pool too small for workload
# - Connections not being returned (leaks)
# - Long-running queries blocking pool

# Solutions:

# 1. Increase pool size (if database can handle it)
pool_size=20
max_overflow=30

# 2. Add connection timeout
pool_timeout=30  # Fail fast instead of waiting forever

# 3. Find and fix connection leaks
# Bad:
conn = pool.connect()
# ... (crash here = leaked connection)

# Good:
with pool.connect() as conn:
    # Connection automatically returned on exit
    pass

2. Stale Connections

# Symptom: "connection reset by peer" or intermittent failures

# Cause: Connection was closed by database/firewall but pool doesn't know

# Solutions:

# 1. Enable connection health checks
pool_pre_ping=True  # SQLAlchemy

# 2. Set max connection lifetime
pool_recycle=1800  # Recycle every 30 minutes

# 3. Match database/firewall timeouts
# If firewall closes connections after 10 minutes:
pool_recycle=540  # Recycle before firewall timeout

3. Too Many Connections

# Symptom: Database refuses connections, high memory usage

# Cause: More instances × pool_size > max_connections

# Solutions:

# 1. Reduce per-instance pool size
pool_size=5  # Instead of 20

# 2. Use external pooler (PgBouncer)
# Centralizes connections across all instances

# 3. Increase database max_connections (with more RAM)
# PostgreSQL: ALTER SYSTEM SET max_connections = 200;
# Requires restart

4. Wrong Pool Mode (PgBouncer)

# Symptom: "prepared statement does not exist" or transaction issues

# Cause: Using transaction mode with features that need session mode

# Features that require session mode:
# - Prepared statements (some ORMs)
# - LISTEN/NOTIFY
# - Advisory locks
# - Temporary tables
# - SET commands (non-transaction)

# Solution: Use session mode if needed, or disable problematic features

# For SQLAlchemy with transaction mode (psycopg 3 driver):
engine = create_engine(
    url,
    connect_args={"prepare_threshold": None}  # Disable prepared statements
)

Monitoring Pool Health

Metrics to Watch

  • Pool size: Current active connections
  • Pool overflow: Connections beyond pool_size
  • Wait time: Time waiting for available connection
  • Checkout count: Total connections borrowed
  • Timeout count: Failed attempts to get connection

SQLAlchemy Pool Events

from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def receive_checkout(dbapi_connection, connection_record, connection_proxy):
    print(f"Connection checked out: {connection_record.info}")

@event.listens_for(Pool, "checkin")
def receive_checkin(dbapi_connection, connection_record):
    print(f"Connection returned: {connection_record.info}")

@event.listens_for(Pool, "connect")
def receive_connect(dbapi_connection, connection_record):
    print("New connection created")

# Pool statistics
print(f"Pool size: {engine.pool.size()}")
print(f"Checked out: {engine.pool.checkedout()}")
print(f"Overflow: {engine.pool.overflow()}")

PgBouncer Statistics

# Connect to PgBouncer admin console
psql -h localhost -p 6432 -U pgbouncer pgbouncer

# Show pool statistics
SHOW POOLS;
#  database  |  user   | cl_active | cl_waiting | sv_active | sv_idle
# -----------+---------+-----------+------------+-----------+---------
#  mydb      | myuser  |        15 |          0 |         8 |      12

# Show client connections
SHOW CLIENTS;

# Show server connections
SHOW SERVERS;

# Show overall stats
SHOW STATS;
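
These admin commands can also be scraped programmatically for dashboards or alerts. A monitoring sketch, assuming psycopg2 and the admin credentials configured for your PgBouncer (the admin console speaks the simple query protocol and requires autocommit):

import psycopg2  # assumed driver

# The admin console is a virtual database named "pgbouncer";
# the user must be listed in admin_users or stats_users
conn = psycopg2.connect(host="localhost", port=6432, dbname="pgbouncer",
                        user="pgbouncer", password="secret")  # placeholder credentials
conn.autocommit = True  # the admin console does not accept transactions

with conn.cursor() as cur:
    cur.execute("SHOW POOLS")
    columns = [col.name for col in cur.description]
    for row in cur.fetchall():
        pool = dict(zip(columns, row))
        # cl_waiting > 0 means clients are queueing for a server connection
        if int(pool["cl_waiting"]) > 0:
            print(f"{pool['database']}: {pool['cl_waiting']} clients waiting")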

Best Practices Summary

Practice | Recommendation
Pool size | (CPU cores × 2) + 1 per instance
Max overflow | 50-100% of pool_size
Connection timeout | 30 seconds (fail fast)
Connection lifetime | 30-60 minutes
Health checks | Always enable (pool_pre_ping)
Connection return | Use context managers/try-finally
Multi-instance apps | Consider external pooler (PgBouncer)
Serverless | Use external pooler (required!)

Connection Pooling on DanubeData

DanubeData managed databases work with all connection pooling approaches:

  • Application-level pooling: Works out of the box with SQLAlchemy, pg-pool, etc.
  • PgBouncer: Run PgBouncer on your VPS, connect to managed database
  • High connection limits: Our managed databases support 100+ connections

Recommended Stack for High Traffic

┌──────────────────────────────────────────────────────────────┐
│                    Application Instances                     │
│    ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐    │
│    │App + Pool│  │App + Pool│  │App + Pool│  │App + Pool│    │
│    │ (5 conn) │  │ (5 conn) │  │ (5 conn) │  │ (5 conn) │    │
│    └────┬─────┘  └────┬─────┘  └────┬─────┘  └────┬─────┘    │
│         │             │             │             │          │
└─────────┴─────────────┴─────────────┴─────────────┴──────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                     PgBouncer (Optional)                     │
│                 Max 1000 client connections                  │
│               Pool of 20-50 server connections               │
└──────────────────────────────┬───────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│                    DanubeData PostgreSQL                     │
│                     100 max_connections                      │
│               Managed backups, SSL, monitoring               │
└──────────────────────────────────────────────────────────────┘

Get Started

Need a database with proper connection pooling support?

👉 Create a Managed Database

DanubeData managed databases include:

  • PostgreSQL, MySQL, MariaDB with high connection limits
  • SSL/TLS encryption for secure pooled connections
  • Connection metrics in the monitoring dashboard
  • Automatic parameter tuning for your instance size

Questions about connection pooling for your setup? Contact our team for architecture guidance.
