PostgreSQL Database Instances

Complete guide to creating and managing PostgreSQL database instances on DanubeData.

Overview

PostgreSQL is a powerful, open-source object-relational database system. DanubeData provides fully managed PostgreSQL instances with:

  • Automated backups - Daily backups with 7-day retention
  • High availability - Read replicas and failover
  • Performance monitoring - Real-time metrics
  • Easy scaling - Vertical and horizontal
  • Security - Encrypted connections and data

Supported Versions

  • PostgreSQL 17 - Latest, recommended
  • PostgreSQL 16 - Current stable
  • PostgreSQL 15 - Previous stable
  • PostgreSQL 14 - Oldest supported version

Version Selection

PostgreSQL 17 (Recommended):

  • Latest features
  • Best performance
  • Improved partitioning
  • Better JSON support

PostgreSQL 16:

  • Stable and mature
  • Logical replication improvements
  • Better monitoring
  • Wide adoption

PostgreSQL 15:

  • Very stable
  • Good for production
  • Regular updates
  • Strong ecosystem

PostgreSQL 14:

  • Years of production hardening
  • Maximum compatibility
  • Conservative choice

Creating a PostgreSQL Instance

Step-by-Step

  1. Navigate to Databases
  2. Click Create Database Instance
  3. Select PostgreSQL as engine
  4. Choose version (17 recommended)
  5. Select resource profile
  6. Configure settings:
    • Instance name
    • Database name
    • Master username
    • Master password
    • Region
  7. Optional: Parameter group, firewall
  8. Click Create Database

Creation Time: 2-3 minutes

Configuration Tips

Instance Name: prod-postgres-users
Database Name: production (initial database)
Master Username: postgres (default) or custom
Master Password: Auto-generated or custom (min 8 chars)

Connecting to PostgreSQL

Connection Information

After creation:

  • Endpoint: postgres-abc123.danubedata.com
  • Port: 5432
  • Username: Your master username
  • Password: Master password

psql Command Line

psql -h postgres-abc123.danubedata.com \
     -p 5432 \
     -U postgres \
     -d production

# Or with full connection string
psql "postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production"

Connection String

postgresql://username:password@host:port/database
postgresql://postgres:mypassword@postgres-abc123.danubedata.com:5432/production
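Passwords containing characters like @ or : must be percent-encoded before being placed in a URL-style connection string. A small helper that builds a safe DSN (illustrative sketch, standard library only):

```python
from urllib.parse import quote

def build_dsn(user, password, host, port, database):
    """Build a postgresql:// DSN, percent-encoding the user and password."""
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{database}"
    )

print(build_dsn("postgres", "p@ss:word", "postgres-abc123.danubedata.com", 5432, "production"))
# postgresql://postgres:p%40ss%3Aword@postgres-abc123.danubedata.com:5432/production
```

Without the encoding, an @ in the password would be parsed as the end of the credentials.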

From Applications

Python (psycopg2):

import psycopg2

conn = psycopg2.connect(
    host="postgres-abc123.danubedata.com",
    port=5432,
    database="production",
    user="postgres",
    password="your_password"
)

cur = conn.cursor()
cur.execute("SELECT version()")
version = cur.fetchone()
print(f"PostgreSQL version: {version[0]}")

cur.close()
conn.close()

Python (SQLAlchemy):

from sqlalchemy import create_engine, text

engine = create_engine(
    'postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production'
)

with engine.connect() as conn:
    # text() is required for raw SQL strings in SQLAlchemy 1.4+
    result = conn.execute(text("SELECT 1"))
    print(result.fetchone())

Node.js (pg):

const { Client } = require('pg');

const client = new Client({
  host: 'postgres-abc123.danubedata.com',
  port: 5432,
  database: 'production',
  user: 'postgres',
  password: 'your_password',
});

async function main() {
  await client.connect();
  const res = await client.query('SELECT NOW()');
  console.log(res.rows[0]);
  await client.end();
}

main().catch(console.error);

Java (JDBC):

import java.sql.*;

public class PostgreSQLExample {
    public static void main(String[] args) {
        String url = "jdbc:postgresql://postgres-abc123.danubedata.com:5432/production";
        String user = "postgres";
        String password = "your_password";
        
        try {
            Connection conn = DriverManager.getConnection(url, user, password);
            System.out.println("Connected to PostgreSQL");
            
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT version()");
            
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

PHP (PDO):

<?php
$host = 'postgres-abc123.danubedata.com';
$port = 5432;
$dbname = 'production';
$user = 'postgres';
$password = 'your_password';

try {
    $dsn = "pgsql:host=$host;port=$port;dbname=$dbname";
    $pdo = new PDO($dsn, $user, $password);
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    
    $stmt = $pdo->query('SELECT version()');
    $version = $stmt->fetchColumn();
    echo "PostgreSQL version: $version\n";
} catch (PDOException $e) {
    echo "Error: " . $e->getMessage();
}
?>

Go:

package main

import (
    "database/sql"
    "fmt"
    _ "github.com/lib/pq"
)

func main() {
    connStr := "host=postgres-abc123.danubedata.com port=5432 user=postgres password=your_password dbname=production sslmode=require"
    
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        panic(err)
    }
    defer db.Close()
    
    var version string
    err = db.QueryRow("SELECT version()").Scan(&version)
    if err != nil {
        panic(err)
    }
    
    fmt.Println("PostgreSQL version:", version)
}

Database Management

Create Databases

-- Create database
CREATE DATABASE myapp_production
    WITH ENCODING 'UTF8'
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    TEMPLATE template0;

-- Use database
\c myapp_production

-- List databases
\l

-- Drop database
DROP DATABASE old_database;

Create Users

-- Create user
CREATE USER appuser WITH PASSWORD 'strong_password';

-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE myapp_production TO appuser;

-- Grant specific privileges
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO appuser;

-- Grant on future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO appuser;

-- List users
\du

-- Drop user
DROP USER olduser;

Create Tables

-- Create table
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create index
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_username ON users(username);

-- List tables
\dt

-- Describe table
\d users

-- Detailed table info (storage, indexes, comments)
\d+ users

PostgreSQL-Specific Features

JSONB Support

-- Create table with JSONB
CREATE TABLE events (
    id SERIAL PRIMARY KEY,
    data JSONB NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Insert JSON data
INSERT INTO events (data) VALUES 
    ('{"user_id": 123, "action": "login", "ip": "192.168.1.1"}');

-- Query JSON
SELECT * FROM events WHERE data->>'action' = 'login';

-- JSON indexing
CREATE INDEX idx_events_action ON events ((data->>'action'));

-- GIN index for full JSON search
CREATE INDEX idx_events_data ON events USING GIN (data);
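The ->> operator returns JSON values as text, which is why the comparison above quotes 'login'. A plain-Python sketch of that behaviour (illustrative only, no database required):

```python
import json

def jsonb_get_text(doc: dict, key: str):
    """Mimic PostgreSQL's data->>'key': return the value as text, or None."""
    value = doc.get(key)
    if value is None:
        return None
    # Strings come back unquoted; other JSON values are serialized as text
    return value if isinstance(value, str) else json.dumps(value)

event = {"user_id": 123, "action": "login", "ip": "192.168.1.1"}
print(jsonb_get_text(event, "action"))   # login
print(jsonb_get_text(event, "user_id"))  # 123
```

On the insert side, psycopg2's psycopg2.extras.Json wrapper serializes a Python dict into a JSONB parameter for you.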

Full-Text Search

-- Add tsvector column
ALTER TABLE articles ADD COLUMN search_vector tsvector;

-- Update search vector (coalesce guards against NULL columns)
UPDATE articles 
SET search_vector = to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''));

-- Create GIN index
CREATE INDEX idx_articles_search ON articles USING GIN(search_vector);

-- Search
SELECT * FROM articles 
WHERE search_vector @@ to_tsquery('english', 'postgresql & database');

-- Rank results
SELECT *, ts_rank(search_vector, query) AS rank
FROM articles, to_tsquery('english', 'postgresql') query
WHERE search_vector @@ query
ORDER BY rank DESC;

Array Types

-- Create table with array
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    tags TEXT[]
);

-- Insert with array
INSERT INTO products (name, tags) 
VALUES ('Laptop', ARRAY['electronics', 'computers', 'portable']);

-- Query array
SELECT * FROM products WHERE 'electronics' = ANY(tags);

-- Array contains
SELECT * FROM products WHERE tags @> ARRAY['computers'];
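The @> containment check behaves like a subset test: a row matches if every element of the right-hand array appears in tags. Both queries sketched in plain Python (illustrative, no database required):

```python
def any_match(tags, value):
    """Mimic: 'value' = ANY(tags)"""
    return value in tags

def contains(tags, wanted):
    """Mimic: tags @> ARRAY[...] -- subset test"""
    return set(wanted) <= set(tags)

tags = ["electronics", "computers", "portable"]
print(any_match(tags, "electronics"))          # True
print(contains(tags, ["computers"]))           # True
print(contains(tags, ["computers", "audio"]))  # False
```

psycopg2 adapts Python lists to PostgreSQL arrays automatically, so a list can be passed directly as a query parameter.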

Window Functions

-- Rank employees by salary
SELECT 
    name,
    department,
    salary,
    RANK() OVER (PARTITION BY department ORDER BY salary DESC) as rank
FROM employees;

-- Running total
SELECT 
    date,
    amount,
    SUM(amount) OVER (ORDER BY date) as running_total
FROM sales;

Common Table Expressions (CTEs)

-- Recursive CTE for hierarchy
WITH RECURSIVE org_chart AS (
    -- Base case
    SELECT id, name, manager_id, 1 as level
    FROM employees
    WHERE manager_id IS NULL
    
    UNION ALL
    
    -- Recursive case
    SELECT e.id, e.name, e.manager_id, oc.level + 1
    FROM employees e
    JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart ORDER BY level, name;
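The recursive CTE walks the hierarchy level by level: the base case seeds the roots, and each iteration joins in the direct reports of the previous level. The same traversal sketched in plain Python (illustrative; rows as (id, name, manager_id) tuples):

```python
def org_chart(employees):
    """Return (id, name, manager_id, level) rows, top-down, like the CTE."""
    result = []
    # Base case: employees with no manager are level 1
    frontier = [(i, n, m, 1) for (i, n, m) in employees if m is None]
    while frontier:
        result.extend(frontier)
        ids = {row[0] for row in frontier}
        # Recursive case: direct reports of the previous level
        frontier = [(i, n, m, frontier[0][3] + 1)
                    for (i, n, m) in employees if m in ids]
    return result

rows = [(1, "Alice", None), (2, "Bob", 1), (3, "Carol", 1), (4, "Dave", 2)]
for row in org_chart(rows):
    print(row)
```

The iteration stops when a level produces no rows, exactly as UNION ALL terminates the CTE.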

Performance Optimization

Indexes

-- B-tree index (default)
CREATE INDEX idx_users_email ON users(email);

-- Partial index
CREATE INDEX idx_active_users ON users(email) WHERE active = true;

-- Composite index
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at DESC);

-- GIN index for full-text search
CREATE INDEX idx_articles_content ON articles USING GIN(to_tsvector('english', content));

-- List indexes
\di

-- Index usage stats
SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

Query Optimization

-- Analyze query plan
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com';

-- Analyze table
ANALYZE users;

-- Vacuum table
VACUUM ANALYZE users;

-- Full vacuum (requires maintenance window)
VACUUM FULL users;

Connection Pooling

PgBouncer (recommended):

[databases]
myapp = host=postgres-abc123.danubedata.com port=5432 dbname=production

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
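Applications connect to PgBouncer on its listen port (6432 above) instead of the database directly; the DSN is otherwise unchanged. A stdlib helper to repoint an existing DSN (illustrative sketch; assumes the DSN already carries an explicit port):

```python
from urllib.parse import urlsplit, urlunsplit

def with_port(dsn: str, port: int) -> str:
    """Replace the port in a postgresql:// DSN (one must be present)."""
    parts = urlsplit(dsn)
    host_part = parts.netloc.rsplit(":", 1)[0]  # strip the old port
    return urlunsplit(parts._replace(netloc=f"{host_part}:{port}"))

dsn = "postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production"
print(with_port(dsn, 6432))
# postgresql://postgres:password@postgres-abc123.danubedata.com:6432/production
```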

Application-level (Python):

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    'postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production',
    poolclass=QueuePool,
    pool_size=10,
    max_overflow=20,
    pool_pre_ping=True
)

Monitoring & Metrics

Key Metrics

Monitor in dashboard:

  • CPU Usage
  • Memory Usage
  • Connections (active/max)
  • Queries per second
  • Cache hit ratio
  • Replication lag

Database Statistics

-- Database size
SELECT pg_size_pretty(pg_database_size('production'));

-- Table sizes
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
LIMIT 10;

-- Connection stats
SELECT count(*), state 
FROM pg_stat_activity 
GROUP BY state;

-- Long-running queries
SELECT pid, now() - pg_stat_activity.query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
AND now() - pg_stat_activity.query_start > interval '5 minutes'
ORDER BY duration DESC;

-- Cache hit ratio (cast to float to avoid integer division)
SELECT 
    sum(heap_blks_read) as heap_read,
    sum(heap_blks_hit) as heap_hit,
    sum(heap_blks_hit)::float / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) as ratio
FROM pg_statio_user_tables;
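The cache hit ratio is reads served from shared buffers divided by all reads; values above roughly 0.99 are typical for a well-sized instance. The arithmetic, sketched in Python:

```python
def cache_hit_ratio(heap_read: int, heap_hit: int):
    """heap_hit / (heap_hit + heap_read); None when there is no traffic."""
    total = heap_hit + heap_read
    return None if total == 0 else heap_hit / total

print(cache_hit_ratio(heap_read=150, heap_hit=9850))  # 0.985
```

A persistently low ratio usually means the working set no longer fits in memory and the instance needs a larger profile.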

Backup & Recovery

Automated Backups

  • Daily at 2 AM UTC
  • 7-day retention
  • Point-in-time recovery
  • No performance impact

Manual Snapshots

Create on-demand:

  1. Go to database page
  2. Snapshots tab
  3. Create Snapshot
  4. Name and create

Export Database

# Export entire database
pg_dump -h postgres-abc123.danubedata.com \
        -U postgres \
        -d production \
        -F custom \
        -f backup.dump

# Export with compression
pg_dump -h postgres-abc123.danubedata.com \
        -U postgres \
        -d production \
        | gzip > backup.sql.gz

# Export schema only
pg_dump -h postgres-abc123.danubedata.com \
        -U postgres \
        -d production \
        --schema-only \
        -f schema.sql

# Export specific tables
pg_dump -h postgres-abc123.danubedata.com \
        -U postgres \
        -d production \
        -t users -t orders \
        -f tables.sql

Import Database

# Import from custom format
pg_restore -h postgres-abc123.danubedata.com \
           -U postgres \
           -d production \
           -F custom \
           backup.dump

# Import from SQL
psql -h postgres-abc123.danubedata.com \
     -U postgres \
     -d production \
     < backup.sql

# Import compressed
gunzip < backup.sql.gz | psql -h postgres-abc123.danubedata.com \
                               -U postgres \
                               -d production

Read Replicas

Create Replica

  1. Go to database page
  2. Click Add Replica
  3. Select node and profile
  4. Click Create

Benefits

  • Scale read operations
  • Reporting without impacting primary
  • Geographic distribution
  • High availability

Using Replicas

from sqlalchemy import create_engine, text

# Primary for writes
primary_engine = create_engine('postgresql://postgres:pass@primary-host:5432/db')

# Replica for reads
replica_engine = create_engine('postgresql://postgres:pass@replica-host:5432/db')

# Write to primary (engine.begin() commits on success)
with primary_engine.begin() as conn:
    conn.execute(text("INSERT INTO users ..."))

# Read from replica
with replica_engine.connect() as conn:
    users = conn.execute(text("SELECT * FROM users")).fetchall()

Security

SSL/TLS Connections

Always use SSL:

# psql with SSL
psql "postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production?sslmode=require"

Connection modes:

  • sslmode=require - Encrypt, but do not verify the server certificate
  • sslmode=verify-ca - Also verify the certificate is signed by a trusted CA
  • sslmode=verify-full - Also verify the hostname matches the certificate
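To enforce a mode from application code, set sslmode as a query parameter on the DSN. A stdlib helper (illustrative sketch):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def set_sslmode(dsn: str, mode: str = "require") -> str:
    """Return the DSN with sslmode set (overriding any existing value)."""
    parts = urlsplit(dsn)
    query = dict(parse_qsl(parts.query))
    query["sslmode"] = mode
    return urlunsplit(parts._replace(query=urlencode(query)))

dsn = "postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production"
print(set_sslmode(dsn, "verify-full"))
# postgresql://postgres:password@postgres-abc123.danubedata.com:5432/production?sslmode=verify-full
```

Both psql and most drivers (psycopg2, node-postgres, JDBC via ssl properties) honor this parameter.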

User Permissions

-- Application user
CREATE USER app_user WITH PASSWORD 'strong_pass';
GRANT CONNECT ON DATABASE production TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;

-- Read-only user
CREATE USER readonly WITH PASSWORD 'strong_pass';
GRANT CONNECT ON DATABASE production TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

Troubleshooting

Connection Issues

-- Check max connections
SHOW max_connections;

-- Check current connections
SELECT count(*) FROM pg_stat_activity;

-- Kill idle connections
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
AND now() - state_change > interval '10 minutes';
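Transient connection failures (failover, pool exhaustion) are usually best handled with retry plus exponential backoff in the application. A generic sketch (illustrative; pass in your own connect function, e.g. a psycopg2.connect wrapper):

```python
import time

def connect_with_retry(connect, attempts: int = 5, base_delay: float = 0.5):
    """Call connect(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted all attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

With the defaults this waits 0.5s, 1s, 2s, then 4s between the five attempts before giving up.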

Performance Issues

-- Find slow queries (requires the pg_stat_statements extension;
-- columns are total_exec_time/mean_exec_time on PostgreSQL 13+)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Largest tables (bloat candidates)
SELECT schemaname, tablename, 
       pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;

-- Run VACUUM
VACUUM ANALYZE;

Best Practices

Schema Design

  1. Use appropriate data types
  2. Add foreign keys
  3. Create necessary indexes
  4. Normalize when appropriate
  5. Use constraints

Query Optimization

  1. Use EXPLAIN ANALYZE
  2. Add indexes for frequent queries
  3. Avoid SELECT *
  4. Use LIMIT for large results
  5. Optimize JOINs

Maintenance

  1. Regular VACUUM ANALYZE
  2. Monitor index usage
  3. Review slow queries
  4. Update statistics
  5. Monitor disk space

Next Steps

Need help? Contact support through the dashboard.