Migrate from MongoDB Atlas to PostgreSQL: Complete Guide 2025

Adrian Silaghi
December 25, 2025
12 min read
#mongodb #postgresql #migration #jsonb #database-migration #prisma

MongoDB was the database of the 2010s. Its flexible schema and JSON documents made it perfect for rapid prototyping. But as applications mature, many teams are discovering that PostgreSQL offers the best of both worlds: relational integrity when you need it, and JSONB flexibility when you don't.

This guide walks you through migrating from MongoDB (specifically MongoDB Atlas) to PostgreSQL. We'll cover schema design decisions, data migration strategies, query translation, and application code changes.

Why Companies Are Leaving MongoDB

1. Licensing and Cost Concerns

MongoDB's SSPL license has raised concerns for many organizations. While not as restrictive as some initially feared, it has pushed cloud providers away and limited hosting options. MongoDB Atlas pricing has also increased significantly, with many teams seeing bills double or triple over the past few years.

2. Schema Chaos at Scale

The "schemaless" nature of MongoDB becomes a liability as teams grow:

  • Different documents in the same collection have different fields
  • No database-level enforcement of required fields
  • Application-level validation scattered and inconsistent
  • Migrations become frightening without schema guarantees

3. Join Limitations

MongoDB's $lookup is limited compared to SQL JOINs:

  • Each join needs its own $lookup stage, so multi-join queries pile up quickly
  • Performance degrades significantly with large datasets
  • Complex aggregations become unreadable

4. Transaction Complexity

While MongoDB has added multi-document transactions, they are more complex to use than PostgreSQL's (a quick sketch of the PostgreSQL side follows the list):

  • Transactions require a replica set or sharded cluster
  • More overhead and limitations
  • Not as battle-tested as PostgreSQL's ACID guarantees
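
For contrast, a minimal sketch of the same guarantee in PostgreSQL, using an illustrative accounts table (not part of the schemas later in this guide):

-- Runs on a single PostgreSQL instance; no replica set or session setup required
BEGIN;
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
UPDATE accounts SET balance = balance + 50 WHERE id = 2;
COMMIT;  -- both updates commit together; any error rolls both back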

5. PostgreSQL JSONB Exists

The main draw of MongoDB, flexible JSON documents, is now available in PostgreSQL through JSONB. You get document flexibility plus:

  • Proper relational modeling when needed
  • SQL for querying (widely known)
  • Mature ecosystem and tooling
  • ACID transactions by default

When to Migrate (And When Not To)

Good Candidates for Migration

  • Applications with relational data: Users, orders, products with clear relationships
  • Growing teams: Schema enforcement helps onboard new developers
  • Cost-sensitive projects: PostgreSQL hosting is typically cheaper
  • Compliance requirements: GDPR, SOC2, and similar obligations are easier to meet with PostgreSQL
  • Complex reporting: SQL makes analytics queries trivial

Consider Staying on MongoDB If

  • Truly document-centric: Content management, logging, event sourcing
  • Massive scale: Already heavily sharded and working well
  • Time series focus: MongoDB's time series collections are excellent
  • Migration cost prohibitive: Millions of documents, limited team

PostgreSQL JSONB: The Best of Both Worlds

Before diving into migration, understand JSONB—it's key to a smooth transition.

What is JSONB?

JSONB is a binary JSON format in PostgreSQL that allows you to:

  • Store JSON documents in a column
  • Index and query JSON fields efficiently
  • Mix structured and unstructured data in the same table

JSONB Example

-- Create a table with JSONB
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    price DECIMAL(10,2) NOT NULL,
    attributes JSONB DEFAULT '{}'
);

-- Insert with flexible attributes
INSERT INTO products (name, price, attributes) VALUES
('Laptop', 999.99, '{
    "brand": "Dell",
    "cpu": "Intel i7",
    "ram": "16GB",
    "storage": {"type": "SSD", "size": "512GB"}
}'),
('T-Shirt', 29.99, '{
    "color": "blue",
    "size": "M",
    "material": "cotton"
}');

-- Query JSONB fields
SELECT name, attributes->>'brand' AS brand
FROM products
WHERE attributes->>'brand' = 'Dell';

-- Query nested JSONB
SELECT name
FROM products
WHERE attributes->'storage'->>'type' = 'SSD';

-- Index JSONB for performance
CREATE INDEX idx_products_attributes ON products USING GIN (attributes);
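
One caveat worth noting: the GIN index above serves JSONB containment and key-existence operators (@>, ?), not the ->> equality filters shown earlier. A short sketch of both options (the idx_products_brand name is just an example):

-- Served by the GIN index: containment query
SELECT name FROM products WHERE attributes @> '{"brand": "Dell"}';

-- For frequent ->> filters, add a btree expression index instead
CREATE INDEX idx_products_brand ON products ((attributes->>'brand'));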

Schema Conversion Strategies

You have three main approaches for converting MongoDB collections to PostgreSQL:

Strategy 1: Full Normalization

Convert embedded documents to separate tables with foreign keys.

// MongoDB: Embedded document
{
  "_id": ObjectId("..."),
  "name": "John Doe",
  "email": "john@example.com",
  "addresses": [
    {"type": "home", "street": "123 Main St", "city": "Boston"},
    {"type": "work", "street": "456 Corp Ave", "city": "Boston"}
  ],
  "orders": [
    {"product": "Widget", "price": 9.99, "date": ISODate("...")},
    {"product": "Gadget", "price": 19.99, "date": ISODate("...")}
  ]
}

-- PostgreSQL: Normalized tables
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL
);

CREATE TABLE addresses (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    type VARCHAR(50),
    street VARCHAR(255),
    city VARCHAR(100)
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    product VARCHAR(255),
    price DECIMAL(10,2),
    ordered_at TIMESTAMP
);

Pros: Full relational power, normalized data, efficient queries

Cons: More tables to manage, JOINs required, more application changes

Strategy 2: JSONB Columns

Keep flexible data as JSONB, normalize only what needs relations.

-- PostgreSQL: Hybrid approach
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    -- Keep flexible data as JSONB
    addresses JSONB DEFAULT '[]',
    metadata JSONB DEFAULT '{}'
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    product VARCHAR(255),
    price DECIMAL(10,2),
    ordered_at TIMESTAMP,
    -- Flexible order attributes
    attributes JSONB DEFAULT '{}'
);

Pros: Faster migration, preserves flexibility, fewer tables

Cons: JSONB querying is more verbose, potential for schema drift

Strategy 3: Document Table (MongoDB-Style)

Store entire documents as JSONB with minimal structure.

-- PostgreSQL: Document-centric
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document JSONB NOT NULL,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Create indexes on commonly queried fields
CREATE INDEX idx_users_email ON users ((document->>'email'));
CREATE INDEX idx_users_document ON users USING GIN (document);

Pros: Minimal application changes, fastest migration

Cons: Loses relational benefits, essentially MongoDB with SQL syntax

Recommended: Hybrid Approach

For most applications, we recommend Strategy 2, the hybrid approach (a sketch of a typical hybrid query follows this list):

  • Normalize core entities (users, products, orders)
  • Use JSONB for truly flexible data (metadata, preferences, attributes)
  • Create foreign key relationships where they make sense
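
To make the payoff concrete: a typical query against the Strategy 2 schema above, where the subscription and gift_wrap keys are hypothetical flexible attributes:

-- Relational join on the normalized columns, JSONB filters on the flexible parts
SELECT u.name, o.product, o.price
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.metadata->>'subscription' = 'premium'
  AND o.attributes->>'gift_wrap' = 'true';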

Query Translation Guide

Here's how to convert common MongoDB queries to PostgreSQL:

Basic CRUD Operations

Operation | MongoDB                                               | PostgreSQL
Find one  | db.users.findOne({email: "a@b.com"})                  | SELECT * FROM users WHERE email = 'a@b.com' LIMIT 1
Find many | db.users.find({active: true})                         | SELECT * FROM users WHERE active = true
Insert    | db.users.insertOne({name: "John"})                    | INSERT INTO users (name) VALUES ('John')
Update    | db.users.updateOne({_id: id}, {$set: {name: "Jane"}}) | UPDATE users SET name = 'Jane' WHERE id = $1
Delete    | db.users.deleteOne({_id: id})                         | DELETE FROM users WHERE id = $1
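
One pattern the table above leaves out is MongoDB's upsert (updateOne with upsert: true). PostgreSQL covers it with INSERT ... ON CONFLICT; a sketch, assuming the unique constraint on email from the schemas above:

-- MongoDB: db.users.updateOne({email: "a@b.com"}, {$set: {name: "John"}}, {upsert: true})
INSERT INTO users (email, name)
VALUES ('a@b.com', 'John')
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name;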

Query Operators

// MongoDB: Comparison operators
db.products.find({price: {$gt: 100, $lt: 500}})

-- PostgreSQL
SELECT * FROM products WHERE price > 100 AND price < 500;

// MongoDB: IN operator
db.products.find({category: {$in: ["electronics", "books"]}})

-- PostgreSQL
SELECT * FROM products WHERE category IN ('electronics', 'books');

// MongoDB: Text search
db.products.find({$text: {$search: "laptop"}})

-- PostgreSQL (full-text search)
SELECT * FROM products
WHERE to_tsvector('english', name || ' ' || description)
      @@ plainto_tsquery('english', 'laptop');

Aggregation Pipeline → SQL

// MongoDB: Group and count
db.orders.aggregate([
  {$match: {status: "completed"}},
  {$group: {_id: "$userId", total: {$sum: "$amount"}}},
  {$sort: {total: -1}},
  {$limit: 10}
])

-- PostgreSQL
SELECT user_id, SUM(amount) as total
FROM orders
WHERE status = 'completed'
GROUP BY user_id
ORDER BY total DESC
LIMIT 10;

// MongoDB: $lookup (join)
db.orders.aggregate([
  {$lookup: {
    from: "users",
    localField: "userId",
    foreignField: "_id",
    as: "user"
  }}
])

-- PostgreSQL
SELECT o.*, u.name, u.email
FROM orders o
JOIN users u ON o.user_id = u.id;

JSONB Queries

// MongoDB: Query nested field
db.products.find({"specs.ram": "16GB"})

-- PostgreSQL with JSONB
SELECT * FROM products WHERE specs->>'ram' = '16GB';

-- Or for nested objects
SELECT * FROM products WHERE specs->'storage'->>'type' = 'SSD';

// MongoDB: Array contains
db.users.find({tags: "premium"})

-- PostgreSQL JSONB array
SELECT * FROM users WHERE tags ? 'premium';

-- Or with containment
SELECT * FROM users WHERE tags @> '["premium"]'::jsonb;
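
Updates inside a JSONB column also map cleanly from MongoDB's $set. A minimal sketch against the metadata column from the hybrid schema (the theme key is illustrative):

-- Set one key inside the JSONB document
UPDATE users
SET metadata = jsonb_set(metadata, '{theme}', '"dark"', true)
WHERE email = 'a@b.com';

-- Equivalent shorthand: merge a small JSON fragment with ||
UPDATE users
SET metadata = metadata || '{"theme": "dark"}'::jsonb
WHERE email = 'a@b.com';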

Data Migration Process

Step 1: Export from MongoDB

# Export collection to JSON
mongoexport \
  --uri="mongodb+srv://user:pass@cluster.mongodb.net/database" \
  --collection=users \
  --out=users.json

# For large collections, use BSON format
mongodump \
  --uri="mongodb+srv://user:pass@cluster.mongodb.net/database" \
  --collection=users \
  --out=./backup

Step 2: Transform Data

Create a migration script to transform MongoDB documents:

# migrate.py
import json
import psycopg2
from bson import ObjectId
from datetime import datetime

def transform_user(doc):
    """Transform a MongoDB user document into a PostgreSQL row"""
    # mongoexport emits Extended JSON, so _id arrives as {"$oid": "..."}
    raw_id = doc['_id']
    created = doc.get('createdAt', datetime.now())
    if isinstance(created, dict):  # Extended JSON date: {"$date": "..."}
        created = created.get('$date')
    return {
        'id': raw_id['$oid'] if isinstance(raw_id, dict) else str(raw_id),
        'name': doc.get('name', ''),
        'email': doc.get('email', ''),
        'created_at': created,
        'metadata': json.dumps(
            {k: v for k, v in doc.items()
             if k not in ['_id', 'name', 'email', 'createdAt']},
            default=str  # fall back to str() for values json can't serialize
        )
    }

def migrate_users(mongo_file, pg_conn):
    """Migrate users from MongoDB export to PostgreSQL"""
    cursor = pg_conn.cursor()

    with open(mongo_file, 'r') as f:
        for line in f:
            doc = json.loads(line)
            user = transform_user(doc)

            cursor.execute("""
                INSERT INTO users (id, name, email, created_at, metadata)
                VALUES (%s, %s, %s, %s, %s)
                ON CONFLICT (id) DO UPDATE SET
                    name = EXCLUDED.name,
                    email = EXCLUDED.email,
                    metadata = EXCLUDED.metadata
            """, (
                user['id'],
                user['name'],
                user['email'],
                user['created_at'],
                user['metadata']
            ))

    pg_conn.commit()
    cursor.close()

# Usage
conn = psycopg2.connect(
    host="your-db.danubedata.com",
    database="your_database",
    user="your_user",
    password="your_password"
)

migrate_users('users.json', conn)
conn.close()

Step 3: Batch Import for Large Datasets

# For large datasets, use COPY for better performance
import io
import csv

def batch_migrate_users(mongo_file, pg_conn, batch_size=10000):
    cursor = pg_conn.cursor()
    batch = []

    with open(mongo_file, 'r') as f:
        for i, line in enumerate(f):
            doc = json.loads(line)
            user = transform_user(doc)
            batch.append(user)

            if len(batch) >= batch_size:
                insert_batch(cursor, batch)
                pg_conn.commit()
                print(f"Migrated {i + 1} users...")
                batch = []

        if batch:
            insert_batch(cursor, batch)
            pg_conn.commit()

    cursor.close()

def insert_batch(cursor, batch):
    # Use COPY for fast bulk inserts; CSV format so the csv module's quoting
    # of the JSON metadata (embedded quotes, commas) is interpreted correctly
    buffer = io.StringIO()
    writer = csv.writer(buffer)

    for user in batch:
        writer.writerow([
            user['id'],
            user['name'],
            user['email'],
            str(user['created_at']),
            user['metadata']
        ])

    buffer.seek(0)
    cursor.copy_expert(
        "COPY users (id, name, email, created_at, metadata) FROM STDIN WITH (FORMAT csv)",
        buffer
    )
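
For very large collections it also pays to load the data first and build secondary indexes afterwards; one pass over the loaded table is cheaper than maintaining indexes row by row. A sketch, reusing the idx_users_metadata index that the Index Migration section below also defines (create it in whichever step fits your process):

-- Build secondary indexes after the bulk COPY finishes
CREATE INDEX idx_users_metadata ON users USING GIN (metadata);
ANALYZE users;  -- refresh planner statistics after the load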

Step 4: Verify Migration

-- Check row counts (compare with db.users.countDocuments() in MongoDB)
SELECT COUNT(*) AS postgres_count FROM users;

-- Sample comparison
SELECT id, name, email FROM users LIMIT 10;

-- Check for NULL values where unexpected
SELECT COUNT(*) FROM users WHERE name IS NULL;
SELECT COUNT(*) FROM users WHERE email IS NULL;
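
It is also worth looking for data MongoDB tolerated but the new constraints will reject, for example duplicate emails before the UNIQUE constraint is enforced:

-- Duplicate emails that would violate the UNIQUE constraint
SELECT email, COUNT(*) AS copies
FROM users
GROUP BY email
HAVING COUNT(*) > 1;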

Application Code Changes

Mongoose → Prisma (Recommended)

Prisma is one of the most popular TypeScript ORMs and provides excellent PostgreSQL support.

# Install Prisma
npm install prisma @prisma/client
npx prisma init

// prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id        String   @id @default(cuid())
  name      String
  email     String   @unique
  metadata  Json     @default("{}")
  createdAt DateTime @default(now()) @map("created_at")
  orders    Order[]

  @@map("users")
}

model Order {
  id        String   @id @default(cuid())
  userId    String   @map("user_id")
  user      User     @relation(fields: [userId], references: [id])
  product   String
  price     Decimal  @db.Decimal(10, 2)
  createdAt DateTime @default(now()) @map("created_at")

  @@map("orders")
}

Query Comparison: Mongoose vs Prisma

// Mongoose: Find user with orders
const user = await User.findById(id).populate('orders');

// Prisma: Find user with orders
const user = await prisma.user.findUnique({
  where: { id },
  include: { orders: true }
});

// Mongoose: Complex query
const users = await User.find({
  'metadata.subscription': 'premium',
  createdAt: { $gte: startDate }
}).sort({ createdAt: -1 }).limit(10);

// Prisma: Complex query with JSONB
const users = await prisma.user.findMany({
  where: {
    metadata: {
      path: ['subscription'],
      equals: 'premium'
    },
    createdAt: { gte: startDate }
  },
  orderBy: { createdAt: 'desc' },
  take: 10
});

TypeORM Alternative

// TypeORM Entity
import { Entity, PrimaryColumn, Column, CreateDateColumn, OneToMany } from 'typeorm';

@Entity('users')
export class User {
  @PrimaryColumn()
  id: string;

  @Column()
  name: string;

  @Column({ unique: true })
  email: string;

  @Column({ type: 'jsonb', default: {} })
  metadata: Record<string, any>;

  @CreateDateColumn({ name: 'created_at' })
  createdAt: Date;

  @OneToMany(() => Order, order => order.user)
  orders: Order[];
}

// Query
const users = await userRepository.find({
  where: {
    metadata: Raw(alias => `${alias}->>'subscription' = 'premium'`)
  },
  order: { createdAt: 'DESC' },
  take: 10
});

Index Migration

MongoDB indexes need equivalent PostgreSQL indexes:

// MongoDB indexes
db.users.createIndex({ email: 1 }, { unique: true })
db.users.createIndex({ "metadata.subscription": 1 })
db.users.createIndex({ createdAt: -1 })
db.users.createIndex({ name: "text", bio: "text" })

-- PostgreSQL equivalents
CREATE UNIQUE INDEX idx_users_email ON users (email);
CREATE INDEX idx_users_subscription ON users ((metadata->>'subscription'));
CREATE INDEX idx_users_created_at ON users (created_at DESC);

-- Full-text search index
CREATE INDEX idx_users_search ON users
USING GIN (to_tsvector('english', name || ' ' || COALESCE(bio, '')));

-- GIN index for JSONB containment and key-existence queries
CREATE INDEX idx_users_metadata ON users USING GIN (metadata);
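
Once the indexes exist, confirm the planner actually uses them; a quick sketch with the subscription filter from above:

-- Expect an Index Scan or Bitmap Index Scan rather than a Seq Scan
EXPLAIN ANALYZE
SELECT * FROM users WHERE metadata->>'subscription' = 'premium';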

Performance Comparison

After migration, you may see performance changes:

Operation         | MongoDB        | PostgreSQL
Simple lookups    | Similar        | Similar
JOINs / relations | Slow ($lookup) | Fast (native JOINs)
Aggregations      | Good           | Excellent (query planner)
Full-text search  | Good           | Excellent
JSONB queries     | Native         | Good (with GIN index)
Bulk inserts      | Fast           | Very fast (COPY)

Cost Comparison: Atlas vs DanubeData

Specification  | MongoDB Atlas       | DanubeData PostgreSQL
Shared (512MB) | Free tier (limited) | €19.99/mo (full features)
M10 / Small    | ~$57/mo             | €19.99/mo
M20 / Medium   | ~$150/mo            | €39.99/mo
M30 / Large    | ~$340/mo            | €79.99/mo
Data transfer  | $0.015-0.09/GB      | Included
Backups        | Extra cost          | Included

Potential savings: 50-75% depending on tier.

Migration Checklist

Planning
  • Audit MongoDB collections and schemas
  • Design the PostgreSQL schema
  • Identify JSONB vs. normalized fields
  • Map MongoDB queries to SQL

Preparation
  • Create a DanubeData PostgreSQL instance
  • Write migration scripts
  • Update application code (ORM switch)

Testing
  • Test the migration on staging
  • Verify data integrity
  • Run performance tests

Migration
  • Export MongoDB data
  • Run migration scripts
  • Deploy the updated application
  • Verify in production

Cleanup
  • Monitor for issues (1-2 weeks)
  • Terminate the MongoDB Atlas cluster

Ready to Migrate?

Migrating from MongoDB to PostgreSQL is a significant undertaking, but the benefits are substantial:

  • 50-75% cost savings compared to MongoDB Atlas
  • True relational power with JSONB flexibility
  • ACID transactions without complexity
  • SQL—the most widely known query language
  • Mature ecosystem with decades of tooling
  • European hosting for GDPR compliance

👉 Create your free DanubeData account

Start with a managed PostgreSQL instance and test your migration. Our team is available to help with schema design and query optimization.

Need help planning your MongoDB to PostgreSQL migration? Contact our team for a free consultation.
