
MongoDB Atlas Alternatives in Europe: GDPR-Compliant Options (2026)

Adrian Silaghi
April 20, 2026
14 min read
#mongodb #mongodb-atlas #atlas-alternatives #postgresql #jsonb #europe #gdpr #database

MongoDB Atlas is where most teams land when they want a managed document database. It is slick, fast to provision, has a generous free tier, and MongoDB Inc. has invested heavily in features like Atlas Search, Atlas Vector Search, and Stream Processing to make it feel like a complete data platform.

But by 2026, European engineering teams are increasingly questioning whether Atlas is the right home for their data. MongoDB Inc. is a US-based, Nasdaq-listed company (MDB) — which means Atlas, even when hosted in Frankfurt or Dublin, falls squarely within scope of the US CLOUD Act. Atlas pricing escalates aggressively past the free tier. The SSPL license that MongoDB adopted in 2018 creates downstream uncertainty for anyone embedding or redistributing the database. And dedicated clusters, which you need for anything resembling production, start at around $57/month per cluster and rise fast from there.

This guide lays out the European alternatives in 2026 — both for teams that genuinely need MongoDB and for the majority of teams who would be better served migrating to PostgreSQL with JSONB. Spoiler: most document-database workloads do not actually need MongoDB, and Postgres with JSONB plus GIN indexes covers 80% of use cases at 10–30% of the cost, with full GDPR sovereignty when hosted on a European provider like DanubeData in Falkenstein, Germany.

What MongoDB Atlas Actually Offers in 2026

Atlas has grown well beyond "managed MongoDB." The 2026 feature set includes:

  • Fully managed MongoDB clusters — M0 (free shared), M2/M5 (shared), M10 through M700+ (dedicated), with automated patching, backups, and monitoring
  • Atlas Search — Lucene-based full-text search embedded into the cluster, no separate Elasticsearch required
  • Atlas Vector Search — HNSW vector indexes for RAG, semantic search, recommendation engines
  • Atlas Stream Processing — continuous queries over Kafka or Atlas change streams
  • Atlas Data Federation — query across S3, BigQuery, and Atlas clusters with one MQL query
  • Atlas Data Lake — Parquet export for analytics
  • Atlas App Services (formerly Realm) — serverless functions, sync, authentication
  • Charts — embedded dashboards
  • Queryable Encryption — client-side field-level encryption where Atlas never sees plaintext

Atlas runs on AWS, GCP, and Azure, with EU regions in Frankfurt (AWS eu-central-1, GCP europe-west3, Azure Germany West Central), Dublin (AWS eu-west-1), Paris (GCP europe-west9), Amsterdam (GCP europe-west4), Milan, Stockholm, Zurich, and others. You can pin clusters to specific regions and enforce "no data leaves the EU" via network isolation and Private Endpoints.

Why European Teams Are Looking Elsewhere

1. The CLOUD Act Is Real

MongoDB Inc. is headquartered in New York City and trades on Nasdaq under MDB. Even when your Atlas cluster physically sits in a Frankfurt data center owned by AWS, the US CLOUD Act (2018) authorises US authorities to compel MongoDB to hand over customer data regardless of where it is stored. The 2023 EU-US Data Privacy Framework partially restored transatlantic transfer legitimacy, but the underlying CLOUD Act extraterritoriality was not touched. For healthcare, legal, defence, or public-sector workloads — or anyone who simply wants to avoid this entire class of risk — Atlas fails the sovereignty test by construction.

2. Pricing Escalates Aggressively

Atlas looks cheap until you need production. The M0 free tier gets you 512MB and is not intended for real traffic. The M2 and M5 shared tiers are priced per hour and capped at 2GB/5GB storage. The moment you need anything production-grade, you jump to dedicated clusters:

| Atlas Tier | RAM / vCPU / Storage | Monthly (EU, replica set) | Notes |
|------------|----------------------|---------------------------|-------|
| M0 Free | 512MB / shared / 512MB | $0 | Dev only, shared tenant |
| M10 | 2GB / 2 vCPU / 10GB | ~$57/mo | Minimum for production workloads |
| M20 | 4GB / 2 vCPU / 20GB | ~$145/mo | Small production app |
| M30 | 8GB / 2 vCPU / 40GB | ~$280/mo | Typical starting point most teams actually need |
| M40 | 16GB / 4 vCPU / 80GB | ~$560/mo | Mid-sized apps |
| M50 | 32GB / 8 vCPU / 160GB | ~$950/mo | Production SaaS workloads |
| M80+ | 128GB+ / 32 vCPU+ | $3,000+/mo | Everything big |

Add Atlas Search ($0.10–$0.30/hr per search node), Vector Search, Online Archive, backup retention tiers, and egress, and a real production deployment lands comfortably in the $1,000–$5,000/month range. At those numbers, the honest question is: what am I actually getting that I could not get for one-tenth the price elsewhere?
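To make that concrete, here is a back-of-envelope estimate for a mid-sized deployment using the tier table above. The search-node rate and backup/egress figures are illustrative assumptions, not quotes — plug in your own numbers:

```python
HOURS_PER_MONTH = 730  # average hours per month

# M50 cluster price from the tier table above; search-node rate is an
# assumed mid-range value within the quoted $0.10-$0.30/hr band.
m50_cluster = 950.00
search_nodes = 2 * 0.15 * HOURS_PER_MONTH   # 2 dedicated search nodes
backups_and_egress = 150.00                 # assumed, varies widely

total = m50_cluster + search_nodes + backups_and_egress
print(f"${total:,.0f}/month")  # -> $1,319/month
```

Even with conservative assumptions, a single production cluster with search enabled clears $1,000/month before you add staging environments.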

3. SSPL License Concerns

MongoDB is no longer open source. In October 2018 MongoDB Inc. relicensed the server from AGPL to SSPL (Server Side Public License), which the OSI explicitly declined to recognise as open source. SSPL requires that anyone offering MongoDB "as a service" must release the full source of all service-management and infrastructure code under SSPL. For most end users this does not matter, but for European cloud vendors, ISVs, or regulated organisations with procurement rules that require OSI-approved licenses, SSPL is a blocker. It also means the MongoDB you get from Atlas is not the same open-source ecosystem you thought you were buying into.

4. Lock-In by Design

Atlas features are mostly Atlas-only. Atlas Search indexes, Stream Processing pipelines, Data Federation queries, Atlas Functions, Realm Sync — none of these ship with MongoDB Community Edition. The more you adopt Atlas-specific features, the harder it becomes to leave. That is good business for MongoDB Inc. and bad news for your optionality.

Alternatives Map: Two Paths

There are two fundamentally different ways to move off Atlas:

Path A: Keep MongoDB (or a Mongo-wire-compatible database). You still want the document model, MQL queries, and Mongo drivers. You just want different hosting and governance.

Path B: Migrate to PostgreSQL with JSONB. You accept a one-time schema migration in exchange for a battle-tested, OSI-licensed, SQL-native database with document capabilities, full-text search, vector search, and 70–90% cost reduction.

Most teams should honestly evaluate Path B first. PostgreSQL's JSONB column type plus GIN indexes gives you schema-flexible documents with indexed queries, and the surrounding ecosystem (pg_trgm for fuzzy search, pgvector for embeddings, logical replication, extensions for everything) is larger and better-maintained than anything in the Mongo ecosystem.

Path A: MongoDB-Compatible Alternatives in Europe

Self-Host MongoDB Community on a VPS

MongoDB Community Edition (SSPL-licensed) runs perfectly on a Linux VPS. For a three-node replica set on DanubeData VPS instances in Falkenstein, the math is simple:

  • 3 × Standard VPS (4GB RAM, 2 vCPU, 80GB NVMe) at €8.99/mo = €26.97/mo
  • Private network between nodes, replica set with automatic failover
  • No vendor lock-in, no egress fees up to 20TB/month
  • You handle upgrades, backups, monitoring yourself

This is comparable in capacity to an Atlas M20 ($145/mo) at roughly one-fifth the cost — but you are trading away convenience in exchange for savings and sovereignty. Fine for teams with a platform engineer; painful for teams without.

FerretDB on Managed PostgreSQL

FerretDB is the most interesting alternative of 2026. It is an Apache 2.0 licensed, open-source proxy that speaks the MongoDB wire protocol but stores data in PostgreSQL (or SQLite). Your existing Mongo drivers, MQL queries, and tooling work unchanged — FerretDB translates MQL into SQL against JSONB on the fly.

Run FerretDB as a small container in front of DanubeData Managed PostgreSQL and you get:

  • Mongo-wire compatibility for existing codebases
  • Real PostgreSQL underneath (MVCC, WAL, PITR, battle-tested at scale)
  • Apache 2.0 license end-to-end — no SSPL concerns
  • Managed Postgres from €19.99/mo instead of $57+/mo per Atlas cluster
  • Full EU sovereignty (DanubeData is a Romanian company operating in Germany, no US parent)

FerretDB does not implement 100% of MQL. Aggregation pipeline coverage has improved significantly but some advanced stages, change streams, and transactions are partial or missing depending on version. Always check the FerretDB compatibility matrix for your workload.

Percona Server for MongoDB

Percona Server for MongoDB is an open-source drop-in replacement for MongoDB Community Edition, distributed under the SSPL but with additional enterprise-grade features backported from Percona's engineering (hot backups, LDAP authentication, audit logging, encryption-at-rest). It runs anywhere — including self-hosted on DanubeData VPS instances. Use it when you want Mongo itself plus enterprise features without paying for MongoDB Enterprise Advanced.

European Managed Mongo: OVHcloud, Scaleway

If you want managed MongoDB from a European-headquartered provider:

  • OVHcloud Managed MongoDB (France) — EU-headquartered, available in Gravelines, Strasbourg, Frankfurt. Falls under French law, not US CLOUD Act. Pricing starts around €23/mo for a small single-node development cluster, production replica sets start higher.
  • Scaleway Managed Document Database (France) — Scaleway-hosted document DB, Paris and Amsterdam regions, EU-governed.

These are genuinely EU-sovereign managed Mongo options and worth evaluating if you need the full Mongo surface area plus managed operations.

Path B: Migrate to Managed PostgreSQL with JSONB

Here is the uncomfortable truth about most MongoDB deployments: the vast majority of them are using Mongo as "a database with flexible schema and JSON queries" — and PostgreSQL has supported exactly that since JSONB was introduced in version 9.4 (2014). Twelve years later, PostgreSQL's JSONB implementation is faster, better-indexed, better-integrated, and sits inside a transactional SQL database with a vastly larger ecosystem.

Why Postgres JSONB Covers ~80% of MongoDB Use Cases

  • Schema-flexible documents: JSONB columns accept arbitrary JSON and are stored in a decomposed binary format optimised for query
  • Indexed queries on nested fields: GIN indexes with jsonb_path_ops support containment (@>), existence (?), and key/value queries
  • Partial indexes: Index only documents matching a predicate — more flexible than MongoDB's partialFilterExpression, which restricts the filter to a small subset of operators
  • Full-text search: pg_trgm for fuzzy/similarity, tsvector for ranked full-text — a viable Atlas Search replacement
  • Vector search: pgvector extension gives you HNSW and IVFFlat indexes for RAG and semantic search — a direct Atlas Vector Search alternative
  • Change streams: LISTEN/NOTIFY plus logical replication slots + wal2json cover most change-stream use cases
  • Transactions: Full ACID across any number of tables and rows, which Mongo only got in 4.0 and still has sharp edges in sharded clusters
  • Joins: Actual joins. Which Mongo famously does not have beyond $lookup.
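The containment operator @> mentioned above does most of the heavy lifting in JSONB queries, and its semantics are easy to internalise. Here is a minimal pure-Python sketch of the matching rule — an illustration of the semantics only, not how Postgres implements it:

```python
def jsonb_contains(doc, pattern):
    """Approximate Postgres jsonb @> containment semantics:
    - objects: every key/value pair in pattern must be contained in doc
    - arrays: every element of pattern must be contained in some element of doc
    - scalars: plain equality
    (Postgres's top-level "array contains scalar" special case is omitted.)"""
    if isinstance(pattern, dict):
        return (isinstance(doc, dict) and
                all(k in doc and jsonb_contains(doc[k], v)
                    for k, v in pattern.items()))
    if isinstance(pattern, list):
        return (isinstance(doc, list) and
                all(any(jsonb_contains(d, p) for d in doc) for p in pattern))
    return doc == pattern

user = {"profile": {"locale": "en-GB", "tags": ["admin", "beta"]}}
jsonb_contains(user, {"profile": {"tags": ["beta"]}})  # True
```

This is exactly the shape of query a GIN index with jsonb_path_ops accelerates, which is why containment is usually the first operator to reach for when porting Mongo filters.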

Head-to-Head Comparison

| Criterion | Atlas M30 | DanubeData Managed Postgres | FerretDB on DanubeData | Self-hosted Mongo on VPS |
|-----------|-----------|-----------------------------|------------------------|--------------------------|
| Typical monthly cost | ~$280 | €19.99–€79.99 | €19.99 + small container | €26.97 (3×€8.99) |
| EU data residency | Yes (but US parent) | Yes, Falkenstein DE | Yes, Falkenstein DE | Yes, Falkenstein DE |
| CLOUD Act exposure | Yes (MDB Nasdaq-listed) | No | No | No |
| License | SSPL | PostgreSQL License (OSI) | Apache 2.0 + Postgres | SSPL |
| Document model | Native BSON | JSONB | JSONB via MQL | Native BSON |
| Full-text search | Atlas Search (Lucene) | pg_trgm + tsvector | Via Postgres directly | Text indexes only |
| Vector search | Atlas Vector Search | pgvector (HNSW/IVFFlat) | Via Postgres directly | Not in Community |
| Change streams | Native | LISTEN/NOTIFY, wal2json | Partial in FerretDB | Native |
| Managed backups/PITR | Yes | Yes | Yes (underlying PG) | DIY |
| Operational burden | Low | Low | Low–Medium | High |
| Replicas | Replica set built-in | Read replicas incl. | Read replicas incl. | Manual replica set |

Schema Migration: MongoDB Collection to Postgres JSONB

Let's walk through a concrete migration. Say you have a MongoDB collection of users:

// MongoDB: users collection
{
  "_id": ObjectId("..."),
  "email": "ada@example.com",
  "name": "Ada Lovelace",
  "profile": {
    "locale": "en-GB",
    "timezone": "Europe/London",
    "tags": ["admin", "beta"]
  },
  "addresses": [
    { "type": "home", "city": "London", "country": "UK" },
    { "type": "work", "city": "Cambridge", "country": "UK" }
  ],
  "created_at": ISODate("2024-06-01T09:00:00Z")
}

In Postgres, the simplest translation keeps the document as a single JSONB column, with a few first-class columns extracted for indexing and foreign key integrity:

CREATE TABLE users (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email       TEXT UNIQUE NOT NULL,
    doc         JSONB NOT NULL,
    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- GIN index supporting @> containment queries across the whole document
CREATE INDEX idx_users_doc ON users USING GIN (doc jsonb_path_ops);

-- Targeted B-tree index on an extracted field for equality lookups
CREATE INDEX idx_users_locale ON users ((doc->'profile'->>'locale'));

-- Partial index — only active beta users
CREATE INDEX idx_users_beta ON users ((doc->'profile'->>'locale'))
  WHERE doc @> '{"profile": {"tags": ["beta"]}}';

For high-cardinality fields like email, promote them to real columns with proper constraints. Keep everything else in the JSONB document. This hybrid model — first-class columns for the stable, queried fields and JSONB for the flexible rest — is usually the sweet spot.

Query Translations

Here is a cheat sheet for translating the most common MQL operations into SQL against JSONB:

-- Mongo:  db.users.find({ email: "ada@example.com" })
SELECT * FROM users WHERE email = 'ada@example.com';

-- Mongo:  db.users.find({ "profile.locale": "en-GB" })
SELECT * FROM users WHERE doc->'profile'->>'locale' = 'en-GB';

-- Mongo:  db.users.find({ "profile.tags": "beta" })
-- (tag exists anywhere in the array)
SELECT * FROM users WHERE doc @> '{"profile": {"tags": ["beta"]}}';

-- Mongo:  db.users.find({ "addresses.city": "London" })
SELECT * FROM users
WHERE EXISTS (
  SELECT 1 FROM jsonb_array_elements(doc->'addresses') a
  WHERE a->>'city' = 'London'
);

-- Mongo:  db.users.find({ created_at: { $gt: ISODate("2025-01-01") } })
SELECT * FROM users WHERE created_at > '2025-01-01'::timestamptz;

-- Mongo:  db.users.aggregate([
--   { $match: { "profile.locale": "en-GB" } },
--   { $project: { email: 1, city: { $first: "$addresses.city" } } }
-- ])
SELECT
  email,
  doc->'addresses'->0->>'city' AS city
FROM users
WHERE doc->'profile'->>'locale' = 'en-GB';

-- Mongo:  $lookup across two collections — joins are native in SQL
SELECT u.email, o.doc->>'total' AS order_total
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.doc->'profile'->>'locale' = 'en-GB';

-- Mongo:  aggregation pipeline with $group
-- Postgres: CTE + GROUP BY
WITH active AS (
  SELECT id, doc->'profile'->>'locale' AS locale
  FROM users
  WHERE doc @> '{"profile": {"tags": ["beta"]}}'
)
SELECT locale, COUNT(*) AS n
FROM active
GROUP BY locale
ORDER BY n DESC;
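Translations like these are mechanical for the simplest class of filters, which is why teams often script the first pass. A toy sketch of such a translator — equality on dotted paths plus array-membership containment only; the function name and scope are illustrative, and anything fancier needs hand review:

```python
import json

def mql_to_sql(table, filt):
    """Translate a flat MQL equality/containment filter into SQL over a
    JSONB `doc` column. Toy sketch: dotted-path equality and array
    membership only -- real migrations need per-query review."""
    clauses, params = [], []
    for key, value in filt.items():
        path = key.split(".")
        if isinstance(value, list):
            # {"profile.tags": ["beta"]} -> @> containment query
            nested = value
            for part in reversed(path):
                nested = {part: nested}
            clauses.append("doc @> %s")
            params.append(json.dumps(nested))
        else:
            # {"profile.locale": "en-GB"} -> ->/->> path extraction
            expr = "doc" + "".join(f"->'{p}'" for p in path[:-1]) + f"->>'{path[-1]}'"
            clauses.append(f"{expr} = %s")
            params.append(value)
    where = " AND ".join(clauses) or "TRUE"
    return f"SELECT * FROM {table} WHERE {where}", params

sql, params = mql_to_sql("users", {"profile.locale": "en-GB"})
# sql -> "SELECT * FROM users WHERE doc->'profile'->>'locale' = %s"
```

In practice you run a translator like this over your query log, take the 80% it handles, and rewrite the remaining aggregations by hand.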

Full-Text Search with pg_trgm

For the "search by partial name or fuzzy match" workload that teams often reach for Atlas Search for, pg_trgm is a single extension away:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX idx_users_name_trgm
  ON users USING GIN ((doc->>'name') gin_trgm_ops);

-- Fuzzy search: users whose name is similar to "Ada Love"
-- (% is pg_trgm's similarity operator; write %% if your client
--  library treats % as a placeholder)
SELECT id, doc->>'name' AS name, similarity(doc->>'name', 'Ada Love') AS sim
FROM users
WHERE doc->>'name' % 'Ada Love'
ORDER BY sim DESC
LIMIT 10;

For ranked document-style full-text search with stemming and stop words, tsvector + GIN gives you something very close to what Atlas Search provides:

ALTER TABLE users ADD COLUMN search_tsv tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(doc->>'name','') || ' ' || coalesce(doc->>'email',''))
  ) STORED;

CREATE INDEX idx_users_tsv ON users USING GIN (search_tsv);

SELECT id, doc->>'name' AS name,
       ts_rank(search_tsv, plainto_tsquery('english', 'lovelace')) AS rank
FROM users
WHERE search_tsv @@ plainto_tsquery('english', 'lovelace')
ORDER BY rank DESC
LIMIT 20;

Vector Search with pgvector

The pgvector extension gives you HNSW and IVFFlat indexes — the same algorithms Atlas Vector Search is built on — with a simpler operational model:

CREATE EXTENSION IF NOT EXISTS vector;

ALTER TABLE users ADD COLUMN embedding vector(1536);

CREATE INDEX idx_users_embedding
  ON users USING hnsw (embedding vector_cosine_ops);

-- Nearest-neighbour search: 10 users closest to query embedding
SELECT id, doc->>'name'
FROM users
ORDER BY embedding <=> $1
LIMIT 10;

Migration Tooling

You have three realistic options to move data out of Atlas into Postgres:

1. mongoexport + Custom Ingestion Script

# Export a collection as JSONL
mongoexport \
  --uri="mongodb+srv://user:pass@cluster0.xxxxx.mongodb.net/mydb" \
  --collection=users \
  --out=users.jsonl

# Ingest into Postgres with a small Python script
python -c "
import json, psycopg
conn = psycopg.connect('postgres://...')
with conn.cursor() as cur, open('users.jsonl') as f:
    for line in f:
        d = json.loads(line)
        cur.execute(
            'INSERT INTO users (email, doc) VALUES (%s, %s)',
            (d.get('email'), json.dumps(d))
        )
conn.commit()"

This is the most flexible approach — you get full control over type coercion, extracted columns, and deduplication.
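One wrinkle the script above glosses over: mongoexport emits MongoDB Extended JSON, so BSON types like ObjectId and dates arrive as wrapper objects ({"$oid": ...}, {"$date": ...}) rather than plain scalars. A sketch of a normaliser covering the two most common wrappers — extend it for $numberLong, $binary, and friends as your data requires:

```python
import json

def normalize_extjson(value):
    """Flatten MongoDB Extended JSON wrappers into plain scalars before
    inserting into JSONB. Covers only the two most common wrappers;
    extend for $numberLong, $binary, etc. as needed."""
    if isinstance(value, dict):
        if set(value) == {"$oid"}:
            return value["$oid"]
        if set(value) == {"$date"}:
            d = value["$date"]
            # $date may itself be {"$numberLong": "..."} in canonical mode
            return d if isinstance(d, str) else d.get("$numberLong", d)
        return {k: normalize_extjson(v) for k, v in value.items()}
    if isinstance(value, list):
        return [normalize_extjson(v) for v in value]
    return value

raw = json.loads('{"_id": {"$oid": "665a1"}, "created_at": {"$date": "2024-06-01T09:00:00Z"}}')
normalize_extjson(raw)
# -> {"_id": "665a1", "created_at": "2024-06-01T09:00:00Z"}
```

Run each document through a normaliser like this before json.dumps in the ingestion loop, or your JSONB queries will have to dig through the wrapper keys forever.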

2. MongoDB to PostgreSQL FDW

The mongo_fdw Foreign Data Wrapper lets Postgres query MongoDB directly over the Mongo wire protocol. Useful for one-shot bulk migrations — though note that connecting to Atlas specifically requires TLS, so check that your mongo_fdw build supports the connection options Atlas needs:

CREATE EXTENSION mongo_fdw;

CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw
  OPTIONS (address 'cluster0.xxxxx.mongodb.net', port '27017');

CREATE USER MAPPING FOR current_user SERVER mongo_server
  OPTIONS (username 'user', password 'pass');

CREATE FOREIGN TABLE mongo_users (
  _id NAME,
  doc JSONB
) SERVER mongo_server OPTIONS (database 'mydb', collection 'users');

-- Bulk copy
INSERT INTO users (doc)
SELECT doc FROM mongo_users;

3. pgloader

pgloader has experimental MongoDB support but is more mature for MySQL/SQLite source databases. For MongoDB, the FDW or custom script routes are usually more reliable.

When You Genuinely Still Need MongoDB

Postgres JSONB covers most, but not all. Keep Mongo (or Mongo-compatible) if you depend on:

  • Massive horizontal sharding at write-heavy scale where Mongo's auto-sharding genuinely simplifies operations versus Postgres with Citus
  • Atlas Search features you rely on: highlighting with custom analyzers, faceted search, or synonym mappings configured through the Atlas-specific search DSL
  • Change streams with watch() semantics that your application architecture deeply depends on (though LISTEN/NOTIFY + logical replication often suffices)
  • Mongo drivers baked into your stack for legacy reasons and you cannot afford the migration — this is where FerretDB shines
  • Realm Sync / Atlas App Services for mobile offline sync — no direct Postgres equivalent (ElectricSQL and PowerSync are the emerging Postgres-native alternatives)

For everything else — and in our experience "everything else" means 80% of production workloads — moving to Postgres JSONB is the rational choice.

Recommended Path for European Teams in 2026

  1. Audit your actual MongoDB usage. How many of your queries are simple find/update by indexed field? How much of your data model is genuinely "wildly variable schema" vs "mostly consistent with a few optional fields"?
  2. Default to migrating to DanubeData Managed PostgreSQL with JSONB. Start at €19.99/mo, scale up as needed. You get full EU sovereignty, OSI-licensed software, pgvector for AI workloads, pg_trgm for fuzzy search, and a fraction of the Atlas bill.
  3. If the migration surface is too large right now, put FerretDB in front of Managed Postgres. Your Mongo drivers keep working, you get off Atlas immediately, and you can refactor to native SQL on your own timeline.
  4. Reserve self-hosted Mongo or OVHcloud/Scaleway managed Mongo for cases where you genuinely need Mongo-specific features and the SSPL/operational trade-offs are acceptable.

FAQ

Is Postgres JSONB really a MongoDB replacement?

For most use cases, yes. JSONB has been production-grade since 2014, supports GIN indexing for containment and key queries, and combined with partial indexes on extracted fields it routinely outperforms MongoDB on equivalent hardware for OLTP workloads. The EnterpriseDB benchmarks and repeated independent studies over the years have shown Postgres matching or beating Mongo on mixed workloads once properly indexed.

How does performance compare in practice?

On point queries by indexed field, both databases are extremely fast and the bottleneck is usually IO, not the engine. On aggregation-heavy workloads, Postgres typically wins because its query planner is far more mature than Mongo's aggregation engine. On write-heavy sharded workloads beyond a single node's capacity, MongoDB's auto-sharding is operationally simpler — though Citus (now owned by Microsoft but still open source) brings Postgres up to parity for distributed workloads. For most SaaS applications below 1TB of hot data, you will not be able to tell the difference in latency, and Postgres will cost a fraction of the price.

What replaces MongoDB change streams?

Two options, depending on use case. For application-level notifications ("this row changed, wake up a worker"), use Postgres LISTEN/NOTIFY with triggers that NOTIFY on INSERT/UPDATE/DELETE. For a durable log of every change (Kafka-style event sourcing), use logical replication with wal2json or Debezium. The latter is what most production Postgres CDC pipelines look like in 2026 and is strictly more powerful than Mongo change streams because you get full WAL-level fidelity.

What replaces Atlas Search?

For most workloads, pg_trgm (fuzzy/similarity search) and tsvector + GIN (ranked full-text with stemming) are sufficient and live inside the same database as your data — no separate cluster to manage. For very large-scale search or features like faceting and advanced analyzers, pair Postgres with Typesense, Meilisearch, or OpenSearch — all of which have EU-hostable, Apache-licensed open-source versions and integrate cleanly with Postgres via CDC.

Does Postgres support schema validation?

Yes, in several ways. You can add CHECK constraints that validate JSONB documents against a JSON Schema using the pg_jsonschema extension, or keep validation at the application layer with Zod, Pydantic, or whatever your stack uses. Postgres also lets you promote fields to real columns with NOT NULL constraints — which is stricter than MongoDB's optional schema validation and catches bugs at insert time.
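Whichever layer you choose, the shape of the check is the same. A minimal stdlib-only sketch of application-layer validation, standing in for Zod or Pydantic — the field rules here are illustrative:

```python
def validate_user(doc):
    """Minimal stand-in for application-layer schema validation
    (Zod/Pydantic in practice): reject documents missing required
    fields or with wrong types before they reach Postgres."""
    errors = []
    if not isinstance(doc.get("email"), str) or "@" not in doc.get("email", ""):
        errors.append("email must be a string containing '@'")
    profile = doc.get("profile")
    if not isinstance(profile, dict):
        errors.append("profile must be an object")
    elif not isinstance(profile.get("tags", []), list):
        errors.append("profile.tags must be an array")
    return errors

validate_user({"email": "ada@example.com", "profile": {"tags": ["beta"]}})  # -> []
```

The database-side equivalents (CHECK constraints, pg_jsonschema) catch anything that slips past the application, which is a stronger guarantee than MongoDB's opt-in validator.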

Is DanubeData GDPR-compliant?

Yes. DanubeData operates entirely out of Hetzner's Falkenstein, Germany data centres — GDPR's home turf. We are a Romanian company with no US parent, no US subsidiary, and no exposure to the CLOUD Act or FISA 702. Every managed Postgres instance is physically and legally inside the EU, with signed DPA available on request.

What about backup and point-in-time recovery?

DanubeData Managed PostgreSQL includes automated daily snapshots using TopoLVM VolumeSnapshots (fast LVM-level), WAL archival for point-in-time recovery, and optional offsite Velero backups to EU object storage. Retention and PITR windows are configurable per instance. All backups remain in the EU.

How hard is the migration really?

For a typical SaaS app with a handful of collections, a migration including schema design, query rewriting, and testing is usually 1–4 engineering weeks. The biggest time sink is updating application code from Mongo drivers to a SQL library like Prisma, Drizzle, or sqlc — but once done, your codebase becomes dramatically easier to reason about. Many teams report that the migration was less painful than they feared, and the resulting system is noticeably more debuggable.

Next Steps

Ready to move off Atlas?

  • Evaluate DanubeData Managed PostgreSQL — €19.99/mo, Postgres 15/16/17, JSONB, pgvector, pg_trgm, read replicas, automated backups, €50 signup credit. Create a managed Postgres instance →
  • Need Mongo-wire compatibility right now? Run FerretDB in front of DanubeData Managed Postgres — your drivers keep working while you migrate on your own timeline.
  • Want to self-host Mongo? Spin up three €8.99/mo VPS instances in Falkenstein and run a replica set for €26.97/mo total. Create a VPS →

Whichever path you take, the 2026 answer for European teams is the same: you do not need to pay Atlas prices for data sovereignty. The European ecosystem is more than ready, and for most workloads Postgres JSONB is a genuine upgrade — not a compromise.

Questions about migrating from Atlas? Get in touch — we have helped teams cut their database bills by 80%+ while moving fully inside the EU.
