When choosing a managed message queue, performance claims are everywhere — but the methodology behind them is rarely transparent. We decided to benchmark our managed RabbitMQ service using the exact same tool that CloudAMQP, VMware, and the RabbitMQ team use: rabbitmq-perf-test.
The results exceeded our expectations. Here is the full data, methodology, and what the numbers actually mean for your production workloads.
TL;DR — The Numbers
| Mode | Publish | Consume | Confirm Latency | Nacked |
|---|---|---|---|---|
| Auto-ack, no confirms | 123,296 msg/s | 123,296 msg/s | — | — |
| Auto-ack + confirm=5000 | 108,260 msg/s | 108,260 msg/s | 42 ms | 0 |
| Auto-ack + confirm=1000 | 18,564 msg/s | 18,564 msg/s | 52 ms | 0 |
All tests used 1 producer and 1 consumer on a single queue: the simplest and least forgiving benchmark configuration, with no parallelism to inflate the numbers.
Why These Numbers Matter
Most managed RabbitMQ providers benchmark with auto-ack and no publisher confirms: a fire-and-forget configuration in which the producer has no guarantee the broker ever received a message, and the broker marks each message delivered the instant it is pushed to the consumer, so a crashed consumer silently loses it.
Our 108,260 msg/s with publisher confirms means every single message was broker-acknowledged before the next batch was sent. This is the number that matters for production workloads where losing a message means losing a customer order, a payment event, or a critical notification.
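To make that batching pattern concrete, here is a minimal Node.js sketch of publishing with confirms in batches, the same discipline PerfTest's `--confirm` flag enforces. The `channel` object is assumed to follow amqplib's ConfirmChannel interface (`sendToQueue` plus `waitForConfirms`); the function and queue names are illustrative, not part of our benchmark harness.

```javascript
// Minimal sketch of batched publishing with publisher confirms.
// `channel` is assumed to expose amqplib's ConfirmChannel interface
// (sendToQueue + waitForConfirms); names here are illustrative.
async function publishWithConfirms(channel, queue, messages, batchSize = 5000) {
  let confirmed = 0;
  for (let i = 0; i < messages.length; i += batchSize) {
    const batch = messages.slice(i, i + batchSize);
    for (const msg of batch) {
      channel.sendToQueue(queue, Buffer.from(msg));
    }
    // Block until the broker has acknowledged every message in the batch;
    // waitForConfirms rejects if any message was nacked.
    await channel.waitForConfirms();
    confirmed += batch.length;
  }
  return confirmed;
}
```

The producer never runs more than one unconfirmed batch ahead of the broker, which is exactly the round trip the confirm-latency column in the table above is measuring.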
Test Methodology
Tool
We used the official RabbitMQ PerfTest Docker image — the Java-based benchmarking tool maintained by the RabbitMQ team:
```shell
docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:PASSWORD@amqp-instance.queues.danubedata.ro:5674' \
  --producers 1 \
  --consumers 1 \
  --autoack \
  --time 30
```
Infrastructure
- Plan: Queue Micro (our smallest plan)
- Server: AMD Zen4 dedicated server with NVMe storage
- Location: Falkenstein, Germany (Hetzner DC)
- RabbitMQ version: 4.0
- Deployment: Kubernetes StatefulSet with persistent NVMe volume
Test Configurations
Each test ran for 30 seconds. We tested multiple confirm batch sizes to find the throughput vs. delivery guarantee tradeoff:
| Confirm Batch | Throughput | Confirm Latency (median) |
|---|---|---|
| No confirms | 123,296 msg/s | — |
| 10,000 | 111,554 msg/s | 73 ms |
| 5,000 (sweet spot) | 108,260 msg/s | 42 ms |
| 3,000 | 68,271 msg/s | 40 ms |
| 1,000 | 18,564 msg/s | 52 ms |
| 200 | 6,181 msg/s | 31 ms |
The sweet spot is confirm=5000: only about 12% slower than no confirms, with sub-50 ms median confirm latency. Going to 10,000 buys barely 3% more throughput while median confirm latency climbs from 42 ms to 73 ms.
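The sweet-spot arithmetic is easy to check against the table. A small helper (the name is illustrative) computes each configuration's throughput cost relative to the 123,296 msg/s no-confirms baseline:

```javascript
// Relative throughput cost of publisher confirms versus the no-confirms
// baseline from the table above (123,296 msg/s). Helper name is illustrative.
const BASELINE = 123296;

function confirmOverhead(rate) {
  return ((1 - rate / BASELINE) * 100).toFixed(1) + '%';
}

console.log(confirmOverhead(108260)); // confirm=5000  -> "12.2%"
console.log(confirmOverhead(111554)); // confirm=10000 -> "9.5%"
```

So stepping from 5,000 to 10,000 recovers under 3 percentage points of throughput, which is why we treat 5,000 as the practical ceiling.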
Message Integrity Verification
Throughput numbers are meaningless if messages get dropped. We ran a separate integrity test using a custom Node.js script that:
- Generates 1,000,000 messages with unique IDs and MD5 checksums
- Publishes with publisher confirms (batched, guaranteed delivery)
- Consumes every message and verifies the checksum
- Checks for drops, duplicates, and corruption
```
=== RESULTS ===
Sent:        1,000,000
Consumed:    1,000,000
Remaining:   0 (still in queue)
Dropped:     0
Duplicates:  0
Corrupted:   0
Unique IDs:  1,000,000
Produce:     31,333 msg/s
Consume:     52,599 msg/s
ALL MESSAGES DELIVERED & VERIFIED
================
```
One million messages. Zero dropped. Zero duplicated. Zero corrupted.
How We Compare
CloudAMQP
CloudAMQP's "Loud Lion" plan (2x c3.4xlarge instances, 16 cores each) advertises 40,000 msg/s with 1 producer, 1 consumer, tested same-region on AWS with auto-ack and no publisher confirms.
Our Queue Micro plan achieves 123,296 msg/s under the same conditions — over 3x their headline number on a fraction of the hardware.
Amazon MQ for RabbitMQ
Amazon MQ does not publish specific msg/s numbers. Their documentation focuses on relative performance improvements between instance types (for example, "up to 50% higher throughput on M7g vs M5"). Their benchmark methodology uses 10 quorum queues with 1KB messages on a 3-node cluster — a very different configuration from single-queue benchmarks.
RabbitMQ Official
The RabbitMQ team's own benchmarks on an Intel NUC (8 cores, 32GB RAM) show 99,413 msg/s for AMQP 1.0 classic queues and 88,534 msg/s for AMQP 0.9.1. Our numbers are in the same ballpark — running on production infrastructure with real network overhead.
What Affects Your Real-World Throughput
These benchmarks use default 12-byte payloads. Your actual throughput depends on:
- Message size: Larger payloads reduce msg/s but increase data throughput (MB/s)
- Persistence: Durable queues with persistent messages add disk I/O (our NVMe storage minimizes this)
- Acknowledgment mode: Publisher confirms and manual consumer acks add round-trip latency
- Queue type: Quorum queues trade some throughput for stronger durability guarantees
- Number of queues: RabbitMQ queues are single-threaded — multiple queues distribute load across CPU cores
- Network location: Same-namespace traffic (VPS to queue) has sub-millisecond latency
Running Your Own Benchmarks
You can reproduce these results on your own DanubeData queue instance:
```shell
# Pull the official PerfTest image
docker pull pivotalrabbitmq/perf-test:latest

# Basic throughput test (auto-ack, no confirms)
docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:YOUR_PASSWORD@your-instance.queues.danubedata.ro:5672' \
  --producers 1 \
  --consumers 1 \
  --autoack \
  --time 30

# With publisher confirms (guaranteed delivery)
docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:YOUR_PASSWORD@your-instance.queues.danubedata.ro:5672' \
  --producers 1 \
  --consumers 1 \
  --autoack \
  --confirm 5000 \
  --time 30
```
For the most accurate results, run the test from a DanubeData VPS in the same region — this eliminates internet latency from the equation.
Conclusion
Our managed RabbitMQ delivers 120,000+ messages per second on our smallest plan, with 108,000+ msg/s even with publisher confirms guaranteeing delivery. Across 1 million messages with integrity verification, we saw zero drops, zero duplicates, and zero corruption.
These are not theoretical numbers from a lab. They are real benchmarks on real production infrastructure, tested with the same official tool the RabbitMQ team uses. You can reproduce them yourself in under 2 minutes.