
RabbitMQ Performance Benchmarks: 120K+ Messages Per Second on DanubeData

DanubeData Engineering
March 29, 2026
8 min read
#rabbitmq #benchmarks #performance #message-queue #amqp #throughput

When choosing a managed message queue, performance claims are everywhere — but the methodology behind them is rarely transparent. We decided to benchmark our managed RabbitMQ service using the exact same tool that CloudAMQP, VMware, and the RabbitMQ team use: rabbitmq-perf-test.

The results exceeded our expectations. Here is the full data, methodology, and what the numbers actually mean for your production workloads.

TL;DR — The Numbers

| Mode | Publish | Consume | Confirm Latency | Nacked |
|---|---|---|---|---|
| Auto-ack, no confirms | 123,296 msg/s | 123,296 msg/s | n/a | n/a |
| Auto-ack + confirm=5000 | 108,260 msg/s | 108,260 msg/s | 42 ms | 0 |
| Auto-ack + confirm=1000 | 18,564 msg/s | 18,564 msg/s | 52 ms | 0 |

All tests used 1 producer and 1 consumer on a single queue — the most honest benchmark configuration.

Why These Numbers Matter

Most managed RabbitMQ providers benchmark with auto-ack and no publisher confirms — a fire-and-forget configuration where the producer has no guarantee the broker received the message, and the consumer has no guarantee it processed it before acknowledgment.

Our 108,260 msg/s with publisher confirms means every single message was broker-acknowledged before the next batch was sent. This is the number that matters for production workloads where losing a message means losing a customer order, a payment event, or a critical notification.
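To make the confirm semantics concrete, here is a minimal Python sketch using the pika client (an illustration; the benchmarks in this post were run with PerfTest, not pika). Names like `publish_confirmed` and the `window` parameter are ours, not from the test setup:

```python
# Minimal sketch of publishing with publisher confirms using the `pika`
# client (illustrative; the post's numbers come from PerfTest, not pika).

def chunked(messages, window):
    """Group messages into confirm-window-sized batches."""
    batch = []
    for msg in messages:
        batch.append(msg)
        if len(batch) == window:
            yield batch
            batch = []
    if batch:
        yield batch

def publish_confirmed(channel, queue, messages, window=5000):
    # confirm_delivery() puts the channel in confirm mode. With pika's
    # BlockingConnection, each basic_publish then waits for the broker's
    # confirm, so this models the delivery guarantee; PerfTest's --confirm N
    # instead keeps up to N publishes in flight before waiting.
    channel.confirm_delivery()
    for batch in chunked(messages, window):
        for body in batch:
            channel.basic_publish(exchange="", routing_key=queue, body=body)
```

The key difference from fire-and-forget publishing is that a nacked or unroutable message surfaces as an error at the producer instead of silently disappearing.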

Test Methodology

Tool

We used the official RabbitMQ PerfTest Docker image — the Java-based benchmarking tool maintained by the RabbitMQ team:

docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:PASSWORD@amqp-instance.queues.danubedata.ro:5674' \
  --producers 1 \
  --consumers 1 \
  --autoack \
  --time 30

Infrastructure

  • Plan: Queue Micro (our smallest plan)
  • Server: AMD Zen4 dedicated server with NVMe storage
  • Location: Falkenstein, Germany (Hetzner DC)
  • RabbitMQ version: 4.0
  • Deployment: Kubernetes StatefulSet with persistent NVMe volume

Test Configurations

Each test ran for 30 seconds. We tested multiple confirm batch sizes to find the throughput vs. delivery guarantee tradeoff:

| Confirm Batch | Throughput | Confirm Latency (median) |
|---|---|---|
| No confirms | 123,296 msg/s | n/a |
| 10,000 | 111,554 msg/s | 73 ms |
| 5,000 (sweet spot) | 108,260 msg/s | 42 ms |
| 3,000 | 68,271 msg/s | 40 ms |
| 1,000 | 18,564 msg/s | 52 ms |
| 200 | 6,181 msg/s | 31 ms |

The sweet spot is confirm=5000: only about 12% slower than no-confirms, with sub-50ms confirm latency. At 10,000 you gain barely 3% more throughput while median confirm latency climbs from 42 ms to 73 ms.
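As a sanity check, the tradeoff falls straight out of the table's own numbers (pure arithmetic, no broker required):

```python
# Throughput penalty of each confirm window vs. the no-confirms baseline,
# using the figures from the table above.

results = {            # confirm window -> msg/s
    "none": 123_296,
    10_000: 111_554,
    5_000: 108_260,
    3_000: 68_271,
    1_000: 18_564,
    200: 6_181,
}

def penalty(window):
    """Fraction of throughput given up relative to no confirms."""
    return 1 - results[window] / results["none"]

print(f"confirm=5000:  {penalty(5_000):.0%} slower")   # ~12%
print(f"confirm=10000: {penalty(10_000):.0%} slower")  # ~10%
```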

Message Integrity Verification

Throughput numbers are meaningless if messages get dropped. We ran a separate integrity test using a custom Node.js script that:

  1. Generates 1,000,000 messages with unique IDs and MD5 checksums
  2. Publishes with publisher confirms (batched, guaranteed delivery)
  3. Consumes every message and verifies the checksum
  4. Checks for drops, duplicates, and corruption
=== RESULTS ===
  Sent:         1,000,000
  Consumed:     1,000,000
  Remaining:    0 (still in queue)
  Dropped:      0
  Duplicates:   0
  Corrupted:    0
  Unique IDs:   1,000,000
  Produce:      31,333 msg/s
  Consume:      52,599 msg/s

  ALL MESSAGES DELIVERED & VERIFIED
================

One million messages. Zero dropped. Zero duplicated. Zero corrupted.
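The verification logic itself is straightforward to sketch. Below, a `collections.deque` stands in for the broker so the checks run locally; the message count is scaled down, and the structure is our assumption rather than the post's actual Node.js script:

```python
# Local sketch of the integrity check: unique IDs, MD5 checksums, and
# drop/duplicate/corruption counting (the real test ran through RabbitMQ).
import hashlib
import json
import uuid
from collections import deque

N = 10_000            # scaled down from 1,000,000 for illustration
queue = deque()

# Steps 1-2: generate and "publish" messages with unique IDs and checksums.
for _ in range(N):
    msg_id = uuid.uuid4().hex
    queue.append(json.dumps({
        "id": msg_id,
        "md5": hashlib.md5(msg_id.encode()).hexdigest(),
    }))

# Steps 3-4: "consume" every message, verify checksums, count anomalies.
consumed, seen, corrupted = 0, set(), 0
while queue:
    msg = json.loads(queue.popleft())
    consumed += 1
    if hashlib.md5(msg["id"].encode()).hexdigest() != msg["md5"]:
        corrupted += 1
    seen.add(msg["id"])

dropped = N - len(seen)
duplicates = consumed - len(seen)
print(f"dropped={dropped} duplicates={duplicates} corrupted={corrupted}")
```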

How We Compare

CloudAMQP

CloudAMQP's "Loud Lion" plan (2x c3.4xlarge instances, 16 cores each) advertises 40,000 msg/s with 1 producer, 1 consumer, tested same-region on AWS with auto-ack and no publisher confirms.

Our Queue Micro plan achieves 123,296 msg/s under the same conditions — over 3x their headline number on a fraction of the hardware.

Amazon MQ for RabbitMQ

Amazon MQ does not publish specific msg/s numbers. Their documentation focuses on relative performance improvements between instance types (for example, "up to 50% higher throughput on M7g vs M5"). Their benchmark methodology uses 10 quorum queues with 1KB messages on a 3-node cluster — a very different configuration from single-queue benchmarks.

RabbitMQ Official

The RabbitMQ team's own benchmarks on an Intel NUC (8 cores, 32GB RAM) show 99,413 msg/s for AMQP 1.0 classic queues and 88,534 msg/s for AMQP 0.9.1. Our numbers are in the same ballpark — running on production infrastructure with real network overhead.

What Affects Your Real-World Throughput

These benchmarks use default 12-byte payloads. Your actual throughput depends on:

  • Message size: Larger payloads reduce msg/s but increase data throughput (MB/s)
  • Persistence: Durable queues with persistent messages add disk I/O (our NVMe storage minimizes this)
  • Acknowledgment mode: Publisher confirms and manual consumer acks add round-trip latency
  • Queue type: Quorum queues trade some throughput for stronger durability guarantees
  • Number of queues: RabbitMQ queues are single-threaded — multiple queues distribute load across CPU cores
  • Network location: Same-namespace traffic (VPS to queue) has sub-millisecond latency
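
The multi-queue point is easy to probe with PerfTest itself: its queue-pattern flags fan producers and consumers out across several queues. A sketch (hostname, credentials, and counts are placeholders to adjust for your instance):

```shell
# Benchmark 4 queues in parallel so the load spans multiple CPU cores;
# PerfTest distributes the producers and consumers across the queues.
docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:YOUR_PASSWORD@your-instance.queues.danubedata.ro:5672' \
  --producers 4 \
  --consumers 4 \
  --queue-pattern 'perf-test-%d' \
  --queue-pattern-from 1 \
  --queue-pattern-to 4 \
  --autoack \
  --time 30
```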

Running Your Own Benchmarks

You can reproduce these results on your own DanubeData queue instance:

# Pull the official PerfTest image
docker pull pivotalrabbitmq/perf-test:latest

# Basic throughput test (auto-ack, no confirms)
docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:YOUR_PASSWORD@your-instance.queues.danubedata.ro:5672' \
  --producers 1 \
  --consumers 1 \
  --autoack \
  --time 30

# With publisher confirms (guaranteed delivery)
docker run --rm pivotalrabbitmq/perf-test:latest \
  --uri 'amqp://admin:YOUR_PASSWORD@your-instance.queues.danubedata.ro:5672' \
  --producers 1 \
  --consumers 1 \
  --autoack \
  --confirm 5000 \
  --time 30

For the most accurate results, run the test from a DanubeData VPS in the same region — this eliminates internet latency from the equation.

Conclusion

Our managed RabbitMQ delivers 120,000+ messages per second on our smallest plan, with 108,000+ msg/s even with publisher confirms guaranteeing delivery. Across 1 million messages with integrity verification, we saw zero drops, zero duplicates, and zero corruption.

These are not theoretical numbers from a lab. They are real benchmarks on real production infrastructure, tested with the same official tool the RabbitMQ team uses. You can reproduce them yourself in under 2 minutes.

Try it yourself →
