This blog was co-written by Howard Twine and Gregory Green.
A few years ago, a colleague of ours wrote an informative post to help readers understand when to use RabbitMQ and when to use Apache Kafka. While the two solutions take very different approaches architecturally and can solve different problems, many people find themselves comparing them for situations where there is overlap. In an increasingly distributed environment, where more and more services need to communicate with each other, RabbitMQ and Kafka have both come to be popular services that facilitate that communication.
Since we published that original blog post, many changes and developments in RabbitMQ have occurred. So, we thought this would be a great time to revisit how RabbitMQ and Kafka have changed, to check whether their respective strengths have shifted, and to see how they fit into today’s use cases.
What are RabbitMQ and Apache Kafka?
RabbitMQ is often summarized as an open source distributed message broker. Written in Erlang, it facilitates the efficient delivery of messages in complex routing scenarios. Initially built around the popular AMQP protocol, it’s also highly compatible with existing technologies (e.g., MQTT and JMS). RabbitMQ also has its own append-only log streaming technology, Streams, and its capabilities can be expanded through plug-ins enabled on the server. RabbitMQ brokers can be distributed and configured to be reliable in case of network or server failure.
Apache Kafka, on the other hand, is described as a distributed event streaming platform. Rather than focusing on flexible routing, it prioritizes raw throughput. Written in Scala and Java, Kafka builds on the idea of a “distributed append-only log,” where messages are written to the end of a log that’s persisted to disk, and clients can choose where they begin reading from that log. Likewise, Kafka brokers can be distributed and clustered across multiple servers for a higher degree of availability.
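To make the “clients choose where they begin reading” idea concrete, here is a minimal sketch using Kafka’s Java consumer API that attaches to a single partition and rewinds to the start of the log before reading. The broker address, topic name, and partition number are illustrative assumptions, not values from any particular setup.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReplayFromLogStart {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-example");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Attach directly to one partition of a hypothetical "orders" topic...
            TopicPartition partition = new TopicPartition("orders", 0);
            consumer.assign(List.of(partition));

            // ...and choose where to start reading: here, the very beginning of the log.
            consumer.seekToBeginning(List.of(partition));

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}
```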
RabbitMQ vs. Kafka
While they’re not exactly equivalent services, people often narrow their choice of messaging options down to these two and are left wondering which of them is better. We’ve long believed that’s not the correct question to ask. Instead, you want to focus on what each service excels at, analyze their differences, and then decide which one fits your use case best. In addition to the features offered by either service, you should also take into consideration the skills needed to operate the services and the developer communities that exist around them.
Requirements and use cases
In the past, there was a pretty clear-cut difference in design between RabbitMQ and Kafka, and as such, a difference in the use cases they served best. RabbitMQ’s message broker design excelled in use cases that had specific routing needs and per-message guarantees, whereas Kafka’s append-only log gave developers access to the stream history and more direct stream processing. While the use cases these two technologies could fulfill overlapped considerably, there were scenarios in which one was a demonstrably better choice than the other.
Now that RabbitMQ has introduced Streams, this is no longer the case. Sure, RabbitMQ still supports the well-established traditional queue model, but Streams add the append-only log model more closely associated with Kafka. RabbitMQ Streams support AMQP as well as their own high-throughput, stream-specific binary protocol. To scale this out, multiple streams can be grouped into a Super Stream, which is more akin to a Kafka topic, with each individual stream acting as a partition of the logical Super Stream.
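As a rough sketch of what that looks like from a developer’s point of view, the snippet below uses the RabbitMQ Stream Java client to declare a stream and publish to it over the stream protocol. The connection URI and the stream name are assumptions made for illustration.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.Producer;

public class StreamPublishSketch {
    public static void main(String[] args) {
        // Connect over the stream-specific binary protocol (default port 5552).
        Environment environment = Environment.builder()
            .uri("rabbitmq-stream://localhost:5552") // assumed local broker
            .build();

        // Declare an append-only stream; the call is idempotent if it already exists.
        environment.streamCreator().stream("invoices").create();

        Producer producer = environment.producerBuilder()
            .stream("invoices")
            .build();

        producer.send(
            producer.messageBuilder()
                .addData("invoice-001".getBytes(StandardCharsets.UTF_8))
                .build(),
            confirmationStatus -> {
                // Invoked asynchronously once the broker confirms the write.
            });

        producer.close();
        environment.close();
    }
}
```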
Developer experience
The lists of client libraries for both RabbitMQ and Kafka continue to grow, thanks to the work of their respective communities. As more languages and frameworks have grown in popularity, finding a well-supported and complete library for either service has become easier.
One thing to note is the growth of stream client libraries for both RabbitMQ and Kafka, which makes it considerably easier for developers to process streaming data. These libraries are particularly helpful when reading data from a stream, transferring or processing it, and/or writing it back to another queue. Additionally, ksqlDB and Greenplum are well worth checking out for developers looking to build streaming applications while taking advantage of their familiarity with relational databases. Either of these can be used with RabbitMQ or Kafka. In fact, with the addition of GemFire for fast in-memory massively parallel processing, ML/AI and/or big data analytics become a lot easier.
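Coming back to that read-process-write pattern, here is a small, hedged sketch using the RabbitMQ Stream Java client that consumes from one stream, applies a trivial transformation, and republishes to another. The stream names and the transformation step are hypothetical.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.OffsetSpecification;
import com.rabbitmq.stream.Producer;

public class ReadTransformRepublish {
    public static void main(String[] args) {
        Environment environment = Environment.builder()
            .uri("rabbitmq-stream://localhost:5552") // assumed local broker
            .build();

        // Output stream for the processed records (hypothetical name).
        Producer out = environment.producerBuilder().stream("orders-enriched").build();

        // Consume new messages from the input stream, transform them, and republish.
        environment.consumerBuilder()
            .stream("orders")
            .offset(OffsetSpecification.next())
            .messageHandler((context, message) -> {
                String enriched = new String(message.getBodyAsBinary(), StandardCharsets.UTF_8)
                    .toUpperCase(); // stand-in for real processing logic
                out.send(
                    out.messageBuilder().addData(enriched.getBytes(StandardCharsets.UTF_8)).build(),
                    confirmationStatus -> { });
            })
            .build();
    }
}
```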
Security and operations
RabbitMQ ships with out-of-the-box, pluggable authentication back ends, such as LDAP and OAuth2, to manage users and application access control. Kafka, by contrast, uses the Java Authentication and Authorization Service (JAAS) to configure its SASL framework, which gives you the freedom to choose from a very broad range of authentication mechanisms. Whether you choose RabbitMQ or Kafka will of course depend on your specific requirements and use case, but most security requirements can be met with either technology.
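For a sense of what the client side looks like on the Kafka end, here is a minimal sketch of SASL/SCRAM connection properties; the listener address, mechanism, and credentials are placeholders and must match whatever the broker’s JAAS configuration actually enables. On the RabbitMQ side, the equivalent choice is typically a short auth_backends entry in rabbitmq.conf.

```java
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class KafkaSaslClientConfig {
    public static Properties saslProperties() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9093"); // assumed listener
        // Authenticate over TLS using SASL/SCRAM; the mechanism must match the broker's setup.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"app-user\" password=\"app-secret\";"); // placeholder credentials
        return props;
    }
}
```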
It’s important to note the rise of Kubernetes over the last few years and how it affects the operation of these services. Substantial work has been done to allow infrastructure operators to run both RabbitMQ and Kafka on Kubernetes. The RabbitMQ operator and the Kafka Helm chart both offer fine-grained control over how these services are configured and run on Kubernetes, making it easy to get up and running with either one configured and clustered out of the box. The RabbitMQ operator has an advantage over a Helm chart because it bakes best-practice knowledge into any Kubernetes platform: it not only creates clusters for Day 1 operation, but also manages RabbitMQ clusters for Day 2 operations and beyond.
For those who do not want to be burdened with Kubernetes, a growing number of commercial services are available that will automatically deploy and host RabbitMQ or Kafka clusters on pretty much any cloud platform provider’s infrastructure.
Reliable delivery
Apache Kafka confirms message delivery to producer applications based on a desired number of broker acknowledgements. The possible acknowledgement values are 0 (no acknowledgements), 1, or all. An acknowledgement value of 1 is the most efficient, but because only the partition leader has to write the record before confirming, there is a risk of message loss even after a confirmation is sent to the producer application. Acks=all is the most reliable, since the leader waits for the full set of in-sync replicas to acknowledge the record, but it can be inefficient for a large number of brokers in a cluster.
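As a quick illustration, this is roughly how a producer opts into the strongest setting with acks=all; the broker address and topic are placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for the in-sync replicas before confirming
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-42", "created"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace(); // delivery was not confirmed
                    } else {
                        System.out.printf("confirmed at offset %d%n", metadata.offset());
                    }
                });
            producer.flush();
        }
    }
}
```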
For the classic and quorum queue types, RabbitMQ only confirms delivery of a persistent message once the data has been safely written to disk, which reduces the risk of message loss. RabbitMQ Streams support replication to a majority of brokers, but carry a risk of message loss similar to Apache Kafka’s. With quorum queues, RabbitMQ confirms message delivery once the message has been written to disk on a majority (a quorum) of the brokers in the cluster. This provides a reliable and efficient mechanism when using large clusters.
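The sketch below shows the corresponding pattern with the RabbitMQ Java client: declare a quorum queue, enable publisher confirms, publish a persistent message, and wait for the broker’s confirmation. The host and queue name are assumptions for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class QuorumQueueConfirms {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Declare a replicated quorum queue.
            channel.queueDeclare("orders", true, false, false,
                Map.of("x-queue-type", "quorum"));

            // Ask the broker to confirm publishes on this channel.
            channel.confirmSelect();

            channel.basicPublish("", "orders",
                MessageProperties.PERSISTENT_TEXT_PLAIN,
                "order created".getBytes(StandardCharsets.UTF_8));

            // Blocks until the broker confirms the publish (i.e., a quorum of queue
            // members has safely stored the message), or throws on failure/timeout.
            channel.waitForConfirmsOrDie(5_000);
        }
    }
}
```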
Performance
Performance can be hard to quantify with so many variables coming into play, including how the service is configured, how your code interacts with it, and of course the hardware it’s running on. Everything from network to memory and disk speed can dramatically impact the performance of the service. Both RabbitMQ Streams and Kafka optimize for performance, but you should also make sure your use case leverages them to maximize efficiency.
For RabbitMQ, there are some great how-to resources about maximizing performance, such as how to benchmark and stream performance results showing more than a million messages per second. These guides detail best practices for how to configure your clusters and how your code should interact with them for the best performance possible. Much of this advice revolves around things like managing queue size and connections, and being careful about how your client consumes messages. The RabbitMQ clustering guide also includes things to keep in mind when building a cluster.
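One of those client-side levers is the consumer prefetch count, which caps how many unacknowledged messages the broker will push to a single consumer. A small sketch, with the queue name and prefetch value chosen purely for illustration:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BoundedPrefetchConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Cap unacknowledged deliveries so the broker doesn't flood this consumer.
        channel.basicQos(100); // illustrative value; tune against your own workload

        channel.basicConsume("orders", false,
            (consumerTag, delivery) -> {
                // ...process the message, then acknowledge it explicitly.
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            },
            consumerTag -> { /* consumer was cancelled */ });
    }
}
```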
Likewise, Confluent has a great guide to running Kafka in production that covers many of the same concerns, from choosing the hardware that will run your Kafka cluster to configuring the cluster itself. There are a couple of things you’ll need to keep in mind since Kafka runs on top of the JVM, but the guide does a great job of pointing those out.
If you’re interested in raw numbers, both the RabbitMQ team and the Confluent team have recently put out their respective benchmarks. Both include a lot of details on how the clusters were configured and the workload that was placed on them, so make sure you take that information into consideration when reading the results. Use case and operations should significantly factor into your decision as well.
In general, we find that the performance of RabbitMQ Streams is comparable to Apache Kafka’s. We encourage you to try it out for yourself. Check out this open source project to compare publishing throughput between Apache Kafka and RabbitMQ Streams using an example Spring Batch application.
Making the call
Deciding whether to use RabbitMQ or Kafka has never been easy, and with both technologies improving every day, the margins of advantage have only gotten smaller. The decision you make will depend on your individual scenario. Make use of the knowledge contained here and apply it to the familiarity you have with your use case along with any proofs of concept.
Learn more
If you’re new to messaging services in general, a great place to start learning is with this video on event-driven architectures. If you’re a Spring developer, make sure to check out our guides to get started with RabbitMQ, Kafka, and Spring Cloud Stream. You can also review this article on event streaming using RabbitMQ with Spring.
And if you would like to know more about the VMware RabbitMQ commercial offering, check out this page or contact us.
Common questions
Q: What is the difference between Kafka and RabbitMQ?
Apache Kafka is a distributed event streaming platform that facilitates raw throughput; it is built around a distributed append-only log that can be clustered across multiple servers for a higher degree of availability. RabbitMQ offers the same kind of append-only log with Streams, which is comparable, if not better, in performance for some use cases. RabbitMQ’s capabilities can be expanded through the use of plug-ins enabled on the server, and its brokers can be distributed and configured to be reliable in the case of server or network failure. RabbitMQ supports many different protocols “natively” and has extremely flexible message routing capabilities. The unique aspect of RabbitMQ is that you can mix and match traditional messaging and high-throughput streams on the same broker.
Q: When should you use Kafka vs. RabbitMQ?
There is no one answer to this question. Quite often it boils down to a development team’s previous experience with one broker or the other. Now that RabbitMQ offers the same append-only data structure, it might be sensible to use RabbitMQ when migrating away from conventional messaging toward event streaming.
Q: Can Kafka and RabbitMQ be deployed on Kubernetes?
Yes, both Kafka and RabbitMQ can be deployed on Kubernetes.
Q: Should you use Kafka or RabbitMQ for microservices?
While Kafka utilizes a straightforward, high-performance routing approach that’s ideal for big-data use cases, RabbitMQ is ideal for blocking tasks and allows for faster server response time. Both options are suitable depending on your specific use case.
Q: Is Kafka higher performance than RabbitMQ?
Both Kafka and RabbitMQ optimize for performance, which can be very hard to quantify depending on your specific use case. However, RabbitMQ provides low latency and guaranteed message delivery. Of course, service configuration, code interaction, hardware, and network speed will dramatically impact the performance of either service.