RabbitMQ has long been one of the most popular message brokers in the world. Last year, streams were introduced, and today RabbitMQ can support many use cases, whether as a message broker, for message streams, or both in unison. This blog will explain why RabbitMQ fits well into most messaging and streaming scenarios and why it’s an excellent choice. But before we jump into that, let’s cover some basics around event streaming.
When is event streaming needed?
Data solutions like relational databases, massively parallel databases, and NoSQL databases are great for getting the current state of information. For example, these solutions can easily manage the last known driving speed of a vehicle, a hospital patient’s health, or the current balance of a financial account. Capturing data state changes based on events often involves using triggers, listeners, or application code that saves the details of each state change. Stored state changes in a database often contain event information to determine what changed, what the source of the change was, and when the change happened. Handling this more complex data, such as sending updates of vehicle position changes on a road trip, sharing updates on a patient’s vital statistics, or tracking credit card activity across hundreds of thousands of events per second, often requires an event-driven solution.
What is event streaming?
An event stream persists data state changes as a sequence of events. The events in a stream can be produced at very high volumes. Event-streaming solutions play a significant role in increasing the value of event-driven architectures.
Event-streaming brokers are repositories of events (sometimes called an event log). They also track which events each consumer has processed. With event streaming, applications can read the history of data events, such as all product updates or the updates from a given point in time. Applications can also replay messages from a particular point in time, or from the beginning of time. This is known as time-traveling.
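To make time-traveling concrete, here is a minimal sketch using the Python pika client against a local RabbitMQ 3.9+ broker (which supports streams, as discussed later in this post). It attaches a consumer at a chosen point in a stream; the stream name and offset value are hypothetical.

```python
import pika

# Assumes a local RabbitMQ 3.9+ broker with default credentials.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Streams are append-only logs, so a consumer can attach at any retained offset.
channel.queue_declare(queue="product-events", durable=True,
                      arguments={"x-queue-type": "stream"})

# Stream consumers must use manual acknowledgments and set a prefetch limit.
channel.basic_qos(prefetch_count=100)

def handle(ch, method, properties, body):
    print("replayed:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# "x-stream-offset" controls where replay starts: "first" for the beginning of
# time, "last"/"next" for the tail, or an absolute offset as shown here.
channel.basic_consume(queue="product-events", on_message_callback=handle,
                      auto_ack=False, arguments={"x-stream-offset": 1000})
channel.start_consuming()
```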
There is significant business value in treating each data change as an event that can be published or replayed on demand. I was involved with a retail customer who wanted product catalog information, managed in their enterprise data center, to be made available to local retail stores in real time. We used an event-streaming solution to treat product updates as events. Each local retail store could then get the latest product information, such as safety instructions and pricing updates, based on new published events. In the future, newly onboarded retail stores can replay all product events to build their local inventory from the event stream.
Message brokers vs. event-streaming brokers
You may have heard about message brokers and wondered about the difference between them and event-streaming brokers.
Message brokers
A message broker delivers messages (or events) from producers to consumers. Once a consumer has safely processed a message, it sends an acknowledgment back to the broker, and the broker then safely removes the message. Message brokers typically allow for intelligent routing: the logic to route messages based on rules can be configured within the broker itself.
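As a minimal sketch of that acknowledgment flow, assuming the Python pika client, a local broker with default credentials, and a hypothetical orders queue:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A classic (non-stream) queue: messages are removed once acknowledged.
channel.queue_declare(queue="orders")

# Producer side: publish a message to the default exchange.
channel.basic_publish(exchange="", routing_key="orders", body=b"order created")

# Consumer side: acknowledge after processing so the broker can delete the message.
def handle(ch, method, properties, body):
    print("processed:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```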
Event-streaming brokers
An event-streaming broker also supports the delivery of events from publisher applications to consumers. The main difference is that streaming brokers do not remove acknowledged messages; they allow you to replay events as needed. Event-streaming brokers support high-volume throughput, but to achieve this, they often do not support intelligent routing within the broker. Intelligent message routing is normally delegated to client-side applications.
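As a sketch of the contrast with the classic queue above, here is how a RabbitMQ stream could be declared with pika. Acknowledged events are not deleted; they stay in the stream for replay (as in the earlier time-traveling sketch) until a retention policy discards them. The stream name and retention window are hypothetical.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declaring a stream instead of a classic queue: it is an append-only log, so
# consumed and acknowledged events remain available for replay until the
# retention policy (here, seven days) removes them.
channel.queue_declare(
    queue="product-events",
    durable=True,  # streams are always durable
    arguments={
        "x-queue-type": "stream",
        "x-max-age": "7D",  # retention window instead of delete-on-ack
    },
)

# Publishing works exactly as with a classic queue.
channel.basic_publish(exchange="", routing_key="product-events",
                      body=b"product 123 price updated")
connection.close()
```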
RabbitMQ: The best of both worlds for message and event-streaming brokers
RabbitMQ, available in both open source and commercial versions, has been a prominent message broker for many years. The streams capability was introduced in version 3.9, making RabbitMQ a viable event-streaming solution. This means it can be used as both a message and streaming broker. Customers can continue to use RabbitMQ for those popular message broker use cases, and they also have the option to use it as both a message and event-streaming broker in a unified solution.
RabbitMQ supports low-latency message delivery with flexible routing. RabbitMQ exchanges let you dynamically determine which consumers receive which events, using configured binding rules based on message properties. Exchanges can route these events to streams. Combining the intelligent routing of exchanges with event streaming gives RabbitMQ a unique advantage over other event-streaming solutions.
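As a minimal sketch of that routing, assuming the pika client and hypothetical exchange, stream, and routing-key names, a topic exchange can deliver only matching events into a stream:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A topic exchange evaluates binding patterns against each message's routing key.
channel.exchange_declare(exchange="products", exchange_type="topic", durable=True)

# Bind a stream so that only pricing events are appended to it.
channel.queue_declare(queue="pricing-events", durable=True,
                      arguments={"x-queue-type": "stream"})
channel.queue_bind(queue="pricing-events", exchange="products",
                   routing_key="product.pricing.*")

# Routed into the stream (matches product.pricing.*) ...
channel.basic_publish(exchange="products", routing_key="product.pricing.update",
                      body=b"price updated")
# ... not routed to it (no matching binding).
channel.basic_publish(exchange="products", routing_key="product.safety.update",
                      body=b"safety sheet updated")
connection.close()
```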
RabbitMQ versus Kafka
RabbitMQ is often compared with Apache Kafka, and many articles have been written about the pros and cons of each. Unfortunately, the information in many of these comparisons is out of date since they do not reflect the fact that RabbitMQ now includes streams.
Apache Kafka offers event streaming with high throughput and event replay capabilities. I have seen customers using Apache Kafka and RabbitMQ (or other message brokers) as part of their infrastructure for different use cases. In the past, RabbitMQ was not considered for event-streaming use cases, and customers that needed very high throughput with replay support would turn to other solutions.
The VMware RabbitMQ product team has now bridged the gap for high-throughput and replay capabilities. Publicly available benchmark numbers for RabbitMQ demonstrate streams’ ability to process millions of events per second for those high-throughput use cases in addition to having replay capabilities.
Apache Kafka scales by dividing topics into partitions. Each Apache Kafka topic partition preserves first in, first out (FIFO) ordering. The number of partitions needs to be planned ahead of time. RabbitMQ scales by streams (which are another type of RabbitMQ queue). Each RabbitMQ stream also preserves FIFO ordering. With RabbitMQ, however, you can dynamically add more streams or run additional RabbitMQ brokers to horizontally scale a cluster. The latest super stream features make it easier to partition streams, routing events based on message properties.
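As a rough illustration of that idea, and not the super stream client API itself, the sketch below partitions events across several streams using plain AMQP with pika, routing on a hashed message property. The exchange name, stream names, and partition count are hypothetical.

```python
import zlib
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

partitions = 3
channel.exchange_declare(exchange="invoices", exchange_type="direct", durable=True)

# One stream per partition, each bound to the exchange with its partition index.
for p in range(partitions):
    stream = f"invoices-{p}"
    channel.queue_declare(queue=stream, durable=True,
                          arguments={"x-queue-type": "stream"})
    channel.queue_bind(queue=stream, exchange="invoices", routing_key=str(p))

def publish(customer_id: str, body: bytes) -> None:
    # Deterministically hash a message property (the customer id) so all events
    # for the same customer land in the same FIFO stream.
    partition = zlib.crc32(customer_id.encode()) % partitions
    channel.basic_publish(exchange="invoices", routing_key=str(partition), body=body)

publish("customer-42", b"invoice created")
connection.close()
```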
Customers can use Spring with RabbitMQ streams, along with other VMware Data Solutions such as GemFire, SQL, or Greenplum, to implement use cases similar to Apache Kafka KSQL/KTable.
Why choose VMware RabbitMQ?
VMware is a steward of the popular open source RabbitMQ project and also offers a commercial version called VMware RabbitMQ. With VMware RabbitMQ, you get all the messaging and streaming capabilities of the open source version in an enterprise-grade supported distribution, including timely defect resolution and common vulnerabilities and exposures (CVE) patching. The commercial version also includes additional features that offer operational, deployment, and management efficiencies on top of open source RabbitMQ.
In addition, customers can purchase 24/7 support for open source RabbitMQ and, under their licensing agreement, have the option to take advantage of some of the commercial features. VMware provides the same level of support for open source RabbitMQ with or without the commercial features.
Learn more about VMware RabbitMQ
Combining message broker and event-streaming broker workloads into a unified RabbitMQ solution covers multiple use-case scenarios and can help reduce the complexity and deployment costs of your event-driven architecture.
To dive deeper into RabbitMQ's benefits, check out these related articles:
To learn more about VMware Data Solutions like RabbitMQ, GemFire, SQL, or Greenplum, feel free to contact us at any time.