Artificial Intelligence (AI) is revolutionizing how data moves across modern enterprises. Messaging and streaming technologies, long considered the backbone of event-driven architectures, now face an inflection point: Are they at risk of becoming obsolete, or are they on the verge of unlocking unprecedented opportunities?
The AI-Driven Shift in Data Processing
Traditional messaging and streaming platforms such as RabbitMQ, Apache Kafka, and Pulsar have been instrumental in enabling real-time data exchange. However, AI's rapid adoption has altered how organizations consume, analyze, and act on data. Modern message and stream brokers have enabled this evolution by keeping the underlying infrastructure decoupled and flexible without overburdening existing systems. AI-driven workloads demand many things, but the most obvious are low latency and high scalability. Low latency is driven by the need for the most up-to-date data, especially where real-time insights and rapid inference are expected. With that need comes pressure on scalability, and a requirement for the message bus to prioritize and classify data dynamically.
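As a concrete illustration of dynamic prioritization at the broker level, the sketch below declares a RabbitMQ classic priority queue with the Python pika client and publishes a message whose priority a classifier assigns. The classify_priority function is a hypothetical stand-in for whatever model actually scores the payload; the queue name and keyword match are illustrative only.

```python
import pika

def classify_priority(payload: bytes) -> int:
    """Hypothetical stand-in for an AI model that scores a message 0-9."""
    return 9 if b"urgent" in payload else 1

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Classic queue that honors per-message priorities (0-10).
channel.queue_declare(queue="inference.requests",
                      arguments={"x-max-priority": 10})

payload = b'{"sensor": 42, "status": "urgent"}'
channel.basic_publish(
    exchange="",
    routing_key="inference.requests",
    body=payload,
    properties=pika.BasicProperties(priority=classify_priority(payload)),
)
connection.close()
```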
The Opportunity: AI-Enhanced Messaging and Streaming
Rather than replacing traditional messaging, AI is augmenting it. The use of LLMs to prioritize and route messages has been debated many times. This approach involves publishing messages to an inbound queue or stream, processing them, and then prioritizing and routing them to a calculated destination. However, this creates the potential for a classic messaging anti-pattern: the single large inbound message queue. This is a no-no in modern messaging systems because it makes it easy for things to go wrong and for messages to get lost. After all, a message broker is not a database, despite what some may think. The risk can be mitigated, though, with the correct use of an intelligent load balancer. As with any AI-augmented system, AI-driven routing improves the accuracy and efficiency of message delivery and lifts overall system performance.
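A minimal sketch of the prioritize-and-route pattern, again using pika: messages land on an inbound queue, a model chooses the destination, and the consumer republishes and only then acknowledges. The llm_route function and the queue names are hypothetical placeholders for a real LLM call and a real topology.

```python
import pika

def llm_route(body: bytes) -> str:
    """Hypothetical LLM call that returns a destination routing key."""
    return "orders.high" if b"vip" in body else "orders.normal"

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="inbound")
for dest in ("orders.high", "orders.normal"):
    ch.queue_declare(queue=dest)

def on_message(channel, method, properties, body):
    # Route each inbound message to the queue the model selects, then
    # acknowledge, so nothing is lost if the router crashes mid-flight.
    channel.basic_publish(exchange="", routing_key=llm_route(body), body=body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="inbound", on_message_callback=on_message)
ch.start_consuming()
```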
LLMs are very good at filtering unwanted inbound messages, which can significantly lighten the load on consuming applications and prevent message queue bloat. Streams, though, offer a much more data-in-flight-friendly approach to this problem: publish large amounts of unfiltered or unprocessed data to a stream, where AI-powered filtering and prioritization can be better managed. Stream processing is nothing new, but AI-powered stream processing is the logical progression. Bear in mind, however, that the implementation differences between streams and conventional message queues are significant.
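The stream variant might look like the sketch below, assuming a RabbitMQ stream (declared with the x-queue-type: stream argument, which requires a durable queue and a consumer prefetch) holding the raw firehose, with an AI filter deciding what reaches a conventional queue. The is_relevant function stands in for a real model, and the queue names are invented.

```python
import pika

def is_relevant(body: bytes) -> bool:
    """Hypothetical AI filter; drops noise before it reaches consumers."""
    return b"error" in body

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# The stream holds the raw, unfiltered firehose.
ch.queue_declare(queue="telemetry.raw", durable=True,
                 arguments={"x-queue-type": "stream"})
ch.queue_declare(queue="telemetry.filtered")

ch.basic_qos(prefetch_count=100)  # streams require a consumer prefetch

def on_message(channel, method, properties, body):
    if is_relevant(body):
        channel.basic_publish(exchange="", routing_key="telemetry.filtered",
                              body=body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="telemetry.raw", on_message_callback=on_message,
                 arguments={"x-stream-offset": "first"})
ch.start_consuming()
```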
There are many other cases for integrating AI into a traditional message broker architecture. AI-driven dynamic load balancing can improve resource utilization, reduce latency, and prevent bottlenecks across the system. AI components can predict traffic patterns and balance the load across different message queues, clusters, or even availability zones based on current demand or historical trends. This is particularly useful in 'spiky' IoT applications where repetitive tasks occur at specific times of day, for example when thousands of handheld devices for a courier company are updated each morning with the day's collections or deliveries.
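A deliberately naive sketch of predictive queue selection: historical per-queue rates feed a trailing-mean forecast, and new work goes to the queue predicted to be least loaded. The queue names and sample figures are invented, and a production system would replace the forecast with a proper model.

```python
from statistics import mean

# Hypothetical hourly message rates observed on previous mornings,
# e.g. the courier-device update spike.
history = {
    "devices.eu": [1200, 9800, 11000, 1500],
    "devices.us": [900, 400, 600, 8700],
}

def forecast(samples: list[int]) -> float:
    """Naive predictor: a trailing mean; a real system would use an ML model."""
    return mean(samples[-3:])

def pick_queue(queues: dict[str, list[int]]) -> str:
    """Send new work to the queue predicted to be least loaded."""
    return min(queues, key=lambda q: forecast(queues[q]))

print(pick_queue(history))  # the queue with the lowest forecast load
```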
Anomaly detection in message brokers is a new area that can alert operational teams to changes in normal message flow patterns. Through careful monitoring of key metrics, a system can detect sudden data spikes, data corruption, or potentially malicious activity faster than any human can. This enables earlier detection of issues, reduced downtime, and more effective troubleshooting.
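One way to prototype this is to poll the RabbitMQ management API (the /api/queues endpoint) for queue depths and flag statistical outliers. The sketch below uses a simple z-score test; the credentials, threshold, and sample history are illustrative assumptions.

```python
import requests
from statistics import mean, stdev

# Poll the RabbitMQ management API (default credentials shown for brevity).
QUEUES_URL = "http://localhost:15672/api/queues"

def queue_depths() -> dict[str, int]:
    resp = requests.get(QUEUES_URL, auth=("guest", "guest"), timeout=5)
    resp.raise_for_status()
    return {q["name"]: q["messages"] for q in resp.json()}

def is_anomalous(samples: list[int], latest: int, threshold: float = 3.0) -> bool:
    """Flag a depth more than `threshold` standard deviations from the mean."""
    if len(samples) < 2 or stdev(samples) == 0:
        return False
    return abs(latest - mean(samples)) / stdev(samples) > threshold

# In practice the history would be collected on a schedule from
# queue_depths(); here it is a hypothetical sample for one queue.
history = [120, 135, 110, 128, 4200]
print(is_anomalous(history[:-1], history[-1]))  # True: a sudden spike
```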
For message brokers in systems with human interactions (e.g., customer service systems), embedded AI components can analyze sentiment, context, or intent within message payloads. This can help prioritize messages or trigger specific workflows based on the urgency or tone of the message. Similarly, AI can be deployed to monitor the health of message broker components and predict failures or resource depletion based on performance metrics and trends. Such proactive maintenance has the obvious benefit of preventing system downtime and improving overall reliability.
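A sketch of sentiment-triggered escalation, with sentiment_score standing in for a real model call and the support.* queue names invented for the example:

```python
import pika

def sentiment_score(text: str) -> float:
    """Hypothetical model call returning -1.0 (angry) .. 1.0 (happy)."""
    return -0.8 if "refund" in text.lower() else 0.2

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="support.inbound")
ch.queue_declare(queue="support.escalation")

def on_message(channel, method, properties, body):
    # Strongly negative sentiment triggers the escalation workflow.
    if sentiment_score(body.decode()) < -0.5:
        channel.basic_publish(exchange="", routing_key="support.escalation",
                              body=body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="support.inbound", on_message_callback=on_message)
ch.start_consuming()
```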
Using LLMs to calculate message tags can also help with system optimization. They can provide additional insights based on the message payload and can potentially inject contextual data before messages are forwarded downstream. This helps downstream systems make more informed decisions and automate processes based on enriched data. For many users, though, the thought of another system changing the metadata, or even the message body, poses a significant challenge; most plan for data to be correct at source.
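One low-risk way to do this is to enrich only the message headers and leave the body untouched, as in the sketch below; llm_tags is a hypothetical model call and the tag names are invented.

```python
import pika

def llm_tags(body: bytes) -> dict:
    """Hypothetical LLM call that derives tags from the payload."""
    return {"x-topic": "billing", "x-language": "en", "x-confidence": "0.92"}

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="enriched")

body = b'{"invoice": 1017, "text": "Please re-send my bill"}'
# The body is forwarded untouched; only metadata (headers) is added,
# which sidesteps the concern about another system mutating the payload.
ch.basic_publish(
    exchange="",
    routing_key="enriched",
    body=body,
    properties=pika.BasicProperties(headers=llm_tags(body)),
)
conn.close()
```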
By integrating AI into conventional message brokers, it is possible to increase the level of automation, intelligence, and efficiency in message routing, processing, and system monitoring. This can enhance performance, reliability, and security while optimizing resource use and scaling capabilities, without the need to redesign the whole messaging infrastructure.
The Threat: Are Traditional Messaging Systems at Risk?
While AI offers numerous opportunities for enhancing message brokers, it also introduces risks that need to be carefully managed. One of the largest is 'event overload' on messaging infrastructure, due to the vast amount of data that AI/ML systems can consume, and indeed need to consume if they are to be of any real use to a business. Simply 'bolting on' a data-hungry LLM to a messaging topology that has not been designed to cope with peak loads is a significant step towards the perfect storm of application meltdown. Message queues are at their best when empty or almost empty, meaning the ratio of publishing to consumption is well balanced. AI systems have the potential to break this balance very quickly.
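A crude guard against this imbalance is to apply backpressure at the publisher, for example by checking queue depth (via a passive declare in pika) before publishing. The max_depth value below is an arbitrary illustration, and the queue is assumed to already exist.

```python
import time
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

def publish_with_backpressure(queue: str, body: bytes, max_depth: int = 10_000):
    """Refuse to deepen an already-deep queue; wait for consumers to catch up."""
    while ch.queue_declare(queue=queue, passive=True).method.message_count > max_depth:
        time.sleep(1)  # crude backoff; a real system might shed or reroute load
    ch.basic_publish(exchange="", routing_key=queue, body=body)

# Example usage, assuming the queue was declared elsewhere:
# publish_with_backpressure("inference.requests", b'{"job": 1}')
```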
Adding AI-driven routing, load balancing, or scaling might introduce unpredictability in a system's behavior, making it harder to trace errors or understand the cause of failures. This increases the difficulty of debugging, maintaining, and auditing the system, especially if the AI model's decisions are opaque or difficult to interpret. A poorly trained LLM could also cause havoc by routing messages to the wrong consumer. Models that rely on historical data for prediction or classification can break when trained on outdated or incomplete datasets, leading to incorrect routing, prioritization, or anomaly detection, and ultimately to performance degradation or system failures.
AI components need to be properly secured; they could be vulnerable to adversarial attacks in which malicious actors feed the system misleading data to manipulate message routing or cause disruptions. As with any infrastructure component, increased exposure to security breaches, unauthorized access, or denial-of-service attacks has the potential to compromise message integrity or confidentiality.
AI-driven message brokers might introduce latency due to the time required for real-time inference or model decision-making, which could be particularly problematic in low-latency or high-throughput systems. For any high-performance message broker, delivery delays and potential bottlenecks are not acceptable, especially when handling large volumes of messages or complex AI computations.
Although improving with each generation, AI-driven systems often require significant computational resources, especially for real-time data processing, model training, or inference. These added demands could strain the underlying infrastructure and degrade the performance of message brokers; from a resource point of view, the two are best kept apart. Otherwise, the result can be higher operational costs, increased latency, and compromised system performance, particularly in high-throughput environments.
There is a temptation with AI to over-automate processes, with the result that critical decisions (such as message prioritization or failure handling) are made without sufficient human oversight or intervention, especially in complex or edge-case scenarios. This negates the benefit of AI in the first place and reduces the system's flexibility in adapting to unexpected situations, which in turn could lead to failures in handling rare but critical events, or in cases where human judgment is required. Many AI models, especially complex ones such as deep learning, are often considered "black boxes," meaning it is difficult to understand how decisions are being made. This lack of transparency could hinder troubleshooting and trust in the system.
As with over-automation, there is a desire in some cases to use AI to auto-scale a messaging system based on predicted load. Although it is technically possible to automatically add new message queues, increase the number of nodes in a cluster, or even create a whole new cluster, this has to be very carefully policed. Some cloud providers would love users to be able to scale up to infinity based on the predictions of an AI model, but there is a fundamental problem: how do you scale down? Do you just switch off the new message queues? How do you know whether applications are still publishing or consuming to or from them? Just as with scaling up, there would have to be lower limits as well as upper limits, to ensure the messaging infrastructure doesn't vanish before your eyes every time things are quiet.
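Whatever drives the prediction, the scaling decision itself can be kept inside hard guard rails, as in this sketch; the node counts and per-node throughput figure are made-up numbers.

```python
# A guard-railed scaling decision: the AI forecast proposes a size, but
# hard floor and ceiling limits are applied before anything is provisioned.
MIN_NODES, MAX_NODES = 3, 12

def target_cluster_size(predicted_msgs_per_sec: float,
                        msgs_per_sec_per_node: float = 5000.0) -> int:
    """Clamp the model's suggestion between the floor and the ceiling."""
    suggested = round(predicted_msgs_per_sec / msgs_per_sec_per_node)
    return max(MIN_NODES, min(MAX_NODES, suggested))

print(target_cluster_size(2000.0))   # quiet period -> 3, never below the floor
print(target_cluster_size(90000.0))  # spike -> 12, never above the ceiling
```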
AI models might unintentionally learn biases from historical data or from the way some messages were previously handled. This could lead to biased routing or prioritization, affecting system balance or fairness, and potentially even compliance with regulations.
The Future: AI and Messaging Must Evolve Together
The intersection of AI and messaging presents a paradigm shift rather than a competition. Organizations that embrace AI-augmented messaging will no doubt unlock smarter workflows with real-time, AI-driven decisions. Their architectures will become more resilient through self-optimization in response to AI insights, at a level of detail not currently possible with human eyes alone.
VMware Tanzu RabbitMQ is no exception: having been around for more than 17 years, it is constantly evolving and improving. The latest version, Tanzu RabbitMQ 4.0, is a good example of this, with a new, more flexible metadata store along with new native protocol support. These (and other) features enable users to work towards some of the approaches discussed here.
Conclusion
AI is not a threat to messaging and streaming technologies; it is a catalyst for their evolution. As AI becomes integral to modern enterprises, messaging systems must adapt, leveraging AI for intelligent message routing, anomaly detection, and real-time decision-making. The future lies in AI-driven event processing, where real-time intelligence fuels next-generation applications. As with any new and emerging technology, though, it has to be deployed with care and understanding, in the right places.
Will your messaging and streaming architecture evolve with AI, or risk falling behind?