
Case Study: Shift from Delayed Polling to Real-Time Telematics with RabbitMQ

Real-time requirements often mean architectures need to be re-evaluated.

Lucid Logistics recently ran into this type of situation: they needed to provide their customers vehicle telematics data in real time, based on information delivered via satellite. Lucid Logistics is in the business of providing companies with vehicle telematics data that those companies consume and use. Currently, their application provides access to the data via user interfaces, email notifications, and reports. However, their customers now want to act on the data using their own custom business rules. For instance, a customer doing fleet management would want to take the current positions of its trucks and run algorithms to optimize routes, changing drivers' job orders based on location and skills. To do this, individual customers needed to consume, store, and manipulate the data themselves and process their own business rules in real time.

This shift required a fundamental change in architecture. Instead of allowing external services to poll and query the data store at will via SOAP, with the data delays that entails, the architecture needed to push data to subscribers in real time. By applying open source technologies such as RabbitMQ, PubSubHubbub (PuSH), and RabbitHub, Lucid Logistics is developing a hub-based subscription architecture and is now able to deliver information in real time.

Background on Telematics and Satellite Data

Companies need GPS monitoring to manage vehicle fleets and marine vessels as well as non-mobile, fixed assets like compressors and generators. Lucid Logistics provides end-to-end solutions for global GPS monitoring, management, and tracking of remote assets via the Iridium Low Earth Orbit (LEO) satellite network.

The Lucid Logistics system provides near real-time telematics information, including asset location, emergency notifications, and vehicle driver behavior such as excessive speeding and idling. Asset health information is also provided by J1939/J1708 integration to read various sensor data. The following diagram outlines the components of the product and data flow:

[Figure: Lucid Logistics high-level architecture and data flow]

The Challenge with Real-Time

One of the biggest challenges has been distributing data immediately as it arrives from the satellite provider.

Currently, data is stored in a backend database and is available for web access and reporting within seconds of being received. The product scales under the assumption that the fleet manager or asset administrator is logged into the Lucid Logistics application and is monitoring events there. The manager can also configure a set of alarms and receive alerts via email notifications. Additionally, events can be aggregated and analyzed using database reporting tools. This model is recognized as the industry standard in asset tracking and management.

Some customers, however, have more complex requirements and want their own business rules implemented outside of the Lucid Logistics infrastructure. These companies need a direct data flow that is as close to real-time as possible.

The Lucid Logistics development team first tried another approach: a SOAP-based web service interface, exposed under the assumption that end users would poll for data on a schedule. The results were not pretty, and resources were wasted on both sides. Strict policies had to be implemented and enforced to prevent excessive resource drain. End users were not happy because the feed was no longer real-time, or even near real-time. The developers struggled with the resource strain this solution imposed and had to write more code to alleviate it. A new approach to message communication was needed.

The Solution—PubSubHubbub, RabbitMQ, and RabbitHub

Extending the present solution to better meet the demand for near real-time data feeds required some type of publish/subscribe mechanism, and webhooks were considered a simple, effective approach to integration. One open source protocol almost immediately caught the eye of the development team: PubSubHubbub, an open protocol for distributed publish/subscribe communication on the Internet. The protocol was initially designed to extend the Atom and RSS feed formats. With it, feed readers subscribe to a feed via a "hub", which informs them when the feed changes.

The original implementation of PuSH was geared toward blog feeds and required all passed data to be XML. The protocol itself was very attractive, but the heavyweight message format was a nuisance. Luckily, the PubSubHubbub project page referenced several alternative implementations, one of which was RabbitHub, introduced to the community by Tony Garnock-Jones in 2009.

The biggest advantage of RabbitHub over the other implementations was its support for a flexible message format. Because RabbitHub is written as a plugin to RabbitMQ, it also immediately benefits from a myriad of RabbitMQ features such as a low memory footprint, built-in security, SSL support, message and queue persistence, and flexible routing topologies via exchanges. RabbitMQ on its own opens up many additional potential uses within the Lucid Logistics message processing ecosystem. One such application could be routing messages to another queue before they are written to the database.
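To illustrate that kind of intra-broker routing, here is a minimal Python sketch using the pika client. The exchange and queue names (telematics, pre_db), the routing keys, and the sample payload are all hypothetical, and the sketch assumes a local broker with default credentials:

# Minimal sketch of intra-broker routing (all names hypothetical): messages
# published to a topic exchange are copied into a "pre_db" queue that a
# separate consumer would drain into the database.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A topic exchange lets one published message fan out to several queues.
channel.exchange_declare(exchange="telematics", exchange_type="topic", durable=True)

# Queue that buffers messages ahead of the database writer.
channel.queue_declare(queue="pre_db", durable=True)
channel.queue_bind(queue="pre_db", exchange="telematics", routing_key="asset.#")

# Publish a position report; every queue bound to "asset.#" receives a copy.
channel.basic_publish(
    exchange="telematics",
    routing_key="asset.position",
    body='{"asset_id": "truck-42", "lat": 44.98, "lon": -93.27}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()

A separate consumer would then drain pre_db into the database, keeping the database writer decoupled from the producers.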

When the latest version (3.1.5) of RabbitMQ was released, another member of the RabbitMQ community, Brett Cameron, helped bring RabbitHub up to date with that version of RabbitMQ and with the latest releases of Erlang/OTP. The Lucid Logistics developers were not versed in Erlang, so Brett's help with a few existing bugs and with SSL support was instrumental to the success of the project. After about three weeks of collaboration with Brett, Lucid Logistics had the entire pub/sub scheme up and running on its development servers, with messages being pushed to subscribers in JSON format.

Integrating RabbitHub into the existing Lucid Logistics message delivery stack was a simple task. RabbitHub places a pub/sub layer on top of plain old HTTP POST and provides two URLs for every AMQP exchange and queue. One URL is used to deliver messages to the exchange or queue:

curl -d "test message"
https://guest:guest@localhost:15671/endpoint/q/foo?hub.topic=foo
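For comparison, the same publish from application code might look like the following Python sketch. The guest credentials, port 15671, and the foo queue are carried over from the curl example; verify=False is a development-only concession to a self-signed certificate:

# Same publish as the curl example above, from Python.
import requests

resp = requests.post(
    "https://localhost:15671/endpoint/q/foo",
    params={"hub.topic": "foo"},
    data="test message",
    auth=("guest", "guest"),
    verify=False,  # dev-only: skip cert validation for a self-signed cert
)
resp.raise_for_status()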

The other is used to subscribe to messages forwarded by the exchange or queue:

curl -k -v \
  -d "hub.mode=subscribe&hub.callback=http://localhost:4567/ClientCallback&hub.topic=foo&hub.verify=sync&hub.lease_seconds=600" \
  https://guest:guest@localhost:15671/subscribe/q/foo
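On the subscriber side, the callback URL has to do two things: answer the hub's verification request by echoing hub.challenge back, and accept message deliveries as plain HTTP POSTs. Here is a minimal Python sketch of such a callback, matching the hypothetical http://localhost:4567/ClientCallback endpoint from the subscribe example above:

# Minimal subscriber callback (sketch): answers the hub's verification GET by
# echoing hub.challenge, then accepts pushed messages as plain HTTP POSTs.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ClientCallback(BaseHTTPRequestHandler):
    def do_GET(self):  # subscription verification
        params = parse_qs(urlparse(self.path).query)
        challenge = params.get("hub.challenge", [""])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(challenge.encode())  # echoing it confirms the subscription

    def do_POST(self):  # message delivery
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("received:", body.decode(errors="replace"))
        self.send_response(204)  # acknowledge receipt, no body needed
        self.end_headers()

# Matches the callback URL in the subscribe example: http://localhost:4567/ClientCallback
HTTPServer(("localhost", 4567), ClientCallback).serve_forever()

Because the subscription above uses hub.verify=sync, the hub verifies the callback during the subscribe request itself, so a server like this must already be running when the curl command is issued.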

So, whenever new data comes in, it is pushed to the hub and HTTP POSTs go out to the subscribers' callback URLs. With this approach, all the polling overhead goes away and message latencies drop from minutes to milliseconds; subscribers no longer have to poll constantly via SOAP and complex queries.

[Figure: SOAP polling vs. PubSubHubbub/RabbitMQ push at Lucid Logistics]


About the Author: Natalya Arbit is a senior web developer with close to fifteen years of professional experience across a variety of industries. She has spent the last six years working in telematics, with a focus on supporting SaaS multi-tenant web portals built on the Microsoft stack of technologies. She holds an MS in Geology from the University of Minnesota Duluth and a BS in Economics from the Russian State Geological Prospecting University.