Recently, we had the opportunity to speak with architect Brett Cameron about vFabric RabbitMQ. A popular speaker, Brett is well known for his effort to port Erlang and RabbitMQ over to the “legacy” OpenVMS operating system platform (now owned by HP). With over 19 years in the software industry, Brett specializes in systems integration and large, distributed systems. Of course, he has spent a lot of time with OpenVMS – an OS with one of the more interesting histories in the software industry.
When we started chatting with Brett, he had recently discussed the concept of the Polyglot Rabbit with Alexis Richardson and written a great article titled “The Polyglot Rabbit: Examples of Multi-Protocol Queues in RabbitMQ.” According to Brett, the main point of the article is that you can publish messages into this environment via one protocol and consume them via one or more other protocols (simultaneously, if you want). “It’s a brilliant and a very powerful capability.” Brett felt that this capability was possibly not being promoted enough, and he hopes the article will go some way towards fixing this.
In Brett’s article, there are seven examples with sample code included:
- A message produced via the Pika Python AMQP library and consumed over STOMP via a C API on OpenVMS (or most UNIX/Linux variants)
- The Pika Python AMQP library producing a message to a fanout exchange, where one consumer receives it via AMQP and another via STOMP
- An extension of example 2, adding an HTTP consumer via a Ruby Sinatra script and the RabbitHub plugin
- Publishing over HTTP via the cURL command to the RabbitHub plugin
- Publishing via the STOMP protocol and the stomp.py Python client
- A Pika Python script publishing a message via AMQP and a C program consuming it via MQTT
- An extension of example 6, adding an MQTT publisher and a STOMP consumer
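To make the cross-protocol idea concrete without needing a running broker, the sketch below builds the kind of STOMP SEND frame that a client such as stomp.py would transmit to RabbitMQ's STOMP adapter in examples like these. The frame-builder function is illustrative only (it is not from Brett's article or from any client library); it simply follows the STOMP frame layout: a command line, header lines, a blank line, the body, and a terminating NUL byte.

```python
# Minimal sketch of the STOMP wire format used in several of the
# examples above. A STOMP frame consists of a command line, header
# lines, a blank line, the body, and a terminating NUL byte.

def build_stomp_send(destination, body, content_type="text/plain"):
    """Serialize a STOMP SEND frame roughly as a client library would."""
    encoded_body = body.encode("utf-8")
    headers = [
        ("destination", destination),
        ("content-type", content_type),
        ("content-length", str(len(encoded_body))),
    ]
    lines = ["SEND"] + [f"{name}:{value}" for name, value in headers]
    # Command and headers, blank line, body, NUL terminator.
    return ("\n".join(lines) + "\n\n").encode("utf-8") + encoded_body + b"\x00"

frame = build_stomp_send("/queue/test", "hello rabbit")
print(frame.decode("utf-8").replace("\x00", "<NUL>"))
```

In a real deployment a library such as stomp.py (or Pika, on the AMQP side) handles this serialization and the broker connection; the point is only that the message body lands in an ordinary RabbitMQ queue, from which AMQP, MQTT, or HTTP consumers can then read it.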
Q&A With Brett Cameron
Q1. You have spent a considerable portion of your 19+ year career working on the architecture and design of distributed, enterprise systems and integrations. What drew you to this space and what keeps you intrigued by it?
A1. When I started working with distributed systems back in the early to mid-’90s, I just seemed to have something of a knack for understanding this stuff and getting it to work. Generally speaking, I found coding piles of business logic rather boring—it was much more interesting to tinker with lower-level things like operating systems and networks—so I just seemed to carve out a niche for myself in the integration space. Around this time we were starting to get into using stuff like DCE and CORBA, and some other rather crude but quite efficient RPC tools. I’d come up with the architecture and details of how we were going to use the software, and would then work with the development teams to deliver the end solution. Around this time (1994, if I recall) I also coded up our own little RPC package that basically worked like a simple web service, sending formatted textual data between clients and servers via HTTP POST. This was written back in the days of Digital Equipment Corporation, and unfortunately there was not much in the way of client-server software available at the time for the DEC OpenVMS platform. Interestingly, the customer this was written for is still using it; we’ve only just started replacing it with something standards-based, so I’m rather proud of that—web services before web services.
In some ways I think it is easier today to come up with good integration solutions—it’s certainly different. In general, I would say that there is a better understanding of this space. However, with the recent proliferation (say, over the last 8 years or so) of good Open Source integration technologies, I think people sometimes get a bit confused as to the best way to go—there are just too many potentially viable options. Another problem I see a lot is customers wanting to integrate cool new stuff with their valued “legacy” applications—or possibly it does not even occur to them that this might be feasible. This is an area I really enjoy, because every situation is different—figuring out how to help customers get the most out of those valued legacy applications through integration with new technologies. This quite often involves getting a bit creative, writing tools, and things like that. And of course it’s fun to spend time with customers in different places around the world and make new friends. Quite often these days, it’s a matter of pulling together solutions using multiple bits of Open Source software, whereas the more proprietary stuff would often be a more complete package. This sort of flexibility can also generate confusion and interesting decision making, but from my perspective it’s a good thing (so long as I make the right decisions).
Q2. In your career, what is one of the coolest integration or distributed systems projects you’ve worked on and why?
A2. That’s a tough one. From a pure enjoyment perspective, it would have to be working with a customer in Palma, Mallorca, Spain. Aside from the fact that the project was very successful, the people were great to work with, and it was just such a nice place to visit—so nice, in fact, that the family had a holiday there a couple of years later. But from a satisfaction perspective it would probably be a project that involved integrating several government agencies. I can’t say too much about the specific work, but it was one of those projects that had all of the elements I like: working to a tight (and immoveable) timeline just to provide a bit of stress, but having a great young team and a technical solution in which we had total confidence. While the timeline was a concern, I was able to devise some bits and pieces to streamline the development work, which the team picked up really well and took much further than I had considered! This work was done around 2002, and again the solution is still in place and rock solid. Several of the team have now moved on (to London and elsewhere), and I try to catch up with them on my travels. They too have fond recollections of this project. It is pleasing to think that their first work experience just out of university was a successful and enjoyable one. As it was, we implemented this project using BEA TUXEDO (there is a long story behind the technology choice, and it was not a choice of my making, although I would say that TUXEDO is a good piece of software, as is witnessed by its longevity and continued support and evolution by Oracle). It is mostly point-to-point integration (a slight over-simplification), which is a relatively trivial use case for RabbitMQ; however, following some recent updates, the solution must now also exchange data via other mechanisms, and the multi-protocol capabilities of RabbitMQ would have been ideal for this.
Q3. What have you seen as one of the biggest challenges in the projects you’ve been on throughout your career?
A3. Generally speaking, the biggest challenges on any project are not of a technical nature. That being said, one of the things that never ceases to amaze me is the distorted view that a good solution has to be a complicated solution. Wrong! The fewer moving parts the better—fewer things to break! However, there generally needs to be a compromise, as good solutions also need to be flexible. This is where something like RabbitMQ is able to provide an excellent basis around which to build solutions—like the marketing says, “Messaging that just works”—you can literally download, install, and start doing useful things with it in just a few hours. It also affords a lot of flexibility and extensibility (support for multiple protocols, for example).
Looking specifically at Open Source software, another challenge one often faces in this space is average documentation and support, and average (or possibly non-existent) administration tools. Obviously requirements in this regard vary, depending on what the software does, but in general I’ll be reluctant to use or recommend anything that doesn’t have these things adequately covered. There are certainly no complaints about RabbitMQ in this regard.
Q4. As a programming polyglot yourself, what languages have you worked with?
A4. I work with a whole pile of languages, old and new. Ultimately, I suppose I’m most happy looking at a pile of C code, but I can hack my way around in a number of other languages. And I have a bad habit of getting distracted by new languages that I uncover. A list of the ones that I would most commonly use (in no particular order) would be C/C++, FORTRAN, Pascal, COBOL, Java, Ruby, Python, Tcl, and Erlang. I’ve recently been having a bit of a look at Clojure, and as a consequence of my work with RabbitMQ and Erlang, I’m starting to develop quite a fondness for these sorts of functional languages.
Q5. When we look at cloud computing platforms, what are a few ways we have to look at programming differently and how does middleware like RabbitMQ fundamentally change programming approaches?
A5. The whole cloud space is still in the relatively early stages of evolution. However, in terms of programming approaches, I think that what VMware is doing with Cloud Foundry is pretty neat, and I suspect that this sort of approach will develop quite a bit further. Probably one of the big considerations for cloud-scale work is concurrency, which of course RabbitMQ handles very well courtesy of Erlang. It’s interesting (and indeed rather heartening) to see how the use of Erlang seems to be steadily increasing in this space, and hopefully this trend will continue. It has been a language ahead of its time for some while now, and perhaps finally the rest of the world is catching up to it and realizing its potential. Another language that’s good at concurrency is Scala, and it is also seeing increased adoption. A phrase that is appearing more and more in the context of cloud computing is Big Data, which generally involves efficiently processing and analyzing vast quantities (petabytes) of potentially unstructured data. This sort of thing is clearly more about algorithms than programming languages, and again concurrency comes into play, as does distributed processing, which makes me think about things like ZeroMQ or maybe even MPI.
We could have a pretty lengthy discussion on this topic!
Q6. What middleware technologies have you worked with and what drew you to RabbitMQ?
A6. I’ve mentioned a few of the older ones above—assorted DCE and CORBA implementations, DEC MessageQ (now owned by Oracle), MQSeries, TUXEDO (arguably more of a TP system, but very flexible and really quite good for integration work). You’d then start to get into newer bits and pieces like ZeroMQ, OpenAMQ, and of course RabbitMQ. There would also be a few others that I’ve tinkered with.
What drew me to RabbitMQ? Long story. I had been following developments around AMQP since around 2006, and following discussions with a few colleagues, it was decided that we needed an AMQP implementation that would run on HP’s OpenVMS platform, so I set about porting OpenAMQ, which was the first AMQP implementation, developed by iMatix. As a consequence of this work, I became at least aware of RabbitMQ, but did not really pay it much attention. For various reasons, iMatix pulled out of the AMQP Working Group and ceased development/support of OpenAMQ, so I needed another solution for AMQP on OpenVMS. At the time, Qpid was really only just getting started, and as I recall there were issues at the time with building it (the C/C++ implementation) on OpenVMS. Actually, I think it depended on a whole pile of Open Source libraries that I really couldn’t be bothered porting, because it would have been a significant effort. So I looked at RabbitMQ. The catch here was that it was implemented in Erlang; however, I was impressed by how RabbitMQ looked and it was clear that it had a great team behind it. Additionally, I was becoming more and more intrigued by Erlang. So following an airport lounge discussion over a few beers with a friend (who was actually head of OpenVMS strategy at the time), I decided to try porting Erlang to OpenVMS. Once I’d done this, RabbitMQ worked pretty much out of the box! Literally, all I had to do was change a few shell scripts to OpenVMS-speak and away we went! I have joked that it takes longer for me to download new versions than it does to get them going. This exercise convinced me to look more into both Erlang and RabbitMQ.
Q7. In your article, The Polyglot Rabbit: Examples of Multi-Protocol Queues in RabbitMQ, what originally prompted you to look at multi-protocol queues this way?
A7. Like I discuss in the article, the notion of a one-size-fits-all messaging protocol is I think somewhat flawed. Even though most good messaging protocols have many features in common, different protocols are better suited to certain use cases. It is also just going to be human nature that people will come up with new protocols and that some of these will receive reasonable levels of adoption. Developing gateways or bridges to map between different protocols is a costly pain (both for initial development and ongoing maintenance and support). Having plugins and adapters that sit on top of RabbitMQ and that can leverage all the capabilities that RabbitMQ provides is a far superior way to go. And the fact that you can publish messages into this environment via one protocol and consume via one or more others (simultaneously if you want) is really quite brilliant, and powerful. It seemed to me that this capability was possibly not being promoted enough, and hopefully the article will go some way towards fixing this.
Q8. Could you give us a brief synopsis of the article?
A8. RabbitMQ is a popular Open Source message queuing system that implements the Advanced Message Queuing Protocol (AMQP). It has been estimated that there are presently some 30,000 production deployments of RabbitMQ across the globe, and this number is growing rapidly. Most of these deployments are business-critical, underpinning everything from internet-based pizza ordering systems through to providing the central nervous system for OpenStack-based cloud deployments. RabbitMQ currently supports versions 0-8 and 0-9-1 of AMQP and will soon also provide support for 1.0. However, a somewhat overlooked capability of RabbitMQ is its ability, via its flexible plugin architecture, to readily provide support for a variety of other popular Open Source messaging protocols, including STOMP, MQTT, ZeroMQ, and RESTful messaging via the RabbitHub plugin. Most good message queuing protocols share many features in common; however, some are better suited to a particular set of use cases than others. This ability of RabbitMQ to seamlessly receive and propagate messages simultaneously via multiple protocols is an extremely powerful facility, and one that affords great flexibility. For example, it means that it is possible to use the most appropriate protocol for a particular function, or to simultaneously disseminate the same data to different types of users via the most appropriate protocol for each, without having to develop and maintain any separate gateway components. The post discusses this ability of RabbitMQ to support multiple message queuing protocols and presents a number of simple examples to illustrate how this facility may be used.
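One concrete piece of this interoperability is how RabbitMQ's MQTT adapter maps MQTT topic names onto AMQP routing keys, so that a message published by an MQTT client can match the topic-exchange bindings of AMQP consumers: the level separator / becomes ., the single-level wildcard + becomes *, and the multi-level wildcard # is kept as-is. The helper functions below are a simplified illustration of that documented mapping; the function names are mine, not part of any RabbitMQ API.

```python
# Simplified sketch of the topic-name translation performed by
# RabbitMQ's MQTT plugin, which lets MQTT publishers and AMQP
# consumers share the same topic exchange.

def mqtt_topic_to_routing_key(topic: str) -> str:
    """MQTT topic -> AMQP routing key: '/' -> '.', '+' -> '*'."""
    return topic.replace("/", ".").replace("+", "*")

def routing_key_to_mqtt_topic(key: str) -> str:
    """Inverse mapping: AMQP routing key/binding -> MQTT topic filter."""
    return key.replace(".", "/").replace("*", "+")

print(mqtt_topic_to_routing_key("sensors/+/temperature"))  # sensors.*.temperature
print(routing_key_to_mqtt_topic("sensors.kitchen.temperature"))
```

An AMQP consumer bound with the routing key `sensors.*.temperature` would therefore receive messages that an MQTT client published to `sensors/kitchen/temperature`, with no gateway code in between.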
Q9. Why did you choose these examples?
A9. The examples are aimed at illustrating the key point, namely that it is possible to publish via one protocol and (simultaneously) consume messages via one or more other protocols. The protocols looked at (other than RabbitMQ’s native protocol) are MQTT, STOMP, and HTTP (via the RabbitHub plugin). The article is certainly not meant to be definitive in any way, but rather to illustrate some of the typical sorts of things that can be done, and this set of protocols provides a good cross-section in that regard. These protocols also cover a wide range of use cases: MQTT is good for lightweight messaging over constrained networks; HTTP has applicability to web-based/internet-based solutions; and STOMP is a nice, simple little protocol that can be great in situations where other solutions might be just a bit over the top. A number of links for client libraries are included in the article. Another good thing about these protocols is that they are “open”, and there are client APIs in a wide variety of languages.
Q10. We hear you are a New Zealander who loves music and plays guitar. Tell us more.
A10. I love guitar-based blues rock and jazz rock. I’ve got 8 guitars—7 electric and 1 acoustic. At this time of year (getting into summer down here), I tend to use the acoustic a lot more—sitting on the deck with a beer and playing a few tunes, jam sessions with friends at BBQs, etc.
About Brett: Brett Cameron currently works as a senior architect with HP’s corporate Cloud Services group, focusing on the design and implementation of message queuing and related integration services for customers and internal use. Brett lives in Christchurch, New Zealand and has worked in the software industry for some 19 years. In that time he has gained experience in a wide range of technologies, many of which have long since been retired to the software scrapheap of dubious ideas. Brett is involved in the research and development of low-latency and highly scalable messaging solutions for the Financial Services sector running on HP platforms, and as a consequence of this work, Brett has been involved in several interesting Open Source projects. He is responsible (or should that be irresponsible?) for porting various Open Source solutions (including Erlang and RabbitMQ) to HP’s “legacy” OpenVMS operating system platform. Brett holds a doctorate in chemical physics from the University of Canterbury, and still maintains close links with the University, working as a part-time lecturer in the Computer Science and Electronic and Computer Engineering departments. In his spare time, Brett enjoys listening to music, playing the guitar, and drinking beer (preferably cheap Australian lager).