As digital devices proliferate, we increasingly interact with multiple screens or devices at once. The most well-known manifestation of this is the “second screen” experience, which will be familiar to anyone who has absent-mindedly watched a TV show while reading or tweeting on their smartphone or tablet. There’s a lot of thought and energy going into creating convergence across these screens, but the efforts are often limited to TV networks trying to encourage viewers to tweet hashtags while watching shows. A recent experimental project at Pivotal Labs is a far more ambitious attempt to push the second screen to its limits, connecting multiple devices in innovative and imaginative ways.
Great opportunity lies in connecting the various devices in our homes and offices to one another, yet current functionality is limited. An iPhone and an iPad can communicate over Bluetooth, but this is largely used for local multiplayer games or for connecting the devices to physical keyboards. Smartphones with NFC can share contact info or pay for a transaction, but those are only the most obvious applications of the technology. Push technology is prevalent in many contemporary apps, primarily for messaging and background app updates, delivered via Apple’s push notification service on iOS or Google Cloud Messaging on Android.
These and other related protocols and technologies are critical to our experiences on mobile devices, but their implementations remain limited, often amounting to simple one-to-one transfers of small amounts of data between two peer devices. In the Device Wall experiment, a team of Pivotal Labs engineers attempted to use these protocols to connect multiple devices to one another and engage in far more ambitious exchanges of data and processing power. They aimed for the Holy Grail of cross-device interoperability: getting a wide range of connected devices to operate as one.
In the words of Pivotal Labs engineer Emir Hasanbegovic, the team aimed to “use multiple devices to orchestrate a unique unified experience.” What this required was a way to connect devices and enable them to send and receive messages among themselves in real time. This is a far more sophisticated exchange than the one-to-one data transfers we see on mobile devices today. During a hackathon held at Pivotal’s Toronto office, engineers created a multi-screen image and optical character recognition experience in which a variety of connected devices operated as one through real-time communication using persistent connections.
To build the real-time messaging component, the team used the Advanced Message Queuing Protocol (AMQP) via RabbitMQ, a message broker with client libraries for a wide range of client and server platforms. The team built a custom cloud-based application that served as the Controller, managing the states of the various devices and sending and receiving messages to and from the client devices. Hasanbegovic explains the architecture of their solution in a recent technical post on the Pivotal Labs blog:
We split the main communication layer into three parts: the server, the client and the server client…. The server would be our communication protocol layer, RabbitMQ and Apache in this case; the client would be the phone applications; and the server client would be Java applications running server side. In short, the server clients and the clients would communicate with each other using the server. The server client would keep track of the entire state of the unified experience and report to each client its particular state with respect to the unified experience. Each client would then render its respective state.
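To make this pattern concrete, here is a minimal sketch of such a server client built on the RabbitMQ Java client: it consumes events from the devices, keeps the unified state, and reports back to each client its own slice of that state. The queue names, message format, and state model are illustrative assumptions, not taken from the Device Wall code.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a "server client": it holds the unified state and reports
// to each client only the slice that client should render. Queue names
// and the message format are assumptions for illustration.
public class ServerClientSketch {

    // clientId -> that client's current state in the unified experience
    private static final Map<String, String> states = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker host is an assumption

        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Events flow from the clients to the server client on one queue...
        channel.queueDeclare("devicewall.events", false, false, false, null);

        // ...and each client consumes its state from its own queue.
        DeliverCallback onEvent = (tag, delivery) -> {
            String msg = new String(delivery.getBody(), StandardCharsets.UTF_8);
            String[] parts = msg.split(" ", 2); // assumed format: "clientId event"
            states.put(parts[0], parts[1]);     // update the unified state

            // Report to every client its particular state.
            for (Map.Entry<String, String> e : states.entrySet()) {
                String queue = "devicewall.state." + e.getKey();
                channel.queueDeclare(queue, false, false, false, null);
                channel.basicPublish("", queue, null,
                        e.getValue().getBytes(StandardCharsets.UTF_8));
            }
        };
        channel.basicConsume("devicewall.events", true, onEvent, tag -> { });
    }
}
```

Each phone application would hold a persistent connection to the broker, publish its events to `devicewall.events`, and render whatever state arrives on its own queue.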
Once the architecture for getting multiple devices to operate as one was built, the team developed applications that implemented the Device Wall concept and capitalized on the distributed processing power of the various devices. The first application, Screen Detection, made use of OpenCV, an open source computer vision and machine learning software library. Boasting over 2500 algorithms, the library can be used to “detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects,” and much more. As a test case, the Pivotal Labs engineers ran an app on each device that received a unique ID from the server and displayed it on the device’s screen. The team then took a photo of the screens displaying the IDs and used the OpenCV library to recognize each device in the photo through analysis of the screen shapes and OCR of the displayed device IDs.
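As a rough illustration of the detection step, the sketch below uses OpenCV’s Java bindings to find four-cornered, screen-shaped regions in a photo of the wall. The thresholds and filename are assumptions, and the OCR of each cropped region’s ID would be a separate step (e.g., with an external OCR library), indicated here only in a comment.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

// Sketch: find screen-shaped (four-cornered) regions in a photo of the
// device wall. Thresholds and the input filename are illustrative.
public class ScreenDetectionSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat photo = Imgcodecs.imread("device-wall.jpg");
        Mat gray = new Mat();
        Mat edges = new Mat();
        Imgproc.cvtColor(photo, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.Canny(gray, edges, 50.0, 150.0);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        for (MatOfPoint contour : contours) {
            // Approximate each contour with a simpler polygon.
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            MatOfPoint2f approx = new MatOfPoint2f();
            double eps = 0.02 * Imgproc.arcLength(curve, true);
            Imgproc.approxPolyDP(curve, approx, eps, true);

            // Four corners and a reasonable area: likely a screen.
            if (approx.total() == 4 && Imgproc.contourArea(contour) > 1000) {
                Rect box = Imgproc.boundingRect(new MatOfPoint(approx.toArray()));
                Mat screen = photo.submat(box);
                // `screen` would then be handed to an OCR step to read
                // the device ID displayed on that screen.
                System.out.println("candidate screen at " + box);
            }
        }
    }
}
```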
The second application was a take on the venerable Memory Game, in which players briefly view various shapes on cards and then have to identify the locations of those shapes after the cards are flipped face down. The game was simulated on the Device Wall, with each phone representing a single card and each tablet representing two cards. As engineer Devin Fallak explains, “The game state resides on the Memory Game server client. When the game is initialized, all cards are shuffled and flipped down. The assignments are sent to the clients. When the user touches a card, events are sent to the server client. The server client responds by sending a message back to the client telling it to flip up the card if appropriate.”
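A minimal sketch of that server-client game logic might look like the following; the class name and message strings are illustrative assumptions rather than the project’s actual code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the Memory Game flow described above: the server client owns
// the shuffled deck, and a client only learns a card's face when a flip
// is allowed. Names and message strings are illustrative assumptions.
public class MemoryGameSketch {
    private final List<String> deck = new ArrayList<>(); // card faces by position
    private Integer firstFlipped = null;                 // position of the first face-up card

    public MemoryGameSketch(List<String> faces) {
        deck.addAll(faces);
        deck.addAll(faces);          // every face appears exactly twice
        Collections.shuffle(deck);   // shuffled and dealt face down
    }

    // Called when a client reports a touch on card position `pos`.
    // Returns the message the server client would send back.
    public String onTouch(int pos) {
        if (firstFlipped == null) {
            firstFlipped = pos;
            return "FLIP_UP " + pos + " " + deck.get(pos);
        }
        if (pos == firstFlipped) {
            return "IGNORE";         // same card touched twice
        }
        int first = firstFlipped;
        firstFlipped = null;
        return deck.get(first).equals(deck.get(pos))
                ? "MATCH " + first + " " + pos
                : "FLIP_DOWN " + first + " " + pos;
    }
}
```

Because each phone represents one card and each tablet two, the server client would route each `FLIP_UP` or `FLIP_DOWN` message to whichever device owns that card position.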
The game served as another proof of concept for real-time messaging between connected mobile devices, drawing on their distributed processing power while also leveraging the tactile experience of their touchscreens.
Learn more about the Device Wall project:
- Device Wall: A second screen experiment
- The power and structure of push: Second screen solution
- Device Wall code on GitHub
For more information on RabbitMQ:
- Read more articles about RabbitMQ or the federation plugin
- Check out the full product overview, download, and get documentation
- Learn about how RabbitMQ runs within Pivotal One and on Pivotal CF
- Get more information about our commitments to open source software