By Luis M. Valerio Castillo, Consultant, PS Research Labs; Neeraj Arora, Staff Architect, PS Research Labs

In the article Infrastructure Sub-Systems of an IoT Solution, we examined the high-level architecture of the infrastructure plane applicable to IoT deployments. Next, in IoT Solution Components, we took a layered design approach, using infrastructure sub-systems as a base to deploy the solution components that realize the Ingest, Analyze and Engage processes of an IoT solution.

In this article, we will talk about engagement, the last of the three processes (Ingest, Analyze and Engage) patterned in an IoT solution. To do this, we will once again leverage the People Counter Modern IoT Application as an example. Before jumping in, let’s quickly explain what we mean by engagement. Engagement is where we provide and/or act on information inferred by the analyze process. The engage process may interact directly with users, or it may interact with systems transparently to end users. Note how we used information rather than data to describe what is consumed by the engagement microservice. This is because data is a raw, unorganized fact or detail, such as a picture in our case, while information is processed, organized, structured data, such as the count of people in the picture. The engagement microservice handles information produced by the inference microservice, which in turn consumes data provided by the ingestion microservice.

Now, let’s dive into what makes up the engagement microservice and how it handles information.

Infrastructure Sub-Systems of Engagement

Engagement in the example People Counter IoT Application has the following infrastructure sub-systems:

  • Core Datacenter
  • Cloud Services

The Core Datacenter is where we deposit the information. It needs enough computational power to handle the number of end-users, and network connectivity to connect to Cloud Services. We use a VM as the Core Datacenter in our example application to run the People Counter Engagement microservice.

Solution Components

Solution components related to engagement found in the example People Counter Application are:

  • Deployment Platform
  • Storage Platform
  • Connectivity Platform

Much like with the other microservices, Ubuntu is our Deployment Platform, MinIO is our Storage Platform, and the Connectivity Platform is a combination of REST and MQTT. The analogy we used in a previous post is that solution components are gears in a clock. The gears can be made of any material as long as they meet certain standards. In the same way, each solution component can be any technology, for example Amazon S3 instead of MinIO, as long as it meets certain requirements, such as having an API.

Development Considerations

Our development considerations were the choice of programming language, the framework to stream data to clients, and application packaging. We believe we must follow DevOps best practices, keeping in line with Modern Application development principles. Let’s discuss each of these decisions.

We used Python and JavaScript to create the engagement microservice. The core engagement engine is written in Python. We could have chosen a different programming language, but we chose Python as a matter of development efficiency. Modern Applications are often split into multiple microservices, and it is possible to develop these co-operative but distinct microservices in different programming languages. For reasons of simplicity, reusability and manageability of code, however, co-operative, co-dependent microservices will more often than not use the same programming language. For us, rather than having to recreate core code in a different programming language, we were able to reuse code – via packages – from the other two microservices created for this project. Thus, we reused the library code for communicating with the Cloud Services (MQTT and MinIO) that was written and tested during development of the Ingest and Analyze microservices.

Whereas the MQTT communication channel is common across the three microservices, the engagement microservice presents this information to end users. We wanted more than one user to have the ability to connect via their web browsers. To be successful and timely, we needed a framework to provide real-time updates of information coming from the other microservices. Flask with SocketIO fulfills our requirements and is easy for newcomers to adopt due to its extensive documentation and ample community assistance. The combination streams updates to clients and provides a REST endpoint to get batch updates. Having the ability to stream updates is important considering we can have hundreds or thousands of devices producing information. We didn’t want to slow the system down by fetching updates for all devices all the time when, most of the time, only some have any new information.
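To illustrate the streaming model described above, here is a minimal sketch of a per-device update engine that only notifies clients when a device’s value actually changes. All names here (latest_counts, count_update, the /api/counts route) are illustrative assumptions, not the actual service code; the Flask-SocketIO wiring is shown in comments.

```python
# Keep the latest count per device; notify an emitter only on change,
# so connected clients receive deltas rather than full refreshes.
latest_counts = {}

def on_new_result(device_id, count, emit):
    # Only push to clients when this device's information actually changed.
    if latest_counts.get(device_id) != count:
        latest_counts[device_id] = count
        emit("count_update", {"deviceID": device_id, "count": count})

def batch_snapshot():
    # Backs the REST endpoint: all known devices in one response.
    return dict(latest_counts)

# Hypothetical Flask-SocketIO wiring (not the actual service code):
#   from flask import Flask, jsonify
#   from flask_socketio import SocketIO
#   app = Flask(__name__); socketio = SocketIO(app)
#   @app.route("/api/counts")
#   def counts(): return jsonify(batch_snapshot())
#   ...then, on each MQTT result: on_new_result(dev, n, socketio.emit)
```

Pushing only changed values is what keeps the service responsive as the device count grows, since unchanged devices generate no client traffic.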

The last development decision was code packaging, for which we chose Docker. The Docker container format allows us to package code and dependencies in an easy-to-deploy format. Containerizing was simplified since the engage microservice is designed to be stateless, i.e., it builds its state on every restart using information available on the communication channel and in shared storage, and does not require persistent storage.

Deployment Considerations

As discussed in the previous articles, our application consumes MQTT and MinIO as-a-service. Whether the communication channel or storage is consumed as-a-service or self-managed does not impact the design of the ingest, analyze or engage microservices. We believe the main considerations that would drive a change from as-a-service to self-managed are performance, data sovereignty, security, connectivity and the like, whereas simplicity and rapid development (as might be required for PoCs) favor as-a-service usage.

A low-latency, reliable connection is essential to consuming MQTT and MinIO as-a-service so that images and messages are accessible to the engagement microservice in near real-time. An organization deploying the solution may have more control over its network connectivity when deploying and managing MQTT and MinIO in-house, demonstrating a tradeoff when compared to consuming them as services.

Even with the locality of these services changing, our code requires no changes since they are accessed using API calls to endpoints.
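One common way to achieve this portability, sketched below, is to read the service endpoints from the environment so that moving from as-a-service to self-managed is purely a configuration change. The variable names and default values are assumptions for illustration, not taken from the actual service.

```python
import os

# Endpoints come from the environment; switching between as-a-service
# and self-managed deployments changes only these values, not the code.
MQTT_HOST = os.environ.get("MQTT_HOST", "broker.example.com")
MQTT_PORT = int(os.environ.get("MQTT_PORT", "1883"))
MINIO_ENDPOINT = os.environ.get("MINIO_ENDPOINT", "minio.example.com:9000")

def build_config():
    # The same code path is used regardless of where the services run.
    return {
        "mqtt": {"host": MQTT_HOST, "port": MQTT_PORT},
        "minio": {"endpoint": MINIO_ENDPOINT},
    }
```

Because the microservice only ever sees an endpoint, neither relocation nor a swap of the backing technology (e.g. Amazon S3 for MinIO behind a compatible API) requires a code change.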

Data Orchestration Pipeline

The engagement microservice is the final piece in our data orchestration pipeline. As you may recall, ingestion collected raw image data from devices and pushed it up to MQTT and MinIO, where inference analyzed the data to produce count information and pushed it up to MQTT. The engagement microservice takes the count information and presents it to the end user or system. Whereas we present the information on a webpage, APIs exist in the engagement microservice to push it to a third-party system, like vRealize Operations Manager, to create alerts and actions or generate reports. Now, let’s discuss how the engagement microservice works.

First, the Paho MQTT Python client library is used to connect to MQTT. From there, the microservice collects analysis information about images. This arrives as a message on MQTT with a format similar to the following listing:

{
     "type": "result",
     "deviceID": "738ff4a4-4c0d-11ea-b19c-000c2920a5be",
     "filePath": "people-counter-images/image-b649e50c-1277-4cf5-a8de-6692cf0e90a2.jpg",
     "creationTimestamp": 1586274622.33818,
     "analysisMetadata": {
          "status": "COMPLETED",
          "message": "The analysis of image '/var/lib/people-counter-inference/image-b649e50c-1277-4cf5-a8de-6692cf0e90a2.jpg' completed",
          "results": {
               "count": 8
          }
     }
}
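A message in this format can be parsed with a few lines of Python. The sketch below extracts the fields the engagement microservice cares about; the topic name and broker address in the commented Paho wiring are assumptions, not the actual service configuration.

```python
import json

def parse_result(payload):
    # Decode one MQTT message in the format shown in the listing above.
    msg = json.loads(payload)
    if msg.get("type") != "result":
        return None  # ignore messages that are not analysis results
    meta = msg["analysisMetadata"]
    return {
        "deviceID": msg["deviceID"],
        "filePath": msg["filePath"],
        "status": meta["status"],
        "count": meta["results"]["count"],
    }

# Hypothetical Paho wiring (endpoint and topic are placeholders):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.on_message = lambda c, userdata, m: handle(parse_result(m.payload))
#   client.connect("broker.example.com", 1883)
#   client.subscribe("people-counter/results")
#   client.loop_forever()
```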

The microservice downloads the latest image analysis information for each device using the MinIO Python client library and builds a cache. It keeps only the latest image and result per device; when new information comes in for a particular device, the previous entry is discarded. Images for all the devices it knows about are exposed on a webpage. The contents of the page look like the image below:
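The latest-entry-per-device cache described above can be sketched as a small class; the structure and names are assumptions based on the message format shown earlier, not the actual implementation.

```python
class DeviceCache:
    """Keeps only the most recent result for each known device."""

    def __init__(self):
        self._latest = {}  # deviceID -> most recent result for that device

    def update(self, device_id, result):
        # New information replaces any previous entry for this device,
        # so memory use stays proportional to the number of devices.
        self._latest[device_id] = result

    def all_devices(self):
        # Everything the webpage needs: one entry per known device.
        return dict(self._latest)
```

Discarding superseded entries is what keeps the cache bounded even as devices publish continuously.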

In one of the next deep-dive articles of the series, we’ll discuss how we’ve designed the service to handle many clients efficiently, which is important considering we could have thousands of inference nodes producing information and just as many users connected to the engagement microservice getting updates.

The following diagram illustrates the data flow and control:

The direction of the arrows shows the interactions between the engagement microservice and the other services. The order of operations is:

  1. Listen to MQTT for new image metadata
  2. Fetch the image from MinIO
  3. Push analytics results to end users
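The three steps above can be sketched as a single message handler. The helper names (`fetch_image`, `push_to_clients`) are illustrative stand-ins for the MinIO download and the SocketIO emit, passed in so the flow can be shown without the real services.

```python
import json

def handle_message(payload, fetch_image, push_to_clients):
    # 1. A new image-metadata message has arrived on MQTT.
    msg = json.loads(payload)
    # 2. Fetch the corresponding image from MinIO, using the object
    #    path carried in the message.
    image = fetch_image(msg["filePath"])
    # 3. Push the analytics result (and image) to connected end users.
    push_to_clients({
        "deviceID": msg["deviceID"],
        "count": msg["analysisMetadata"]["results"]["count"],
        "image": image,
    })
```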

Conclusion

Infrastructure sub-systems and solution components combine to add the engagement portion of the data orchestration pipeline in a modern IoT application. We used the People Counter Engagement microservice as an example and explored the design decisions along with how the components tie together to form our solution. Next in our series, we’ll go through a technical deep-dive of each of our microservices.

 

About the Authors

Luis M. Valerio Castillo is a Solutions Development Consultant with PS Research Labs at VMware, focusing on IoT, and Edge Computing. Prior to this role, Luis worked in the field implementing solutions for customers, which included application deployment automation, third party system integrations, automated testing, and documentation. His six years of experience started at Momentum SI, which was acquired by VMware in 2014. He holds a Bachelor of Science, with a major in Computer Science.

Neeraj Arora is a Staff Architect with PS Research Labs at VMware. He leads the development of service offerings for Machine Learning, IoT, and Edge Computing. Previously, Neeraj was part of the VMware Professional Services field organization delivering integrations to Fortune 500 companies using VMware and non-VMware products. Industry experience includes gaming, utilities, healthcare, communications, finance, manufacturing, education, and government sectors. Neeraj has published research papers in the areas of Search Engines, Standards Compliance, and use of Computer Science in Medicine.