
7 Best Practices for Writing Edge Microservice-based Applications


The Internet of Things (IoT) is sparking the Fourth Industrial Revolution, one that promises to blend our physical and digital worlds in powerful new ways, enable a wave of novel applications, and unlock significant value.

Traditional cloud computing networks are highly centralized, with data produced at the edge traveling a long way to be processed. In edge situations, it’s preferable to use a hierarchical architecture where devices report into an edge node that is close to the data source, which in turn transmits a much smaller subset of data to a cloud node. This:

  • Greatly reduces response latency
  • Lowers network bandwidth needs
  • Helps preserve data privacy

But we’re dealing with more than an architectural change here. Our industry is moving away from monolithic applications towards microservices. For the most part, this trend suits edge computing very well. Monolithic applications are relatively easy to deploy, but they are typically large, difficult to understand, hard to scale, and inflexible. In contrast, when delivered as microservices, applications are more flexible and scalable, but they do require more care because they create a much more complicated service environment.

Here, we share seven microservices best practices. Some are common to both cloud and edge environments, while others are more specific to edge environments, which are typically:

  • Resource constrained
  • Remote
  • More vulnerable to hostile attacks
  • Susceptible to intermittent network connectivity

1. Leverage a reverse proxy/API gateway

It’s very important that you control access to your edge microservices. For most real‑world applications, it makes sense to do this through a reverse proxy/API gateway. This is set up in front of any edge microservice and becomes the entry point for your application.

A reverse proxy/API gateway:

  • Encapsulates the application’s internal structure and simplifies client implementations
  • Is responsible for request routing, composition, and protocol translation
  • Handles some requests by simply routing them to the appropriate backend service and handles others by invoking multiple backend services and aggregating the results. If there are failures in the backend services, the API gateway can mask them by returning cached or default data as appropriate
  • Handles responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling
  • Helps secure and protect application traffic

Of course, this does mean you now have another highly available component that must be developed, deployed, and managed. But it’s very much worth the effort.

2. Use a registry service

A microservice-based application is, by definition, made up of many microservices, so it can be hard to track all their network addresses. In addition, microservice applications typically run in containerized environments where network addresses and ports change dynamically. The client or API gateway, meanwhile, needs to know the correct network address for each service so that every request can be directed to the right place.

All of these challenges can be addressed by using a service registry. Simply put, this is a database of available service instances that maintains a mapping of each service to its network address. When a service terminates, its registry entry is cleared, and during the lifetime of a service, its status and availability are updated in the registry via periodic heartbeats.

3. Design good service APIs (Aim for pithy messages!)

This is a general best practice but even more so for the typically resource-constrained edge nodes. Keeping interservice messages short and sweet saves time in constructing, transmitting, and consuming (parsing) them.

Leveraging IDs/names in interservice messages rather than embedding whole objects reduces message sizes, which makes your solution more scalable and makes it easier to keep object state synchronized across multiple threads of execution. To maximize the benefit, cache relatively static objects to avoid repeated trips to the database.

4. Introduce unique request IDs

This is a common best practice for analyzing and debugging distributed systems. Assign a unique request ID to each top-level user request, pass it along in all subsequent calls to other microservices, and ensure it is captured in all log messages. This also helps with forensics and with computing the latency and throughput of various actions.

5. Plan for intermittent connectivity

IoT solutions often transmit telemetry, and you cannot assume constant connectivity to a cloud endpoint. Build in support to store data locally until it can be transmitted.

6. Limit vulnerability

While security is a concern common to all microservice-based applications, edge networks are particularly vulnerable to attack. They are physically a lot more accessible than servers located in a heavily secured data center, for example.

To limit the damage a compromised microservice can wreak:

  • Run your containerized microservice as a non-root user
  • Limit temporary directories to /var, /run, /tmp
  • Allow files in temporary directories to be only read/write, no execution
  • Disallow user privilege escalation
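With Docker, these restrictions map directly onto launch-time flags. The command below is a sketch — the image name, user, and group IDs are illustrative — showing a read-only root filesystem, writable-but-non-executable temporary directories, and no privilege escalation:

```
# Run the edge service container with a reduced attack surface.
docker run \
  --user 2002:2001 \
  --read-only \
  --tmpfs /run:rw,noexec,nosuid \
  --tmpfs /tmp:rw,noexec,nosuid \
  --tmpfs /var:rw,noexec,nosuid \
  --security-opt no-new-privileges \
  my-edge-service:latest
```

The `noexec` mount option is what turns the temporary directories into read/write-only space, so a dropped payload cannot be executed from them.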

7. Encrypt data at rest and in transit

This last best practice also relates to security. Edge nodes may be more vulnerable to having storage disks stolen, compromising the data stored on them. Data transmission may also traverse the public internet, making it more vulnerable to snooping.

To mitigate these:

  • Use TLS (Transport Layer Security) with X.509 certificates to encrypt all of your communication
  • Encrypt data before saving it to disk
  • Deploy hardware security modules such as Trusted Platform Modules or Trusted Execution Environments where needed to strongly protect your encryption keys

Of course, don’t forget testing and documentation, which are best practices common to any endeavor.

These are all best practices we are adhering to as we contribute to EdgeX Foundry, an open source project that seeks to provide a common open framework for IoT edge computing.

You can find out more about EdgeX in a number of other VMware Open Source Blog posts.

And make sure to check out the project home page at github.com/edgexfoundry.

Stay tuned to the Open Source Blog and follow us on Twitter (@vmwopensource) for information on all the latest open source projects coming out of VMware.