By Stephen McPolin and Venil Noronha
When Istio 1.0 was released a couple of months ago, TechCrunch called it “probably one of the most important new open source projects out there right now.” It’s not perfect (more on that in part two of this series), but the release does mark a significant stage in the development of service mesh architectures.
Despite the attention devoted to Istio’s release, though, service mesh still flies somewhat under the radar in the open source world. So, in a pair of posts here, we’re going to first offer a window into what service meshes do and then, in part two, ask what we can expect from them in the near future.
One important thing to know about service meshes is that they became all but inevitable as soon as microservices started to become popular. That’s because they operate as platforms for solving the increasingly complex challenge of communication between those services.
Here’s how they work: say you have one microservice that looks up payment methods in a customer database and another that processes payments. If you want to make sure information doesn’t leak from either of them, and that you always connect your customer’s information to the right payment processor, you’ll want to encrypt the traffic between them. A service mesh can take care of that encryption for you without requiring either service to implement it itself.
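To make that concrete, here is a minimal sketch in Go of what the calling service might look like (the `payment-processor` service name is hypothetical). The application speaks plain, unencrypted HTTP; with a mesh like Istio, the sidecar proxies deployed alongside each service can transparently upgrade that connection to mutual TLS, so the application code never touches certificates or crypto.

```go
// Hypothetical payment-lookup service calling the payment processor.
// The code uses plain HTTP; the mesh's sidecar proxies intercept the
// call and encrypt the traffic between the two services.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Call the processor by its service name; service discovery and
	// mutual TLS are handled outside the application.
	resp, err := http.Get("http://payment-processor/charges")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("reading response failed:", err)
		return
	}
	fmt.Println("response from payment processor:", string(body))
}
```

The point of the sketch is what’s missing: no TLS configuration, no certificate management, no retry or routing logic. All of that lives in the mesh.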
But service meshes do a lot more than just that. Overall, they take care of a wide swathe of core communications features, including:
- Observability – collecting logs and metrics for traffic between services
- Discovery – enabling services to find and connect to other services
- Communication – establishing the policies, mechanisms, and security for service-to-service traffic
- Authentication – verifying the identity of services and establishing access rights to services and communications
- Platform provision – providing control across multiple backends (Azure, AWS, etc.) and orchestrators (Kubernetes, etc.)
You can see the appeal for developers—a service mesh takes care of a whole tranche of things they’d rather not have to deal with each time they build a microservice. It’s a boon for sysadmins and deployment teams, too; they don’t have to negotiate with developers to build the features they need into any specific microservice. And customers benefit, in theory at least, because they can deploy their market-specific services much faster.
Given these advantages, it was basically inevitable that we would get to this point. At first, teams built their own communication meshes. Before long, common patterns emerged, those patterns were aggregated into shared approaches, and they finally took the form of platform solutions.
Istio itself was open sourced by Google, IBM, and Lyft in 2017. It wasn’t the first service mesh and isn’t the most mature, but it’s the fastest growing, and the debut of 1.0 marks a new stage in the service mesh story.
To quote that TechCrunch article again: “If you’re not into service meshes, that’s understandable. Few people are.” But while that may be the case at present, for all the reasons outlined above, we think that’s also very likely to change. It’s why we’re devoting a fair amount of time and energy to contributing to service mesh development here at VMware.
In part two of this pair of posts, we’ll outline how we are contributing to open source service mesh development at VMware and describe what we see as the major issues these architectures are facing now that they have begun to mature.
Stay tuned to the Open Source Blog for part two of our service mesh blog series and follow us on Twitter (@vmwopensource).