
The Future of Service Mesh, Part Two: What’s Next After Istio 1.0

By Stephen McPolin and Venil Noronha

In part one of our service mesh series, we argued that service meshes are both an inevitable and beneficial consequence of the development of microservices architectures. With the release of Istio 1.0, we’ve passed a significant milestone in service mesh provision, making this a reasonable time to ask where we go next.

Here at VMware, we certainly believe that open source service mesh architectures are worth supporting with our time and effort. We've become contributing members of both Istio and Envoy, the open source service proxy that Istio configures dynamically to route and control traffic between microservices. In particular, we've put a lot of our effort into improving networking, but we're making regular contributions across a broad range of other areas.

We’ve also taken on board the fact that pretty much every Istio presentation at the moment is built around the same single demo. A VMware colleague in Bulgaria is currently building a brand-new Istio demo that shows how to manage video quality across services such as closed captioning, demonstrating Istio’s dynamic routing capabilities in a microservices environment.
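For readers unfamiliar with what “dynamic routing” means in Istio terms: Istio lets you declare weighted routes between service versions, and the Envoy sidecars then split traffic accordingly. The snippet below is a minimal, self-contained Go sketch of that weighted selection logic; the service subsets and weights are hypothetical and are not taken from the demo, and in a real deployment you would express the same split declaratively through Istio’s routing configuration rather than in application code.

```go
package main

import (
	"fmt"
	"math/rand"
)

// route mirrors the shape of a weighted route entry that Istio pushes to
// Envoy: a destination subset plus the share of traffic it should receive.
// The names and weights here are hypothetical, for illustration only.
type route struct {
	subset string
	weight int
}

// pick selects a destination in proportion to its weight, which is roughly
// what Envoy does per request once Istio has configured the split.
func pick(routes []route, r *rand.Rand) string {
	total := 0
	for _, rt := range routes {
		total += rt.weight
	}
	n := r.Intn(total)
	for _, rt := range routes {
		if n < rt.weight {
			return rt.subset
		}
		n -= rt.weight
	}
	return routes[len(routes)-1].subset
}

func main() {
	// Shift 10% of caption traffic to a new version of the service.
	routes := []route{
		{subset: "captions-v1", weight: 90},
		{subset: "captions-v2", weight: 10},
	}
	r := rand.New(rand.NewSource(1))
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[pick(routes, r)]++
	}
	fmt.Println(counts) // roughly a 900/100 split
}
```

Changing the weights in Istio's configuration reshapes traffic on the fly, which is what lets a demo like this degrade or restore video quality across services without redeploying anything.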

Because we think service meshes are both valuable and here to stay, we’ve been looking to integrate VMware’s own world-class system management tools into service mesh frameworks. A good example is an adapter we recently created to export Istio metrics into VMware’s Wavefront monitoring and analytics tool. If we can surface more information from these microservices in our system management tools, we believe those tools can do an even better job of managing the overall system.
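At its core, an export path like this boils down to translating each metric Istio reports into Wavefront’s line format (`<metricName> <metricValue> <timestamp> source=<source> [pointTags]`). The sketch below is a simplified, hypothetical Go illustration of that translation step; the type and function names are ours and do not reflect the adapter’s actual API.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

// metric is a simplified stand-in for a metric instance handed to an
// adapter by Istio; a real adapter receives richer, typed instances.
type metric struct {
	name  string
	value float64
	time  time.Time
	tags  map[string]string
}

// toWavefrontLine renders a metric in the Wavefront data format:
//
//	<metricName> <metricValue> <timestamp> source=<source> [pointTags]
func toWavefrontLine(m metric, source string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "%s %g %d source=%s", m.name, m.value, m.time.Unix(), source)

	// Sort tag keys so the output is deterministic.
	keys := make([]string, 0, len(m.tags))
	for k := range m.tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(&b, " %s=%q", k, m.tags[k])
	}
	return b.String()
}

func main() {
	m := metric{
		name:  "istio.requestcount",
		value: 42,
		time:  time.Now(),
		tags:  map[string]string{"destination_service": "captions", "response_code": "200"},
	}
	fmt.Println(toWavefrontLine(m, "istio-mixer"))
	// e.g. istio.requestcount 42 1535000000 source=istio-mixer destination_service="captions" response_code="200"
}
```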


From our perspective, work like this is all about broadening the microservices ecosystem. Service mesh platforms themselves, however, are still far from perfect. Istio, for example, is a complicated piece of software that’s very difficult to debug. Once it’s running properly, it’s great at helping you discover what your microservice architecture is doing; but when it isn’t, it’s currently too difficult to figure out why. This is widely understood in the community, and we’ve collectively been spending time and effort on ways to combat this complexity, but we haven’t solved it yet.

Service mesh platforms are also only just starting to deal with multi-cluster deployments. If you deploy all your software on a single cluster, tools like Istio and Envoy can generally manage it well. But if you want to scale out to multiple clusters and have your services communicate across cluster boundaries (a good idea from a security perspective alone), it can be a challenge. Again, the Istio community understands this, and we are moving toward an improved, multi-cluster-friendly design.

Lastly, we’re keeping an eye on Knative, a new initiative coming out of Google. Fundamentally, it builds on Kubernetes and Istio to deliver Google’s notion of functions-as-a-service. It looks like it will push new demands onto Istio in the near future, though it’s not yet clear exactly what those will be. The notion of an “event,” for example, is entirely foreign to Istio but is necessary for dealing with temporal data; Knative is adding facilities for eventing that will inevitably push new requirements down onto Istio.

For now, we’re just watching the space. Knative came out only about a month and a half ago with a fair number of issues left unresolved, so we’re waiting for an update before deciding how we should respond. There’s clearly plenty to work on, and plenty to keep an eye on. What’s certain, though, is that service meshes are here to stay.

Stay tuned to the Open Source Blog for the latest updates around service mesh and follow us on Twitter (@vmwopensource).