By Mark Schweighardt, Director, NSBU

Today marks a major milestone for the Istio open source project – the release of Istio 1.0. In support of today’s release, I interviewed Shriram Rajagopalan, one of Istio’s founding engineers as well as the technical lead of the networking subsystem within the Istio project.

 

 

Shriram actively contributes to the Istio and Envoy projects, working alongside contributors from Google, Lyft, IBM, and other companies. He was one of the founding members of IBM’s Amalgam8 project, and he is now an engineer at VMware, where he works closely with enterprise customers developing service mesh solutions. Fun fact: Shriram wrote the initial version of the Istio Bookinfo Sample Application. You can follow him on Twitter – @rshriram.

 

In this interview Shriram shares his thoughts and insights on many interesting Istio and service mesh topics, such as the main goals for the Istio 1.0 release and how he recommends enterprises adopt Istio. Now over to our featured interview with Shriram. . .

 

What were the main goals for the new Istio 1.0 release?

 

Shriram: Istio 1.0 was all about polishing existing features and ensuring that Istio can be adopted incrementally in a production environment in a non-disruptive manner.

 

Incremental rollout of Istio presented some interesting issues when we started turning on mutual TLS authentication for services one by one. We needed to make sure that when mutual TLS authentication is enabled for a service, legacy clients can continue talking to that service on the same port over plaintext while newer Istio-enabled clients talk over mTLS.
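
As a rough illustration of how that permissive coexistence is expressed in Istio 1.0 (the reviews service, namespace, and resource names here are hypothetical), an authentication policy in PERMISSIVE mode lets a service accept both plaintext and mTLS on the same port, while a destination rule switches Istio-enabled clients over to mTLS:

```yaml
# Accept both plaintext and mTLS on the service's existing port.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: reviews-permissive      # hypothetical resource name
  namespace: default
spec:
  targets:
  - name: reviews               # hypothetical service
  peers:
  - mtls:
      mode: PERMISSIVE
---
# Have Istio-enabled clients originate mTLS to that service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-mtls-clients
  namespace: default
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```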

 

Another area we focused on heavily was upgradeability, both from the 0.8 release and to future releases. We had to ensure we could smoothly upgrade the mesh control plane without disrupting service-to-service communication, while still supporting older and newer versions of the proxies.

 

What do you recommend as a smart path to incrementally adopt Istio 1.0?

 

Shriram: The best way to incrementally install Istio is to use the Helm charts. You can disable components that you don’t need in the values.yaml file and run a Helm command to generate your customized istio.yaml file.
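
As a sketch of what that looks like (the chart path and component flags below follow the Istio 1.0 Helm chart and may differ in other releases, so check the chart's own values.yaml), a networking-only install can be rendered from a small overrides file:

```yaml
# my-values.yaml -- illustrative overrides for a networking-only install.
# Render and apply with something like:
#   helm template install/kubernetes/helm/istio --name istio \
#     --namespace istio-system -f my-values.yaml > istio.yaml
#   kubectl apply -f istio.yaml
gateways:
  istio-ingressgateway:
    enabled: true        # keep the Istio ingress gateway
mixer:
  enabled: false         # no policy/telemetry yet; add it later
grafana:
  enabled: false
prometheus:
  enabled: false
```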

 

Start by deploying a networking-only install of Istio with the Istio ingress gateway. Migrate all of your traffic from Kubernetes Ingress to Istio gateway and ensure that services exposed by your cluster are still accessible to clients outside. This step requires minimal downtime to applications already running in your cluster. At this stage, you have two options: turn on Istio features such as routing, telemetry, policy enforcement, etc. for traffic coming into the mesh, or continue with the passive mesh install across all services and then experiment with different features.
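
For illustration, moving a host from a Kubernetes Ingress to the Istio ingress gateway typically pairs a Gateway with a VirtualService along these lines (the hostname and backing service are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway        # bind to the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "shop.example.com"         # hypothetical external hostname
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
  - "shop.example.com"
  gateways:
  - web-gateway
  http:
  - route:
    - destination:
        host: web-frontend       # hypothetical in-cluster service
        port:
          number: 8080
```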

 

Let’s say you want to deploy the mesh across all services. Before you add Istio sidecars to your applications, either manually or through automatic injection, take stock of all of your dependencies that live outside the Kubernetes cluster, such as third-party APIs, backend databases, etc. Istio Pilot sets up connectivity between all services in the service registry (i.e. Kubernetes services). Any communication to services outside of the registry (your external dependencies) will be dropped by default. Capture the details of your external dependencies using Istio’s service entry configuration, which describes the hosts outside the mesh and the ports and protocols used. The service entries ensure that the sidecars are programmed to route traffic and API calls as needed to target systems that live outside the mesh.
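
A service entry for a hypothetical external API might look like the following (the host and port are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: payments-api             # hypothetical external dependency
spec:
  hosts:
  - api.payments.example.com     # hypothetical third-party host
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
```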

 

Next, the immediate focus should be on observability, specifically metrics. You don’t have to re-instrument your applications. Install the Istio telemetry collector (via the Helm chart) and turn on telemetry collection globally. Now you can see how traffic is flowing through the system. At this point the entire mesh is still completely passive, but it’s providing you with valuable telemetry data.
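
Following the same illustrative values-file approach as above (flag names again depend on the release), adding telemetry is mostly a matter of re-rendering the chart with the telemetry components enabled and re-applying it:

```yaml
# my-values.yaml -- illustrative; enables the telemetry stack on an
# existing install when the chart is re-rendered and re-applied.
mixer:
  enabled: true          # Mixer collects the metrics reported by the sidecars
prometheus:
  enabled: true          # scrapes and stores the mesh metrics
grafana:
  enabled: true          # optional dashboards on top of Prometheus
```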

 

Once you are comfortable with the setup above, start experimenting with different Istio capabilities. Depending on your environment, your choices may differ. If you are frequently deploying applications, you may want to start with traffic management features like version routing, or resiliency features such as timeouts, retries, connection pooling, etc. If your first goal is to secure all traffic, start by slowly rolling out Istio mTLS across services, and then experiment with various policy enforcement features.
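
As a hedged sketch of what those features look like in configuration (the reviews service, its subsets, and the numbers are all hypothetical), a VirtualService can split traffic between versions and set timeouts and retries, while a DestinationRule defines the subsets and bounds the connection pool:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10              # canary 10% of traffic to v2
    timeout: 5s               # overall per-request timeout
    retries:
      attempts: 3
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 64
```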

 

Whichever path you choose, make sure to automate tasks. Istio, like Kubernetes, requires you to author or stare at a wall of YAML most of the time. Using a versioned configuration store [a.k.a. a Git repository :-)] makes it easy to pinpoint errors and roll back quickly.

 

What are some of the ways Istio 1.0 can be extended?

 

Shriram: Istio can be extended in several ways, both for in-house customization and for vendor value differentiation.

 

For example, the telemetry service has a lot of adapters to send telemetry data to various cloud-hosted services such as Stackdriver, DataDog, AWS CloudWatch, etc. If you have an in-house metrics service, you can write an adapter to route metrics from the mesh to your in-house metrics store.
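
Mixer wires this up with three kinds of configuration: an instance (what to report), a handler (which adapter to send it to), and a rule (when to do it). A heavily hedged sketch is shown below, where inhouse-metrics stands in for a hypothetical custom adapter; the handler and instance schemas depend entirely on the adapter and template you use:

```yaml
# Instance: the metric to generate for each request
# (attribute expressions here are illustrative).
apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestcount
spec:
  value: "1"
  dimensions:
    source_app: source.labels["app"] | "unknown"
    request_method: request.method | "unknown"
    response_code: response.code | 200
---
# Handler: "inhouse-metrics" is a hypothetical custom adapter for an
# internal metrics store; its spec is adapter-specific.
apiVersion: config.istio.io/v1alpha2
kind: inhouse-metrics
metadata:
  name: handler
spec: {}
---
# Rule: route the metric instances to the custom handler.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: send-metrics-inhouse
spec:
  match: context.protocol == "http"
  actions:
  - handler: handler.inhouse-metrics
    instances:
    - requestcount.metric
```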

 

Likewise, you can customize several aspects of Istio related to security. You can plug in your custom policy engine. For example, the Apigee folks have added an adapter to the Istio policy engine to provide API management. There is also an adapter for the Open Policy Agent (OPA). You can build on top of OPA to integrate with in-house AD/LDAP or other systems. Or you can add a custom adapter to the policy engine directly, allowing an in-house authorization system to decide whether to let requests through without affecting the underlying data plane proxies. It’s a very flexible and pluggable design.

 

You can also customize the data plane. If you have in-house codecs for proprietary protocols, you can add them as extensions to Envoy and enable these extensions through networking configs. The Calico folks, for example, are using this technique to enable custom Envoy filters for authorization.
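
In Istio 1.0 this mechanism is exposed through the (alpha) EnvoyFilter resource. The sketch below uses the built-in envoy.lua filter as a stand-in, since a proprietary codec would reference your own compiled-in Envoy extension instead; the workload label is hypothetical, and this schema has changed in later releases:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-inbound-filter
spec:
  workloadLabels:
    app: reviews                  # hypothetical workload to patch
  filters:
  - listenerMatch:
      listenerType: SIDECAR_INBOUND
      listenerProtocol: HTTP
    insertPosition:
      index: FIRST
    filterType: HTTP
    filterName: envoy.lua         # stand-in for a custom Envoy extension
    filterConfig:
      inlineCode: |
        -- hypothetical logic run before requests reach the application
        function envoy_on_request(request_handle)
          request_handle:headers():add("x-checked-by", "custom-filter")
        end
```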

 

What’s the next major milestone for Istio?

 

Shriram: As we talk to enterprise customers, we repeatedly hear that they have, or will have, application deployments spanning Kubernetes clusters, Cloud Foundry, and traditional VM infrastructure, across different availability zones, regions, and clouds. We want to make sure the mesh can span these heterogeneous environments, whether running on-premises or in multiple public clouds.

 

The Istio Gateway [introduced in 0.8] was the first step toward this goal. Using Gateways allows organizations to avoid, to a certain extent, costly VPN peering for pod networks and to seamlessly route traffic across clusters managed by a single logical control plane. Then we can have a single policy layer, start propagating authorization context across clusters, and do things like RBAC, ABAC, and other policies enterprises require.

 

More Information about Istio 1.0

 

You can also read the Istio, Google, and IBM blog posts about the Istio 1.0 release:

 

Istio Community Blog Post about the Istio 1.0 Release

 

Google Cloud Platform Blog Post about the Istio 1.0 Release

 

IBM’s Blog Post about the Istio 1.0 Release

 

Or give Istio a try at istio.io.