The built-in vSphere Kubernetes Service (VKS, formerly known as Tanzu Kubernetes Grid) simplifies Kubernetes deployment and management, enabling enterprises to run modern applications alongside traditional workloads. By integrating with the rest of the vSphere stack, it can also lower CapEx and improve utilization. VKS provides automated lifecycle management, built-in security, and enterprise-grade scalability, reducing operational complexity. By leveraging vSphere's infrastructure, VKS delivers high performance, cost efficiency, and a familiar management experience, making Kubernetes more accessible to IT teams.
Istio is a powerful open-source service mesh that simplifies service-to-service communication in cloud-native applications. It provides traffic management, security, and observability features, making it easier to deploy, manage, and secure microservices at scale. In this blog, we'll explore how to set up and use Istio on VKS, leveraging its capabilities to enhance the reliability and performance of your Kubernetes workloads. Below we'll set up Istio on VKS and then work through Istio's bookinfo sample app (see https://istio.io/latest/docs/examples/bookinfo/).
Prerequisites
First, make sure a Supervisor namespace has been created and a VKS cluster is up and running. For details on how to set up a VKS cluster, consult the official documentation: https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere-supervisor/8-0/using-tkg-service-with-vsphere-supervisor/provisioning-tkg-service-clusters/workflow-for-provisioning-tkg-clusters-using-kubectl.html
Below we're using the latest Kubernetes release available at the time, v1.32. For simplicity, we'll use an Ubuntu (Jammy) VM as our working environment (for the VKS cluster spec, see the footnote).
Installation and Setup
To begin, we download and install the Istio binary locally:
mkdir ~/istio; cd ~/istio
curl -L https://istio.io/downloadIstio | sh -
sudo cp istio*/bin/istioctl /usr/local/bin
Then we create the Istio namespace and label it for Pod Security Admission (PSA) to allow privileged pods:
kubectl create ns istio-system
kubectl label --overwrite ns istio-system pod-security.kubernetes.io/enforce=privileged
kubectl label --overwrite ns istio-system pod-security.kubernetes.io/warn=restricted
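If you prefer a declarative setup, the namespace and its PSA labels can also be captured in a single manifest and applied with kubectl apply (a sketch equivalent to the commands above):

```yaml
# Namespace for Istio's control plane, with PSA labels:
# enforce 'privileged' (Istio CNI needs host-level access), warn on 'restricted'
apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: restricted
```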
Next, we're going to install Istio using the CNI method. This approach eliminates the need for a privileged init container in every workload namespace, enhancing security and compliance. Instead, only one privileged namespace (istio-system) is required, and the install places one CNI pod on each node.
By handling traffic redirection at the CNI level we enable better compatibility with NetworkPolicies and multi-tenant environments.
Now let's install Istio using istioctl, enabling the CNI components and setting the profile to 'minimal', as we won't be using Istio's bundled ingress gateway (the Gateway API will provision one for us later):
Note: to pull from a private registry, add the switch '--set hub=xxxx'
$ istioctl install --set profile=minimal --set components.cni.enabled=true -y
For more details on configuration profiles, visit https://istio.io/latest/docs/setup/additional-setup/config-profiles/
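The same installation can also be expressed as an IstioOperator manifest, which is easier to keep under version control. Below is a sketch equivalent to the istioctl flags above (the filename istio-install.yaml is just an example):

```yaml
# istio-install.yaml (apply with: istioctl install -f istio-install.yaml -y)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  components:
    cni:
      enabled: true
```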
We can see that the Istio CNI pods are running on each node:
$ kubectl get pods -n istio-system -o custom-columns="NAME:.metadata.name,NODE:.spec.nodeName"
NAME                      NODE
istio-cni-node-h7wxb      vksclusternew-worker-szr4p-57jx9-tqq7d
istio-cni-node-m7rgz      vksclusternew-worker-szr4p-57jx9-dpbnl
istio-cni-node-nrv42      vksclusternew-7cvjq-vsh4h
istio-cni-node-q67k6      vksclusternew-worker-szr4p-57jx9-gfb7h
istiod-86d96548d6-6gts4   vksclusternew-worker-szr4p-57jx9-gfb7h
Setting up the Demo App
Now we can install the sample app, as per https://istio.io/latest/docs/examples/bookinfo/
First, we create the bookinfo namespace and label it 'baseline' for PSA. We also add the label that tells Istio's sidecar injector to add the proxy to pods in this namespace:
kubectl create ns bookinfo
kubectl label --overwrite ns bookinfo pod-security.kubernetes.io/enforce=baseline
kubectl label namespace bookinfo istio-injection=enabled
Apply the manifest:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
Check that the bookinfo app is running:
$ kubectl -n bookinfo exec "$(kubectl -n bookinfo get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep title
Configuring the Gateway for the Demo App
Next we create the Gateway and provide routing for the app. Note that there are two available APIs we can make use of:
– Istio native API ← legacy, soon to be deprecated
– K8s Gateway API ← preferred
We’ll use the latter here. Taking a look at the sample manifest (samples/bookinfo/gateway-api/bookinfo-gateway.yaml):
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
spec:
  parentRefs:
  - name: bookinfo-gateway
  rules:
  - matches:
    - path:
        type: Exact
        value: /productpage
    - path:
        type: PathPrefix
        value: /static
    - path:
        type: Exact
        value: /login
    - path:
        type: Exact
        value: /logout
    - path:
        type: PathPrefix
        value: /api/v1/products
    backendRefs:
    - name: productpage
      port: 9080
The first part defines our gateway using the Kubernetes Gateway API, with gateway class 'istio', listening on port 80. The second part defines the HTTPRoute rules that direct traffic from this gateway to the correct endpoints (later we'll see how we can alter this).
Let’s apply the manifest:
kubectl -n bookinfo apply -f samples/bookinfo/gateway-api/bookinfo-gateway.yaml && kubectl -n bookinfo wait --for=condition=programmed gtw bookinfo-gateway
We now have a setup that looks something like:

Verifying the App deployment
We can run an Istio health check on the namespace to see if everything looks good:
$ istioctl analyze -n bookinfo
✔ No validation issues found when analyzing namespace: bookinfo.
We can also run a quick check to see the status of the Istio sidecar proxy container in the app pods:
kubectl -n bookinfo get pods -o name | xargs -I {} kubectl -n bookinfo logs {} -c istio-proxy | awk '{print $4,$5,$6,$7,$8}' | grep ready
Also, we check that the pods are running and the status of our gateway:
NAME                                          READY   STATUS    RESTARTS   AGE
pod/bookinfo-gateway-istio-7cbfb89755-l2s8g   1/1     Running   0          47h
pod/details-v1-6fbc5578dd-2blk8               2/2     Running   0          47h
pod/productpage-v1-5bf68dbdb8-9gtv9           2/2     Running   0          47h
pod/ratings-v1-586bd78f64-bfxwq               2/2     Running   0          47h
pod/reviews-v1-756dbb5898-h6f88               2/2     Running   0          47h
pod/reviews-v2-8644897954-bg5ft               2/2     Running   0          47h
pod/reviews-v3-77cfc67f4-x99hx                2/2     Running   0          47h

NAME                                                 CLASS   ADDRESS          PROGRAMMED   AGE
gateway.gateway.networking.k8s.io/bookinfo-gateway   istio   10.138.216.229   True         2d1h
We can see that the pods are in a healthy state and an external IP address has been assigned. To find just the IP address, we can use:
kubectl -n bookinfo get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}';echo
As before, we can see that we're able to access the app, this time using the gateway IP:
curl -s $(kubectl -n bookinfo get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/productpage | grep title
Opening a browser to http://<IP>/productpage shows our demo app. (The gateway listens on port 80, so no port is needed in the URL.)
Observing via the Kiali Dashboard
The Kiali dashboard provides a nice way to visualize our service mesh. To install it, we apply the add-on manifests that ship with the Istio download and wait until the deployment is ready:
kubectl apply -f samples/addons
kubectl rollout status deployment/kiali -n istio-system
Note: the dashboard needs a persistent volume and inherits the cluster's default storage class, so ensure a default storage class is set in your VKS definition (see the footnote for an example).
Then launch the dashboard in the background:
istioctl dashboard kiali &
This should print the dashboard address, http://localhost:20001/kiali.

To see how data flows through the app, we'll need to generate some traffic:
watch -n 2 'curl -s $(kubectl -n bookinfo get svc bookinfo-gateway-istio -o jsonpath="{.status.loadBalancer.ingress[0].ip}")/productpage'
Navigate to Applications, select the 'bookinfo' namespace, then select 'productpage' (i.e. our homepage) for an overview of the traffic flow:
We can see that the homepage references v1 of the site, which links to details (v1) and reviews (v1, v2, v3). The padlocks show that traffic is encrypted with mTLS.
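Out of the box, Istio runs mTLS in permissive mode, accepting both plaintext and mTLS traffic. If you want to require mTLS for the namespace, a PeerAuthentication policy can enforce it; a sketch (not applied in this walkthrough):

```yaml
# Require mTLS for all workloads in the bookinfo namespace
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: bookinfo
spec:
  mtls:
    mode: STRICT
```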
Selecting the reviews service, we can see that only v2 and v3 send traffic to ratings; v1 does not.
In fact, if we look at ratings, we can see this very clearly:
Applying Rules
If we navigate to the bookinfo app website and refresh the page a few times, you will notice that the stars are sometimes rendered differently: sometimes black, sometimes red, sometimes not at all. This is due to the different versions (v1, v2, v3) of the reviews service. As we saw earlier on the dashboard, v1 does not flow to 'ratings'.
With Istio, we can declaratively apply a ruleset to affect the HTTP routes, using the Gateway API.
To demonstrate this with the bookinfo app, we first apply the backend Service definitions for each version:
kubectl -n bookinfo apply -f samples/bookinfo/platform/kube/bookinfo-versions.yaml
Then we can use the Gateway API to apply a rule that ensures only v1 is served:
kubectl -n bookinfo apply -f - << EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
EOF
Back on the dashboard, we can clearly see how this affects the flow. All traffic now only traverses to v1:
And if you refresh the bookinfo website, you'll see the page without any stars.
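The route doesn't have to be all-or-nothing: the Gateway API also supports weighted backendRefs, which is useful for canary rollouts. Below is a sketch that would send roughly 10% of reviews traffic to v3 (the 90/10 split is an arbitrary example):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
  namespace: bookinfo
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v3
      port: 9080
      weight: 10
```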
We can also apply the rules selectively per user. Let’s see this in action:
kubectl -n bookinfo apply -f - << EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
  rules:
  - matches:
    - headers:
      - name: end-user
        value: jason
    backendRefs:
    - name: reviews-v3
      port: 9080
  - backendRefs:
    - name: reviews-v1
      port: 9080
EOF
Here, we have created a rule that routes requests carrying the header 'end-user: jason' to 'reviews-v3'. Back on the website, log in as user 'jason' (any password). Notice that the red review stars are now always present:
Summary
In this blog we saw how easily we can install OSS Istio on VKS. We observed traffic flows and mTLS encryption on the Kiali dashboard. Finally, we demonstrated how we can use the Gateway API to apply rules that direct traffic as needed.
For a video walkthrough, visit https://youtu.be/D2KKWribGms
Footnote
Below is the spec used to create the VKS cluster:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: vkscluster
  namespace: vks
spec:
  clusterNetwork:
    services:
      cidrBlocks:
      - 10.96.0.0/12
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    serviceDomain: cluster.local
  topology:
    class: tanzukubernetescluster
    version: v1.32.0+vmware.6-fips
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: node-pool
        name: worker
        replicas: 3
    variables:
    - name: vmClass
      value: best-effort-medium
    - name: storageClass
      value: vsan-esa-default-policy-raid5
    - name: defaultStorageClass
      value: vsan-esa-default-policy-raid5