Virtual Volumes (vVols) enables your existing storage array capabilities to be managed via SPBM policies and applied at the VM or VM-disk level, all while residing in a single vVols datastore that correlates to a storage container on the array. The guest OS file system is the native file system on the vVol itself; it is not a VMDK sitting on top of another file system such as VMFS or NFS. Each vVol is a first-class citizen, meaning it is independent of other vVols, LUNs, or volumes. As a result, vVols pair very well with First Class Disks (FCDs), which are disks that do not require an associated VM. vVols also allow for much larger scale than traditional LUNs, up to 64k vVols per host! Plus, you don't have to manage LUNs or volumes, which would be a nightmare at scale.

Kubernetes is quickly growing into the preferred platform for deploying applications. Cloud-Native Storage (CNS) for vSphere allows Cloud-Native Applications (CNAs) to manage storage in a dynamic container environment. When you can create and delete thousands of containers in short periods, management at scale quickly becomes relevant. With vVols and vSAN, you can easily map a Kubernetes StorageClass to a vSphere Storage Policy, an architecture that fosters management at scale. Imagine trying to manually manage First Class Disks (FCDs) when you're talking about thousands of container volumes. Being able to map an SPBM policy to a StorageClass means you don't have to worry about orphaned LUNs or volumes and can focus on your applications.
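To make that mapping concrete, here is a minimal sketch of a StorageClass for the vSphere CSI driver. The class name and SPBM policy name are hypothetical; the policy must already exist in vCenter:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # Hypothetical class name, used for illustration only
  name: vvols-gold
provisioner: csi.vsphere.vmware.com
parameters:
  # Must match the name of an SPBM policy defined in vCenter (hypothetical here)
  storagepolicyname: "vVols Gold"
```

Any PersistentVolumeClaim that references this class is provisioned against that SPBM policy, so placement and cleanup are handled by vSphere rather than by hand.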

So how can you use vVols with CNS? Myles Gray is going to go into the details and show how SPBM, vVols, and vSAN work with CNS.

With vSphere 7.0 and the CSI 2.0 driver for vSphere, we have introduced a much sought-after feature: support for vVols as a storage mechanism for Cloud Native Storage. Now, all storage types on vSphere (vSAN, vVols, VMFS, and NFS) are supported as backing storage for the Kubernetes PersistentVolumes provisioned through our CSI driver.
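As a quick sketch of how that looks from the Kubernetes side (names here are hypothetical), a claim like the following requests a volume from the vvols-gold class sketched earlier, and the CSI driver provisions a PersistentVolume on whatever storage the SPBM policy resolves to:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Hypothetical claim name, used for illustration only
  name: rocketchat-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # References the hypothetical StorageClass sketched earlier
  storageClassName: vvols-gold
```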

vVols and vSAN are the preferred storage choices for container volumes because of their scalability and day-2 operations; as Jason mentioned above, 64k vVols per host is a huge number! Not to mention, because these two storage types are SPBM-based with native primitives, volumes can be adjusted after provisioning to change their SLA or other storage parameters. This can't be done with VMFS and NFS, as tag-based SPBM only allows for the initial placement of volumes, not reconfiguration.

vVols implementations are, of course, vendor-specific and vary by model and manufacturer, but the basic vVols primitives are supported in CNS across the board. Snapshot and replication primitives, however, will have to wait until a future release.

As an example, I've configured a six-node Kubernetes cluster in Jason's lab with a few vVols arrays attached. Below, you can see I have provisioned three different instances of Rocketchat against three different Kubernetes StorageClasses, each targeting a specific SPBM policy:
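As a rough sketch of what that looks like in YAML (the class and policy names below are hypothetical, not the exact ones from the lab), each StorageClass simply points at a different SPBM policy:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pure-vvols           # hypothetical class steering volumes to the Pure Storage arrays
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Pure vVols Policy"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: unity-vvols          # hypothetical class steering volumes to the Unity array
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "Unity vVols Policy"
```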

This way, we can ensure that certain apps land on specific arrays, or even specific vendors. Above, you can see that two instances landed on Pure Storage arrays and one landed on a Unity array, all using vVols and the native SPBM integration.

Additionally, you can drill into each of these volumes for more detail: which Pods are mounting the volume, which application it belongs to, which VM it is mounted to, which datastore backs it, and which SPBM policy is applied. You can also see whether those volumes are compliant and accessible.

For more details on specific partner integrations for vVols and CNS, we have created a list of blogs from our partners.

If you would like to set up the CSI integration on vSphere yourself, find the installation instructions here.

@jbmassae

@mylesgray