What’s New in vSphere 7 Update 2 Core Storage

iSCSI Path Limit Increase

One of the enhancements in the vSphere 7 Update 2 release I’m sure many customers will be thrilled about is the iSCSI path limit increase. Until this release, the iSCSI path limit was 8 paths per LUN, and many customers ended up going over it. Whether from multiple VMkernel ports or multiple targets, customers often ended up with 16 or 24 paths. I’m excited to announce that with vSphere 7.0 U2, the new iSCSI path limit is 32 paths per LUN.
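As a back-of-the-envelope illustration (this is not an ESXi calculation, just simple arithmetic), the path count per LUN is typically the product of the VMkernel ports bound to the iSCSI adapter and the array target portals, which shows how easily the old limit was exceeded:

```python
# Illustrative only: paths per iSCSI LUN is typically
# (bound VMkernel ports) x (array target portals).
OLD_LIMIT = 8    # paths per LUN before vSphere 7.0 U2
NEW_LIMIT = 32   # paths per LUN in vSphere 7.0 U2

def paths_per_lun(vmkernel_ports: int, target_portals: int) -> int:
    return vmkernel_ports * target_portals

# A common layout that exceeded the old limit but fits the new one:
paths = paths_per_lun(vmkernel_ports=2, target_portals=8)
print(paths, paths <= OLD_LIMIT, paths <= NEW_LIMIT)  # 16 False True
```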

RDM Support for RHEL HA

A few changes were needed to enable Red Hat Enterprise Linux (RHEL) HA to use RDMs in vSphere. With the release of vSphere 7 Update 2, RHEL HA is now supported on RDMs.

VMFS SESparse Snapshot Improvements

SESparse snapshot read performance has been improved by using a technique that directs reads to where the data resides rather than traversing the delta-disk snapshot chain on every read. Previously, if a read came into a virtual machine that had snapshots, the read traversed the snapshot chain and then the base disk. Now, when a read comes in, a filter directs it to either the appropriate point in the snapshot chain or the base disk, reducing read latency.

vSphere 7 Update 2 Core Storage VM SESparse read process.
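The idea can be sketched in a few lines. This is a minimal conceptual model, not the actual SESparse implementation: a filter keeps a map of which delta disk, if any, owns the latest copy of each block, so a read goes straight to the right backing object instead of walking the whole chain.

```python
# Minimal sketch (not VMware's SESparse code): a read filter that
# records, per block, which delta disk holds the newest copy, then
# services each read with a single lookup instead of a chain walk.

class DiskChain:
    def __init__(self, base, snapshots):
        self.base = base            # block -> data for the base disk
        self.snapshots = snapshots  # newest-first list of delta maps
        # Filter metadata: block -> delta disk owning the latest copy.
        self.owner = {}
        for delta in snapshots:     # newest-first, so first writer wins
            for block in delta:
                self.owner.setdefault(block, delta)

    def read(self, block):
        # Direct lookup replaces the old per-read chain traversal.
        delta = self.owner.get(block)
        if delta is not None:
            return delta[block]
        return self.base[block]

chain = DiskChain(base={0: "A", 1: "B"}, snapshots=[{1: "B2"}])
print(chain.read(0), chain.read(1))  # A B2
```

Block 0 was never rewritten after the snapshot, so the read goes straight to the base disk; block 1 lives in the delta, so the read is directed there, with no traversal in either case.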

Multiple Paravirtual RDMA (PVRDMA) Adapter Support

In vSphere 6.7, we announced support for RDMA in vSphere. One of the limitations was that only a single PVRDMA adapter was supported per virtual machine. With the release of vSphere 7 Update 2, we now support multiple PVRDMA adapters per VM.

Performance Improvements on VMFS

With the release of vSphere 7 Update 2, we have made performance improvements to VMFS, specifically for first writes on thin-provisioned disks. Together with the enhancements in Affinity 2.0, these changes further reduce the first-write impact of thin-provisioned disks, improving performance for backup and restore, copy operations, and Storage vMotion in certain instances.

NFS Improvements

Previously, array-offloaded snapshots on NFS required that a clone be created first for a newly created VM; only subsequent snapshots could be offloaded to the array. With the release of vSphere 7.0 U2, NFS array snapshots of full, non-cloned VMs no longer use redo logs; instead, they use the NFS array’s native snapshot technology to provide better snapshot performance. This removes the requirement of creating a clone first and enables even the first snapshot to be offloaded to the array.
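A hypothetical decision sketch of the behavior change (this is not VMware code, just the logic described above expressed as a function):

```python
# Hypothetical sketch only: which snapshot mechanism is used for a VM on
# an NFS datastore, before and after the 7.0 U2 change.

def snapshot_method(array_supports_native_snapshots: bool,
                    vm_is_clone: bool,
                    pre_u2: bool) -> str:
    if not array_supports_native_snapshots:
        return "redo-log"
    if pre_u2 and not vm_is_clone:
        # Before 7.0 U2, the first snapshot of a non-cloned VM could not
        # be offloaded; a clone had to exist first.
        return "redo-log"
    return "array-offloaded"

print(snapshot_method(True, vm_is_clone=False, pre_u2=True))   # redo-log
print(snapshot_method(True, vm_is_clone=False, pre_u2=False))  # array-offloaded
```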

High Performance Plugin FastPath Support for Fabric Devices

With the release of vSphere 7 Update 2, HPP is now the default plugin for NVMe devices. The plugin comes with two modes: SlowPath, which retains the legacy behavior and VM-fairness capabilities, and the newly added FastPath, designed to provide better performance than SlowPath with some restrictions. Even in SlowPath mode, HPP can often outperform the Native Multipathing Plugin (NMP) for the same device because I/Os are handled in batch mode, which helps reduce lock contention and CPU overhead in the I/O path. For more detail, see the vSphere 7 Update 2 article on core.vmware.com.
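To see why batching helps, here is a conceptual sketch only; HPP’s actual implementation lives in the ESXi kernel. It simply contrasts taking a lock once per I/O with taking it once per batch, which is the source of the reduced lock contention mentioned above:

```python
# Conceptual sketch (not ESXi code): per-I/O locking vs. per-batch
# locking. Fewer lock acquisitions means less contention and less CPU
# overhead in the I/O submission path.
import threading

lock = threading.Lock()
acquisitions = 0

def submit_per_io(ios):
    global acquisitions
    for _ in ios:
        with lock:            # one lock round-trip per I/O
            acquisitions += 1

def submit_batched(ios, batch=32):
    global acquisitions
    for i in range(0, len(ios), batch):
        with lock:            # one lock round-trip per batch
            acquisitions += 1

ios = list(range(256))
submit_per_io(ios)    # 256 lock acquisitions
submit_batched(ios)   # only 8 more (256 / 32)
print(acquisitions)   # 264
```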

HPP as the Default Plugin for vSAN

With the release of vSphere 7 Update 2, HPP is now the default MPP for all devices (SAS/SATA/NVMe) used with vSAN. Note that HPP is also the default plugin for NVMe fabric devices. This infrastructure improvement ensures vSAN uses the improved storage plugin and can take advantage of its benefits.

VOMA Improvements

vSphere On-disk Metadata Analyzer (VOMA) is used to identify and fix metadata corruption affecting the file system or underlying logical volumes. With the release of vSphere 7 Update 2, VOMA support is now enabled for spanned VMFS volumes. For more information on VOMA, see VMware Docs.

vVols

Support for Higher Queue Depth with vVols Protocol Endpoints

In some cases, the Disk.SchedNumReqOutstanding (DSNRO) configuration parameter did not match the queue depth of the vVols Protocol Endpoint (PE), VVolPESNRO. With the release of vSphere 7.0 U2, the default queue depth for the PE is now 256 or the maxQueueDepth of the exposed LUN, whichever is greater. In other words, the default minimum PE queue depth is now 256.

vVols VVOLPESNRO default Queue Depth 256
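The new default reduces to a one-line rule; a sketch of the behavior described above (not ESXi code):

```python
# Sketch of the 7.0 U2 default: the PE queue depth is 256 or the LUN's
# maximum queue depth, whichever is greater, so 256 is the new floor.

def default_pe_queue_depth(lun_max_queue_depth: int) -> int:
    return max(256, lun_max_queue_depth)

print(default_pe_queue_depth(128))   # 256 (the floor applies)
print(default_pe_queue_depth(1024))  # 1024 (the LUN's deeper queue wins)
```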

Create Config vVols Larger than 4GB

This allows the Config vVol to be larger than the default 4GB, so partners can store images for automated builds.


vVols with CNS and Tanzu

SPBM Multiple Snapshot Rule Enhancements

With vVols, Storage Policy Based Management (SPBM) gives the VI admin autonomy to manage storage capabilities at a VM level via policy. With the release of vSphere 7 Update 2, we have enabled our vVols partners to support multiple snapshot rules in a single SPBM storage policy. This feature must be supported in the respective VASA providers that enable snapshot policies to be constructed. When supported by our vVols partners, it will be possible to have a single policy with multiple rules, each with a different snapshot interval.
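To make the shape of such a policy concrete, here is a hypothetical data model; the class and field names are made up for illustration and are not the SPBM or VASA API. It shows one policy carrying several snapshot rules with different intervals, as a VASA provider would receive them:

```python
# Hypothetical model only (names are illustrative, not the SPBM API):
# a single storage policy holding multiple snapshot rules.
from dataclasses import dataclass, field

@dataclass
class SnapshotRule:
    interval_minutes: int    # how often to take a snapshot
    retention_count: int     # how many snapshots to keep

@dataclass
class StoragePolicy:
    name: str
    snapshot_rules: list = field(default_factory=list)

policy = StoragePolicy(
    name="gold-tier",
    snapshot_rules=[
        SnapshotRule(interval_minutes=15, retention_count=8),    # short-term
        SnapshotRule(interval_minutes=1440, retention_count=7),  # daily
    ],
)
print(len(policy.snapshot_rules))  # 2
```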

32 Snapshot Support for Cloud Native Storage (CNS) First Class Disks

Persistent Volumes (PVs) are created in vSphere as First Class Disks (FCDs). FCDs are independent disks with no VM attached. With the release of vSphere 7 Update 2, we are adding support for up to 32 snapshots per FCD. This enables you to create snapshots of your K8s PVs, which complements the SPBM multiple snapshot rule enhancement.
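The limit itself is simple to express; a tiny sketch of the constraint described above (not the CNS API):

```python
# Sketch of the new limit only: an FCD may carry at most 32 snapshots
# in vSphere 7 Update 2.
MAX_FCD_SNAPSHOTS = 32

def can_take_snapshot(existing_snapshots: int) -> bool:
    return existing_snapshots < MAX_FCD_SNAPSHOTS

print(can_take_snapshot(31), can_take_snapshot(32))  # True False
```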

CNS PV to vVol Mapping

In some cases, customers may want to see which vVol is associated with which CNS Persistent Volume (PV). With the release of vSphere 7 Update 2, the CNS UI now shows a mapping of the PV to its corresponding vVol FCD.

vSphere 7 Update 2 Core Storage vVol to CNS Persistent Volume mapping

Make sure to check back this week on Virtual Blocks to learn about new features and enhancements in Site Recovery Manager (SRM) and vSphere Replication (VR).

@jbmassae