Technical VCF Storage (vSAN)

vSphere and VMware Cloud Foundation 9.0 Core Storage – What’s New

VCF Storage Support Enhancements 

For greenfield VCF deployments, VMFS on Fibre Channel and NFSv3 are now supported as principal storage options for the management domain. For full information on storage support, see the VCF 9 technical documentation.

NFS Improvements

TRIM Support for NFSv3

Existing support for the TRIM and UNMAP commands allows block-based storage such as vSAN and VMFS-backed datastores to reclaim space when files are deleted inside a VM or when previously allocated space is freed. In typical environments this can reclaim as much as 30% of storage space, allowing better capacity utilization. In VCF 9, NFSv3-attached NAS systems can now also reclaim space using the VAAI NFS plugin.
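
To make the guest-side mechanics concrete, here is a minimal Python sketch of how a Linux guest can release space inside a file by punching a hole with fallocate(2); on a filesystem mounted with discard support, the freed blocks surface to the virtual disk as TRIM/UNMAP. The file path and sizes are illustrative, and in practice most guests simply run fstrim instead.

```python
import ctypes
import ctypes.util
import os

# fallocate(2) flags, from linux/falloc.h
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong]

def punch_hole(path, offset, length):
    """Deallocate a byte range inside a file without changing its size.

    The freed blocks can be reported to the virtual disk as TRIM/UNMAP,
    which the hypervisor can turn into a space-reclamation request
    against the backing datastore.
    """
    fd = os.open(path, os.O_RDWR)
    try:
        if libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          offset, length) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)

# Illustrative: create a 4 MiB scratch file, then deallocate its first 1 MiB.
with open("/tmp/scratch.bin", "wb") as f:
    f.write(b"\xff" * (4 * 1024 * 1024))
punch_hole("/tmp/scratch.bin", 0, 1024 * 1024)
```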

Data in Transit Encryption for NFS 4.1 

Krb5p encrypts NFS traffic in transit. Data-in-transit encryption secures data as it moves between ESXi hosts and the NAS, protecting it from unauthorized access, including eavesdropping and man-in-the-middle attacks, and preserving the confidentiality of organizational information. Krb5i is also available; it does not encrypt the traffic, but it adds integrity checking to validate that data has not been tampered with in flight.
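
As a purely conceptual contrast between the two Kerberos levels (this is not the RPCSEC_GSS wire format, and the key handling is a stand-in), the sketch below shows what each level adds to an NFS payload: krb5i attaches a keyed integrity checksum, while krb5p also encrypts the payload. It uses Python's standard hmac module and the third-party cryptography package.

```python
import hashlib
import hmac

from cryptography.fernet import Fernet  # third-party: pip install cryptography

session_key = Fernet.generate_key()  # stand-in for the Kerberos session key
payload = b"WRITE fh=0x2a offset=4096 data=..."

# krb5i: the payload travels in cleartext, but a keyed checksum lets the
# receiver detect any in-flight tampering.
mac = hmac.new(session_key, payload, hashlib.sha256).digest()
krb5i_message = (payload, mac)

# krb5p: the payload is encrypted as well, so an on-path observer can
# neither read nor silently alter the NFS traffic.
krb5p_message = Fernet(session_key).encrypt(payload)

# Receiver-side integrity check for the krb5i message:
data, tag = krb5i_message
expected = hmac.new(session_key, data, hashlib.sha256).digest()
assert hmac.compare_digest(expected, tag), "payload was tampered with"
```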

Core storage improvements

End-to-End (E2E) 4Kn support

In 9.0, the 4Kn E2E feature introduces support for the following (a short alignment sketch follows the list):

  • Front end – 4K VMDKs presented to VMs
  • Back end – 4Kn NVMe SSDs for vSAN ESA. OSA will not support 4Kn SSDs.
  • ESXi also supports 4Kn SCSI SSDs for local VMFS and external storage.
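
Why the end-to-end part matters: when every layer speaks 4,096-byte logical sectors, I/O that does not start and end on a 4K boundary forces a read-modify-write cycle. The hypothetical helper below simply checks request alignment; the constant and function names are my own.

```python
SECTOR_4K = 4096  # 4Kn logical sector size in bytes

def is_aligned(offset_bytes, length_bytes, sector=SECTOR_4K):
    """True if an I/O request starts and ends on a sector boundary."""
    return offset_bytes % sector == 0 and length_bytes % sector == 0

assert is_aligned(8192, 65536)    # clean 4K-aligned write
assert not is_aligned(512, 4096)  # a 512-byte offset forces read-modify-write
```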

SEsparse as the default snapshot format for NFS and VMFS-5

Historically, snapshots have had a rather large I/O impact. Space Efficient Sparse virtual disks (SE Sparse disks for short) were designed to alleviate two issues with the legacy vmfsSparse format:

  1. Space reclamation with snapshots. SESparse allows granular, tunable space reclamation while snapshots are in use (common in VDI use cases).
  2. Read latency and storage performance impacts caused by the multiple reads required when snapshots are in use.

Read performance is improved using a probabilistic data structure (a Bloom filter), which optimizes the read workflow, especially for first-level snapshots. In addition, the SESparse grain size (block allocation unit size) is now tunable to match the backing storage platform.
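
To illustrate the idea (this is a generic Bloom filter sketch, not the actual SESparse implementation), a membership test with no false negatives lets the read path skip whole snapshot levels: a "no" answer is definitive, so only levels that may contain a given block need to be read at all.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: fast membership test with no false negatives."""

    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive num_hashes bit positions from independent keyed digests.
        for i in range(self.num_hashes):
            digest = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
            yield int.from_bytes(digest, "little") % self.size

    def add(self, block_number):
        for pos in self._positions(block_number):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, block_number):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(block_number))

# One filter per snapshot level, populated with the blocks written there.
level1 = BloomFilter()
for block in (10, 42, 99):
    level1.add(block)

if level1.may_contain(42):
    print("block 42 may live in this delta; read it here first")
if not level1.may_contain(7):
    print("block 7 is definitely not here; skip straight to the parent disk")
```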

SESparse snapshots have been the default for VMDKs larger than 2 TB on VMFS-5 since vSphere 7 Update 2. Now with vSphere 9, SESparse is the default for NFS and for VMFS-5 VMDKs of all sizes. This improves the latency and throughput of read I/O while a snapshot is open and in operation. NFS also now uses this snapshot format when the VAAI snapshot offload plugin is not in use.

vNVMe 1.4 spec support – Support for the Write Zeroes command

With 9.0, vNVMe support for the NVMe 1.4 specification allows the guest operating system to issue the NVMe Write Zeroes command, which sets a range of logical blocks to zero. This command is similar to the SCSI WRITE SAME command, but is limited to writing zeros.
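
As a guest-side sketch (assuming a Linux guest; the device path is hypothetical and whether the kernel offloads to NVMe Write Zeroes depends on driver support), fallocate(2) with FALLOC_FL_ZERO_RANGE on a block device asks the kernel to zero a range, which it can satisfy with Write Zeroes instead of streaming buffers of zeros:

```python
import ctypes
import ctypes.util
import os

FALLOC_FL_ZERO_RANGE = 0x10  # from linux/falloc.h

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong]

def zero_range(device, offset, length):
    """Ask the kernel to zero a byte range on a block device.

    On a vNVMe disk exposing the 1.4 feature set, the kernel can satisfy
    this with NVMe Write Zeroes rather than writing zero-filled buffers.
    """
    fd = os.open(device, os.O_WRONLY)
    try:
        if libc.fallocate(fd, FALLOC_FL_ZERO_RANGE, offset, length) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)

# Illustrative only -- the device path is hypothetical and the call is
# destructive, so it is left commented out:
# zero_range("/dev/nvme0n2", 0, 16 * 1024 * 1024)
```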

iSCSI Multiple Connections per Session support

Ideally, iSCSI should be deployed using multipath I/O (MPIO) for fault-tolerant connections to hosts. This requires a unique physical path for each VMkernel port, with path failover handled by the storage path selection rather than by link state or NIC teaming policies, since deterministic paths are used. iSCSI has also historically used a single TCP connection per session, so deploying it over NIC teaming or a Link Aggregation Group (LAG) still pushed all traffic down a single link, because hashing algorithms cannot balance traffic within one connection.

iSCSI with multiple connections per session provides the following benefits:

1. Allows the use of multiple links when a single VMkernel port is paired with a LAG that uses sufficiently advanced hashing (illustrated in the sketch below).

2. Allows potentially increased throughput when link speeds exceed the storage array's ability to process TCP on a single connection thread.

Please confirm with your storage vendor whether this is a supported or recommended configuration.
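
To make the mechanics concrete, here is a toy scheduling model of Multiple Connections per Session: one session fans commands out round-robin across several TCP connections, giving a LAG hash that keys on the TCP 4-tuple something to spread across links. This is a conceptual Python sketch, not an iSCSI initiator, and all names are illustrative.

```python
import itertools
from dataclasses import dataclass, field

@dataclass
class Connection:
    """Stand-in for one TCP connection within the iSCSI session."""
    conn_id: int
    commands: list = field(default_factory=list)

@dataclass
class Session:
    """A single iSCSI session striping commands over its connections."""
    connections: list

    def __post_init__(self):
        self._rr = itertools.cycle(self.connections)

    def send(self, command):
        conn = next(self._rr)
        conn.commands.append(command)  # in reality: serialize a PDU onto this socket
        return conn.conn_id

# One session, four connections -- each gets a distinct TCP 4-tuple,
# so a LAG hash can place them on different physical links.
session = Session([Connection(i) for i in range(4)])
for lba in range(8):
    cid = session.send(f"READ lba={lba}")
    print(f"READ lba={lba} -> connection {cid}")
```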

Deprecation of NPIV

As previously noted, N-Port ID Virtualization (NPIV) was deprecated and is now removed in 9.0.

***

Ready to get hands-on with VMware Cloud Foundation 9.0? Dive into the newest features in a live environment with Hands-on Labs that cover platform fundamentals, automation workflows, operational best practices, and the latest vSphere functionality for VCF 9.0.