VMware Cloud Foundation (VCF) 5.0 has now been launched, and as part of the release, we have added support for NVMeoF/TCP (Non-Volatile Memory Express over Fabrics/Transmission Control Protocol) as supplemental storage.
NVMe is the new transport and storage access protocol for next-generation solid-state drives. NVMe over Fabrics (NVMeoF) extends the NVMe base specification across network fabrics such as Ethernet, Fibre Channel (FC), and Remote Direct Memory Access (RDMA). NVMeoF/TCP runs over Ethernet and binds the NVMe protocol to TCP, enabling efficient end-to-end transfer of NVMe commands and encapsulating data inside TCP segments. Customers can use their existing Ethernet-based network infrastructure with traditional network adapters and multilayer switch configurations. This new support enables customers to achieve lower latency, higher performance, more parallel requests, improved storage array performance, and lower cost.

Fig 1. NVMeoF and SCSI as supported storage protocols
Prior to the VCF 5.0 release, VCF supported vVols (vSphere Virtual Volumes), iSCSI, NFS (v3 or v4.1), VMFS on FC, and NVMe/FC as supplemental storage. With this launch, customers may now choose NVMeoF/TCP as a supplemental storage option, giving users greater storage flexibility. The added support for NVMe/TCP means customers are not required to use specialized hardware; standard Ethernet networking hardware can be used, provided it has bandwidth available for the additional storage overhead. This makes the cost of entry for NVMeoF/TCP lower than that of net-new FC environments.
VCF offers several supplemental storage options to support multiple use cases. NVMeoF/TCP performs nearly on par with NVMeoF/FC but scales to significantly higher speeds at a substantially lower cost. Its biggest advantage is its ability to use standard network infrastructure: since it can be deployed on any TCP network, you can use it on-premises or in the cloud. Rather than comparing the raw performance of the different supplemental storage types, it is more important to understand your application's performance requirements when choosing a specific storage type. Although FC may provide better performance than TCP, TCP is a reliable and economical option for applications that do not require the lower latency and higher bandwidth that FC provides. If you already have an FC environment, it is prudent to use it, provided the array supports it. Instead of choosing one over the other, customers can opt for dual support of both NVMeoF/FC and NVMeoF/TCP. This gives customers an added layer of flexibility to choose the protocol that best serves their modern SAN infrastructure needs. Customers can also use both concurrently, or switch freely between them, to balance performance optimization and cost reduction. Like iSCSI, NVMeoF/TCP can use any Ethernet NIC; however, adapters with hardware acceleration support for NVMeoF/TCP provide a tremendous boost in throughput and latency reduction compared to iSCSI on ESXi hosts.
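As a rough illustration, you can check which NVMe adapters and transport types are already present on an ESXi host from the ESXi shell before deciding between a hardware-accelerated adapter and the software NVMe/TCP adapter. The commands below are a minimal sketch; output columns can vary by ESXi release.

# List NVMe adapters and their transport types (PCIe, FC, RDMA, or TCP)
esxcli nvme adapter list

# List physical NICs that could back a software NVMe/TCP adapter
esxcli network nic list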
To configure NVMeoF/TCP as supplemental storage on VCF, you can follow the steps described below. Prior to adding NVMe datastores as supplemental storage to a host in a VCF Workload Domain, you must follow your hardware vendor's instructions for storage provisioning, FC zoning (where applicable), and adapter configuration. VCF requires a principal storage service for all ESXi hosts within the workload domain, and is validated with vSAN, NFS v3, VMFS on FC, and vVols for principal storage. After you create a new Workload Domain or Cluster, you can then add supplemental storage to the cluster using the vSphere Client. VCF supports vVols, iSCSI, NFS (v3 or v4.1), VMFS on FC, and NVMeoF as supplemental storage.
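The exact workflow depends on your array vendor and can also be completed entirely in the vSphere Client, but a minimal sketch of the ESXi CLI steps looks roughly like the following. The vmnic and vmhba names, IP address, port numbers, and subsystem NQN are placeholders, and a VMkernel adapter with the NVMe over TCP service enabled is assumed to already exist on the chosen NIC.

# Create a software NVMe over TCP adapter on a physical NIC (vmnic2 is a placeholder)
esxcli nvme fabrics enable --protocol TCP --device vmnic2

# Discover NVMe subsystems exposed by the array's discovery controller
# (vmhba65, the IP address, and port 8009 are placeholders)
esxcli nvme fabrics discover --adapter vmhba65 --ip-address 192.168.100.10 --port-number 8009

# Connect to a discovered subsystem using its NQN (placeholder values)
esxcli nvme fabrics connect --adapter vmhba65 --ip-address 192.168.100.10 --port-number 4420 --subsystem-nqn nqn.2016-01.com.example:subsystem1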
You can add NVMe storage by creating a new VMFS datastore, which can be done quickly by following the steps in the VMware vSphere Storage product documentation.
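Before creating the datastore, it can help to confirm that the host actually sees the NVMe controllers and namespaces; a brief sketch of that check from the ESXi shell:

# Verify connected NVMe controllers and the namespaces they expose
esxcli nvme controller list
esxcli nvme namespace list

# The namespaces then appear as storage devices that can back a new VMFS datastore
esxcli storage core device list

The VMFS datastore itself is then created with the New Datastore wizard in the vSphere Client, as described in the documentation.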