With vSphere 7 Update 3, we have added a new feature, NVMe over TCP (NVMe/TCP), along with enhancements to existing solutions.
NVMe over TCP
NVMe over Fabrics extends NVMe from local storage to shared network storage. With the release of vSphere 7, we added support for NVMe over Fabrics (NVMe-oF). In the initial release, the supported protocols were FC and RDMA. With the release of vSphere 7 Update 3, we are adding support for NVMe/TCP. The significance of adding TCP support is that there are no special hardware requirements: standard Ethernet networks and hardware may be used. You do, however, need to ensure you have ample bandwidth in your network for the additional storage traffic.
With the ability to use standard Ethernet hardware, the cost of entry for NVMe/TCP is lower than for net-new FC or RDMA environments. The question of which protocol performs best may arise, but the real question is what performance your application needs. In general terms, RDMA will be the highest performing, then FC, and finally TCP. If your applications don't require the absolute lowest latency and highest bandwidth, TCP is a great and economical option for getting into NVMe-oF. If you already have an FC environment, you would of course use that, provided the array supports it.
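To illustrate the low barrier to entry, the basic setup can be done with esxcli. This is a sketch only: the NIC name (vmnic4), adapter name (vmhba65), target IP, and subsystem NQN are placeholders for your environment, and your array vendor's documentation should be the authority on the exact connection parameters.

```shell
# Enable a software NVMe/TCP adapter on a standard Ethernet NIC
# (creates a new vmhba backed by the chosen vmnic):
esxcli nvme fabrics enable --protocol TCP --device vmnic4

# Discover NVMe subsystems exposed by the array's discovery controller:
esxcli nvme fabrics discover --adapter vmhba65 --ip-address 192.168.1.50

# Connect to a discovered subsystem by its NQN (placeholder shown):
esxcli nvme fabrics connect --adapter vmhba65 --ip-address 192.168.1.50 \
    --subsystem-nqn nqn.2016-01.com.example:subsystem1
```

A VMkernel port with NVMe/TCP enabled on the same vmnic is also required before the adapter can pass storage traffic; that step can be done in the vSphere Client.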
Increase number of hosts per datastore
We have numerous customers who often reach the vSphere limit of 64 hosts per VMFS or NFS datastore. In this release, we have increased the number of hosts that may connect to a VMFS-6 or NFS datastore from 64 to 128. Note this is not a hosts-per-cluster increase; it is the number of hosts that can access a single VMFS or NFS datastore.
Affinity 3.0, support for CNS
In vSphere 7, we updated the Affinity Manager to Affinity 2.0, which reduces the potential first-write penalty with thin or lazy-zeroed thick provisioning. In this release, Affinity 3.0 adds support for CNS persistent volumes, also known as FCDs (First Class Disks). In addition, it supports the higher number of vSphere hosts per cluster as well.
vVols Batch Snapshots
With the potential scale vVols offers, ensuring operational efficiency is key. In this release, we have enhanced the processing of large numbers of vVol snapshots by making snapshot operations a batch process. By grouping snapshot operations, we reduce the serialized actions used for snapshots, making the process more efficient and reducing the effect on VMs and the storage environment.
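The efficiency gain from batching can be sketched conceptually. This is not VMware's implementation; it simply illustrates how grouping per-VM snapshot requests into batches replaces many serialized round-trips to the storage provider with a few grouped calls. The request names and batch size are made up for the example.

```python
def batch(requests, batch_size):
    """Split a list of snapshot requests into fixed-size batches,
    so each batch becomes a single call instead of one call per request."""
    return [requests[i:i + batch_size]
            for i in range(0, len(requests), batch_size)]

# 10 hypothetical per-VM snapshot requests, dispatched in batches of 4:
requests = [f"snapshot-vm{i}" for i in range(10)]
batches = batch(requests, 4)

# 3 grouped dispatches instead of 10 serialized operations:
print(len(batches))  # 3
```

The design point is the same one the release makes: the total work is unchanged, but fewer serialized interactions means less per-operation overhead and less sustained impact on the VMs being snapshotted.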
For more details on vSphere 7.0 Update 3 Core Storage, see the full article here.