Announced today, vSphere 6.7 includes several new features and enhancements that further advance storage functionality. Centralized, shared storage remains the most common storage architecture used with VMware installations despite the incredible adoption rate of HCI and vSAN. As such, VMware remains committed to the continued development of core storage and Virtual Volumes, and the release of vSphere 6.7 truly shows it. The 6.7 version marks a major vSphere release, with many new capabilities to enhance the customer experience. From space reclamation to support for Microsoft WSFC on vVols, this release is definitely feature rich! Below are summaries of what is included in vSphere 6.7; you can find more detail on each feature in the VMware storage and availability technical document repository: core.vmware.com.
Configurable Automatic UNMAP
Automatic UNMAP was released with vSphere 6.5 with a selectable priority of none or low. Storage vendors and customers have requested higher, configurable rates rather than the fixed 25 MBps. With vSphere 6.7 we've added a new method, "fixed", which allows you to configure an automatic UNMAP rate between 100 MBps and 2000 MBps, configurable in both the UI and CLI.
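As a rough sketch of the CLI side, the reclaim settings can be inspected and changed per datastore with esxcli. The datastore name below is a placeholder, and it is worth verifying the exact option names on your build with the command's built-in help:

# Show the current space-reclamation settings for a datastore
esxcli storage vmfs reclaim config get --volume-label=Datastore01

# Switch to the fixed method at 500 MBps (100-2000 MBps is the allowed range)
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-method=fixed --reclaim-bandwidth=500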
UNMAP for SESparse
SESparse is a sparse virtual disk format used for snapshots in vSphere and is the default snapshot format on VMFS-6. In this release, we are providing automatic space reclamation for VMs with SESparse snapshots on VMFS-6. This only works when the VM is powered on and only affects the top-most snapshot.
Support for 4K native HDD
Customers may now deploy ESXi on servers with 4Kn HDDs used for local storage (SSD and NVMe drives are currently not supported). We are providing a software read-modify-write layer within the storage stack that emulates 512B sector drives, and ESXi continues to expose 512B sector VMDKs to the guest OS. Servers with UEFI firmware can boot from 4Kn drives.
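To check which sector format a local device reports before planning a 4Kn deployment, the device capacity listing in esxcli is one place to look. This is a minimal sketch, and the exact column names may vary by build:

# List device capacity details; the format type column distinguishes 512n, 512e, and 4Kn devices
esxcli storage core device capacity list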
XCOPY enhancement
XCOPY is used to offload storage-intensive operations such as copying, cloning, and zeroing to the storage array instead of the ESXi host. With the release of vSphere 6.7, XCOPY now works with vendor-specific VAAI primitives as well as with any vendor supporting the SCSI T10 standard. Additionally, XCOPY segments and transfer sizes are now configurable.
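The transfer size used for offloaded copies has long been tunable through a host-wide advanced option, shown below as a minimal sketch; the 16384 value is only an illustration, and the new per-array segment tuning in 6.7 should follow your storage vendor's guidance:

# Read the current maximum XCOPY/clone transfer size (in KB)
esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

# Raise it to 16 MB for arrays that benefit from larger transfers (illustrative value)
esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 16384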
vVols enhancements
As VMware continues the development of Virtual Volumes, this release adds support for IPv6 and SCSI-3 persistent reservations. End-to-end IPv6 support enables organizations, including government agencies, to implement vVols in IPv6 environments. SCSI-3 persistent reservations allow disks/volumes to be shared between virtual machines across nodes/hosts, a capability often required by Microsoft WSFC clusters; with this enhancement, those clusters no longer need RDMs!
Increased maximum number of LUNs/Paths (1K/4K LUN/Path)
The maximum number of LUNs per host is now 1024 instead of 512, and the maximum number of paths per host is 4096 instead of 2048. Customers may now deploy virtual machines with up to 256 disks using PVSCSI adapters; each PVSCSI adapter can support up to 64 devices, which can be virtual disks or RDMs. A major change in 6.7 is the increased number of LUNs supported for Microsoft WSFC clusters: the number increased from 15 to 64 disks per adapter, PVSCSI only. This raises the number of LUNs available to a VM running Microsoft WSFC from 45 to 192.
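A quick way to see how close a host is to these limits is to count devices and paths from the ESXi shell. The grep patterns below assume the default esxcli output fields and may need adjusting per build:

# Count the SCSI devices (LUNs) visible to this host (new maximum: 1024)
esxcli storage core device list | grep -c "Devfs Path"

# Count the total paths on this host (new maximum: 4096)
esxcli storage core path list | grep -c "Runtime Name"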
VMFS-3 EOL
Starting with vSphere 6.7, VMFS-3 will no longer be supported. Any volume/datastore still using VMFS-3 will automatically be upgraded to VMFS-5 during the installation of, or upgrade to, vSphere 6.7. Any new volume/datastore created going forward will use VMFS-6 as the default.
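Before upgrading, it is worth confirming which VMFS version each datastore is running; a minimal check from the ESXi shell:

# List mounted filesystems; the Type column shows VMFS-3, VMFS-5, or VMFS-6
esxcli storage filesystem list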
Support for PMEM /NVDIMMs
Persistent Memory, or PMem, is a type of non-volatile memory (NVDIMM) that has the speed of DRAM but retains its contents through power cycles. It is a new layer that sits between NAND flash and DRAM, providing faster performance than flash while remaining non-volatile, unlike DRAM.
Intel VMD (Volume Management Device)
With vSphere 6.7, there is now native support for Intel VMD technology to enable the management of NVMe drives. This technology was introduced as an installable option in vSphere 6.5. Intel VMD currently enables hot-swap management as well as NVMe drive LED control, allowing management similar to that of SAS and SATA drives.
RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE)
This release introduces RDMA support for ESXi hosts using RoCE v2. RDMA provides low-latency, high-throughput interconnects with CPU offload between the endpoints. If a host has RoCE-capable network adapter(s), this feature is automatically enabled.
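To verify what the host has detected, the RDMA namespace in esxcli lists the devices and their paired uplinks; a minimal sketch:

# List RDMA-capable devices, their state, and associated vmnics
esxcli rdma device list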
Para-virtualized RDMA (PV-RDMA)
In this release, ESXi introduces PV-RDMA for Linux guest operating systems with RoCE v2 support. PV-RDMA enables customers to run RDMA-capable applications in virtualized environments. PV-RDMA-enabled VMs can also be live migrated.
iSER (iSCSI Extension for RDMA)
Customers may now deploy ESXi with external storage systems supporting iSER targets. iSER takes advantage of faster interconnects and CPU offload using RDMA over Converged Ethernet (RoCE). We are providing the iSER initiator function, which allows the ESXi storage stack to connect with iSER-capable target storage systems.
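Enabling the initiator is a one-time step from the ESXi shell; a minimal sketch, after which the adapter appears alongside the other storage adapters:

# Enable the VMware iSER initiator, which creates an iSER vmhba
esxcli rdma iser add

# Confirm the new adapter is present
esxcli storage core adapter list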
SW-FCoE (Software Fibre Channel over Ethernet)
In this release, ESXi introduces a software-based FCoE (SW-FCoE) initiator that can create FCoE connections over Ethernet controllers. The VMware FCoE initiator works on a lossless Ethernet fabric using Priority-based Flow Control (PFC) and can operate in Fabric and VN2VN modes. Please check the VMware Compatibility Guide (VCG) for supported NICs.
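Activating the software initiator on a capable NIC follows the same pattern as the other esxcli storage namespaces; the vmnic name below is a placeholder:

# Show FCoE-capable NICs on this host
esxcli fcoe nic list

# Create a software FCoE adapter on a capable NIC
esxcli fcoe nic discover -n vmnic2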
It is plain to see why vSphere 6.7 is such a major release, with so many new storage-related improvements and features. These are just the highlights; more detail may be found by heading over to core.vmware.com and reviewing the vSphere 6.7 Core Storage section.