vSAN 6.7 – Introducing WSFC support on vSAN!


In vSAN 6.5 we introduced the concept of iSCSI targets hosted on a vSAN cluster. This allowed physical hosts to connect to a vSAN datastore and take advantage of SPBM and other vSAN features, such as deduplication, compression, encryption and QoS, for machines external to the virtualized cluster. This extended vSAN's capabilities from the VMs hosted on vSAN clusters to physical hosts, and provided an excellent method for hosting shared disks used by applications such as physical Oracle RAC instances.


What’s new in vSAN 6.7?

vSAN 6.7 introduces support for Windows Server Failover Clusters (WSFC) using the vSAN iSCSI target service. If you currently host WSFC instances that use RDMs for shared disks, in use cases such as quorum disks, SQL Server Failover Cluster Instances (FCI) and Scale-Out File Server (SOFS), those workloads can now be migrated fully to vSAN without the use of RDMs.

As of this release, fully transparent failover of LUNs is now possible with the iSCSI service for vSAN when used in conjunction with WSFC. This feature is incredibly powerful as it can protect against scenarios in which the host that is serving a LUN's I/O fails. This failure might occur for any reason: power loss, hardware failure or link loss. In these scenarios, the I/O path will now transparently fail over to another host with no impact to the application running in the WSFC.


Great! So how does it work?

Simply turn on the iSCSI target service in the vSAN UI and create the desired LUNs (I will use a SQL FCI as an example). In this instance, I have created four LUNs, one each for cluster quorum, SQL data, SQL logs and SQL backups with sizes of 20, 100, 150 and 200GB respectively.
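The UI is the documented path for this, but the same setup can also be scripted. The sketch below uses the `esxcli vsan iscsi` namespace; the target alias and the exact option names are assumptions for illustration, so verify them with `esxcli vsan iscsi --help` on your build before relying on them.

```shell
# Sketch only: enable the vSAN iSCSI target service on a host in the cluster.
# Option names are approximate; confirm with `esxcli vsan iscsi --help`.
esxcli vsan iscsi status set --enabled true

# Create a target (alias "sqlfci" is a placeholder for this example),
# then carve out the four LUNs described above
esxcli vsan iscsi target add --alias sqlfci
esxcli vsan iscsi target lun add --target-alias sqlfci --lun-id 0 --size 20G    # quorum
esxcli vsan iscsi target lun add --target-alias sqlfci --lun-id 1 --size 100G   # SQL data
esxcli vsan iscsi target lun add --target-alias sqlfci --lun-id 2 --size 150G   # SQL logs
esxcli vsan iscsi target lun add --target-alias sqlfci --lun-id 3 --size 200G   # SQL backups
```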

Back in Windows, I have configured the iSCSI initiator as you would for any other iSCSI target and added each host's iSCSI vmk IP address in the Discovery Portal.
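The Discovery Portal step can equally be scripted with the built-in iSCSI PowerShell module, which is handy when standing up multiple cluster nodes. This is a sketch; the portal addresses below are placeholders for your hosts' iSCSI vmk IPs.

```powershell
# Make sure the Microsoft iSCSI initiator service is running and starts with Windows
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Add each vSAN host's iSCSI vmk address to the Discovery Portal
# (10.0.0.11-13 are placeholder addresses for the example cluster)
"10.0.0.11", "10.0.0.12", "10.0.0.13" | ForEach-Object {
    New-IscsiTargetPortal -TargetPortalAddress $_
}

# The vSAN target IQN(s) should now be visible
Get-IscsiTarget
```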

Given that we support MPIO in Active/Standby mode with iSCSI on vSAN, I have created sessions for each of those targets on the initiator. Note the different path IDs listed for the same target and the number of failover paths available.
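Enabling MPIO and logging in with multiple sessions can also be done from PowerShell. A minimal sketch, assuming the target has already been discovered as above; the first command requires a reboot the first time MPIO is enabled.

```powershell
# Enable the Windows MPIO feature (a reboot is required on first enablement)
Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO

# Log in to the discovered target with multipath enabled, persisting
# the session across reboots
Get-IscsiTarget |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

# Inspect the sessions established per target
Get-IscsiSession | Format-Table TargetNodeAddress, SessionIdentifier
```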

Now that we have connected our Windows VMs to the vSAN iSCSI target service and configured MPIO, simply “Auto Configure” the devices and the four LUNs will show up – you will note that the paths include the vendor as “vmware” and the product as “virtual_san”.
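"Auto Configure" in the MPIO control panel corresponds to claiming devices by their vendor and product strings, which can also be done with the MSDSM cmdlets. This is a sketch: the vendor/product values below are taken from what the paths report, but confirm the exact strings (including any padding or casing) with `Get-MSDSMSupportedHW` on your system.

```powershell
# Claim the vSAN iSCSI devices for MPIO by vendor/product ID
# (values assumed from the path details shown above; verify on your system)
New-MSDSMSupportedHW -VendorId "VMware" -ProductId "Virtual SAN"
Update-MPIOClaimedHW -Confirm:$false

# "FOO" (fail over only) matches the Active/Standby support noted above
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO
```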

Format the drives as you would any other drive in Windows with Computer Management and you are ready to install your WSFC!
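The same initialize/partition/format steps can be scripted for all four LUNs at once. A sketch, assuming the new vSAN LUNs are the only raw disks whose friendly name starts with "VMware"; adjust the filter to match what `Get-Disk` reports in your environment.

```powershell
# Bring each raw vSAN LUN online, initialize, partition and format it,
# mirroring the interactive Computer Management steps
Get-Disk |
    Where-Object { $_.FriendlyName -like "VMware*" -and $_.PartitionStyle -eq "RAW" } |
    ForEach-Object {
        Initialize-Disk -Number $_.Number -PartitionStyle GPT
        New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
            Format-Volume -FileSystem NTFS -Confirm:$false
    }
```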


I already have a WSFC using RDMs, can I migrate?

Of course. This process is already documented in the "Migrate to vSAN" section on StorageHub; the process for migrating from RDMs to iSCSI is the same, no matter what the iSCSI target is.


Do you have a detailed implementation guide?

Yes, and it will take you from initial target setup on vSAN, including vmk selection and LUN provisioning, right through to configuring Windows MPIO and the Windows iSCSI initiator.

That guide, as well as all vSAN-related technical content, can be found, as always, on StorageHub.

A reference architecture is also available here that includes details on implementing SQL FCI and Scale-Out File Server on top of WSFC using the vSAN iSCSI Target service. A KB covering the configuration is also available here.


Can I see a demo failover?

Absolutely. Below is a short demo showing the failure of an entire host in a vSAN cluster. The host that fails is serving the I/O for a SQL cluster under heavy load from a database benchmarking application (HammerDB). As illustrated in the video, there is a brief dip in I/O rate, and once the LUN has transparently failed over to another host, transactions resume at their previous rate with no application interruption.

