One of the announcements in the vSphere 6.7 launch that caught my attention was vVols support for WSFC (Windows Server Failover Clustering) using SCSI-3 Persistent Reservations. This is great news for customers still working with RDMs. WSFC uses SCSI-3 Persistent Reservations, which allow multiple nodes to access a shared device (a vVol disk) while simultaneously blocking access from other nodes.
What are SCSI reservations?
SCSI reservations let multiple nodes access a shared disk at the storage level while blocking simultaneous access from all other nodes, preventing data corruption. Each node registers a key with the device, and one registered node holds the reservation; this registration-and-reservation mechanism lets Microsoft WSFC coordinate locking with the storage and allows one node to eject (preempt) another. SCSI-3 Persistent Reservations also survive reboots and support multipathed access to the disk.
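To make the register-and-reserve flow concrete, here is a minimal sketch driven from Python that shells out to the sg_persist utility (part of the Linux sg3_utils package). It is purely illustrative and runs against a LUN outside of any cluster; the device path and the two keys are placeholders, and type 5 (Write Exclusive, Registrants Only) is the reservation type WSFC typically uses.

```python
# Illustrative only: the SCSI-3 Persistent Reservation flow
# (register -> reserve -> preempt) via sg_persist from sg3_utils.
# /dev/sdx and the keys below are placeholders.
import subprocess

DEV = "/dev/sdx"       # placeholder shared device
NODE_A_KEY = "aaa1"    # placeholder registration keys (hex)
NODE_B_KEY = "bbb2"

def sg(*args):
    """Run one sg_persist command against the device and print its output."""
    out = subprocess.run(["sg_persist", *args, DEV],
                         capture_output=True, text=True, check=True)
    print(out.stdout)

# Each node registers its own key with the device.
sg("--out", "--register", f"--param-sark={NODE_A_KEY}")
sg("--out", "--register", f"--param-sark={NODE_B_KEY}")

# Node A takes a Write Exclusive, Registrants Only reservation (type 5):
# every registered node may write; all other initiators are blocked.
sg("--out", "--reserve", f"--param-rk={NODE_A_KEY}", "--prout-type=5")

# Node A ejects node B by preempting its registration.
sg("--out", "--preempt", f"--param-rk={NODE_A_KEY}",
   f"--param-sark={NODE_B_KEY}", "--prout-type=5")

# Read back the remaining keys and the active reservation.
sg("--in", "--read-keys")
sg("--in", "--read-reservation")
```

The preempt at the end is the "eject" described above: the surviving node removes the failed node's registration so it can no longer write to the disk.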
So long RDMs
WSFC is one of the very last use cases for physical mode Raw Device Mappings (pRDMs). Until now, RDMs have been required so that SCSI reservations can be passed directly to the quorum/witness device without being handled or interpreted by the VMkernel I/O stack.
Up until vSphere 6.7, there was only one scenario where you couldn't use vVols for SQL Server: shared storage between SQL Server instances. Now, vVols support in ESXi 6.7 enables up to five WSFC cluster nodes to access the same shared vVol disk.
Setting up WSFC using vVols
The process for setting up WSFC on vVols is pretty straightforward. Follow the guidelines for setting up WSFC on vSphere 6.x, keeping a few vVols-specific guidelines in mind (a prerequisite-check sketch follows this list):
- Be sure to use HW Version 13 or greater on your FCI VMs.
- ESXi 6.7 supports vVols storage with up to 5-node WSFC clusters.
- The storage array must support SCSI persistent operations at the subsidiary LUN level.
- ESXi 6.7 supports vVols storage for Windows Server 2008 SP2 and later releases.
- All hosts must be running ESXi 6.7 or above.
- WSFC on vVols works with any disk type, thin- as well as thick-provisioned.
- The underlying transport protocol can be FC, iSCSI, or FCoE.
- Only cluster-across-boxes (CAB) is supported.
- Cluster-in-a-box (CIB) and mixtures of CAB and CIB are not supported.
- The N+1 cluster configuration, in which virtual machines on one ESXi host act as secondary nodes and the primary node is a physical box, is not supported.
- This feature enables customers to move away from using pass-through RDMs (physical compatibility mode).
- WSFC on vVols supports HA, DRS and vMotion.
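If you want to sanity-check a couple of these prerequisites from a script, here is a minimal sketch using the pyVmomi SDK. The vCenter address, credentials, and FCI node names are placeholders, not anything the feature prescribes.

```python
# Minimal prerequisite check: hardware version >= 13 and ESXi host version.
# Assumes pyVmomi is installed; hostname, credentials, and VM names are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; use valid certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.name not in ("fci-node1", "fci-node2"):     # placeholder node names
        continue
    hw = int(vm.config.version.split("-")[1])          # e.g. "vmx-13" -> 13
    esxi = vm.runtime.host.config.product.version      # e.g. "6.7.0"
    print(f"{vm.name}: HW version {hw} ({'OK' if hw >= 13 else 'too old'}), "
          f"host ESXi {esxi}")

Disconnect(si)
```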
Here’s how it works
Add a SCSI Controller to the first node
- In the vSphere Client, select the newly created virtual machine, right-click, and select Edit Settings.
- Click the New device drop-down menu, select SCSI Controller, and click Add.
- Select the appropriate type of controller, depending on your operating system.
- Set SCSI Bus Sharing to Physical and click OK (a scripted equivalent is sketched below).
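If you would rather script this, here is a hedged pyVmomi sketch of the same change. It assumes `vm` already references the node's VirtualMachine object, and it picks the VMware Paravirtual controller type, which is a common choice for SQL Server rather than a requirement.

```python
# Sketch: add a paravirtual SCSI controller with physical bus sharing.
# Assumes `vm` references the node's vim.VirtualMachine object.
from pyVmomi import vim

controller = vim.vm.device.ParaVirtualSCSIController()
controller.key = -101          # temporary negative key for a new device
controller.busNumber = 1       # bus 0 usually holds the boot disk
controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing

ctrl_spec = vim.vm.device.VirtualDeviceSpec()
ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl_spec.device = controller

task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[ctrl_spec]))
```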
Add a new hard disk to the first node in the failover cluster
- In the vSphere Client, select the newly created virtual machine, right-click and select Edit Settings.
- Click the New device drop-down menu, select New Hard Disk, and click Add.
- Select the disk size.
- Under Disk Provisioning, select either Thick or Thin Provision.
- Expand the New Hard Disk.
- From the Virtual Device Node drop-down menu, select the newly created SCSI controller (for example, SCSI (1:0)). You must select a new virtual device node; you cannot use SCSI 0.
- Click OK. The wizard creates a new hard disk using the newly created SCSI Controller.
Note: Take note of the name of the disk file; you will need it later when attaching the disk to the additional nodes. You can find it by expanding the New Hard Disk and looking under Disk File. (A scripted version of this step is sketched below.)
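Here is the equivalent disk creation as a pyVmomi sketch, continuing from the controller example. The 10 GB size and thin provisioning are illustrative assumptions; the shared-bus controller is located by its bus number.

```python
# Sketch: create a new shared disk at SCSI(1:0) on the shared-bus controller.
# Size and provisioning below are illustrative assumptions.
from pyVmomi import vim

# Find the shared-bus paravirtual controller added in the previous step.
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.ParaVirtualSCSIController)
                  and d.busNumber == 1)

disk = vim.vm.device.VirtualDisk()
disk.key = -102
disk.controllerKey = controller.key
disk.unitNumber = 0                       # SCSI(1:0)
disk.capacityInKB = 10 * 1024 * 1024      # 10 GB placeholder size

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.diskMode = "persistent"
backing.thinProvisioned = True            # thick works on vVols as well
disk.backing = backing

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = disk

task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```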
Add a SCSI Controller to each additional node
- In the vSphere Client, select the additional node's virtual machine, right-click, and select Edit Settings.
- Click the New device drop-down menu, select SCSI Controller, and click Add.
- Select the appropriate type of controller, depending on your operating system.
- Set SCSI Bus Sharing to Physical and click OK.
Add existing hard disk to the additional nodes in the failover cluster
- In the vSphere Client, select the additional node's virtual machine, right-click, and select Edit Settings.
- Click the New device drop-down menu, select Existing Hard Disk, and click Add.
- In Disk File Path, browse to the location of the quorum disk specified for the first node. This is the name you noted earlier under the first node's Disk File.
- Select the same virtual device node you chose for the first virtual machine's shared storage disk (for example, SCSI (1:0)).
- Click OK. The wizard adds the shared vVols disk using the newly created SCSI controller. (A scripted equivalent is sketched below.)
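Scripted, attaching the existing disk to an additional node differs from the creation sketch in just two ways: the backing names the first node's disk file, and there is no create file operation. A sketch, with the datastore path standing in for the Disk File you noted earlier:

```python
# Sketch: attach the first node's shared disk to an additional node.
# Assumes `vm2` references the additional node and its shared-bus
# controller already exists; the datastore path is a placeholder.
from pyVmomi import vim

controller = next(d for d in vm2.config.hardware.device
                  if isinstance(d, vim.vm.device.ParaVirtualSCSIController)
                  and d.busNumber == 1)

disk = vim.vm.device.VirtualDisk()
disk.key = -103
disk.controllerKey = controller.key
disk.unitNumber = 0                       # same SCSI(1:0) as on the first node

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
backing.fileName = "[vvol-datastore] fci-node1/fci-node1_1.vmdk"  # placeholder
backing.diskMode = "persistent"
disk.backing = backing

disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
# No fileOperation: the disk file already exists.
disk_spec.device = disk

task = vm2.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```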
That’s it. Pretty straightforward, and all from within the vSphere Client. And the best part of all: no RDMs.
If you are experiencing timeouts when testing failover, be sure to check your HealthCheckTimeout setting. By default, the Failover Cluster Instance (FCI) is considered unresponsive after 30 seconds; you can increase this value if needed. Changes to the timeout take effect immediately and do not require a restart of the SQL Server resource.
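For reference, here is a minimal sketch that raises the timeout to 60 seconds using the documented T-SQL statement, driven from Python via pyodbc; the server name and ODBC driver are placeholders.

```python
# Sketch: raise the FCI HealthCheckTimeout to 60 seconds (value is in ms).
# Server name and ODBC driver below are placeholders; pyodbc is assumed.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=fci-vnn.example.com;DATABASE=master;"   # FCI virtual network name (placeholder)
    "Trusted_Connection=yes",
    autocommit=True,
)
conn.execute(
    "ALTER SERVER CONFIGURATION "
    "SET FAILOVER CLUSTER PROPERTY HealthCheckTimeout = 60000"
)
conn.close()
```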
As you can see, using vVols is much simpler than setting up RDMs, and the shared vVols disks also support HA, DRS, and vMotion. If you already have RDMs and want to get rid of them, be sure to read Cody Hosterman’s blog post on Migrating RDMs to vVols. Now it’s your turn: give it a try and let me know how it goes. I’m interested in your feedback on this post as well as on the new support for WSFC on vVols.