
New Design and Operation Considerations for vSAN 2-Node Topologies

A 2-node vSAN topology helps our customers address use cases where constraints in connectivity, performance, or cost call for compute and storage at remote or edge locations. It is a powerful topology, and one of the reasons why we see so many 2-node deployments across all types of scenarios, including retail, healthcare, and tactical environments.

An interesting aspect of 2-node cluster design is that while the cluster is technically small, there can often be a requirement for very high levels of durability. These 2-node clusters often live in difficult environments, yet power very important workloads. With recent advancements in vSAN, it is worth revisiting the design and operation of environments using 2-node vSAN clusters. Whether you have one cluster or several thousand, the considerations below may help your organization take full advantage of a vSAN 2-node topology.

Design Considerations

A good design leads to good outcomes. Let’s take a look at new considerations in the design of 2-node clusters when using vSAN 7 U1.

VM uptime under strained capacity conditions. A feature introduced in vSAN 7 to improve the uptime of VMs in a stretched cluster where one site is in a capacity-strained condition can also improve the resilience of 2-node clusters. Imagine a 2-node cluster powering a VM with a storage policy of FTT=1, mirroring the data across the two hosts. In this example, each host has two disk groups, and a disk group on one host fails. In versions prior to vSAN 7, if the host that lost the disk group came too close to the capacity threshold of its remaining disk group, the capacity-strained condition could prevent subsequent I/O from being processed for the VM, even though the other host still had sufficient capacity. vSAN 7 and newer increase the uptime of the VM by proactively marking the object as absent on the capacity-constrained host and redirecting I/O to the host with available capacity. The VM continues to process I/O non-disruptively, albeit in a less resilient state.

Figure 1. Improved uptime of a capacity-constrained scenario with a 2-node vSAN cluster

Recommendation: For 2-node environments that need the very best resilience, use a host design of at least 3 disk groups. This reduces the percentage of capacity lost upon a disk group failure: with two disk groups per host, losing one removes half of the host's capacity, while with three it removes only a third, as the sketch below illustrates.
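To put numbers on it, here is a minimal Python sketch (illustrative only, assuming equally sized disk groups; the figures are not from this post) showing how the disk group count changes the share of host capacity lost to a single disk group failure:

```python
# Illustrative only: fraction of a host's capacity lost when one of its
# disk groups fails, assuming equally sized disk groups.
def capacity_lost_fraction(disk_groups: int) -> float:
    """Fraction of a host's raw capacity removed by one disk group failure."""
    return 1.0 / disk_groups

for dgs in (2, 3, 4):
    print(f"{dgs} disk groups: a single disk group failure removes "
          f"{capacity_lost_fraction(dgs):.0%} of host capacity")
```

The more disk groups per host, the more headroom remains before the capacity-strained behavior described above is triggered.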

Space Efficiency through the new “Compression only” capability. This new feature offers cluster-level space efficiency for all-flash clusters and is an alternative to deduplication and compression (DD&C). Unlike DD&C, where a capacity device failure affects the entire disk group, a failure of a capacity device in a cluster using “Compression only” will impact only the discrete device that failed. This makes it a good choice for 2-node environments.

Figure 2. Comparing the failure domain of a capacity device failure in vSAN 7 U1
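For those who automate cluster configuration, here is a hedged sketch of enabling “Compression only” with the vSAN Management SDK for Python (pyVmomi plus the vsanapiutils helper that ships with the SDK). The managed object key, spec types, and task-conversion helper follow VMware's published SDK samples; verify them against your SDK version, and treat the hostname, credentials, and cluster name as placeholders.

```python
# Sketch: enable "Compression only" (compression without deduplication) on an
# all-flash vSAN 7 U1 cluster. Assumes the vSAN Management SDK for Python,
# which provides vsanapiutils alongside pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import vsanapiutils

context = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='changeme', sslContext=context)

def find_cluster(content, name):
    # Walk the inventory for a cluster by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.Destroy()

cluster = find_cluster(si.RetrieveContent(), '2node-cluster-01')

# vSAN management objects live on a separate endpoint from the main vCenter SDK.
vcMos = vsanapiutils.GetVsanVcMos(si._stub, context=context)
configSystem = vcMos['vsan-cluster-config-system']

# compressionEnabled without dedupEnabled is what the UI calls "Compression only".
spec = vim.vsan.ReconfigSpec(
    modify=True,
    dataEfficiencyConfig=vim.vsan.DataEfficiencyConfig(
        dedupEnabled=False, compressionEnabled=True))

vsanTask = configSystem.VsanClusterReconfig(cluster, spec)
vcTask = vsanapiutils.ConvertVsanTaskToVcTask(vsanTask, si._stub)
vsanapiutils.WaitForTasks([vcTask], si)
Disconnect(si)
```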

Shared witness for 2-node clusters. For larger environments running more than one 2-node vSAN cluster, the new shared witness feature offers the potential to dramatically reduce resource consumption compared to previous versions. Not only is resource consumption reduced in the cluster hosting the witness appliance(s), but the operations to deploy and maintain the appliances are simplified. Two important design decisions should be factored in when determining the best use of a shared witness:

  1. What consolidation ratio makes the most sense for the environment?
  2. Which 2-node environments should use a common witness host appliance?

Answering these two questions prudently will help minimize the impact of losing a shared witness host should an unplanned issue arise.

Figure 3. Selection options for 2-node clusters when using a shared witness host appliance

Recommendation: Balance the desire to consolidate witness host appliances with the implications of increasing the dependency domain for the 2-node clusters using a shared witness. While you can share a single witness with up to 64 2-node clusters, a smaller consolidation ratio – which would reduce the size of the dependency domain – may be more appropriate for your particular business requirements and risk tolerance.
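As a planning aid for the two questions above, the standalone Python sketch below (no vSAN APIs involved; all names are hypothetical) groups a fleet of 2-node clusters onto shared witnesses at a chosen consolidation ratio, making the size of each dependency domain explicit:

```python
# Hypothetical planning helper: assign 2-node clusters to shared witness
# appliances at a chosen consolidation ratio. Smaller ratios mean more
# witness appliances, but a smaller dependency domain per witness.
from typing import Dict, List

MAX_SUPPORTED_RATIO = 64  # vSAN 7 U1 supports up to 64 clusters per shared witness

def plan_witness_assignment(clusters: List[str], ratio: int) -> Dict[str, List[str]]:
    """Chunk clusters into groups of at most `ratio` per shared witness."""
    if not 1 <= ratio <= MAX_SUPPORTED_RATIO:
        raise ValueError(f"ratio must be between 1 and {MAX_SUPPORTED_RATIO}")
    return {
        f"witness-{i // ratio + 1:02d}": clusters[i:i + ratio]
        for i in range(0, len(clusters), ratio)
    }

# Example: 120 retail sites, consolidated 16 to a witness instead of 64.
sites = [f"store-{n:03d}" for n in range(1, 121)]
for witness, members in plan_witness_assignment(sites, ratio=16).items():
    print(f"{witness}: {len(members)} clusters in this dependency domain")
```

Dropping the ratio from 64 to 16, for example, multiplies the witness count but means an unplanned witness issue touches at most 16 clusters instead of 64.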

2-node Direct-connect using the latest hardware. With 2-node environments, host networking is sometimes an afterthought. The conventional wisdom is that 10Gb uplinks are more than enough for a 2-node direct-connect configuration. While that may be the case in some environments, the improved performance and capacities of flash devices can shift the bottleneck to the network. 25Gb NICs are now very competitively priced and are a great way to ensure that networking is not the bottleneck. 25Gb networking not only helps application performance but also improves resynchronization times, and less time to restore full policy compliance translates to a more resilient platform.
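As a rough, back-of-the-envelope illustration (assumed numbers, not vSAN measurements), line rate alone shows why resynchronization windows shrink with faster NICs:

```python
# Back-of-the-envelope estimate of the time to resynchronize a given amount
# of data at different link speeds. Real resync throughput depends on device
# performance, resync throttling, and workload contention.
def resync_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move data_tb terabytes over a link_gbps link at a given efficiency."""
    data_bits = data_tb * 8e12            # TB -> bits (decimal units)
    usable_bps = link_gbps * 1e9 * efficiency
    return data_bits / usable_bps / 3600

for gbps in (10, 25):
    print(f"{gbps} Gb/s: ~{resync_hours(20, gbps):.1f} h to resync 20 TB")
```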

Deployment and Operation Considerations

The most helpful configuration improvements are those that make operations easier and reduce the potential for mistakes. Here are a few improvements to be aware of as you review your current operational procedures for 2-node environments.

2-node deployment using default gateway override. The default gateway override feature in vSAN 7 U1 makes deployments easier and less error-prone. By eliminating the need to create static routes on each host participating in 2-node and stretched clusters, it avoids one of the most common configuration issues in these environments.
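For those scripting host network configuration, one possible way to set a per-VMkernel-adapter gateway override through the standard vSphere API (pyVmomi) is sketched below. This is an assumption-laden illustration: the vmk device name, gateway address, and connection details are placeholders, and you should verify the ipRouteSpec property against your vSphere version before relying on it.

```python
# Sketch: set a default gateway override on the VMkernel adapter used for
# vSAN witness traffic, via the standard vSphere API (pyVmomi). The device
# name vmk1, the gateway address, and the host/credentials are illustrative.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='changeme', sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')
view.Destroy()

netSystem = host.configManager.networkSystem
# Reuse the adapter's existing spec and add the gateway override to it.
vnic = next(v for v in netSystem.networkInfo.vnic if v.device == 'vmk1')
spec = vnic.spec
spec.ipRouteSpec = vim.host.VirtualNic.IpRouteSpec(
    ipRouteConfig=vim.host.IpRouteConfig(defaultGateway='192.168.110.1'))
netSystem.UpdateVirtualNic('vmk1', spec)
Disconnect(si)
```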

Automated immediate resync/repair after replacement of a witness host appliance. Before vSAN 7, a common operational procedure when replacing a witness host appliance in 2-node and stretched clusters was to manually force a “Repair objects immediately” operation to regain full compliance as quickly as possible; otherwise, the new witness host would not initiate a metadata resync until the 60-minute delay timer expired. In vSAN 7 and newer, a metadata repair is invoked automatically by the “Replace Witness” workflow in the UI, minimizing the window of reduced resilience during a witness host replacement.

Improved operations during EMM conditions for vLCM-enabled 2-node clusters. For 2-node clusters using VUM, HA must be turned off before the cluster remediation process can take place, and re-enabled after the remediation completes. With vLCM, these steps are no longer necessary, which simplifies the update process for 2-node environments. vLCM can remediate up to 64 2-node clusters concurrently, reducing the time needed to update environments running large numbers of 2-node clusters.

Updates to the witness host appliance. As of vSAN 7 U1, the witness host appliance should be upgraded prior to upgrading the hosts in the cluster that uses it. This maintains backward compatibility and is an operational change from past versions of vSAN. Note that the witness appliance must be updated using VUM; the use of vLCM for updating the witness host appliance is not supported at this time. For more detailed guidance, see the post: Upgrading 2-node vSAN Clusters from 6.7U3 to 7U1 with a Shared Witness Host.

Summary

For 2-node environments, design options have expanded, and operations are more efficient thanks to the recent improvements in vSAN. This makes it a great time to introduce these new capabilities into your own environment. Also be sure to check out our recently refreshed vSAN Operations Guide and vSAN Design Guide on core.vmware.com, our new portal for the very latest content related to VMware Cloud Foundation, VMware vSphere, and VMware vSAN.

@vmpete