Traditional vSAN 2 Node configurations require connectivity between the vSAN tagged VMkernel ports on the data nodes and the vSAN Witness Appliance’s vSAN tagged VMkernel port.
Also remember that vSAN traffic will use the default gateway of the Management VMkernel interface to reach the Witness, unless additional networking is put in place, such as advanced switch configuration or static routing on the ESXi hosts and/or the Witness Appliance.
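As a hedged example, suppose the data nodes’ vSAN VMkernel ports sit on 192.168.110.0/24 with a router at 192.168.110.1, and the Witness Appliance’s vSAN network is 192.168.109.0/24 (all of these addresses are made up for illustration). A static route on each data node could then be added and verified like this:

esxcli network ip route ipv4 add -g 192.168.110.1 -n 192.168.109.0/24
esxcli network ip route ipv4 list

A matching route back to the data nodes’ vSAN network would also be needed on the Witness Appliance.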
New in 6.5
VMware vSAN 6.5 supports the ability to directly connect two vSAN data nodes using one 10Gb networking cable or, preferably, two connections for redundancy.
This is accomplished by tagging an alternate VMkernel port with a traffic type of “Witness.” The data and metadata communication paths can now be separated, with metadata traffic destined for the Witness vSAN VMkernel interface sent through an alternate VMkernel port.
I like to call this “Witness Isolated Traffic” (or WIT), but I think we are calling it “Witness Traffic Separation” (or WTS).
With the ability to directly connect the vSAN data network across hosts, and send witness traffic down an alternate route, there is no requirement for a high-speed switch for the data network in this design.
This lowers the total cost of infrastructure to deploy 2 Node vSAN. This can be a significant cost savings when deploying vSAN 2 Node at scale.
The How
To use a VMkernel port for vSAN today, it must be tagged for “vsan” traffic. This is easily done in the vSphere Web Client. To tag a VMkernel interface for “Witness” traffic, however, today it has to be done at the command line.
To add a new interface with Witness traffic as the traffic type, the command is:
esxcli vsan network ipv4 add -i vmkX -T=witness
We can also configure a new interface for vSAN data traffic using this command (rather than using the Web Client):
esxcli vsan network ipv4 add -i vmkX -T=vsan
To see what the vSAN network configuration looks like on a host, we use:
esxcli vsan network list
On a host configured this way, the output shows each tagged VMkernel interface along with its traffic type.
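Here is a rough sketch of what that output can look like; the UUIDs, multicast details, and exact fields vary by host and build, so treat the values below purely as placeholders:

Interface
   VmkNic Name: vmk0
   IP Protocol: IP
   Interface UUID: (host specific)
   Multicast TTL: 5
   Traffic Type: witness

Interface
   VmkNic Name: vmk2
   IP Protocol: IP
   Interface UUID: (host specific)
   Multicast TTL: 5
   Traffic Type: vsan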
Notice that vmk0, the management VMkernel interface in this example, has Witness traffic assigned.
In the example shown, vmk0 on each data node requires connectivity to the VMkernel port vmk1 on the Witness Appliance. The vmk2 interface on each data node could be directly connected.
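To sanity-check that path, a quick vmkping from the witness-tagged interface on a data node to the Witness Appliance’s vSAN IP is handy (the address here is made up for illustration):

vmkping -I vmk0 192.168.109.23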
Also keep in mind that it is cleaner and easier to have the direct-connected NICs on their own vSphere Standard Switch, or possibly a vSphere Distributed Switch. Remember that vSAN includes the vSphere Distributed Switch feature, regardless of which version of vSphere you are entitled to.
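As a rough sketch of doing that from the command line (the switch, portgroup, NIC, and IP values below are all made up for illustration, and the same can be done in the Web Client), creating a dedicated standard switch for a direct-connected NIC might look like:

esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitch1 -p vSAN-Direct
esxcli network ip interface add -i vmk2 -p vSAN-Direct
esxcli network ip interface ipv4 set -i vmk2 -I 172.16.0.1 -N 255.255.255.0 -t static

After that, vmk2 would be tagged for vSAN traffic as shown earlier.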
If we have dual NICs for redundancy, we could possibly assign vMotion traffic to the other 10GbE NIC. Couple that with Network I/O Control (NIOC), and we can help ensure that both vSAN and vMotion have enough resources in the event of one of the two NICs failing.
Here’s a video I put together demonstrating the vSAN 2 Node Direct Connect feature:
I’m in the process of updating the Stretched Cluster & 2 Node guide for vSAN 6.5 to include this feature, along with some recommendations for it, as well as some more new content. Stay tuned.