Hello and welcome to the Virtual SAN Troubleshooting blog series. This series of articles is dedicated to, and driven by, requests from you, our readers. Today we will be focusing on one of our most requested troubleshooting topics: the Layer 2 multicast functionality from the Virtual SAN networking requirements.
You are probably familiar with the Virtual SAN networking requirement for Layer 2 multicast, but today we would like to discuss why Virtual SAN leverages multicast forwarding for a portion of its network traffic, as well as provide troubleshooting steps for when multicast traffic does not appear to be received by the Virtual SAN VMkernel ports. The goal of this article is to educate the networking novice as well as provide clarification for the networking expert, so we will be taking a thorough, ground-up approach to our discussion.
If you need to jump directly to the testing examples, skip ahead to the Testing Multicast Functionality section below. You will also want to make sure that you are following the guidelines below.
Virtual SAN Multicast Guidelines
- Layer 2 multicast (IGMP v2) enabled for the VSAN VMkernel network
- Layer 3 multicast is not required
- VSAN VMkernel multicast traffic should be isolated to a layer 2 non-routable VLAN
- We do not recommend implementing multicast flooding across all ports as a best practice.
- IGMP snooping and an IGMP Querier can be used to limit multicast traffic to a specific set of switch ports. This is beneficial if other, non-Virtual SAN network devices exist on the same layer 2 network segment (VLAN). If only Virtual SAN VMkernel ports exist on a particular VLAN, IGMP snooping will not offer any benefit. Note: An IGMP Querier is necessary to maintain the group membership tables in order for IGMP snooping to function.
- Two Virtual SAN clusters can exist peacefully on the same layer 2 network segment; however, both clusters will receive all of the multicast traffic from the other cluster. If a networking issue arises in this scenario (e.g. vCenter displaying “Network Status: Misconfiguration detected”) the Ruby vSphere Console command “vsan.reapply_vsan_vmknic_config” may resolve the issue (click here for the RVC blog series for instructions on accessing RVC).
As a suggestion for performance optimization, if two Virtual SAN clusters do exist on the same layer 2 network segment, modifying the multicast addresses for one of the clusters will reduce the amount of multicast traffic received for each Virtual SAN cluster and possibly resolve the “Network Status: Misconfiguration detected” message as well. For instructions on modifying the multicast addresses of the cluster, please see VMware KB 2075451: Changing the multicast address used for a VMware Virtual SAN Cluster.
- Use tcpdump-uw and nc to validate that each Virtual SAN node can send and receive multicast traffic.
For a complete list of Virtual SAN networking requirements please check out the Virtual SAN Networking Requirements and Best Practices from the VMware vSphere 5.5 Documentation Center.
Network Datagram Forwarding Schemes
Forwarding schemes for network datagrams differ in their delivery methodologies. Below you will find a list of the most common network forwarding schemes:
- Unicast delivers a message to a single specific network host
- Broadcast delivers a message to all hosts in a network
- Multicast delivers a message to a group of hosts that have expressed interest in receiving the message
- Anycast delivers a message to anyone out of a group of hosts, typically the one nearest to the source
- Geocast delivers a message to a geographic area
Unicast Network Transmission
Unicast forwarding is the predominant delivery mechanism used for sending network datagrams (network traffic) over IP-based networks. A datagram is simply a basic transfer unit of network traffic associated with packet-switched networks. In unicast, network datagrams are intended for delivery to a single network destination that is identified by a unique network address. Though network traffic that is sent via unicast may be forwarded across multiple devices to reach the intended receiver, the forwarding devices simply read the header of the packet, not the data payload, in order to identify the destination address for proper forwarding.
Think of unicast as someone having a conversation with a friend. You may converse face-to-face or through other media such as telephone, email, or smoke signals; however, the intended destination is a single, specific recipient.
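The one-to-one nature of unicast can be sketched in a few lines of Python with a plain UDP socket. This is purely illustrative (loopback only, not a VSAN command): one datagram is addressed to exactly one (address, port) destination.

```python
import socket

# Minimal unicast exchange: one datagram, one specific recipient identified
# by a unique (address, port) pair. Loopback only; illustrative sketch.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
receiver.settimeout(2)
dest = receiver.getsockname()        # the unique network address of the recipient

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", dest)        # addressed to this single destination only

data, _ = receiver.recvfrom(1024)
print(data)                          # b'hello'
sender.close()
receiver.close()
```

A second recipient would require a second, separate sendto() call carrying a duplicate copy of the data, which is exactly the cost discussed next.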
Each network datagram forwarding scheme excels in its own unique area. Unicast forwarding is meant to be a more secure and resource friendly solution for delivering network traffic. This is possible because the data is sent directly to a single host rather than to all hosts everywhere.
Unicast does have its challenges though. Since unicast creates a one-to-one connection with the intended recipient, network traffic intended for mass-distribution can be very costly in terms of computing resources and bandwidth consumption. Each recipient of the data will require a separate network connection that consumes computing resources on the sending host and requires its own separate network bandwidth for the duplicate transmission. Streaming media to multiple recipients presents a very challenging use case for unicast as large quantities of duplicate data are being sent to multiple recipients.
Think of this as having to have the same conversation with a group of friends. Wouldn’t it be easier if you could just let everyone know at the same time?
Multicast Network Transmission
Where unicast has challenges, multicast excels. Multicast forwarding is a one-to-many or many-to-many distribution of network traffic (as opposed to unicast’s one-to-one forwarding scheme). Rather than using the network address of the intended recipient for its destination address, multicast uses a special destination address to logically identify a group of receivers.
Since multicasting allows the data to be sent only once, it frees up computing resources on the host that would otherwise be required to send individual streams of duplicate data. Leveraging switching technology to repeat the data message to the members of the multicast group is far more efficient for the host than sending each copy individually.
Multicast with IGMP Snooping and an IGMP Querier
Layer 2 multicast forwarding, without IGMP snooping and an IGMP Querier enabled, is essentially a layer 2 network broadcast. Each network device attached to an active network port will receive the multicast network traffic.
IGMP Snooping and an IGMP Querier can be leveraged to constrain the IPv4 multicast traffic to only those switch ports that have devices attached that request it. This will avoid causing unnecessary load on other network devices in the layer 2 segment by requiring them to process packets that they have not solicited (similar to a denial-of-service attack).
How does Multicast benefit Virtual SAN network traffic?
Virtual SAN uses the Cluster Monitoring, Membership, and Directory Service (CMMDS) to make particular metadata available to each host in the cluster. CMMDS is designed to be a highly available, performant, and network-efficient service that shares information regarding hosts, networks, disks, objects, components, etc. among all of the hosts within the Virtual SAN cluster.
Distributing this data amongst all of the hosts and keeping each host synchronized could potentially consume a considerable amount of compute resources and network bandwidth. Each host is intended to contain an identical copy of this metadata which means, if we were using general unicast forwarding for this traffic, there would be constant duplicate traffic being sent to all of the hosts in the cluster.
Virtual SAN leverages layer 2 multicast forwarding for the discovery of hosts and to optimize network bandwidth consumption for the metadata updates from the CMMDS service (storage traffic is always unicast). This eliminates the computing resource and network bandwidth penalties that unicast imposes in order to send identical data to multiple recipients.
The bandwidth required for these updates depends upon the actual deployment (quantity of hosts, VMs, objects, etc.); however, here is a speculative example using nice round numbers for easier calculations.
Consider a 32 node cluster with a considerably dense population of virtual machines. CMMDS updates could potentially consume up to 100Mbit through spikes in network traffic, with possibly an average of 10Mbit sustained throughput. Using unicast forwarding, this would require 3.2Gbit (100Mbit * 32 = 3200Mbit = 3.2Gbit) just for the transfer of metadata. The bandwidth required for storage traffic would be added on top of this number.
By leveraging multicast forwarding for metadata updates, Virtual SAN is able to decrease the 3.2Gb this scenario requires, down to 100Mb.
(Again this scenario is speculative in order to illustrate the concept, actual numbers vary depending upon deployment).
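The arithmetic above can be checked with a few lines of Python. The figures are the illustrative numbers from the scenario, not measurements:

```python
# Illustrative numbers from the scenario above (not measurements):
hosts = 32
peak_mbit = 100                      # per-stream CMMDS traffic spike, in Mbit/s

unicast_total = peak_mbit * hosts    # every host needs its own duplicate stream
multicast_total = peak_mbit          # the data is sent onto the wire once

print(unicast_total)                 # 3200 Mbit/s, i.e. 3.2 Gbit/s
print(multicast_total)               # 100 Mbit/s
```

The savings scale linearly with cluster size: doubling the host count doubles the unicast cost while the multicast cost stays flat.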
Testing Multicast Functionality
When enabling Virtual SAN, vCenter may display a “Misconfiguration detected” network status for the cluster. If you receive this error, you will want to validate that each host can successfully receive multicast network traffic from each other host in the cluster using the default Virtual SAN multicast group addresses.
1. Identify Virtual SAN VMkernel Port
First we will want to identify which VMkernel is used by Virtual SAN:
~ # esxcfg-vmknic -l
Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type
vmk0 Management Network IPv4 10.144.97.177 255.255.255.0 10.144.97.255 a0:d3:c1:03:9b:a8 1500 65535 true STATIC
vmk0 Management Network IPv6 fe80::a2d3:c1ff:fe03:9ba8 64 a0:d3:c1:03:9b:a8 1500 65535 true STATIC, PREFERRED
vmk1 VSAN IPv4 10.144.102.177 255.255.255.0 10.144.102.255 00:50:56:66:4c:4a 1500 65535 true STATIC
vmk1 VSAN IPv6 fe80::250:56ff:fe66:4c4a 64 00:50:56:66:4c:4a 1500 65535 true STATIC, PREFERRED
2. Monitor VSAN VMKernel Port network traffic
Next we simply monitor the Virtual SAN VMkernel for any multicast network traffic. If each host in the cluster sees multicast traffic across their respective VMkernel interfaces with the default Virtual SAN multicast group addresses, then multicast traffic is successfully traversing the network segment as it should.
The default multicast group addresses for Virtual SAN are:
224.1.2.3 Port: 12345
224.2.3.4 Port: 23451
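As a quick sanity check, any valid IPv4 multicast group address must fall inside the reserved 224.0.0.0/4 block. Python's standard ipaddress module (illustrative, not an ESXi command) can verify an address you plan to configure:

```python
import ipaddress

# IPv4 multicast group addresses live in the reserved 224.0.0.0/4 block.
# Check the default Virtual SAN group addresses:
for addr in ("224.1.2.3", "224.2.3.4"):
    print(addr, ipaddress.ip_address(addr).is_multicast)   # True for both
```

This is useful when changing the cluster's multicast addresses per VMware KB 2075451, since an address outside 224.0.0.0/4 would not be treated as multicast at all.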
Here are two ways to monitor for multicast network traffic:
A) Using “esxcli network ip connection list” to identify active connections with the default multicast group addresses.
~ # esxcli network ip connection list | egrep 224
udp 0 0 224.1.2.3:12345 0.0.0.0:0 34062 hostd-worker
udp 0 0 224.2.3.4:23451 0.0.0.0:0 34062 hostd-worker
B) Using tcpdump-uw to collect packet traces to troubleshoot network issues. The following options are useful:
-i = interface
-n = no IP or port name resolution
-s0 = collect the entire packet
-t = no timestamp
-c = number of frames to capture
For example, to capture 20 frames of CMMDS traffic on the VSAN VMkernel interface identified in step 1 (vmk1 in our example):
~ # tcpdump-uw -i vmk1 -n -s0 -t -c 20 udp port 23451
3. Generate Multicast traffic (*Optional)
You can use the nc command (netcat) to generate UDP traffic in order to test port connectivity.
The syntax of the nc command is:
# nc -uz <destination-ip> <destination-port>
Here is what it will look like when run:
~ # nc -uz 224.1.2.3 12345
Connection to 224.1.2.3 12345 port [udp/*] succeeded!
Note: Netcat includes an option to test UDP connectivity with the -uz flags, but because UDP is a connectionless protocol, it will always report ‘succeeded’ even when ports are closed or blocked. Use it in combination with tcpdump-uw on the remote host to validate that the network traffic generated with nc (netcat) was successfully received.
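The connectionless behavior behind that caveat can be demonstrated with a short Python sketch (loopback only, illustrative): a UDP send "succeeds" as soon as the datagram leaves the socket, whether or not anything is listening at the destination.

```python
import socket

# UDP is connectionless: sendto() returns successfully as soon as the
# datagram leaves the socket, whether or not a listener exists at the
# destination. This is why `nc -uz` reports success against closed ports.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"vsan-connectivity-probe"              # arbitrary test payload
sent = sock.sendto(payload, ("127.0.0.1", 12345)) # no listener required here
print(sent == len(payload))                       # True either way
sock.close()
```

This is exactly why the packet trace on the receiving host, not the sender's exit status, is the authoritative evidence that multicast traffic arrived.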
That concludes our guided tour through Virtual SAN’s usage of multicast, its benefits, as well as troubleshooting steps in the event something goes awry. In the next Virtual SAN Troubleshooting blog we will show how to create a script to automate multicast group address changes in the event you have need. Happy troubleshooting!