

VXLAN Series – Multicast usage in VXLAN – Part 3

I covered some basics on multicast in the previous blog entry here. Let's now take a look at how multicast is utilized in VXLAN deployments. During VXLAN configuration, you are required to allocate a multicast address range and also define the number of logical Layer 2 networks that will be created. For more details on the configuration steps, please refer to the VXLAN Deployment Guide.

Ideally, each logical Layer 2 network is associated with one multicast group address. VXLAN can identify roughly sixteen million logical Layer 2 networks using the 24-bit network identifier field in the encapsulation header, but the pool of multicast group addresses is limited (224.0.0.0 to 239.255.255.255). In some scenarios it might therefore not be possible to have a one-to-one mapping of logical Layer 2 networks to multicast group addresses. In such scenarios, the vCloud Networking and Security Manager maps multiple logical networks to a single multicast group address. With the association of multicast groups to logical networks covered, let's take a look at some details of the logical network properties.
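
To make the many-to-one mapping concrete, here is a minimal sketch of how a manager *could* assign a 24-bit network ID to a smaller pool of multicast groups. The function name, pool base, and modulo scheme are illustrative assumptions; the actual allocation logic of the vCloud Networking and Security Manager is not shown here.

```python
import ipaddress

def map_vni_to_group(vni: int, pool_start: str = "239.1.1.100", pool_size: int = 256) -> str:
    """Map a 24-bit VXLAN network ID onto one of `pool_size` multicast addresses.

    With more logical networks than groups, several networks end up sharing
    a group (simple modulo scheme -- an assumption for illustration only).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VXLAN network ID must fit in 24 bits")
    base = ipaddress.IPv4Address(pool_start)
    return str(base + (vni % pool_size))

print(map_vni_to_group(5001))        # group assigned to VXLAN 5001
print(map_vni_to_group(5001 + 256))  # shares the same group under this modulo scheme
```

With a pool of 256 groups, VXLAN 5001 and VXLAN 5257 land on the same multicast address, which is exactly the shared-group scenario discussed above.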

Logical Layer 2 networks can span all the hosts managed by a vCenter Server. Virtual machines connected to a logical network behave as if they are connected to a single broadcast domain (equivalent to being on the same VLAN). For example, in the diagram below, VXLAN 5001 is a logical Layer 2 network that spans four hosts. Virtual machines running on Host 1 and Host 4 are connected to the same logical network (VXLAN 5001).

The question is how broadcast traffic is handled on the logical network. Any broadcast packet from a device connected to the logical network should reach all the devices on that network. For example, in the diagram below, if virtual machine 1 on Host 1 sends a broadcast packet, that packet has to reach the virtual machine running on Host 4. As you can see, the packet has to traverse the VTEPs and the physical network to get there.

There are a few communication options (as discussed in Part 2) available to the VTEP on Host 1 when it comes to delivering a broadcast packet from the logical network: unicast, broadcast, or multicast. Multicast is much more efficient in utilizing the resources of the physical network, so it is the option used when sending broadcast packets from the logical network.

VXLAN deployment with one logical network

Let’s take a look in detail how the packet flows through the VTEP and the physical network.

We will take the same example of one logical network spanning four hosts. The physical topology provides a single VLAN 2000 to carry the VXLAN transport traffic. In this case, only IGMP snooping and an IGMP querier are configured in the physical network. As we saw with multicast operation here, a few things have to happen before the physical network devices can handle multicast packets.

IGMP Packet flows

In the diagram above, the numbered blue circles indicate the packet flow:

  1. Virtual machine (MAC1) on Host 1 is connected to the logical Layer 2 network VXLAN 5001 and is powered on.
  2. The VTEP on Host 1 sends an IGMP join message to the network and joins the multicast group 239.1.1.100, which is associated with the VXLAN 5001 logical network.
  3. Similarly, virtual machine (MAC2) on Host 4 is connected to VXLAN 5001 and is powered on.
  4. The VTEP on Host 4 sends an IGMP join message to the network and joins the multicast group 239.1.1.100, which is associated with the VXLAN 5001 logical network.

The Host 2 and Host 3 VTEPs don't join the multicast group address because they don't have any running virtual machines connected to the VXLAN 5001 logical network. This is where the multicast optimization comes into play: only VTEPs that are interested in listening to the multicast group traffic join the group.
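
The join behavior described above can be sketched as follows. This is a toy model, not VMware's implementation: the `Vtep` class, its method names, and the reference counting are all illustrative assumptions. The point it demonstrates is that a VTEP issues an IGMP join only when the first local VM attaches to a logical network, and leaves the group when the last one detaches.

```python
from collections import defaultdict

class Vtep:
    """Toy model of a VTEP's IGMP membership decisions (illustrative only)."""

    def __init__(self, name, vni_to_group):
        self.name = name
        self.vni_to_group = vni_to_group   # e.g. {5001: "239.1.1.100"}
        self.vm_count = defaultdict(int)   # running VMs attached, per network ID
        self.joined_groups = set()         # groups this VTEP has joined

    def vm_power_on(self, vni):
        self.vm_count[vni] += 1
        if self.vm_count[vni] == 1:        # first VM on this network: send IGMP join
            self.joined_groups.add(self.vni_to_group[vni])

    def vm_power_off(self, vni):
        self.vm_count[vni] -= 1
        if self.vm_count[vni] == 0:        # last VM gone: send IGMP leave
            self.joined_groups.discard(self.vni_to_group[vni])

mapping = {5001: "239.1.1.100"}
host1, host2 = Vtep("Host 1", mapping), Vtep("Host 2", mapping)
host1.vm_power_on(5001)                    # MAC1 powers on, attached to VXLAN 5001
print(host1.joined_groups)                 # Host 1 has joined 239.1.1.100
print(host2.joined_groups)                 # Host 2 has no VM on VXLAN 5001: no join
```

Host 2 and Host 3 in the diagram behave like `host2` here: with no VM attached to VXLAN 5001, they never join 239.1.1.100 and so never receive its traffic.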

When a broadcast packet is generated by the virtual machine on Host 1, this is how the packet flows through the physical topology and is delivered to the virtual machine running on Host 4.

Multicast Packet flow

The following is the flow of packets:

  1. Virtual machine (MAC1) on Host 1 generates a broadcast frame.
  2. The VTEP on Host 1 encapsulates this broadcast frame in a UDP packet with the destination IP set to the multicast group address 239.1.1.100.
  3. The physical network delivers the packet to the Host 4 VTEP because it has joined the multicast group 239.1.1.100. The Host 2 and Host 3 VTEPs do not receive the broadcast packet.
  4. The VTEP on Host 4 first looks at the encapsulation header, and if the 24-bit VXLAN identifier matches the logical Layer 2 network ID, it removes the encapsulation header and delivers the packet to the virtual machine.
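
Steps 2 and 4 above can be sketched with the 8-byte VXLAN header (flags plus 24-bit network identifier, per RFC 7348). This is a simplified illustration: the outer Ethernet/IP/UDP layers and all socket handling are omitted, and the function names are assumptions, not VMware APIs.

```python
import struct

VXLAN_FLAGS = 0x08000000  # "I" flag set: the VNI field is valid (RFC 7348)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Step 2: prepend the 8-byte VXLAN header (flags word + VNI << 8)."""
    return struct.pack("!II", VXLAN_FLAGS, vni << 8) + inner_frame

def decapsulate(packet: bytes, expected_vni: int):
    """Step 4: strip the header and deliver only if the 24-bit VNI matches."""
    flags, vni_field = struct.unpack("!II", packet[:8])
    if flags != VXLAN_FLAGS or (vni_field >> 8) != expected_vni:
        return None                          # wrong logical network: drop
    return packet[8:]

frame = b"\xff\xff\xff\xff\xff\xff" + b"\x00" * 54   # stub broadcast frame
pkt = encapsulate(5001, frame)               # Host 1 VTEP wraps the frame
assert decapsulate(pkt, 5001) == frame       # Host 4 VTEP: VNI matches, deliver
assert decapsulate(pkt, 5002) is None        # different logical network: drop
```

The VNI check in `decapsulate` is what keeps traffic from one logical network out of another even when (as discussed later in this series) multiple networks share a multicast group.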

This is how multicast is used to deliver broadcast traffic generated on any logical network. The other two traffic types on the logical network that make use of multicast on the physical network are:

  • Unknown unicast frames
  • Multicast frames from virtual machines

All other types of communication on the logical network are handled through the normal unicast path in the physical network.
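
The forwarding rule above — broadcast, multicast, and unknown-unicast frames ride the multicast group, while known unicast goes directly to the peer VTEP — can be sketched as follows. The helper name, the learned-MAC table, and the addresses are hypothetical; VTEP learning itself is covered in Part 5.

```python
def choose_outer_destination(dst_mac: str, mac_to_vtep: dict, group_ip: str) -> str:
    """Pick the outer destination IP for an encapsulated frame (sketch)."""
    if dst_mac == "ff:ff:ff:ff:ff:ff":             # broadcast frame
        return group_ip
    first_octet = int(dst_mac.split(":")[0], 16)
    if first_octet & 0x01:                         # multicast bit set in the MAC
        return group_ip
    # Known unicast: send straight to the VTEP that owns this MAC.
    # Unknown unicast: fall back to the multicast group.
    return mac_to_vtep.get(dst_mac, group_ip)

table = {"00:50:56:aa:bb:02": "10.20.10.14"}       # assumed: MAC2 learned behind Host 4
print(choose_outer_destination("ff:ff:ff:ff:ff:ff", table, "239.1.1.100"))  # 239.1.1.100
print(choose_outer_destination("00:50:56:aa:bb:02", table, "239.1.1.100"))  # 10.20.10.14
print(choose_outer_destination("00:50:56:aa:bb:03", table, "239.1.1.100"))  # 239.1.1.100 (unknown)
```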

One of the questions I always get is what happens if multiple logical networks are mapped to a single multicast group address, and what the security and performance implications of that type of configuration are. I will cover that in the next post.

Please let me know if you have any questions on these packet flows.

Here are the links to Part 1, Part 2, Part 4, Part 5.

Get notified of these blog postings and more VMware networking information by following me on Twitter: @VMWNetworking

9 thoughts on “VXLAN Series – Multicast usage in VXLAN – Part 3”


  6. Network Guy

    In your example all the VTEPs reside in the same subnet. I understand this removes the need for multicast routing in the physical infrastructure, but, more importantly, it also means a single VLAN (2000 in your example) is required to span the physical infrastructure. This is a major design constraint for the physical infrastructure. The driver for VXLAN was to remove this constraint by abstracting the virtual network from the physical, allowing VLANs/VNIs to be routed across a Layer 3 fabric and removing the need to stretch Layer 2 VLANs across the physical infrastructure.


  9. Lucy

    If each VXLAN instance relies on one underlying multicast group for overlay BUM traffic, do you have concerns about PIM scalability in handling the massive multicast traffic? If not, please explain.

