In this post I am going to describe how VTEPs learn about the virtual machines connected to the logical Layer 2 networks. The learning process is quite similar to the transparent bridge function: just as a transparent bridge learns based on the packets received on its bridge ports, a VTEP learns based on the inner and outer headers of the packets it receives.
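To make that concrete, here is a minimal sketch of the learning step in Python. This is not VMware's implementation; the field names (vni, inner_src_mac, outer_src_ip) are stand-ins for the values a VTEP parses from a received VXLAN packet.

```python
# Minimal sketch of VTEP learning, assuming hypothetical packet fields.
# A real VTEP parses these fields from the VXLAN encapsulation headers.

class VtepForwardingTable:
    """Maps (VNI, inner source MAC) -> outer IP of the remote VTEP."""

    def __init__(self):
        self.entries = {}

    def learn(self, vni, inner_src_mac, outer_src_ip):
        # Same idea as transparent-bridge learning: remember which
        # remote VTEP (outer source IP) the inner MAC was seen behind.
        self.entries[(vni, inner_src_mac)] = outer_src_ip

    def lookup(self, vni, inner_dst_mac):
        # Known MAC -> unicast to that VTEP; unknown -> flood via the
        # multicast group associated with the logical network.
        return self.entries.get((vni, inner_dst_mac))

table = VtepForwardingTable()
table.learn(vni=5001, inner_src_mac="00:50:56:aa:bb:cc", outer_src_ip="10.20.10.11")
print(table.lookup(5001, "00:50:56:aa:bb:cc"))  # -> 10.20.10.11
```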
Let’s take an example to illustrate the VTEP learning process.
Example Deployment with Two Hosts
In this post I am going to address a common question about the security and performance impact when multiple logical Layer 2 networks are mapped to one multicast group address.
As mentioned in an earlier post here, vCloud Networking and Security (vCNS) Manager is responsible for mapping the logical Layer 2 networks to multicast group addresses. If you provide fewer multicast group addresses than logical Layer 2 networks, vCNS Manager will assign the logical Layer 2 networks to multicast addresses in a round-robin fashion. For example, if there are four logical Layer 2 networks (A1, A2, A3, A4) and two multicast group addresses (M1, M2), logical networks A1 and A3 will be mapped to multicast group address M1, while A2 and A4 are mapped to M2.
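Here is a short sketch of that round-robin assignment, purely to illustrate the pattern (the multicast addresses are placeholders; this is not vCNS Manager's actual code):

```python
from itertools import cycle

def assign_multicast_groups(logical_networks, multicast_groups):
    """Round-robin mapping of logical networks to multicast group addresses."""
    groups = cycle(multicast_groups)
    return {network: next(groups) for network in logical_networks}

mapping = assign_multicast_groups(
    ["A1", "A2", "A3", "A4"], ["239.1.1.1", "239.1.1.2"]
)
print(mapping)
# {'A1': '239.1.1.1', 'A2': '239.1.1.2', 'A3': '239.1.1.1', 'A4': '239.1.1.2'}
```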
I covered some basics on multicast in the last blog entry here. Let's now take a look at how multicast is utilized in VXLAN deployments. During the configuration of VXLAN, you are required to allocate a multicast address range and also define the number of logical Layer 2 networks that will be created. For more details on the configuration steps, please refer to the VXLAN Deployment Guide.
Ideally, one logical Layer 2 network is associated with one multicast group address. VXLAN can identify roughly sixteen million logical Layer 2 networks using the 24-bit network identifier field in the encapsulation header, but the multicast group addresses are limited (224.0.0.0 to 239.255.255.255). In some scenarios it might not be possible to have a one-to-one mapping of logical Layer 2 networks to multicast group addresses. In such scenarios the vCloud Networking and Security Manager maps multiple logical networks to a single multicast group address. Having covered the association of multicast groups with logical networks, let's take a look at some details of the logical network properties.
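As a quick sanity check on the sixteen-million figure, here is the arithmetic behind the 24-bit VXLAN Network Identifier (VNI) field:

```python
# The VNI in the VXLAN encapsulation header is 24 bits wide.
VNI_BITS = 24
print(2 ** VNI_BITS)  # 16777216 -> roughly sixteen million logical networks
```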
In the last post here, I provided some details on vSphere hosts configured as VTEPs in a VXLAN deployment. I also briefly mentioned that multicast protocol support is required in the physical network for VXLAN to work. Before I discuss how multicast is utilized in a VXLAN deployment, I want to briefly cover some of the basics of multicast.
In the diagram below you see the three main communication modes that are common in a network: Unicast, Broadcast, and Multicast.
In the last six months, I have talked to many customers and partners about Virtual eXtensible Local Area Network (VXLAN). One of the things I found challenging was how to explain the technology to two different types of audiences. On one hand, there are virtual infrastructure administrators who want to know what problems this new technology is going to solve for them and what the use cases are. On the other hand, there are networking folks who want to dig into packet flows and all the innate protocol-level details, how this technology compares with others, and what its impact is on the physical devices in the network.
The papers that we have made available, the "Network Virtualization Design Guide" and the "VXLAN Deployment Guide", provide some basic knowledge about the technology, its use cases, and step-by-step deployment instructions. However, some of the detailed packet flow scenarios are not explained in these papers, so I thought it would be a good idea to put together a series of posts discussing the packet flows in a VXLAN environment. There are also many common questions that I would like to address as part of this series.
To start this series, I will first describe the different components of the VMware’s VXLAN implementation.
Today while I was working in a lab, I struggled to find where the migration option for vmknics is in the new NGC client. In the traditional client, as shown in the screenshot below, you can select the virtual adapter and then either choose to change the properties of the adapter or migrate it to another switch.
Traditional Client Screenshot
In the last post here, I provided some basic information on SNMP and also shared which networking MIB modules are supported in vSphere 5.1. Before I describe how to use these MIB modules, there is one correction I would like to make to the last post. I had mentioned that network-related traps are not supported, but that is not correct. The SNMP agent on the host does send an SNMP trap when a physical link goes up or down. A trap is like an interrupt: instead of you polling the values of the different network parameters, a specific trap tells you which network parameter needs attention.
Let’s take a look at how you can use the networking MIBs to monitor virtual switch parameters.
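As one way to poll those MIBs from a management station, here is a sketch using the third-party pysnmp library. The host name and community string are placeholders, and IF-MIB is used only as a familiar example; which MIB modules respond depends on the SNMP agent configuration on your host:

```python
# Sketch: walk the interface table of an ESXi host's SNMP agent with pysnmp.
# 'esxi-host.example.com' and the 'public' community are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

iterator = nextCmd(
    SnmpEngine(),
    CommunityData('public'),                          # SNMP v2c read community
    UdpTransportTarget(('esxi-host.example.com', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('IF-MIB', 'ifDescr')),  # interface descriptions
    lexicographicMode=False,                          # stop at end of the table
)

for error_indication, error_status, error_index, var_binds in iterator:
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
        break
    for var_bind in var_binds:
        print(var_bind.prettyPrint())
```

A command-line snmpwalk against the same host and MIB would return the equivalent output; the point is simply that, once the agent is enabled, the virtual switch parameters are reachable through standard MIB queries.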
In this post, I want to discuss one of the important enhancements in vSphere 5.1. It is, of course, related to networking: monitoring support for virtual switch parameters through SNMP. We have talked about the RSPAN and ERSPAN capabilities and how you can make use of those features to monitor and troubleshoot networking issues. Similarly, using the new networking MIBs, you will have visibility into virtual switches. Here are some SNMP basics before I jump in and discuss the enhancement in detail.
Simple Network Management Protocol (SNMP) is a standard that allows you to manage devices on IP networks. It consists of three key components: managed devices, agents, and a Network Management System (NMS). In a physical network, switches, routers, and other networking devices are the managed devices, with SNMP agents running on them. The agent support on these physical network devices allows a centralized NMS to get information about them and also set parameters centrally.
Recently I posted the Network Virtualization Design Guide, which provides details on the different components of VMware’s VXLAN-based network virtualization solution. The guide also discusses the packet flows and design considerations when deploying VXLAN in an existing or a greenfield environment.
To accompany this design guide, we have put together a VXLAN Deployment Guide that provides more detail on how to prepare your clusters and existing networks and how to consume logical networks. The consumption of logical networks is shown through the vCloud Networking and Security Manager and the vCenter Server UI. For those who are using vCloud Director in their environment, the consumption of the VXLAN network pool is similar to the consumption of any other type of network pool, and the VXLAN preparation process in a vCloud Director deployment is the same as described in this paper.
Please download the guide from here.
Get notified of these blog postings and more VMware networking information by following me on Twitter: @VMWNetworking
In one of my earlier posts, “vSphere 5.1 – VDS New Features”, I discussed the LACP feature and stated that only one Link Aggregation Group (LAG) could be configured per VDS per host. It turns out I was only partially correct: the limitation of one LAG per VDS stands, but there is no such limit on the host. You can have multiple LAGs configured on a single host by using multiple VDS. The following diagram shows a deployment with two LAGs on a host.
Example – Two Link Aggregation Groups on a Host