On August 26th at VMworld 2013, VMware announced vSphere 5.5, the latest release of VMware’s industry-leading virtualization platform. This release includes many improvements and a number of new features and capabilities. In an effort to get my head around all this exciting new “stuff,” I decided to go through the What’s New paper and compile a brief summary (well, relatively brief anyway).
Here’s the list I came up with. I’m sure I missed some things, but it should help you get started learning about what’s new in vSphere 5.5.
Today, while I was working in a lab, I struggled to find the migration option for vmknics in the new NGC (the vSphere Web Client). In the traditional client, as shown in the screenshot below, you can select a virtual adapter and then either change its properties or migrate it to another switch.
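If you would rather script the migration than hunt for it in the UI, here is a minimal sketch using pyVmomi (the Python bindings for the vSphere API). It assumes you already have a connected session with `host` as a vim.HostSystem object; the `dvs_uuid` and `port_key` values identifying the target VDS port are placeholders you would fill in.

```python
from pyVmomi import vim

# Placeholders: UUID of the target distributed switch and the key of a
# free port on it. Look these up in your own environment.
dvs_uuid = '...'
port_key = '...'

net_sys = host.configManager.networkSystem

# Build a spec that points the vmknic at a distributed port; fields left
# unset (such as the IP configuration) are not changed by this sketch.
nic_spec = vim.host.VirtualNic.Specification()
nic_spec.distributedVirtualPort = vim.dvs.PortConnection(
    switchUuid=dvs_uuid,
    portKey=port_key,
)

# Reconfigure vmk0 in place, effectively migrating it to the VDS.
net_sys.UpdateVirtualNic('vmk0', nic_spec)
```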
In the last post here, I provided some basic information on SNMP and also shared which networking MIB modules are supported in vSphere 5.1. Before I describe how to use these MIB modules, there is one correction I would like to make to that post. I had mentioned that network-related traps are not supported, but that is not correct. The SNMP agent on the host does send an SNMP trap when a physical link goes up or down. A trap is like an interrupt: instead of polling the values of the different network parameters, you receive a specific trap telling you which network parameter needs attention.
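To see these linkUp/linkDown traps arrive without deploying a full NMS, you can run a small trap listener. Below is a minimal sketch using the pysnmp library; it assumes the host’s SNMP agent has been configured to send traps to this machine, and the community string `public` and the standard trap port 162 (which needs privileges to bind) are assumptions you would adapt.

```python
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

snmp_engine = engine.SnmpEngine()

# Listen for incoming traps on UDP port 162.
config.addTransport(
    snmp_engine, udp.domainName,
    udp.UdpTransport().openServerMode(('0.0.0.0', 162)))

# Accept SNMP v1/v2c notifications sent with the 'public' community.
config.addV1System(snmp_engine, 'trap-area', 'public')

def on_trap(snmp_engine, state_ref, ctx_engine_id, ctx_name,
            var_binds, cb_ctx):
    # Each varBind is an OID/value pair; a linkDown trap, for example,
    # carries ifIndex and ifOperStatus for the affected interface.
    for oid, val in var_binds:
        print('%s = %s' % (oid.prettyPrint(), val.prettyPrint()))

ntfrcv.NotificationReceiver(snmp_engine, on_trap)
snmp_engine.transportDispatcher.jobStarted(1)
snmp_engine.transportDispatcher.runDispatcher()
```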
Let’s take a look at how you can use the networking MIBs to monitor virtual switch parameters.
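As a concrete starting point, the sketch below walks the interface table on a host with pysnmp and prints each interface with its operational status, assuming the agent exposes the standard IF-MIB. The hostname and community string are placeholders.

```python
from pysnmp.hlapi import (
    nextCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Walk ifDescr and ifOperStatus in parallel (host/community are placeholders).
for err_ind, err_stat, err_idx, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),           # SNMP v2c
        UdpTransportTarget(('esxi-host.example.com', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifDescr')),
        ObjectType(ObjectIdentity('IF-MIB', 'ifOperStatus')),
        lexicographicMode=False):                     # stop at end of table
    if err_ind or err_stat:
        print(err_ind or err_stat.prettyPrint())
        break
    descr, status = var_binds
    print('%s: %s' % (descr[1].prettyPrint(), status[1].prettyPrint()))
```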
In this post, I want to discuss one of the important enhancements in vSphere 5.1. It is, of course, related to networking: monitoring support for virtual switch parameters through SNMP. We have already talked about the RSPAN and ERSPAN capabilities and how you can use those features to monitor and troubleshoot networking issues. Similarly, with the new networking MIBs you gain visibility into the virtual switches. Here are some SNMP basics before I jump in and discuss the enhancement in detail.
Simple Network Management Protocol (SNMP) is a standard that allows you to manage devices on IP networks. It consists of three key components: managed devices, agents, and a network management system. In a physical network, the switches, routers, and other networking devices are the managed devices, each running an SNMP agent. The agent on these devices allows a centralized Network Management System (NMS) to retrieve information about them and also to set parameters centrally.
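To make the NMS side of this model concrete, here is a trivial sketch that polls a single value (sysDescr) from an agent using pysnmp; the target host and community string are assumptions.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# One SNMP GET against an agent (host and community are placeholders).
err_ind, err_stat, err_idx, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),               # SNMP v2c
    UdpTransportTarget(('esxi-host.example.com', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
))

if err_ind or err_stat:
    print(err_ind or err_stat.prettyPrint())
else:
    for oid, val in var_binds:
        print('%s = %s' % (oid.prettyPrint(), val.prettyPrint()))
```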
Recently I posted the Network Virtualization Design Guide, which provides details on the different components of VMware’s VXLAN-based network virtualization solution. The guide also discusses packet flow and the design considerations for deploying VXLAN in both existing and greenfield environments.
To accompany the design guide, we have put together a VXLAN deployment guide that provides more detail on how to prepare your clusters and existing networks and how to consume logical networks. The consumption of logical networks is shown through the vCloud Networking and Security Manager and vCenter Server UIs. For those using vCloud Director in their environment, the consumption of a VXLAN network pool is similar to the consumption of any other type of network pool, and the VXLAN preparation process in a vCloud Director deployment is the same as described in this paper.
In one of my earlier posts, “vSphere 5.1 – VDS New Features,” I discussed the LACP feature and stated that only one Link Aggregation Group (LAG) could be configured per VDS per host. It turns out I was only partially correct: the limit of one LAG per VDS is real, but there is no such limit on the host. You can have multiple LAGs configured on a single host by using multiple VDSes. The following diagram shows a deployment with two LAGs on a host.
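For reference, here is a rough pyVmomi sketch of what enabling the (single) LAG on a VDS looks like through the API, where LACP in vSphere 5.1 is switched on at the uplink port group. It assumes `uplink_pg` is an existing uplink port group object; to get a second LAG on the host, you would repeat this against the uplink port group of a second VDS.

```python
from pyVmomi import vim

# Enable LACP in active mode on a VDS uplink port group
# (`uplink_pg` is an existing vim.dvs.DistributedVirtualPortgroup).
lacp = vim.dvs.VmwareDistributedVirtualSwitch.UplinkLacpPolicy()
lacp.enable = vim.BoolPolicy(inherited=False, value=True)
lacp.mode = vim.StringPolicy(inherited=False, value='active')  # or 'passive'

setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
setting.lacpPolicy = lacp

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = uplink_pg.config.configVersion
spec.defaultPortConfig = setting

task = uplink_pg.ReconfigureDVPortgroup_Task(spec=spec)
```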
The remote mirroring capability on VDS lets you send traffic from a virtual machine running on one host to a virtual machine on another host for debugging or monitoring purposes. As shown in the diagram below, traffic from a monitored VM on Host 1 is sent through multiple physical switches to an Analyzer VM on Host 2. For this setup to work, you have to perform configuration at several levels, as indicated by the numbered red circles in the diagram.
In the last post, I covered the configuration of one of the port mirroring session types, Switch Port Analyzer (SPAN), on a host. SPAN is a simple configuration on VDS that allows users to quickly replicate traffic to another virtual machine on the same host. However, SPAN on VDS has the following limitations:
- The source and destination ports of the session must be on the same host, which limits visibility to that particular host.
- If the monitored virtual machine is moved from one host to another using vMotion, you can no longer monitor that virtual machine’s traffic.
The Remote SPAN (RSPAN) port mirroring session addresses the above concerns and also provides the capability to send mirrored traffic to a central analyzer tool, which can be connected multiple hops away in the network, as shown in the diagram below.
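To give a flavor of the VDS side of this configuration, here is a pyVmomi sketch that adds an RSPAN source session. It assumes `dvs` is an existing vim.dvs.VmwareDistributedVirtualSwitch object; the port key, uplink name, and encapsulation VLAN are placeholders, and the physical switches would still need matching RSPAN VLAN configuration.

```python
from pyVmomi import vim

VDS = vim.dvs.VmwareDistributedVirtualSwitch  # shorthand for nested types

session = VDS.VspanSession(
    name='rspan-source',
    enabled=True,
    sessionType='remoteMirrorSource',
    encapsulationVlanId=100,                    # RSPAN VLAN (placeholder)
    sourcePortReceived=VDS.VspanPorts(portKey=['10']),   # monitored VM port
    sourcePortTransmitted=VDS.VspanPorts(portKey=['10']),
    destinationPort=VDS.VspanPorts(uplinkPortName=['uplink1']),
)

spec = VDS.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.vspanConfigSpec = [
    VDS.VspanConfigSpec(operation='add', vspanSession=session)
]
task = dvs.ReconfigureDvs_Task(spec=spec)
```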
Network troubleshooting and monitoring tools are critical in any environment, especially in data centers where many applications or workloads are consolidated on server virtualization platforms such as vSphere. Ask any network administrator about the challenges of troubleshooting data center networks where server virtualization is prominent, and they will tell you they don’t have visibility into the virtual networks and don’t know what is going on in the hypervisor world.
To provide the right amount of visibility to administrators, VMware vSphere Distributed Switch (VDS) supports industry-standard features such as port mirroring and NetFlow. These features were introduced with the vSphere 5.0 release, and the latest release adds further enhancements along with configuration workflow improvements. I will provide more details on the different types of port mirroring capabilities and which one to choose when troubleshooting or monitoring your network.
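As an example of driving one of these features through the API rather than the UI, the sketch below points a VDS at a NetFlow (IPFIX) collector using pyVmomi; the collector address and port are placeholders, and `dvs` is assumed to be an existing vim.dvs.VmwareDistributedVirtualSwitch object.

```python
from pyVmomi import vim

# Point the VDS at an external NetFlow/IPFIX collector
# (collector address/port are placeholders).
ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
    collectorIpAddress='192.0.2.10',
    collectorPort=2055,
    activeFlowTimeout=60,     # seconds
    idleFlowTimeout=15,       # seconds
    samplingRate=0,           # 0 = sample every packet
    internalFlowsOnly=False,  # also export flows that leave the host
)

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.ipfixConfig = ipfix
task = dvs.ReconfigureDvs_Task(spec=spec)
```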
Recently there has been some discussion around the egress traffic management feature of vSphere Distributed Switch (VDS), also called Network I/O Control (NIOC). Thanks to my colleague Frank Denneman for providing more details about this feature on his blog and for bringing to my attention an architectural change in the vSphere 5.1 release. This change affects how the limit parameters are applied at the host level. In this post, I will first describe the old NIOC architecture and then discuss the change, its impact, and what users need to keep in mind when configuring the limit parameter.
Let’s first take a look at the NIOC components and architecture in previous releases of vSphere. The diagram below shows a vSphere host with two 10GbE NICs, the VDS components, the NIOC configuration table, and the different traffic types running on the host.
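As a refresher on the mechanics before we get to the change: shares only matter under contention, and each active traffic type then gets a slice of the uplink proportional to its shares. A quick illustrative calculation in plain Python (the share values are made up):

```python
# Illustrative NIOC shares math: during contention on a 10GbE uplink,
# each active traffic type gets bandwidth proportional to its shares.
LINK_GBPS = 10

# Hypothetical shares for the traffic types active on one uplink.
shares = {'vm': 100, 'vmotion': 50, 'nfs': 50}

total = sum(shares.values())
for traffic_type, s in shares.items():
    entitlement = LINK_GBPS * s / total
    print('%-8s %5.2f Gbps' % (traffic_type, entitlement))

# vm        5.00 Gbps
# vmotion   2.50 Gbps
# nfs       2.50 Gbps
```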