In keeping with the theme of moving the Software-Defined Data Center from concept to reality, in my previous blogs I discussed why VMware vSphere is the perfect platform for deploying cutting-edge technologies like SAP HANA. vSphere enables our customers to react quickly to rapidly changing hardware/software requirements by recasting memory, CPU, IO, or network resources where needed in the landscape through software, in a centrally managed manner. I also discussed how VMware Virtual Volumes can be leveraged to simplify SAP’s multi-temperature data management strategy, where data is classified by frequency of access as hot, warm, or cold depending on usage. This is the essence of Software-Defined Storage.
Mission Critical Architectures: Completing The Picture with VMware NSX
In this blog I want to discuss how VMware NSX can be leveraged in your SAP HANA landscapes. Figure 1, an excerpt from the SAP HANA Network Requirements Guide, goes to the heart of why networks should be virtualized. The components of an SAP HANA system communicate over different network channels. SAP rightly recommends a well-defined network topology that controls and limits access to only the required channels, so that the appropriate security measures can be applied as necessary.
Figure 1. SAP HANA Network Zones
In the Client Zone, access is granted to different clients, such as the SQL clients on SAP application servers. In addition, browser applications use HTTP/S to access the SAP HANA server, and other data sources (such as BI) also need a network communication channel to the SAP HANA database.
In the last post here, I provided some basic information on SNMP and also shared which networking MIB modules are supported in vSphere 5.1. Before I describe how to use these MIB modules, there is one correction I would like to make to the last post. I had mentioned that network-related traps are not supported, but that is not correct. The SNMP agent on the host does send an SNMP trap when a physical link goes up or down. A trap works like an interrupt: instead of the user polling the values of different network parameters, a specific trap tells the user which network parameter needs attention.
Let’s take a look at how you can use networking MIBs to monitor virtual switch parameters.
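To make the polling side concrete, here is a minimal sketch of what a monitoring script might do with the output of an IF-MIB walk. The sample data, interface names, and function names are illustrative assumptions, not actual vSphere output; the line format mimics the common `snmpwalk` text style.

```python
import re

# Hypothetical sample of snmpwalk-style output against a host's IF-MIB;
# the interface names and states below are made up for illustration.
SAMPLE_WALK = """\
IF-MIB::ifDescr.1 = STRING: vmnic0
IF-MIB::ifDescr.2 = STRING: vmnic1
IF-MIB::ifOperStatus.1 = INTEGER: up(1)
IF-MIB::ifOperStatus.2 = INTEGER: down(2)
"""

def parse_walk(text):
    """Turn snmpwalk-style lines into a {(object, index): value} table."""
    table = {}
    for line in text.splitlines():
        m = re.match(r"IF-MIB::(\w+)\.(\d+) = \w+: (.+)", line)
        if m:
            obj, idx, value = m.groups()
            table[(obj, int(idx))] = value
    return table

def links_down(table):
    """Return the names of interfaces whose operational status is down."""
    return [
        table[("ifDescr", idx)]
        for (obj, idx), value in table.items()
        if obj == "ifOperStatus" and value.startswith("down")
    ]
```

A script like this, run periodically against the host's SNMP agent, would flag `vmnic1` for attention in the sample above.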
Version 2.0 of the popular VMware Mobile Knowledge Portal (VMKP) is now live!
The VMKP is a free app which is designed to provide a simple way for VMware customers to view technical collateral around the Datacenter & Cloud Infrastructure and Infrastructure & Operations Management products.
The VMKP gives you easy access to a variety of media, and you can download selected items to your device for when you are without internet access. The VMKP contains:
What’s New papers
The app will be updated and new content will be added routinely, so check the VMKP often!
What’s New?
VMKP 2.0 adds the following enhancements:
Android and iPad support (Previously only iPad support was available)
Ability to rate collateral
Ability to provide feedback to VMware on pieces of collateral
Integration with Facebook and Twitter to let others know what you have been reading on the VMKP
Mechanism to request additional collateral items – let us know what you want to see!
Download it now
The VMware Mobile Knowledge Portal is now available for both iOS (iPad) and Android devices and can be downloaded below.
Note: There is a planned update for VMKP 2.0 in late April to better support smaller form-factor tablets, such as the iPad mini and Nexus 7.
The Android version of this app can be downloaded from Google Play or sent to your device using the button below:
In this post, I want to discuss one of the important enhancements in vSphere 5.1. It is, of course, networking related: monitoring support for virtual switch parameters through SNMP. We talked about the RSPAN and ERSPAN capabilities and how you can make use of those features to monitor and troubleshoot networking issues. Similarly, using the new networking MIBs, you will have visibility into virtual switches. Here are some SNMP basics before I jump in and discuss the enhancement in detail.
Simple Network Management Protocol (SNMP) is a standard that allows you to manage devices on IP networks. It consists of three key components: managed devices, agents, and a Network Management System. In a physical network, switches, routers, and other networking devices act as managed devices with an SNMP agent running on them. The agent support on these physical network devices allows a centralized Network Management System (NMS) to get information about the devices and also set parameters centrally.
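The difference between polling (the NMS asks) and traps (the agent tells) can be sketched with a toy model. This is purely illustrative: the class, names, and in-process callbacks stand in for real SNMP agents and managers, which communicate over UDP ports 161 and 162.

```python
class ToyAgent:
    """Toy stand-in for an SNMP agent (names and behavior are
    illustrative): it answers GET requests (polling) and pushes
    traps to registered listeners (the NMS)."""

    def __init__(self, values):
        self.values = dict(values)   # OID -> value, e.g. link state
        self.listeners = []          # trap receivers (the NMS side)

    def get(self, oid):
        # Polling: the management system asks for a value.
        return self.values[oid]

    def set_link_state(self, oid, state):
        # A link flap on the device fires a trap: the agent tells the
        # NMS which parameter changed, without waiting to be polled.
        self.values[oid] = state
        for notify in self.listeners:
            notify(oid, state)

received = []
agent = ToyAgent({"ifOperStatus.1": "up"})
agent.listeners.append(lambda oid, state: received.append((oid, state)))
agent.set_link_state("ifOperStatus.1", "down")   # trap lands in `received`
```

The trap saves the NMS from polling every parameter on a tight loop; it only needs to react when the agent reports a change.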
In the last post I covered the configuration of one of the port mirroring session types, Switched Port Analyzer (SPAN), on a host. SPAN is a simple configuration on VDS that allows users to quickly replicate traffic to another virtual machine on the same host. However, SPAN on VDS has the following limitations:
– The source and destination ports of the session must be on the same host, limiting visibility to that particular host.
– If the monitored virtual machine is moved from one host to another using vMotion, you can’t monitor that virtual machine’s traffic anymore.
The Remote SPAN (RSPAN) port mirroring session addresses the above concerns and also provides the capability to send mirror traffic to a central analyzer tool. The analyzer tool can be connected multiple hops away in the network, as shown in the diagram below.
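The constraint that separates the two session types can be captured in a few lines. This is a sketch of the rule described above, not real VDS validation logic; the function and host names are made up.

```python
def span_session_valid(session_type, source_host, dest_host):
    """Sketch of the mirroring constraint described above: plain SPAN
    can only mirror within one host, while RSPAN can send mirrored
    traffic to an analyzer elsewhere on the network."""
    if session_type == "SPAN":
        return source_host == dest_host   # same-host limitation
    if session_type == "RSPAN":
        return True                       # analyzer may be hops away
    raise ValueError(f"unknown session type: {session_type}")
```

This also shows why vMotion breaks a SPAN session: once the monitored VM lands on a different host, the same-host condition no longer holds.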
Some of the key features released in vSphere Distributed Switch (VDS) address its management and operational aspects. In an earlier post, I talked about the Network Health Check feature, which reduces the time it takes to identify configuration issues across virtual and physical switches. In this post I am going to cover the following features that further simplify the management and operation of VDS:
1) Rollback and Recovery
2) Configuration Backup and Restore
The above features are briefly discussed in the What’s New paper. I will provide some more technical details beyond what is discussed there.
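The backup-and-restore idea is simple to illustrate: snapshot the configuration before a change, and roll back if the change misbehaves. The class below is a toy model of that concept only; it does not reflect the actual VDS export format or API, and the setting names are invented.

```python
import json

class ToyVDSConfig:
    """Toy model of configuration backup and restore: take a
    serialized snapshot before a change, restore it on demand.
    Illustrative only; not the real VDS export format."""

    def __init__(self, settings):
        self.settings = settings
        self._backup = None

    def backup(self):
        # Serialize a point-in-time snapshot of the configuration.
        self._backup = json.dumps(self.settings)

    def apply(self, changes):
        self.settings.update(changes)

    def restore(self):
        # Roll back to the last snapshot, discarding later changes.
        if self._backup is not None:
            self.settings = json.loads(self._backup)
```

Rollback and recovery follow the same pattern, except the system takes the snapshot and triggers the restore automatically when a change cuts off connectivity.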
One of the common questions I get asked is whether to have the management network on a standard switch (VSS) or a distributed switch (VDS). For those new to the term, the management network is primarily used to provide communication between vCenter Server and vSphere hosts. I will address this question in this post.
After the holiday break I am happy to be back and want to continue where I left off. But first, let me wish you all a Happy New Year! At the end of last year, I wrote a couple of posts providing technical details on the new vSphere Distributed Switch (VDS) features released as part of vSphere 5.1. In this post I will discuss the new Link Aggregation Control Protocol (LACP) feature. While discussing this feature, I will also cover its configuration parameters and the scenarios in which this teaming option will give you better throughput and better utilization of uplinks (physical NICs).
Link aggregation allows you to combine two or more physical NICs to provide higher bandwidth and redundancy between a host and a switch, or between two switches. Whenever you want to create a bigger pipe to carry traffic, or you want higher reliability, you can make use of this feature. However, it is important to note that the bandwidth gained by combining physical NICs depends on the type of workloads you are running and the hashing algorithm used to distribute traffic across the aggregated NICs.
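Why the hashing algorithm matters can be shown with a small sketch of hash-based traffic placement. The hash function and inputs here are illustrative assumptions, not what ESXi or any physical switch actually uses; the point is that each flow is pinned to one member NIC, so a single flow never exceeds one NIC's bandwidth, while many flows spread across all members.

```python
import zlib

def pick_uplink(src_ip, dst_ip, uplinks):
    """Sketch of hash-based placement on a link aggregate: hash the
    flow identifiers onto one member NIC. Illustrative hash only;
    real teaming policies hash on MACs, IPs, and/or ports."""
    key = f"{src_ip}-{dst_ip}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]
```

Because the hash is deterministic, packets of one flow always take the same NIC (preserving packet order), which is exactly why a single large flow sees only one NIC's worth of bandwidth no matter how many links are aggregated.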
This is the second blog of the VDS – New Features series. In this one I will talk about the Network Health Check feature – what it is and how it works. Let’s first take a look at some operational challenges that vSphere and network administrators face when it comes to network configuration in the vSphere environment.
The configuration process for a virtual network involves setting parameters on the port groups of a virtual switch. As a vSphere administrator, you make sure that the configuration you perform on port groups matches the physical switch configuration. However, this process doesn’t always go smoothly, whether due to typing errors or to multiple people being involved, especially when different teams manage the virtual and physical switch configurations.
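The kind of check involved can be sketched as a simple set difference. This is an illustration of the health-check idea only, not the actual mechanism (which probes the physical network); the function name and inputs are assumptions.

```python
def vlan_mismatches(portgroup_vlans, physical_trunk_vlans):
    """Sketch of a virtual/physical consistency check: flag VLANs
    configured on a port group that the upstream physical switch
    port does not trunk. Illustrative only."""
    return sorted(set(portgroup_vlans) - set(physical_trunk_vlans))
```

A typo like VLAN 30 on the port group when the physical trunk carries only 10 and 20 would be flagged immediately, instead of surfacing later as unreachable VMs.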
As you might know by now, with the release of vSphere 5.1, VMware has enhanced vSphere Distributed Switch (VDS) both operationally and functionally. I talked briefly about the new features in the What’s New paper and also posted evaluation videos on some of the key features. Over the next couple of weeks, I am planning to post more technical details about some of these new features. Since there were lots of questions around the new BPDU filter feature, I thought I would address that in this post.
First, I will provide some background on BPDU frames for those new to this networking term. BPDU stands for Bridge Protocol Data Unit. These are the packets exchanged between physical switches as part of the Spanning Tree Protocol (STP). STP is used to prevent loops in the network and is enabled on physical switches. When a link on a physical switch port comes up, STP starts its calculation and BPDU exchange to determine whether the port should be in the forwarding or blocking state. BPDU exchanges across the physical switch ports help identify the root bridge and form a tree topology. VMware’s virtual switch doesn’t support STP and thus doesn’t participate in BPDU exchanges. If a BPDU packet is received on an uplink, VDS drops that packet. VDS also doesn’t generate BPDU packets.
The STP process of identifying the root bridge and determining whether switch ports are in the forwarding or blocking state takes around 30 to 50 seconds, during which no data can pass through those ports. If a server connected to a port can’t communicate for that long, the applications running on it will time out. To avoid these timeouts, the best practice is to enable the PortFast configuration on the switch ports where server NICs are connected. PortFast puts the switch port immediately into the STP forwarding state. The port still participates in STP, and if a loop is detected the port can enter the blocking state.
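How a switch recognizes a BPDU in the first place is straightforward: IEEE 802.1D BPDUs are addressed to a fixed multicast MAC. The sketch below illustrates the drop behavior described above; the function name and return values are illustrative, not actual VDS code.

```python
# IEEE 802.1D BPDUs are sent to this reserved multicast MAC address.
STP_MULTICAST_MAC = "01:80:c2:00:00:00"

def handle_frame(dst_mac, bpdu_filtering=True):
    """Sketch of BPDU handling at a virtual switch uplink: a frame
    destined to the STP multicast address is a BPDU and is dropped
    rather than forwarded. Illustrative only."""
    if bpdu_filtering and dst_mac.lower() == STP_MULTICAST_MAC:
        return "drop"      # BPDU: do not participate in STP
    return "forward"       # ordinary traffic passes through
```

Matching on the reserved destination MAC is enough to single out BPDUs without parsing the STP payload itself.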
As you start looking at the new features of the vSphere 5.1 release, you will find substantial improvements to the vSphere Distributed Switch (VDS). In an earlier post here, I briefly talked about the new VDS features and also provided a link to the white paper.
To help you evaluate some of the key features of the VDS, we have put together the following videos:
1) VMware vSphere 5.1 – Network Health Check
This unique tool helps detect misconfigurations across physical switch and VDS parameters. You no longer have to spend countless hours troubleshooting network issues caused by misconfiguration.