Monthly Archives: June 2009

Upgrading from ESXi 3.5 to ESXi 4.0

There are two main ways to upgrade from ESXi 3.5 to ESXi 4.0. Both methods perform an in-place upgrade of ESXi, which allows you to:

  • preserve VMFS and all contents on local disk, if upgrading ESXi Installable
  • upgrade ESXi embedded, whether on internal or external USB key or internal flash memory
  • preserve almost all configuration data, including your networking, security, and storage configuration. The only configuration not preserved is related to licensing, because a new ESXi 4.0 license is required after the upgrade.

If you are using vCenter to manage your host, the best way to upgrade is to use vCenter Update Manager. You need to upgrade to vCenter 4.0 first, but that can be done as a separate, preliminary step, since vCenter 4.0 can manage both ESXi 3.5 and ESXi 4.0 systems. vCenter 4.0 Update Manager has been enhanced specifically to perform the upgrade process for both ESXi and ESX. VMwareTips has a nice video showing the entire upgrade process with Update Manager.

If you are not using vCenter, then you can use the standalone Host Update Utility to perform an upgrade. This tool installs on any Windows host and can be used to upgrade any number of ESXi hosts. VM Help (the home of the unofficial ESXi Whitebox HCL) has a nice overview with screenshots of using Host Update Utility to upgrade ESXi 3.5 to ESXi 4.0.

More detail on the upgrade process from ESXi 3.5 to ESXi 4.0 may be found in the vSphere Upgrade Guide.

VMotion between Data Centers—a VMware and Cisco Proof of Concept

VMotion, the VMware feature that enables live VM migration between ESX hosts, is one of the major attractions of vSphere and, before that, VMware Infrastructure (or VI for short). It’s simply amazing to watch a VM continue operating and maintain its sessions while moving from one host to another.

As cool as this is, we’re often asked, “How do we take that one step further and perform VMotion between datacenters?” This, of course, is a non-trivial thing to do. There is the challenge of moving a VM over distance (which involves some degree of additional latency) without dropping sessions. Maintaining sessions with existing technologies means stretching the L2 domain between the sites, which is not pretty from a network architecture standpoint. And then there is the storage piece: if you move the VM, it has to remotely access its disk in the other site until a Storage VMotion occurs.

Last year, Cisco and VMware began the task of trying to solve these long distance VMotion issues with the target of seamlessly migrating a VM between two datacenters separated by a reasonable distance. The joint Cisco/VMware lab in San Jose has run a number of tests over varying distances (simulated with reels of optic fiber) as a proof of concept. We will demonstrate this proof of concept at Cisco Live this week in San Francisco. The demo as it stands incorporates a distance of 80km (50 miles). That’s around 400us of latency each way over fiber, or a round trip of just under 1ms.
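
As a rough sanity check on those numbers (assuming light propagates through optical fiber at about 200,000 km/s, i.e. the vacuum speed of light divided by a refractive index of roughly 1.5):

# Back-of-the-envelope propagation delay for the 80 km demo distance.
distance_km = 80
fiber_speed_km_per_s = 300_000 / 1.5            # ~200,000 km/s in fiber (assumed)

one_way_s = distance_km / fiber_speed_km_per_s
print(f"one way:    {one_way_s * 1e6:.0f} us")      # ~400 us
print(f"round trip: {one_way_s * 2 * 1e3:.2f} ms")  # ~0.80 ms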

This proof of concept is aimed at the following requirements:

  1. Load balance compute power over multiple sites: Migrate VMs between datacenters to “follow the sun” or simply to load balance over multiple sites. Enterprises with multiple sites can also conserve power and cooling by dynamically consolidating VMs onto fewer datacenters (automated by VMware Distributed Power Management (DPM))—another enabler for the Green datacenter of the future.
  2. Avoid downtime during DC maintenance: applications on a server or datacenter infrastructure requiring maintenance can be migrated offsite without downtime.
  3. Disaster Avoidance: Data centers in the path of natural calamities (e.g. hurricanes) can proactively migrate the mission critical application environment over to another data center.

Use cases #2 and #3 above also require a Storage VMotion to move the disk image to the alternative datacenter.

Remember, this is a proof of concept, so we still have work to do in multiple areas (e.g. the Storage VMotion piece for disaster avoidance).

See and hear about it at Cisco Live this week…

Cisco Live is on this week in San Francisco. We will feature briefings in the VMware theatre (booth #531, adjacent to the big Cisco booth). Refer to the theatre schedule posted at the VMware booth for session times. (Of course, you can also just come and ask us about it anytime.)

See an update at VMworld in San Francisco—August 2009

We will demonstrate this again at VMworld (http://www.vmworld2009.com/) in San Francisco in August, where we will also hold a technical breakout session on VMotion between datacenters. This session will cover the proof of concept and test results, and reveal a little more of our plans to solve some of the remaining issues. Look for it in the Technology and Architecture track.

[VMware-Cisco-vmotion-V2_2: diagram of the VMware/Cisco long-distance VMotion proof of concept]

vSphere 4 and Nexus 1000V—How to: (1) Get, (2) Install, (3) Use with an Evaluation License

Neal Mueller and Pierre Ettori over at Cisco have published a few short tutorial-style videos on the Nexus 1000V. You get to kick the tires for 60 days without it costing you a cent. The videos to look at are:

  1. How to install a (60-day) evaluation license for Nexus 1000V—this covers getting an eval copy of vSphere 4 and Nexus 1000V, obtaining a 60-day license, and installing it all.
    Note: go to our co-branded vSphere site to get your vSphere eval.
  2. Detailed Feature Demo of Nexus 1000V—Pierre takes you through a 20-minute demo of the major Nexus 1000V features and shows the tight coupling with vCenter Server

VMware’s Backup and Recovery product

One of the many capabilities introduced in VMware vSphere 4 is VMware Data Recovery (VDR), a virtual machine backup and recovery product. Market research and customer feedback showed that many people wanted an integrated option for protecting virtual machines in a VMware environment. Further analysis showed that this was more pronounced for VMware customers that had (or planned to have) fewer than 100 virtual machines in their environment and where IT responsibilities (including VMware) were shared among 2-3 IT administrators (as opposed to having a dedicated VMware administrator on staff).

VMware has been helping customers address their backup challenges in two ways: by making significant investments in the vStorage APIs for Data Protection, which third-party backup tools use to integrate their backup/recovery products with vSphere, and by providing an integrated option optimized for vSphere customers with smaller environments. VDR is built using the vStorage APIs for Data Protection and incorporates a user interface, a policy engine and data deduplication – see the diagram below for how it all fits together. I’ll cover these blocks in a series of blogs, but I wanted to start out by discussing Data Deduplication (dedupe).

[VDR_Arch: diagram of how the VDR components fit together]

 

Given that we had made a decision to use only disks as the destination for VDR backups, we had to look for a solution that offered disk storage savings – and this is where dedupe comes in.  In a nutshell, dedupe avoids storing the same data twice – and dedupe is HOT – just check out the mergers and acquisitions news!

What VMware decided to implement for VDR dedupe is (take a deep breath) – block-based in-line destination deduplication.  Deconstructing it means the following (see the sketch after the list):

    1. We discover data commonality at the disk block level as opposed to the file level.

    2. It is done as we stream the backup data to the destination disk as opposed to a post-backup process.

    3. The actual dedupe process occurs as we store the data on the destination disk as opposed to when we are scanning the source VM’s virtual disks prior to the backup.
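
To make those three points concrete, here is a minimal sketch of the idea in Python (purely illustrative, not VDR's actual implementation; the 4 KB block size and SHA-1 hash are assumptions): each block is hashed as it streams to the destination, and only blocks whose hash has not been seen before are actually written.

import hashlib

BLOCK_SIZE = 4096  # fixed block size; an assumption for illustration

def backup(stream, store, index):
    # 'store' maps block hash -> block bytes and stands in for the dedupe
    # store on the destination disk; 'index' records how to reassemble
    # this particular backup from block references.
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        digest = hashlib.sha1(block).hexdigest()  # commonality at the block level (1)
        if digest not in store:                   # checked in-line as data streams in (2)
            store[digest] = block                 # unique data written once, at the destination (3)
        index.append(digest)                      # duplicates only add a reference

def restore(store, index):
    # Rebuilding a backup is just walking its index of block references.
    return b"".join(store[digest] for digest in index)

Two backups that are largely identical end up referencing mostly the same blocks in the store, which is where the disk savings come from; a restore simply walks the index and reads each referenced block back out.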

When it comes to deduplication, there are different techniques and hash algorithms used to accomplish the result.  I am not going to get into a theoretical discussion of the pros and cons of the various types of dedupe technologies available and which approach provides the best disk savings.  I personally think that it totally depends on the customer’s IT environment constraints and overall business goals, plus a lot of the storage savings is going to be data-driven anyway (the more data commonality there is, the better the dedupe rate).  We chose this dedupe architecture because it fit best with what we were trying to achieve with VDR and with what the vSphere platform provided to us.  What were these reasons?  Stay tuned to this space…

Nexus 1000V demo videos

While we’re on the subject of the Nexus 1000V, Pierre-Emmanuel Ettori from Cisco has just posted a couple of videos on the Cisco Data Center blog. The videos take you on an in-depth tour of the Nexus 1000V (configuration, port profiles, etc) and show the tight integration with vCenter Server. If you haven’t seen the Nexus 1000V in action, it’s well worth a look.

Nexus 1000V and VN-Link confusion

We’ve seen a lot of excitement around our vNetwork Distributed Switch and also the Cisco Nexus 1000V virtual switch. A little confusion, however, has arisen around physical switch dependencies for the Nexus 1000V. I understand the confusion, as the Nexus 1000V (which is software based and available now) and VN-Link with the Nexus 5000 (hardware based and not yet available) are often presented together.

So, for the record … the Cisco Nexus 1000V will operate with any physical switch (that we know of)—Cisco Catalyst, Nexus, Foundry, HP, Force 10, etc, etc. Of course, some of the special features may only be available when coupled with a Catalyst or Nexus.


IBM CloudBurst runs on ESXi

Recently, IBM announced a new Cloud Computing offering called CloudBurst.  From the product page:

IBM CloudBurst is a complete IBM Service Management package of hardware, software and services, which simplifies your cloud computing acquisition and deployment.

This blog entry from the ibm.com Community describes the software used to provide the resource abstraction layer:

Cloud Software Configuration:
IBM CloudBurst service management pack
• IBM Tivoli Provisioning Manager v7.1
• IBM Tivoli Monitoring v6.2.1
• IBM Systems Director 6.1.1 with Active Energy Manager; IBM ToolsCenter 1.0; IBM DS Storage Manager for DS4000 v10.36; LSI SMI-S provider for DS3400
• VMware VirtualCenter 2.5 U4; VMware ESXi 3.5 U4 hypervisor


What's interesting to note is that the solution is based on the prior release of ESXi, version 3.5. With all the enhancements that have been added in ESXi 4 (which we'll talk about in upcoming blog postings), there should be no doubt that ESXi is the ideal architecture for building clouds of any size — as many of our customers are already doing today.

The ESX Team

Virtual Switches vs Physical Switches plus more on “Let’s Talk Security …”

After posting the “Let’s Talk Security …” blog entry last week, our engineering director reminded me of a few more things worth pointing out. Virtual switches are very much like physical switches, but they do differ in a few ways relevant to the security discussion around MAC flooding and spanning tree attacks.

  • Virtual switches know the MAC addresses of the VMs and vmkernel ports by registration. It’s all controlled by the ESX hypervisor, so there is no need to “learn” any MAC addresses. vSwitches will also toss any frames with a destination MAC address outside what is registered. Hence, they’re not susceptible to MAC flooding.
  • Frames received on an uplink will never be forwarded out an uplink—they’re either forwarded to the correct virtual port (with a registered MAC address) or ports (multicast or broadcast), or thrown away (the destination is not attached to this virtual switch). This simple rule means ESX cannot introduce a loop in the network (unless someone deliberately provisions a bridge inside a VM with two vnics). It also means ESX does not need to participate in Spanning Tree and will not put an uplink into a blocked state, so you get full use of all uplinks. Note: this does not mean you should turn off spanning tree on your access switches—ESX just ignores the BPDU updates. (Of course, always configure portfast or portfast trunk on the physical switch ports so they immediately reach the STP forwarding state.) A quick sketch of this forwarding logic follows below.
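
To make the contrast with a learning switch concrete, here is a minimal sketch of those forwarding rules (a model of the behavior described above, not ESX code; the port and MAC names are made up):

BROADCAST = "ff:ff:ff:ff:ff:ff"

class VSwitchModel:
    def __init__(self, uplinks):
        self.uplinks = set(uplinks)   # physical NIC ports
        self.registered = {}          # MAC -> virtual port, populated by the hypervisor

    def register(self, mac, vport):
        # MACs are registered, never learned from traffic, so forged source
        # addresses cannot pollute or overflow any table.
        self.registered[mac] = vport

    def forward(self, dst_mac, ingress):
        # Return the set of ports a frame should be delivered to.
        if dst_mac in self.registered:
            return {self.registered[dst_mac]}           # known VM or vmkernel port
        if dst_mac == BROADCAST:
            out = set(self.registered.values()) - {ingress}
            if ingress in self.uplinks:
                return out                              # never back out an uplink
            return out | {next(iter(self.uplinks))}     # VM-sourced: also send to one uplink
        return set()                                    # unknown unicast: dropped, not flooded

For example, with one uplink and two registered VMs:

vs = VSwitchModel(uplinks={"vmnic0"})
vs.register("00:50:56:aa:bb:01", "vm1")
vs.register("00:50:56:aa:bb:02", "vm2")
vs.forward("00:50:56:aa:bb:02", ingress="vmnic0")   # {'vm2'}
vs.forward("de:ad:be:ef:00:01", ingress="vmnic0")   # set(), i.e. dropped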

What vnic? Choosing an adapter for your VM

In ESX 4, we released VMXNET 3 as another high-performance paravirtualized adapter for use with VMs. This increases the choices… or perhaps the level of confusion. Which adapter do you choose?

Fortunately, there is help at hand. One of our engineers recently updated a knowledge base (KB) article on this very topic. You can see the full text at the kb.vmware.com site (KB #1001805), but I’ve copied the meat of it here…

The adapter choices are as follows …

  • Vlance — An emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in most 32-bit guest operating systems except Windows Vista and later. A virtual machine configured with this network adapter can use its network immediately.
  • VMXNET — The VMXNET virtual network adapter has no physical counterpart. VMXNET is optimized for performance in a virtual machine. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
  • Flexible — The Flexible network adapter identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. With VMware Tools installed, the VMXNET driver changes the Vlance adapter to the higher performance VMXNET adapter.
  • E1000 — An emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems, including Windows XP and later and Linux versions 2.4.19 and later.
  • VMXNET 2 (Enhanced) — The VMXNET 2 adapter is based on the VMXNET adapter but provides some high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. This virtual network adapter is available only for some guest operating systems on ESX/ESXi 3.5 and later.
    VMXNET 2 is supported only for a limited set of guest operating systems:
    • 32- and 64-bit versions of Microsoft Windows 2003 (Enterprise and Datacenter Editions). You can use enhanced VMXNET adapters with other versions of the Microsoft Windows 2003 operating system, but a workaround is required to enable the option in the VI Client or vSphere Client. See http://kb.vmware.com/kb/1007195 if Enhanced vmxnet is not offered as an option.
    • 32-bit version of Microsoft Windows XP Professional
    • 32- and 64-bit versions of Red Hat Enterprise Linux 5.0
    • 32- and 64-bit versions of SUSE Linux Enterprise Server 10
    • 64-bit versions of Red Hat Enterprise Linux 4.0
    • 64-bit versions of Ubuntu Linux
  • VMXNET 3 — The VMXNET 3 adapter is the next generation of a paravirtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features like multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. 
    VMXNET 3 is supported only for virtual machines version 7 and later, with a limited set of guest operating systems:
    • 32- and 64-bit versions of Microsoft Windows XP and later
    • 32- and 64-bit versions of Red Hat Enterprise Linux 5.0 and later
    • 32- and 64-bit versions of SUSE Linux Enterprise Server 10 and later
    • 32- and 64-bit versions of Asianux 3 and later
    • 32- and 64-bit versions of Debian 4/Ubuntu and later
    • 32- and 64-bit versions of Sun Solaris 10 U4 and later

VM Upgrades … 

Now, a word about upgrades… if you upgrade a VM to version 7 hardware (to take advantage of VMXNET 3), it’s a one-way upgrade, i.e. you cannot go back.

Scott Lowe posted some good information about the virtual machine upgrade process on his blog. It’s worth a look.

David Oslager, from our own field, added some great info about capturing the interface information from a Windows VM for later reapplication after the upgrade. This is his process…

As per Scott’s blog, you have to save the IP info from the old NIC and replace the IP info on the “new” VMXNET 3 adapter.

Dump the IP info to a text file and then reapply it on Windows

To dump the IP config using netsh from a command line:

netsh interface ip dump > c:\ipconfig.txt

Since Windows will most likely see the new NIC as “Local Area Connection 2” (or something similar) you have to modify the above text file and change the NIC name to match the new NIC’s name. Or change the new NIC’s name on the host to match what’s in the file above. Either way works.

To re-import it:

netsh -c interface -f c:\ipconfig.txt

This really comes in handy when you have a lot of DNS servers, WINS servers, etc and/or multiple IPs on the same NIC.

Let’s talk Security … DMZs, VLANs, and L2 Attacks

We recently posted a paper titled Network Segmentation in Virtualized Environments on vmware.com that discusses and describes three virtualized trust zone configurations and some best practices for secure deployment.

So, what’s a trust zone? It’s part of a network (a network segment) within which traffic flows relatively freely. Traffic in and out of the trust zone is subject to stronger restrictions. Good examples are DMZs, or web/application/database zones between which we would put some form of firewalling.

The idea of consolidating a DMZ to a single host (one of the scenarios described in the paper) has stirred some opinions in the VMware Communities. The subject of security always does.

I thought one of the replies to the ongoing discussion was worth reposting. The post is from our own Serge Maskalik (aka vSerge on the Communities). You can read the rest of the thread here to get the context of the discussion, but the points about L2 attacks stand on their own, which is why I reposted them here…

These are really good questions, and there are a number of considerations with regard to using VLANs and how to properly secure L2 environments to reduce your attack surface area. To say that VLANs aren’t secure and can’t be used for DMZ usage isn’t fair – the reality is that there have been lots of very secure VLAN implementations in production networks since the early part of this decade, especially in service provider networks. When you go to a Savvis, Global Crossing, AT&T, etc. – you get a VLAN + CIDR block, and a datacenter’s tenants are split up this way across the access layer. I recall building out the GlobalCenter datacenters in the late 90s/early part of this decade (these are now Savvis through the Exodus acquisition), and the flat edge network, which was Catalyst 5500s with shared broadcast domains, became Catalyst 6500s or 7600s or comparable solutions from other vendors, with VLAN segregation by customer and VLAN counts in the 1k+ range per datacenter. That was almost 10 years ago, and we now see lots of large and small enterprise networks heavily leverage VLANs to reduce the number of physical NICs, simplify physical topology, reduce port density requirements at the switching edge, provide more configuration flexibility, reach large consolidation ratios by running more VMs on a smaller number of ESX servers in collapsed DMZ+internal environments, etc.

The following is a little bit of information about L2 attacks that folks often talk about and how to put some controls in place to prevent them.

1. CAM flooding or MAC flooding. Switches use content-addressable memory (CAM), which contains VLAN/PORT/MAC-ADDRESS tables for looking up egress ports as frames are forwarded. These are the forwarding tables for the switches, and they have limits in size. The CAM tables are populated by looking at the source MAC on a frame and creating a CAM entry that records which port maps to which source MAC. This attack type tries to overrun the table by generating large numbers of frames with different MACs, to the point that there is no more room in the CAM to store the MAC entries. When the CAM can no longer be populated, the switch will act like a hub and flood frames to all ports except the one the frame came in on (to prevent loops). To avoid this, there are features like setting the max number of MAC entries per port – in most cases you only need one per NIC. By setting this configuration, you get rid of this risk. Secondly, you have to evaluate the risk of such an attack. The attacker has to penetrate into the DMZ, own a host within the DMZ or already be on a segment close to the DMZ to run this attack. This attack could not occur if there are intermediate routers in the path, since a MAC rewrite occurs on those nodes. It’s a good idea to limit your L2 broadcast domains and the diameter of the switched network to avoid propagation of these types of issues. (A small toy model of this flooding behavior appears after the list.)

2. VLAN Cross-talk Attacks (or VLAN Hopping) – on Cisco switches, dot1q trunks pass all tags by default. When you configure the ESX host to uplink via a dot1q trunk and guest tagging is allowed, it’s conceivable that a rogue guest can generate frames for VLANs it should not be a part of. Avoid enabling guest tagging and monitor your vSwitch configuration activity for such things. Another way to hop VLANs is to spoof dynamic trunk configuration frames from a host; protocols like these are used by vendors to automatically configure 802.1q trunks and set up allowed VLANs. To avoid this, explicitly configure switch ports passing tagged frames to be trunks and explicitly forward only specific tags. Also, don’t allow unplugged ports on the switches to remain in a VLAN used by important assets – put them into an unused VLAN to avoid the possibility of someone plugging into a port and getting access to the VLAN. Avoid using default VLANs (like VLAN 1 on Ciscos).

3. ARP spoofing – this is where a host on the same segment as other hosts modifies the ARP table on the edge router/gateway to point to the attacker’s MAC and is able to redirect traffic to itself. This can be done using ARP request or gratuitous ARP mechanisms. This is a bit tougher to defend against, but it can happen regardless of whether you are using VLANs or not.

4. Spanning-Tree attacks – this is where attackers could cause a DoS and bring down a section of the L2 network by generating malicious STP BPDUs to become the root bridge or confuse the protocol into blocking specific ports. This can happen regardless of the usage of VLANs, and features like bpdu-guard and root-guard help prevent this type of thing.

5. VRRP or HSRP tampering – break the failover protocol for the default gateway, take over the gateway MAC yourself, etc.

6. Starve out the DHCP address range – not as big of a deal for a DMZ, unless you are using DHCP for servers.
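
To make the CAM flooding mechanics from point 1 concrete, here is a toy model in Python (purely illustrative, not modeled on any particular switch): once the table is full, frames for unknown destinations get flooded, while a per-port MAC limit keeps forged addresses from ever filling it.

class CamTable:
    def __init__(self, capacity, max_macs_per_port=None):
        self.capacity = capacity               # total CAM entries the switch can hold
        self.max_per_port = max_macs_per_port  # optional port-security-style limit
        self.table = {}                        # MAC -> port
        self.per_port = {}                     # port -> number of MACs learned on it

    def learn(self, src_mac, port):
        if src_mac in self.table:
            return
        if self.max_per_port and self.per_port.get(port, 0) >= self.max_per_port:
            return                             # per-port limit hit: forged MACs are ignored
        if len(self.table) >= self.capacity:
            return                             # CAM is full: nothing more can be learned
        self.table[src_mac] = port
        self.per_port[port] = self.per_port.get(port, 0) + 1

    def egress(self, dst_mac, ingress, all_ports):
        if dst_mac in self.table:
            return {self.table[dst_mac]}       # known destination: single egress port
        # Unknown destination: flood out every port except the ingress, which
        # is the hub-like behavior an attacker is after once the CAM is full.
        return set(all_ports) - {ingress}

With max_macs_per_port=1, as suggested above, an attacker’s forged source MACs are simply never learned, so legitimate entries are never crowded out and the table cannot be driven into the flooding state.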

We on the vShield Zones team recognize these issues and try to provide visibility into VMs and the flows destined to and sourced from VMs from a network perspective. Using Zones, you can see an ARP spoofing attack from a VM or a physical host on a segment and remediate the issue. Security best practices hold that you need visibility into L2 to deal with these types of issues, so in addition to providing firewalling functionality, we spent a lot of time on providing microflow visibility.

Also, we are seeing lots of customers use vShield Zones to isolate and segment clusters so they can serve dual purposes, hosting both DMZ and internal server VMs, using VLANs plus vShield Zones isolation. We will be posting papers on this front, and there will be examples at VMworld of how this can work. We are seeing three major use cases in this context:

1. Isolated/segmented DMZ in a dedicated set of ESX hosts or cluster, with multiple trust zones provided by vShield Zones.

2. Fully collapsed DMZ where the cluster or set of ESX hosts are shared by internal VMs and Internet-facing VMs.

3. Branch office environments where there may be some VMs hosted with Internet access, some for internal server usage and VDI as well.