
Monthly Archives: May 2011

Migrating to ESXi part 3 – Installing ESXi and reconfiguring the host

This is the third post in my migrating to ESXi series.  In the first post I talked about the need to upgrade vCenter and provided some tips to ensure a smooth upgrade.  In the second post I talked about the need to evacuate VMs off the ESX host and called out special consideration needed for VMs running on boot disk and other local datastores.  This time I will talk about the steps to install ESXi and reconfigure the host after the ESXi migration.

Installing ESXi
Once vCenter has been upgraded and all the VMs have been migrated off your ESX hosts you are ready to install ESXi.  To install ESXi you simply boot the host from the ESXi installation media and follow the prompts to accept the license agreement and select the target boot disk. 


I strongly recommend that before installing ESXi you take some time to document the ESX host configuration using the Host Configuration Worksheet as a guide.  This will ensure you have a good reference available to facilitate reconfiguring the host after the ESXi install.

When installing ESXi there are a few things to watch for:

  • Remember, the ESXi install will reformat the boot disk, so make sure you've migrated any VMs and templates you want to keep off the boot disk before you install.
  • Any local disks that the installer identifies as blank (disks without a partition table) will automatically be claimed by ESXi and formatted with VMFS.  If you have any blank local disks that you don’t want formatted as VMFS, disconnect them while you install ESXi. 
  • If the host has access to a large number of LUNs it may take several minutes for the installer to complete its storage scan.  You can speed up the installation by disconnecting the host from the SAN while you install ESXi.
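These same caveats apply if you script the install rather than run it interactively. As an illustration only, here is a minimal ESXi 4.1 kickstart file, modeled on the default ks.cfg that ships with the installer (the password is a placeholder, and you should verify the directives against the ESXi Installable and vCenter Server Setup Guide for your build):

```
accepteula
rootpw MySecretPassword
autopart --firstdisk --overwritevmfs
install cdrom
network --bootproto=dhcp --device=vmnic0
```

Note that --overwritevmfs is the scripted equivalent of accepting the boot-disk reformat, so the same warning about evacuating VMs and templates first applies.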

Configure the Management Network
After installing ESXi, the next step is to log on to the ESXi Direct Console User Interface (DCUI) to set the host’s root password and configure the management network.  From the host's console press F2 to access the system customization screen (shown below).  The first time you log on the root password is empty, so leave it blank when prompted.  Use the DCUI to set the host’s password and configure the management network.


After configuring the management network it’s always a good idea to test network connectivity using the “Test Management Network” option. 

Reconnect the Host in vCenter
Once the root password has been set and the host is connected to the management network, the next step is to reconnect the host in vCenter.  Do this by logging on with the vSphere Client, right-clicking on the host, and choosing “Connect”.


When you reconnect the ESXi host you will get a pop-up notifying you that the SSL certificate cannot be verified; this is okay because the host was reinstalled and now has a new SSL certificate. 


Close the SSL error pop-up and provide the ESXi host’s user name and password when prompted.  Be sure to choose “Yes” when asked whether to trust the new host certificate.  The host will then be reconnected in vCenter.


Support for Rolling Upgrades
Note that it is supported to have a mix of ESX 3.5, 4.0, 4.1 and ESXi 4.1 hosts in the same cluster.  This support enables you to perform “rolling upgrades”.  A rolling upgrade is when you take one host out of the cluster, migrate it to ESXi, reconnect it to the cluster, and then repeat the procedure for each of the remaining hosts.  One bit of advice when running a mixed cluster: wait until all hosts in the cluster have been migrated to ESXi before provisioning any new VMs or upgrading VMware Tools and virtual hardware versions.  This precaution avoids potential conflicts that can crop up when running newer versions of VMware Tools and newer hardware versions on older ESX hosts. 

Reconfigure the ESXi Host

With the ESXi host reconnected in vCenter, the final step is to reconfigure the newly migrated ESXi host.  In my lab this involves setting the NIC teaming properties, adding vSwitches and Port Groups, and reconfiguring NFS datastores.  Depending on your environment this may also include setting up iSCSI initiators and configuring advanced storage settings and multi-pathing policies.  Check your host against the pre-migration settings documented in the Host Configuration Worksheet to ensure everything gets properly reconfigured.

Using Host Profiles to Reconfigure ESXi Hosts

If you only have a few hosts to migrate you can probably get by with manually reconfiguring them.  However, if you have a lot of ESX hosts, manually reconfiguring each host individually is not only repetitive but can become time consuming and error prone.  Fortunately, you can automate the host reconfiguration step using vCenter Host Profiles.  Host Profiles are a licensed vCenter feature and you will need a valid license to use them, but remember you can leverage the 60-day trial license included with the vCenter 4.1 install/upgrade.  Just be sure to coordinate your ESXi host migrations to take place within the 60-day trial period.

To use Host Profiles you need to start with a fully configured ESXi 4.1 host.  This reference host will be used to create the Host Profile.  To create a Host Profile use the vSphere Client to perform the following steps:

  1. Right click the reference host and choose “Host Profile -> Create Profile from Host…”
  2. Enter a name and description for the Host Profile and choose “Next”
  3. Verify the name and description and choose “Finish”


(Note that after you create the Host Profile you can modify and further customize it by navigating to "Home" -> "Host Profiles" in the vSphere client.)

Once the Host Profile has been created you can then attach it to each ESXi host after it has been reconnected in vCenter.  Perform the following steps in the vSphere client to attach a Host Profile to a host:

  1. Place the host into maintenance mode by right-clicking the newly connected ESXi host and choosing “Enter Maintenance Mode…”
  2. Right-click the host and choose “Host Profile -> Manage Profile…”.  In the ensuing pop-up choose the Host Profile to be attached.


After the profile has been attached you then apply it.  Applying the profile will apply all the configuration settings saved in the Host Profile to the host.  For host-specific settings, like the IP address for the vMotion network, you will be prompted to provide the required values.  To apply the Host Profile:

  1. Right-click the ESXi 4.1 host and choose “Host Profile -> Manage Profile… -> Apply Profile…”. 
  2. In the ensuing pop-up provide any values for host specific settings and choose “Next”. 
  3. Once all the required values have been provided click "Finish" to apply the changes to the host.  




(Note that in some cases you may need to apply the Host Profile twice, as some changes may require other changes be committed first.)

After the Host Profile has been applied, take the host out of maintenance mode by right-clicking the host and choosing "Exit Maintenance Mode…".  At this point the host should be fully configured and capable of hosting VMs.  I recommend testing the host by migrating a few less critical VMs first.  Once you are confident the host is working as expected you can then proceed to migrate the next host in the cluster.


Installing ESXi is pretty straightforward.  You simply boot the host using the installation media and follow the prompts.  Remember the install will overwrite the boot disk, so be sure to migrate any VMs or templates off the boot disk before installing.  It is also important to be aware that any blank local disks will automatically get formatted by the ESXi installer.  Also, if the host has access to a lot of LUNs you may want to temporarily disconnect it from the SAN to speed up the install by giving the installer less storage to scan.  After ESXi has been installed, log on to the DCUI to set the root user password and configure the management network.  Once the host is back on the management network, reconnect it in vCenter and complete the configuration.  If you only have a few hosts to migrate you can manually configure each host.  However, if you have a lot of hosts, use Host Profiles to automate the reconfiguration.

LBT (Load Based Teaming) explained (Part 2/3)

By Hugo Strydom, Managing Consultant, VMware Professional Services

In the first part we looked at how we collect network stats. In this part we will be looking at when and how LBT checks for load and the process of moving the VM traffic to another pNIC.

Let's first look at the detection and frequency. Every 30 seconds the vmkernel checks the stats for the relevant pNICs. The calculation averages the stats over the 30-second interval (to normalize the numbers and eliminate spikes). If utilization is above 75%, the pNIC is marked as saturated.

LBT looks at RX and TX individually. Thus even if RX is at 99% and TX is at 1% (an average of only 50%), the calculation takes into account that RX is above the 75% threshold and still marks the pNIC as saturated.

If a pNIC has been marked as saturated, the vmkernel will not move any more traffic onto it (except when a VM is powered on, at initial placement; see part 1). VM traffic will only be moved off a saturated pNIC, never onto one, unless all pNICs have been marked as saturated (more on this in Part 3).

The process of moving VM traffic from one pNIC to another is as follows:

  1. vmkernel detects that a pNIC is saturated based on the 30-second calculation
  2. A calculation takes place to determine which VMs to move to which unsaturated pNIC
  3. VM traffic is moved to the new pNIC

Note that when the VM traffic is moved there is no interruption to the traffic or to the VM's world.
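The detection logic described above can be sketched in Python. This is an illustrative model of the behavior described in this post, not VMware's actual implementation; the data structures and names are made up, while the 75% threshold and 30-second averaging window come from the text:

```python
# Illustrative model of LBT saturation detection -- not vmkernel code.
SATURATION_THRESHOLD = 0.75   # 75% average utilization marks a pNIC saturated
WINDOW_SECONDS = 30           # stats are averaged over each 30-second interval

def is_saturated(rx_avg, tx_avg):
    """RX and TX are judged individually: either direction crossing
    the threshold marks the whole pNIC as saturated."""
    return rx_avg > SATURATION_THRESHOLD or tx_avg > SATURATION_THRESHOLD

def rebalance_candidates(pnics):
    """pnics maps a pNIC name to its (rx_avg, tx_avg) utilization,
    each averaged over the 30-second window to smooth out spikes.
    Returns (saturated pNICs to move traffic off, eligible targets);
    traffic is never moved onto a saturated pNIC."""
    saturated = {n for n, (rx, tx) in pnics.items() if is_saturated(rx, tx)}
    return saturated, set(pnics) - saturated
```

For the example in the text, is_saturated(0.99, 0.01) returns True even though the combined average is only 50%, because RX alone exceeds the threshold.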

In part 3 we look at some scenarios and rules on how LBT will move VM traffic around.

LBT (Load Based Teaming) explained (Part 1/3)

By Hugo Strydom, Managing Consultant, VMware Professional Services

In this first part of how LBT works we want to explain how the stats are derived for the pNICs attached to the vDS (or vSwitch).

The vmkernel maintains tx/rx stats for each of the pNICs attached to a vDS. The stats in the vmkernel are updated as soon as a packet has been sent or received by the pNIC. The stats update adds no overhead to delivering the packet to the VM.

When a pNIC is not attached to a vDS, no stats will be collected for that pNIC. Have a look at esxtop and note that no stats are shown for pNICs that are not connected to a vSwitch/vDS; in fact, such pNICs are not listed in esxtop at all.

ESX host with 4 pNICs

Consider a vDS Port Group configured for LBT. When a VM is powered on, its vNIC is attached to one of the vDS’s pNICs using “Route based on originating virtual port” (the default initial placement when LBT is selected). At 30-second intervals the vmkernel calculates the load on each pNIC attached to the vDS and, if needed, moves VM traffic over to another pNIC. Thus during VM startup the vNIC could be placed on a pNIC that is considered saturated, but it will be moved off to other pNICs that are not saturated once LBT has done a round of calculations to balance the load.
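The initial placement described above can be modeled with a short Python sketch. This is illustrative only: the modulo mapping stands in for "Route based on originating virtual port" (which selects an uplink from the virtual port ID without considering load), and the names are hypothetical:

```python
def initial_uplink(virtual_port_id, uplinks):
    """Initial placement ignores load entirely: the vNIC's virtual
    port ID deterministically picks an uplink. This is why a VM can
    land on a saturated pNIC at power-on and only gets moved once
    LBT's next 30-second calculation runs."""
    return uplinks[virtual_port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]  # the 4-pNIC host above
```

Consecutive port IDs simply fan out across the team; only the subsequent LBT calculations take measured load into account.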

In part 2 we will explain how LBT calculates when to move VM traffic onto another pNIC.

New ebook stock again, do your “transition to ESXi” course today!

A couple of weeks ago we published a free elearning course dedicated to ESXi, “Transition to ESXi Essentials”. The course is a self-paced three-hour online training, and to make it even more appealing we decided to bundle it with a FREE online ebook copy of “VMware ESXi: Planning, Implementation, and Security” authored by Dave Mishchenko. The interest was overwhelming and after a week we were fully sold out of ebooks. I just found out that because of the ongoing interest a new batch of ebooks has been bought. If you haven't taken the course yet, make sure you do it now. Not only is this course essential to prepare for your transition to ESXi, it will also provide you with a very valuable tool, Dave's ESXi ebook!

Transition to ESXi Essentials… Now!

Migrating to ESXi part 2 – Moving VMs off the ESX Host

In my last post I mentioned the requirement to upgrade vCenter in preparation for migrating to ESXi.  In this post I will discuss the storage considerations related to an ESXi migration.

Before you shut down an ESX host in preparation to migrate it to ESXi you need to evacuate the VMs off the host.  The steps to do this vary depending on where the VMs are stored and what their availability requirements are.  There are three places where VMs can be stored:  (1) on the boot disk, (2) on local disks, or (3) on shared storage.  Let’s discuss the considerations for each.

VMs on Boot Disk Datastores
Every ESX host has at least one VMFS partition on its boot disk.  Before you migrate a host to ESXi it is important to identify any VMs or templates stored on this datastore.  The ESXi install will re-partition the boot disk, and in the process any VMs or templates not relocated will be lost.  As such, prior to shutting the host down you need to either move these VMs and templates to another datastore (preferably a shared datastore) or back them up so they can be restored after the migration.  As backup and restore is pretty self-explanatory I’m only going to cover the steps to relocate the files to a new datastore.  There are a couple of different ways to do this:

Moving Active VMs off the boot disk datastore(s)
If you want to avoid VM downtime the best way to move VMs off the boot disk is by using Storage vMotion.  Storage vMotion allows you to migrate the VM’s disk files to a different datastore while the VM is running.  Assuming you move the files to a shared datastore you can then use vMotion to migrate the running VM to another host and keep it running during the ESXi migration.   Note that both vMotion and Storage vMotion are licensed vCenter features.  Fortunately, if your vSphere license doesn’t include these features you can use the 60-day trial license provided with vCenter 4.1.

Moving inactive VMs off the boot disk datastore(s)
If it’s okay to power off the VM another option is to do a cold migration.  With a cold migration you first shut the VM down and then copy its disk files to a different datastore.  Once the files have been copied, and again assuming you copied them to a shared datastore, you can then register the VM on a separate host and power it on.   There is no special licensing required to perform a cold VM migration.

A note about templates
Templates are VMs that have been converted into a template in order to facilitate deploying new VMs.  To move a template you first need to convert it back to a regular VM, then migrate the files (using either Storage vMotion or cold migration), and then convert it back to a template. 


Figure 1 – VM Migration Options

VMs on Local Disk Datastores
Unlike with VMs on the boot disk, you have a choice about whether or not to move VMs and templates off any local datastores.  During the ESXi installation, local disks with existing datastores are ignored and will not be reformatted.  However, you do need to consider the impact the host downtime will have on your VMs, as the VMs and templates on local datastores will not be accessible while the host is being migrated.  If you want to keep the VMs running or ensure templates remain accessible while you migrate the host, you will need to move them off the host’s local datastores.  Again, you can use a combination of Storage vMotion and vMotion to do this, or if downtime is not an issue, use cold migration.  

One important thing to be aware of, if you choose to leave VMs on local datastores, is that you will need to manually re-register them following the ESXi migration.  When ESXi is installed, the host-level registrations for the local VMs are lost and the VMs will appear in vCenter as “ghosted”.  To clean this up you will need to remove the ghosted entries from vCenter and manually re-register each VM by browsing to the VM’s directory on the local datastore and right-clicking on the VMX file to add it back to the host’s inventory.

VMs on Shared Storage Datastores
VMs on shared datastores are the easiest to manage because they don’t require any special handling.  The only consideration necessary in the case of VMs on shared datastores is that if a VM is active on the host to be migrated you will need to shut it down or vMotion it to another host.

During the ESXi migration, when you install ESXi, the boot disk will be re-formatted, destroying any VMs and templates on it.  As such it’s important that prior to migrating to ESXi you move the VMs and templates off the boot disk, preferably to a shared datastore.  Evacuating the VMs and templates can be done with no VM downtime using a combination of Storage vMotion and vMotion.  If downtime is not a concern you can also use cold migration. 

In addition to evacuating VMs and templates off the boot disk, you also need to give consideration to VMs and templates stored on local datastores.  The ESXi migration requires shutting the host down, which means powering off any VMs running on local datastores.  While the host is powered down those VMs and templates will not be accessible until the ESXi migration is complete.  To ensure VMs and templates remain running and accessible during the ESXi migration it is recommended that they also be migrated to a shared datastore. 

VMs on shared storage are already readily accessible by multiple hosts and therefore don’t require any special consideration.  VMs on shared storage can simply be vMotioned to another host and remain running and accessible during the ESXi migration.

Migrating to ESXi part 1 – Are you ready for ESXi?

It’s been almost a full year since we first heard about ESXi convergence.  Have you made the move to ESXi?  Is it still on your to-do list?  Migrating to ESXi is really pretty easy and can even be done with no VM downtime.  The basic process involves (1) migrating the VMs off the host, (2) installing ESXi and (re)configuring the host, and (3) restoring the VMs. 


Over the next couple of weeks I will post a series of blogs to guide you through the ESXi migration process.  I’ll point out some things to watch for and provide some advice to help steer you through a smooth and efficient transition to ESXi. 

Because most of us use vCenter Server I will start by discussing the vCenter upgrade.  Before you can use vCenter to manage ESXi 4.1 hosts you need to be running the latest version of vCenter.  Today the latest release of vCenter is 4.1, so I assume everyone will be upgrading to this version.  Details on how to do the actual upgrade are covered at length in the vCenter Server Upgrade Guide so I won’t go into them here; however, I will point out a few things you need to be aware of:

Verify vCenter Upgrade Requirements
You can only upgrade to vCenter 4.1 from VirtualCenter 2.5 or vCenter 4.0.  If you are running an older version you cannot upgrade directly to vCenter 4.1.

64-bit only
In vSphere 4.1 both ESXi and vCenter Server require 64-bit hardware (vCenter also requires a 64-bit OS).  If you are currently running on 32-bit hardware you will need to replace the server.  Note that you can continue to use a 32-bit database with a 64-bit ODBC DSN.

MSSQL 2000 and Oracle 9i support dropped
vCenter Server 4.1 no longer supports MSSQL 2000 or Oracle 9i.  If you are running one of these databases you will need to upgrade the database to a supported version.

Use the vCenter Server 60-day trial license
vCenter provides several tools that will help in migrating to ESXi.  Features like vMotion, Storage vMotion and Host Profiles can facilitate the transition and enable you to avoid VM downtime.  Many of these are separately licensed features within vCenter, but fortunately VMware provides a free 60-day trial license available immediately after you install/upgrade to vCenter Server 4.1.  Take advantage of this free 60-day trial period and use these tools to help with your migration.  Just make sure you time your vCenter upgrade so it coincides with your ESXi host migration; once the 60-day trial period expires your access to these features may be disabled, and there is no way to extend the trial period.

What, no vCenter?
I recently presented on the topic of ESXi Transition at several regional VMware User Group (VMUG) conferences where I met with several customers who aren’t using vCenter Server.  While vCenter Server is not required to migrate to ESXi, having one definitely makes the migration easier.  If you don’t use vCenter Server consider deploying one temporarily (using the 60-day trial) to help with your migration.  When you are done with your migration you can go back to managing the hosts locally. 

The following chart provides an overview of the key considerations when upgrading vCenter Server.  Start on the road to ESXi today by putting together a plan for upgrading vCenter.  Next time I’ll talk about evacuating VMs off your ESX hosts in preparation for installing ESXi.


Taking ESXi scripted installs to a new level

A couple of weeks back I wrote about an article that Tom Arentsen published on automating the installation of ESXi. It's a great read and I have had a lot of positive feedback on it. Today Tom released an article which takes this to the next level.

In this article Tom shows you how to generate an event on your vCenter Server which in turn kicks off a script to add the ESXi host to vCenter and finish the configuration… how brilliant is that?

So once we have the script to log the event, we create an alarm in vCenter that kicks off a script to configure the ESXi host.

The full picture looks as follows:

Anyway, read Tom's article for all the details! Great work Tom, keep them coming!