
Monthly Archives: October 2011

vCenter Site Recovery Manager 4.1.2 Released

On October 27th the 4.1.2 release of SRM was published. It's a fairly small update, but it includes a number of nice bug fixes:

  • Site Recovery Manager 4.1 Test Failover Fails
  • Recovery Fails When NFS Datastores are Mounted on Two Hosts with Different IP Addresses
  • Rolling Back the Uninstallation of the SRM Service Fails
  • Timeout During Network Customization of SUSE Linux 10 Virtual Machines
  • Recovery Plan OS Heartbeat or IP Customization Timeout Settings Greater than 2000 Seconds Wait Forever
  • Running the vCenter Site Recovery Manager dns_update script fails with the error "\VMware\VMware was unexpected at this time"
  • Creating a Protection Group or Protecting a Virtual Machine Fails with the Error: Operation Not Supported on the Object
  • vCenter Server Session Does Not End when the vSphere Client Closes if the Remote Site is Unavailable
  • Unclear Error Message when Installation Fails Because the SRM User Account Does Not Have Administrator Privileges
  • SRM Server Fails to Start the SRM Service if a Datastore Name is Invalid
  • SRM Stops Responding During Failover when the Storage Array on the Protected Site is Offline
  • SRM Uninstaller Does Not Remove Old SRM License Asset Data from the User Interface
  • Customization Specification Does Not Configure the Gateway for Red Hat Enterprise Linux 5.x

Check out the release notes here:

and download it here:

-Ken


VMware Technical Publications on YouTube

There are so many new features and capabilities in vSphere 5.0 that it's becoming increasingly difficult to keep up with everything. When it comes to getting a quick introduction to a new feature, I've found a great resource in the VMware TechPubs channel on YouTube. The channel contains several short videos (about three minutes each) that provide a high-level overview of various features and capabilities. If you have a question about a new feature and what it does, this is a great place to start.

http://www.youtube.com/user/VMwareTechPubs

Here are just some of the videos available today (and there are more coming):

ESX/ESXi Convergence

Using Image Builder CLI

Auto Deploy Architecture

Using Host Profiles

Troubleshooting Smart Cards in VMware View

vSphere Network I/O Control

vCenter Server Appliance (VCSA) feedback survey

Now that vSphere 5.0 has been out for a few months, we have launched a survey to understand vCenter Server Appliance (VCSA) needs going forward, so that we can take your feedback into consideration in our future planning.

We would appreciate your help in spreading the word so that we get diverse feedback.

Here’s a link to it: http://www.surveymethods.com/EndUser.aspx?D3F79B81DA908780D1

Thanks for the help.

Mixing ESX/ESXi Versions in an HA/DRS Cluster

Kyle Gleed, Sr. Technical Marketing Manager, VMware

(Note original post was updated on 28 Oct 2011 to better clarify ESX/ESXi 3.5 support in a mixed cluster.)

Running different versions of ESX/ESXi in a vCenter 5.0 HA/DRS cluster is supported. Frank Denneman recently posted a good blog on this. You may ask why anyone would want to run a mixed cluster. Usually, this is done to facilitate rolling upgrades. If you have a large 32-node cluster, it's not practical to upgrade all 32 hosts at once; instead, you can leverage the mixed-cluster support to upgrade two or three hosts at a time and "roll" the upgrade through the cluster until all 32 hosts are upgraded.
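
To make the rolling pattern concrete, here is a minimal Python sketch of the loop, assuming batches of three. The three helper functions are hypothetical stand-ins for whatever actually performs the work in your environment (Update Manager, PowerCLI scripts, and so on); only the batching logic is the point.

    # Hypothetical sketch: the helpers are stand-ins for your real tooling
    # (Update Manager, PowerCLI, ...); only the batching logic matters here.
    def enter_maintenance_mode(host):
        print("Evacuating VMs and entering maintenance mode on %s" % host)

    def upgrade_host(host):
        print("Upgrading %s to ESXi 5.0" % host)

    def exit_maintenance_mode(host):
        print("Exiting maintenance mode on %s" % host)

    def rolling_upgrade(hosts, batch_size=3):
        """Upgrade a few hosts at a time so the rest keep serving VMs."""
        for i in range(0, len(hosts), batch_size):
            for host in hosts[i:i + batch_size]:
                enter_maintenance_mode(host)  # DRS migrates VMs to the remaining hosts
                upgrade_host(host)
                exit_maintenance_mode(host)   # host rejoins before the next batch

    cluster = ["esx%02d.example.com" % n for n in range(1, 33)]  # a 32-node cluster
    rolling_upgrade(cluster, batch_size=3)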

While mixed clusters are supported, there are some things to watch out for, specifically the VMware Tools and virtual hardware versions of your VMs. The table below summarizes the VMware Tools and virtual hardware versions that are supported on both vSphere 4.x and 5.0.

[Table: VMware Tools and virtual hardware versions supported on vSphere 4.x and 5.0]
From the table we can see that:

1.  VMs with virtual hardware version 3 and VMware Tools version 3 are not supported on ESXi 5.0 hosts.

2.  VMs with virtual hardware version 8 are not supported on pre-5.0 hosts.

3.  VMFS-5 is not supported on pre-5.0 hosts.

With these limitations in mind, my recommendations for running mixed clusters are as follows:

1.  Verify VM virtual hardware and VMware Tools versions before mixing 3.5 and 5.0 hosts in the same cluster. VMs with VMware Tools version 3 and virtual hardware version 3 are not supported on ESXi 5.0. To avoid potential pitfalls, be sure to upgrade VM hardware to version 4 and VMware Tools to the 3.5 version before mixing 3.5 and 5.0 hosts in the same cluster (see the sketch after this list for one way to audit this).

2.  Do not upgrade virtual hardware versions while running in mixed mode. Once you upgrade a VM's virtual hardware to version 8, it can no longer run on pre-5.0 ESX/ESXi hosts. In addition, there is no option to undo the upgrade or revert to an earlier virtual hardware version. As such, while running a mixed cluster you should avoid upgrading the virtual hardware version of your VMs to version 8 until after all hosts have been upgraded to ESXi 5.0.

3.  Do not upgrade VMFS-3 volumes to VMFS-5 while running in mixed mode. Wait until all the hosts in the cluster are running ESXi 5.0 before upgrading VMFS volumes. Upgrading to VMFS-5 will prevent any pre-5.0 hosts from accessing the filesystem. Also, note that the upgrade to VMFS-5 is permanent; there is no way to revert an upgraded VMFS volume back to VMFS-3.

4.  Do upgrade VMware Tools to the latest version. Unlike the virtual hardware version, the newer VMware Tools 5.0 version is fully supported on older ESX/ESXi 4.x hosts. As there are many improvements in the latest version of VMware Tools, it's always a good idea to upgrade as soon as possible. Note, however, that VMware Tools 4.0 is also fully supported on ESXi 5.0, so it's not required to upgrade VMware Tools right away. If you have 3.5 hosts in your cluster, you should wait until all hosts are running ESX/ESXi 4.x or higher before upgrading VMware Tools to version 5.0.
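
Recommendation 1 can be scripted rather than eyeballed. Here is a minimal sketch using pyVmomi (VMware's Python SDK for the vSphere API); the vCenter hostname and credentials are placeholders, and the exact version string to flag ("vmx-03") is an assumption to adapt to your environment.

    # Minimal sketch: flag VMs whose virtual hardware version would be a
    # problem in a mixed cluster with ESXi 5.0 hosts. Placeholder credentials.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            hw_version = vm.config.version        # e.g. "vmx-03", "vmx-04", "vmx-08"
            tools_status = vm.guest.toolsVersionStatus
            if hw_version == "vmx-03":            # hardware version 3: unsupported on ESXi 5.0
                print("%s: hardware %s, tools %s -- upgrade before adding 5.0 hosts"
                      % (vm.name, hw_version, tools_status))
        view.Destroy()
    finally:
        Disconnect(si)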

Conclusion

So, in conclusion, running mixed ESX/ESXi versions in an HA/DRS cluster is supported, but be careful not to mix older VMs running virtual hardware 3 or VMware Tools 3 into the same cluster as ESXi 5.0 hosts. It is okay (if not recommended) to upgrade VMware Tools while running in mixed mode as long as all the hosts are running ESX/ESXi 4.x or higher, but avoid upgrading virtual hardware and VMFS volumes until after all hosts are running ESXi 5.0.

Configuring IP Addresses with Auto Deploy

When using Auto Deploy you have two options for managing the IP addresses of your ESXi hosts: (1) use static reservations in DHCP, or (2) use an answer file. I'll go over each of these options.
 
1.  I like to use static DHCP reservations, as this eliminates the extra step of having to pre-populate an answer file for each host. With a static IP reservation, the DHCP server assigns an IP address based on the host's MAC address, which ensures that each time the host boots, the DHCP server assigns the same IP address. Because the host always gets the same IP address from the DHCP server, there is no need to reconfigure the host's network with an answer file when the host profile is applied.

[Screenshot: a static DHCP reservation for an ESXi host]

Configuring static IP reservations in DHCP is very easy. Simply provide the MAC address of the primary NIC used for the management network and enter the IP address you want assigned. The steps to configure static IP reservations on a Windows DHCP server can be found in the vSphere Installation and Setup Guide.
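
For anyone running a Unix-based DHCP service instead of Windows, the same idea applies. As a rough illustration (the MAC and IP values are made up), this small Python helper emits dnsmasq-style dhcp-host reservation lines from a host table:

    # Illustrative only: emit dnsmasq "dhcp-host" lines that pin each ESXi
    # host's MAC address to a fixed IP. The values below are made-up examples.
    hosts = [
        ("00:50:56:01:0a:01", "192.168.10.11", "esx01"),
        ("00:50:56:01:0a:02", "192.168.10.12", "esx02"),
    ]

    for mac, ip, name in hosts:
        # dnsmasq reservation: the same MAC always receives the same IP
        print("dhcp-host=%s,%s,%s" % (mac, ip, name))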

2.  Another approach to configuring IP addresses for your Auto Deployed ESXi hosts is to create an answer file. Answer files are a new feature of host profiles introduced in vSphere 5.0. Where host profiles store configuration parameters that are common to many hosts, answer files store information that is unique to each host, such as IP addresses.

[Screenshot: per-host answer file settings in host profiles]

The downside to using answer files is that you must first do an initial deployment of each host in order to manually populate each host's answer file.  An example of how to do this is available in the vSphere Installation and Setup Guide.  However, once the answer file has been created it will be used to automatically reconfigure the host's network during all future reboots without additional user intervention.

Note that it is possible to preconfigure the answer files using scripts. This can eliminate the need to manually pre-configure each host, but it does require some scripting skills. Refer to this blog for more information on how to accomplish this.
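
As a purely conceptual sketch of what such a script does, the snippet below derives each host's unique values from a naming convention. The field names are hypothetical; the real answer files are created and stored by vCenter when a host profile is applied.

    # Purely conceptual: generate per-host network settings from a naming
    # scheme. The dictionary keys are hypothetical, for illustration only.
    import ipaddress

    BASE_NET = ipaddress.ip_network("192.168.10.0/24")

    def answer_data(host_number):
        """Derive the kind of unique per-host values an answer file holds."""
        return {
            "hostname": "esx%02d.example.com" % host_number,
            "ip": str(BASE_NET.network_address + 10 + host_number),
            "netmask": str(BASE_NET.netmask),
            "gateway": str(BASE_NET.network_address + 1),
        }

    for n in range(1, 4):
        print(answer_data(n))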
 
I tend to prefer using static IP reservations in DHCP, but either approach will work. I know many customers prefer using answer files, as it allows them to maintain their ESXi host IP addresses on their own without having to involve the DHCP administrator each time they deploy new hosts or want to make changes.

VMworld 2011 (Copenhagen) – Interesting Storage Stuff from Diskeeper, Nexenta & Pivot3.

Those of you who are regular readers of this blog will know that I usually try to put something together after I've attended a conference. Here is an article about the things I thought were cool at VMworld 2011 in Las Vegas. Of course, I was also at VMworld 2011 (EMEA) in Copenhagen last week, so I thought I'd have a look around the Solutions Exchange and see what else caught my eye.

My Usual Disclaimer – I have to remain vendor neutral on anything I post here, so once again I want to make it clear that VMware doesn't favour any one storage partner over another. I'm not personally endorsing any of these vendors' products either. The partners listed in this post are here simply because I think what they are doing is interesting or innovative. Keep in mind that I don't get to spend time with every single exhibitor, so please do your own research if you are considering any of these products. However, I hope you still find the post informative.

So what did I see?

Diskeeper V-locity 3 – I definitely wanted to look these guys up, as we had some communication offline before the show. The discussions we had revolved around one of the most tweeted articles that I wrote on this blog, namely the 'Should I defrag my Guest OS?' article. I managed to grab a coffee with Spencer Allingham, who is the Technical Director of Diskeeper in the UK, and we had an interesting chat about the side effects of a defrag on a virtual machine disk, basically the issues I discussed in the blog post. What Spencer told me was that their V-locity 3 product is now VM feature aware, and it can automatically turn off defrag if VM features which cause unwanted side effects (e.g. thin provisioning) are discovered. Better still, V-locity 3 has proprietary IntelliWrite technology, which optimizes file placement and attempts to prevent file fragmentation in the first place. It sounds like this could be a pretty cool product to evaluate if you believe that your guest OSes are suffering performance issues from excessive fragmentation. Find out more about Diskeeper here, including a trial download of V-locity 3.

Nexenta – Many regular readers and followers in the virtual storage community will have heard about Nexenta and their participation in the VMworld Hands On Labs (HOL), possibly via this very favourable article in The Register online news site. The comments on the article are just as interesting. After reading it, I wanted to learn more about their products. I met with Andy Bennett (Director, Sales Engineering) & Craig Morgan (Principal Solutions Engineer), who gave me a very detailed overview of their storage offering. Nexenta's storage solution is based around OpenSolaris & the ZFS file system. Nexenta can present both NFS & iSCSI, with the iSCSI datastores having full support for VMware's vSphere Storage APIs for Array Integration (VAAI). My understanding is that VAAI support for NFS is planned, as is a VASA vendor provider. Nexenta also has vSphere plugins that allow the storage to be managed from the vSphere client, which is very neat. However, the thing which jumps out the most from the Nexenta solution is their Aura interface. At their stand at VMworld, Nexenta were able to display in real time the utilization of their storage from the HOLs. Information like the most frequently accessed files (VMDKs), backend IOPS, and NFS read and write operations from each ESXi server in the lab was displayed in a very digestible and unique manner. The Aura interface also displays 'chords' representing the bandwidth attached to each of the ESXi servers. This made it extremely easy to see, at a glance, the bandwidth utilized by each ESXi server to the storage, with the colour of the chord changing depending on the consumed bandwidth. Better still, Nexenta could tune the displayed statistics on a per-metric basis, so if the VMware folks running the HOL wanted to see something specific in the performance data, Nexenta could very easily map a new metric into the Aura interface. The reason they can do this is the availability of the dtrace utility in OpenSolaris. Very cool indeed. In fact, from chatting with the guys, they are looking at a project which will allow these advanced storage metrics to be fed into VMware's vCenter Operations product. Learn more about Nexenta here.

Pivot3 – I had briefly met some of the guys from Pivot3 at VMworld in Las Vegas in September, but had the chance to spend a bit more time with them in Copenhagen. I was given an overview of their vSTAC appliance solution, which comes with ESXi 5.0 pre-installed, presents multi-port iSCSI targets to the host, and is basically ready for deployment by a vSphere admin with limited SAN knowledge. Pivot3 claim that a single appliance can support up to 100 VDI desktops, with full support for vSphere features like vSphere HA, Fault Tolerance, vMotion & Site Recovery Manager. From chatting to Lee Caswell & Olivier Thierry at the show, Pivot3's aim is to provide a very simple scale-out storage solution through their vSTAC offering. At VMworld in Copenhagen, Pivot3 announced a complete VDI out-of-the-box solution for VMware View using Pivot3's vSTAC VDI. If more VDI desktops are needed, more vSTAC appliances can be 'stacked' together seamlessly to meet the need. The guys also mentioned that the vSTAC solution is certified through VMware's new Rapid Desktop Program. What is cool about this is that the appliance contains pre-configured trial licenses of VMware View & VMware vCenter Server. Basically it is ready to run, and all a customer needs to do is add the correct licenses. Pivot3 were one of a number of storage vendors verified for this program, so I'll follow up on what the program is about in a future article. More about Pivot3 can be found here.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

SRM Compatibility Updates

Just a quick update for those of you who've had questions about supported platforms and databases.

We've updated the VMware Product Interoperability Matrices to be more authoritative. Some platform support has changed (e.g. we only support Update 3 for ESX/ESXi 4.0, and no prior update versions), and supported databases should be a lot broader than initially indicated in the PDF included with the release.

So make sure you check against the online interoperability matrix for definitive information if you want to check what components will work with others in your environment.

-Ken

A brief history of VAAI & how VMware is contributing to T10 standards

I guess most people who have an interest in storage will be well versed in what VAAI is and how it communicates with storage systems at a meta level to improve VM performance and scalability. In a nutshell, vSphere Storage APIs for Array Integration (yes, the acronym did change in 5.0) allows certain I/O operations to be offloaded from the ESXi host to the physical array. There are primitives available for block copy & block zeroing (used by VM snapshots, cloning operations, Storage vMotion, and virtual disks built with the eager-zeroed thick option). There is another primitive called Atomic Test & Set (ATS), which is a superior alternative to SCSI reservations when it comes to metadata locking on VMFS; a conceptual sketch of ATS follows below. In 5.0 we have also introduced new NAS & Thin Provisioning primitives. More about vSphere 5.0 VAAI enhancements can be found here.
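
To illustrate why ATS is a better citizen than a full SCSI reservation, here is a conceptual Python sketch (not the on-disk protocol) of atomic test-and-set semantics: an update succeeds only if the lock field still holds the value the host last read, so no host ever needs to lock the whole LUN.

    # Conceptual model of one on-disk lock field that ATS updates atomically.
    # A real array does the compare-and-write in firmware; the mutex below
    # merely simulates that single-field atomicity.
    import threading

    class VmfsLockRecord:
        def __init__(self):
            self._value = 0              # 0 means "unlocked"
            self._guard = threading.Lock()

        def atomic_test_and_set(self, expected, new):
            """Write `new` only if the field still equals `expected`."""
            with self._guard:
                if self._value == expected:
                    self._value = new
                    return True          # this host won the lock
                return False             # another host got there first; retry

    record = VmfsLockRecord()
    print(record.atomic_test_and_set(0, 42))  # True: host 42 acquires the lock
    print(record.atomic_test_and_set(0, 43))  # False: host 43 must retry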

VMware first started work on VAAI back in 2008. VAAI was initially implemented as vendor-specific commands in vSphere 4.1. However, VMware and its partners worked on standardizing these commands, to the extent that all of block VAAI (including the new thin provisioning additions in vSphere 5) is based on T10 standards. The amount VMware has contributed to the standards in the short time between vSphere 4.1 and 5 is non-trivial and unprecedented, as one can clearly see from the functionality covered (hardware-accelerated locking, virtual machine cloning, Storage vMotion, thin provisioning, space reclamation, etc.).

This is an extremely important VAAI change we made in vSphere 5.0 – all of the block VAAI commands are now based on standard T10 commands. There still seems to be some misinformation out there that VMware is forcing vendor-specific extensions through its implementation of VAAI. The reality is that VMware is contributing a lot to the standards, and many people may not realise that much of the T10 specification for VAAI is being driven by VMware.

Almost all array vendors now recognise the benefits of focusing on the scale and management of Virtual Machines. VAAI is the first step that enables storage arrays to get to this new level. VAAI has been responsible for a considerable number of changes to the SCSI protocol since 2009. In fact VMFS-5, released with vSphere 5.0, ships with a VAAI-only option. Almost all of the storage systems which host vSphere’s storage footprint now ship with VAAI-compliant firmware. This is a remarkable achievement.

For completeness, it should be noted that VAAI NAS is currently not standards based and is still proprietary at the time of writing.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

EMC’s VASA Implementation

For weeks now I've been trying to find the time to implement EMC's VASA Provider, which was released last month. What with researching new storage features for the next release of vSphere, and preparing for next week's VMworld 2011 in Copenhagen, I've just not had a moment. If you are not familiar with VASA, the vSphere Storage APIs for Storage Awareness, you can read about it in this earlier post. However, one of the lads I worked with in GSS Cork stepped into the breach and got it set up – take a bow Fintan Comyns :-) Fintan is an ex-EMC'er like myself, so we go back quite a way.

The VASA vendor provider from EMC is actually part of the SMI-S Provider v4.3.1, which also includes the correct version of Solutions Enabler, v7.3.1.

After discussing the implementation steps with Fintan, this is basically what is needed:

  1. Install SE v7.3.1 (Windows or Linux)
  2. Add on the SMI-S 4.3.1 component
  3. Assign a GK (Gatekeeper) LUN from the DMX to the VM running SE (as a pRDM)
  4. For the DMX, the vendor provider was automatically discovered in-band. We're not sure if this was due to a previous version of Solutions Enabler being installed, or if this is by design.
  5. For the CX, a TestSMiProvider command was used to add the provider out-of-band.
  6. Add the appropriate VASA provider URLs to the Vendor Provider section of the vCenter UI.
  7. Rescan the SAN
  8. Storage Capabilities appear against the LUNs and in the VM Storage Profiles.

Please note that Fintan observed that for the DMX, the vendor provider sync operation in vCenter took some time (minutes), but for the Clariion the sync was almost instant.
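
One small check that can save a round trip before step 6 is to confirm that the provider endpoint actually answers over SSL and to inspect the certificate it presents. This hypothetical helper (the host and port are placeholders for whatever your provider URL specifies) uses only the Python standard library:

    # Quick reachability/certificate check for a VASA provider endpoint
    # before registering it in vCenter. Host and port are placeholders.
    import socket
    import ssl

    PROVIDER_HOST = "smis-provider.example.com"
    PROVIDER_PORT = 5989  # placeholder; use the port from your provider URL

    try:
        # Fetch the PEM certificate presented during the SSL handshake
        pem = ssl.get_server_certificate((PROVIDER_HOST, PROVIDER_PORT))
        print("Provider is reachable; certificate follows:\n%s" % pem)
    except (ssl.SSLError, socket.error) as err:
        print("Endpoint unreachable or SSL handshake failed: %s" % err)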

Here is a look at some of the LUN capabilities that are now surfaced:

Sample Symm/DMX LUN Capability

This LUN is presented to an ESXi host which is managed by a vCenter server that has EMC's VASA provider registered, and it has a storage capability called Performance. By looking at the description of the capability, one can see that the LUN is backed by FC drives or high-end SAS drives. This is pretty cool, as historically you'd have to have an email exchange, or heaven forbid, an actual conversation with your SAN admin to discover this information :-)

[Screenshot: Symmetrix/DMX LUN showing the Performance storage capability]

Sample Clariion/CX LUN Capability

Looking at a sample datastore from an EMC Clariion, the capability is Multi-Tier: the LUN is built from drives of multiple tier types. Again, this is useful information when determining how your VMFS datastore looks at the back end.

[Screenshot: Clariion/CX LUN showing the Multi-Tier storage capability]

If you are a regular reader of this blog, you will be aware that VASA is an enabler for another vSphere 5.0 storage feature called Profile Driven Storage. This allows you to ensure that your VM is initially provisioned on, and remains on, a suitable datastore. To build a profile, one can choose capabilities surfaced by VASA. Let's look at these capabilities next in the context of VM Storage Profiles.

All Symm/DMX LUN Capabilities

Here is a list of all the capabilities as they appear in the VM Storage Profiles when only LUNs from the EMC DMX are discovered:

[Screenshot: DMX storage capabilities as listed in VM Storage Profiles]

Using these storage capabilities, one can now start to build a VM Storage Profile, and with this new vSphere 5.0 feature you can tell at a glance whether or not your VM resides on an appropriate datastore. All very good – well done EMC.

All Clariion/CX LUN Capabilities

The same vCenter server was then used to add the vendor provider for the Clariion, so that both the Clariion and DMX were discovered at the same time. You can see a few additional storage capabilities appear, but many of the capabilities are the same as those seen on the DMX earlier:

[Screenshot: combined Clariion/CX and DMX storage capabilities, with duplicates]

Now, you may have noticed something that we also noticed: the same capabilities are listed twice. How do you differentiate between capabilities which belong to the DMX and those which belong to the CX? If I select Performance, how do I select datastores that are only on the DMX and not on the CX? We're not sure either. Perhaps something we're doing here is not best practice; maybe only one vendor provider for one array should be associated with a vCenter server at any one time. If any EMC aficionados reading this article have advice to offer, we'd love to hear from you.

That said, we all know this is a 1.0 release of VASA, and EMC should be congratulated on getting their VASA vendor provider out and available as soon as they did. Enhancing the management view of storage for vSphere admins and enabling the use of storage profiles as a way of selecting storage is part of VMware's vision, and vSphere 5.0 features such as VASA & Profile Driven Storage are the initial steps we are taking towards it. Without our storage partners sharing this vision, we wouldn't be able to deliver these features.

Where can you get the bits (Solutions Enabler & SMI-S) to implement this on your own EMC storage arrays? Why PowerLink of course.

If any readers see any other information about VASA implementations from our storage partners, I'd be delighted if you could leave a comment directing me to it.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Coming to VMworld 2011 Copenhagen Next Week?

If not, it is not too late to register. We will be showing plenty of the new vSphere 5 features in the Solutions Exchange, as well as presenting the latest vSphere information in our many breakout sessions. And the best part every year is the Hands-on Labs, where you can try out all the new vSphere 5 features for yourself.


See you there!

-Mike