
Monthly Archives: January 2011

PSODs and VMware HA

Despite our best efforts here at VMware, there are occasions where a PSOD (a purple diagnostic screen, often called the "Purple Screen of Death") may occur.  PSODs can occur for a wide range of reasons, including out-of-memory and hung-CPU conditions.  In an HA-protected environment, you would expect that if a PSOD does occur, the VMs that were running on the affected host would be failed over to another host.  There is, however, a corner case where this may not happen.  Let me explain why:
 
Before I go into details here, let me first make one very important point absolutely clear.  What I am about to describe is a rare corner case.  Odds are you have not seen it, and likely never will.  It is possible, however, and for those who have experienced it, I hope this makes things a bit clearer.
 
ESX, as you know, has what we refer to as the ‘COS’, or Service Console.  Sometimes the COS may become unresponsive.  This can happen for a variety of reasons, but commonly it is due to an issue with memory.  For example, there may be a memory leak in a process, a third-party application consuming a large amount of memory, or an excessive number of processes running in the COS.
 
When the COS becomes unresponsive or throws an error, the VMkernel detects this and starts to generate a core dump.  This core dump is invaluable for troubleshooting and basically consists of a complete dump of memory plus some additional overhead, all compressed to save space.  While the VMkernel is writing the dump, it keeps everything exactly as it was when the COS problem occurred until the dump completes.  This includes the locks on any datastores used by the VMs that were running.  Once this process is finished, the user will see a PSOD.
 
Normally, this core dump completes very rapidly – in less than a minute in most cases.  In rare cases, though, it can take longer.  I’ve heard reports of core dumps taking as long as 20 minutes.  This is where the problem starts for people with VMware HA enabled.
 
In this scenario, only the COS has an issue.  The VMkernel and everything else is completely functional, which means the host can still respond to ICMP pings and the like.  Because the COS is not functional, though, VMware HA detects this as a failure of the system and tries to restart the VMs it was hosting on another system in the cluster.  However, the ‘failed’ system has not completely failed until it finishes the core dump.  If VMware HA tries to start a VM on another host, the attempt fails because that VM’s files are still locked, or in use, by the failed system.  VMware HA tries multiple times to restart the VMs, but if the core dump takes an extreme amount of time to complete, VMware HA may time out and give up.  The end result is that once the failed system does PSOD, it appears that VMware HA failed, because the VMs were not restarted.
 
How can you prevent this from happening to you? 
 
One possible action is to not run anything within the COS.  By doing this, you eliminate the possibility of an application or script that has not been thoroughly tested by VMware causing issues within the COS.
 
Another option is to disable the VMkernel’s ability to perform a core dump.  This is not a very viable solution for many, as it eliminates critical information needed to perform a root cause analysis (RCA), so you might never get to the root cause of your problem.  I’d only recommend it in the rare case where you have a known issue with a server but are unable to fix it immediately.
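
If you ever do need to go down that road, it helps to first see which diagnostic (dump) partition a host is using.  Here is a minimal PowerCLI sketch; the vCenter and host names are placeholders, and you should verify the commands against the KB articles below before touching a production host:

    # Connect to vCenter (server name is a placeholder)
    Connect-VIServer -Server vcenter.example.com

    # List the diagnostic partition(s) the host would write a VMkernel
    # core dump to
    Get-VMHostDiagnosticPartition -VMHost (Get-VMHost -Name "esx01.example.com")

    # Deactivating the dump partition itself is done on the host, e.g. with
    # 'esxcfg-dumppart -d' from the Service Console (verify on your build
    # before disabling anything in production)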
 
The simplest solution is to use ESXi.  Because ESXi provides a simpler, more secure environment without a COS, this problem simply doesn’t exist.
 
For more information, I would recommend looking at the following KB articles:

VMware ESX and ESXi 4.1 Comparison
Configuring an ESX host to capture a Service Console coredump
Understanding a "Lost Heartbeat" purple diagnostic screen
Understanding an "Oops" purple diagnostic screen   

Tuning the DRS Migration Threshold

I often get asked for advice on tuning the DRS migration threshold, and my standard answer is to stick with the default.  However, I can appreciate that when you provide a nice-looking slider with five possible settings, there is a natural tendency to want to change things, and convincing people to resist this urge requires a bit more than a simple “stick with the default”.  So let me explain why, in most situations, keeping the default migration threshold really makes sense.

1.     The DRS algorithms are complex, and even seemingly minor changes can have far-reaching side effects should resource contention develop.  VMware engineers put a lot of thought into the default values, and what better advice can I offer than to go with what the experts recommend?

2.     Many times I find that admins want to tune or change the migration threshold right out of the box without a clear goal in mind.  While tuning in an effort to improve things or fix a problem makes sense, tuning just for the sake of tuning can create problems.

3.     It’s not uncommon to find that a perceived DRS issue driving the desire to change the migration threshold is actually the result of something outside of DRS, in which case tuning the migration threshold can be counterproductive.

For example, I recently talked to an admin who was concerned because, following an upgrade to ESXi 4.1, DRS didn’t seem to be doing as good a job of balancing memory.  In discussing his concern, I discovered that DRS was fine and the problem was confusion over how to interpret the memory statistics displayed in vCenter.  When the memory statistics were put in the proper context, the customer could see that DRS was doing a great job of balancing memory.  The admin commented that he had been playing with different migration threshold settings for several weeks in an effort to “fix” DRS, which is unfortunate considering there was never a problem.

Of course, this naturally raises the question “When should I change the default migration threshold?”  The decision should be based on the number of DRS-induced migrations occurring in your cluster.  If you feel DRS is doing too many migrations, putting extra overhead on your hosts, you could move the migration threshold to a more conservative setting.  On the other hand, if you aren’t seeing much DRS activity in your cluster and feel you could benefit from more migrations, you might choose a more aggressive setting.

So, in summary, always start with the default migration threshold.  Monitor the number of DRS-invoked migrations over a period of time, as in the sketch below.  If you decide, based on the number of DRS-invoked migrations (or lack thereof), that it makes sense to change the default, adjust things slowly over time and see how it works.  Just be careful to ensure that you have a good handle on the “issue” you are addressing and a clear goal for the expected impact of the change.
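
To put some numbers behind that monitoring, you can count DRS-initiated vMotions from the vCenter event log, which records a DrsVmMigratedEvent for each one.  A minimal PowerCLI sketch (the cluster name is a placeholder, and -MaxSamples may need raising on busy clusters):

    # Count DRS-initiated migrations in a cluster over the last 7 days
    $cluster  = Get-Cluster -Name "ProdCluster"
    $events   = Get-VIEvent -Entity $cluster -Start (Get-Date).AddDays(-7) -MaxSamples 10000
    $drsMoves = $events | Where-Object { $_ -is [VMware.Vim.DrsVmMigratedEvent] }
    Write-Host "DRS-invoked migrations in the last 7 days: $($drsMoves.Count)"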

 

Regards,

-Kyle

VMware Go™ Pro Releases a New Version Containing Two New Features to Make Your vSphere Hypervisor Experience Even Easier

A new version of VMware Go Pro (VMware's web-based service that guides users of any expertise level through the installation and configuration of VMware vSphere Hypervisor) is now generally available, so we wanted to highlight a couple of the new features in the release, as well as its 3rd-party patching capabilities.

1.    Patch Management: The Importance of Patching 3rd-Party Applications
Traditionally, many organizations were primarily concerned with patching Microsoft operating systems and applications like Microsoft Office and Internet Explorer. The approach to patching 3rd-party applications has been less vigilant, which has left a significant security vulnerability. Even when organizations do patch 3rd-party applications, the deployment is often less organized and takes more time. Largely because of this casual approach, 3rd-party applications have now become the primary target for malware attacks. According to a report by Secunia, "Data from the first half of 2010 shows that third-party program vulnerabilities are the primary risk factor for typical end-user PCs. From an attacker's perspective, targeting third-party programs proves to be a rewarding path." VMware Go Pro helps mitigate these risks by making it easy to patch 3rd-party applications. It uses the Shavlik Technologies patching engine to patch many of the most vulnerable 3rd-party applications, including Mozilla Firefox, Internet Explorer, Adobe Flash Player, Adobe Reader, Adobe Acrobat, Java, Microsoft Office, Microsoft Visio, Adobe Shockwave Player, Mozilla SeaMonkey, and Mozilla Thunderbird. VMware Go Pro also scans and applies patches to both physical and virtual machines, so the entire IT infrastructure can be protected. For a free 30-day evaluation of VMware Go Pro, including patch management, go to https://www.vmware.com/go/vmware-go-pro.

2.    Help Desk: Streamline trouble ticket management
VMware Go’s straightforward Help Desk makes it easy for IT admins to manage their trouble tickets while maintaining visibility for their management and clients. With VMware Go, they can easily create and manage tickets and instantly know each ticket’s due date, priority, and status. They can also analyze past ticket reports to better understand trends and IT workload.

•    Create and edit tickets and specify email notifications.


•    View and sort tickets by field or even export ticket data to Excel.


3.    Hardware Asset Management: Track and Manage Hardware Assets
For a company to get the most value out of its workstations, servers, and laptops, it must understand how these machines are being used. VMware Go Pro makes it easy to track what physical and virtual machines exist, as well as their configuration, cost and service history so companies can have complete visibility into their IT infrastructure.

•    Hardware Inventory – Scan physical and virtual machines for configuration details.


•    Create hardware asset descriptions which can include asset number, PO number, vendor name, dates, financial data and more. Track asset modifications to ensure an accurate service history.


•    For more information: http://www.vmware.com/products/go
•    Try now: https://www.vmware.com/go/vmware-go-pro

 

 

Adopting ESXi, now is the time!

Within the virtualization community we have been seeing more and more people adopting ESXi. Not only adopting it, but also actively evangelizing the use of ESXi over ESX classic. The main argument, of course, is the reduction in operational effort involved in maintaining the platform. Last week two excellent articles were published. The first was by Bob Plankers of LoneSysAdmin.net fame. Bob wrote an excellent article countering all the often-heard complaints about ESXi.

A Compendium of Concerns About ESXi (source)

Over the last few months I’ve been cataloging the complaints I’ve heard about the deprecation of VMware ESX, in favor of ESXi. I’ve been running 100% ESXi since shortly after the vSphere 4.1 release. In the words of Samuel L. Jackson as Jules in Pulp Fiction, “well, allow me to retort!”

“I have software installed on the Console OS, and I need to keep doing that.”

ESX wasn’t really a Linux box, it was an appliance. Sure, in a lot of ways it looked like a Linux box, but it was missing a lot of useful packages, with no maintainable way to add them. Yes, you could copy the RPMs from Red Hat Enterprise Linux, but then you’d have software installed on the machine you’d have to patch manually. You also run the risk of conflicts, support issues, and general chaos. You were better off from the start if you defined it as ESX, not Linux, and considered the console OS a hypervisor delivery method. Nothing more.

The appliance-like nature of ESXi is way more pronounced than ESX was. I applaud them for reducing the surface area of their products, and also reducing the amount of time and effort needed to maintain software packages that are irrelevant to the hypervisor. When it comes down to it, by embracing the appliance-like nature of the product you open yourself to all the cool new possibilities, like disklessness or replacing traditional drives with SD cards, autoconfigured PXE-booted stateless hosts, central configuration, patching by just rebooting, etc.

read more here

I am confident that after reading this excellent article by Bob, any reservations you might have had are gone. However, it probably isn't just you who makes the decision. If you need formal approval, the following well-written letter by Eric Siebert (vsphere-land.com) might come in handy. Here's a short quote from the letter, which Eric published on Techtarget.com and which is also available for download as a Word document.

ESXi also has several advantages over ESX. New versions, for example, are delivered as a single image file that completely replaces the previous version, much like a server BIOS upgrade. As a result, we no longer have to worry about patch dependencies or installing patches in a specific order. Patching is much simpler, fewer patches are required, and ESXi should reduce the time and effort required to keep our hosts patched.

Another advantage is that the ESXi management console code is much smaller than ESX's full Red Hat service console. Additionally, rolling back to previous ESXi versions is a breeze. The old version is automatically saved in a backup partition, so you can easily revert to it if needed.

read more here

We want to thank Bob and Eric for these excellent articles. If you have recently migrated, or are on the verge of migrating, let us know. Share your experiences and help others make this next logical step in the journey of making datacenter management more efficient.

New Hardware Can Affect TPS

I recently deployed some new hardware in my lab and, to my surprise, discovered that after moving several VMs to the new hosts, their memory utilization went up.  I also noticed that Transparent Page Sharing (TPS) wasn’t working on these hosts.  This didn’t make sense, so naturally I did some digging, and I want to share what I’ve learned, as I know I’m not the only one upgrading hardware and many of you will likely stumble onto this same issue.

First, I ran several tests to verify that my initial observations were correct – my VMs were in fact consuming more memory, and TPS was not being used on the new servers.  What I didn’t expect to find was that this is all expected behavior, and the reason has to do with the CPUs in my new servers.  Let me explain.

Memory is allocated to VMs in either small pages (4KB) or large pages (2MB).  While I don’t want to digress into a discussion of large vs. small pages, suffice it to say that prior to ESX 3.5, large pages weren’t really used.  I’m not sure why, but it appears that VMware’s focus was on using small pages and leveraging TPS as much as possible.

However, coinciding with the release of ESX 3.5, Intel and AMD introduced a new CPU feature called hardware-assisted Memory Management Unit (MMU) virtualization (http://kb.vmware.com/kb/1020524).  You can read the KB article for more detail, but in a nutshell, hardware-assisted MMU can provide a 10 – 20% performance improvement when using large pages.  So, starting in ESX 3.5, VMware changed the VMkernel so that it detects whether hardware-assisted MMU is enabled and, if so, uses large pages.  If hardware-assisted MMU is not available, or is disabled, ESX falls back to using small pages.

The reason I’m seeing more VM memory consumption and less TPS on the new servers is that they have the newer Nehalem CPUs with hardware-assisted MMU.  The VMkernel has detected that it is enabled and is using large pages.  Because large pages are in use, more memory is being allocated to my VMs, and while TPS is enabled for both large and small pages, it doesn’t accomplish much with large pages because it’s unlikely there will be many identical 2MB memory regions that can be shared.  Hence it appears that TPS is not working.

Does this mean TPS is broken when running on CPUs with hardware-assisted MMU?  No.  The good news is that the VMkernel only continues to use large pages as long as there is no memory contention on the host.  If memory contention develops, the VMkernel automatically switches to small pages and applies TPS in an effort to free up memory (http://kb.vmware.com/kb/1021095).  So you really get the best of both worlds – when memory is plentiful, ESX uses large pages for a modest performance benefit, but when memory contention develops, it switches back to small pages in order to leverage TPS and reduce overall memory consumption.
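
If you want to watch this behavior on your own hosts, the sketch below shows one way to do it with PowerCLI.  The host name is a placeholder, and per KB 1021095 the Mem.AllocGuestLargePage advanced setting controls large page allocation; treat forcing it to 0 as a test-lab exercise, since it trades away the hardware-assisted MMU performance benefit:

    $vmhost = Get-VMHost -Name "esx01.example.com"

    # How much memory TPS is currently sharing on the host
    Get-Stat -Entity $vmhost -Stat mem.shared.average -Realtime -MaxSamples 1

    # Setting Mem.AllocGuestLargePage to 0 disables large (2MB) pages,
    # forcing small (4KB) pages so TPS can collapse identical pages right
    # away (at the cost of the MMU large-page performance benefit)
    Set-VMHostAdvancedConfiguration -VMHost $vmhost -Name Mem.AllocGuestLargePage -Value 0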

For more information on TPS please check out Duncan’s recent blog on Yellow-Bricks: http://www.yellow-bricks.com/2011/01/10/how-cool-is-tps/.

 

-Kyle

VMware Partner Embotics Pushes Out Hyper-V Support Due to Greater vSphere Feature Demand

I saw this one this morning and wanted to share. I immediately thought of that C+C Music Factory song titled something like "Things That Make You Go Hmmm", but then my next thought was "Yup, makes sense". This is good validation of where the market is right now, based on what customers say and want, not what vendors tell you.

-Mike

Excerpt from "The Register" (Key part is the second paragraph – Mike)

                 "According to Jason Cowie, vice president of product management at Embotics, as 2010 got underway, the company was convinced that Hyper-V was going to gain a lot of momentum and start seeing good uptake as an alternative to VMware's ESX Server hypervisor, which is embedded in its vSphere stack of virtual wares. Hyper-V support was in beta as the V-Commander 3.6 release, which was announced in August, started shipping.

                  "But as 2010 went on, Cowie says, paying customers at Embotics were asking for more features to control ESX Server, and those who were interested in using V-Commander to manage Hyper-V were pushing their rollouts of the Microsoft hypervisor to late 2011 or early 2012."

Link to the complete article in "The Register"

ESXi 4.1 Active Directory Integration

 

Although day-to-day vSphere management operations are usually performed against vCenter Server through the vSphere Client, there are times when users must work with ESXi directly, such as for configuration backup and log file access.  There are also monitoring solutions that sometimes require direct access to the ESXi host; these would typically be configured to use service accounts.  Prior to ESXi 4.1, you could only create local users, each with a separate, locally stored password per host.  Since this is cumbersome and doesn’t scale, we decided to address it in the vSphere 4.1 release.
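
As a quick preview of how little effort is involved, here is a minimal PowerCLI 4.1 sketch that joins a host to a domain; the host, domain, and account names are placeholders:

    # Join an ESXi 4.1 host to Active Directory (all names are placeholders)
    $vmhost = Get-VMHost -Name "esx01.example.com"
    Get-VMHostAuthentication -VMHost $vmhost |
        Set-VMHostAuthentication -JoinDomain -Domain "corp.example.com" -Username "corp\adminuser" -Password "********"

    # Check the result
    Get-VMHostAuthentication -VMHost $vmhost | Format-List Domain, DomainMembershipStatus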

 

Continue reading

vSphere 4.1 Wins InfoWorld 2011 Technology of the Year Award

On the heels of winning the CRN award last week, vSphere 4.1 captured an InfoWorld 2011 Technology of the Year award earlier today. InfoWorld focused on the product's ability to deliver functionality as described and the new scalability that is possible with this release. Once again, I'd like to give a personal thanks to everyone at VMware who made 4.1 possible. It was truly a team effort.

A Brief Excerpt from the Award Writeup (Source – InfoWorld 1/12/11):

"VMware competes with multiple free options (Microsoft Hyper-V, Citrix XenServer, and open source   Xen, among others) but continues to thrive in the face of this competition. The reason is the company's ability to deliver products that demonstrate vast scalability, reliability, and ease-of-use. The scalability, in particular, reaches stratospheric heights: more than 100 hosts per VMware vCenter management server and more than 3,000 virtual machines (VMs) per VMware vSphere cluster."

 


If you missed it, here was my blog post on the CRN award from last week.

-Mike

UpTime Growing with new writers!

Hello everyone,

I hope everyone had a great holiday season with friends and family.  I had a quiet one, but it was very nice and just what I needed.  We have recently added some people to my team, and as a result we are going to expand the UpTime blog to cover more facets of uptime, including local availability, downtime avoidance, and disaster recovery.  This means we can cover vSphere features like DRS, vMotion, and HA, as well as products like SRM and vDR.  With the new team members contributing, you will see more blog activity in a variety of areas to help your drive to maximize uptime.

The two primary new writers are Tom Stephens and Kyle Gleed.  Some of the blog articles you can watch for include advanced DRS operations as well as advanced HA use.

As always, remember that we love comments, so feel free to suggest topics you are interested in or would like to learn about.

Michael

vSphere 4.1 Wins CRN Product of 2010 Award

CRN was most impressed by vSphere's new memory compression and scalability in the 4.1 release. I'd like to give a personal thanks to everyone at VMware who made 4.1 possible. It was truly a team effort.

See the complete CRN write-up by clicking here

-Mike