
Monthly Archives: October 2009

San Diego VMUG October 2009 Recap

If you didn’t manage to make it to the San Diego VMUG, you missed a pretty good show. Darin Pendergraft from Quest was there showing off EcoShell, and I had cooked up a couple of demos of my own.

One of the more popular topics was a script I wrote to determine LUN latencies and the VMs that are writing to particular LUNs. One of the trickiest challenges when adopting virtualization is the effective use and partitioning of shared resources. Technologies like VMware DRS address exactly this challenge, but it’s safe to say that we’re still nearer the beginning than the end when it comes to automatic resource allocation and leveling.

On the other hand, PowerCLI and the vSphere API give you all the tools you need to monitor your environment, allowing you to proactively move or reallocate VMs away from storage hotspots. In particular, I showed off a couple of scripts that tell you what your LUN read and write latencies are, as well as listing all the VMs on a given LUN. The best part? The scripts are extremely simple: fewer than 25 lines of code for both of them combined. Don’t believe me? See for yourself:
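The original scripts from the talk aren’t reproduced in this archive, but here is a minimal sketch of what Get-VMHostLunLatency and Get-LunVM might look like. The function bodies are my reconstruction, not the originals; they assume an active Connect-VIServer session and the standard per-LUN realtime latency counters.

```powershell
# Sketch only, not the original demo scripts. Assumes Connect-VIServer
# has already been run against your ESX host or vCenter.
function Get-VMHostLunLatency {
    param($VMHost)
    # disk.totalReadLatency.average and disk.totalWriteLatency.average
    # are per-LUN counters, reported in milliseconds.
    Get-Stat -Entity $VMHost -Realtime `
        -Stat "disk.totalReadLatency.average", "disk.totalWriteLatency.average" |
        Group-Object -Property Instance, MetricId |
        Select-Object Name,
            @{N="AvgLatencyMs"; E={
                ($_.Group | Measure-Object -Property Value -Average).Average }}
}

function Get-LunVM {
    param($Datastore)
    # Get-VM can filter by datastore directly, which covers the
    # "which VMs live on this LUN?" question for VMFS datastores.
    Get-VM -Datastore $Datastore
}
```

Usage would be something like `Get-VMHostLunLatency (Get-VMHost esx01)` followed by `Get-LunVM (Get-Datastore datastore1)`; the host and datastore names here are hypothetical.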

You can easily adapt Get-VMHostLunLatency to run periodically (say, every night) to tell you if you’re experiencing any storage slowness. If you are, you can easily follow that up with Get-LunVM to identify VMs that should be split off to separate LUNs. PowerCLI makes it all really simple.
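One way to run a script like this nightly is a Windows scheduled task on the machine where PowerCLI is installed, loading the PowerCLI console file so the snap-in is available. The task name, script path, and console-file path below are hypothetical; adjust them for your install.

```
schtasks /Create /SC DAILY /ST 02:00 /TN "LunLatencyCheck" ^
  /TR "powershell.exe -PSConsoleFile \"C:\Program Files\VMware\Infrastructure\vSphere PowerCLI\vim.psc1\" -Command C:\Scripts\LunLatencyReport.ps1"
```

Your script would then need its own Connect-VIServer call (with stored or passed-in credentials), since a scheduled task has no interactive session.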

Note that if you want to run Get-VMHostLunLatency against vCenter, you will need to have your statistics level set to 2 or higher.

As for the stuff I presented at the VMUG, I’ve posted my slides to slideshare:

And you can also download all the scripts I used here:

Come see PowerCLI at the San Diego VMUG, October 22 2009

If you’re in the San Diego area next week don’t miss the San Diego VMUG because I, your humble servant, will grace you with my presence and the vast stores of wisdom one only obtains through thousands of lonely nights in front of the computer.

I’ll be there to talk about (what else?) PowerCLI and how it will help you automate your way to bliss.

As always, I’ll be spending a lot of time on hands-on demos that will give you a first-hand feel for what PowerCLI is and how it works. I’ll be covering reporting, provisioning, and storage management, three of the most popular topics among PowerCLI users, with lots of new tips that are sure to save you lots of time. I’ll even be previewing a few new features from our upcoming release, so there’s something for everyone from beginner to expert.

Most people would call that a full day. BUT WAIT, THERE’S MORE! You’ll also get the rare opportunity to see a presentation on the Virtualization EcoShell. Now how much would you pay?

Hope to see you there.

When was the last time that VM was powered on?

People often want to know the last time a VM was powered on. Unfortunately there is no completely fool-proof way to figure this out using just vCenter or ESX, but PowerCLI offers an approach that is good enough to build a simple report and identify VMs that may be ready for that big hypervisor in the sky.

This is also a popular topic on the PowerCLI community, where LucD offers a solution that relies on using events to determine when the power-on took place. Since Luc posted that solution, a few things have happened:

  1. The bug that caused Get-VIEvent to only return 1000 events is fixed in PowerCLI 4.0.
  2. More importantly, you can now pipe objects into Get-VIEvent, so you don’t have to deal with huge arrays of events. This is significant because a typical vCenter will have tens if not hundreds of thousands of events, covering everything from the trivial to the important.

To take advantage of these facts I’ve written an updated PowerCLI script which generates a last-powered-on report.
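The script itself didn’t survive in this archive. A sketch of the approach described here, piping each VM into Get-VIEvent and keeping the newest power-on attempt event, might look like the following; the exact event-type filter and sample count are my assumptions, not the original code.

```powershell
# Sketch of a last-powered-on report; assumes an active Connect-VIServer
# session. VmStartingEvent is logged when a power-on is attempted,
# which matches the "last attempted power-on" semantics described below.
Get-VM | Select-Object Name, PowerState,
    @{N="LastPoweron"; E={
        ($_ | Get-VIEvent -MaxSamples 10000 |
            Where-Object { $_ -is [VMware.Vim.VmStartingEvent] } |
            Sort-Object -Property CreatedTime -Descending |
            Select-Object -First 1).CreatedTime }}
```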

One important note: ESX and vCenter actually create two events in connection with a VM power-on, one when a user attempts to power a VM on, and another if that power-on actually succeeds. The code above generates a report of the last time a user attempted to power a VM on. A power-on attempt can fail for a lot of reasons; for instance, your datastore may be full, in which case the power-on fails because ESX can’t create the memory swap file it needs (something I run into all the time).

Here’s a screen capture I made after running this against one of my ESX servers:


We can also use the always-useful, though often cryptic, select (Select-Object) cmdlet to enhance this report by adding the total amount of space the VM uses, as follows.
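For example, a calculated property can sum each VM’s virtual disk capacity. This is a sketch of the idea, not the original code; the column names and the VmStartingEvent filter are my assumptions.

```powershell
# Adds a TotalDiskGB column by summing virtual disk capacities (CapacityKB
# is in kilobytes, so dividing by 1MB yields gigabytes). Assumes an
# active Connect-VIServer session.
Get-VM | Select-Object Name, PowerState,
    @{N="LastPoweron"; E={
        ($_ | Get-VIEvent -MaxSamples 10000 |
            Where-Object { $_ -is [VMware.Vim.VmStartingEvent] } |
            Sort-Object -Property CreatedTime -Descending |
            Select-Object -First 1).CreatedTime }},
    @{N="TotalDiskGB"; E={
        [Math]::Round((($_ | Get-HardDisk |
            Measure-Object -Property CapacityKB -Sum).Sum / 1MB), 1) }}
```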


This report suggests that the VM named 2k8 64 bit is a good candidate for removal, and in fact it is: I only used it to create this blog post, and it hasn’t been powered on since March 2009.

On the other hand, you can see the report above is far from perfect; for example, several powered-on VMs have no entry for LastPoweron. Why? The reason is that, by default, ESX 4 only retains 1,000 events (this is not a recurrence of the PowerCLI bug; I checked). Some of my VMs were powered on so long ago that their events no longer exist.

This brings us to another important point: ESX is not meant to store events for extended periods of time; instead, vCenter handles that for us. So the best way to generate this last-powered-on report is to run it against our vCenter instance, which gives us a much longer view.

Still, this view does not stretch back forever; even in vCenter, events will eventually expire. In vCenter 4, the event retention policy is even customizable. If you’re running vCenter 4, I’ve written this function to help you figure out what your retention policy is.
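The function isn’t preserved in this archive; a sketch of one way to read the relevant settings through the vCenter OptionManager follows. The key pattern matches the standard event.maxAge and event.maxAgeEnabled advanced settings; the function name is my own.

```powershell
# Sketch: reads the vCenter advanced settings that govern event
# retention. Assumes vCenter 4 and an active Connect-VIServer session.
function Get-EventRetentionPolicy {
    $si = Get-View ServiceInstance
    $optionManager = Get-View $si.Content.Setting
    $optionManager.Setting |
        Where-Object { $_.Key -like "event.max*" } |
        Select-Object Key, Value
}
```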

Here’s what it looks like in action:


If you know your retention policy, and you know that all power-on/power-off actions are performed through your vCenter server, you can be very confident that the information in the report is accurate, and you can use it to start moving unused VMs to archive storage, or even deleting them completely.

It is very important to note, though, that this report is pretty good, not perfect! Don’t go around deleting anything without careful human review beforehand.