Over the holiday break I was cleaning out some of my old notes and I came across the subject of capturing virtual machine screenshots in vSphere. This topic comes up from time to time when talking to customers and the methods to accomplish this task may not always be clear. Capturing screenshots of a virtual machine is a capability provided by the vSphere platform and it is actually leveraged by several VMware features and products such as:
vSphere HA – A screenshot is taken of the virtual machine console during a restart
vCloud Director – Screenshots are periodically taken for thumbnails displayed in the vCD UI
vCloud Connector – Screenshots are periodically taken for thumbnails displayed in the vCC UI
Customers and partners may also be interested in leveraging this feature for their own custom portals or solutions, and there is more than one way to accomplish the task.
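One lightweight way to grab a screenshot yourself is the HTTPS endpoint the ESXi host exposes for the VM console (the vSphere API also offers a CreateScreenshot_Task method on VirtualMachine). A rough Python sketch follows; the hostname, credentials, and managed object ID are placeholders, and the endpoint behavior is as I understand it, so verify it against your build:

```python
import base64
import ssl
import urllib.request

def vm_screenshot_url(host, vm_moref_id):
    # ESXi serves a PNG of the VM console at /screen?id=<moref>,
    # where <moref> is the numeric part of the VM's managed object
    # reference (e.g. "1" for vim.VirtualMachine "1").
    return "https://{}/screen?id={}".format(host, vm_moref_id)

def fetch_screenshot(host, vm_moref_id, user, password):
    # Basic-auth GET; certificate verification is disabled here only
    # because many lab hosts use self-signed certificates.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(vm_screenshot_url(host, vm_moref_id))
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return urllib.request.urlopen(req, context=ctx).read()

# Usage against a live host (placeholders):
# png_bytes = fetch_screenshot("esxi01.example.com", "1", "root", "secret")
```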
VMware has just released Update 2 for vSphere 5.0, which contains a few minor new features and of course bug fixes in both ESXi 5.0 Update 2 and vCenter Server 5.0 Update 2. While going through the ESXi release notes and reviewing the changes (hopefully everyone is doing this), a few things caught my eye. I thought I would share a few of these updates since I have seen several of them mentioned in past VMTN community threads, on Twitter, and in internal mailing lists and discussions.
Support for additional guest operating systems – This release adds support for Solaris 11, Solaris 11.1 and Mac OS X Server Lion 10.7.5 guest operating systems.
The timeout option does not work when you re-enable the ESXi Shell and SSH
If you set a non-zero timeout value for the SSH and ESXi Shell services, both are disabled once the timeout is reached. However, if you then re-enable SSH or the ESXi Shell without changing the timeout setting, the services no longer time out.
DNS might not get configured on hosts installed using a script that specifies DHCP
If you install an ESXi host using a script that directs the host to obtain its network settings from DHCP, the host is not configured with a DNS server after booting. The DNS setting is set to manual with no address specified.
Reinstallation of ESXi 5.0 does not remove the Datastore label of the local VMFS of an earlier installation
Re-installation of ESXi 5.0 on a system with an existing local VMFS volume retains the datastore label, even when the user chooses the option to overwrite the VMFS volume.
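For the first issue above (the ESXi Shell/SSH timeout not re-applying), one workaround is to explicitly re-set the timeout advanced setting whenever you re-enable the services. A sketch using ESXCLI; the 900-second value is just an example, and the option path is from my recollection, so verify it on your host:

```shell
# Explicitly (re)apply the ESXi Shell timeout when re-enabling the
# services; 900 seconds is just an example value.
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900
```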
In addition to the updates and fixes in ESXi 5.0 Update 2, there are also several fixes in vCenter Server 5.0 Update 2. The most noticeable is the fix that allows you to rename virtual machine files using Storage vMotion, which my colleague Frank Denneman covers in more detail in his article here. I also encourage you to check out the vCenter Server 5.0 Update 2 release notes for other fixes and updates, and remember to test all new releases in a development or test environment before upgrading production.
I hope everyone has a Happy Holiday and Happy New Years, see you all back in 2013!
Get notification of new blog postings and more by following lamw on Twitter: @lamw
Last week I received a question about retrieving the expiration date for vSphere licenses in vCenter Server, which can be seen in both the vSphere Web Client and the vSphere C# Client under the Licensing section. Even though there are vCenter alarms that monitor license usage and compliance, it makes sense that users may want to proactively monitor their license keys for expiration and ensure they are renewed in a timely manner.
I provided a quick sample script to the user but thought I might as well clean it up a bit and share it with the rest of the VMware community. I also wanted to provide some additional details on where to look for the expiration details as well as other information pertaining to licenses in vSphere.
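To illustrate where the expiration lives: in the vSphere API, each license returned by the LicenseManager carries a properties array of key/value pairs, and term-limited keys include an expirationDate entry (permanent keys do not). A minimal Python sketch of the extraction logic, using plain dicts shaped like that structure rather than a live connection (the sample data is fabricated for illustration):

```python
from datetime import date

def license_expirations(licenses):
    """Map each license name to its expiration value, or None for
    permanent keys. Each license dict mirrors the API's shape: a
    'name' plus a 'properties' list of {'key': ..., 'value': ...}."""
    result = {}
    for lic in licenses:
        exp = None
        for prop in lic.get("properties", []):
            if prop["key"] == "expirationDate":
                exp = prop["value"]
        result[lic["name"]] = exp
    return result

def days_until(expiration, today):
    # The API returns a dateTime; for this sketch we assume an
    # ISO "YYYY-MM-DD" date string.
    y, m, d = (int(x) for x in expiration.split("-"))
    return (date(y, m, d) - today).days

# Fabricated sample data for illustration:
sample = [
    {"name": "vCenter Server 5 Standard",
     "properties": [{"key": "expirationDate", "value": "2013-06-30"}]},
    {"name": "vSphere 5 Enterprise Plus", "properties": []},
]
```

In a real script you would populate that list from the LicenseManager via your API binding of choice and raise an alert when the remaining days drop below a threshold.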
The ESXi Dump Collector service is an extremely useful feature to have enabled, especially in a stateless environment where there may not be a local disk for storing core dumps generated during a host failure. By configuring ESXi hosts to send their core dumps to a remote vSphere Dump Collector, you can still collect the core dumps that help VMware Support analyze and determine the root cause of a failure.
In addition, leveraging the vSphere Dump Collector allows you to centrally manage core dump collection in your vSphere environment on the rare occasion that a host generates a PSOD (Purple Screen of Death), without having to go out to the host and manually copy the core dump file. A potential challenge that may come up when configuring the ESXi Dump Collector service is how to validate that the configuration is correct and that everything will work if a host crashes?
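For context, the host-side configuration and validation with ESXCLI looks roughly like the following. The interface name, IP address, and port are placeholders for your environment (6500 is the Dump Collector's default port), and the check subcommand is, to my knowledge, available starting with ESXi 5.1, so confirm against your release:

```shell
# Send this host's core dumps over the network via a VMkernel
# interface to the Dump Collector (placeholders throughout).
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.100 --server-port 6500
esxcli system coredump network set --enable true

# Confirm the settings took effect
esxcli system coredump network get

# ESXi 5.1 adds a connectivity check against the configured collector
esxcli system coredump network check
```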
In previous releases of ESXi, only SNMP v1 and v2c were supported on the host. With the latest release of ESXi 5.1, we have now added support for SNMPv3, which provides additional security when collecting data from the ESXi host. You also have the ability to specify where to source hardware alerts, using either IPMI sensors (as in previous releases of ESXi) or CIM indications. You can also filter out specific traps you do not wish to send to your SNMP management server.
In addition to SNMPv3 support, we also now have an ESXCLI equivalent of the old vicfg-snmp command. This means you no longer have to use multiple commands to configure your ESXi hosts and can standardize on just using ESXCLI for all your host-level configurations.
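As an example, configuring basic trap delivery with the new namespace might look like the following. The community string and target address are placeholders, and the flag names are from my recollection of the esxcli system snmp namespace, so double-check them with --help on your host:

```shell
# Enable the SNMP agent and send traps to a management station.
# The target format is host@port/community; values are placeholders.
esxcli system snmp set --communities public
esxcli system snmp set --targets 192.168.1.50@162/public
esxcli system snmp set --enable true

# Review the resulting configuration
esxcli system snmp get
```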
Last week I wrote an article about resxtop failing to connect to an ESXi 5.1 host due to the SSL certificate validation that has been implemented in resxtop, and I provided a few workarounds you can use until a fix is released. As promised at the end of that article, I will show you how to automate the creation of proper certificates for environments using CA self-signed SSL certificates, so that you can continue using resxtop with ESXi 5.1.
If you have recently installed the latest vCLI 5.1 release and are using the remote resxtop utility to connect to a vSphere 5.1 host, you might have noticed one of the following error messages: "Login failed, reason: HTTPS_CA_FILE or HTTPS_CA_DIR not set" or "SSL Exception: Verification parameters".
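The first message hints at one workaround: pointing the HTTPS_CA_FILE environment variable at a CA certificate file that the underlying library can use to validate the host's certificate. A small Python sketch of wrapping resxtop that way; the certificate path and hostname are placeholders, and this is one possible approach rather than the official fix:

```python
import os

def resxtop_env(ca_file, base_env=None):
    """Return an environment for launching resxtop with HTTPS_CA_FILE
    set, so certificate validation can succeed against the host."""
    env = dict(base_env if base_env is not None else os.environ)
    # HTTPS_CA_FILE tells the vCLI's HTTPS stack which CA certificate
    # file to trust when verifying the ESXi host's SSL certificate.
    env["HTTPS_CA_FILE"] = ca_file
    return env

# Usage against a live host (placeholders):
# import subprocess
# subprocess.run(["resxtop", "--server", "esxi01.example.com"],
#                env=resxtop_env("/etc/vmware/certs/cacert.pem"))
```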
A common Auto Deploy issue I come across is: “I just added a new image profile and updated the rules on the Auto Deploy server, but when I reboot my vSphere hosts they still boot from the old image”.
This situation occurs when you update the active ruleset without updating the corresponding host entries in the Auto Deploy cache. The first time a host boots, the Auto Deploy server parses the host's attributes against the active ruleset to determine (1) the image profile, (2) the host profile, and (3) the vCenter Server location for the host; these results are then cached, and subsequent reboots use the cached entries until they are remediated.
I recently came across an interesting issue where a customer wasn't able to successfully PXE boot their HP DL380 G7 servers using Auto Deploy. All attempts to PXE boot resulted in a "connection timed out" error. They opened a support case with HP and verified they had the required updates installed, but despite this they continued to get the same errors.
Long story short, the problem turned out not to be with the HP DL380 G7 servers, the firmware, or the NIC drivers as initially suspected, but rather with the Spanning Tree Protocol (STP) settings on the switch ports. What the customer discovered was that the timeout occurred because PortFast had not been enabled on the switch ports. Once they enabled PortFast, the PXE boot worked as expected.
After reading up on the Spanning Tree Protocol and how PortFast works, I learned that when the ESXi host powers up and begins the PXE boot, the switch port has to pass through the STP listening and learning states before transitioning to the forwarding state. Transitioning through the listening and learning states introduces a delay, and that delay caused the PXE boot to time out. PortFast causes a switch port to enter the forwarding state immediately, bypassing the listening and learning states, which eliminates the delay and avoids the timeout.
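For reference, enabling PortFast on a Cisco switch port typically looks like the following sketch. The interface name is a placeholder and the exact syntax varies by platform and IOS version, so treat this as illustrative only:

```
! Cisco IOS sketch; interface name is a placeholder. PortFast should
! only be enabled on edge ports connected to end hosts (like ESXi
! server NICs), never on switch-to-switch links.
interface GigabitEthernet1/0/1
 spanning-tree portfast
```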
In researching this, I did a quick search of the VMware Knowledge Base and found KB1003804, which helped me understand a bit more about PortFast and why it's a good idea to have it enabled, even when you are not PXE booting your vSphere hosts.