Patch management for ESXi is very different from patching a traditional operating system, where incremental updates are layered on top of the base operating system, increasing the disk footprint with each patch. When a patch is applied to the ESXi hypervisor, the entire ESXi image, also known as an Image Profile, is replaced. This means that each time you patch or upgrade your ESXi host, you are not adding on top of the original installation size.
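As a rough illustration, on an ESXi 5.x host you can inspect the running image profile and swap it for the one in a patch depot from the ESXi Shell. The depot path and profile name below are placeholders; substitute your own:

```shell
# Show the image profile currently installed on the host
esxcli software profile get

# Replace the running image profile with the one in the patch depot;
# the whole image is swapped rather than layered on top
# (depot path and profile name are placeholders)
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-patch-depot.zip -p ESXi-5.1.0-20121204001-standard
```

After the update completes, a reboot activates the new image profile.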
Continuing my blog series on Auto Deploy stateless caching: so far I've covered how stateless caching works, how it works with network-isolated hosts, and how it can help protect against PXE component failures. Let's now look at the role stateless caching plays in protecting against an Auto Deploy server outage.
The Auto Deploy server has two components: the rules engine and the web server.
(1) Rules engine: parses the host attributes to identify the image profile, host profile, and vCenter location (cluster or folder). The rules are created ahead of time by the administrator using PowerCLI.
(2) Web server: copies the ESXi image profile to the host along with a copy of the host profile definition that will be used to configure the host after it connects to the vCenter server.
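As a hedged sketch of the rules-engine side, a deploy rule created with PowerCLI might look like the following. All names and the IP range are hypothetical, and the commands assume PowerCLI with the Auto Deploy and Image Builder snap-ins plus an active vCenter connection:

```powershell
# Hypothetical names -- adjust for your environment
$img = Get-EsxImageProfile -Name "ESXi-5.1.0-*-standard"
$hp  = Get-VMHostProfile -Name "Prod-HostProfile"
$cl  = Get-Cluster -Name "Prod-Cluster"

# Create a rule assigning the image profile, host profile, and cluster
# to any host booting from the given IP range
New-DeployRule -Name "ProdHosts" -Item $img, $hp, $cl -Pattern "ipv4=192.168.1.50-192.168.1.100"

# Activate the rule by adding it to the working rule set
Add-DeployRule -DeployRule "ProdHosts"
```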
If you're going to be in Las Vegas for the annual 2013 VMware Partner Exchange, why don't you come and check out my sessions on vSphere 5.1, covering the vSphere Web Client and vCenter components like Single Sign-On.
Thursday, Feb 28, 9:00 AM – 10:00 AM
CI1544 – vSphere Web Client – Technical Walkthrough
The release of vSphere 5.1 introduced a new primary client for the management of vSphere solutions. In this session we will build competency in adopting the vSphere Web Client by highlighting the differences, easing the initial reaction to a web client, and showing you how to wow your customers with real-world use cases.
Thursday, Feb 28, 10:15 AM – 12:15 PM
CI1545 – vSphere – Deployment Best Practices
With the new technologies introduced in vSphere 5.1, many unanswered questions exist around designing and deploying a vSphere 5.1 environment. This session will share best practices learned from the field and provide common scenarios with recommended configurations of vCenter, Single Sign-On, Inventory Service, and the Web Client that will future-proof your customers' environments. This session has now been extended to include Kyle Gleed (@VMwareESXi) discussing best practices on deploying and working with vSphere hosts (ESXi).
Typically, unpreparing a host in vCloud Director is fairly straightforward and works without issue. On the rare occasion it doesn't work, there are posted workarounds for manually removing the vcloud-agent from the vSphere host using esxcli. Well, what about the even rarer chance you need to remove the vCloud Director agent from a vSphere host while it still has workloads (VMs) running on it?
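For reference, the manual removal uses the esxcli software VIB commands on the host. As a sketch (the exact VIB name is an assumption; confirm it from the list output first):

```shell
# List installed VIBs and look for the vCloud Director agent
esxcli software vib list | grep -i vcloud

# Remove the agent VIB by name (name shown is an assumption;
# the documented workaround normally expects the host in maintenance mode)
esxcli software vib remove -n vcloud-agent
```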
I recently came across a situation where I was unable to log in to my ESXi host as root. This caught me off guard, as I hadn't intentionally disabled root; seemingly out of the blue, root logins simply stopped working.
Over the holiday break I was cleaning out some of my old notes and I came across the subject of capturing virtual machine screenshots in vSphere. This topic comes up from time to time when talking to customers and the methods to accomplish this task may not always be clear. Capturing screenshots of a virtual machine is a capability provided by the vSphere platform and it is actually leveraged by several VMware features and products such as:
- vSphere HA – A screenshot is taken of the virtual machine console during a restart
- vCloud Director – Screenshots are periodically taken for thumbnails displayed in the vCD UI
- vCloud Connector – Screenshots are periodically taken for thumbnails displayed in the vCC UI
Customers and partners may also be interested in leveraging this feature for their own custom portals or solutions, and there is more than one way to accomplish this task.
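One of those ways is the vSphere API's screenshot method on the virtual machine object, which can be reached from PowerCLI. A minimal sketch, assuming an active vCenter connection and a hypothetical VM name:

```powershell
# Hypothetical VM name -- requires an active Connect-VIServer session
$vm = Get-VM -Name "MyVM"

# CreateScreenshot_Task is the vSphere API method behind the features above;
# the resulting screenshot file is written to the VM's directory on its datastore
$task = $vm.ExtensionData.CreateScreenshot_Task()
```

The blocking variant, `$vm.ExtensionData.CreateScreenshot()`, returns the datastore path of the generated file.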
VMware has just released Update 2 for vSphere 5.0, which contains a few minor new features and, of course, bug fixes in both ESXi 5.0 Update 2 and vCenter Server 5.0 Update 2. While going through the ESXi release notes and reviewing the changes (hopefully everyone is doing this), a few things caught my eye. I thought I would share a few of these updates, since I have seen some of them mentioned in past VMTN community threads, on Twitter, and in internal mailing lists/discussions.
- Support for additional guest operating systems – This release adds support for Solaris 11, Solaris 11.1 and Mac OS X Server Lion 10.7.5 guest operating systems.
- The timeout option does not work when you re-enable the ESXi Shell and SSH
- If you set a non-zero timeout value for SSH and the ESXi Shell, both are disabled once the timeout is reached. However, if you re-enable SSH or the ESXi Shell without changing the timeout setting, they no longer time out.
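The timeout in question is controlled by a host advanced setting, which can be inspected or re-applied with PowerCLI. A sketch, with a placeholder host name and a 900-second (15-minute) value:

```powershell
# Placeholder host name -- the timeout value is in seconds
Get-AdvancedSetting -Entity (Get-VMHost "esxi-01.example.com") -Name "UserVars.ESXiShellTimeOut" |
    Set-AdvancedSetting -Value 900 -Confirm:$false
```

On hosts without the fix, re-applying the value after re-enabling the services is a reasonable workaround.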
- DNS might not get configured on hosts installed using a script that specifies DHCP
- If you install an ESXi host using a script that instructs the host to obtain its network settings from DHCP, the host is not configured with a DNS server after booting. The DNS setting is set to manual with no address specified.
- Reinstallation of ESXi 5.0 does not remove the Datastore label of the local VMFS of an earlier installation
- Re-installation of ESXi 5.0 with an existing local VMFS volume retains the Datastore label even after the user chooses the overwrite datastore option to overwrite the VMFS volume.
There are many more resolved issues and I highly encourage you to check out the rest of the fixes in the ESXi 5.0 Update 2 release notes.
In addition to the updates and fixes in ESXi 5.0 Update 2, there are also several fixes for vCenter Server 5.0 Update 2. The most noticeable update is the fix that allows you to rename virtual machine files using a Storage vMotion, which my colleague Frank Denneman covers in more detail in his article here. I also encourage you to check out the vCenter 5.0 Update 2 release notes for other fixes and updates, and remember to test all new releases in a development or test environment prior to upgrading production.
I hope everyone has a Happy Holiday and a Happy New Year. See you all back in 2013!
Get notified of new blog postings and more by following lamw on Twitter: @lamw
Last week I received a question about retrieving the expiration date for vSphere licenses in vCenter Server, which can be seen in both the vSphere Web Client and the vSphere C# Client under the Licensing section. Even though there are vCenter alarms that monitor license usage and compliance, it still makes sense that users may want to proactively monitor their license keys for expiration and ensure they are renewed in a timely manner.
I provided a quick sample script to the user but thought I might as well clean it up a bit and share it with the rest of the VMware community. I also wanted to provide some additional details on where to look for the expiration details as well as other information pertaining to licenses in vSphere.
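As a hedged sketch of where to look, the license details live in the LicenseManager view in the vSphere API. The property key used below is an assumption worth verifying against your own license entries:

```powershell
# Requires an active PowerCLI connection to vCenter
$si = Get-View ServiceInstance
$licenseMgr = Get-View $si.Content.LicenseManager

foreach ($license in $licenseMgr.Licenses) {
    # When a license expires, the date is stored in its Properties array;
    # the "expirationDate" key shown here is an assumption -- inspect
    # $license.Properties to confirm the key name in your environment
    $expiration = ($license.Properties | Where-Object { $_.Key -eq "expirationDate" }).Value
    "{0} ({1}) expires: {2}" -f $license.Name, $license.LicenseKey, $expiration
}
```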
The ESXi Dump Collector service is an extremely useful feature to have enabled, especially in a stateless environment where there may not be a local disk for storing core dumps generated during a host failure. Configuring ESXi hosts to send their core dumps to a remote vSphere Dump Collector still allows you to collect core dumps, which will help VMware Support analyze and determine the root cause of the failure.
In addition, leveraging the vSphere Dump Collector allows you to centrally manage core dump collection in your vSphere environment on the rare occasion a host generates a PSOD (Purple Screen of Death), without having to go out to the host and manually copy the core dump file. A potential challenge that may come up when configuring the ESXi Dump Collector service is how to validate that the configuration is correct and that everything will work if a host crashes.
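As a sketch, the network core dump configuration is managed through esxcli, and on ESXi 5.1 and later there is a `check` operation that verifies connectivity to the Dump Collector. The vmkernel interface, IP, and port below are placeholders:

```shell
# Point the host at the Dump Collector (interface, IP, and port are placeholders;
# 6500 is the Dump Collector's default port)
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.200 --server-port 6500
esxcli system coredump network set --enable true

# Review the configuration, then verify the host can actually reach the collector
esxcli system coredump network get
esxcli system coredump network check
```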