I’ve recently had several people report that Auto Deploy is adding new hosts to their vCenter inventory using the IP address and not the fully qualified hostname.
A common Auto Deploy issue I come across is: “I just added a new image profile and updated the rules on the Auto Deploy server, but when I reboot my vSphere hosts they still boot from the old image”.
This situation occurs when you update the active ruleset without updating the corresponding host entries in the Auto Deploy cache. The first time a host boots, the Auto Deploy server parses the host attributes against the active ruleset to determine (1) the …
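As a sketch of the fix (the cmdlet names come from the Auto Deploy PowerCLI snap-in; the server and host names are hypothetical), you can test a host's cached entries against the active ruleset and repair them before rebooting:

```powershell
# Connect to the vCenter Server that hosts Auto Deploy (hostname is an example)
Connect-VIServer -Server vcenter01.example.com

# Check whether the host's cached entries still comply with the active ruleset
$result = Get-VMHost esxi01.example.com | Test-DeployRuleSetCompliance

# Inspect the differences (e.g., old vs. new image profile)
$result.ItemList

# Update the cached entries to match the active ruleset,
# then reboot the host so it boots from the new image
Repair-DeployRuleSetCompliance $result
```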
I recently came across an interesting issue where a customer wasn't able to successfully PXE boot their HP DL380G7 servers using Auto Deploy. All attempts to PXE boot would result in a "connection timed out" error. They opened a support case with HP and verified they had the required updates installed, but despite this they continued to get "connection timed out" errors.
Long story short, the problem turned out to be not with the HP DL380G7 servers, the firmware, or the NIC drivers, as initially suspected, but with the Spanning Tree Protocol (STP) settings on the switch ports. What the customer discovered was that the timeout was occurring because PortFast had not been enabled on the switch ports. Once they enabled PortFast, the PXE boot worked as expected.
After reading up on the Spanning Tree Protocol and how PortFast works, here is what I learned: when the ESXi host powers up and begins the PXE boot, the switch port has to pass through the STP listening and learning states before transitioning into the forwarding state. Passing through the listening and learning states (by default, 15 seconds each) induces a delay that caused the PXE boot to time out. PortFast causes a switch port to enter the forwarding state immediately, bypassing the listening and learning states, which eliminates the delay and avoids the timeout.
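On a Cisco switch, for example, the fix is a single interface-level setting (the interface name and description here are hypothetical; other vendors have an equivalent edge-port setting):

```
interface GigabitEthernet1/0/10
 description ESXi host vmnic0 (PXE boot)
 spanning-tree portfast
```

PortFast should only be set on ports connected to end hosts, never on switch-to-switch links, since it bypasses the loop-prevention states.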
While researching this, I did a quick search of the VMware Knowledge Base and found KB1003804, which helped me understand a bit more about PortFast and why it's a good idea to have it enabled, even when you are not PXE booting your vSphere hosts.
Follow me on twitter @VMwareESXi.
The VMware Technical Marketing team has produced a series of short videos to help introduce and show off many of the new features and capabilities of vSphere 5.1. I’d like to call your attention to the Auto Deploy videos that I helped put together. There are three separate videos.
Auto Deploy – Stateless: This video shows how to implement the Auto Deploy stateless mode. It includes an overview of how to configure the DHCP scope options, how to set up the TFTP home directory, and how to create the rules on the Auto Deploy server using PowerCLI.
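As a rough sketch of the DHCP piece (an ISC dhcpd example with assumed addresses; the video covers the equivalent Windows DHCP scope options 66 and 67), the scope needs to point PXE clients at the TFTP server and the gPXE boot file:

```
# /etc/dhcp/dhcpd.conf -- scope for Auto Deploy hosts (addresses are examples)
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.199;
  next-server 192.168.10.5;               # option 66: TFTP server address
  filename "undionly.kpxe.vmw-hardwired"; # option 67: boot file from the TFTP home directory
}
```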
I'm very excited to announce the new vibauthor fling. This fling is hot off the press and provides the capability to create custom vSphere Installation Bundles (VIBs). Prior to this fling, the VIB authoring tools were available only to VMware partners; the fling now extends this capability to everyone.
There are a couple of use cases for creating custom VIBs: for example, you are using Auto Deploy and need to add a custom firewall rule to your host, or you need to make a configuration change that can't be made using Host Profiles.
One word of caution however, the ability to create custom VIBs does come with some responsibility. If you plan to create your own VIBs here are a few things to keep in mind:
- VIBs provided by VMware and trusted partners are digitally signed; these digital signatures ensure the integrity of the VIB. Custom VIBs are not digitally signed. Be careful when adding unsigned VIBs to your ESXi hosts, as you have no way of vouching for the integrity of the software being installed.
- Before adding a custom VIB you will need to set your host’s acceptance level to “Community Supported”. When running at the community supported acceptance level it’s important to understand that VMware support may ask you to remove any custom VIBs. Here’s the formal disclaimer:
“IMPORTANT If you add a Community Supported VIB to an ESXi host, you must first change the host’s acceptance level to Community Supported. If you encounter problems with an ESXi host that is at the CommunitySupported acceptance level, VMware Support might ask you to remove the custom VIB, as outlined in the support policies:”
If you are not familiar with VIBs I recommend you start with a quick review of this blog: http://blogs.vmware.com/esxi/2011/09/whats-in-a-vib.html
With that, I know several folks have been chomping at the bit to create their own custom VIBs, so I've attached a short tutorial that shows how to use the vibauthor tool to create a VIB to add a custom firewall rule.
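For context, the payload of such a VIB is typically a firewall rule file placed under /etc/vmware/firewall/ on the host. A minimal sketch, assuming a hypothetical service name and port:

```xml
<!-- /etc/vmware/firewall/myservice.xml (service name and port are examples) -->
<ConfigRoot>
  <service id='0000'>
    <id>myservice</id>
    <rule id='0000'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8080</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
```

Packaging the file in a VIB (rather than copying it by hand) is what makes the rule survive reboots on a stateless Auto Deploy host.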
The following is an excerpt from my “What’s New in VMware vSphere 5.1 – Platform” white paper that introduces the new Auto Deploy Stateless Caching and Stateful Install modes. You can download the white paper from here.
vSphere 5.0 introduced VMware vSphere Auto Deploy, a new way to rapidly deploy new vSphere hosts. With Auto Deploy, the vSphere host PXE boots over the network and is connected to an Auto Deploy server, where the vSphere host software is provisioned directly into the host’s memory. After the software has been installed on the host, it is connected to the VMware® vCenter™ Server (vCenter) and configured using a host profile.
Auto Deploy significantly reduces the amount of time required to deploy new vSphere hosts. And because an Auto Deploy host runs directly from memory, there is no requirement for a dedicated boot disk. This not only provides cost savings, because there is no need to allocate boot storage for each host, but it also can simplify the SAN configuration, because there is no need to provision and zone LUNs each time a new host is deployed. In addition, because the host configuration comes from a host profile, there is no need to create and maintain custom pre- and post-install scripts.
Along with the rapid deployment, cost savings and simplified configuration, Auto Deploy provides the following benefits:
• Each host reboot is comparable to a fresh install. This eliminates configuration drift.
• With no configuration drift between vSphere hosts, less time will be spent troubleshooting and diagnosing configuration issues.
• Simplified patching and upgrading. Applying updates is as easy as creating a new image profile, updating the corresponding rule on the Auto Deploy server, and rebooting the hosts. In the unlikely event that you must remove an update, reverting to the previous image profile is just as easy: 1) update the rule to assign the original image profile and 2) reboot again.
NOTE: Because an Auto Deploy host runs directly from memory, it often is referred to as being “stateless.” This is because the host state (i.e., configuration) that is normally stored on a boot disk comes from the vCenter Host Profile.
In vSphere 5.0 Auto Deploy supported only one operational mode, which was referred to as “stateless” (also known as “diskless”). vSphere 5.1 extends Auto Deploy with the addition of two new operational modes: “Stateless Caching” and “Stateful Installs”.
This page is dedicated to the VMware posters created by Technical Marketing and released at VMworld and VMUGs around the world. It is a central place to find the latest PDF versions, which can be used for reference or printed off as needed.
If you didn’t enter this post with the short URL then remember, to get here just use: http://vmware.com/go/posters/
Use the following links to jump straight to the correct area of this page:
- VMware vCloud Suite Poster
- VMware ESXi 5.1 Reference Poster
- VMware Management with PowerCLI 5.1 Poster
- VMware vCloud Networking Poster
- VMware Hands-On Labs 2013 Poster
- VMware vCloud SDKs Poster
- Poster printing details
- Poster Issues and Corrections
VMware vCloud Suite
VMware ESXi 5.1 Reference Poster
Click here to download the PDF. (Please see the bottom of this page for important information)
VMware Management with PowerCLI 5.1 Poster
VMware vCloud Networking Poster
Click here to download the PDF. (Last updated Sept 13 2012)
VMware Hands-On Labs 2013 Poster
VMware vCloud SDKs Poster (1.0 – Out of date)
Poster Printing Details
The following sizes (in inches) are normally used when sending these PDF files for professional printing. The posters are produced as vector graphics and are therefore scalable to multiples of the sizes listed below. If you are printing in high resolution, print at no lower than 300 dpi:
- vCloud Suite poster – 34 x 22
- PowerCLI 5.1 poster – 42 x 22
- Hands On Lab Reference – 33 x 17
- vCloud Networking – 44 x 30.5
- ESXi 5.1 Reference – 34 x 22
Poster Issues and Corrections:
The following issues and corrections apply to the distributed hard-copy posters and to PDF files received before October 13th, 2012; the PDF currently uploaded in this post includes these corrections.
ESXi 5.1 Reference Poster
Use the following corrected code instead of the code on the poster; the poster code will cause an error.
List Registered VMs (vCLI only)
# vmware-cmd -l
Register a VM (vCLI)
# vmware-cmd -s register /vmfs/volumes/<volume name>/<vm>/<vm>.vmx <datacenter> <resource pool>
Unregister a VM (vCLI only)
# vmware-cmd -s unregister /vmfs/volumes/<volume name>/<vm>/<vm>.vmx
Get VM power state (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx getstate
Power on a VM (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx start
Shutdown a VM (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx stop [ soft | hard ]
Power off a VM (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx stop [ soft | hard ]
Reset a VM (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx reset [soft | hard ]
Suspend a VM (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx suspend [soft | hard ]
Resume a VM (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx resume [soft | hard ]
Get ESXi Host Platform Information (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx getproductinfo [ product | platform | build | majorversion| minorversion ]
Get VM Uptime (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx getuptime
Get VMware Tools Status (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx gettoolslastactive
0 = Not installed/Not running
1 = Normal
5 = Intermittent Heartbeat
100 = No heartbeat. Guest operating system might have stopped responding
Create VM Snapshot (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx createsnapshot <name> <desc> <quiesce> <memory>
quiesce = Quiesce filesystem w/VMware Tools [ 0 | 1 ]
memory = Include memory state in snapshot [ 0 | 1 ]
Check if VM has a Snapshot (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx hassnapshot
Revert to VM Snapshot (vCLI only)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx revertsnapshot
Commit VM Snapshot (vCLI)
# vmware-cmd /vmfs/volumes/<volume name>/<vm>/<vm>.vmx removesnapshot
Forcibly Stop a VM with ESXCLI
# esxcli vm process list
# esxcli vm process kill --type [ soft | hard | force ] -w <worldId>
soft = similar to kill or kill -SIGTERM
hard = similar to kill -9 or kill -SIGKILL
force = use as a last resort
Under the "Virtual Machine Capabilities" section, the maximum VM video memory for all versions is listed in KB instead of MB.
I recently blogged about how in vSphere 5.1 you can now assign full admin privileges to named users, and in that post I commented that while it is possible to create local user accounts on each vSphere host that a better approach is to add your host to a Microsoft Active Directory (AD) domain and use your existing AD credentials instead. In this post I will provide an example showing how to do this.
Note that although the ability to assign full admin privileges to local users is new in vSphere 5.1, the ability to join vSphere hosts to Active Directory is not new. In this example I'm using vSphere 5.0.
Of course before you can add your vSphere hosts to AD you need to have an AD domain. In addition you need to have a domain admin account with the rights to add computers to the domain.
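If you prefer the command line to the vSphere Client, the same join can be sketched with the vCLI's vicfg-authconfig command (host name, domain, and account names below are examples):

```shell
# Join the host to the AD domain (prompts for the root password if omitted)
vicfg-authconfig --server esxi01.example.com --username root \
    --authscheme AD --joindomain example.com \
    --adusername Administrator --adpassword 'S3cret!'

# Verify the host's currently joined domain
vicfg-authconfig --server esxi01.example.com --username root --currentdomain
```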
With the release of vSphere 5.1, there have been some major enhancements to ESXCLI, which is part of the vCLI 5.1 release and is also available with the latest vMA 5.1. Here's a quick summary of the ESXCLI top-level namespaces that have received updates.
82 new ESXCLI commands:
In addition to the new ESXCLI commands for vSphere 5.1 features, we continue to bring further parity from some of our legacy esxcfg-* and vicfg-* commands over to ESXCLI and to standardize on a common command-line interface for host configuration and management. In this release, we have introduced the following new namespaces:
| Previous Command | New ESXCLI Command |
|---|---|
| esxcfg-route/vicfg-route | esxcli network ip route |
| vicfg-snmp | esxcli system snmp |
| vicfg-hostops (maintenance mode) | esxcli system maintenanceMode |
| vicfg-hostops (shutdown/reboot) | esxcli system shutdown |
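As a hedged illustration of the new namespaces (the values are examples; run these from an ESXi shell, or remotely via esxcli's --server option):

```shell
# Old: vicfg-snmp --communities public --enable
esxcli system snmp set --communities public --enable true

# Old: vicfg-hostops --operation enter
esxcli system maintenanceMode set --enable true

# Old: esxcfg-route 192.168.1.1
esxcli network ip route ipv4 add --network default --gateway 192.168.1.1
```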
For more details on all the new ESXCLI commands, please take a look at the release notes here. Also stay tuned for upcoming blog posts in which we will be exploring some of the new ESXCLI 5.1 commands in greater detail!
Download: vSphere CLI 5.1
Also, don't forget to check out our updated VMware ESXi reference poster, which has recently been refreshed for ESXi 5.1; you can download your copy here.
If you are visiting VMworld Europe in 2012 make sure you add INF-VSP1252 – What’s New with vSphere 5.1 – ESXCLI & PowerCLI to your session list and we hope to see you there!
Get notification of new blog postings and more by following lamw on Twitter: @lamw
I last blogged about how vSphere 5.1 removes the dependency on a shared root account by allowing you to assign full admin rights to non-root users (aka named users). Today I want to talk about another nice security feature that has been added in vSphere 5.1, and that is the ability to automatically terminate idle ESXi Shell connections.
The new ESXiShellInteractiveTimeOut complements the existing ESXiShellTimeOut, which has been in ESXi for a while. Because the names are very similar, it's easy to confuse the two, so I'll go over both settings.
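Both can be set from the command line via the standard UserVars advanced options (the timeout values below, in seconds, are examples):

```shell
# Terminate idle interactive ESXi Shell/SSH sessions after 15 minutes
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900

# Disable the ESXi Shell and SSH services one hour after they are enabled
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 3600
```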