
Tag Archives: ESXi

Network Core Dump Collector Check with ESXCLI 5.1

The ESXi Dump Collector service is an extremely useful feature to have enabled, especially in a stateless environment where there may be no local disk for storing core dumps generated during a host failure. By configuring ESXi hosts to send their core dumps to a remote vSphere Dump Collector, you can still collect core dumps, which help VMware Support analyze and determine the root cause of the failure.

In addition, leveraging the vSphere Dump Collector lets you centrally manage core dump collection in your vSphere environment on the rare occasion a host generates a PSOD (Purple Screen of Death), without having to go out to the host and manually copy the core dump file. A potential challenge when configuring the ESXi Dump Collector service is validating that the configuration is correct and that everything will work if a host crashes.
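As a sketch of the workflow (the vmkernel interface name and collector address below are placeholders for your own environment), the configuration and the new 5.1 validation check might look like this:

```shell
# Point the host's network core dump configuration at a remote
# vSphere Dump Collector (interface/address/port are examples)
esxcli system coredump network set --interface-name vmk0 \
    --server-ipv4 192.168.1.50 --server-port 6500

# Enable network core dumps
esxcli system coredump network set --enable true

# Review the current settings
esxcli system coredump network get

# New in ESXCLI 5.1: verify the configuration end to end
esxcli system coredump network check
```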
Continue reading

Configuring SNMP v1/v2c/v3 Using ESXCLI 5.1

In previous releases of ESXi, only SNMP v1 and v2c were supported on the host. With the latest release of ESXi 5.1, we have added support for SNMPv3, which provides additional security when collecting data from the ESXi host. You also have the ability to specify where to source hardware alerts, using either IPMI sensors (as in previous releases of ESXi) or CIM indications. You can also filter out specific traps you do not wish to send to your SNMP management server.

In addition to SNMPv3 support, there is now an ESXCLI equivalent to the old vicfg-snmp command. This means you no longer have to use multiple commands to configure your ESXi hosts and can standardize on ESXCLI for all your host-level configurations.
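For illustration, a minimal v1/v2c setup with the new namespace might look like the following (the community string and trap target are placeholders):

```shell
# Configure a community string and a trap target (target@port/community)
esxcli system snmp set --communities public
esxcli system snmp set --targets 192.168.1.60@162/public

# Enable the SNMP agent on the host
esxcli system snmp set --enable true

# Review the configuration and send a test trap
esxcli system snmp get
esxcli system snmp test
```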

Continue reading

Automating CA Self-Signed Certificates for ESXi 5.1 for use with resxtop

Last week I wrote an article about resxtop failing to connect to an ESXi 5.1 host due to the SSL certificate validation that has been implemented in resxtop, and I provided a few workarounds you can use until a fix is released. As promised at the end of that article, I will show you how to automate the creation of proper certificates for environments using CA self-signed SSL certificates so you can continue using resxtop with ESXi 5.1 until a fix is released.

Continue reading

resxtop fails to connect to a vSphere 5.1 host

If you have recently installed the latest vCLI 5.1 release and are using the remote resxtop utility to connect to a vSphere 5.1 host, you might have noticed one of the following error messages: "Login failed, reason: HTTPS_CA_FILE or HTTPS_CA_DIR not set" or "SSL Exception: Verification parameters".
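As the first error message hints, one possible workaround is to point resxtop at a CA bundle that can validate the host's certificate before invoking it (the certificate path and hostname below are placeholders):

```shell
# Tell resxtop's SSL layer where to find the trusted CA certificate
# (path is an example; use the bundle that signed your host certs)
export HTTPS_CA_FILE=/etc/vmware/certs/cacert.pem

# Connect to the host as usual
resxtop --server esxi01.example.com --username root
```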

Continue reading

Auto Deploy Adding Host to vCenter Using IP

I’ve recently had several people report that Auto Deploy is adding new hosts to their vCenter inventory using the IP address and not the fully qualified hostname.

Continue reading

Auto Deploy Host Booting From Wrong Image Profile

A common Auto Deploy issue I come across is:  "I just added a new image profile and updated the rules on the Auto Deploy server, but when I reboot my vSphere hosts they still boot from the old image".

This situation occurs when you update the active ruleset without updating the corresponding host entries in the Auto Deploy cache. The first time a host boots, the Auto Deploy server parses the host attributes against the active ruleset to determine (1) the … Continue reading
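To sketch the fix (a PowerCLI/PowerShell session; the hostname is a placeholder), you can re-evaluate a host against the active ruleset and repair its cached entry so the next reboot picks up the new image profile:

```shell
# PowerCLI session: check a booted host against the active ruleset
$vmhost = Get-VMHost esxi01.example.com
$result = Test-DeployRuleSetCompliance -VMHost $vmhost

# Update the host's cached entry to match the active ruleset,
# then reboot the host to pull the new image profile
Repair-DeployRuleSetCompliance -TestResult $result
```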

Connection Timeout when PXE booting HP DL380G7

I recently came across an interesting issue where a customer wasn't able to successfully PXE boot their HP DL380G7 servers using Auto Deploy. All attempts to PXE boot resulted in a "connection timed out" error. They opened a support case with HP and verified they had the required updates installed, but despite this they continued to get "connection timed out" errors.

Long story short, the problem was not with the HP DL380G7 servers, the firmware, or the NIC drivers, as initially suspected; rather, it was an issue with the Spanning Tree Protocol (STP) settings on the switch ports. The timeout was occurring because PortFast had not been enabled on the switch ports. Once they enabled PortFast, the PXE boot worked as expected.

After reading up on the Spanning Tree Protocol and how PortFast works, I learned that when the ESXi host powered up and began the PXE boot, the switch port had to go through the STP listening and learning states before transitioning into the forwarding state. Transitioning through the listening and learning states induced a delay that caused the PXE boot to time out. PortFast causes a switch port to enter the forwarding state immediately, bypassing the listening and learning states, which eliminates the delay and avoids the timeout.
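On a Cisco switch, for example, enabling PortFast on an edge port is a one-line interface setting (the interface name below is an example; other vendors have equivalent edge-port features):

```
! Illustrative Cisco IOS fragment; apply only to ports facing end
! hosts, never to ports facing other switches
interface GigabitEthernet0/1
 spanning-tree portfast
```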

In researching this I did a quick search of the VMware knowledge base portal and found KB1003804 which helped me understand a bit more about PortFast and why it’s a good idea to have it enabled, even when you are not PXE booting your vSphere hosts.

Follow me on Twitter @VMwareESXi.

vSphere 5.1 Auto Deploy Overview Videos

The VMware Technical Marketing team has produced a series of short videos to help introduce and show off many of the new features and capabilities of vSphere 5.1.  I’d like to call your attention to the Auto Deploy videos that I helped put together.  There are three separate videos.

Auto Deploy – Stateless: This video shows how to implement the Auto Deploy stateless mode. It includes an overview of how to configure the DHCP scope options, how to set up the TFTP home directory, and how to create the rules on the Auto Deploy server using PowerCLI.
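As a rough illustration of the DHCP piece (an ISC dhcpd scope; the subnet, addresses, and TFTP server are placeholders for your environment), the two key scope options point hosts at the TFTP server and the Auto Deploy boot file:

```
# Hypothetical ISC dhcpd scope for Auto Deploy PXE booting
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;                # TFTP server (option 66)
  filename "undionly.kpxe.vmw-hardwired";  # boot file (option 67)
}
```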

Continue reading

Introducing the VIB Author Fling

I’m very excited to announce the new vibauthor fling. This fling is hot off the press and provides the capability to create custom vSphere Installation Bundles (VIBs). Prior to this fling, the VIB authoring tools were available only to VMware partners; this fling now extends the capability to everyone.

There are a couple of use cases for creating custom VIBs: for example, you are using Auto Deploy and need to add a custom firewall rule to your hosts, or you need to make a configuration change that can’t be made using Host Profiles.

One word of caution, however: the ability to create custom VIBs comes with some responsibility. If you plan to create your own VIBs, here are a few things to keep in mind:

  1. VIBs provided by VMware and trusted partners are digitally signed; these digital signatures ensure the integrity of the VIB. Custom VIBs are not digitally signed. Be careful when adding unsigned VIBs to your ESXi hosts, as you have no way of vouching for the integrity of the software being installed.
  2. Before adding a custom VIB, you will need to set your host’s acceptance level to “Community Supported”. When running at the Community Supported acceptance level, it’s important to understand that VMware Support may ask you to remove any custom VIBs. Here’s the formal disclaimer:

IMPORTANT: “If you add a Community Supported VIB to an ESXi host, you must first change the host’s acceptance level to Community Supported. If you encounter problems with an ESXi host that is at the CommunitySupported acceptance level, VMware Support might ask you to remove the custom VIB, as outlined in the support policies.”
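To sketch what this looks like in practice (the VIB path below is a placeholder for your own bundle):

```shell
# Lower the host's acceptance level to allow unsigned, community VIBs
esxcli software acceptance set --level CommunitySupported

# Install the custom VIB (path is an example)
esxcli software vib install -v /tmp/custom-firewall-rule.vib

# Confirm the host's current acceptance level
esxcli software acceptance get
```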


If you are not familiar with VIBs I recommend you start with a quick review of this blog: http://blogs.vmware.com/esxi/2011/09/whats-in-a-vib.html

With that, I know several folks have been chomping at the bit to create their own custom VIBs, so I’ve attached a short tutorial that shows how to use the vibauthor tool to create a VIB that adds a custom firewall rule.



vSphere 5.1 - Auto Deploy Stateless Caching and Stateful Installs

The following is an excerpt from my “What’s New in VMware vSphere 5.1 – Platform” white paper that introduces the new Auto Deploy Stateless Caching and Stateful Install modes. You can download the white paper from here.

vSphere 5.0 introduced VMware vSphere Auto Deploy, a new way to rapidly deploy new vSphere hosts.  With Auto Deploy, the vSphere host PXE boots over the network and is connected to an Auto Deploy server, where the vSphere host software is provisioned directly into the host’s memory.  After the software has been installed on the host, it is connected to the VMware® vCenter™ Server (vCenter) and configured using a host profile.

Auto Deploy significantly reduces the amount of time required to deploy new vSphere hosts. And because an Auto Deploy host runs directly from memory, there is no requirement for a dedicated boot disk. This not only provides cost savings, because there is no need to allocate boot storage for each host, but it also can simplify the SAN configuration, because there is no need to provision and zone LUNs each time a new host is deployed. In addition, because the host configuration comes from a host profile, there is no need to create and maintain custom pre- and post-install scripts.

Along with the rapid deployment, cost savings and simplified configuration, Auto Deploy provides the following benefits:
• Each host reboot is comparable to a fresh install, which eliminates configuration drift.
• With no configuration drift between vSphere hosts, less time is spent troubleshooting and diagnosing configuration issues.
• Simplified patching and upgrading. Applying updates is as easy as creating a new image profile, updating the corresponding rule on the Auto Deploy server, and rebooting the hosts. In the unlikely event you must remove an update, reverting to the previous image profile is also easy: 1) update the rule to assign the original image profile and 2) reboot again.
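As a sketch of the update-and-revert cycle above (a PowerCLI/PowerShell session; the rule and image profile names are placeholders):

```shell
# PowerCLI session: update the existing rule to assign a new image
# profile by copying it with the item replaced
Copy-DeployRule -DeployRule "prod-hosts" -ReplaceItem "ESXi-5.1-patched"

# To revert, point the rule back at the original profile, then reboot
# the hosts so they pick it up
Copy-DeployRule -DeployRule "prod-hosts" -ReplaceItem "ESXi-5.1-original"
```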

NOTE: Because an Auto Deploy host runs directly from memory, it often is referred to as being “stateless.” This is because the host state (i.e., configuration) that is normally stored on a boot disk comes from the vCenter Host Profile.

In vSphere 5.0, Auto Deploy supported only one operational mode, referred to as "stateless" (also known as "diskless"). vSphere 5.1 extends Auto Deploy with two new operational modes: "Stateless Caching" and "Stateful Installs".

Continue reading