vSphere 6.0 introduces a new concept called Exception Users. The intent of Exception Users is that they are not general admin users; I would consider them more of a “Service Account” type of access.
As a matter of fact, just the other day I got an email from someone internal at VMware that brought up a great use case for Exception Users. They were talking to a customer who wanted to access ESXi via a PowerCLI cmdlet (Get-VMHostAccount) to list out the local accounts on an ESXi server as part of their normal security reporting.
But they also wanted to enable Lockdown Mode and were finding it difficult to comply with both requirements. In vSphere 6.0 this is now much easier to address. Let’s get started.
In part 1 of this article, we looked at an interesting scenario in which, despite having the Virtual SAN disk management setting set on automatic, Virtual SAN would not form disk groups around the disks present in the hosts. Upon closer examination, we discovered that the server vendor had pre-imaged the drives with NTFS prior to shipping. When Virtual SAN detects an existing partition, it does not automatically erase the partitions and replace them with its own. This serves to protect against accidental drive erasure. Since NTFS partitions already existed on the drives, Virtual SAN was awaiting manual intervention. In the previous article, we walked through the manual steps to remove the existing partitions and allow Virtual SAN to build the disk groups. In this article, we will look at how to expedite the process through scripting.
Warning: Removing disk partitions will render data irretrievable. This script is intended for educational purposes only. Please do not use it directly in a production environment.
As promised in part 1 of this article, we will demonstrate today how to create your own utility to remove unlocked/unmounted partitions from disks located within your ESXi host. The aim of the script is to provide an example workflow for removing the partitions that insists upon user validation prior to each partition removal. This example workflow can be adapted and built upon to create your own production-ready utility.
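As a rough sketch of what such a workflow might look like, the following ESXi Shell script walks each disk device, lists its partitions, and asks for confirmation before deleting each one. The parsing of partedUtil output (label line, geometry line, then one line per partition) is an approximation and should be verified against your build before use:

```shell
#!/bin/sh
# Sketch of an interactive partition-removal workflow for the ESXi Shell.
# Educational only: deleting partitions destroys data irretrievably.

for DISK in /vmfs/devices/disks/naa.*; do
  # Skip partition entries such as naa.xxx:1; only handle whole devices
  case "$DISK" in (*:*) continue ;; esac

  # partedUtil getptbl prints the label, then disk geometry, then partitions;
  # skip the first two lines and keep the partition numbers (first column)
  PARTS=$(partedUtil getptbl "$DISK" 2>/dev/null | tail -n +3 | awk '{print $1}')
  [ -z "$PARTS" ] && continue

  for PART in $PARTS; do
    echo "Delete partition $PART on $DISK? (y/n)"
    read ANSWER
    if [ "$ANSWER" = "y" ]; then
      partedUtil delete "$DISK" "$PART"
    fi
  done
done
```

A production version would want additional guards, such as skipping the boot device and any disk that is currently claimed by a datastore.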
The deployment and configuration of Virtual SAN, with its two-click configuration capability, is indeed “radically simple.” Upon enabling Virtual SAN and leaving the default disk management setting (“Add disks to storage”) set on automatic, Virtual SAN will detect the solid state and magnetic disks that physically exist within the Virtual SAN cluster hosts. Virtual SAN will then create two partitions on each disk, place the disks in their relative disk groups, and pool those disk groups into a single datastore.
Recently, I was working with a customer who ran into a very interesting scenario in which, despite having the disk management setting set on automatic, Virtual SAN would not form disk groups around the disks present in the hosts. Upon examination, we discovered NTFS partitions already in existence on the disks. Evidently, the customer’s server acquisition process asks that the server vendor pre-image all disks with NTFS prior to shipping. When Virtual SAN detects an existing partition, it does not automatically erase these partitions and replace them with its own. Instead, you will notice the Virtual SAN cluster being enabled without disk groups. This serves to protect against accidental drive erasure. Since NTFS partitions already existed on the drives, Virtual SAN was awaiting manual intervention.
There are a few scenarios in which we may find pre-existing partitions on disks that are slated for Virtual SAN consumption (e.g. pre-imaged disks, repurposed hardware, rebuilding a Virtual SAN test environment). In order for Virtual SAN to manage these disks, they will need to have their partitions removed. This removal process can be performed with most disk format utilities. It is important to remember not to format the disk after you erase the partition; otherwise, a new partition will be created.
A quick way to erase pre-existing partitions and enable Virtual SAN to manage the disks is to use the partedUtil utility that is native to the ESXi kernel environment and available from the command line. Below you will find a step-by-step example of how to use the utility to delete the unwanted partitions:
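For example, with a hypothetical device ID (naa.600508b1001c4d41) standing in for your actual disk, the sequence looks like this:

```shell
# 1. List the disk devices on the host to identify the disk in question
ls /vmfs/devices/disks/

# 2. Show the partition table for the disk; partedUtil prints the label,
#    the disk geometry, and then one line per existing partition
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c4d41

# 3. Delete each unwanted partition by its number (here, partition 1)
partedUtil delete /vmfs/devices/disks/naa.600508b1001c4d41 1

# 4. Verify that the partition table no longer lists any partitions
partedUtil getptbl /vmfs/devices/disks/naa.600508b1001c4d41
```

Once the partitions are gone, Virtual SAN's automatic mode will claim the disks and build the disk groups on its own.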
Updated based on feedback. Thanks for the comments!
I’d like to revisit the question, “Are ESXi patches cumulative?” This time I hope to hammer home the point with an example.
In short, the answer is yes, the ESXi patch bundles are cumulative. However, when applying patches from the command line using the ESXCLI command you do need to be careful to ensure you update the complete image profile and not just select VIBs.
There are two ways to update VIBs using the ESXCLI command. You can use either the "esxcli software vib update ..." command or the "esxcli software profile update ..." command. The "vib" namespace is typically used with the optional "-n <vib name>" parameter to update individual VIBs, whereas the "profile" namespace is typically used to update the host's image profile, which may include multiple VIB updates. The key is that when applying patches you should use the "profile" namespace to update the complete image profile, as opposed to using the "vib" namespace to update selected VIBs.
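To make the difference concrete, here is an example of both forms. The depot path and profile name below are placeholders; you can list the profiles actually contained in a patch bundle with "esxcli software sources profile list" first:

```shell
# List the image profiles available inside an offline bundle (depot)
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi510-201307001.zip

# Recommended when patching: update the complete image profile
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi510-201307001.zip -p ESXi-5.1.0-20130701001s-standard

# Not recommended for patching: updates only the named VIB
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi510-201307001.zip -n esx-base
```

The "profile update" form ensures every VIB in the host's image is brought up to the levels defined in the bundle, rather than just the ones you happened to name.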
This was a recent question that was asked internally about the minimum privileges required to query VIBs on an ESXi host. The request came from a custom script developed for compliance checking, and the customer was looking to create a custom vSphere role to minimize the privileges needed to perform the task. Since I did not know the answer, it was off to the lab for some testing. Through the process of elimination, it turns out the only privilege required for querying VIBs on an ESXi host is Global.Settings.
In the example above, I created a custom vCenter Server Role called VIBQuery, enabled the Global.Settings privilege, and assigned the role to a user. The custom role can be created on both a vCenter Server as well as directly on an ESXi host. By using vCenter Server, one can benefit from centralized management of user access to all ESXi hosts in the environment.
To confirm that our user assigned to the new role can query VIBs on an ESXi host, we will run the following ESXCLI command:
esxcli --server [VC-SERVER] --vihost [ESXi-SERVER] --username [USER] software vib list
We can also confirm that we can do the same directly on the ESXi host by running the following ESXCLI command:
esxcli --server [ESXi-SERVER] --username [USER] software vib list
When granting access to your vSphere infrastructure, you should always follow good security practices by leveraging the RBAC (Role-Based Access Control) model and restricting the permissions a user is granted.
UPDATE: In addition to using ESXCLI, there are two additional options to query installed VIBs on an ESXi host as noted by the comment below by Mike.
Get notification of new blog postings and more by following lamw on Twitter: @lamw
Typically, unpreparing a host in vCloud Director is fairly straightforward and works without issue. On the rare occasion it doesn't work, there are posted workarounds for manually removing the vcloud-agent from the vSphere host using ESXCLI. Well, what about the even rarer chance you need to remove the vCloud Director agent from a vSphere host while it still has workloads (VMs) running on it?
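For reference, the posted workaround for a host without running workloads boils down to something like the following. The VIB name here should be confirmed from your own host's output before removing anything:

```shell
# List installed VIBs and look for the vCloud Director agent
esxcli software vib list | grep -i vcloud

# Remove the agent VIB (name is taken from the list output above)
esxcli software vib remove -n vcloud-agent
```

The harder question, which this post tackles, is how to do this safely while VMs are still running on the host.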
Recently I have been doing some work with VXLAN with my colleagues Venky Deshpande who is responsible for vCloud Networking and Ranga Maddipudi who is responsible for vCloud Security within our technical marketing team (I call them the vCloud Networking & Security Duo). While working in our lab, I came across several VXLAN commands in ESXCLI that I thought might come in handy when configuring or troubleshooting a VMware VXLAN environment. The new VXLAN namespace in ESXCLI 5.1 provides both VXLAN configuration details as well network statistics for an individual ESXi host.
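A few of the subcommands in that namespace, shown here with a placeholder vDS name, illustrate the kind of detail available. The exact set of subcommands may vary with the VXLAN module version installed on the host:

```shell
# Show the overall VXLAN configuration for the distributed switch(es) on this host
esxcli network vswitch dvs vmware vxlan list

# List the VXLAN vmknic interfaces on a given vDS (vDS name is an example)
esxcli network vswitch dvs vmware vxlan vmknic list --vds-name=MyVDS

# List the VXLAN networks (segments) instantiated on the vDS
esxcli network vswitch dvs vmware vxlan network list --vds-name=MyVDS
```

Note that this namespace only appears after the VXLAN VIB has been installed on the host as part of preparing the cluster for VXLAN.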
When it comes to network troubleshooting on an ESXi host, there is a wide range of information that can be useful to a vSphere administrator, from basic details such as a virtual machine’s IP addresses, MAC addresses, uplink ports, and port IDs, to the network statistics view in the [r]esxtop command-line utility. However, all of this useful information is spread across multiple tools today, which can make it challenging for a vSphere administrator to quickly retrieve this data while troubleshooting an issue.
With the release of vSphere 5.1, the network namespace in ESXCLI has been enhanced to include a comprehensive set of network statistics at various points in the virtual network. This enables a vSphere administrator to easily get an overall status of the vSphere network as well as the ability to drill down further for troubleshooting.
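For example, the following commands show the typical drill-down path from a VM to its per-port statistics. The world ID, port ID, and NIC name below are placeholder values you would substitute from the output of the preceding commands:

```shell
# List the VMs on this host along with their networking world IDs
esxcli network vm list

# List the network ports in use by a given VM (world ID is an example value)
esxcli network vm port list -w 123456

# Get packet and byte statistics for a specific port ID
esxcli network port stats get -p 33554438

# Get statistics for a physical uplink
esxcli network nic stats get -n vmnic0
```

Starting from the VM and walking down to the port and uplink lets you localize where packets are being dropped along the virtual network path.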
In earlier releases of ESXi, a VMkernel interface could transport three types of traffic: Management, vMotion, and Fault Tolerance. To enable a particular traffic type, one would use either the vSphere Web/C# Client or the vSphere API. Some of you may recall using an undocumented command-line tool called vim-cmd in the ESXi Shell to enable vMotion and Fault Tolerance traffic. An issue with this tool is that it does not support the Management traffic type. This made it a challenge to completely automate the provisioning of VMkernel interfaces from a scripted installation (kickstart) perspective, and required the use of remote CLI/APIs to enable the Management traffic type.
Luckily, in vSphere 5.1 we now have an easy way of tagging these various traffic types on a VMkernel interface using the new ESXCLI 5.1.
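For example, the interface names below are placeholders, and the valid tag names (to the best of my knowledge, to be confirmed with "tag get" on your host) include Management, VMotion, faultToleranceLogging, and vSphereReplication:

```shell
# List the tags currently applied to a VMkernel interface
esxcli network ip interface tag get -i vmk0

# Enable Management traffic on vmk0
esxcli network ip interface tag add -i vmk0 -t Management

# Enable vMotion traffic on vmk1
esxcli network ip interface tag add -i vmk1 -t VMotion

# Remove a tag if the interface should no longer carry that traffic type
esxcli network ip interface tag remove -i vmk1 -t VMotion
```

Because these are plain ESXCLI commands, they can be dropped straight into a kickstart %firstboot section, closing the automation gap described above.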
The ESXi Dump Collector service is an extremely useful feature to have enabled; this is especially important in a stateless environment where there may not be a local disk for storing core dumps generated during a host failure. By configuring ESXi hosts to send their core dumps to a remote vSphere Dump Collector, you can still collect core dumps, which will help VMware Support analyze and determine the root cause of the failure.
In addition, leveraging the vSphere Dump Collector allows you to centrally manage core dump collection in your vSphere environment on the rare occasion a host generates a PSOD (Purple Screen of Death), without having to go out to the host and manually copy the core dump file. A potential challenge that may come up when configuring the ESXi Dump Collector service is: how do you validate that the configuration is correct and that everything will work if a host crashes?
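As a sketch of that validation workflow (the Dump Collector IP address and VMkernel interface name below are placeholders), ESXCLI 5.1 adds a "check" operation that verifies the configured host can actually reach the Dump Collector:

```shell
# Point the host's network core dump at the Dump Collector (values are examples;
# 6500 is the Dump Collector's default port)
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.100 --server-port 6500
esxcli system coredump network set --enable true

# Confirm the current network core dump configuration
esxcli system coredump network get

# Verify the configuration and connectivity to the Dump Collector
esxcli system coredump network check
```

The "check" command exercises the path end to end, so you can confirm the setup before a real host failure ever occurs.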