
Monthly Archives: July 2012

SRM Product Survey

Hi all, we're doing another survey of our SRM customers to try to understand your usage and inform our product management decisions with some hard data. If you get a chance to do the survey we'd appreciate it – it shouldn't take more than 15 minutes of your time.

Thanks!

SRM Product Survey

Admission control: “used slots” exceeds “total slots”

By Duncan Epping, Principal Architect.

On the VMTN forum today someone asked how it was possible that the “used slots” exceeded the “total slots”. This is what their environment showed in vCenter:

HA Advanced Runtime Info:
Slot size                          4000Mhz
                                   4 vCPUs,
                                   4232MB
Total Slots in Cluster             16
Used Slots                         66
Available Slots                    0
Total Powered on vms in Cluster    66
Total Hosts in cluster             2
Total good host                    2

You can imagine this person was very surprised to see this. How can you have 66 slots used and only 16 total slots available in your cluster? There are two possible explanations:

  1. Admission Control is disabled
  2. A reservation was set on a virtual machine after all virtual machines were powered on, skewing the numbers

Let’s tackle number 1 first. If you disable admission control, the vSphere UI will still show the slot size, the number of slots, and so on; it just won’t do anything with them…

With regard to the second explanation, it might be easier to give an example:

Just imagine you have 2 hosts, HA does its calculations, and you have 100 slots available. You power on 100 VMs. Now you set a reservation on a VM; this reservation changes the slot size. HA does its calculations again based on the new slot size, and the result is only 25 slots. However, you have already used 100 slots. In other words, you now have 25 total slots and 100 used slots.
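To make the arithmetic concrete, here is a minimal sketch with made-up capacity numbers (the real HA slot calculation also takes CPU reservations, memory overhead and per-host capacity into account):

cluster_capacity_mb = 25600                        # made-up usable memory across both hosts
slot_size_mb = 256                                 # slot size before any large reservation

total_slots = cluster_capacity_mb // slot_size_mb  # 100 slots
used_slots = 100                                   # 100 VMs powered on, every slot consumed

# A 1GB memory reservation is now set on one running VM, so HA recalculates
# with a larger slot size:
slot_size_mb = 1024
total_slots = cluster_capacity_mb // slot_size_mb  # 25 slots

print(total_slots, used_slots)                     # 25 100 -> "used" now exceeds "total"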

VAAI Offload Failures & the role of the VMKernel Data Mover

Before VMware introduced VAAI (vSphere Storage APIs for Array Integration), migrations of Virtual Machines (and their associated disks) between datastores were done by the VMkernel Data Mover (DM).

The Data Mover aims to keep a continuous queue of outstanding I/O requests to achieve maximum throughput. Incoming requests to the Data Mover are divided up into smaller chunks. Asynchronous I/Os are then issued simultaneously for each chunk until the DM queue depth is filled. When a request completes, the next request is issued, either to write the data that was just read or to handle the next chunk.
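Conceptually, the pipelining looks something like the toy sketch below (illustrative Python only, not ESXi code; read_chunk and write_chunk stand in for the real asynchronous I/O calls):

import asyncio

async def pipelined_copy(read_chunk, write_chunk, num_chunks, queue_depth=32):
    # Keep up to queue_depth chunk operations in flight; as soon as one
    # completes, the next is issued, so the queue of outstanding I/Os
    # stays full and throughput stays high.
    limit = asyncio.Semaphore(queue_depth)

    async def move(index):
        async with limit:
            data = await read_chunk(index)     # read one chunk from the source
            await write_chunk(index, data)     # then write it to the destination

    await asyncio.gather(*(move(i) for i in range(num_chunks)))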

Take the example of a clone of a 64GB VMDK (Virtual Machine Disk file). The DM is asked to move the data in 32MB transfers. Each 32MB transfer is handled as a single delivery, but the DM divides it into much smaller 64KB I/Os, issued in parallel 32 at a time. To transfer the 32MB, the DM issues a total of 512 I/Os of 64KB each.

By comparison, a similar 32MB transfer via VAAI issues a total of 8 I/Os of 4MB each (XCOPY uses 4MB transfer sizes). The advantage of VAAI in terms of ESXi resources is immediately apparent.
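The I/O counts work out as a quick back-of-the-envelope calculation:

KB, MB = 1024, 1024 * 1024

transfer = 32 * MB               # one Data Mover request

dm_io_size = 64 * KB             # software Data Mover I/O size
vaai_io_size = 4 * MB            # XCOPY transfer size used for the offload

print(transfer // dm_io_size)    # 512 I/Os when the VMkernel Data Mover does the copy
print(transfer // vaai_io_size)  # 8 XCOPY commands when offloaded via VAAI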

The decision to transfer using the DM or to offload to the array with VAAI is taken up front by looking at the storage array's Hardware Acceleration state. If we decide to transfer using VAAI and then encounter a failure with the offload, the VMkernel will try to complete the transfer using the VMkernel DM. It should be noted that the operation is not restarted; rather, it picks up from where the previous transfer left off, as we do not want to abandon what could be many gigabytes' worth of copied data because of a single transient transfer error.
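The fallback behaviour can be pictured with a small sketch (purely illustrative; offload_copy and software_copy are hypothetical stand-ins for the real copy engines):

MB = 1024 * 1024

class OffloadError(Exception):
    """Raised by offload_copy() when the array rejects or fails the offload."""

def copy_extent(length, offload_copy, software_copy, hardware_accelerated):
    offset = 0
    use_vaai = hardware_accelerated
    while offset < length:
        chunk = min(32 * MB, length - offset)
        if use_vaai:
            try:
                offload_copy(offset, chunk)    # e.g. issue an XCOPY to the array
                offset += chunk
                continue
            except OffloadError:
                use_vaai = False               # fall back, but keep the current offset
        software_copy(offset, chunk)           # software Data Mover path
        offset += chunk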

If the error is transient, we want the VMkernel to check if it is ok to start offloading once again. In vSphere 4.1, the frequency at which an ESXi host checks to see if Hardware Acceleration is supported on the storage array is defined via the following parameter:

 # esxcfg-advcfg -g /DataMover/HardwareAcceleratedMoveFrequency
Value of HardwareAcceleratedMoveFrequency is 16384

This parameter dictates how often we will retry an offload primitive once a failure is encountered. It can be read as 16384 * 32MB requests, so basically we will check again once every 512GB of data-move requests. This means that if an array does not support the offload primitives at initial deployment, but the array firmware is later upgraded so that the offload primitives become supported, nothing needs to be done on the ESXi side – the host will automatically start to use the offload primitives again.
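The 512GB figure falls straight out of the parameter value:

MB, GB = 1024 ** 2, 1024 ** 3

frequency = 16384             # /DataMover/HardwareAcceleratedMoveFrequency (vSphere 4.1)
dm_request_size = 32 * MB     # each Data Mover request covers 32MB

print((frequency * dm_request_size) // GB)   # 512 -> retry the offload roughly every 512GB moved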

HardwareAcceleratedMoveFrequency only exists in vSphere 4.1. In vSphere 5.0 and later, we replaced it with the periodic VAAI state evaluation every 5 minutes.

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Enabling Password Free SSH Access on ESXi 5.0

Posted on 25 July, 2012 by Kyle Gleed, Sr. Technical Marketing Architect, VMware

I came across this question today: “How do I set up password-free SSH access to my ESXi hosts?” A quick Google search turned up a lot of good info on this topic, but I didn’t find anything on the vSphere blog, so I wanted to chime in with a quick post and hopefully add to what’s already out there.

When people ask “how” to enable password-free SSH, the question I always ask in return is “should” you enable password-free SSH? In most situations I would dare say the answer is probably not. I often find that the decision to enable password-free access is not based on any real requirement, but rather is made for the sake of convenience: admins want easy access to their vSphere hosts. In my opinion, this is a case where security should trump convenience. Having said that, I do realize there are valid situations where SSH access is unavoidable, and depending on the situation it might make sense to enable password-free access. My point here is that just because you can set up password-free SSH doesn’t mean it’s a good idea. Keep in mind, once you enable password-free SSH:

Continue reading

vSphere HA isolation response… which to use when?

By Duncan Epping, Principal Architect.

A while back I wrote this article about a split-brain scenario with vSphere HA. Although we have multiple techniques to mitigate these scenarios, it is always better to prevent them. I had already blogged about this before, but I figured it wouldn’t hurt to get this out again and elaborate on it a bit more.

First some basics…

What is an “Isolation Response”?

The isolation response refers to the action that vSphere HA takes when a host's heartbeat network is isolated. The heartbeat network is usually the management network of an ESXi host. When a host does not receive any heartbeats, it will trigger the response after a certain number of seconds. So when exactly? Well, that depends on whether the host is a slave or a master. This is the timeline:

Isolation of a slave

  • T0 – Isolation of the host (slave)
  • T10s – Slave enters “election state”
  • T25s – Slave elects itself as master
  • T25s – Slave pings “isolation addresses”
  • T30s – Slave declares itself isolated and “triggers” isolation response

Isolation of a master

  • T0 – Isolation of the host (master)
  • T0 – Master pings “isolation addresses”
  • T5s – Master declares itself isolated and “triggers” isolation response

What are my options?

Today there are three options for the isolation response. The response is what the host will do with the virtual machines running on it once it has validated that it is isolated.

  1. Power off – When a network isolation occurs all VMs are powered off. It is a hard stop.
  2. Shut down – When a network isolation occurs all VMs running on that host are shut down via VMware Tools. If this is not successful within 5 minutes a “power off” will be executed.
  3. Leave powered on – When a network isolation occurs on the host the state of the VMs remains unchanged.

Now that we know what the options are, which one should you use? Well, this depends on your environment. Are you using iSCSI/NAS? Do you have a converged network infrastructure? We've listed the most common scenarios below, followed by a small code sketch of the same logic.

  • Datastore access likely, VM network access likely – Leave Powered On. The VM is running fine, so why power it off?
  • Datastore access likely, VM network access unlikely – Either Leave Powered On or Shut Down. Choose Shut Down to allow HA to restart the VMs on hosts that are not isolated and hence are likely to have access to storage.
  • Datastore access unlikely, VM network access likely – Power Off. Use Power Off to avoid having two instances of the same VM on the VM network.
  • Datastore access unlikely, VM network access unlikely – Leave Powered On or Power Off. Leave Powered On if the VM can recover from the network/datastore outage without being restarted because of the isolation; Power Off if it likely can't.
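For those who prefer code to tables, the same recommendations condense to a tiny helper (a sketch of the guidance above, not an official API):

def recommended_isolation_response(datastore_access_likely, vm_network_access_likely):
    if datastore_access_likely and vm_network_access_likely:
        return "Leave Powered On"
    if datastore_access_likely:
        return "Either Leave Powered On or Shut Down"
    if vm_network_access_likely:
        return "Power Off"
    return "Leave Powered On or Power Off"

print(recommended_isolation_response(False, True))   # Power Off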

But why is it important? Well, just imagine you pick “leave powered on” in a converged network environment with iSCSI storage; the chances are fairly high that when the host management network is isolated, so are the virtual machine network and the storage for your virtual machine. In that case, having the virtual machine restarted on another host will reduce the amount of “downtime” from an application/service perspective.

I hope this helps you make the right decision for the vSphere HA isolation response. Although it is just a small part of what vSphere HA does, it is important to understand the impact a wrong decision can have.

Automatically Securing Virtual Machines Using vCenter Orchestrator

Here is another alternative to the approach in my previous blog post: an automated way of hardening newly created Virtual Machines by leveraging an SNMP trap sent from vCenter Server to vCenter Orchestrator to execute a “Secure VM” workflow.

Continue reading

Path failure and related SATP/PSP behaviour

Posted by Cormac Hogan
Technical Marketing Architect (Storage)

This question came up in a recent conversation: what happens in the Pluggable Storage Architecture (PSA) when there is a path failure? The answer comes down to the roles played by both the Storage Array Type Plugin (SATP) and the Path Selection Policy (PSP) when a path fails and I/O errors out. When a virtual machine issues an I/O request to a storage device managed by the NMP (Native Multipath Plugin), the following steps take place (a rough pseudocode sketch follows the list):

  1. First, the NMP calls the PSP assigned to this storage device.
  2. The PSP selects an appropriate physical path on which to send the I/O, load balancing across paths if necessary (e.g. round-robin).
  3. If the I/O operation is successful, the NMP reports its completion.
  4. If the I/O operation reports an error (e.g. because there is a path failure), NMP calls the appropriate SATP to select a new active path for the device.
  5. The SATP interprets the error codes and, when appropriate, activates inactive paths and selects a new active path.
  6. The I/O is retried, and the PSP is once again called to select a new path to send the I/O.
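Here is a rough pseudocode rendering of that flow; the objects and method names are invented purely to illustrate the sequence, they are not VMkernel APIs:

def issue_io(nmp, device, io, max_retries=5):
    for _ in range(max_retries):
        path = nmp.psp_for(device).select_path(device)   # steps 1-2: the PSP picks a path
        result = path.send(io)
        if result.ok:
            return result                                # step 3: NMP reports completion
        # steps 4-5: the SATP interprets the error codes and activates an alternate path
        nmp.satp_for(device).handle_path_error(device, result.error)
        # step 6: loop back so the PSP selects a path again and the I/O is retried
    raise IOError("I/O failed on all available paths")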

And if you'd like to watch a video on this topic, one of my colleagues uploaded to YouTube a short animation that I put together for a training course some years back (before you ask, I am not responsible for the background music).

Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Automatically Securing Virtual Machines Using a vCenter Alarm

In a previous blog post, we demonstrated how you can easily automate the hardening of your Virtual Machines by using a PowerCLI or vSphere SDK for Perl script to apply the latest vSphere 5.0 Security Hardening Guide recommendations. Now, this is great for securing your existing Virtual Machines, but what about new Virtual Machines that are created? Wouldn’t it be neat to have your Virtual Machines automatically secured after they have been created?

Continue reading

Clarification on The Auto Start Issues in vSphere 5.0 Update 1

Posted by Kyle Gleed, Sr. Technical Marketing Architect, VMware on 18 July 2012

VMware recently released a patch on July 12th that, among other things, fixes a virtual machine auto start bug that was introduced with 5.0 Update 1 (5.0U1). This bug affects customers running the free version of ESXi and has received a lot of attention in the communities in recent months. Unfortunately, it took a while to get this fixed, which has led some to speculate that there might be more to this issue than what VMware is letting on. I’d like to provide some clarification around this and help clear up some of the confusion.

Before I talk about the auto start bug affecting the free version of ESXi, I need to point out that the same update (5.0 Update 1) also included an unrelated change to the auto start behavior for licensed ESXi hosts running inside an HA cluster. While this is a separate issue, it’s understandable that there is a bit of confusion around the two, as they both came with 5.0U1 and both affect virtual machine auto start behavior.

Let’s take a closer look at both issues.

Continue reading

ESXi host connected to multiple storage arrays – is it supported?

The primary aim of this post is to state categorically that VMware supports multiple storage arrays presenting targets and LUNs to a single ESXi host. This statement also covers arrays from multiple vendors. We run with this configuration all the time in our labs, and I know many of our customers also have multiple arrays presenting devices to their ESX/ESXi hosts. The issue is that we do not appear to call this out in any of our documentation, although many of our guides and KB articles allude to it.

Some caution must be shown, however.

Continue reading