Home > Blogs > VMware vSphere Blog > Monthly Archives: February 2012

Monthly Archives: February 2012

ESXCLI Partner Extensibility

By William Lam, Sr. Technical Marketing Engineer

Many vSphere administrators are familiar with the ESXCLI command-line utility that helps manage and configure settings on their ESX and ESXi hosts. With the release of vSphere 5.0, ESXCLI now includes a total of 250 commands that span across various namespaces.
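If you are curious what those namespaces look like on your own host, ESXCLI can enumerate its own command set. A quick, hedged example run from the ESXi Shell on an ESXi 5.0 host:

# Print every namespace and command the local ESXCLI build knows about
$ esxcli esxcli command list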

One would expect that VMware can easily extend ESXCLI and create new namespaces to expose VMware platform-specific functionality. An example of this is the vcloud namespace that becomes available when vCloud Director is installed. What you may not know is that ESXCLI was actually built from the ground up on a modular, extensible framework and can just as easily be extended by third-party providers.

Wouldn’t it be cool to see a hardware vendor extend ESXCLI to include commands to help manage and configure their specific hardware?

Well, this is exactly what HP has done. Juan Manuel Rey, who works for HP, recently blogged about several new HP-specific namespaces that are bundled as part of HP's customized ESXi image profile. Note that even if you are not running HP's custom image profile, or if you have an earlier version that does not include the new namespaces, you can still get access to the HP-specific ESXCLI namespaces: simply install the relevant VIBs from HP's online VIB depot using the command-line, or use VMware Image Builder as shown here by Kyle Gleed.
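For reference, this is roughly what an online-depot VIB install looks like from the ESXi Shell. This is only a sketch: the depot URL and VIB name below are placeholders, not HP's actual values, so substitute the depot URL and VIB/bulletin names that HP publishes:

# Install a vendor VIB straight from an online depot (placeholder URL and VIB name)
$ esxcli software vib install --depot=http://vibsdepot.example.com/index.xml --vibname=vendor-esxcli-plugin

# Alternatively, copy the offline bundle to a datastore and install from there
$ esxcli software vib install --depot=/vmfs/volumes/datastore1/vendor-offline-bundle.zip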

Here is a screenshot of the HP namespaces using the local ESXCLI in the ESXi Shell:
[Screenshot: HP namespaces listed by the local ESXCLI in the ESXi Shell]

Another neat thing about integrating with ESXCLI is that you not only get access to the vendor-specific commands through the local ESXCLI utility in the ESXi Shell, you also automatically get a remote command-line version for free through the remote ESXCLI utility that is part of vCLI/vMA. This gives you centralized management and configuration of your ESX(i) hosts while leveraging the capabilities provided by your vendor.

Here is a screenshot of the HP namespaces using the remote ESXCLI command:
[Screenshot: HP namespaces listed by the remote ESXCLI command]

Note: The remote ESXCLI requires additional parameters such as the ESX(i) host, username and password. You also have the option of authenticating against vCenter if the ESX(i) host is being managed by a vCenter Server.
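As a rough illustration (the host names and account below are placeholders), the same remote ESXCLI command can either be pointed directly at a host or routed through vCenter using the --vihost option:

# Talk to the ESXi host directly
$ esxcli --server esxi01.example.com --username root system version get

# Or authenticate against vCenter and target one of its managed hosts
$ esxcli --server vcenter.example.com --vihost esxi01.example.com --username administrator system version get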

As you can see, the ESXCLI extensibility framework not only benefits VMware but can also benefit other vendor solutions that integrate with VMware. If you are a customer who would like to see this type of integration from other vendors, be sure to let them know about the extensibility of ESXCLI in vSphere 5.0 and how they can seamlessly integrate their tools with VMware to make life a lot easier for the vSphere administrator. If there are other vendors who have similar capabilities and have integrated with ESXCLI, I would love to hear about it.

UPDATE:

Brocade – ESXCLI plug-in (BCU) support

LSI – ESXCLI plug-in (MegaCLI) support

Get notification of new blog postings and more by following lamw on Twitter:  @lamw

Understanding ESXi Patches – Finding Patches

Kyle Gleed, Sr. Technical Product Manager, VMware

I recently met with a customer who was confused about patching ESXi hosts.  Not only did she have questions about where to find patches, she was confused about what to do with them once she finally had them.  I know she’s not alone so I figured a refresher on ESXi patching would be helpful.

Of course the easiest way to manage ESXi host patches is with Update Manager, and for most of us this simply entails letting Update Manager automatically download patches as they become available and then scheduling a time to remediate the hosts.  However, there are situations where Update Manager may not be allowed access to the Internet to download patches automatically.  In addition, there are some who, for whatever reason, either cannot or choose not to use Update Manager.  For these folks patching is still very easy, although a little bit more involved.

Probably the easiest way to get a list of available patches is from VMware’s online patch portal at http://www.vmware.com/patchmgr/download.portal.  From the patch portal you simply select the architecture (ESX or ESXi), specify your version, and then click the search button.  The screen shot below shows my query to get all the ESXi 5.0 patches.

[Screenshot: patch portal query for all ESXi 5.0 patches]

The search will return a list showing all the available ESXi 5.0 patches.  For each patch you will see the name, size, and download information on the left and a list of all the updates included in the patch on the right.  Note that for each fix there is also a link to a related KB article where you can get more information about a specific fix or update.

[Screenshot: search results listing the available ESXi 5.0 patches]

To download a patch simply select it by clicking the checkbox next to the patch name and click the "Download Now" button.  You can download a single patch or multiple patches.  Each patch will be saved as a separate .zip file.  Once you've downloaded the patches you have a few options on how to install them.  Again, probably the easiest way to install patches is using Update Manager, but you can also use the ESXCLI command or PowerCLI.  In addition, you can use the Image Builder CLI to add the patch to your installation ISO so that it will automatically be included when you install new ESXi hosts.  Stay tuned as over the next few days I'll be posting the steps for each of these options…

Follow me on twitter @VMwareESXi

Quickest Way to Patch an ESX/ESXi Using the Command-line

By William Lam, Sr. Technical Marketing Engineer

As you know, when it comes to automating patch management for your vSphere infrastructure, we highly recommend leveraging vSphere Update Manager (VUM), which is part of the vCenter Server suite, to help simplify the update process. However, not all environments have the luxury of running vCenter Server to manage their ESX(i) hosts. Examples include one or two hosts running at a ROBO (remote office/branch office) site, or a single test/dev host in a home or office lab where VUM is not available.

It is still possible to patch/upgrade your ESX(i) host using the command-line without VUM, but you will have to manually identify the patch dependencies and ensure host compliance.

Depending on the version of ESX or ESXi you are running, you have several options that include local and/or remote command-line utilities, available in the following four forms:

  • ESX Service Console
    • esxupdate – Local utility found on classic ESX hosts to manage/install patches
  • ESXi Shell
    • ESXCLI – Local utility found on ESXi 5.0 hosts that can be used to manage/install patches
  • vCLI (Windows/Linux or use vMA)
    • vihostupdate35 – Remote utility to manage/install patches for ESXi 3.5
    • vihostupdate – Remote utility to manage/install patches for ESX(i) 4.0 & 4.1
    • ESXCLI – Remote utility to manage/install patches for ESXi 5.0 (patch capability introduced in vSphere 5 for ESXi 5.0 hosts only)
  • PowerCLI (Windows)
    • Install-VMHostPatch – Remote cmdlet using PowerCLI to manage/install patches for ESX(i) 4.0 and 4.1

Note: If you are using vSphere Hypervisor (free ESXi), you will not be able to leverage any of the remote CLIs, but you can still use the local CLI.

Here is a table summarizing all available command-line options based on the version of ESX(i) you are running:

Hypervisor Version | Local Command | vCLI Remote Command | PowerCLI Remote Command
ESX 3.5 | esxupdate --bundle=<zip> update | N/A | N/A
ESXi 3.5 | N/A | vihostupdate35 --bundle=<zip> --install | N/A
ESX 4.0 | esxupdate --bundle=<zip> update | vihostupdate --bundle=<zip> --install | Install-VMHostPatch
ESXi 4.0 | N/A | vihostupdate --bundle=<zip> --install | Install-VMHostPatch
ESX 4.1 | esxupdate --bundle=<zip> update | vihostupdate --bundle=<zip> --install | Install-VMHostPatch
ESXi 4.1 | N/A | vihostupdate --bundle=<zip> --install | Install-VMHostPatch
ESXi 5.0 | esxcli software vib update --depot=/vmfs/volumes/[datastore]/<zip> | esxcli software vib update --depot=/vmfs/volumes/[datastore]/<zip> | Install-VMHostPatch, or Get-EsxCli with the local command referenced in this table

Note: When you download patches from VMware, there is an associated VMware KB article that provides a link to the patch management documentation. You should always refer to that documentation for more details and information about the different methods of applying a patch.

Here is an example of using esxupdate on a classic ESX host. The patch bundle needs to be uploaded to the ESX host using scp or WinSCP, and you then specify the full path on the command-line:

$ esxupdate --bundle=ESX400-200907001.zip update

Here is an example of using the remote vihostupdate utility for an ESXi host. You will need to specify the ESXi host using the --server parameter and --username/--password for remote authentication. You may choose to leave off --password, in which case you will be prompted for your credentials. The patch bundle does not need to be uploaded to the ESXi host; it can reside on the system running the vihostupdate command, and during execution the patch bundle will automatically be transferred to the host:

$ vihostupdate --server [ESXI-FQDN] --username [USERNAME] --bundle=ESXi410-201011001.zip --install

Here is an example of using the local esxcli utility on an ESXi 5.0 host. The patch bundle needs to be uploaded to the ESXi host using scp or WinSCP, and you then specify the full path on the command-line:

$ esxcli software vib update --depot=/vmfs/volumes/datastore1/ESXi500-201112001.zip

Here is an example of using the remote esxcli utility for an ESXi 5 host. You will need to specify the ESXi host using the --server parameter and --username/--password for remote authentication. You may choose to leave off --password, in which case you will be prompted for your credentials. The patch bundle needs to be uploaded to the ESXi host using scp/WinSCP or vCLI's vifs utility, and you then specify the full path on the command-line:

$ vifs --server [ESXI-FQDN] --username [USERNAME] -p ESXi500-201112001.zip "[datastore1] ESXi500-201112001.zip"
$ esxcli --server [ESXI-FQDN] --username [USERNAME] software vib update --depot=/vmfs/volumes/datastore1/ESXi500-201112001.zip

Note: In ESXi 5, --depot only supports a path that is local to the host or a remote URL. The latter helps centralize the location of your patches and reduce manual transfers, which is why you need to transfer the patch to the host if you do not have a patch depot.
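If you do maintain a central patch depot, pointing --depot at a URL avoids the upload step entirely. A hedged sketch with a placeholder depot URL (the index file name may differ in your environment):

# Update directly from a web-hosted depot instead of a locally uploaded zip
$ esxcli software vib update --depot=http://patch-depot.example.com/esxi/index.xml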

Here is an example of using the Install-VMHostPatch cmdlet for an ESXi host:

# Place the host in maintenance mode
Get-VMHost ESXI-FQDN | Set-VMHost -State Maintenance
# Locate the target datastore and copy the extracted patch bundle to it
$DS = Get-VMHost ESXI-FQDN | Get-Datastore datastore1
Copy-DatastoreItem C:\tmp\ESXi500-201112001\ $DS.DatastoreBrowserPath -Recurse
# Apply the patch from the copy on the datastore
Get-VMHost ESXI-FQDN | Install-VMHostPatch -HostPath "/vmfs/volumes/datastore1/ESXi500-201112001/metadata.zip"

Note: The Install-VMHostPatch cmdlet does have a -LocalPath parameter that lets you specify a local path to the patch. For larger files it is recommended that you use the Copy-DatastoreItem cmdlet to upload the file to a datastore on the host and then use the -HostPath parameter, as shown in the example above.

As you can see, over the releases we have had several methods of patching a host from the command-line, both locally and remotely, and it has not always been intuitive. With the convergence to ESXi only in vSphere 5.0, patching from the command-line has also converged on a single utility, ESXCLI, and a common patch format called a VIB. ESXCLI was first introduced in vSphere 4.0 with limited capabilities; in vSphere 5.0 it has been significantly enhanced and now supports patching as one of its many capabilities. The syntax and expected output are exactly the same whether you execute ESXCLI locally or remotely against an ESXi host, with the exception of the remote authentication required for remote execution. This should provide a better and more consistent user experience going forward.

An alternative method to patching from the command-line if you do not have VUM is using VMware Go, which is an online service (SaaS) provided by VMware. VMware Go can help manage your ESXi host but it also provides a patching capability similar to that of VUM.

Get notification of new blog postings and more by following lamw on Twitter:  @lamw


Storage Protocol Comparison – A vSphere Perspective

On many occasions I’ve been asked for an opinion on the best storage protocol to use with vSphere. And my response is normally something along the lines of ‘VMware supports many storage protocols, with no preferences really given to any one protocol over another’. To which the reply is usually ‘well, that doesn’t really help me make a decision on which protocol to choose, does it?’

And that is true – my response doesn't really help customers make a decision on which protocol to choose. To that end, I've decided to put together a storage protocol comparison on this topic. It looks at the protocols purely from a vSphere perspective; I've deliberately avoided performance, for two reasons:

  1.  We have another team in VMware who already does this sort of thing.
  2.  Storage protocol performance can be very different depending on who the storage array vendor is, so it doesn't make sense to compare iSCSI & NFS from one vendor when another vendor might have a much better implementation of one of the protocols.

If you are interested in performance, there are links to a few performance comparison docs included at the end of the post.

Hope you find it useful.

vSphere Storage Protocol Comparison Guide

 

The comparison covers four protocols: iSCSI, NFS, Fiber Channel (FC) and Fiber Channel over Ethernet (FCoE).

Description

iSCSI: iSCSI presents block devices to an ESXi host. Rather than accessing blocks from a local disk, the I/O operations are carried out over a network using a block access protocol. In the case of iSCSI, remote blocks are accessed by encapsulating SCSI commands & data into TCP/IP packets. Support for iSCSI was introduced in ESX 3.0 back in 2006.

NFS: NFS (Network File System) presents file devices over a network to an ESXi host for mounting. The NFS server/array makes its local filesystems available to ESXi hosts. The ESXi hosts access the metadata and files on the NFS array/server using an RPC-based protocol. VMware currently implements NFS version 3 over TCP/IP. VMware introduced support for NFS in ESX 3.0 in 2006.

Fiber Channel: Fiber Channel presents block devices like iSCSI. Again, the I/O operations are carried out over a network using a block access protocol. In FC, remote blocks are accessed by encapsulating SCSI commands & data into fiber channel frames. One tends to see FC deployed in the majority of mission critical environments. FC is the only one of these four protocols that has been supported on ESX since the beginning.

FCoE: Fiber Channel over Ethernet also presents block devices, with I/O operations carried out over a network using a block access protocol. In this protocol, the SCSI commands and data are encapsulated into Ethernet frames. FCoE has many of the same characteristics as FC, except that the transport is Ethernet. VMware introduced support for hardware FCoE in vSphere 4.x and software FCoE in vSphere 5.0, back in 2011.

Implementation Options

iSCSI: (1) a NIC with iSCSI capabilities using the software iSCSI initiator, accessed using a VMkernel (vmknic) port; or (2) a dependent hardware iSCSI initiator; or (3) an independent hardware iSCSI initiator.

NFS: A standard NIC accessed using a VMkernel port (vmknic).

Fiber Channel: Requires a dedicated Host Bus Adapter (HBA), typically two for redundancy & multipathing.

FCoE: (1) a hardware Converged Network Adapter (CNA); or (2) a NIC with FCoE capabilities using the software FCoE initiator.

Speed/Performance considerations

iSCSI: iSCSI can run over a 1Gb or a 10Gb TCP/IP network. Multiple connections can be multiplexed into a single session, established between the initiator and target. VMware supports jumbo frames for iSCSI traffic, which can improve performance; jumbo frames send payloads larger than 1,500 bytes. Support for jumbo frames with IP storage was introduced in ESX 4, but not on all initiators (KB 1007654 & KB 1009473). iSCSI can introduce overhead on a host's CPU (encapsulating SCSI data into TCP/IP packets).

NFS: NFS can run over 1Gb or 10Gb TCP/IP. NFS also supports UDP, but VMware's implementation does not and requires TCP. VMware supports jumbo frames for NFS traffic, which can improve performance in certain situations. Support for jumbo frames with IP storage was introduced in ESX 4. NFS can introduce overhead on a host's CPU (encapsulating file I/O into TCP/IP packets).

Fiber Channel: Fiber Channel can run at 1Gb/2Gb/4Gb/8Gb & 16Gb, but 16Gb HBAs must be throttled to run at 8Gb in vSphere 5.0. Buffer-to-buffer and end-to-end credits throttle throughput to ensure a lossless network. This protocol typically affects a host's CPU the least, as the HBAs (required for FC) handle most of the processing (encapsulation of SCSI data into FC frames).

FCoE: This protocol requires 10Gb Ethernet. The point to note with FCoE is that there is no IP encapsulation of the data as there is with NFS & iSCSI, which reduces some of the overhead/latency; FCoE is SCSI over Ethernet, not over IP. This protocol also requires jumbo frames, since FC payloads are 2.2K in size and cannot be fragmented.
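As an aside, on an ESXi 5.0 host the jumbo frame MTU for IP storage traffic can be set from the command line. A minimal sketch; vSwitch0 and vmk1 are placeholder names, and the physical switches and array ports must be configured for the same MTU:

# Raise the MTU on the standard vSwitch carrying the storage VMkernel port
$ esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Raise the MTU on the VMkernel interface used for iSCSI or NFS traffic
$ esxcli network ip interface set --interface-name=vmk1 --mtu=9000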

 


 


 


Load Balancing

iSCSI: VMware's Pluggable Storage Architecture (PSA) provides a Round Robin Path Selection Policy which will distribute load across multiple paths to an iSCSI target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.

NFS: There is no load balancing per se in the current implementation of NFS, as there is only a single session. Aggregate bandwidth can be configured by creating multiple paths to the NAS array, and accessing some datastores via one path and other datastores via another.

Fiber Channel: The same PSA Round Robin Path Selection Policy will distribute load across multiple paths to an FC target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.

FCoE: The same PSA Round Robin Path Selection Policy will distribute load across multiple paths to an FCoE target. Better distribution of load with PSP_RR is achieved when multiple LUNs are accessed concurrently.
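For reference, the path selection policy on a block device can be switched to Round Robin from ESXCLI on ESXi 5.0. A hedged sketch; the NAA identifier below is a placeholder for your own device:

# Show each device and its current path selection policy
$ esxcli storage nmp device list

# Switch a specific device to the Round Robin policy
$ esxcli storage nmp device set --device naa.60000000000000000000000000000001 --psp VMW_PSP_RR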

Resilience

iSCSI: VMware's PSA implements failover via its Storage Array Type Plugin (SATP) for all supported iSCSI arrays. The preferred way to do this for software iSCSI is with iSCSI port binding implemented, but it can also be achieved by adding multiple targets on different subnets mapped to the iSCSI initiator.

NFS: NIC teaming can be configured so that if one interface fails, another can take its place. However, this relies on a network failure and may not be able to handle error conditions occurring on the NFS array/server side.

Fiber Channel: VMware's PSA implements failover via its Storage Array Type Plugin (SATP) for all supported FC arrays.

FCoE: VMware's PSA implements failover via its Storage Array Type Plugin (SATP) for all supported FCoE arrays.

Error checking

iSCSI: iSCSI uses TCP, which resends dropped packets.

NFS: NFS uses TCP, which resends dropped packets.

Fiber Channel: Fiber Channel is implemented as a lossless network. This is achieved by throttling throughput at times of congestion using buffer-to-buffer (B2B) and end-to-end (E2E) credits.

FCoE: Fiber Channel over Ethernet requires a lossless network. This is achieved by the implementation of a pause frame mechanism at times of congestion.

Security

iSCSI: iSCSI implements the Challenge Handshake Authentication Protocol (CHAP) to ensure initiators and targets trust each other. VLANs or private networks are highly recommended to isolate the iSCSI traffic from other traffic types.

NFS: VLANs or private networks are highly recommended to isolate the NFS traffic from other traffic types.

Fiber Channel: Some FC switches support the concept of a VSAN to isolate parts of the storage infrastructure. VSANs are conceptually similar to VLANs. Zoning between hosts and FC targets also offers a degree of isolation.

FCoE: Some FCoE switches support the concept of a VSAN to isolate parts of the storage infrastructure. Zoning between hosts and FCoE targets also offers a degree of isolation.

VAAI Primitives

iSCSI, Fiber Channel & FCoE: Although VAAI primitives may differ from array to array, block devices can benefit from the full complement of block primitives: Atomic Test & Set, Full Copy, Block Zero, Thin Provisioning and UNMAP. These primitives are built into ESXi and require no additional software to be installed on the host.

NFS: Again, these vary from array to array. The VAAI primitives available on NFS devices are: Full Copy (but not with Storage vMotion, only with cold migration), Pre-allocate Space (WRITE_ZEROs) and Clone Offload using native snapshots. Note that VAAI NAS requires a plug-in from the storage array vendor.

ESXi Boot from SAN

iSCSI: Yes
NFS: No
Fiber Channel: Yes
FCoE: SW FCoE – No; HW FCoE (CNA) – Yes

RDM Support

iSCSI: Yes
NFS: No
Fiber Channel: Yes
FCoE: Yes

Maximum Device Size

iSCSI: 64TB
NFS: Refer to the NAS array or NAS server vendor for the maximum supported datastore size. The theoretical size is much larger than 64TB, but it requires the NAS vendor to support it.
Fiber Channel: 64TB
FCoE: 64TB

Maximum number of devices

iSCSI: 256
NFS: Default 8, maximum 256
Fiber Channel: 256
FCoE: 256

Protocol direct to VM

iSCSI: Yes, via an in-guest iSCSI initiator.
NFS: Yes, via an in-guest NFS client.
Fiber Channel: No, but FC devices can be mapped directly to the VM with NPIV. This still requires an RDM mapping to the VM first, and the hardware (switch and HBA) must support NPIV.
FCoE: No

Storage vMotion Support

Yes, for all four protocols.

Storage DRS Support

Yes, for all four protocols.

Storage I/O Control Support

iSCSI: Yes, since vSphere 4.1
NFS: Yes, since vSphere 5.0
Fiber Channel: Yes, since vSphere 4.1
FCoE: Yes, since vSphere 4.1

Virtualized MSCS Support

iSCSI: No. VMware does not support MSCS nodes built on VMs residing on iSCSI storage. However, the use of software iSCSI initiators within guest operating systems configured with MSCS, in any configuration supported by Microsoft, is transparent to ESXi hosts and there is no need for explicit support statements from VMware.

NFS: No. VMware does not support MSCS nodes built on VMs residing on NFS storage.

Fiber Channel: Yes. VMware supports MSCS nodes built on VMs residing on FC storage.

FCoE: No. VMware does not support MSCS nodes built on VMs residing on FCoE storage.

Ease of configuration

iSCSI: Medium – setting up the iSCSI initiator requires a little know-how; you simply need the FQDN or IP address of the target. Some configuration for initiator mapping and LUN presentation is needed on the array side. Once the target is discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

NFS: Easy – you just need the IP or FQDN of the target, and the mount point. The datastore appears immediately once the host has been granted access on the NFS array/server side.

Fiber Channel: Difficult – involves zoning at the FC switch level, and LUN masking at the array level once the zoning is complete. More complex to configure than IP storage. Once the target is discovered through a scan of the SAN, LUNs are available for datastores or RDMs.

FCoE: Difficult – involves zoning at the FCoE switch level, and LUN masking at the array level once the zoning is complete. More complex to configure than IP storage. Once the target is discovered through a scan of the SAN, LUNs are available for datastores or RDMs.
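To illustrate the NFS case, mounting an export from the ESXi 5.0 command line is a single command. A sketch with placeholder server, export and datastore names:

# Mount an NFS export as a datastore (all names are placeholders)
$ esxcli storage nfs add --host=nas01.example.com --share=/vol/datastore1 --volume-name=nfs-ds1

# Confirm the mount
$ esxcli storage nfs list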

Advantages

iSCSI: No additional hardware is necessary – you can use existing networking hardware components and the iSCSI driver from VMware, so it is cheap to implement. A well known and well understood protocol, and quite mature at this stage. Admins with network skills should be able to implement it, and it can be troubleshot with generic network tools such as Wireshark.

NFS: No additional hardware is necessary – you can use existing networking hardware components, so it is cheap to implement. A well known and well understood protocol, and also very mature. Admins with network skills should be able to implement it, and it can be troubleshot with generic network tools such as Wireshark.

Fiber Channel: A well known and well understood protocol. Very mature and trusted, and found in the majority of mission critical environments.

FCoE: Enables converged networking, allowing the consolidation of network and storage traffic onto the same network via a CNA (converged network adapter). Using DCBX (Data Center Bridging Exchange protocol), FCoE has been made lossless even though it runs over Ethernet. DCBX does other things, such as enabling different traffic classes to run on the same network, but that is beyond the scope of this discussion.

Disadvantages

iSCSI: Cannot be routed when iSCSI port binding is implemented. Possible security issues, as there is no built-in encryption, so care must be taken to isolate traffic (e.g. VLANs). Software iSCSI can cause additional CPU overhead on the ESX host. TCP can introduce latency for iSCSI.

NFS: Since there is only a single session per connection, configuring for maximum bandwidth across multiple paths needs some care and attention. No PSA multipathing. The same security concerns as iSCSI, since everything is transferred in clear text, so care must be taken to isolate traffic (e.g. VLANs). VMware's NFS support is still version 3, which does not have the multipathing or security features of NFS v4 or NFS v4.1. NFS can cause additional CPU overhead on the ESX host. TCP can introduce latency for NFS.

Fiber Channel: Still only runs at 8Gb, which is slower than other networks (16Gb HBAs are throttled to run at 8Gb in vSphere 5.0). Needs dedicated HBAs, FC switches and an FC-capable storage array, which makes an FC implementation rather more expensive. Additional management overhead (e.g. switch zoning) is needed. Could prove harder to troubleshoot compared to other protocols.

FCoE: Rather new, and not quite as mature as the other protocols at this time. Requires a 10Gb lossless network infrastructure, which can be expensive. Cannot route between initiator and targets using native IP routing – instead it has to use protocols such as FIP (FCoE Initialization Protocol). Could prove complex to troubleshoot/isolate issues, with network and storage traffic using the same pipe.


Note 1 – I've deliberately skipped AoE (ATA-over-Ethernet) as we have not yet seen significant take-up of this protocol at this time. Should this protocol gain more exposure, I'll revisit this article.

Note 2 – As I mentioned earlier, I’ve deliberately avoided getting into a performance comparison. This has been covered in other papers. Here are some VMware whitepapers which cover storage performance comparison:

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: Twitter @VMwareStorage

Answering some Storage vMotion questions

By Duncan Epping, Principal Architect, VMware.

I received some questions about Storage vMotion a while back. I think these are interesting enough to share with the rest of the world. I want to thank the SvMotion engineering team for the quick response.

Q) Suppose that a SvMotion is ongoing and the host on which the SvMotion is running crashes. The SvMotion is interrupted, but will the copy of the files (on the destination datastore) be cleaned up? And if so, who initiates this?

A) vCenter Server will wait until the crashed host reconnects. When it reconnects it will work with the host to issue a migration cleanup. 

Q) What about the scenario where the SvMotion is being offloaded to a VAAI-enabled array? The array is unaware of the failure of the host, so the copy of the files is not interrupted. Will another host pick the SvMotion up?

A) VAAI-enabled copy isn't particularly special – we just offload individual datamover extent copy requests to the array, currently 2MB chunks at a time for SvMotion, off the top of my head.  From the array's perspective, it is just moving around blocks that we would otherwise be handling via software copy.  It doesn't know that it's moving a full VMDK, or the like.  Even though we are offloading the work of our block copies, we still run through all the normal SvMotion logic.  As such, when the source host crashes, the array will stop receiving additional 2MB copy requests from the host.  When the host reconnects, we'll proceed with our normal cleanup logic.

 

Technical Marketing Update 2012 – Week 08

By Duncan Epping, Principal Architect, VMware.


Many of us were still recovering this week and digesting all the discussions we had with our partners at PEX. We are also finalizing several white papers which hopefully will be posted this week. I will inform you when these are published. 

Blog posts:

 

Some interesting storage related white papers you might have missed

A short post simply to highlight some white papers published recently which you may have missed:

  1. vSphere Storage Appliance Technical Deep Dive
    A deep dive on the vSphere Storage Appliance, covering networking, storage and clustering architecture.
  2. VMFS-5 Upgrade Considerations
    A look at the differences between a VMFS-3 upgraded to VMFS-5 vs. a newly created VMFS-5 volume.
  3. VMware vSphere Distributed Switch Best Practices
    An excellent look at how to get the most out of your vDS by Venky. If you use IP Storage, and are considering vDS or already using vDS, this is a must read.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: Twitter @VMwareStorage

 

Uniquely Identifying Virtual Machines in vSphere and vCloud Part 2: Technical

By William Lam, Sr. Technical Marketing Engineer

In Part 1 of this article, I provided an overview of how to uniquely identify a virtual machine in both a vSphere vCenter and a vCloud Director environment. In this article, we will look at an example environment and show you how to extract the information needed to uniquely identify a VM using both the vSphere API and the vCloud API.

vSphere

We will begin by just looking at a vSphere environment (no vCloud Director). In the screenshot below, we have two different vCenter Servers (westcoast-vcenter and eastcoast-vcenter) and they are managing a single ESXi host each with a single VM.

If you recall from Part 1, to uniquely identify a VM in vSphere, we can use either the MoRef ID or the instanceUUID. Using the following vSphere SDK for Perl script, vmMoRefFinder.pl (a modified version of this script), we will take a look at these properties. To run the script you will need to have VMware vCLI installed on either a Windows or Linux system, or you can use the VMware vMA appliance.

If you prefer to use PowerCLI then this script will perform similar actions to the Perl script.

Here is the Perl script running against the westcoast-vcenter:
$ ./vmMoRefFinder.pl --server westcoast-vcenter --username root --name MyVM1

Name: MyVM1
VM MoRef: vm-14
VM InstanceUUID: 501b8bde-95da-472a-4d2d-9d97ea394bbc
vCenter Name: westcoast-vcenter
vCenter InstanceUUID: 9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A

Here is the Perl script running against the eastcoast-vcenter:
$ ./vmMoRefFinder.pl --server eastcoast-vcenter --username root --name MyVM2

Name: MyVM2
VM MoRef: vm-14
VM InstanceUUID: 502fb1ca-e9b8-82ae-3f9f-4a3ba85f081d
vCenter Name: eastcoast-vcenter
vCenter InstanceUUID: 63D30391-44E2-447E-A709-9DD1241C3DCC

Here is the PowerCLI script running when connected to both vCenter Servers:

Name                     : MyVM1
VM MoRef             : vm-14
VM InstanceUUID      : 501b8bde-95da-472a-4d2d-9d97ea394bbc
vCenter Name         : westcoast-vcenter
vCenter InstanceUUID : 9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A

Name                     : MyVM2
VM MoRef             : vm-14
VM InstanceUUID      : 502fb1ca-e9b8-82ae-3f9f-4a3ba85f081d
vCenter Name         : eastcoast-vcenter
vCenter InstanceUUID : 63D30391-44E2-447E-A709-9DD1241C3DCC

Here is the same data in a table format for the two vCenter Servers:

vCenter Server | vCenter InstanceUUID | VM MoRef | VM InstanceUUID
westcoast-vcenter | 9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A | vm-14 | 501b8bde-95da-472a-4d2d-9d97ea394bbc
eastcoast-vcenter | 63D30391-44E2-447E-A709-9DD1241C3DCC | vm-14 | 502fb1ca-e9b8-82ae-3f9f-4a3ba85f081d

Do you notice something interesting about the MoRef IDs for the two different VMs hosted and managed by two different vCenter Servers? Yes, they are the same. But why is that, if we said the MoRef ID is a unique identifier? As mentioned in Part 1, a MoRef ID is guaranteed to be unique, but only within the same vCenter Server. There is a possibility of seeing duplicate MoRef IDs, as shown in the example above, and there is also a possibility of seeing duplicate instanceUUIDs for VMs as well.

To uniquely identify a VM across vCenter Servers, you should use the combination of the unique identifier for a vCenter Server, which is its instanceUUID, with either the VM's MoRef ID or the VM's instanceUUID (e.g. 9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A-vm-14).

vCloud

Let's say we are now interested in running vCloud Director and decide to import our existing VMs into vCloud Director. You have two import options: copy or move. "Copy" will create a duplicate VM in vCloud Director, while "move" will just move the same VM into the proper resource pool and VM folder managed by vCloud Director. If you want to preserve the same instanceUUID and MoRef ID of the VM as you go from vSphere vCenter to vCloud Director, make sure you perform the move operation.

Here is a screenshot of our two vCenter Servers now in a vCloud Director configuration

If you recall from Part 1, to uniquely identify a VM in vCloud Director, we need to use the href property. We will be using the curl utility, which is normally installed on most UNIX/Linux systems, to interact with the vCloud API. You can also use other REST tools, such as the RESTClient Firefox plugin, if you do not want to use curl. You will need an account with the "System Administrator" role, which is specific to a group that only a vCloud administrator (provider side) should use.

First we need to log in to vCloud Director to retrieve the authorization token that will be used throughout this example. Here we are connecting to a vCloud Director 1.5 instance, so we set the Accept header appropriately (using the -H option) based on the version of vCloud Director being used. The username/password is provided in the form 'username@org:password' using the -u option, and this is a POST operation. Ensure you substitute the URL of your own vCloud Director server. To get more information about the options used below, refer to the curl documentation or use man -k curl:

$ curl -i -k -H "Accept:application/*+xml;version=1.5" -u 'administrator@system:vmware1!' -X POST https://vcd/api/sessions

HTTP/1.1 200 OK
Date: Wed, 08 Feb 2012 23:26:12 GMT
x-vcloud-authorization: q7uX9eIAWQ0FegrrskkvTOPXiAo31wESTJ6ah5FRyE0=
Set-Cookie: vcloud-token=q7uX9eIAWQ0FegrrskkvTOPXiAo31wESTJ6ah5FRyE0=; Secure; Path=/
Content-Type: application/vnd.vmware.vcloud.session+xml;version=1.5
Date: Wed, 08 Feb 2012 23:26:12 GMT
Content-Length: 910

<?xml version=”1.0″ encoding=”UTF-8″?>
<Session xmlns=”http://www.vmware.com/vcloud/v1.5″ user=”administrator” org=”System” type=”application/vnd.vmware.vcloud.session+xml” href=”https://vcd/api/session/” xmlns:xsi=”http://www.w3.org/2001/XMLSchema-instance” xsi:schemaLocation=”http://www.vmware.com/vcloud/v1.5 http://10.20.181.101/api/v1.5/schema/master.xsd”>
    <Link rel=”down” type=”application/vnd.vmware.vcloud.orgList+xml” href=”https://vcd/api/org/”/>
    <Link rel=”down” type=”application/vnd.vmware.admin.vcloud+xml” href=”https://vcd/api/admin/”/>
    <Link rel=”down” type=”application/vnd.vmware.admin.vmwExtension+xml” href=”https://vcd/api/admin/extension”/>
    <Link rel=”down” type=”application/vnd.vmware.vcloud.query.queryList+xml” href=”https://vcd/api/query”/>
    <Link rel=”entityResolver” type=”application/vnd.vmware.vcloud.entity+xml” href=”https://vcd/api/entity/”/>
</Session>

If you successfully logged in, you will get an HTTP return code of 200 and the vCloud authorization token shown above in the x-vcloud-authorization header. Make sure you verify the version of the vCloud API by looking at the schemaLocation attribute, also shown above.

Next, to find our two VMs in vCloud Director, we will leverage the new Query API. Since we only have two VMs, we can just search for all “adminVM” without the need of filtering. If you have more VMs, you may want to craft a filter and you can find more details in the Query API documentation.
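If you do need to narrow the results, a filtered query would look something like the following. This is a hedged example that assumes the Query API's filter expression syntax (name==<value>) and reuses the authorization token from the login step:

$ curl -i -k -H "Accept:application/*+xml;version=1.5" -H "x-vcloud-authorization: q7uX9eIAWQ0FegrrskkvTOPXiAo31wESTJ6ah5FRyE0=" -X GET "https://vcd/api/query?type=adminVM&filter=name==MyVM1"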

Using the authorization token returned from the previous command, we now pass that instead of the username/password, and this will be a GET operation on the URL listed below:

$ curl -i -k -H "Accept:application/*+xml;version=1.5" -H "x-vcloud-authorization: q7uX9eIAWQ0FegrrskkvTOPXiAo31wESTJ6ah5FRyE0=" -X GET https://vcd/api/query?type=adminVM

HTTP/1.1 200 OK
Date: Wed, 08 Feb 2012 23:27:20 GMT
Content-Type: */*;version=1.5
Date: Wed, 08 Feb 2012 23:27:20 GMT
Content-Length: 2428

……..
    <AdminVMRecord vmToolsVersion=”0″ vdc=”https://vcd/api/vdc/9f913a23-3d6c-44bf-95a4-8c47563fafa7″ vc=”https://vcd/api/admin/extension/vimServer/337a4193-de37-43c0-90ab-08d082766c96″ status=”POWERED_OFF” org=”https://vcd/api/org/0d020a29-90de-4000-a0ec-c8a630e65c61″ numberOfCpus=”1″ networkName=”VM Network” name=”MyVM1″ moref=”vm-14″ memoryMB=”256″ isVdcEnabled=”true” isVAppTemplate=”false” isPublished=”false” isDeployed=”false” isDeleted=”false” hostName=”10.20.182.92″ hardwareVersion=”8″ guestOs=”Other Linux (32-bit)” datastoreName=”vesxi50-1-local-storage” containerName=”MyVM1″ container=”https://vcd/api/vApp/vapp-0f5bdc4c-a0ec-4e6e-a8a0-cdec1bf89de0″ href=”https://vcd/api/vApp/vm-51201e8b-f910-40ab-b2b1-e5365a72f353″ pvdcHighestSupportedHardwareVersion=”8″ containerStatus=”RESOLVED”/>
    <AdminVMRecord vmToolsVersion=”0″ vdc=”https://vcd/api/vdc/c7022e00-dbf3-458c-b118-2cfee5250218″ vc=”https://vcd/api/admin/extension/vimServer/ebcad5a3-4356-4059-8d56-16c52c23dc1a” status=”POWERED_OFF” org=”https://vcd/api/org/ecd25f40-62d7-4009-9924-25db4df27774″ numberOfCpus=”1″ networkName=”VM Network” name=”MyVM2″ moref=”vm-14″ memoryMB=”256″ isVdcEnabled=”true” isVAppTemplate=”false” isPublished=”false” isDeployed=”false” isDeleted=”false” hostName=”10.20.182.95″ hardwareVersion=”8″ guestOs=”Other Linux (32-bit)” datastoreName=”vesxi50-2-local-storage” containerName=”MyVM2″ container=”https://vcd/api/vApp/vapp-ce7e478b-9200-45db-851b-f21657156fd2″ href=”https://vcd/api/vApp/vm-ab59412f-b6c8-424d-bc0e-1414e1341084″ pvdcHighestSupportedHardwareVersion=”8″ containerStatus=”RESOLVED”/>
</QueryResultRecords>

Here is the same data in a table format for the two VMs in vCloud Director:

VM Name | MoRef | href
MyVM1 | vm-14 | https://vcd/api/vApp/vm-51201e8b-f910-40ab-b2b1-e5365a72f353
MyVM2 | vm-14 | https://vcd/api/vApp/vm-ab59412f-b6c8-424d-bc0e-1414e1341084

You should see a list of the VM records found in vCloud Director, and there are three properties of interest: name, moref and href. The first two are pretty self-explanatory, and we can see that the MoRef ID has not changed when we performed the "move" import operation into vCloud Director. We also see the href property, which is the unique identifier for a VM in vCloud Director. vCloud Director uses a single database, so the UUID it generates for a VM is unique, and as part of the URL the href also includes the address of the specific vCloud Director cell managing that VM instance.

To uniquely identify a VM across multiple vCloud Director Cells managing multiple vCenter Servers, you should use the href property.

vSphere and vCloud

Now that we can uniquely identify a VM in either a vSphere or vCloud environment, how do we go about correlating the two and go from a VM in vCloud Director back to a VM in vSphere vCenter? Continuing from our previous command, we can retrieve more information about the VM, which will provide the necessary details to map back to a vSphere vCenter Server.

Using the href value for the VM we are interested in, we will perform a GET operation on the URL:

$ curl -i -k -H "Accept:application/*+xml;version=1.5" -H "x-vcloud-authorization: q7uX9eIAWQ0FegrrskkvTOPXiAo31wESTJ6ah5FRyE0=" -X GET https://vcd/api/vApp/vm-51201e8b-f910-40ab-b2b1-e5365a72f353

HTTP/1.1 200 OK
Date: Wed, 08 Feb 2012 23:28:14 GMT
Content-Type: application/vnd.vmware.vcloud.vm+xml;version=1.5
Date: Wed, 08 Feb 2012 23:28:14 GMT
Content-Length: 14519

……..
    <VCloudExtension required=”false”>
       <vmext:VmVimInfo>
           <vmext:VmVimObjectRef>
               <vmext:VimServerRef type=”application/vnd.vmware.admin.vmwvirtualcenter+xml” name=”westcoast-vcenter” href=”https://vcd/api/admin/extension/vimServer/337a4193-de37-43c0-90ab-08d082766c96″/>
               <vmext:MoRef>vm-14</vmext:MoRef>
               <vmext:VimObjectType>VIRTUAL_MACHINE</vmext:VimObjectType>
           </vmext:VmVimObjectRef>
           <vmext:DatastoreVimObjectRef>
               <vmext:VimServerRef type=”application/vnd.vmware.admin.vmwvirtualcenter+xml” name=”westcoast-vcenter” href=”https://vcd/api/admin/extension/vimServer/337a4193-de37-43c0-90ab-08d082766c96″/>
               <vmext:MoRef>datastore-10</vmext:MoRef>
               <vmext:VimObjectType>DATASTORE</vmext:VimObjectType>
           </vmext:DatastoreVimObjectRef>
           <vmext:HostVimObjectRef>
               <vmext:VimServerRef type=”application/vnd.vmware.admin.vmwvirtualcenter+xml” name=”westcoast-vcenter” href=”https://vcd/api/admin/extension/vimServer/337a4193-de37-43c0-90ab-08d082766c96″/>
               <vmext:MoRef>host-9</vmext:MoRef>
               <vmext:VimObjectType>HOST</vmext:VimObjectType>
           </vmext:HostVimObjectRef>
           <vmext:VirtualDisksMaxChainLength>1</vmext:VirtualDisksMaxChainLength>
       </vmext:VmVimInfo>
    </VCloudExtension>
……..

You can see there is a lot of information returned about the VM, such as its configuration and settings, but it also includes the vCloud extension information that contains details about the underlying vSphere properties, such as the vSphere VM MoRef ID, the datastore and the ESXi host managing the VM. The two properties to key on are vmext:MoRef and vmext:VimServerRef. The latter provides information about the vCenter Server that is currently managing the VM in question; we need to perform one additional query using its href property.

To get details about the vCenter Server, we will perform another GET operation using the vCenter Server href URL provided from the last example:

$ curl -i -k -H "Accept:application/*+xml;version=1.5" -H "x-vcloud-authorization: q7uX9eIAWQ0FegrrskkvTOPXiAo31wESTJ6ah5FRyE0=" -X GET https://vcd/api/admin/extension/vimServer/337a4193-de37-43c0-90ab-08d082766c96

HTTP/1.1 200 OK
Date: Wed, 08 Feb 2012 23:29:14 GMT
Content-Type: application/vnd.vmware.admin.vmwvirtualcenter+xml;version=1.5
Date: Wed, 08 Feb 2012 23:29:14 GMT
Content-Length: 3212

……..
    <vmext:Username>root</vmext:Username>
    <vmext:Url>https://10.20.183.48:443</vmext:Url>
    <vmext:IsEnabled>true</vmext:IsEnabled>
    <vmext:IsConnected>true</vmext:IsConnected>
    <vmext:ShieldManagerHost>10.20.183.95</vmext:ShieldManagerHost>
    <vmext:ShieldManagerUserName>admin</vmext:ShieldManagerUserName>
    <vmext:Uuid>9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A</vmext:Uuid>
……..

The results provide details about the vCenter Server and vShield Manager. The two properties of interest are Uuid and Url. These are the instanceUUID and URL of the vCenter Server, which can then be used in conjunction with the VM's MoRef ID to map a VM in vCloud Director back to a vSphere vCenter Server.

To come full circle, let’s take a look at what the vSphere environment now looks like after we had imported the two VMs earlier.

As you can see, our VM display names have changed slightly in vSphere. The reason for this is that vCloud Director automatically appends the unique UUID it generates to the VM name as a way to uniquely display the objects. You can easily generate the VM href property for vCloud Director by just looking at the UUID in the parentheses (e.g. https://vcd/…./vm-UUID).
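As a small illustration of that mapping, the UUID can be pulled out of the vSphere display name and turned into the vCloud href with a bit of shell (the vcd hostname is the same placeholder used throughout this example):

# Extract the UUID that vCloud Director appended to the display name and build the href
$ DISPLAY_NAME="MyVM1 (51201e8b-f910-40ab-b2b1-e5365a72f353)"
$ UUID=$(echo "$DISPLAY_NAME" | sed 's/.*(\(.*\))/\1/')
$ echo "https://vcd/api/vApp/vm-${UUID}"
https://vcd/api/vApp/vm-51201e8b-f910-40ab-b2b1-e5365a72f353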

To complete the correlation, we will perform the same vSphere query as we did earlier to confirm the vCenter Server instanceUUID and VM’s MoRef that we found in our vCloud Director environment.

Here is the Perl script running against the westcoast-vcenter:
$ ./vmMoRefFinder.pl --server westcoast-vcenter --username root --name "MyVM1 (51201e8b-f910-40ab-b2b1-e5365a72f353)"

Name: MyVM1 (51201e8b-f910-40ab-b2b1-e5365a72f353)
VM MoRef: vm-14
VM InstanceUUID: 501b8bde-95da-472a-4d2d-9d97ea394bbc
vCenter Name: westcoast-vcenter
vCenter InstanceUUID: 9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A

Here is the Perl script running against the eastcoast-vcenter:
$ ./vmMoRefFinder.pl --server eastcoast-vcenter --username root --name "MyVM2 (ab59412f-b6c8-424d-bc0e-1414e1341084)"

Name: MyVM2 (ab59412f-b6c8-424d-bc0e-1414e1341084)
VM MoRef: vm-14
VM InstanceUUID: 502fb1ca-e9b8-82ae-3f9f-4a3ba85f081d
vCenter Name: eastcoast-vcenter
vCenter InstanceUUID: 63D30391-44E2-447E-A709-9DD1241C3DCC

Here is the PowerCLI script running when connected to both vCenter Servers:

Name                     : MyVM1
VM MoRef             : vm-14
VM InstanceUUID      : 501b8bde-95da-472a-4d2d-9d97ea394bbc
vCenter Name         : westcoast-vcenter
vCenter InstanceUUID : 9B6C7A60-C60F-4C1D-A607-0A0CFA2C2D5A

Name                     : MyVM2
VM MoRef             : vm-14
VM InstanceUUID      : 502fb1ca-e9b8-82ae-3f9f-4a3ba85f081d
vCenter Name         : eastcoast-vcenter
vCenter InstanceUUID : 63D30391-44E2-447E-A709-9DD1241C3DCC

Hopefully this example provided concrete details on how to uniquely identify a VM in a vSphere, vCloud or combination of the two environments. Though we used the vSphere SDK for Perl, PowerCLI 5.0.1 and the vCloud REST API in the examples above, this information can be obtained using any of our vSphere SDKs and vCloud SDKs.

Get notification of new blog postings and more by following lamw on Twitter:  @lamw

Disable ballooning?

During Partner Exchange I had multiple discussions about disabling ballooning, specifically about the recommendation to disable ballooning when running particular workloads such as SQL and Oracle. The goal of this recommendation is usually to stop the VMkernel from reclaiming memory, but unfortunately that is not what disabling ballooning achieves. This article describes why ballooning is helpful and how to achieve your goals by using other resource management settings.

Let's stress the most important bit immediately: disabling the ballooning mechanism does not disable memory reclamation. It just disables the most intelligent mechanism in the entire memory management stack.

Why is disabling the ballooning mechanism bad?
Many organizations that deploy virtual infrastructures rely on memory over-commitment to achieve higher consolidation ratios and higher memory utilization. In a typical virtual infrastructure not every virtual machine is actively using its assigned memory at the same time and not every virtual machine is making use of its configured memory footprint.

To allow memory over-commitment, the VMkernel uses different virtual machine memory reclamation mechanisms:

  1. Transparent Page Sharing
  2. Ballooning
  3. Memory compression
  4. Host swapping

Except for Transparent Page Sharing, all memory reclamation techniques only become active when the ESXi host experiences memory contention. The VMkernel will use a specific memory reclamation technique depending on the level of host free memory. When the ESXi host has 6% or less free memory available, it will use the balloon driver to reclaim idle memory from virtual machines. The VMkernel selects the virtual machines with the largest amounts of idle memory (detected by the idle memory tax process) and asks those virtual machines to select idle memory pages.

To fully understand the beauty of the balloon driver, it's crucial to understand that the VMkernel is not aware of the guest OS's internal memory management mechanisms. Guest OSes commonly use an allocated memory list and a free memory list. When a guest OS makes a request for a page, the VMkernel will back that "virtual" page with physical memory. When the guest OS stops using the page internally, it does not remove the data; the guest OS just removes the address space pointer from the allocated memory list and places this pointer on the free memory list. Because the data itself has not changed, ESX keeps this data in physical memory.

When the balloon driver is utilized, it requests that the guest OS allocate a certain number of pages. Typically the guest OS will allocate memory that has been idle or is registered in the guest OS free list. If the virtual machine has enough idle pages, no guest-level paging, or even worse kernel-level paging, is necessary. Scott Drummonds tested an Oracle database VM against an OLTP load generation tool and researched the (lack of) impact of the balloon driver on the performance of the virtual machine. The results are displayed in this image:

Impact on performance: Ballooning versus swapping

Scott’s explanation:
Results of two experiments are shown on this graph: in one memory is reclaimed only through ballooning and in the other memory is reclaimed only through host swapping. The bars show the amount of memory reclaimed by ESX and the line shows the workload performance. The steadily falling green line reveals a predictable deterioration of performance due to host swapping. The red line demonstrates that as the balloon driver inflates, kernel compile performance is unchanged.

So the beauty of ballooning lies in the fact that it allows the guest OS itself to make the hard decision about which pages to be paged out without the hypervisor’s involvement. Because the guest OS is fully aware of the memory state, the virtual machine will keep on performing as long as it has idle or free pages.

When ballooning is disabled
When we follow the recommendations of disabling the balloon driver the VMkernel can use the following memory reclamation techniques:

  1. Transparent Page Sharing
  2. Memory compression
  3. Host-level swapping (.vswp)

Memory compression
Memory compression was introduced in vSphere 4.1. The VMkernel will always try to compress memory before swapping. This feature is very helpful and a lot faster than swapping. However, the VMkernel will only compress a memory page if it can reach a compression ratio of 50% or more, otherwise the page will be swapped. Furthermore, the default size of the compression cache is 10%, if the compression cache is full, one compressed page must be replaced in order to make room for a new page. The older pages will be swapped out. This means that during heavy contention memory compression will become the first stop before ultimately ending up as a swapped page.

Increasing the memory compression cache can have a counterproductive effect: since the memory compression cache is part of the virtual machine's memory usage, configuring a large memory compression cache can itself introduce memory pressure or contention.
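For what it's worth, the compression cache size is exposed as a host advanced setting. A hedged sketch, assuming the /Mem/MemZipMaxPct option on ESXi 5.0 (10, i.e. 10% of the virtual machine's memory, is the default):

# Inspect the current compression cache size setting
$ esxcli system settings advanced list --option=/Mem/MemZipMaxPct

# Change it (not generally recommended, for the reasons discussed above)
$ esxcli system settings advanced set --option=/Mem/MemZipMaxPct --int-value=10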

Host-level Swapping
Contrary to ballooning, host-level swapping does not communicate with the guest OS. The VMkernel has no knowledge of the status of the page inside the guest OS, only that the physical page belongs to a specific virtual machine. Because the VMkernel is unaware of the content of the stored data inside the page and its significance to the guest OS, it could happen that the VMkernel decides to swap out guest OS kernel pages. The guest OS itself will never swap kernel pages, as they are crucial to maintaining kernel performance.

By disabling ballooning, you have just deactivated the most intelligent memory reclamation technique, leaving the VMkernel with the option to either compress a memory page or rip out completely random (possibly crucial) pages, which significantly increases the possibility of deteriorating virtual machine performance. To me, that does not sound like something worth recommending.

How to guarantee performance without disabling the balloon driver?
The best option to guarantee performance is to use the resource allocation settings: shares and reservations.

Shares:

Use shares to define priority levels and use reservations to guarantee physical resources even when the VMkernel is experiencing resource contention.

Reservation
A reservation specifies the guaranteed minimum allocation for a virtual machine. This means that the VMkernel does not reclaim physical memory that is protected by a reservation, even if there is contention. This physical memory will be available to that specific virtual machine at all times. In essence, by applying a memory reservation to a virtual machine, you are disabling memory reclamation for that chunk of virtual machine memory.

Additional information about reservations:

However, setting reservations will impact the virtual infrastructure. A well-known impact of setting a reservation is on the HA slot size if the cluster is configured with the "Host failures cluster tolerates" admission control policy. More info on HA can be found in the HA deep dive on yellow-bricks. To circumvent this impact, one might choose to configure the HA cluster with the "Percentage of cluster resources reserved as failover spare capacity" policy. Due to the HA-DRS integration introduced in vSphere 4.1, the main caveat of dealing with defragmented clusters is dissolved.

Conclusion
Disabling the balloon driver will likely worsen the performance of the virtual machine and drive the problem down the stack. If you want to disable memory reclamation for a virtual machine, apply a reservation instead.

 

 

Interesting storage stuff from VMware Partner Exchange 2012

I had the pleasure of attending the VMware Partner Exchange this year. I delivered a deep dive presentation on VMware's Storage Appliance (the pdf can be found here), as well as an overview of the features going into the next version of the VSA. I can't discuss these features with you just yet, but rest assured I will be doing a number of posts on the new features as soon as I have permission to do so.

Like other conferences that I get invited to, I always try to take a look around the Solutions Exchange and see what cool things are going on in the storage space. This post simply discusses some of the new and interesting products/features that I've seen from our partners at PEX. It is in no way all-encompassing, as I didn't get to see everyone. Hopefully you'll find it interesting all the same.

 

Disclaimer – Once again, (I am sure that you are fed up with me repeating this message at this stage) the vSphere storage blog has to remain storage vendor neutral to retain any credibility. VMware doesn't favour any one storage partner over another. I'm not personally endorsing any of these vendor's products either. What I'm posting here is what I learnt about the products and features at the sessions & something which I hope you find interesting too.

 

My first stop was the Atlantis Computing stand, to find out more about their new VDI performance accelerator product called ILIO. I'm taking a bit of poetic license by including it in 'storage', but it addresses an area that is of real concern right now in the VDI space, and it is a product that VDI folks have been getting very excited about. I spoke to Joshua Petty, Director of Systems Engineering at Atlantis. He told me that the ILIO software appliance (a virtual machine), which sits in the I/O path between your hypervisor and storage, will increase your VDI performance 10-fold, allowing up to 4-7 times more desktops per host. The ILIO also does inline deduplication, reducing storage capacity for each desktop by up to 99%. This is all very impressive. Since the appliance sits in the I/O path, it presents an iSCSI or NFS datastore at the front end to the hypervisor, and at the back end it is presented with NFS, iSCSI or Fibre Channel storage from the storage array. The appliance then sits in the middle, doing its dedupe & acceleration bit.

The one concern I did have is what happens to I/O in flight should this appliance fail. It would appear that Atlantis offer the ILIO in both fault tolerant and high availability configurations. They can also create a synchronous FT cluster of Atlantis ILIO virtual machines on separate physical hosts. There certainly seems to be a lot of interest brewing in this technology. More here at http://www.atlantiscomputing.com/products/

During this trip, I also managed to meet up with Martin Lister, who heads up the Rapid Desktop Program at VMware. I did a previous post about this program, where I describe how Pivot3 onboarded very quickly and appears to have gained a lot of traction as a result of their participation. It was great to again meet with Lee, Olivier, Mike and the rest of the Pivot3 guys at PEX and hear how well things are going for them in the VDI space.

Martin also told me that a number of other VMware storage partners are readying themselves to certify very soon in this program. Certified partners then get listed in the VMware HCL under the category VMware View POC Solutions.

Note however that the program is not directed at storage vendors, but is aimed at solution providers. The partner becomes the one-stop-shop for a complete VDI solution that has been fully tested & then certified by VMware. This includes the hypervisor, storage & VDI. The RDP certified partner will have to follow strict criteria for ease of roll-out/deployment of the VDI solution, as well as provide a solution that can host a defined number of View desktops. Pivot3 also did a presentation at PEX around their VDI solution which was very well received. Unfortunately I missed it myself, but I've since heard that there was a lot of interest in their solution.

I'm hoping to do a follow-up post shortly detailing some of the new partners that are certifying on this program. It's definitely going from strength to strength.

My final update is around a new partner called Starboard Storage Systems. These were the new kids on the block at PEX 2012, and after chatting with Tony Lagera, one of Starboard's Regional Sales Managers, it would appear that these guys have just recently come out of stealth mode. It looks like their play is to make the management of mixed workload environments much simpler, and allow a vSphere admin who may not have in-depth storage knowledge to easily manage their storage infrastructure on a per application basis. If I understood correctly, their array implements tiering to achieve this – what Starboard are calling their Mixed-workload, Application-Crafted Storage Tiering (MAST) architecture. This replaces a RAID configuration found in traditional arrays. Starboard state that this reduces a lot of complexity involved in setting up the correct type of storage for your application. Their AC72 storage system also comes with an SSD tier for improved performance, and supports connectivity across multiple storage protocols including NFS, iSCSI, and Fibre Channel.

Not sure about VAAI support, or if there is a vCenter plugin for management at this time. I'll have to keep an eye out for these guys at future conferences. More about them here – http://www.starboardstorage.com/

 

Get notification of these blogs postings and more VMware Storage information by following me on Twitter: Twitter @VMwareStorage