
Monthly Archives: August 2011

Guest OS Partition Alignment

We've been running a new session at VMworld 2011 called Group Discussions. These are round-table discussions about best practices, and they are an opportunity for customers to bounce questions off each other as well as to ask advice from the VMware folks in attendance. I've been moderating the #GD21 session, and was interested to see the question of Guest OS partition alignment come up in both sessions.

An unaligned partition results in I/O crossing a track boundary, which causes an additional I/O. This incurs a penalty on latency and throughput. The additional I/O (especially if small) can impact system resources significantly on some host types. An aligned partition ensures that a single I/O is serviced by a single device, eliminating the additional I/O and resulting in an overall performance improvement.

Before Alignment:

[Diagram: I/O with an unaligned partition]

After Alignment:

[Diagram: I/O with an aligned partition]

I should point out that this issue doesn't affect many of the newer guest operating systems, which are automatically aligned. Operating systems which I understand to be partition aligned, and thus unaffected by this issue, are Windows 7, Windows Vista & Windows 2008. There may be others – if you know of additional ones, please leave a comment.
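For older guests such as Windows XP or Windows Server 2003, a quick way to check and correct this from inside the guest is to look at the partition starting offset and, for new data disks, to create the partition with an explicit alignment. A minimal sketch (the disk number is a placeholder and the 1024KB value is just a commonly used offset; check your array vendor's papers below for their recommended value):

   C:\> wmic partition get Name, StartingOffset
   C:\> diskpart
   DISKPART> select disk 1
   DISKPART> create partition primary align=1024

If the StartingOffset reported by the first command divides evenly by your array's chunk size, the partition is already aligned.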

Many of our storage partners have published best practices on how to handle partition alignment for their particular storage arrays. A list of the papers that I am aware of is below. Again, if you know of others, feel free to leave a comment.

EMC:
http://www.emc.com/collateral/hardware/technical-documentation/h2370-microsoft-sql-svr-2005-ns-series-iscsi-bp-plan-gde-ldv.pdf
http://www.emc.com/collateral/hardware/solution-overview/h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf

HP:
http://h71019.www7.hp.com/ActiveAnswers/downloads/Exchange2003EVA5000PerformanceWhitePaper.doc

IBM:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf

Microsoft:
http://support.microsoft.com/kb/929491

NetApp:
http://media.netapp.com/documents/tr-3747.pdf

 

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

SRM 5 recovery plan speed compared to SRM 4

I've been doing some work in the lab on upgrades, testing the process of moving from SRM 4 to 5.0.  There will be a blog entry soon about this to augment the upgrade guide that will be released with SRM 5, but during the course of testing I've come across some very interesting numbers with regard to the speed of a recovery plan in version 5 versus previous releases.

To wit, SRM 5 has some very nice changes that can help make it considerably quicker to complete recovery plans!

In accordance with the practice of testing thoroughly before and after upgrades, I ran through the recovery plans a few times in test-mode to make sure they would failover correctly and that there were no issues with the way I had things configured.  As part of that I tracked how long each recovery plan took to complete the test and saw that they were quite consistent through a few runs.  Then it was time to upgrade!

In the lab as it stands now I have a few recovery plans: A couple running on FalconStor NSS iSCSI storage, and a single large recovery plan running on NFS storage on an EMC VNX 5500.  These are both great systems, and I'm quite grateful to have the ability to test with different protocols and know that storage is never a bottleneck!

It was immediately apparent on both of these cutting-edge storage systems that SRM 5 is considerably faster to complete the test runs than SRM 4 was.  I don't mean it was a few percent quicker to do a few steps, either: Across the board I saw fairly dramatic and exciting speed improvements.

There are a number of reasons for the speed improvement, but two of the features with the most impact deal with IP customization and the start sequencing for VMs at the recovery site. 

IP customization in previous versions required the use of sysprep and customization specifications.  If a system needed IP changes when running at the recovery site, SRM would need to call the customization spec and use sysprep to instantiate the network changes in the virtual machine.  This would often add a few minutes to the start time for each VM that needed these changes, as it needed to boot the VM, run the sysprep to make its changes, and reboot the VM again in order to complete the change. 

SRM 5 no longer uses sysprep or customization specs; instead it injects the networking information through a VIX API call pushed through VMware Tools in the VM.  This takes considerably less time to complete than the full sysprep cycle. 

So right off the bat with this change, we can reduce the overall time for a recovery plan to complete by shaving down the customization time for each VM.

The second area of improvement deals with how VMs are started by SRM at the recovery site.  Previously by default VMs would be started two-at-a-time on up to 10 hosts for a maximum of 20 simultaneous booting virtual machines.  On my lab systems I have two hosts at the recovery site, so I could do a maximum of four simultaneous VM power-ons.  Each one would do its sysprep and needed to be finished booting before SRM could start the next VM.  

With SRM 5 this process is no longer followed.  Now, by default, SRM sends a call out to the vCenter server with instructions of which systems it needs booted, and allows the VC to determine how many VMs it is capable of booting at once, dependent on cluster settings, available resources, HA admission control, etc.  Meaning, in my lab with two hosts, I could start all the VMs in my recovery plan in parallel. 

Moreover, the sequence is slightly different in terms of customization.  SRM 5 will do an initial "customization boot" of any virtual machines that need to be customized.  It will do a network-isolated preparatory boot to inject the IP changes, then shut down the VM so that when it is called on to start according to its place in the recovery plan, it has *already been customized* and is ready to boot "for real" without extra delay for customization.

So what were my results?

On the iSCSI FalconStor with a recovery plan with small numbers of VMs the test time shaved off about nine minutes.  On the NFS VNX with three times the number of VMs it shaved off almost 24 minutes.

Now here is the caveat section:  Your numbers may be quite different!  For example, it may not be possible or advisable for you to try to start every VM in your recovery plan all at once.  Perhaps your VC cannot handle it, or the cluster does not have the resources to do so.  I had also not used different priority groups or set any dependencies, so my test scenario very likely looks nothing like your environment.  

The speed increase therefore will depend quite highly on how your environment is configured, the capabilities of your infrastructure, your dependencies and priorities, and what your recovery plans look like.

Regardless, any improvement of recovery time is something to be happy about, and SRM 5 is looking great in this regard!

Stay tuned for an SRM upgrade blog entry, coming soon.

 

Win an iPad 2 with the VMworld PM Survey!

The Product Managers at VMware are always interested in getting feedback from customers!  This feedback helps them in making decisions on the roadmap for some of your favorite features.

To help get this feedback, they have created a small survey for VMworld 2011.  This short survey takes about 5 minutes to complete and gives you the opportunity to provide that feedback directly to them.

Last year, more than 800 people completed the survey, several of whom were given a free iPad just for doing so.  This year is no different…  By completing the survey you will be entered into a drawing for an iPad 2! 

So if you're interested in helping to guide the product roadmap, or you just want a chance to win an iPad 2, go take the survey here!

 

vSphere 5.0 Storage Features Part 12 – iSCSI Multipathing Enhancements

In 5.0, VMware has added a new UI to make it much easier to configure multipathing for Software iSCSI. This is a major change from 4.x, where users needed to use the command line to get an optimal multipath configuration with Software iSCSI.

The UI allows one to select different network interfaces for iSCSI use, check them for compliance and configure them with the Software iSCSI Adapter. This multipathing configuration is also referred to as iSCSI Port Binding.

Why should I use iSCSI Multipathing?

The primary use case of this feature is to create a multipath configuration with storage that only presents a single storage portal, such as the Dell EqualLogic and HP LeftHand arrays. Without iSCSI multipathing, this type of storage would have only one path between the ESXi host and each volume.  iSCSI multipathing allows us to multipath to this type of clustered storage.

Another benefit is the ability to use alternate VMkernel networks outside of the ESXi Management network. This means that if the management network suffers an outage, you continue to have iSCSI connectivity via the VMkernel ports participating in the iSCSI bindings.

Let's see how you go about setting this up. In this example, I have configured a Software iSCSI adapter, vmhba32.

[Screenshot: Software iSCSI adapter vmhba32 with no devices or paths]

At present, no targets have been added, so no devices or paths have been discovered. Before implementing the iSCSI bindings, I need to create a number of additional VMkernel ports (vmk) for port binding to the Software iSCSI adapter.

[Screenshot: VMkernel network configuration]

As you can see from the above diagram, these vmnics are on trunked VLAN ports, allowing them to participate in multiple VLANs. For port binding to work correctly, the initiator must be able to reach the target directly on the same subnet – iSCSI port binding in vSphere 5.0 does not support routing. In this configuration, if I place my VMkernel ports on VLAN 74, they will be able to reach the iSCSI target without the need of a router. This is an important point, and needs further elaboration as it causes some confusion. If I do not implement port binding, and use a standard VMkernel port, then my initiator can reach the targets through a routed network. This is supported and works just fine. It is only when iSCSI binding is implemented that a direct, non-routed network between the initiators and targets is required, i.e. initiators and targets must be on the same subnet.
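Once the VMkernel ports are created, a quick sanity check from the ESXi shell is to confirm that no gateway is involved in reaching the targets. A simple sketch (the target portal IP is a placeholder for your own array's iSCSI address):

~ # esxcfg-route -l
~ # vmkping <target-portal-ip>

If the vmkping replies come back and the routing table shows the target subnet as locally attached, the port binding requirement is satisfied.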

There is another important point to note when it comes to the configuration of iSCSI port bindings. On vSwitches which contain multiple vmnic uplinks, each VMkernel (vmk) port used for iSCSI bindings must be associated with a single vmnic uplink. The other uplink(s) on the vSwitch must be placed into an unused state. See below for an example of such a configuration:

[Screenshot: VMkernel port with a single active vmnic uplink and the other uplink set to unused]

This is only a requirement when you have multiple vmnic uplinks on the same vSwitch. If you are using multiple vSwitches with their own vmnic uplinks, then this isn't an issue. Continuing with the network configuration, we create a second VMkernel (vmk) port. I now have two vmk ports, labeled iscsi1 & iscsi2. These will be used for my iSCSI binding. Note below that one of the physical adapters, vmnic1, appears disconnected from the vSwitch. This is because both of my VMkernel ports will be bound to vmnic0 only, so vmnic1 has been set to unused across the whole of the vSwitch.

[Screenshot: second VMkernel port created for iSCSI binding]

Next, I return to the properties of my Software iSCSI adapter, and configure the bindings and iSCSI targets. There is now a new Network Configuration tab in the Software iSCSI Adapter properties window. This is where you add the VMkernel ports that will be used for binding to the iSCSI adapter. Click on the Software iSCSI adapter properties, then select the Network Configuration tab, and you will see something similar to the screenshot shown below:

[Screenshot: Network Configuration tab of the Software iSCSI Adapter properties]

After selecting the VMkernel adapters for use with the Software iSCSI Adapter, the Port Group Policy tab will tell you whether or not these adapters are compliant for binding. If you have more than one active uplink on a vSwitch that has multiple vmnic uplinks, the vmk interfaces will not show up as compliant. Only one uplink should be active; all other uplinks should be placed into an unused state.

[Screenshot: compliant VMkernel port bindings]
You then proceed to the Dynamic Discovery tab, where the iSCSI targets can now be added. Because we are using port binding, you must ensure that these targets are reachable by the Software iSCSI Adapter through a non-routable network, i.e. the storage controller ports are on the same subnet as the VMkernel NICs:

[Screenshot: Dynamic Discovery tab with the iSCSI targets added]
At this point, I have two VMkernel ports bound to the Software iSCSI Adapter, and 4 targets. These 4 targets all go to the same storage array, so if I present a LUN out on all 4 targets, this should give me a total of 8 paths. Let's see what happens when I present a single LUN (ID 0):

[Screenshot: one device showing 8 paths]
So it does indeed look like I have 8 paths to that 1 device. Let's verify by looking at the paths view:

[Screenshot: paths view showing all 8 paths]
And if I take a look at a CLI multipath output for this device, I should see it presented on 8 different targets:

~ # esxcfg-mpath -l -d naa.6006016094602800c8e3e1c5d3c8e011 | grep "Target Identifier"
   Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.a2,t,1
   Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.a2,t,1
   Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.b3,t,4
   Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.b3,t,4
   Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.b2,t,3
   Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.b2,t,3
   Target Identifier: 00023d000002,iqn.1992-04.com.emc:cx.ckm00100900477.a3,t,2
   Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ckm00100900477.a3,t,2
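For reference, the same bindings and targets can also be configured from the ESXi 5.0 shell with esxcli. A minimal sketch, assuming the adapter is vmhba32 as above and that the two VMkernel ports are vmk1 and vmk2 (the vmk numbers and the target address are placeholders, so substitute your own):

~ # esxcli iscsi networkportal add --adapter vmhba32 --nic vmk1
~ # esxcli iscsi networkportal add --adapter vmhba32 --nic vmk2
~ # esxcli iscsi adapter discovery sendtarget add --adapter vmhba32 --address <target-portal-ip>:3260
~ # esxcli iscsi networkportal list --adapter vmhba32

The last command lists the VMkernel ports bound to the adapter, which is a handy way to verify the configuration afterwards.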

 
This new UI for iSCSI bindings certainly makes configuring multipathing for the Software iSCSI Adapter so much easier. But do keep in mind the requirement to have a non-routable network between the initiator and target, and the fact that VMkernel ports must have only a single active vmnic uplink on vSwitches that have multiple vmnic uplinks.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

vSphere 5 New Networking Features – Enhanced NIOC

Network I/O Control Enhancements

Consolidated I/O, or I/O virtualization, delivers benefits similar to those provided by x86 virtualization in terms of better utilization and consolidation of resources. However, as multiple traffic types flow through a single physical network interface, it becomes important to manage the traffic effectively so that critical application flows don’t suffer because of a burst of low-priority traffic. Network traffic management provides the required control and guarantees to different traffic types in the consolidated I/O environment. In the VMware vSphere 5 platform, NIOC supports traffic management capabilities for the following traffic types, also called network resource pools:

• Virtual machine traffic

• Management traffic

• iSCSI traffic

• NFS traffic

• Fault-tolerant traffic

• VMware vMotion traffic

• User-defined traffic

• vSphere replication traffic

Similar to CPU and memory resource allocation in the vSphere platform, through NIOC a network administrator can allocate I/O shares and limits to different traffic types, based on their requirements. In this new release of vSphere, NIOC capabilities are enhanced such that administrators can now create user-defined traffic types and allocate shares and limits to them. Also, administrators can provide I/O resources to the vSphere replication process by assigning shares to the vSphere replication traffic type. Let’s look at some details on the user-defined and vSphere replication traffic types.

 

User-Defined Network Resource Pools

User-defined network resource pools in vSphere 5 provide an ability to add new traffic types beyond the standard system traffic types that are used for I/O scheduling.

The figure below shows an example of a user-defined resource pool with shares, limits and IEEE 802.1p tag parameters described in a table. In this example, Tenant 1 and Tenant 2 are two user-defined resource pools with virtual machines connected to their respective independent port groups. Tenant 1, with three virtual machines, has five I/O shares. Tenant 2, with one virtual machine, has 15 I/O shares. This indicates that during contention scenarios, Tenant 2 virtual machines will have a higher guaranteed share of bandwidth than Tenant 1 virtual machines.

[Figure: user-defined network resource pools with shares, limits and 802.1p tag parameters]
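As a rough worked example of how those shares resolve (assuming these are the only two pools actively transmitting on a single 10GbE uplink during contention):

   Tenant 1: 5 / (5 + 15) = 25% of the uplink, roughly 2.5Gbps
   Tenant 2: 15 / (5 + 15) = 75% of the uplink, roughly 7.5Gbps

When other traffic types on the same uplink are also busy, their shares enter the same calculation, so the absolute bandwidth each pool receives will be lower.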

Usage 

When customers are deploying critical applications on virtual infrastructure, they can utilize this advanced feature to reserve I/O resources for the important, business-critical application traffic and provide SLA guarantees.

Service providers who are deploying public clouds and serving multiple tenants can now define and provision I/O resources per tenant, based on each tenant’s need.

Configuration

The new resource pools can be defined at the Distributed Switch level by selecting the resource allocation tab and clicking on new network resource pools. After a new network resource pool is defined with shares and limits parameters, that resource pool can be associated with a port group. This association of a network resource pool with a port group enables customers to allocate I/O resources to a group of virtual machines or workloads. The figure below shows the new Tenant 1 and Tenant 2 resource pools created under user-defined network resource pools.

[Screenshot: Resource Allocation tab showing the Tenant 1 and Tenant 2 user-defined network resource pools]

vSphere Replication Traffic

 vSphere replication is a new system traffic type that carries replication traffic from one host to another. NIOC now supports this new traffic type along with other system and user-defined traffic types.

 Usage

Customers implementing a disaster recovery (DR) solution with VMware vCenter Site Recovery Manager (Site Recovery Manager) and vSphere replication can use this vSphere replication traffic type to provide required network resources to the replication process.

Configuration

A vSphere replication traffic type can be configured on a Distributed Switch under the resource allocation tab. This traffic type is now part of the system network resource pool. Customers can allocate shares and limits parameters to this traffic type.

 

IEEE 802.1p Tagging

IEEE 802.1p is a standard for enabling QoS at the MAC level. The IEEE 802.1p tag provides a 3-bit field for prioritization, which allows packets to be grouped into different traffic classes. The IEEE doesn’t mandate or standardize the use of recommended traffic classes; however, higher-numbered tags typically indicate critical traffic that has higher priority. The traffic is simply classified at the source and sent to the destination. The layer-2 switch infrastructure between the source and destination handles the traffic classes according to the assigned priority. In the vSphere 5.0 release, network administrators can now tag the packets going out of the host.

 

Usage

Customers who are deploying business-critical applications in a virtualized environment now have the capability to guarantee I/O resources to these workloads on the host. However, it is not sufficient to provide I/O resources just on the host. Customers must think about how to provide end-to-end QoS to the business-critical application traffic. The capability of a Distributed Switch to provide an IEEE 802.1p tag helps such customers meet those requirements for end-to-end QoS or service-level agreements.

 Configuration

IEEE 802.1p tagging can be enabled per traffic type. Customers can select the Distributed Switch and then the resource allocation tab to see the different traffic types, including system and user-defined traffic types. After selecting a traffic type, the user can edit the QoS priority tag field by choosing any number from 1 to 7. The figure below is a screenshot of the QoS priority tag configuration for the MyVMTraffic traffic type.

[Screenshot: QoS priority tag configuration for the MyVMTraffic traffic type]

With this post, I have completed the coverage of the new networking features in vSphere 5. Also, today VMware has officially announced the general availability of vSphere 5.

I will be attending VMworld 2011 during the week of Aug 29th. At VMworld, I have a session on VDS best practices and a couple of group discussions. I am looking forward to meeting with various partners and customers.

After VMworld, I will focus my attention on writing about the different deployment options with vSphere Distributed Switch (VDS). 

 

vSphere 5 is here! But What Don’t I Know?

First off, let me state that vSphere 5 has GA'd and is available now. Awesome.

Now, let's get into the details. As with any of our platform releases, vSphere 5 is a "whopper" in the true sense of the word. But with close to 200 features and functions that are either new or enhanced, how does a user sort through it all? Let's take a closer look at what I consider the top 5 features and enhancements of vSphere 5 to answer that question. My comments below are designed not only to alert you to a particular feature but to tell you something you may not know or realize about it. To supplement this article, make sure you get your hands or eyes on the release notes and configuration maximums documents.

The Top 5 (at least according to Mike =)!

1. Storage DRS – Place and Balance Virtual Disks

Something you may not realize about this feature – Storage DRS I/O load balancing can be turned off if you want to use a particular hardware vendor's auto-tiering or dynamic movement capabilities. This turns Storage DRS into an initial placement and space balancing tool. The major benefit that Storage DRS can provide, though, is I/O load balancing across protocols and disk arrays regardless of the vendor. Don't you want to just set up rules once? Don't forget about datastore maintenance mode either, it is a lifesaver! Enabling I/O load balancing within Storage DRS also turns on Storage I/O Control.

2. New vSphere High Availability Architecture  -Simplify, Better Guarantees, and Scale Availability

Something you may not realize about this feature – The maximum size of an HA cluster has not increased from 4.1 to 5. It still remains at 32. However, most of you have 8-node HA clusters for a variety of reasons. The new HA architecture has actually been tested well beyond the officially supported 32 nodes and works very well.  Am I telling you to build larger than 32-node HA clusters? NOPE! I am telling you, though, that since HA is simpler in its setup, provides better resource guarantees than ever before, and can really scale, it is time to create larger HA clusters beyond 8 nodes and start getting HA for more VMs at a really low cost.

3. Auto Deploy – A New Operational Model for the Deployment and Updating of vSphere Hosts

Something you may not realize about this feature – Auto Deploy is awesome for faster deployment of hosts. There is no doubt about that fact. However, its real value lies in its ability to change your operational model around how you update and patch vSphere hosts. Changing image profiles in a centralized location means "on the fly" delivery of updated images very quickly. That is the real value of Auto Deploy, since deployment is normally a one-time benefit.

4. Profile-Driven Storage – “Correctly” Align Storage with SLAs

Something you may not realize about this feature – This new feature is really crafted for the core VMware administrator to make life easier for others, and it is significantly enhanced by the new vSphere Storage APIs for Storage Awareness and Discovery. What many users don't seem to realize when they hear about or see this feature is that it does not impact anything they are already doing. Nor does it force a tiering structure. It is what I call a "view and alignment" tool designed to more efficiently help you or your team select the right storage that is compliant with the SLA you are trying to meet for a particular VM.  You can name your storage buckets anything you like with this capability and make sure you make the right choice the first time.

5. vSphere Web Client – Access vSphere from More Devices via Browser!

Something you may not realize about this feature – Thought I would pick something else to round out my top five, didn't you? There is definitely plenty to choose from with this release, but this one to me is important for both right now and the future. The flexibility of the new web client is obvious here. What is not so obvious right away is that while this interface isn't yet as robust as the C# client, that is very much by design. We really want you to be comfortable with this interface over time. There is not too much to describe here as this is really more about the experience. Give it a try!

 

Did you know? The vSphere Update Manager can assist in a migration from ESX to ESXi. While it won’t handle the conversion of any COS agents or scripts, it will help users transition to ESXi in a shorter timeframe. Need more detail? Head to the ESXi and ESX Info Center on vmware.com. No marketing fluff, I promise!

Other top features and enhancements to evaluate in vSphere 5:

-vCenter Server Appliance (Linux)

-Storage I/O Control (now for NFS)

-Network I/O Control (now with per VM controls)

-VMFS 5

Happy trails!

Mike

vSphere 5.0 Features

By Duncan Epping, Principal Architect, VMware

When discussing vSphere 5.0 internally, someone came up with the idea of listing all the new features that vSphere brings. Let me warn you that this is a long list, and the list could be even longer if we had included all the API changes and back-end changes.

Now before we give you the full list, we want to challenge you… Who will be the first one to show 50 of the below listed features in an article? We will give a "vSphere 5.0 Clustering Technical Deepdive" book signed by both authors to the first 5 people who manage to write a single article detailing 50 of the below features, with a short paragraph about what each feature brings, including a screenshot. It's up to you to pick which features you want to show… Post a link in a comment and make sure you fill out a valid email address!

Here we go:

  1. Storage DRS
  2. Storage I/O Control for NFS
  3. VMFS-5
  4. ESXi Firewall
  5. VMFS Scalability and Performance enhancements
  6. 2TB+ pass-through RDM support
  7. vCenter inventory extensibility
  8. Storage APIs — VAAI T10 Compliancy
  9. Storage APIs — VAAI Offloads for NAS
  10. Storage APIs — VAAI Thin Provisioning
  11. Storage APIs — Storage Awareness/Discovery
  12. Storage APIs — Data Protection compatible with MN
  13. APD, Permanent APD Survivability Enablement
  14. Snapshot enhancements
  15. Storage vMotion scalability improvements
  16. iSCSI Enablement: iSCSI UI Support
  17. iSCSI Enablement: Stateless Support
  18. Multi-queue Storage IO adapters
  19. Increase NFSv3 Max Share Count to 256
  20. SATA 3.0
  21. Software FCoE initiator support
  22. Enhanced logging support
  23. Enhanced Storage metrics
  24. Profile-Driven Storage
  25. Storage vMotion support for snapshots
  26. vSphere Storage Appliance (VSA)
  27. SSD Detection and Enablement
  28. vSphere Replication
  29. vSphere Data Recovery 2.0
  30. VADP enhancements
  31. vCenter Orchestrator (vCO) Enhancements
  32. vCO — Library extension and consolidation
  33. vCO — Scalability
  34. Network I/O Control (NIOC) Phase 2
  35. NIOC — User Defined Resource Pools
  36. NIOC — HBR traffic type
  37. NIOC — 802.1p tagging
  38. Network Traffic Stats for iOPS
  39. Improvement to UDP and Multicast traffic types
  40. New networking drivers for server enablement
  41. vDS support for Port mirror, LLDP and NetFlow V5
  42. vDS Manage Port Group UI enhancement
  43. Hot-Insert/Remove of Filters
  44. Enhanced vMotion Compatibility
  45. Storage vMotion support for Linked Clones
  46. vMotion scalability (dual-NIC & longer latency support)
  47. vNetwork API enhancements
  48. vNetwork Opaque Channel
  49. Support for 8 10GbE Physical NIC ports per host
  50. Add Host Resources MIB to SNMP offering
  51. Metro vMotion
  52. Host Profile for DRS to support Stateless ESX
  53. HA interop with agent VMs
  54. DRS/DPM interop with agent VMs
  55. DRS enhancements for Maintenance Mode
  56. Enhanced processor support for FT
  57. vSphere 5.0 HA aka "FDM / Fault Domain Manager"
  58. vSphere HA – Heartbeat Datastores
  59. vSphere HA – Support for partitions of management network
  60. vSphere HA – Default isolation response changed
  61. vSphere HA – New Status information in UI
  62. vSphere HA – IPv6 support
  63. vSphere HA – Application Awareness API publicly available
  64. Extensions to create special icons for VMs
  65. ESX Agent Management
  66. Solution Management Plugin
  67. Next-Gen vSphere Client
  68. Host Profiles Enhancements
  69. vCenter enhancements for stateless ESXi
  70. vCenter Server Appliance
  71. vCenter: Support for FileManager and VirtualDiskManager APIs
  72. Virtual Hardware – Smartcard support for vSphere
  73. Virtual Hardware Version 8
  74. Virtual HW v8 — 1TB VM RAM
  75. Virtual HW v8 — 32-way Virtual SMP
  76. Virtual HW v8 — Client-Connected USB Devices
  77. Virtual HW v8 — EFI Virtual BIOS
  78. Virtual HW v8 — HD Audio
  79. Virtual HW v8 — Multi-core Virtual CPU Support UI
  80. Virtual HW v8 — New virtual E1000 NIC
  81. Virtual HW v8 — UI and other support
  82. Virtual HW v8 — USB 3.0 device support
  83. Virtual HW v8 — VMCI device enhancements
  84. Virtual HW v8 — xHCI
  85. Support SMP for Mac OS X guest OS
  86. Universal Passthrough (VMdirect path with vMotion support)
  87. Guest Management Operations (VIX API)
  88. Guest OS Support — Mac OS X Server
  89. VM Serial Port to Host Serial Port Redirection (Serial Port Pass-Through)
  90. VMware Tools Portability
  91. VMRC Concurrent Connections enhancements
  92. Scalability: 512 VMs per host
  93. ESXCLI enhancements
  94. Support SAN and hw-iSCSI boot
  95. Hardware — Interlagos Processor Enablement
  96. Hardware — SandyBridge-DT Processor Enablement
  97. Hardware — SandyBridge-EN Processor Enablement
  98. Hardware — SandyBridge-EP Processor Enablement
  99. Hardware — Valencia Processor Enablement
  100. Hardware — Westmere-EX Processor Enablement
  101. Platform — CIM Enhancements
  102. Platform — ESX i18n support
  103. Host Power Management Enhancements
  104. vCenter Web Client
  105. Improved CPU scheduler
  106. Improved scalability of CPU (NUMA) scheduler
  107. Memory scheduler improvements to support 32-way VCPU's
  108. Swap to host cache
  109. API enhancements to configure VM boot order
  110. VMX swap
  111. Support for ESXi On Apple XServe
  112. Redirect DCUI to host serial port for remote monitoring and management
  113. UEFI BIOS Boot for ESXi hosts
  114. Scalability — 160 CPU Threads (logical PCPUs) per host
  115. Scalability — 2 TB RAM per host
  116. Scalability — 2048 VCPUs per host
  117. Scalability — 2048 virtual disks per host
  118. Scalability — 2048 VMs per VMFS volume
  119. Scalability — 512 VMs per host
  120. Stateless — Host Profile Engine and Host Profile Completeness
  121. Stateless — Image Builder
  122. Stateless — Auto Deploy
  123. Stateless — Networking Host Profile Plugin
  124. Stateless — VIB Packaging Enhancement
  125. Stateless — VMkernel network core dump
  126. Host profiles enhancements for storage configuration
  127. Enhanced driver support for ESXi
  128. Intel TXT Support
  129. Memsched policy enhancements w.r.t. Java balloon
  130. Native Driver Autoload support
  131. Root password entry screen in interactive installer
  132. vCenter Dump Collector
  133. vCenter Syslog Collector
  134. VMware Update Manager (VUM) enhancements
  135. VUM — Virtual Appliance enhancements
  136. VUM — vApp Support
  137. VUM — Depot management enhancements
  138. vCLI enhancements
  139. PowerCLI enhancements
  140. VProbes — ESX Platform Observability

 

I warned you that it is a long list. Remember, the first 5 people who post a link to an article showing 50 of these new features and detailing what they do will get a signed copy of the vSphere 5 Clustering Technical Deepdive!

A sneak-peek at how some of VMware’s Storage Partners are implementing VASA

Last week, I posted a high level overview of what VASA (vSphere Storage APIs for Storage Awareness) will do for making the management of a storage infrastructure that little bit easier for vSphere admins. I didn't really go into too much detail about individual capabilities in the previous blog post as a lot of our storage partners are still working on getting their respective implementations ready.

However, over the last couple of weeks, I reached out to a number of our storage partners to see if they would be willing to share some additional detail about their VASA implementation. Disclaimer#1 – Before going any further, I want to make it abundantly clear that VMware doesn't favour any one storage partner over another when it comes to VASA implementations. The partners listed in this post are here simply because I knew who to reach out to in those partner organizations for this info, and fortunately, they were prepared to share these details with us.

Let's now take a look at some of the implementations. To make the post easy to follow, I asked the same questions of each partner. The responses are below:

 

DELL

Q1.    Which array models will support VASA in this first release?
A1.    All models of EqualLogic PS series arrays

Q2. How has DELL done the Vendor Provider implementation (the glue that sits between the array & vCenter)? For example, is it done in firmware, in a management product, or as a stand-alone implementation?
A2. DELL has implemented the vendor provider in the EqualLogic vSphere Plugin, which is part of the Dell EqualLogic Host Integration Tools for VMware.

Q3. Which Storage Capabilities will be surfaced by VASA into vCenter?
A3. Storage Capabilities surfaced by VASA for datastores include identifying Homogeneous vs Mixed RAID type, snap space reserved, SSD drives and replication (Values – RAID, MIXED, SNAP, SSD and REPLICATED). VASA will also surface EqualLogic storage array events and alarms for critical events related to SpaceCapacity, Storage Capability and Storage Object (as defined by VASA developer guide). And of course VASA has EqualLogic integration for use with vSphere Storage DRS.

Here are some screen shots showing a capability being surfaced in vSphere vCenter from a DELL/EQL device. Note that datastores now have a new Storage Capabilities window. If you click on the 'bubble', the details/description are displayed too, as shown below:

[Screenshot: Storage Capabilities window for a datastore on a Dell EqualLogic array]

[Screenshot: storage capability details from the Dell EqualLogic provider]
 

EMC

Q1. Which array models will support VASA in this first release?
A1. EMC's first implementation will support VMAX, DMX, VNX, CX4 & NS arrays using block protocols. Support for NAS devices, VNXe arrays & Isilon will follow soon afterwards.

Q2. How has EMC done the Vendor Provider Implementation?
A2. EMC has implemented the Vendor Provider via Solutions Enabler. Solutions Enabler is available as an installable binary for EMC customers from the Powerlink site, and also as a virtual appliance in the VMware Virtual Appliance store.

Q3. Which Storage Capabilities will be surfaced by VASA into vCenter?
A3. Unfortunately, no details were available on actual capabilities that EMC will surface from their provider, nor were there any screen shots available at the time of going to press. We might come back to this again in a future post.

 

NetApp

Q1. Which array models will support VASA in this first release?
A1. All NetApp FAS storage systems that are capable of running Data ONTAP version 7.3.3 or later will be supported.

Q2. How has NetApp done the Vendor Provider Implementation?
A2. NetApp’s VASA Vendor Provider is a standalone software “application” with a 64-bit Windows (only) installer that can be installed on either a physical or virtual machine.  It communicates with NetApp storage systems via NetApp ZAPIs, requires no license, and will be a free download from NetApp’s NOW software download site.

Q3. Which Storage Capabilities will be surfaced by VASA into vCenter?
A3. NetApp's VASA Provider creates a concatenated string of the storage attributes as applicable for each volume/LUN. The storage attributes are:

• Disk Type (SSD, SATA, SAS, FC) – Represents the underlying disk used by the NetApp storage
• Dedupe – Indicates space efficiency on the underlying NetApp LUN or volume
• Replication – Indicates that the underlying NetApp storage is configured for SnapMirror or SnapVault

Note that the actual storage capability strings could be a combination of the storage attributes mentioned above. Whenever a storage capability contains more than one storage attribute, the attributes are ordered in ascending order of attribute name.
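As a purely made-up illustration of that ordering rule (the exact string format is NetApp's to define), a deduplicated SAS volume that is also SnapMirrored might surface a capability string along the lines of:

   Dedupe, Disk Type SAS, Replication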

Here are some screen shots showing a capability being surfaced in vSphere vCenter from a NetApp device:

[Screenshot: storage capability surfaced from a NetApp array]
[Screenshot: storage capability details from the NetApp provider]

HP

Q1. Which array models will support VASA in this first release?
A1. All HP Storage arrays will support VASA. This includes the 3PAR, P4000, EVA, P6000, P9500, XP20000/24000 and P2000.

Q2. How has HP done the Vendor Provider Implementation?
A2. The VASA providers for HP storage arrays are packaged with HP's vSphere management plug-ins, which include the Insight Control Storage Module for vCenter Server and the HP 3PAR Recovery Manager for VMware Software Suite. The VASA providers are free of charge and can be downloaded and installed from HP's website without any additional license.

Q3. Which Storage Capabilities will be surfaced by VASA into vCenter?
A3. The specific storage capabilities that HP arrays provide to vCenter Server are:

• Drive Type – DriveType_FC, DriveType_NL, DriveType_SSD, DriveType_Mixed
• RAID Type – RAID0, RAID1, RAID5, RAID6
• Provisioning Type – ThinProvisioned, FullyProvisioned
• VV Type – VirtualCopy, PhysicalCopy, Base
• Remotecopy – InRemotecopy, NotInRemotecopy

 

Conclusion

As you can see, this blog is only displaying what some of our storage partners are doing with VASA, and I'm sure you will agree that this stuff is very cool indeed. I'd urge you to contact your own respective storage array vendor to get specific details about their VASA implementation. My understanding is that the partners will be creating their own landing pages for VASA, so this will be a great place to learn more.

Thanks again to my friends at DELL, EMC, NetApp & HP for providing me (and by extension, you) with early access to this information. Disclaimer#2 – This may not be what the end product looks like when these partners ship their VASA Vendor Provider. The storage capabilities may be added/removed/changed. There could be many changes made to their final implementation, so please don't use the information posted in this blog as definitive. Reach out to your storage partner for actual implementation details.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Bulk IP Customization in SRM 5

In SRM 5 we have a new interface for managing the IP information of protected virtual machines.  It makes it easy to change or update IP information for a virtual machine.  You can see what the SRM UI looks like here.  See below for what the new UI for editing IP addresses in SRM 5 looks like.

[Screenshot: the empty IP customization dialog in SRM 5]

(BTW, you get to this screen by right-clicking on a VM listed in a Recovery Plan and selecting Configure.)  Notice how you can edit both the protected and recovery sides here?  This is to facilitate failback operations.  But this is – while much improved – very much one VM at a time.  If you have 10 VMs or more, you will likely be very tired of doing your IP customization in the UI.

We have a command line utility that is good for bulk IP customization, and I use it for even fewer than 10 virtual machines, as it is easy (once you know how and have done it once) and it is harder to make mistakes.  It can be used to configure the protected and recovery sites so that you are ready for failback operations – after all, if you only do the IP customization for the failover site, how will you be able to fail back?

This blog will show some samples of how to use the command line tool, which is called dr-ip-customizer.  So let's get started.

The tool is found in the BIN folder that is by default found in:

C:\program files (x86)\vmware\vmware vcenter site recovery manager\bin

The drive letter will likely change in your world but the path should be very similar.

The steps below will help you use the tool to implement IP customization for a number of VMs and you should be able to use these steps in your own infrastructure.

Once you are at the command line and are in the BIN folder you can use the command below for more info:

dr-ip-customizer.exe -h

This command will show you the info below.

[Screenshot: dr-ip-customizer.exe help output]

It is important to understand that you must always use these commands on the same side.  So use them on one side or the other consistently – I always use them on the recovery side myself.  Do not start using them on one side, and then finish on the other side.  This will confuse the technology and it will not work.  So start working on one side or the other and finish there.

We need to create a .csv file with a list of our protected VMs in it.

dr-ip-customizer.exe -cfg ..\config\vmware-dr.xml -o c:\example.csv -cmd generate -vc FQDN_of_The_Local_vCenter

Above we see a wrapped line, but it is actually one long line.

You will now have a .CSV file that you can work with – c:\example.csv in this case.  You should use Excel to work on it.

This is what you will see when you first look at the CSV file.

[Screenshot: the CSV file as initially generated]

We need to edit the spreadsheet in a specific way so that it can be imported to SRM without error.

Each VM will require four lines in this spreadsheet, so you will need to insert some (make sure to keep the beginning of the lines as they were – you only make changes in the Adapter column and to the right).  Each VM will have two lines (Adapter 0 and 1) for each side (the vC column is used to denote which side you are configuring).  This is how you can have the IP changed during failover, and then changed back when you fail back to the original side.  If a VM has more than one adapter, it will need another line, numbered as Adapter 2 or higher.  If a VM has more than one gateway, you will need another line for the second gateway address.

When we are finished it will look like the one below.

[Screenshot: the completed CSV file]
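If the screenshot is hard to read, here is a rough sketch of the pattern for a single VM with one adapter. The column headers are abbreviated and the VM ID, vCenter names and addresses are invented for illustration only – keep the exact headers and IDs from the CSV that the generate command produced, and only edit from the Adapter column to the right:

   VM ID,VM Name,vCenter Server,Adapter ID,IP Address,Subnet Mask,Gateway,DNS Server(s)
   example-vm-id,web01,vc-protected.example.com,0,,,,192.168.10.5
   example-vm-id,web01,vc-protected.example.com,1,192.168.10.21,255.255.255.0,192.168.10.1,
   example-vm-id,web01,vc-recovery.example.com,0,,,,10.10.10.5
   example-vm-id,web01,vc-recovery.example.com,1,10.10.10.21,255.255.255.0,10.10.10.1,

The Adapter 0 rows carry the global settings (such as the DNS server) and the Adapter 1 rows carry the per-NIC address, mask and gateway, as covered in the list of reminders further down.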

This CSV has both the protected and recovery side information filled in, which is done via the vCenter reference.  This is important for failback operations, but it is not mandatory. 

Now that the CSV has been updated, we can upload it to SRM and have the IP specification information attached to the appropriate VMs.

dr-ip-customizer.exe -cfg ..\config\vmware-dr.xml -csv c:\example.csv -cmd apply -vc FQDN_of_The_Local_vCenter

Now you can visit one of the virtual machines you have just managed the IP Customization settings for and see what you have done.  You should see something like you see below.

[Screenshot: the IP settings now attached to the virtual machine in the recovery plan]

Some things to remember

  • Make sure you test your work.  You do this with a test failover, but also by looking at the properties of the VM in the recovery plan as seen above.
  • One IPv4 or one IPv6 address per adapter.
  • Don't empty or clear a cell in the CSV by using the spacebar!
  • The vC names used in the command line should be the same as you used in the SRM registration. 
  • Generally one row per adapter.  Multiple values, such as multiple DNS servers, would require multiple lines.
  • Adapter ID=0 is only used for global IP settings like DNS Server(s) and DNS Suffix(es).
  • IPv6 fields can be empty, but not IPv4.  If you are using IPv6 and want to leave IPv4 blank, put dhcp (not DHCP) in the IPv4 area.
  • Keep a master copy of the spreadsheet so that you have something to work in, and you can always paste the appropriate info into a different CSV file you upload.
  • The -cmd option can have the values generate (create the CSV file), apply (upload your configuration info), and drop (delete the IP information) – see the example after this list.
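As an example of that last value, removing the IP settings you previously applied would look something like this (a sketch following the same pattern as the generate and apply commands above; confirm the exact syntax against the -h output for your build):

dr-ip-customizer.exe -cfg ..\config\vmware-dr.xml -csv c:\example.csv -cmd drop -vc FQDN_of_The_Local_vCenter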

Conclusion

You have seen how to use the dr-ip-customizer command line tool to configure bulk numbers of VMs with IP customization information.  New in this release is the ability to set IP information for both the protected and recovery sides to support failback.

If you have any questions on this, please do not hesitate to leave a comment for me.  This blog was carefully tested on a pre-GA release of SRM.  If necessary I will update it for the GA release.

Michael

vSphere 5.0 Storage Features Part 11 – Profile Driven Storage

In an earlier blog around vSphere 5.0 storage features, I mentioned how one could use the capabilities of the underlying storage devices surfaced into the vCenter UI by VASA (vSphere Storage APIs for Storage Awareness), and use them to make profiles for Virtual Machine storage. In this blog, I'll dive a little deeper into this functionality, and look at this compelling feature that we are calling Profile Driven Storage.

 

Introduction

Profile Driven Storage is a feature which will allow you to easily select the correct datastore on which to deploy Virtual Machines. The selection of the datastore is based on the capabilities of that datastore. Then, throughout the life-cycle of that VM, you can check if its underlying storage is still compatible, i.e. it has the correct capabilities. This means that if the VM is cold migrated or Storage vMotion'ed, you can ensure that it moves to storage that meets its requirements. If the VM is moved without paying attention to the capabilities of the destination storage, you can still check the compliance of the VM's storage from the UI at any time, and take corrective action if it is no longer on a datastore which meets its storage requirements (i.e. move it back to a compliant datastore).

 

Part 1 – Create User-Defined Storage Capabilities

There are a number of steps to follow in order to successfully use Profile Driven Storage. Before building a Storage Profile, the storage devices on your host must have capabilities associated with them. Now, as mentioned, these can come via VASA and be associated automatically with the storage devices, or these can be user-defined and manually associated. For instance, you might like to use user-defined business tags for your storage, such as Bronze, Silver & Gold. How then do you create these user-defined capabilities? Quite easily in fact. From the vSphere UI, click on the icon labeled VM Storage Profiles:

[Screenshot: VM Storage Profiles icon on the vSphere Client home screen]
This will take you to the VM Storage Profiles view:

[Screenshot: the VM Storage Profiles view]
The next step is to start adding the user-defined storage capabilities (or business tags). To do this, you select 'Manage Storage Capabilities', and add them in. If we stick with the gold/silver/bronze example, here is how I would create a 'Bronze' user-defined storage capability.

[Screenshot: creating the 'Bronze' user-defined storage capability]

If I continue creating additional storage capabilities, I can use them to classify my different types of storage.

[Screenshot: all user-defined storage capabilities created]
Remember this is just one example; you can use other capabilities to define your storage too. Note that if capabilities were being surfaced by VASA, they would appear here in this "Manage Storage Capabilities" view automatically.

 

Part 2 – Create a VM Storage Profile

At this point, my user-defined storage capabilities are created. The next step is to create a storage profile. To create a profile, select the option "Create VM Storage Profile" in the VM Storage Profiles view seen earlier. First give it a name and description, and then select the storage capabilities for that profile:

[Screenshot: creating a VM Storage Profile and selecting its storage capability]
You can make a number of different profiles. For my example, I created three, one for each tier of storage, and each containing a different capability (Bronze, Silver & Gold):

[Screenshot: the Gold, Silver and Bronze VM Storage Profiles]
 

Part 3 – Add the User-Defined Capability to the Datastore

The capabilities are now defined & the VM Storage Profiles are created. The next step is to add the capabilities to the datastores. This is a simple point & click task. Simply right click on the desired datastore and select the option "Assign User-Defined Storage Capability…":

[Screenshot: assigning a user-defined storage capability to a datastore]
In the Summary tab of the datastore, a new window called Storage Capabilities now displays both System Storage Capabilities (VASA) and User-defined Storage Capabilities. The bubble icon next to the capability will display additional details:

[Screenshot: Storage Capabilities window on the datastore Summary tab]
 

Part 4 – Using the VM Storage Profile

At this point, the profile is created and the user-defined capabilities are added to the datastore. Now we can use the profile to select the correct storage for the VM. The profile is automatically attached to the VM during the deployment phase. Later, we can check if the datastore on which the VM is placed has the same capabilities as the profile. If it does, then the VM is said to be compliant. If it does not, the VM is said to be non-compliant.

VM Storage Profiles can be used during deployment or during migrations, or can be attached on-the-fly. In this example, I am deploying an OVF Template, and when it comes to storage selection, you choose a particular profile from the list of profiles:

[Screenshot: selecting a VM Storage Profile during OVF deployment]
Let's pretend that this is a mission critical VM for me, so I am going to put it on my Gold tiered storage. I select the "Gold" VM Storage Profile from the list. See what happens to my storage selection:

[Screenshot: datastores split into Compatible and Incompatible]

Notice the way that the datastores are now split into Compatible & Incompatible. The Compatible datastores are those which have the same storage capabilities as those defined in the profile called 'Gold'. Only one datastore (VSADs-2) has this capability; none of the other datastores do. However, you can still choose to deploy this VM onto one of the Incompatible datastores if you wish. All it means is that the VM will show up as Incompatible in the UI when checked.

 

Part 5 – Checking Compatibility

OK, at this point we have seen how to associate the VM with the correct storage at initial deployment time using VM Storage Profiles. But during the life-cycle of a VM, it could be migrated to other storage. How do I tell if it is still compliant? Well, that's easy, as there are a number of built-in mechanisms for checking the compliance of individual VMs or multiple VMs.

To check individual VMs, simply go to the Summary tab of the VM, and you'll see a new VM Storage Profiles window which will indicate if the VM is compliant or not. Here are some sample screen-shots:

[Screenshot: a VM shown as compliant]

[Screenshot: a VM shown as noncompliant]

However it would be tedious to check all of these individually. Therefore if you go back into the VM Storage Profiles view, you can check all VMs per profile in one place. Here I have one VM which is compliant, and another which is not, but I can see this from a single view:

[Screenshot: compliance check of all VMs associated with a profile]

 

Common questions

Q1. Are Profile Driven Storage & Storage DRS complementary?

A common question I get when I present on new vSphere 5.0 storage features is whether Profile Driven Storage & Storage DRS can work together. The answer is 'absolutely'. Just ensure that all datastores in the Storage DRS datastore cluster have the same capabilities, and you're good to go. Instead of presenting a single datastore as compatible, a datastore cluster will now be presented as compatible (or indeed incompatible). Here is a datastore cluster shown as compatible:

[Screenshot: a datastore cluster shown as compatible]

Q2. Can multiple profiles be associated with the same VM?

This is another common question, and the answer is of course, yes you can. If you have a VM with multiple VMDKs, you may associate different profiles with the individual VMDKs so that each is compliant with a certain storage type.

So there you have it. Not only do VM Storage Profiles allow you to select the correct datastore for VM placement each and every time, but throughout the lifetime of the VM, you can continually check to make sure that it is running on its proper storage and that it hasn't been moved somewhere it shouldn't be. I can see this feature being a big hit.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage