
Monthly Archives: February 2010

Top 5 Planet V12n blog posts week 08

Creating a top 5 seems to be getting more difficult week after week. Not only does the quality of the blog articles keep increasing, but the number of blogs listed on Planet V12n and the number of articles keep growing steadily as well. I hope I can keep up with you guys, or I might just need to get a cute assistant to help me out with this… Hmmm, that's actually not a bad idea. Anyway, here's the list!

  • Eric Gray – Taking snapshots of VMware ESX 4 running in a VM
    Clearly, the capability introduced with VMware vSphere 4 that allows VMware ESX 4 to virtualize itself is a real crowd-pleaser.
    However, one limitation that some have discovered while using this lab-testing technique is the inability to use snapshots with virtual ESX systems. In fact, after taking a snapshot of a virtual ESX VM, you will see the system boot into the recovery shell.
  • Kenneth van Ditmarsch – Using LeftHand Snapshot techniques within a VMware Environment
    Well, currently no integration exists between the LeftHand Snapshot
    technique and vCenter. If the LeftHand Snapshot process is started, vCenter isn’t alerted to quiesce the VMs, so the VMs continue processing while a LeftHand
    Snapshot is made, which leads to inconsistent VM states. Last year the LeftHand roadmap indicated that vCenter application integration would be available in the new SAN/iQ 8.5. SAN/iQ 8.5 currently ships with the HP/LeftHand P4000 G2 nodes and will be available for download on the 29th of March for existing P4000 users. For some reason, however, vCenter application integration has been pushed back to Q4 2010 or later.
  • Steve Kaplan – The multi-hypervisor fallacy
    Implicit in multi-hypervisor advocacy is an undertone of virtualizing
    servers rather than the data center. This myopic perspective
    limits both savings and synergies. Cisco studies, for example, show that a
    lack of vNetwork capability results in 30% fewer servers that can be
    virtualized, along with 30% higher administrative requirements. Network
    administrators have no way to monitor traffic over a vSwitch for
    compliance, auditing and troubleshooting purposes, and they cannot
    apply network and security policies that follow a VM as it
    live-migrates. Since only vSphere enables vNetwork capabilities,
    multiple hypervisors leave at least a portion of the data center
    running less efficiently and less securely.
  • Steve Chambers – IT Departments and the Collapse of the Silos
    Today I had the opportunity to present at the National Computing Center Think Tank. The NCC have a fantastic remit to bring together practitioners from the private and public sector to explore the current realities. Add to this the vendor invitations, where folks like me can share our observations with no axe to grind, and it makes for a really great discussion. Awesome stuff. Prior to this invitation I prepared two documents. First, I wrote a blunt paper based on my observations and feedback via Twitter. Second, I created a Prezi to share those findings in ten pieces.
  • Craig Risinger – The Resource Pool Priority-Pie Paradox
    We run into this on a daily basis: misunderstanding of the “shares”
    concept in combination with resource pools. To start with a bold
    statement: a few VMs in a Low-shares Resource Pool can outperform each
    of many VMs in a High-shares Resource Pool. How is this possible, you
    might ask? Resources are divided at the Resource Pool level first. Each
    Resource Pool is like a pie whose size determines the amount of resources
    usable (during contention). Then that pie is subdivided among the VMs
    in the pool. A Resource Pool applies to all its VMs collectively. Thus
    a smaller pie divided among fewer VMs can yield more resources per VM
    than a larger pie divided among even more VMs. (A quick worked example
    follows this list.)
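
To make the pie arithmetic in Craig's post concrete, here is a minimal sketch with made-up numbers; the pool names, share values and VM counts below are hypothetical assumptions, not figures from his article:

    # Hypothetical example: resources are divided at the resource pool level
    # first, then each pool's slice is split among the VMs inside it
    # (during contention).

    pools = {
        # name: (pool shares, number of VMs in the pool)
        "Low":  (1000, 2),    # Low-shares pool with only 2 VMs
        "High": (4000, 20),   # High-shares pool with 20 VMs
    }

    total_shares = sum(shares for shares, _ in pools.values())

    for name, (shares, vm_count) in pools.items():
        pool_fraction = shares / total_shares        # slice of the cluster pie
        per_vm_fraction = pool_fraction / vm_count   # slice each VM ends up with
        print(f"{name}: pool gets {pool_fraction:.0%}, each VM gets {per_vm_fraction:.1%}")

    # Low: pool gets 20%, each VM gets 10.0%
    # High: pool gets 80%, each VM gets 4.0%

The two VMs in the "Low" pool each end up with a bigger slice than any of the twenty VMs in the "High" pool, which is exactly the paradox Craig describes.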

Top 5 Planet V12n blog posts week 07

Let me start by congratulating two well-known community members on achieving the VCDX certification. Congrats Jason Boche and Scott Lowe, well done. These two guys just received the news that they passed the final stage, and I have already been preparing the upcoming VCDX Defense Panels in Munich. The upcoming week is an exciting one for me personally, as I'm shifting job roles… As of Monday I will be a vCloud Architect for VMware Advanced Services. My focus, in terms of blogging, will remain the same but will of course include more cloud-related topics. But enough introduction blabla, let's start digging into the top 5:

  • Frank Denneman – Impact of host local VM swap on HA and DRS
    This rule also applies when migrating a VM configured with a host-local
    VM swap file, as the swap file needs to be created on the local VMFS
    volume of the destination host. Besides creating a new swap file, the
    swapped-out pages must be copied to the destination host. It’s not
    uncommon for a VM to have pages swapped out, even if there is no memory
    pressure at that moment. ESX does not proactively return swapped pages
    back into machine memory. Swapped pages stay swapped; the VM needs to
    actively access a page in the swap file for it to be transferred back
    to machine memory, and this only occurs if the ESX host is not under
    memory pressure (more than 6% free physical memory). (A rough
    back-of-the-envelope sketch follows this list.)
  • Jason Boche – My VCDX Defense Experience
    During the days leading up to my defense, I felt very confident.  I had
    been studying my design and going over all the Enterprise Admin and
    Design exam study material on a daily basis.  I had been brushing up on
    white papers and blog articles for areas which I felt I was weak on or
    had forgotten details of.  I brought a 3 ring binder filled with about
    400 pages of documentation as well as every VI3 published .pdf known to
    mankind on my thumb drive.  While I didn’t read all the .pdf files,
    they were with me if I needed them for reference.  As it turned out, a
    few of the documents I crammed on the night before my panel would play
    a nice role during part of my defense.
  • Scott Sauer – Performance troubleshooting VMware vSphere CPU, Memory
    Watch pCPU0 on non-ESXi hosts.  If pCPU0 is consistently saturated,
    this will negatively impact performance of the overall system.  If you
    are using third-party agents, ensure they are functioning properly.  A
    couple of years ago we had issues with HP System Insight Management
    agents (the Pegasus process), which were creating a heavy load on our COS. 
    All of the virtual machines looked fine from a performance perspective,
    but once we dug a little bit deeper, we discovered this was our root
    cause.
  • Gabrie van Zanten – Converting vscsiStats data into Excel charts
    Some time ago I wrote a posting on how to use vscsiStats to gather even more data from your VMs and their SCSI performance (see: Using vscsiStats – the full how-to). Last week I received an e-mail from Paul Dunn, who had written an Excel macro that can read the output from the vscsiStats exported csv file and convert it into Excel histograms. Using the macro is very straightforward. First you let vscsiStats run for a while and have it export the data to a csv file, for example with the following command (do pay attention to the single capital S in vscsiStats):
    /usr/lib/vmware/bin/vscsiStats -p all -w id -c > /root/vscsiStats-export.csv
  • Simon Gallagher – The Computing Super-Powers are Aligning Their Stacks
    With HP’s recent acquisition of 3Com and their existing HP ProCurve
    range, I would hazard a guess that they will stop selling Cisco blade
    switches in future. I also note from an email that all HP partners got
    this week that all Cisco-manufactured blade switch components were
    facing supply issues, stoking the fires somewhat for resellers to push
    the HP product with some choice anti-Cisco FUD which I won’t repeat
    here.
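
To make Frank's point about host-local swap files more concrete, here is a rough back-of-the-envelope sketch; the swapped amount, link speed and efficiency figure are hypothetical assumptions, not numbers from his post:

    # Hypothetical estimate of the extra vMotion work caused by a host-local
    # swap file: besides creating a new swap file on the destination host,
    # the swapped-out pages also have to be copied across the vMotion network.

    swapped_gb   = 2      # pages currently swapped out to the .vswp file (assumed)
    link_gbit_s  = 1      # vMotion network speed in Gbit/s (assumed)
    efficiency   = 0.8    # assume roughly 80% usable throughput on the link

    usable_gb_per_s = link_gbit_s * efficiency / 8   # Gbit/s -> GB/s
    extra_seconds = swapped_gb / usable_gb_per_s

    print(f"Extra data to copy: {swapped_gb} GB of swapped pages")
    print(f"Rough extra migration time: {extra_seconds:.0f} seconds")
    # -> about 20 extra seconds on top of copying the VM's active memory

Even a modest amount of swapped-out memory adds noticeable time to every migration, which is part of the impact on DRS and HA that Frank describes.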

Top 5 Planet V12n blog posts week 06

VMware PEX 2010 was great… but it did mean I was extremely busy and didn't have time to create the top 5 the way I normally do. Instead I just picked the five best reads of the week. Check it out:

  • Jason Boche – My VCDX defense experience
    The first 75 minutes are spent “defending” my design.  I’ve got about a
    15-slide deck to get through and to use as a reference throughout the
    design defense.  I’d highly recommend putting as much reference material
    as you can in the slide deck, which you can refer to yourself during the
    defense.  It will help illustrate design choices and jog your memory
    for design elements which you’ve forgotten due to nervousness. The
    first 5-10 minutes I was pretty nervous and stuttered once or twice
    during my presentation. After that, I warmed up and it felt more like a
    good technical discussion with co-workers which I enjoyed.
  • Mike La Spina – Running ZFS over NFS as a VMware Store
    In this architecture we are defining a fault-tolerant configuration
    using two physical 1GbE switches with quad or dual Ethernet
    adapters. On the OpenSolaris storage head we are using IPMP (IP
    Multipathing) to establish a single IP address to serve our NFS store
    endpoint. A single IP is more appropriate for VMware environments as
    they do not support multiple NFS IP targets per NFS mount point.  IPMP
    provisions layer 3 load balancing and interface fault tolerance. IPMP
    commonly uses ICMP and default routes to determine interface failure
    states, thus it is well suited for a NAS protocol service layer. In an
    effort to reduce excessive ICMP rates we will aggregate the two dual
    interfaces into a single channel connection to each switch. This will
    allow us to define two test IP addresses for the IPMP service and keep
    our logical interface count down to a minimum. We are also defining a
    2-port trunk/aggregate between the two physical switches, which provides
    more path availability and reduces switch failure detection times.
  • Hany Michael – vSphere In Motion: A Real-World Live Migration Scenario
    I was having a discussion with one of the large enterprises here in
    Qatar lately, and I was quite surprised to learn that they are
    hesitant to migrate their VI3.5 environment to vSphere because of the
    associated downtime. What surprised me was not the fact that they can't
    afford downtime; I've spent 6 years of my career working in the
    Telecom sector and I know for a fact that 1 second of downtime can
    mean a disaster, or even translate to a loss of thousands of dollars. What
    surprised me was that they didn't know that it is possible to do this
    migration without any downtime!
  • Scott Drummonds – Inaccuracy of In-guest Performance Counters
    Every couple of months I receive a request for an explanation as to why performance counters in a virtual machine cannot be trusted. While it is unfairly cynical to say that in-guest counters are never right, accurate capacity management and troubleshooting should rely on the counters provided by vSphere in either vCenter or esxtop. The explanation is too short to merit a white paper but I hope a blog article will serve as the authoritative comment on the subject.
  • Bouke Groenescheij – Removevmhba
    Today I've updated the popular removevmhba script to version 5.0. This version now includes the removal of the drivers in vSphere ESX 4.0 Update 1 ISOs. Thanks to Dinny Davies, who did excellent work again on finding a solution for removing them on vSphere ESX 4 (he just beat me to it). Check the original ESX 3.x.x version here, and the new ESX 4.x.x document here. Go ahead, grab removevmhba from the downloads section and give it a try. It removes the drivers only during installation, so you don't need to bother disconnecting your SAN or zoning anything out during installation (both Emulex and Qlogic, and also hardware-initiated iSCSI adapters). It's much safer for a scripted installation of ESX using the UDA or EDA. After the installation you will still have the drivers (since they are installed as a package), so you will get your SAN connection back.

Introducing the VMware Express: hands-on virtual desktops coming to your town

Today VMware is proud to unveil the VMware Express during its inaugural stop at the 2010 VMware Partner Exchange in Las Vegas, NV.  This state-of-the-art mobile datacenter, demo environment and briefing center has been built to bring VMware solutions directly to our customers across the USA and Canada during the 2010 Virtualization Tour. The VMware Express is sponsored by Cisco, EMC, Dell, MDS, NetApp, Xsigo, ChipPC, Amulet Hotkey and Teradici.

[Image: VMware Express truck]

There are 5 demo stations covering both VMware desktop and server virtualization solutions. Customers will have the unique opportunity to get hands-on and dig deep into the solutions with VMware experts. There are demos highlighting the following products and solutions:

VMware View

  • Best User Experience – Highlighting the power of the PCoIP display protocol to deliver a rich user experience, perfectly adapted for the network connection and end-point device.

  • Follow-Me Desktop – Enabling immediate access to desktops, applications and data while ensuring a consistent user experience across sessions and endpoint devices.

  • Access Across Boundaries – Providing access to desktops, applications and data anytime, anywhere regardless of network availability.

  • Windows 7 Migration – Reducing the costs and complexity associated with desktop and application migration.


VMware vSphere

  • The industry’s most reliable platform for datacenter virtualization offering the highest levels of availability and responsiveness for all applications and services.  Optimize IT services and deliver the highest levels of application service agreements with the lowest total cost per application workload by decoupling your business-critical applications from the underlying hardware for unprecedented flexibility and reliability.

vCenter Server

  • Learn about this scalable and extensible platform that forms the foundation for virtualization management with the family of vCenter products including CapacityIQ, AppSpeed, Chargeback and many more focused on providing advanced operational controls.

Customers will not only benefit from being able to see and interact with multiple VMware products in one place but can also take advantage of the conference room where they can have deep dive conversations with VMware solution experts. Leaving the VMware Express, visitors will have an improved understanding of the VMware Desktop partner eco-system, VMware solutions, and how they are positioned to address today’s technical and business requirements.

The VMware Express is letting us reach customers like never before and is ready to roll to industry and partner events as well as customer sites bringing VMware solutions directly to the customer.  Don’t miss your opportunity to catch the VMware Express on the 2010 Virtualization Tour as it crosses the U.S. and Canada coming to a location near you.  Learn more and keep up to date by going to http://www.vmware.com/tour

Top 5 Planet V12n blog posts week 05

For a lot of people it has been a crazy week. Some of you might wonder why; some of you know what I'm talking about: VMware Partner Exchange 2010. With PEX coming up, for many of you that means GTJD. GTJD? Yeah, Getting The Job Done! Being away for a week in my case means I need to wrap up projects and answer a lot of emails before things get out of control. That doesn't, however, mean that I don't have time to create a top 5… This week's list contains the all-star bloggers:

  • Scott Drummonds – PVSCSI and Low-IO Workloads
    At low IOPS the CPU is doing very little work to access storage
    hardware.  In these environments it is simply not worth anyone’s time
    to implement and use a special storage driver.  But when 10-50k
    IOPS are streaming through the virtual SCSI bus, a new approach that
    halves the number of cycles spent on each IO will noticeably decrease
    CPU utilization.  This is why we created PVSCSI. The current design of
    PVSCSI coalesces interrupts based on outstanding IOs (OIOs) only, and not
    throughput.  This means that when the virtual machine is requesting a
    lot of IO but the storage is not delivering, the PVSCSI driver is
    coalescing interrupts.  But without the storage supplying a steady
    stream of IOs there are no interrupts to coalesce.  The result is a
    slight increase to latency with little or no efficiency gain.
  • Frank Denneman – Sizing VMs and NUMA nodes
    ESX is NUMA-aware and will use the NUMA CPU scheduler when it detects a
    NUMA system. On non-NUMA systems the ESX CPU scheduler spreads load
    across all sockets in a round-robin manner. This approach improves
    performance by utilizing as much cache as possible. When using a
    vSMP virtual machine on a non-NUMA system, each vCPU is scheduled on a
    separate socket. (A small sizing sketch follows this list.)
  • Jason Boche – Configure VMware ESX(i) Round Robin on EMC Storage
    The answer was buried on page 88.  The nmp roundrobin setting useANO is
    configured by default to 0 which means unoptimized paths reported by
    the array will not be included in Round Robin path selection unless
    optimized paths become unavailable.  Remember I said early on that
    unoptimized and optimized paths reported by the array would be a key
    piece of information.  We can see this in action by looking at the
    device list above.  The very last line shows working paths, and only
    one path is listed for Round Robin use – the optimized path reported by
    the array.
  • Scott Lowe – Using IP-Based Storage with VMware vSphere on Cisco UCS
    From the VMware side of the house, since you’re using 10GbE end-to-end,
    it’s very unlikely that you’ll need to worry about bandwidth; that
    eliminates any concerns over multiple VMkernel ports on multiple
    subnets or using multiple NFS targets so as to be able to use link
    aggregation. (I’m not entirely sure you could use link aggregation with
    the 6100XP interconnects anyway. Anyone?) However, since you are
    talking Cisco UCS you’ll have only two 10GbE connections (unless you’re
    using the full width blade, which is unlikely).
  • Gabe – Licensing problems with VMware VIEW4
    The problem Jon was facing was that it was impossible to just add
    those 20 (2×10) licenses to vCenter without assigning them to a host,
    because, in our belief, there should just be 20 licenses in some sort
    of pool from which each VDI VM would take one license. It is possible to
    assign multiple hosts to one license so they can share the number of
    available VMs in that license. What you can’t do is have a host connect
    to more than one license, which in our opinion should also be possible.
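
Following up on Frank Denneman's NUMA post above, here is a small sizing sketch; the node sizes and VM configurations below are hypothetical assumptions used only to illustrate the idea:

    # Hypothetical NUMA sizing check: a VM whose vCPU count or memory footprint
    # exceeds a single NUMA node cannot be placed entirely within one node,
    # which means some of its memory accesses become slower remote accesses.

    cores_per_node  = 4    # physical cores per NUMA node (assumed)
    memory_per_node = 24   # GB of local memory per NUMA node (assumed)

    def fits_in_one_node(vcpus, memory_gb):
        """Return True if the VM can be scheduled entirely within one NUMA node."""
        return vcpus <= cores_per_node and memory_gb <= memory_per_node

    for vcpus, mem in [(2, 8), (4, 24), (6, 16), (4, 32)]:
        verdict = "fits in one node" if fits_in_one_node(vcpus, mem) else "spans nodes"
        print(f"{vcpus} vCPUs / {mem} GB -> {verdict}")

    # 2 vCPUs / 8 GB -> fits in one node
    # 4 vCPUs / 24 GB -> fits in one node
    # 6 vCPUs / 16 GB -> spans nodes
    # 4 vCPUs / 32 GB -> spans nodes

Keeping a VM's vCPU count and memory within the size of a single NUMA node lets the scheduler keep its memory accesses local, which is the sizing consideration Frank's post discusses.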