
Monthly Archives: November 2009

Top 5 Planet V12n blog posts week 48

For me personally this was a great week as the "vSphere Quick Start Guide" has finally been officially released via Amazon. We've already sold roughly a thousand copies, including the preview copies, which is a great success! Besides that, the long-awaited VMware View 4.0 was released this week, so I expect to see a lot of View-related blog articles in next week's top 5, as most bloggers are exploring it this week.

Edit: This week's Top-5 is a Top-6. Somehow Mike's excellent upgrade article slipped through the cracks, and it deserves to be part of this list.

  • Mike Laverick – A Strange and Terrible Saga: Fear & Loathing – Upgrading from vSphere4.0 to vSphere4 U1
    Well, it’s that time already – a new update to VMware’s flagship virtualization platform – vSphere4. I probably would not have upgraded from 4.0 to 4.0 Update 1 if it hadn’t been for the almost simultaneous release of View4 and the eagerly awaited PCoIP protocol. One of View4’s pre-requisites is vSphere4 U1. Despite this I started off with a deploy of View4 on vSphere4.0 to see how hard and fast that “pre-requisite” was. In truth I was a bit nervous (perhaps more than normal) about this Update 1 roll-out. You see, I’m right in the middle of updating my SRM 1.0 book to be an SRM 4.0 book. I’ve spent some weeks playing with storage (EMC Clariion/Celerra, NetApp FAS and HP LeftHand P4000 VSA), and I was just about to update the chapters concerning recovery plans when along came this update from VMware. I’m sure it’s been thoroughly tested – but my fear was that upgrading vCenter might make my SRM 4.0 build go a little wobbly. As you will see if you keep on reading, my fears weren’t completely groundless.
  • Steve Kaplan – Array-based backup advantages in a VMware virtual infrastructure
    The primary challenge faced by software backup solutions is their
    inability to offload impact from hosts, a capability that becomes
    imperative as VM consolidation ratios increase. Applications relying on
    host CPU cycles and disk IO to facilitate backup compete for those
    shared resources with the production workload.
  • Scott Lowe – Understanding NPIV and NPV
    Two technologies that seem to have come to the fore recently are NPIV
    (N_Port ID Virtualization) and NPV (N_Port Virtualization). Judging
    just by the names, you might think that these two technologies are the
    same thing. While they are related in some aspects and can be used in a
    complementary way, they are quite different. What I’d like to do in
    this post is help explain these two technologies, how they are
    different, and how they can be used. I hope to follow up in future
    posts with some hands-on examples of configuring these technologies on
    various types of equipment.
  • Eric Siebert – The mechanics of VMware Go
    VMware defines an SMB as a company with fewer than 1,000 employees and recognizes that these businesses have the same challenges as larger enterprises, but more constraints on IT staffing and resources. VMware Go was announced at VMworld 2009 and is currently in beta. Go is considered a cloud-based application. It was developed in partnership with Shavlik Technologies (which also provided VMware its Update Manager technology) and is a tool that provides assistance with a variety of functions, including implementation of ESXi, physical-to-virtual (P2V) conversions and patching of ESXi hosts.
  • Jakob Fabritius – VLAN testing in ESX 3.5
    The traditional way of testing is to create a vSwitch with only one vmnic connected. Then connect a VM on that vSwitch with one of the VLANs. Configure an IP address in the address space of the VLAN and ping the gateway. Do this for all the VLANs, then connect the next vmnic to the vSwitch and start over. The following method speeds up VLAN testing significantly (in this case from 100 to 16 test cases). It is not totally automated, but I have found it very useful nonetheless. A minimal sketch of the per-VLAN ping check is included after this list.
  • Tom Finnis – Planning for vSphere: Key Considerations for a Successful Deployment
    To begin, when considering a vSphere deployment, there are two basic areas where you need to assess your requirements: virtual machine resource capacity and vSphere features such as high availability. As well as the immediate needs, you also need to consider what they will be in the future, as a small additional outlay now could save you having to spend a much bigger sum a year down the line. One of the key benefits of virtualization is that you can separate your software upgrade cycle from your hardware upgrades, so you don't have to worry about whether you should update your server OSes when you virtualize them. All the same, you do need to consider what the resource requirements will be if you undertake a software upgrade cycle 18 months later, as new software almost invariably requires more resources than its predecessor.
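
A quick aside on Jakob's VLAN testing post above: the traditional check he describes boils down to a loop of ping tests, one per VLAN. Below is a minimal sketch of that loop in Python; the VLAN IDs, gateway addresses and the ping invocation are illustrative assumptions, not taken from his article.

    # Minimal sketch of the "traditional" per-VLAN test loop described above.
    # The VLAN-to-gateway mapping and the ping flags are assumptions for illustration.
    import subprocess

    vlan_gateways = {
        100: "10.0.100.1",   # hypothetical VLAN ID -> gateway address
        101: "10.0.101.1",
        102: "10.0.102.1",
    }

    def gateway_reachable(gateway, count=2, timeout_s=2):
        """Return True if the gateway answers ping (Linux 'ping' syntax assumed)."""
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout_s), gateway],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    for vlan_id, gateway in vlan_gateways.items():
        status = "OK" if gateway_reachable(gateway) else "FAILED"
        print(f"VLAN {vlan_id}: gateway {gateway} {status}")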

Top 5 Planet V12n blog posts week 47

It was very tough to pick a top 5 this time as most posts this week were about vSphere Update 1 and View 4. But I did manage to find 5 excellent articles again. Make sure you read them:

  • Scott Sauer – More Bang for Your Buck with PVSCSI (Part 1)
    So let’s first find out if it’s all that. We need to do some testing to validate the hype. I created two virtual machines, one with the traditional LSI Logic SCSI driver, and one with the new PVSCSI driver. The host is the same for each VM: a 4-socket Intel Xeon system with 64 GB of RAM, connected to EMC Clariion CX3-80 storage. The RAID configuration is a 4+1 RAID 5 set (10K spindles), with the default Clariion Active/Passive MRU setup (no PP/VE). Each VM has 2 vCPUs and 4 GB of RAM, and both are running 32-bit Microsoft Windows 2003 R2. Both virtual machines’ data disks were formatted using diskpart and the tracks were correctly aligned. Anti-virus real-time scanning was disabled on both systems. This test is meant to get as close as possible to a standard configuration that we can benchmark from.
  • Arnim van Lieshout – Geographically dispersed cluster design
    Let’s take it back one step and have a look at an active-passive setup. These setups have some sort of storage replication in place. The most common design I encounter is shown in figure 1. In the main datacenter there’s an ESX cluster with some sort of SAN-based replication/mirroring to a second datacenter. In the second datacenter there is a passive ESX cluster available to start up the virtual servers in case of disaster. Let’s use this setup as a starting point and turn this active-passive into an active-active setup.
  • Andre Leibovici – Your Organization’s Desktop Virtualization Project – Part 3
    At the time this solution was designed, the number of users per CPU core could range from 3.8 to 4.2; however, for most VDI deployments using new processors (Intel Nehalem 5500 and AMD Phenom II) this number can be around 6.0 per CPU core, allowing up to 100 virtual desktop machines in a single dual-quad server.
  • Scott Drummonds – Another Day, Another Misconfigured Storage
    You will have to size your storage to peak, to average, or somewhere in between. If you size to the average, you are counting on the peaks occurring at different times. If you are wrong, when two workloads peak simultaneously, a bottleneck will form at the array. Also note that sizing to the average in this case (350 IOPS) is insufficient for VM C’s peak of 400 IOPS. You could size to the aggregate peak of 1200 IOPS but unless all of the virtual machines peaked at once the workloads would never consume the available bandwidth.
    All you can do in this case is make a best guess and modify later, as needed. I often suggest that a good start is one third of the way from average to peak, which equals 633 IOPS in this case. If we assume 150 IOPS per spindle, that means five spindles for this VMFS volume. A quick sketch of that arithmetic follows after this list.
  • Luc Dekens – Scripts for Yellow Bricks’ advise: Thin Provisioning alarm & eagerZeroedThick
    This script will convert an existing thick VMDK to eagerZeroedThick. As you can read in Duncan’s blog entry there is a serious performance improvement to be obtained by doing this.
    Note that the guest needs to be powered off to be able to do the conversion! This is in fact the case for most of the VirtualDiskManager methods. See also my Thick to Thin with PowerCLI and the SDK entry.
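
As a footnote to Scott Drummonds' sizing advice above, the "one third of the way from average to peak" suggestion is easy to sanity-check. A quick sketch using the numbers from his example (350 IOPS average, 1200 IOPS aggregate peak, 150 IOPS per spindle):

    # Sizing one third of the way from average to peak, per Scott's rule of thumb.
    import math

    average_iops = 350       # aggregate average of the workloads
    peak_iops = 1200         # aggregate peak if every VM peaked at once
    iops_per_spindle = 150   # assumed per-spindle capability

    target_iops = average_iops + (peak_iops - average_iops) / 3
    spindles = math.ceil(target_iops / iops_per_spindle)

    print(f"target: {target_iops:.0f} IOPS")               # ~633 IOPS
    print(f"spindles for this VMFS volume: {spindles}")    # 5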

Top 5 Planet V12n blog posts week 46

It was a normal week again. No exciting announcements, just business as usual. Luckily there are always bloggers who publish articles with refreshing views, new technical details or old technical details overhauled. It wasn't difficult to pick this week's top-5; each article I selected stands out for a specific reason. Read them and you'll know what I mean:

  • Frank Denneman – NFS and IP-HASH loadbalancing
    The result of this calculation is 1 (one). The VMkernel chooses the second uplink because it has the same binary representation as the hash, thereby balancing outbound NFS traffic across the two uplinks.
    Using IP-Hash to load-balance is an excellent choice, but you do need to fulfill certain technical requirements to get it supported by VMware, and plan your IP-address scheme accordingly to get the most out of this load-balancing policy. A small sketch of the hash calculation follows after this list.
  • Steve Chambers – The end is nigh for Protocol Passionistas
    In the IT world you meet professionals (small p) who have grasped hold of technologies and defend them like their (professional) life depended on it. You don’t have to look far for this in virtualization with VDI desktop protocols (ICA vs. RDP vs. PCoIP etc) or storage protocols (NFS vs. iSCSI vs. FC). Just walk around any data center with one of these professionals and ask them “Why did you choose ” and it’s like you are asking why they chose their wife, like there’s some kind of inferred criticism, like questions and inquisitiveness are bad. Why is this? When the defensive attitude is related to protocols, I negatively refer to these professionals as Protocol Passionistas.
  • Jason Boche – Tame Electrical and Heating Costs with CPU Power Management
    A casual Twitter tweet about my power savings through the use of VMware Distributed Power Management (DPM) found its way to VMware Senior Product Manager for DPM, Ulana Legedza, and Andrei Dorofeev. Ulana was interested in learning more about my situation. I explained how VMware DPM had evaluated workloads between two clustered vSphere hosts in my home lab, and proceeded to shut down one of the hosts for most of the month of October, saving me more than $50 on my energy bill.
    Ulana and Andrei took the conversation to the next level and asked me if I was using vSphere’s Advanced CPU Power Management feature (see vSphere Resource Management Guide, page 22). I was not; in fact, I was unaware of its existence. Power Management is a new feature in ESX(i)4 available to processors supporting Enhanced Intel SpeedStep or Enhanced AMD PowerNow! power management technologies.
  • Maish Saidel-Keesing – Patching your ESXi Host – Without vCenter
    VMware Update Manager is the enterprise tool for patching your ESX hosts and, for some, also the tool used to patch your Windows/Linux guests.
    This is all fine and dandy, but what if you do not have all of your ESXi hosts connected to your vCenter?
    Why would you do that, you may ask? Well, in my environment we have several labs that are running their environment on an ESXi whitebox, with the free ESXi license.
  • Simon Long – Testing Network throughput between VMware ESX Hosts
    Have you ever wanted to check your network throughput between your ESX hosts, or even between VMs? Well I needed to do this, and I couldn’t find any straightforward how-tos.
    Having been pointed in the direction of a simple application called IPerf by Simon Gallagher, I opted to use the Windows version. I’m not great with Linux, and as this is an open source application, documentation is a little hard to come by. So for me, this post is also to remind me how IPerf works should I need to use it again.
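
On Frank Denneman's IP-Hash post above: the uplink choice comes down to a small calculation over the source and destination IP addresses. Here is a sketch of one commonly documented form of that calculation (XOR the two addresses and take the result modulo the number of active uplinks; some write-ups use only the least significant octet, which gives the same answer when there are two uplinks). The addresses below are made up for illustration and are not from Frank's article.

    # Sketch of IP-hash uplink selection: XOR the source and destination
    # addresses, then take the result modulo the number of active uplinks.
    # Example addresses are invented for illustration.
    import ipaddress

    def select_uplink(src_ip, dst_ip, uplink_count):
        src = int(ipaddress.IPv4Address(src_ip))
        dst = int(ipaddress.IPv4Address(dst_ip))
        return (src ^ dst) % uplink_count

    # One VMkernel interface talking to two NFS server addresses over two uplinks
    for dst in ("192.168.10.21", "192.168.10.22"):
        print(dst, "-> uplink", select_uplink("192.168.10.11", dst, 2))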

Top 5 Planet V12n blog posts week 45

It was an exciting week. For some the VCE announcement was not a real surprise; for many it seemed to be. As always, some were skeptical and others were enthusiastic about this new initiative. The first post in this Top 5 covers every single aspect; keep in mind that Chad is an EMC employee. I can also recommend the articles by Chuck Hollis on this topic, but as he is not part of Planet V12n he did not make the top 5:

  • Chad Sakac – VCE Coverage: Post 1, Post 2, Post 3, Post 4, Post 5, Post 6
    Let’s focus on the “Vblock” management layer. To restate the challenge – the goal is to have a thing that enables utility-like management of a Vblock (or more importantly a series of them), including server + LAN/SAN network (UCS Manager does this well for one UCS system) + storage itself. As with all things in the VMware, Cisco, EMC consortium, we know customers need choice – and any one element is replaceable. The value proposition is that the things we build are so tightly focused, so tightly integrated, that if you are looking at something like this – the integration value is so high it’s nearly irresistible.
  • Alan Renouf – Virtu-Al VESI & PowerGUI PowerPack & vCheck v3
    I have been teasing people on twitter for a week or so now and have just uploaded my PowerPack to the PowerGUI site; you can download it here. This is a first attempt at providing most of my scripts in one PowerPack and adding to the already great management that VESI and PowerGUI give you.
  • Andre Leibovici – Your Organization’s Desktop Virtualization Project – Part 1 & Part 2
    I would anticipate that when your CAPEX is calculated for the next 5 years after the adoption of desktop virtualization, your CIO and CEO will not be very impressed by the numbers alone, especially if you have incorporated the acquisition of thin clients into your CAPEX. If you are looking for a justification to adopt desktop virtualization, you should focus on your OPEX and the cost savings coming from lower operating cost/TCO, power and cooling energy savings and increased seat utilization, when applicable.
  • Mike Laverick – Virtual Compute Environment – VMware, Cisco and EMC Coalition
    So here’s my attempt. It seems to be the case that, whether you like it or not, we are creeping steadily away from a best-of-breeds approach to building out datacenters. Everyone yaks endlessly about the commoditization of IT – and it’s happening right before our eyes. Each of the major OEMs – HP, IBM, Dell – has for some time been junking their valued partner relationships in an effort to seal their customers into a one-stop solution. Of course, IBM are probably the company that’s most famous/notorious for this approach. In recent years, HP have been steadily improving their HP ProCurve stuff to the degree that they no longer feel the need to promote/resell Cisco switching gear. To me the VCE announcement amounts to a 4th OEM provider coming along to this party. So in short, while you will be able to CHOOSE which OEM to shackle yourself to, this choice will be limited to the “Gang of Four”.
  • Duncan Epping – How to avoid HA slot sizing issues with reservations
    When you select a specific percentage, that percentage of the total amount of resources will stay unused for HA purposes. First of all VMware HA will add up all available resources to see how much it has available. Then VMware HA will calculate how many resources are currently consumed by adding up all reservations of both memory and CPU for powered-on virtual machines. For those machines that do not have a reservation, a default of 256 MHz will be used for CPU and a default of 0 MB + memory overhead will be used for memory. A rough sketch of that bookkeeping follows after this list.
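
To make Duncan's description above a bit more concrete, here is a rough sketch of the bookkeeping behind the percentage-based admission control he describes: add up the cluster's capacity, add up the reservations of the powered-on VMs (using 256 MHz and 0 MB + memory overhead when no reservation is set) and check how much capacity remains. The cluster and VM figures below are invented for illustration.

    # Rough sketch of percentage-based HA admission control bookkeeping.
    # Cluster capacity and VM figures are invented for illustration.
    DEFAULT_CPU_MHZ = 256   # default CPU "reservation" when none is configured

    cluster_cpu_mhz = 2 * 4 * 2400   # e.g. two 4-core 2.4 GHz hosts
    cluster_mem_mb = 2 * 32768       # e.g. two hosts with 32 GB each
    failover_pct = 25                # percentage reserved for HA

    # (cpu_reservation_mhz, mem_reservation_mb, mem_overhead_mb) per powered-on VM;
    # None means no reservation has been configured.
    vms = [(None, None, 120), (1000, 2048, 150), (None, 1024, 100)]

    used_cpu = sum(cpu if cpu else DEFAULT_CPU_MHZ for cpu, _, _ in vms)
    used_mem = sum((mem if mem else 0) + ovh for _, mem, ovh in vms)

    free_cpu_pct = 100 * (1 - used_cpu / cluster_cpu_mhz)
    free_mem_pct = 100 * (1 - used_mem / cluster_mem_mb)

    print(f"free CPU capacity: {free_cpu_pct:.1f}%")
    print(f"free memory capacity: {free_mem_pct:.1f}%")
    print("enough capacity left:", free_cpu_pct >= failover_pct and free_mem_pct >= failover_pct)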

Top 5 Planet V12n blog posts week 44

This was probably one of the toughest Top-5's to write as I had the week off. I basically had to catch up with a whole week of Planet V12n. One of the most annoying things about it is that half of the blogs on Planet V12n enabled "content summary only". Yes, I know you will get a couple of extra visits, but isn't blogging about getting people to read your content instead of being focused on the numbers (visits)? Now that I've got that off my chest, let's move on to what this article is about: the top 5 articles of this week:

  • Vaughn Stewart – VCE-101 Thin Provisioning Part 1 – The Basics & VCE-101 Thin Provisioning Part 2 – Going Beyond
    Like the thick format, thin VMDKs are not formatted at the time of deployment. This also means that data that needs to be written must pause while the blocks required to store the data are formatted. The formatting operation only occurs on demand, any time an area of the virtual disk that has never been written to is required to store data.
  • Chad Sakac – Solid State Disk will change the storage world…
    But surely, if you were looking for performance, you wouldn’t use the SATA disk, right? You would probably use a 15K RPM FC disk. Those cost about $1000. They do about 200 random write IOPs. So, you would need 20 of them to do what that $115 SSD could do. That’s 0.2 IOps per dollar – or 170x more expensive than the SSD on an IOps/$ basis. Oh, you think SAS 15K drives are a better deal? They are – than FC disks. A 15K SAS disk on Pricewatch costs about $210, and they also do about 200 IOps. That’s 0.95 IOps per dollar – or 37x more expensive than the SSD on an IOps/$ basis. A quick sketch of the IOps-per-dollar comparison follows after this list.
  • Luc Dekens – dvSwitch scripting – Part 4 – NIC teaming
    The double Service Console and VMkernel connections might look confusing at first. But when you select one of these connections, the vSphere client will show you to which uplink a specific connection is going.
    To increase the availability of the dvSwitch, I will show how to add two pNics and how to activate and configure NIC Teaming.
    When I created the dvSwitch I configured it for two uplink ports (per host). Since I’m adding two pNics, I will first have to change the maximum number of dvUplink ports.
  • Gabrie van Zanten – Design tips for VMware vSphere 4
    Recently at the Belgium VMUG I gave a presentation in which I covered some design tips for VMware vSphere 4. I talked about some business decisions that, however boring they may seem, are crucial for your design. I covered some security requirements you should check with the security department of the organisation, and of course advised good capacity planning, which is also very important for your design.
    What the average geek found most interesting were topics like: “What size of ESX host will you buy?”, “How to run vCenter in a VM”, “VMFS best practices”, “Understanding queue depth and LUN size” and more….
  • Simon Gallagher – iSCSI LUN is very slow/no longer visible from vSphere host
    It was due to too many SCSI reservation conflicts, so hopefully it wasn’t corruption but rather a locked-out disk – a quick Google turned up this KB article, which reminded me that SATA disks can only do so much :)
    Multiple reboots of the hosts and the OpenFiler hadn’t cleared the situation, so I had to use vmkfstools to reset the locks and get my LUN back; these are the steps I took..
    You need to find the disk ID to pass to the vmkfstools -L targetreset command; to do this from the command line, look under /vmfs/devices/disks
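
As a footnote to Chad Sakac's SSD post above, his IOps-per-dollar comparison is easy to reproduce. Here is a quick sketch using the prices and rotating-disk IOPS figures quoted in the excerpt; the SSD's random-write IOPS figure is not stated there, so roughly 4000 is assumed in order to line up with his 170x/37x ratios.

    # IOps-per-dollar comparison, using the prices quoted in Chad's post.
    # The SSD IOPS figure (~4000) is an assumption chosen to match his ratios.
    drives = {
        "SSD":     {"price": 115,  "iops": 4000},
        "15K FC":  {"price": 1000, "iops": 200},
        "15K SAS": {"price": 210,  "iops": 200},
    }

    ssd_iops_per_dollar = drives["SSD"]["iops"] / drives["SSD"]["price"]

    for name, d in drives.items():
        iops_per_dollar = d["iops"] / d["price"]
        advantage = ssd_iops_per_dollar / iops_per_dollar
        print(f"{name}: {iops_per_dollar:.2f} IOps/$  (SSD advantage: {advantage:.0f}x)")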