It was very tough to pick a top 5 this time as most posts this week were about vSphere Update 1 and View 4. But I did manage to find 5 excellent articles again. Make sure you read them:
- Scott Sauer – More Bang for Your Buck with PVSCSI (Part 1)
So let’s first find out if it’s all that. We need to do some testing
to validate the hype. I created two virtual machines, one with the
traditional LSI Logic SCSI driver, and one with the new PVSCSI driver.
The host is the same for each VM: a 4-socket Intel Xeon system with 64 GB
of RAM, connected to EMC Clariion CX3-80 storage. The RAID
configuration is a 4+1 RAID 5 set (10K spindles), with the default
Clariion Active/Passive MRU setup (no PPVE). Each VM has 2 vCPUs and
4 GB of RAM, and both are running 32-bit Microsoft Windows 2003 R2.
Both virtual machines' data disks were formatted using diskpart and the
tracks were correctly aligned. Anti-virus real-time scanning was
disabled on both systems. This test is meant to get as close as
possible to a standard configuration that we can benchmark from.
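For those who want to repeat this kind of test, switching a VM between the two controller types can be scripted with PowerCLI. The snippet below is just a rough sketch from my side, not part of Scott's article; the vCenter address and VM names are placeholders, the VM has to be powered off, and the guest needs the PVSCSI driver that ships with VMware Tools.

```
# Minimal PowerCLI sketch (vCenter address and VM names are placeholders).
# The VM must be powered off, and the guest needs the PVSCSI driver that
# is delivered with VMware Tools.
Connect-VIServer -Server vcenter.example.com

# Leave the control VM on the default LSI Logic controller and switch the
# second VM to the paravirtual controller.
Get-ScsiController -VM (Get-VM -Name "VM-PVSCSI") |
    Set-ScsiController -Type ParaVirtual

# Double-check both controllers before running the benchmark.
Get-VM -Name "VM-LSI", "VM-PVSCSI" |
    Get-ScsiController |
    Select-Object Parent, Type
```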
- Arnim van Lieshout – Geographically dispersed cluster design
Let’s take it back one step and have a look at an active-passive setup.
These setups have some sort of storage replication in place. The most
common design I encounter is shown in figure 1. In the main datacenter
there’s an ESX cluster with some sort of SAN-based
replication/mirroring to a second datacenter. In the second datacenter
there is a passive ESX cluster available to start up the virtual
servers in case of a disaster. Let’s use this setup as a starting point
and turn this active-passive into an active-active setup.
- Andre Leibovici – Your Organization’s Desktop Virtualization Project – Part 3
At the time this solution was designed, the number of users per CPU
core could range from 3.8 to 4.2; however, for most VDI deployments
using new processors (Intel Nehalem 5500 and AMD Phenom II) this number
can be around 6.0 per CPU core, allowing up to 100 virtual desktop
machines in a single dual-quad server.
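A quick back-of-the-envelope from my side, not Andre's: the host shape and the hyper-threading assumption below are mine, but counting logical rather than physical cores is presumably what brings the estimate close to the 100-desktop figure.

```
# Back-of-the-envelope sizing; 6.0 desktops per core is the figure quoted
# above, the dual-socket quad-core host shape is an assumption.
$desktopsPerCore = 6.0
$physicalCores   = 2 * 4                  # dual-quad host: 8 physical cores
$logicalCores    = $physicalCores * 2     # 16 logical cores with hyper-threading

$desktopsPerCore * $physicalCores         # 48 desktops counting physical cores
$desktopsPerCore * $logicalCores          # 96 desktops counting logical cores
```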
- Scott Drummonds – Another Day, Another Misconfigured Storage
You will have to size your storage to peak, to average, or somewhere in between. If you size to the average, you are counting on the peaks occurring at different times. If you are wrong, when two workloads peak simultaneously, a bottleneck will form at the array. Also note that sizing to the average in this case (350 IOPS) is insufficient for VM C’s peak of 400 IOPS. You could size to the aggregate peak of 1200 IOPS, but unless all of the virtual machines peaked at once the workloads would never consume the available bandwidth.
All you can do in this case is make a best guess and modify later, as needed. I often suggest that a good start is one third of the way from average to peak, which equals 633 IOPS in this case. If we assume 150 IOPS per spindle, that means five spindles for this VMFS volume.
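Just to make that arithmetic explicit, here is a quick sketch with the numbers from the example; this is mine, not part of Scott's post, and the 150 IOPS per spindle is the assumption he states.

```
# Sizing one third of the way from average to peak, per the example above.
$averageIops    = 350    # combined average of the example workloads
$peakIops       = 1200   # aggregate peak if every VM peaked at once
$iopsPerSpindle = 150    # assumed capability of a single spindle

# 350 + (1200 - 350) / 3 = ~633 IOPS
$targetIops = $averageIops + (($peakIops - $averageIops) / 3)

# 633 / 150 = 4.2, rounded up to 5 spindles for the VMFS volume
$spindles = [math]::Ceiling($targetIops / $iopsPerSpindle)

"{0:N0} IOPS target -> {1} spindles" -f $targetIops, $spindles
```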
- Luc Dekens – Scripts for Yellow Bricks’ advise: Thin Provisioning alarm & eagerZeroedThick
This script will convert an existing thick VMDK to eagerZeroedThick. As you can read in Duncan’s blog entry, there is a serious performance improvement to be obtained by doing this.
Note that the guest needs to be powered off to be able to do the conversion! This is in fact the case for most of the VirtualDiskManager methods. See also my Thick to Thin with PowerCLI and the SDK entry.
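To give an idea of what the conversion boils down to, here is a bare-bones sketch of the same idea; it is not Luc's script, and the vCenter, datacenter and VM names are placeholders.

```
# Bare-bones sketch: inflate an existing thick VMDK to eagerZeroedThick via
# the VirtualDiskManager API. vCenter, datacenter and VM names are placeholders.
Connect-VIServer -Server vcenter.example.com

$vm = Get-VM -Name "MyVM"
if ($vm.PowerState -ne "PoweredOff") {
    throw "Power off the VM before converting its disks."
}

$vdm = Get-View -Id (Get-View ServiceInstance).Content.VirtualDiskManager
$dc  = (Get-Datacenter -Name "DC01" | Get-View).MoRef

# Datastore path of the disk to convert, e.g. "[datastore1] MyVM/MyVM.vmdk"
$vmdkPath = (Get-HardDisk -VM $vm | Select-Object -First 1).Filename

# Zero out every block so the disk ends up eagerZeroedThick.
$vdm.EagerZeroVirtualDisk($vmdkPath, $dc)
```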