
Top 5 Planet V12n blog posts week 23

As I was watching one of the World Cup games yesterday evening, I totally forgot to click "publish". Thanks, Jason, for pointing this out. Here's this week's top 5:

  • Aaron Delp – Comparing Vblocks
    I believe one of the most interesting concepts to come along in our industry recently has been Cisco/EMC/VMware's Vblock. My best definition for Vblock is a reference architecture that you can purchase. Think about that for a second. Many vendors publish reference architectures that are guidelines for you to build to their specifications. Vblock is different because it is a reference architecture you can purchase. This concept is a fundamental shift in our market to simplify the complexity of solutions as we consolidate Data Center technologies. We are no longer purchasing pieces and parts; we are purchasing solutions.
  • Scott Drummonds – VMDirectPath
    The only reason why anyone is considering VMDirectPath for production deployments is the possibility of increased performance. But the only workload for which VMware has ever claimed substantial gains from this feature is the SPECweb work I quoted above. That workload sustained 30 Gb/s of network traffic. I doubt any of VMware’s customers are using even a fraction of this network throughput on a single server in their production environments.
  • Jason Boche – NFS and Name Resolution
    A few weeks ago I had decided to recarve the EMC Celerra fibre channel SAN storage. The VMs which were running on the EMC fibre channel block storage were all moved to NFS on the NetApp filer. Then last week, the Gb switch which supports all the infrastructure died. Yes it was a single point of failure – it’s a lab. The timing for that to happen couldn’t have been worse since all lab workloads were running on NFS storage. All VMs had lost their virtual storage and the NFS connections on the ESX(i) hosts eventually timed out.
  • Frank Denneman – Memory Reclamation, When and How?
    Back to the VMkernel: in the High and Soft states, ballooning is favored over swapping. If the ESX server cannot reclaim enough memory by ballooning before it reaches the Hard state, ESX turns to swapping. Swapping has proven to be a sure thing within a limited amount of time. Unlike the balloon driver, which tries to understand the needs of the virtual machine and lets the guest decide whether and what to swap, the swap mechanism just brutally picks pages at random from the virtual machine. This impacts the performance of the virtual machine but helps the VMkernel to survive. (A rough sketch of these memory states follows after the list.)
  • Duncan Epping – Is this VM actively swapping?
    At one point the host has most likely been overcommitted. However, there is currently no memory pressure (state = high, >6% free memory), as 1393MB of memory is available. The metric "swcur" seems to indicate that swapping has occurred; however, the host is currently not actively reading from or writing to swap (0.00 r/s and 0.00 w/s).
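
Frank's description of the free-memory states maps neatly onto a small decision table. Below is a minimal sketch, not VMkernel code: the 6% "high" threshold is quoted in Duncan's esxtop excerpt above, while the 4% / 2% / 1% values for the soft, hard and low states are the commonly documented defaults for this ESX generation and should be treated as an assumption here.

```python
# Minimal sketch of the ESX free-memory states described in Frank's post.
# The 6% "high" threshold appears in Duncan's esxtop excerpt above; the
# 4% / 2% / 1% values for soft / hard / low are assumed defaults.

def memory_state(free_pct: float) -> str:
    """Map the host's free-memory percentage to a reclamation state."""
    if free_pct > 6.0:
        return "high"   # no real memory pressure
    if free_pct > 4.0:
        return "soft"   # balloon driver favored
    if free_pct > 2.0:
        return "hard"   # VMkernel turns to swapping
    return "low"        # swap aggressively to survive

def favored_reclamation(state: str) -> str:
    """Which technique the VMkernel leans on in each state, per Frank's post."""
    return {
        "high": "ballooning (if anything at all)",
        "soft": "ballooning",
        "hard": "swapping",
        "low":  "swapping",
    }[state]

if __name__ == "__main__":
    for free in (10.0, 5.0, 3.0, 0.5):
        s = memory_state(free)
        print(f"{free:>4}% free -> state={s:<4} -> {favored_reclamation(s)}")
```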
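Duncan's point boils down to the difference between cumulative and instantaneous swap metrics: SWCUR only tells you that swapping has happened at some point, while the read and write rates tell you whether it is happening right now. The sketch below captures that check; the field names mirror the esxtop memory screen, but the data structure itself is purely illustrative and not a VMware API.

```python
# Minimal sketch of the check Duncan walks through in esxtop: SWCUR says
# swapping happened at some point, the r/s and w/s rates say whether the
# VM is actively swapping right now. Illustrative only, not a VMware API.

from dataclasses import dataclass

@dataclass
class VmSwapStats:
    swcur_mb: float   # MB currently swapped out (result of past swapping)
    swr_per_s: float  # swap-in (read) rate right now
    sww_per_s: float  # swap-out (write) rate right now

def interpret(stats: VmSwapStats) -> str:
    if stats.swr_per_s > 0 or stats.sww_per_s > 0:
        return "actively swapping right now"
    if stats.swcur_mb > 0:
        return "has swapped in the past, but swap is idle at the moment"
    return "no swapping observed"

# The situation in the post: swapped pages exist, but 0.00 r/s and 0.00 w/s.
print(interpret(VmSwapStats(swcur_mb=256.0, swr_per_s=0.0, sww_per_s=0.0)))
```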