
Top 5 Planet V12n blog posts week 13

I almost forgot about this one… Although the weekend is officially over, it was Easter Monday, so I'll take that as an excuse for being late. Here's this week's top 5:

  • Hany Michael – VMware vSphere on IBM BladeCenter H – (Part 1 of 2)
    Due to the insane number of expansion modules/options available in the IBM BladeCenter H, I had to split this post into two parts. In fact, I was initially planning to have around 12 different designs for vSphere on BladeCenter H (yes, twelve), but I then started to trim and skip some designs to fit as many scenarios as possible into a reasonable two-part article. With that said, the following is by no means a list of all the possible design scenarios you can achieve with this hardware platform. If you start the “mix and match” game, you may literally end up with countless possibilities!
  • Scott Drummonds – Memory Reservations Drive Over-commit
    What do I mean by “drive over-commitment”? I mean that, when properly used, memory reservations allow a VI admin to optimally pack virtual machines across a cluster’s memory. With properly set reservations, an admin can continue to power on a cluster’s VMs until vCenter’s admission control refuses to allow more. At that point you know that the optimal number of virtual machines is running on your hosts. (A toy calculation of this packing follows after the list.)
  • Scott Lowe – The View from the Other Side
    Let me make something clear: I’m not advocating against high consolidation ratios. What I’m advocating against is a blind race for higher and higher consolidation ratios simply because you can. Steve’s article seems to push for higher consolidation ratios simply for the sake of higher consolidation ratios. I’ll use a phrase here that I’ve used with my kids many times: “Just because you can doesn’t mean you should.”
  • Chad Sakac – Understanding more about NMP RR and iooperationslimit=1
    So… What did we test? Answers below. BTW – this is still (IMO) a “non-ideal” test, as it didn’t show even further scaling in terms of datastores or VMs (which is expected to make IOOperationLimit values even more neutral in a comparison), or under network/port congestion (which is expected to benefit the adaptive/predictive PP/VE model more in a comparison), but this is a useful set of data. The really weird part is the IOOperationLimit value it changed to on ESX reboot (included for completeness).
  • Duncan Epping – What’s the point of setting “--IOPS=1”?
    So far none of the vendors have published this info and I very much doubt, yes call me sceptical, that these tests have been conducted with a real-life workload. Maybe I just don’t get it, but when consolidating workloads a threshold of 1000 IOPS isn’t that high, is it? Why switch after every single IO? I can imagine that for a single VMFS volume this will boost performance, as all paths will be hit equally and load distribution on the array will be optimal. But in a real-life situation where you have multiple VMFS volumes, this effect decreases. Are you following me? (A rough sketch of the path-switching behaviour follows below.)
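
To make Scott Drummonds’ point about reservations a bit more concrete, here is a minimal sketch of the packing logic he describes. The numbers (a 64 GB host, 8 GB VMs with 2 GB reservations) are made up for illustration, and the calculation deliberately ignores per-VM memory overhead; it is not vCenter’s actual admission control code.

```python
# A toy illustration, not vCenter's admission control algorithm: with
# reservations set, VMs keep powering on until the sum of their reservations
# (plus overhead, ignored here) no longer fits in host memory, no matter how
# much memory each VM is *configured* with.

HOST_MEMORY_MB = 65536      # hypothetical host with 64 GB of RAM
VM_CONFIGURED_MB = 8192     # each VM is configured with 8 GB
VM_RESERVATION_MB = 2048    # but only 2 GB of that is reserved

def max_poweron(host_mb: int, reservation_mb: int) -> int:
    """How many VMs fit before the reservations exhaust host memory."""
    return host_mb // reservation_mb

vms = max_poweron(HOST_MEMORY_MB, VM_RESERVATION_MB)
overcommit = (vms * VM_CONFIGURED_MB) / HOST_MEMORY_MB

print(f"Power-on stops at roughly {vms} VMs")                      # 32
print(f"Configured-memory over-commit factor: {overcommit:.1f}x")  # 4.0x
```

The point is that admission control counts the reservation, not the configured size, against physical memory, so well-chosen reservations are what determine how far you can safely over-commit.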
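
And since the last two posts both revolve around the Round Robin IOOperationLimit (the “--iops” setting), here is a rough toy model of what that value controls. The path names are invented and this is in no way VMware’s NMP implementation; it only shows how often a simple round-robin policy switches paths for a given limit.

```python
from itertools import cycle

def path_sequence(num_ios, paths, io_operation_limit):
    """Yield the path each I/O is sent down under a simple round-robin policy
    that moves to the next path after `io_operation_limit` I/Os."""
    path_iter = cycle(paths)
    current = next(path_iter)
    issued = 0
    for _ in range(num_ios):
        if issued == io_operation_limit:
            current = next(path_iter)
            issued = 0
        yield current
        issued += 1

paths = ["vmhba1:C0:T0:L1", "vmhba2:C0:T0:L1"]  # made-up path names

for limit in (1000, 1):
    seq = list(path_sequence(4000, paths, limit))
    switches = sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    print(f"IOOperationLimit={limit}: {switches} path switches over {len(seq)} I/Os")

# With the default of 1000 the I/O stream changes path only a few times;
# with a limit of 1 it changes path on every single I/O.
```

Both limits spread the I/Os evenly across the paths in the long run; the question Duncan raises is whether switching on every single I/O buys you anything once multiple VMFS volumes already keep all paths busy.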