Top 5 Planet V12n blog posts week 50

What happened this week… Yes, the Dutch VMUG! The Netherlands is just a tiny country, but when we are talking about technology it seems we can beat any country out there in terms of enthusiasm. This year's annual VMUG meeting had over 600 attendees, I repeat, over 600 attendees. It's almost like a Dutch VMworld. There was a keynote by VMware evangelist Richard Garsthagen and a welcome message from Steve Herrod. Eric Sloof wrote multiple posts about the VMUG meeting, but this one contains a video which captures the atmosphere. That's enough introduction blabla… here are the articles that made it to the top 5:

  • Frank Denneman – Impact of memory reservation
    I have a customer who wants to set memory reservations on a large
    scale. Instead of using resource pools they were thinking of setting
    reservations at the VM level to get a guaranteed performance level
    for every VM. Due to memory management on different levels, using
    such a setting will not get the expected results. Setting aside the
    question of whether it's smart to use memory reservations on ALL VMs,
    it raises the questions of what kind of impact setting a memory
    reservation has on the virtual infrastructure, how ESX memory
    management handles memory reservations and, even more important, how
    a proper memory reservation can be set. (A small sketch of setting
    such a per-VM reservation follows at the end of this list.)
  • Joep Piscaer – Virtualizing vCenter with vDS: Another Catch-22
    To make matters worse: I could not select the correct network label (Port Group) in the drop-down list. After some long and hard thinking, I figured out why: ESX couldn't communicate with vCenter to update the dvSwitch's status. This is simply because the vCenter VM was one of the migrated VMs, and thus suffered from the same problem: it wasn't connected to the network. How's that for a catch-22!
    As I said earlier, the physical hosts run on a single vmnic. No easy fix here, then: I cannot create a standard vSwitch, create a port group on it, add a vmnic and migrate the vCenter VM to this port group to get the VM online, which would let me attach the other VMs to the right (dvSwitch) Port Group, after which I could migrate the vCenter VM to the right PG.
  • Hany Michael – Diagram: VMware High-Availability
    This is not an introduction to VMware HA, and it's not a very advanced diagram for it either. I assume you already have a general idea of the topic, so that you can appreciate this incredible technology when you look at the diagram. If you are a VMware professional you may also find it useful for keeping your knowledge of the topic sharp and ready at any given time. You really don't have to re-read the documentation every time you'd like to remember a small detail about the subject.
  • Forbes Guthrie – vSphere 4 card – version 2
    It's been a long time coming. Version 2 of this card has many changes that I've wanted to make since writing these cards. It's taken a good couple of months of hard (and frankly a bit boring :0) work, which has pulled me away from blogging about more interesting things and playing with some of the newly released products. The best bit is you probably won't notice much of a difference. A lot of the work is under the covers, to make the most of the paper real estate.
  • Massimo Re Ferre' – From Scale Up vs Scale Out… to Scale Down
    One of the implications is that servers are now memory-bound. If you ask 10 virtualization architects in the x86 space, they will all tell you that the limiting factor in today's servers is the memory subsystem. Put another way, you reach the physical memory usage limit far before you manage to saturate the processors in a virtualized server. Have you ever wondered why that is the case? As users move backwards from 8-socket servers to 4-socket servers to 2-socket servers, the number of memory slots available per server gets reduced. That's how x86-based servers have been designed over the years: the more sockets the server has, the more memory slots are available. What is happening now is that customers tend to use much smaller servers because they can support the same number of partitions per physical host, but the memory requirements haven't changed. That's because the amount of memory needed is a function of the number of partitions running, and if the number of partitions is kept constant you will always need the same amount of memory. (A back-of-the-envelope sketch of this argument follows below.)
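
On Frank's topic, the per-VM reservation he discusses is normally set on each individual VM in vCenter; purely as an illustration, here is a minimal pyVmomi sketch of what that looks like when scripted. The host name, credentials, the VM name "app01" and the 2048 MB value are placeholder assumptions of mine, not taken from Frank's article.

    # Illustrative only: set a memory reservation on a single VM with pyVmomi.
    # Connection details and the VM name "app01" are placeholder assumptions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only, skips certificate checks
    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM by name in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app01")

    # Reserve 2 GB of machine memory for this VM (the reservation is expressed in MB).
    spec = vim.vm.ConfigSpec()
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=2048)
    vm.ReconfigVM_Task(spec=spec)

    Disconnect(si)

Doing this for every VM is exactly the large-scale scenario whose knock-on effects on the virtual infrastructure Frank's post explores.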
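
And on Massimo's memory-bound argument, a quick back-of-the-envelope calculation makes the point. The figures below (40 partitions at 4 GB each, 8 GB DIMMs, and the DIMM-slot counts per socket count) are purely illustrative assumptions, not numbers from his article.

    # Back-of-the-envelope sketch of the "memory-bound" argument.
    # All figures are illustrative assumptions, not taken from the article.
    vms = 40                    # partitions to run; constant regardless of host size
    mem_per_vm_gb = 4           # average memory per partition
    total_mem_gb = vms * mem_per_vm_gb      # 160 GB needed either way

    dimm_gb = 8                 # capacity of a single DIMM
    slots_per_host = {          # assumed DIMM slots by socket count
        "2-socket": 12,
        "4-socket": 24,
        "8-socket": 48,
    }

    for name, slots in slots_per_host.items():
        capacity_gb = slots * dimm_gb
        hosts_needed = -(-total_mem_gb // capacity_gb)   # ceiling division
        print(f"{name}: {capacity_gb} GB per host -> "
              f"{hosts_needed} host(s) needed just to hold the RAM")

The memory requirement stays fixed because it follows the partition count, while the per-host memory ceiling shrinks with the socket count, which is why the smaller boxes run out of RAM long before they run out of CPU.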
    One of the implications is that servers are now memory-bound. If you ask 10 virtualization architects in the x86 space they will all tell you that the limiting factor today in servers is the memory subsystem. Put it another way, you are reaching the physical memory usage limit far before you manage to saturate the processors in a virtualized server. Have you ever wondered why that is the case? As users move backwards from 8-Socket servers to 4-Socket servers to 2-Socket servers the number of memory slots available per server gets reduced. That's how x86-based servers have been designed over the years: the more sockets the server has, the more memory slots that are available. What is happening now is that customers tend to use much smaller servers because they can support the same number of partitions per physical host, but the memory requirements haven't changed. That's because the amount of memory needed is a function of the number of partitions running, and if that number of partitions is kept constant you will always need the same amount of memory.