
Monthly Archives: October 2008

Hyper-V Server is Finally Here – But What Exactly Is It?


First, Thank You for the Feedback

I wanted to quickly say thanks to everyone who provided feedback on our most recent post comparing the installation and configuration of VMware ESXi versus Windows Server Hyper-V with Server Core — Microsoft’s recommended deployment option. You may not agree with all of the conclusions we presented, but we just couldn’t let Microsoft exec Bob Muglia go unchallenged in claiming that Hyper-V is simply “The Windows You Know” and therefore an easier product to use than ESXi – it isn’t. But thank you, as many of you had very insightful (inciteful?) comments and we got some good, healthy debate going. For a pretty fair Microsoft response, please refer to James O’Neill’s blog.

But “Apples to Oranges”??  They’re Both Hypervisors!

But there was one piece of feedback, stated in comments by a number of readers (several of them Microsoft employees), that puzzled me. Some people cried foul because they saw our evaluation of Hyper-V with Server Core and ESXi as somehow comparing apples to oranges – I guess that was because Windows Server Hyper-V with Server Core requires a full instance of a general-purpose operating system as its parent partition and ESXi does not. The commenters/bloggers suggested that a fairer comparison would be ESXi vs. Hyper-V Server 2008, since Hyper-V Server is supposedly Microsoft’s ‘thin’ hypervisor that doesn’t require the Windows Server OS in the parent partition – as reported by Microsoft here. (Note: the MSFT blog linked there incorrectly states that ESXi has a Linux parent partition. That is untrue; ESXi has no parent partition.)

Well, regarding “apples to oranges,” I am not going to dwell on that one because, in my opinion, ESXi and Hyper-V (in all configurations) are both hypervisors aiming to serve the same purpose within a customer’s datacenter, so the comparison is valid. To support that notion, Microsoft compares all versions of Hyper-V to ESX/ESXi in every one of its virtualization presentations, so I think they agree with us that it is a fair comparison. That said, if you want us to compare ESXi to Hyper-V Server, sure – now that the product is finally available, we can talk about that one too.

Hyper-V Server – Initial Thoughts

Ever since Microsoft first announced Hyper-V Server, almost a year ago, we’ve been speculating as to what it would look like.  It was billed as “standalone”, but until right before its release, Microsoft provided no technical details, so we were all left in the dark.   Existing Hyper-V versions were wholly dependent on Windows Server, so how “thin”, how “standalone” could it really be?

(Note: I am actually thinking that, at the time of Hyper-V server’s announcement, Microsoft itself didn’t know what the Hyper-V Server 2008 architecture would look like…)

Well, now that Hyper-V Server 2008 has finally been released – with very little fanfare considering its initial push from Microsoft – we were able to perform a preliminary evaluation.   There were two things we were initially interested in: 1) How the Hyper-V Server deployment/configuration processes compare to ESXi – gotta answer our critics, and 2) How Hyper-V Server architecture compares to ESXi – is it a more “apples to apples” comparison, or does Hyper-V Server contain Windows Server OS and is it therefore subject to all the patches, updates, vulnerabilities of the other configurations of Hyper-V?

We’ll save tackling the first issue — comparing the install/configure processes – for another blog post.  While our initial eval tells me that the install/config process hasn’t improved with Hyper-V Server, it will still take a little time to undertake a complete analysis.  But the second item – understanding what components of Windows Server the Hyper-V Server actually contains, how the architecture compares to ESXi, and what the benefits of Hyper-V Server actually are – we can start that discussion here.

Hyper-V Server is not “Windows-less” but is merely “Windows License-less”

Our initial finding is that Hyper-V Server is not “thin”; Hyper-V Server is still ultimately Windows. Hyper-V Server appears, for the most part, to be just Windows Server Hyper-V with Server Core where all other Server Core roles (except Hyper-V) have been disabled. Hyper-V Server has practically the same footprint as Windows Server Hyper-V with Server Core and is subject to the same patches, updates, and attacks. It also appears to have the same restrictive, indirect Windows-based driver model. In fact, it seems that the only advantage of Hyper-V Server is that one doesn’t have to buy a Windows Server license in order to deploy it – that’s it. Hyper-V Server is not “Windows-less,” but just “Windows License-less.”

Hyper-V Server also has some significant limitations that it seems to have inherited from the Standard Edition of Windows Server 2008. It can only support a maximum of 4 sockets per host, 32GB of physical memory per host, and 31GB of virtual memory per VM, and it requires a rip-and-replace upgrade to support features like Microsoft Clustering and Quick Migration. So it seems that Hyper-V Server is more of a starter kit, meant only for very basic use cases. In comparison, ESXi is a fully functional, production-ready, enterprise offering. Actually, given that 1) both ESXi and Hyper-V Server are free and 2) only free ESXi can easily be upgraded via license key to a production solution, why would anyone ever use Hyper-V Server? What’s the advantage?


| Virtualization Needs | Microsoft Hyper-V Server 2008 | VMware ESXi |
| --- | --- | --- |
| Free Hypervisor | Yes | Yes |
| Small Disk Footprint | No – 2500MB | Yes – 32MB |
| Large Host Memory Support | No – 32GB | Yes – 256GB |
| Maximum Physical CPUs | 4 sockets | |
| Maximum VM Memory | 31GB | |
| Supported Guest OSs | | |
| Memory Over-Commitment | No | Yes |
| Clustered File System | No | Yes – VMFS |
| Simple Upgrade Path | No – rip and replace | Yes, to full VI3 versions |

Hyper-V Server – An Overview of Our Installation Experience

For proof points supporting the conclusions above, following is a blow-by-blow account of Michael Hong’s experience installing Hyper-V Server:

I got my 936MB ISO of Hyper-V Server downloaded. I burned it onto a CD, popped it into my brand new HP DL360, and fired it up. After doing some recommended BIOS configurations and rebooting, I’m watching the boot sequence and getting a feeling of déjà vu. Did I just put in the wrong DVD? Because I swear this looks exactly like a full Windows Server 2008 or even a Server Core installation.


Wait…did that just say, “Installing Windows?” I thought this was Hyper-V Server that wasn’t supposed to be Windows! At this point I’m thinking, “Hey, maybe that’s not too bad. I can get a free copy of Windows without having to deal with any of their licensing nightmares.” Well let’s wait and see before I get too excited…

Hyper-V Server Disk Footprint is Similar to Hyper-V Server Core!

Okay, so after THREE more reboots I’m finally able to log in and start looking around. The first thing I check is Hyper-V Server’s disk footprint. After all, Microsoft states that one of its only three key benefits is a “small footprint.” So how “small” is it really? After plugging the numbers into my trusty byte converter, Hyper-V Server comes in at around 2.5GB (pagefile not included)! WOW, that’s only a hundred megabytes less than a full-blown Windows Server Core installation! Perhaps it really is just Windows Server Core Standard Edition with one role enabled. Anyone else have any thoughts on this?
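For reference, that footprint figure is a straightforward byte conversion. A minimal sketch in Python (the byte count used here is illustrative, not the measured value):

```python
def bytes_to_gb(n_bytes: int) -> float:
    """Convert a raw byte count to gigabytes (1 GB = 1024**3 bytes)."""
    return n_bytes / (1024 ** 3)

# Illustrative byte count for a ~2.5GB install footprint
footprint_bytes = 2_684_354_560
print(f"{bytes_to_gb(footprint_bytes):.2f} GB")  # prints "2.50 GB"
```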


Also notice the number of files and directories. My basic install of Windows Server Core with Hyper-V enabled has:


In this install of Hyper-V Server there are actually more files and directories:


Next, let’s take a look at patching. Option number 5 in the DOS-like Hyper-V Configuration menu allows enabling Windows Update. Once set to automatic, it scanned for applicable patches. I didn’t expect to see any new patches since Hyper-V Server was just released yesterday. Any new patches would probably arrive next Patch Tuesday, right? And since this is supposed to be a light, secure hypervisor, it probably wouldn’t need as many patches as a full-blown OS, right? The results may surprise you:


13 applicable patches, including 2 for Internet Explorer 7? This is looking more and more like the “Windows I Know.” What in Hyper-V Server actually relies on IE7? Hyper-V Server looks like a full-blown Windows OS. If that’s the case, I’m sure hackers will have a field day copying over a few additional files and turning it into a full working copy of Windows Server Core.

Some other things to keep in mind:

· Server Core and Hyper-V Server have the same directory structures

· Server Core and Hyper-V Server have the same command line toolset

· Server Core Standard and Hyper-V Server have the same limits of 32GB of physical memory and up to 4 processors

· Server Core and Hyper-V Server have the same parent partition driver model

Is Hyper-V Server really Windows Server Core Standard with only the Hyper-V role enabled? If so, will it be vulnerable to the same threats as Windows? Those 13 patches are just the starting point. What about viruses? Windows Server Core is exposed to viruses, and as a result, there are anti-virus products out there today that are certified on Server Core. What about the size of Hyper-V Server’s footprint? Being only 100MB smaller than Server Core shows it still has a very large attack surface compared to VMware ESXi.

In Sum

We feel that, in contrast to how it is being marketed, Hyper-V Server 2008 is not “standalone,” “thin,” or operating-system agnostic in the same manner as ESXi. Hyper-V Server is still very dependent on and subject to the limitations of Windows, and therefore should not be considered an equal to ESXi. Also, given Hyper-V Server’s restrictions and lack of upgrade path, and given that ESXi is also free and has a simple upgrade path, I question what the viable use cases for Hyper-V Server really are. Give it a try yourself and let us know what you think. Better yet, download our free VMware ESXi and let us know how you feel it compares to Hyper-V Server.

Stay tuned for our comparison of the deployment and configuration processes for ESXi and Hyper-V Server.

Memory Overcommit – Real life examples from VMware customers

Memory overcommit, aka the ability of VMware ESX to provision memory to virtual machines in excess of what is physically available on a host, has been a topic of discussion in virtualization blogs for quite some time (e.g., “More on Memory Overcommitment”) and apparently still is (e.g. VMware vs. Microsoft: Why Memory Overcommitment is Useful in Production and Why Microsoft Denies it and “Microsoft responds to VMware’s ability to overcommit memory” ).

Given the benefits of memory overcommit and the fact that today only VMware ESX/ESXi has it as a standard feature, it is understandable that other vendors try to downplay it by claiming that it is irrelevant, dangerous, and not used in production environments. Microsoft’s position on the topic is particularly interesting… or confusing, I should say. On one hand, Microsoft VP Bob Muglia confirmed the usefulness of memory overcommit in an interview, announcing plans to add it to their hypervisor some time in the future (have you heard this line before?); on the other hand, they don’t miss an opportunity to speak against it. James O’Neill, also from Microsoft, even challenged us in his blog to provide a reference of a customer that is actually using it in production, promising in return to make a charitable donation of $270 to an organization of our choice.

Anyway, internally at VMware we certainly have no doubts about the importance and effectiveness of memory overcommit, but we felt that after all this discussion among vendors, and after all the confusion from Microsoft as to whether it is/isn’t important and is/isn’t on the Hyper-V roadmap, that it might be more interesting for you to hear directly from our customers. Therefore, the bulk of this post will document a survey of memory overcommit usage among VMware customers. You’ll hear directly from VMware users regarding how they leverage ESX memory overcommit in their production datacenters, with no impact to performance, to increase VM density and further reduce VMware’s already low cost per application – the most relevant metric of virtualization TCO.

(Side note: I bet MSFT will no longer question the value of overcommit once they are finally able to list it as an upcoming Hyper-V feature.)

Before jumping into the survey results, I think a few clarifications are necessary.

What is memory overcommit?

Here I won’t go into all the granular, technical details of how memory overcommit works, because there is already a ton of great literature available that explains what it is and how it works (e.g., “The Role of Memory in VMware ESX Server 3” ).

However, there are a couple points that I’d like to make regarding the functionality of and requirements for Memory Overcommit.

Memory Overcommit: Required Components

Memory overcommit is the combination of three key ingredients:

  1. Transparent memory page sharing
  2. Balloon driver
  3. Optimized algorithms in the hypervisor kernel

These 3 elements must all be present and work together seamlessly. One alone is not enough, despite what some vendors would like people to believe (see Ballooning is more than enough to do memory overcommit on Xen, Oracle says). To date, only VMware ESX has all the necessary components, has had them since 2001, and has continued to improve them ever since.

Memory Overcommit: Security Impact

Transparent memory page sharing de-duplicates memory by sharing identical pages among VMs. In doing so, it marks the shared pages “read-only” at the physical RAM level. If a VM tries to write to a shared page, ESX gets a callback and creates a private copy of the page for the VM that wants to write, while letting the other VMs continue to use the original shared page. Marking the page read-only ensures that the technology is secure: one VM cannot affect any other VM. However, if you need additional assurance of memory overcommit’s security, keep in mind that VMware ESX, with its memory page sharing feature, is the only hypervisor on the market that has earned a Common Criteria Evaluation Assurance Level 4 (EAL4+) under the CSEC Common Criteria Evaluation and Certification Scheme (CCS). Therefore, only VMware ESX is approved for use in “sensitive, government computing environments that demand the strictest security.”
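The read-only sharing and copy-on-write behavior described above can be illustrated with a toy model. This is a sketch of the general technique, not VMware’s implementation; all names are ours:

```python
class PageTable:
    """Toy model of transparent page sharing with copy-on-write (COW)."""

    def __init__(self):
        self.phys = []    # physical pages (their contents)
        self.dedup = {}   # page content -> index of the shared physical page
        self.maps = {}    # (vm, guest_page) -> physical page index

    def map_page(self, vm: str, guest_page: int, content: str) -> None:
        """Map a guest page; identical content across VMs is backed by
        a single read-only physical page."""
        if content not in self.dedup:
            self.phys.append(content)
            self.dedup[content] = len(self.phys) - 1
        self.maps[(vm, guest_page)] = self.dedup[content]

    def write(self, vm: str, guest_page: int, new_content: str) -> None:
        """A write to a shared page triggers copy-on-write: the writer
        gets a private copy, while other VMs keep the original page."""
        self.phys.append(new_content)
        self.maps[(vm, guest_page)] = len(self.phys) - 1

# Two VMs map identical pages: only one physical page is consumed.
pt = PageTable()
pt.map_page("vm1", 0, "ZEROED")
pt.map_page("vm2", 0, "ZEROED")
assert pt.maps[("vm1", 0)] == pt.maps[("vm2", 0)]

# vm1 writes: it gets its own private copy; vm2 still sees the shared page.
pt.write("vm1", 0, "DATA")
assert pt.maps[("vm1", 0)] != pt.maps[("vm2", 0)]
```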

Why is memory overcommit important?

Memory overcommit enables customers to achieve higher VM density per virtual host, increasing consolidation ratios and providing a more efficient scale up – scale out model. Ultimately this translates into substantial savings and a lower cost per application than with alternative solutions, as Eric Horschman shows in his blog post.

While the declining cost of memory could suggest that hypervisors with no memory overcommit can get away without it, in reality throwing more memory at the problem is not a sustainable solution for a few reasons:

  • The number of VMs deployed grows over time

  • Going forward, systems will be even more memory constrained than today, as the number of CPU cores per server will increase considerably faster than memory capacity. As a matter of fact, in 2011 a two-socket system is expected to be capable of 64 logical CPUs and 256GB of RAM, whereas today the same system is probably capable of 8 logical CPUs and 64GB of RAM. This means that the ability of a hypervisor to efficiently manage memory will be an even more critical factor in minimizing the number of servers required to run applications and ensuring efficient scalability.

  • Memory capacity requirements aren’t determined only by application workloads, but also by a number of valuable IT services, such as high availability, zero-downtime system maintenance, power management, and rapid system provisioning. Virtualization solutions that don’t allow memory overcommit corner customers into a lose-lose situation: either reduce system utilization or don’t provide the service. Thanks to memory overcommit, our customers tell us that they were able to reduce their dependence on available physical resources, avoid unnecessary purchases, and improve infrastructure utilization. (See below for a few examples of how VMware customers use memory overcommit.)
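The core-versus-memory trend in the second bullet is easy to check with quick arithmetic, using the post’s own figures:

```python
# Figures from the post: today, a two-socket system offers ~8 logical CPUs
# and 64GB of RAM; by 2011, ~64 logical CPUs and 256GB are expected.
today_gb_per_cpu = 64 / 8      # 8.0 GB of RAM per logical CPU today
future_gb_per_cpu = 256 / 64   # 4.0 GB of RAM per logical CPU in 2011

# Memory per logical CPU halves, so efficient memory management matters more.
print(today_gb_per_cpu, future_gb_per_cpu)  # prints "8.0 4.0"
```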

Enough with the clarifications – let’s move on to the customer survey ….

We conducted an online survey of 110 VMware customers essentially asking them three questions:

  1. Do you use memory overcommit?
  2. Do you use memory overcommit in test/dev, production or both?
  3. What is your virtual-to-physical memory ratio per ESX host (i.e., overcommit ratio)?
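For clarity, the ratio in question 3 is simply the total memory configured across a host’s VMs divided by the host’s physical RAM. A quick sketch (the VM sizes below are hypothetical, chosen only to illustrate the calculation):

```python
def overcommit_ratio(vm_memory_gb, host_physical_gb):
    """Virtual-to-physical memory ratio for a single ESX host."""
    return sum(vm_memory_gb) / host_physical_gb

# Hypothetical host: 64GB of physical RAM running VMs configured
# with 96GB of memory in total.
vms = [4, 4, 8, 8, 8, 16, 16, 32]
print(overcommit_ratio(vms, 64))  # prints "1.5" -> memory is overcommitted
```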

Here are the results:

1) 57% answered they are using memory overcommit


……so much for “nobody uses it”

2) Of the 57% who answered yes, 87% said they use it in both production and test/dev, 2% only in production, and 11% only in test/dev


……so much for “nobody uses it in production”

3) Finally, plotting the virtual-to-physical ratios on a chart, we can see what usage looks like. Virtual-to-physical memory ratios ranged from 1.14 to 3 (average 1.8, median 1.75). 75% of the respondents use memory overcommit ratios of 1.5 or higher, and 37% utilize a ratio of 2 or higher.


.…..so much for “memory overcommit ratios must be low”

What the chart can’t show is that, based on our findings, companies at the low end of the memory overcommit usage spectrum tend to be recent VMware customers, while those at the high end tend to be long-standing VMware customers. This looks very similar to what we have seen with other VMware technologies such as VMotion: once people try it and see how well it works, they want to extract its full potential.

I believe this data clearly demonstrates that VMware customers use memory overcommit in production systems and do so with high virtual-to-physical ratios.

Finally, here is what a few customers who use memory overcommit in production have to say about it:

Kadlec Medical Center – Large 188-bed hospital in southern Washington State with over 270 medical staff members and over 10,000 annual patient admissions.

“Memory overcommit is one of the unique and powerful features of VMware ESX that we leverage every day in our production environments. Thanks to memory overcommit, we were able to increase the consolidation of our production environment by over 50%, maximizing utilization without giving up on the performance of our production systems. We appreciate that VMware makes it available to customers as a standard feature of ESX” – Tim Harper, Sr. System Analyst, Kadlec Medical Center

WTC Communications – regional phone, cable, Internet provider in Kansas

"A small business like ours derived tremendous benefits from the ability of VMware ESX to overcommit memory. We cannot afford the big IT budget of a large enterprise, so we must get the most out of our production servers while guaranteeing SLAs with our customers. This is exactly what VMware ESX memory overcommit allowed us to achieve. We were able to consolidate 35 production virtual machines (both Linux and Windows) on just 3 Dell PowerEdge 2850 servers with 8GB RAM each. Typically we run our production servers at an average ratio of 1.25 virtual-to-physical memory, however during maintenance operations, the ratio increases to 1.88 as we VMotion VMs out of the host that undergoes maintenance completely transparently to the users. Memory overcommit adds unparalleled flexibility to our infrastructure and saves us a lot of money not just by allowing higher consolidation, but also by eliminating the need for spare capacity to perform routine maintenance operations. Memory overcommit is a fully automated feature of ESX and it is extremely simple to use. It is really a no brainer.” — Jim Jones, Network Administrator, WTC Communications
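WTC’s numbers check out arithmetically; here is the calculation implied by the quote, using only the figures it gives:

```python
hosts = 3
ram_per_host_gb = 8
steady_ratio = 1.25

# Total VM-configured memory implied by the steady-state ratio:
virtual_gb = steady_ratio * hosts * ram_per_host_gb   # 30 GB

# During maintenance one host is evacuated via VMotion, so the same
# 30GB of VM memory runs on the remaining two hosts (16GB physical):
maintenance_ratio = virtual_gb / ((hosts - 1) * ram_per_host_gb)
print(round(maintenance_ratio, 2))  # ~1.88, matching the quoted figure
```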

U.S. Department of Energy – Savannah River

"Our virtualization effort began 4 years ago, and we have made great strides in server utilization since then. After upgrading to VI3, we took advantage of VMware memory overcommit. We now routinely overcommit memory at a 2:1 ratio in our production environments and have even reached 3:1 on occasion. We even run large applications such as Lotus Domino and SQL Server 2008 in VMs but this has not been an issue – no performance impact. As a result, we fully trust VMware memory overcommit in our production environments. Our IT budget is tight so in the past we have had to wait over 6 months to receive a new server. By using memory overcommit, we can now deploy a system in less than 30 minutes without waiting for a new server. This keeps our internal customers very happy," – Joseph Collins, Senior Systems Engineer, U.S. Department of Energy – Savannah River