
Author Archives: Alberto Farronato


About Alberto Farronato

Sr. Director Product Marketing, VMware Integrated Systems

Setting Microsoft straight on the VMware-Novell OEM agreement

As you may already know, VMware recently announced an OEM agreement with Novell to redistribute SUSE Linux Enterprise Server (SLES) to eligible VMware vSphere customers. Just a few hours later (wow – that was quick), Microsoft published its take on the Microsoft Virtualization Team blog. Unfortunately, fast is not always a synonym for well thought out. The arguments in the Microsoft post not only miss the point of the announcement, but are so far off base that one wonders whether Microsoft has learned anything about virtualization or is just trying to generate headlines for headlines' sake.

In either case, given the level of marketing spin in the Microsoft blog post, I feel obliged to address the most blatant misinformation and set the record straight:

Myth #1: “Looks like VMWare finally determined that virtualization is a server OS feature. I’m sure we’ve said that once or twice over the years ;-)”

Our announcement is about providing SLES as a guest operating system (OS), not as a hypervisor. Offering a more cost-effective way to deploy SLES in VMware environments has nothing to do with the architecture of the hypervisor. Come on, Microsoft – this is Virtualization 101. VMware is committed to a hypervisor architecture that does not rely on a general-purpose OS (unlike Hyper-V, which relies on Windows), because it is a fundamentally better design, one that leads to higher reliability, robustness and security. This is why in our latest-generation hypervisor architecture – VMware ESXi – we removed the console OS. Independent industry analysts agree that a slimmer hypervisor is the right approach – see "A Downside to Hyper-V". We certainly don't plan to reverse direction; quite the opposite, actually. As we have publicly stated multiple times, ESXi will be the exclusive focus of VMware's future development efforts. Thanks to the ESXi hypervisor architecture, our customers won't run the risks they would face with Hyper-V.

Myth #2: “The vFolks in Palo Alto are further isolating themselves within the industry. Microsoft’s interop efforts have provided more choice and flexibility for customers, including our work with Novell.”

Let's look at the facts:

  1. VMware vSphere supports 65 guest operating systems, versus only 17 for Hyper-V R2
  2. VMware vSphere supports more Microsoft operating systems than Microsoft Hyper-V R2 itself
  3. VMware vSphere supports six times more Linux operating systems than Hyper-V R2

Who is isolating itself? Who provides more choice? Let's not forget that just a few months ago Bob Muglia, President of Microsoft's Servers and Tools business unit, stated that the #3 top competitor for his division in 2010 is Linux! And just this past week at TechEd 2010, Steve Ballmer listed "Open Source" as a top competitor for Microsoft. How can Microsoft tout "new interoperability" with Linux with one hand while trying to kill it with the other? Something has got to give, and I think I know which one it will be…

Myth #3: “As one of many examples of our work with open source communities, we’re [Microsoft] adding functionality to the Linux Integration Services for Hyper-V. In fact, we have an RC version of the Linux Integration Services, which support Linux virtual machines with up to 4 virtual CPUs. In fact, we’ll talk more of this on June 25 at Red Hat Summit.”

VMware has a track record of providing equal support to Windows and Linux operating systems. We have supported 4 virtual CPUs for both Windows and Linux guest OSes since 2006 and added 8-way vSMP in 2009. The OEM agreement with Novell doesn't change our commitment to guest operating system neutrality. Positioning Hyper-V's upcoming support of 4 virtual CPUs for a small subset of Linux operating systems as a big win only confirms that 1) Microsoft is failing to keep up with VMware, and 2) Microsoft has treated Linux guests as second-class citizens. Is it credible that this second-class status for Linux will somehow change, given that Linux and Open Source are being classified as top competitors?

Myth #4: “This is a bad deal for customers as they’re getting locked into an inflexible offer. Check out the terms and conditions. [..] So be sure not to drop support or you’ll invalidate your license”

Before this agreement, customers had to purchase SnS (Support and Subscription) for VMware vSphere plus a separate SLES subscription for patches and updates. With the new VMware-Novell agreement, they only have to purchase VMware SnS. How exactly is that a bad deal for customers? Talking about "invalidating" licenses in the context of a Linux operating system doesn't make much sense given how the Linux licensing model works: the SLES deal offered by VMware is about a subscription to patches and updates, not about licenses. Applying the same logic to Microsoft Software Assurance, we should warn customers that SA is a bad deal because it locks them into an inflexible offer that forces them to pay in order to get the next Windows upgrade.

Myth #5: “Last, the vFolks have no public cloud offering, like Windows Azure, like Amazon EC2. While we’re demoing and building capabilities so customers have a common and flexible application and management model across on-premises and cloud computing, they’re stitching together virtual appliances to fill the void.”

Microsoft conveniently "forgets" about VMware's 1,000+ vCloud partners and the public infrastructure-as-a-service solutions based on VMware technology, like vCloud Express. Our objective is to enable a broad ecosystem of service providers that leverage VMware's technology to offer cloud services. This gives customers freedom of choice. We also want to make sure that companies retain control of their applications and are not locked into any one particular service. Virtual appliances are a key component of this strategy, because that's ultimately where the application lives. Microsoft isn't interested in virtual appliances, because it isn't interested in enabling application portability among cloud providers. Ultimately, Microsoft's strategy with Azure is to have customers run applications on Microsoft operating systems, using Microsoft databases, in Microsoft datacenters… the mother of all lock-ins.

Is Microsoft suffering from Hyper-Desperation R2?

Such an incredibly off-base reaction is clear evidence that the VMware-Novell OEM agreement struck a nerve at Microsoft. Could it be a sign of Hyper-Desperation R2? After all, the events of the past 2 months must have been pretty hard on the nerves of the Microsoft Virtualization Team:

  • On April 27th and May 19th, VMware announced new technology partnerships with two major cloud computing vendors, Salesforce.com (Salesforce.com and VMware Form Strategic Alliance to Launch VMforce™, the World's First Enterprise Java Cloud) and Google (VMware to Collaborate with Google on Cloud Computing). This strategy offers far more choice to customers than Microsoft's Azure-only approach
  • Then, on May 26th, Gartner published the 2010 x86 Server Virtualization Magic Quadrant, placing VMware squarely in the "Leaders" quadrant
  • Finally, just last week at Microsoft TechEd 2010 (one of Microsoft's biggest shows of the year), VMware vSphere won the "Best of TechEd 2010" award in the virtualization category and the "Best of TechEd Attendees' Pick" award. It must have been unsettling for the Microsoft virtualization team to watch attendees at their own conference vote for VMware vSphere

But all of this aside, at the end of the day, what really matters is that VMware continues to show strong execution in our mission of simplifying IT and providing customers a pragmatic path to the cloud. Our agreement with Novell is another great example of how we’re delivering on our mission.

VMware virtualization solutions for small and medium businesses cost less and provide more value

Last week VMware announced very positive business results for the first quarter of 2010, beating analyst expectations by a considerable margin. Needless to say, we are thrilled to see customers continue to turn to VMware to reduce complexity, dramatically lower costs, and increase business agility.

In the earnings call, our execs also described the strong momentum of our vSphere bundles for small and medium businesses (vSphere Essentials, vSphere Essentials Plus and the vSphere Acceleration Kits). For some, this may come as a bit of a surprise, since other virtualization vendors have been trying to create the misconception that VMware virtualization is too expensive for small businesses. Microsoft, for example, claims that Hyper-V and System Center are 3x to 12x less expensive than VMware's Essentials bundles. Nothing debunks such claims better than positive business results, but just in case, here is some simple math that proves the point.

In the table below we compare the cost of software licenses and support to deploy a virtualization solution with centralized management and the ability to recover quickly from hardware failures. The deployment consists of 3 virtualization hosts (2 CPUs per host) running 15 VMs total.

[Table: license and support cost comparison – VMware Essentials Plus vs. Microsoft Hyper-V/System Center]

As you can see from the table, VMware Essentials Plus is actually about $800 cheaper than a comparable offering from Microsoft even when only considering licensing cost and assuming equal consolidation ratios. While we believe that cost per application is a better approach to determine cost of acquisition for large and small/medium businesses, we wanted to debunk our competitor’s claim using their preferred methodology – upfront software license costs.
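If you want to redo the math with current list prices, the comparison boils down to a few lines of arithmetic. Here is a minimal sketch in Python – the dollar figures below are hypothetical placeholders, not the numbers behind our table:

```python
# Sketch of the comparison methodology: 3 hosts (2 CPUs each), 15 VMs,
# license cost plus one year of support. All dollar inputs are hypothetical
# placeholders -- substitute real list prices before drawing conclusions.
HOSTS, VMS = 3, 15

def stack_cost(license_cost, support_per_year, units=1):
    """License + 1 year of support, for however many units must be bought."""
    return units * (license_cost + support_per_year)

# e.g., a kit licensed once for all 3 hosts vs. a stack licensed per host:
kit_total = stack_cost(license_cost=3500, support_per_year=1000)
per_host_total = stack_cost(license_cost=1500, support_per_year=600, units=HOSTS)

for name, total in (("Kit (all 3 hosts)", kit_total),
                    ("Per-host stack", per_host_total)):
    print(f"{name}: ${total:,} total, ${total / VMS:,.0f} per VM")
```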

Price is certainly an important consideration for a small business; however, it is not the only factor behind the momentum of the VMware Essentials bundles. Reliability, ease of use, unlimited technical support calls, and rapid recovery from unplanned hardware failures are also very important ones. In our last couple of blog posts ("VMware ESX to ESXi Upgrade Center – Check it Out!", "Jumpstart a free virtual environment in a few clicks with VMware GO and ESXi") we discussed how and why VMware ESXi is a more reliable, robust and easy-to-use virtualization platform. VMware's solutions for SMBs are based on the same core technology that the largest companies in the world use in their production environments – there simply is not a more proven x86 virtualization platform out there. Why settle for less when you can afford the best? I guess many people have already figured this out.

In my next blog, I will discuss in greater detail the advantages that vSphere Essentials Plus offers for recovery and protection from hardware failures.

Jumpstart a free virtual environment in a few clicks with VMware GO and ESXi

In our last blog post, we discussed what motivated VMware to create ESXi and how its streamlined architecture, with no dependence on a general-purpose operating system (Windows or Linux), makes it a more reliable, secure and efficient virtualization platform. These are fundamental advantages of ESXi over other hypervisors; however, there's more. ESXi is also the ideal solution for quickly jumpstarting a free virtual environment without a lot of pre-existing virtualization expertise.

The first key ingredient for jumpstarting a free virtual environment is a simple installation and setup process. The video linked below, put together by the team at GoVirtual.tv, shows that installing ESXi is straightforward and can be completed in a few minutes with basic system administration skills.

[Video: installing VMware ESXi, by GoVirtual.tv]

While a simple hypervisor installation process is certainly important, it is only the initial step toward the ultimate goal: migrating applications from physical servers to virtual machines and getting them up and running as rapidly and smoothly as possible. For most large companies these are now well-established datacenter operations; for first-time users or resource-constrained small businesses, however, they can still be a roadblock to virtualizing applications. To help remove this roadblock, in January 2010 VMware launched VMware GO – a free web service that helps SMBs virtualize their physical servers on ESXi and get the new virtual machines up and running with just a few mouse clicks.

VMware GO eliminates the few manual operations still present in ESXi’s installation (like verifying the hardware compatibility list or burning a CD with ESXi’s image), provides everything needed to deploy VMs on ESXi hosts, and offers basic management and reporting capabilities. Here is a list of what VMware GO can do:

  • automatically check hardware for ESXi compatibility
  • download ESXi and burn installation CD
  • verify configuration of new ESXi installation
  • apply NTP settings
  • convert physical servers to virtual machines
  • create, delete and resize VMs
  • download and deploy virtual appliances from VMware’s Virtual Appliance Marketplace
  • open a remote console to a VM
  • scan ESXi hosts and VMs for patches
  • generate reports: ESXi host inventory, VM inventory, patch status

You can see VMware GO in action in this great video recorded by Dave Davis at TRAINSIGNAL (the video is based on the beta version of VMware GO, but it remains a very good overview of the current capabilities).

The video clearly shows how VMware GO complements the already simple installation process of ESXi, making it extremely fast not only to set up the hypervisor, but also to deploy virtual machines and perform basic centralized management tasks – all for free! Keeping in mind the unique benefits of ESXi's architecture discussed in our previous blog post, it is easy to see why VMware GO + ESXi – with greater reliability and ease of use than Microsoft Hyper-V or Citrix XenServer – is a better solution for first-time users and budget-constrained small businesses.

Hyper-V passes Microsoft’s checkmarks exam: isn’t that always the case?

While browsing through the Microsoft Virtualization website, I stumbled across a table in the Cost Savings section that presents a cost and feature checklist comparison of Hyper-V/System Center with a few vSphere editions.

[Table: Microsoft's cost and feature checklist comparing Hyper-V/System Center with vSphere editions]

While Microsoft's spin on the theoretical cost advantage of Hyper-V/System Center over vSphere isn't surprising (I won't address it here, since we have already shown how it doesn't hold water), the checklist struck me as having a few factual errors and misrepresentations of actual product capabilities that are worth pointing out:

  • vSMP Support – Microsoft's support for vSMP is actually much more limited than the table shows. Hyper-V R2 supports 4-way vSMP only in VMs running Windows Server 2008 and Windows 7. For Windows Server 2003 VMs, Hyper-V R2 supports up to 2-way vSMP, and for Linux (SUSE/RHEL) VMs just a single virtual CPU. vSphere, on the other hand, supports up to 4-way vSMP with the Standard, Advanced and Enterprise editions and 8-way vSMP with the Enterprise Plus edition, on any vSphere-supported guest OS (over 50 versions).

  • HA/Clustering – The table incorrectly shows that vSphere Standard does not include HA/Clustering, when in reality it does. Microsoft is also very generous with Hyper-V in implying it provides HA capabilities equal to vSphere's. Unlike vSphere, for example, Hyper-V R2 does not provide VM restart prioritization, which means there is no easy way for admins to ensure that critical VMs are restarted first. Incidentally, the lack of VM restart prioritization is one of the reasons why Burton Group stated that Hyper-V R2 is not an enterprise production-ready solution. In addition, because Hyper-V R2 lacks memory overcommit (a feature that is entirely missing from Microsoft's checklist), it can restart VMs only if the failover server has enough spare memory capacity to accommodate the VMs of the failed host.

  • Hot Add – Microsoft gives Hyper-V R2 a check on Hot Add and then, below the checkmark, specifies "Storage" to indicate that Hyper-V supports only hot add of a VM's virtual disk capacity. vSphere gets a checkmark too, but what the table doesn't say is that vSphere provides hot add not only of a VM's virtual disk capacity, but also of virtual memory and CPU.

  • Storage VMotion – This checkmark is funny, to say the least. If you don't know what the word "quick" means in Microsoft's marketing jargon (and believe me, I have heard illuminating translations of the term from Microsoft's own employees), you'd think that Microsoft has a fast Storage VMotion (possibly faster than VMware's). The reality is that even talking about Storage VMotion in Hyper-V's case doesn't make sense, because Microsoft's Quick Storage Migration, just like Quick Migration for VMs, cannot migrate VM virtual disks without downtime. VMware Storage VMotion, on the other hand, can migrate virtual disks without any application downtime.

  • DRS/PRO – Even now that Hyper-V has live migration, positioning PRO as a DRS equivalent isn't accurate. PRO is a fundamentally less usable and more complex solution for resource balancing. Unlike DRS, which can be configured from vCenter in a matter of a few clicks, PRO Tips requires both System Center Virtual Machine Manager (SCVMM) and Operations Manager (SCOM). As Microsoft TechNet shows, SCOM is a very complex product that consumes a considerable number of servers and databases that – contrary to what Microsoft wants people to believe – are neither free nor included in the cost of SMSD licenses. In addition to being hard to set up, PRO depends on software packages (PRO Packs) that each hardware vendor creates for its own products. Last but not least, PRO lacks a global view of the resources of a group of servers (as DRS has with Resource Pools) and consequently cannot optimize resource allocation across a cluster; it can only react to the local conditions of a given workload.

  • vNetwork/Host Profiles – in my opinion, this line wins the Oscar for best checkmark in a "misleading" role. First, Microsoft drops the words "Distributed Switch" from VMware's vNetwork Distributed Switch (vDS), making it look like a generic virtual networking feature. Then it gives Hyper-V R2 a check for the vNetwork/Host Profiles combination, implying that System Center provides the same functionality as VMware Host Profiles, when in reality the only way it could would be through extensive development of custom scripts and customization of SC Configuration Manager (should we add that extra cost to the System Center price at the top of the table?).

While there is more that could be said about this table, it already shows how easy it is for Hyper-V to pass Microsoft's checkmark exam. This isn't something new, though. Looking through the Virtual Reality archives, I found a two-year-old post ("Can I have the check, please?") by a former VMware SE, now with Microsoft, on this same checkmark issue. I guess it is true that old habits die hard.

The new economics of a virtualized datacenter: moving towards an application-based cost model

It's now widely accepted that virtualization is one of those industry-wide "techtonic" shifts that, like the earlier shifts to client-server architecture or to the Internet, mark the beginning of a major transformation in how datacenters are architected and run. But the transformation doesn't stop there. The recurring theme we hear from customers and partners is that the effect of virtualization goes far beyond technology, directly impacting how IT looks at the economics and cost structure of the datacenter. At the core of this economic shift is the fact that in the virtualized datacenter, applications run on shared compute resources, not on dedicated ones as in the old physical datacenter.

Customers tell us that in the virtual datacenter, the main variable driving planning and budgeting isn't the number of physical servers any more, but the number of virtual machines (i.e., applications). In our experience, in a fully or even partially virtualized datacenter, application-based (or VM-based) models quantify cost more accurately because they inherently factor in the economies of scale of multiple applications running on shared infrastructure. Intuitively, everything else being equal, a higher number of VMs per virtualization host (aka VM density) corresponds to a higher infrastructure utilization rate and a lower average cost of running an application.
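To make the VM density point concrete, here is a minimal sketch (with made-up numbers, not figures from the IDC study discussed below) of how spreading a fixed infrastructure cost across more VMs per host drives down the average cost of running an application:

```python
# Illustrative only: average cost per application falls as VM density rises.
# The dollar figures are made-up placeholders, not IDC data.
def cost_per_app(hosts, cost_per_host, shared_costs, vm_density):
    """Spread total infrastructure cost across all running VMs."""
    total_cost = hosts * cost_per_host + shared_costs
    return total_cost / (hosts * vm_density)

for density in (4, 8, 12, 16):
    cpa = cost_per_app(hosts=5, cost_per_host=8000, shared_costs=20000,
                       vm_density=density)
    print(f"{density:>2} VMs/host -> ${cpa:,.0f} per application per year")
```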

To help companies see how to implement a per-VM cost model for their datacenter, we asked IDC to work with one of our customers – Landmark Healthcare – and document how they are using an application-based cost model to track the cost structure of their datacenter on their journey to the fully virtualized datacenter and the private cloud. The findings are documented in the IDC white paper sponsored by VMware, The Economics of Virtualization: Moving Toward an Application-Based Cost Model.[1]

"Very clearly the existing process for the datacenter are undergoing a dramatic transformation. IDC believes that we are entering a new business cycle for IT and that virtualization will remain the foundation for enabling and driving a new set of economics for the datacenter. As customers increasingly deploy virtual environments they find themselves re-evaluating their traditional procurement and sourcing models. We continue to see cost per application as a more valid metric for measuring datacenter efficiency that will challenge existing server-based cost models."Michelle Bailey, Research VP, Enterprise Platforms and Datacenter Trends

Landmark Healthcare – IDC Case Study Summary

Landmark Healthcare is an insurance provider for chiropractic medicine, currently generating $20+ million in revenue with about 110 employees. The company is transitioning its business from that of a pure-play insurance company to one that specializes in claims processing and claims analytics for other health care providers. In 2008 Landmark decided to virtualize about half of its business applications in an effort to consolidate its server install base and contain expected growth. Like many others, Landmark rapidly realized that the benefits of a virtual infrastructure extend well beyond the initial capital savings into areas like improved business continuity, faster application deployment, simpler management and higher productivity. Building on the success of the initial virtualization phase and seeing the tremendous additional value available, Landmark now plans to virtualize all remaining business applications and to invest in an iSCSI storage array to leverage business continuity features like VMware HA, VMotion, and Fault Tolerance. Once this second virtualization phase is completed in 2010, Landmark's datacenter will be 100% virtual and will have enough extra capacity to support expected application growth for the following two years. In this time frame, Landmark expects it will only have to continue expanding its disk storage capacity according to the typical needs of its applications.

Landmark Healthcare – cost per application analysis

Working with Landmark, IDC built a detailed view of the company’s annualized cost per application at each stage, from a) fully physical to b) fully virtualized to c) planning for future needs. The chart below – taken from the IDC white paper – shows a summary of Landmark’s past, present, and projected cost-per-application and VM density (i.e. number of VMs per virtualization host).

[Chart: Cost per Application of VMware Virtual Infrastructure – Landmark's past, present, and projected cost per application and VM density]

As the chart shows, Landmark will reduce the average annualized cost of running its applications from $1,514 before virtualization (2008) to $643 in a fully virtualized datacenter (2009) despite a sizable investment in a new SAN. While this may sound like a big reduction, it is actually typical for VMware customers, who can achieve higher economies of scale by driving up hardware utilization without compromising application performance, reliability or availability. It is also important to note that the chart only shows the “hard” capital cost savings for Landmark’s infrastructure. It does not include a quantitative measure of the other benefits of virtualizing, such as increased admin productivity, shorter recovery time in case of system failure, and faster IT response time to business needs.

"VMware's technology has been instrumental for us to reliably virtualize our apps while maximizing the utilization rate of our hardware. As our datacenter becomes 100% virtual, we will lower the cost of running our apps by over 60% even despite a sizable investment in a new iSCSI SAN. At the same time we will improve availability, improve business continuity and enable our business owners to achieve a faster time to market for our services. The ability to drive up consolidation ratios with vSphere will curb the need for additional infrastructure investment. In fact, we expect that in the near future we will be able to support the forecasted growth for our business without the need for additional servers, simply by expanding our storage capacity to accommodate the natural demands of our business applications." – Ron Davis, Senior Network Engineer, Landmark Healthcare

It is easy to predict that with the trend toward fully virtualized datacenters, private clouds, and hybrid clouds, application-based cost models will become standard practice. Read the details of the Landmark Healthcare case in the IDC white paper. In addition to the detailed quantitative analysis, the white paper provides information on virtualization industry trends as well as practical advice on leveraging vSphere to maximize hardware utilization.

What’s your take? How are you measuring cost in your virtualized datacenter?


[1] IDC white paper sponsored by VMware, "The Economics of a Virtualized Datacenter: Moving Toward an Application-Based Cost Model," Doc #220766, November 2009.

Did Microsoft just agree with us that Hyper-V is NOT 1/6th the cost of vSphere?

Despite the fact that Hyper-V R2 addresses some of the issues of R1, Microsoft Hyper-V still cannot compete with VMware vSphere on value-added capabilities and functionality. Just look at how Burton Group ("Microsoft Hyper-V Still a Work in Progress") still deems Hyper-V R2 not enterprise-ready. Therefore, Microsoft resorts to competing with VMware on cost, and Microsoft execs have been going around touting how Hyper-V is an order of magnitude cheaper than vSphere. It is actually funny to watch the fraction they cite keep changing — the claim started at 1/3rd the cost of VMware ("…We [i.e. Microsoft Hyper-V] are one-third the price of VMware's"), then became 1/5th ("…the cost of vSphere Enterprise is five times that of buying the Microsoft solution"), and now Microsoft execs are saying 1/6th the cost ("…Hyper-V, which ships with Windows Server 2008, costs one sixth that of VMware's virtualization solutions"). I guess 1/3rd didn't work, so they keep marking it down – 25% off, 50% off, no wait, if you buy now, 75% off!

Given all this noise, imagine my surprise when I saw a Microsoft blog post that basically debunks Microsoft's own "1/6th the cost" claim. In "Investigating the VMware Cost-Per-Application Calculator", a Microsoft employee publishes a lengthy dissertation on our updated VMware Cost Per Application Calculator, with which we demonstrate how, thanks to its superior technology, vSphere is actually a less expensive solution than Hyper-V. The author's intent was apparently to point out our model's supposed flaws. One would expect that after he "fixed" all of our "flawed" assumptions, his calculations would definitively show Hyper-V as truly 1/6th the cost of vSphere. That's not the case at all. In fact, the only clear takeaway from Microsoft's post, after all the twists, turns, objections and re-calculations, is that Hyper-V is nowhere close to being 1/6th the cost of vSphere. Even in the author's best-case scenario for Hyper-V, in which Hyper-V hosts run more VMs than vSphere hosts thanks to more physical RAM, Hyper-V is only 31% less expensive than vSphere's highest-end edition. Last time I checked, 31% less is nowhere near 1/6th the cost. Had he compared Hyper-V to the lower-end editions of vSphere – those that more closely match what Hyper-V R2 delivers – there would have been practically no cost advantage for Hyper-V R2.

The bottom line is that Microsoft's post doesn't uncover anything new about the VMware Cost Per Application Calculator; quite the opposite, it confirms it. Try our calculator for yourself and create a customized report. You will find that it includes a sensitivity analysis showing vSphere's cost per application at different consolidation ratios. The analysis clearly demonstrates that even at equal consolidation ratios (the worst-case scenario for vSphere), Hyper-V's total acquisition cost is, at best, only marginally lower. Once you factor in vSphere's consolidation ratio advantage over Hyper-V and vSphere's ability to scale up to 2x more VMs than Hyper-V (check out the "Evaluating the ESX 4 Hypervisor and VM Density Advantage" report), vSphere delivers a cost per application that is lower by up to 20-30%. In fact, vSphere often becomes the less expensive solution with just 1-2 more VMs per ESX host – in addition to being a much more functional, more scalable, more proven product.

So you can either believe us when we say that Microsoft Hyper-V is actually about the same cost as VMware products or you can believe Microsoft when they say that VMware solutions cost about as much as Hyper-V – take your pick!

OK, now let’s get back to talking about how virtualization technologies solve business needs. Oh, and thanks Microsoft for busting your own myth.

Is Microsoft Urging Their Partners to Stretch the Truth?

After catching Microsoft in the act of removing a layer of the Hyper-V architecture to back up their claims that VMware vSphere somehow "taxes" users with extra layers, it now appears that their partners are making unfounded derogatory statements about VMware while posing as VMware partners. If you haven't seen it, ChannelWeb published an article this week titled "Microsoft Continues To Rain On VMware's Parade". In the article, after a repetition of "the additional layer theory" by David "substrate" Greschler, director of Microsoft virtualization and management, you will find Rand Morimoto, president of Convergent Computing, quoted as saying:

"More and more of our customers are switching over from VMware to Hyper-V because Hyper-V uses a familiar interface, works out of the box and is included in the organization's existing licensing agreement."

The original version of the article identified Convergent Computing as a "solution provider that partners with both Microsoft and VMware." (CMP has since revised the article; it now says the company is a "solution provider that has a staff of consultants with expertise in Hyper-V and VMware.") Our partner team at VMware saw the article and immediately told us that Convergent Computing is NOT a VMware partner at all. That set off some alarms, so we followed Rand Morimoto on Twitter to see what else he had to say that might clear up the mystery. What we discovered is that he is essentially a Microsoft spokesperson who even has a Microsoft badge… check it out:

[Image: Rand Morimoto's Microsoft badge]

If Rand is close enough to Microsoft to be issued a badge, we don’t think his comments about VMware users should be accepted as truth, especially when he provides no examples of customers who have supposedly made the switch to Hyper-V. The subterfuge didn’t fool us and it also didn’t fool the CMP readers, one of whom left this comment to the article:

"I have had the pleasure of knowing Rand Morimoto, president of Convergent Computing who is quoted in this article, for around 18 years (we both served on the Microsoft Partner Advisory Council and were Microsoft MVPs, and ran neighboring Novell Platinum shops before that). While Rand is a brilliant and accomplished technologist as well as concert pianist, former gymnast, really nice guy and all around Renaissance man, one thing his organization is not is VMware certified. This article makes it seem as if CCO is an unbiased partner of both Microsoft and VMware, but CCO credentials are 100% on the Microsoft side."

Now that we’ve seen that the reference partners Microsoft trots out for the press can’t be trusted, let’s go back to talking about real technology instead.

Cost-Per-Application – The Right Way To Estimate The Acquisition Cost Of Virtualization

As you might have already heard from our press release earlier today, we have announced the availability of the VMware Cost Per Application Calculator – an easy-to-use web tool that helps companies accurately estimate and compare the acquisition cost of virtualization. Understanding the true acquisition cost of a virtualization solution can be quite confusing these days, so in an effort to shed some light on the subject and get to reliable conclusions, we built this simple tool with the support of customers and industry analysts. The Calculator compares the acquisition costs of VMware Infrastructure 3 with those of Microsoft Windows Server 2008 with Hyper-V plus System Center, using the standard metric of "cost per application".

Calculating acquisition costs by only looking at software licenses may be an easy thing to do, but it provides a simplistic and incomplete picture of reality because:

  • It does not account for VM density (i.e., the number of applications that can be run on a virtualization host) – higher VM density means fewer servers, less storage and networking, fewer guest operating system licenses, etc.
  • It does not account for virtualization management cost (both software and hardware) – hypervisors are free (or almost), but management solutions are not.

Cost Per Application (see definition below) addresses both shortcomings while still keeping things simple. Refer to the Calculator itself for more detailed information about Cost Per Application.

[Formula: Cost per Application]
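The formula graphic didn't survive here, so as a stand-in, here is a plausible reconstruction based on the calculator's six inputs listed later in this post (not a copy of the original image):

```latex
% Plausible reconstruction of the Cost-per-Application formula, inferred
% from the calculator's inputs; not copied from the original graphic.
\[
\text{Cost per Application} =
\frac{C_{\text{virtualization licenses}} + C_{\text{management (sw+hw)}} +
      C_{\text{servers}} + C_{\text{power}} + C_{\text{datacenter space}}}
     {\text{number of applications virtualized}}
\]
```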

Although this first version of the Calculator can be used only to compare VMware and Microsoft, Cost Per Application as a methodology can be applied to determine the acquisition cost of any virtualization offering.

It is important to point out that while VM density is critical to realizing increased savings from virtualization (see Why Isn’t Server Virtualization Saving Us More?, Forrester Research), not all solutions provide the same level of VM density. Third-party validated tests demonstrate that:

 

[Charts: Taneja Group test results comparing VM density of VMware Infrastructure 3 and Windows Server 2008 with Hyper-V]

Based on these results, Taneja Group concludes that on average VMware Infrastructure 3 can safely run 50% more VMs per host than Windows Server 2008 with Hyper-V, while providing the same level of application performance. But, as the cost-per-application analysis shows, you don't need to run 50% more VMs on an ESX host to realize a lower cost per application with VI3 Enterprise Edition. With only 1-2 more VMs per ESX host, compared to a Windows Server 2008 with Hyper-V host, VI3 Enterprise Edition is the lower-cost solution – for a whole lot more functionality.

 

Example – Virtualizing 100 applications at different consolidation ratios

[Table: cost comparison for virtualizing 100 applications at different consolidation ratios]

Results may vary depending on the scenario; however, there are some general lessons to be learned:

1) Even when choosing VI 3 Enterprise Edition and assuming equal VM density, VMware's solution is never three times more expensive than Microsoft's offering. At equal consolidation ratios, VI 3 Enterprise is only marginally more expensive than Windows Server 2008 with Hyper-V – and it offers significantly more capabilities. Any one of these capabilities would, in its first year alone, generate enough operating expense savings to more than compensate for such a small premium in acquisition cost

2) On average, with only 2 additional VMs per VMware host, the fully featured VMware and Microsoft solutions reach cost parity (the sketch after these notes reproduces this crossover). In most cases, the lower-priced VI 3 Standard and VI 3 Foundation editions are at cost parity (or lower cost) even at equal VM density

3) At a reasonable 50% higher consolidation ratio, even VI 3 Enterprise Edition is significantly less expensive than Microsoft's offering – and of course it is more feature-rich. Note that examples of real-life deployments show VI 3 servers scaling to as much as 2x the VM density of Hyper-V hosts
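The crossover described in lesson 2 is easy to reproduce. Here is a minimal sketch with placeholder prices (not the calculator's actual inputs), amortizing host count for simplicity:

```python
# Sketch of the lesson-2 crossover: a ~10% pricier per-host stack reaches
# cost parity with one extra VM per host. Placeholder prices, NOT the
# calculator's actual inputs; host count is amortized (fractional).
APPS = 100

def cost_per_app(stack_cost_per_host, vms_per_host, apps=APPS):
    hosts = apps / vms_per_host            # amortized, fractional hosts
    return hosts * stack_cost_per_host / apps

baseline = cost_per_app(stack_cost_per_host=6000, vms_per_host=10)  # cheaper stack
for density in (10, 11, 12, 15):           # equal density, +1, +2, +50%
    pricier = cost_per_app(stack_cost_per_host=6600, vms_per_host=density)
    print(f"{density:>2} VMs/host: ${pricier:.0f}/app vs ${baseline:.0f}/app baseline")
```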

Final note – the VMware Cost-Per-Application Calculator is not meant to be the end-all-be-all cost analysis or to produce estimates exact to the last digit. Our goal is to help people start in the right direction and provide a more solid baseline for looking at the acquisition cost of server virtualization. Clearly, it is impossible to keep things simple and, at the same time, account for everyone's specific situation (existing infrastructure, software ELAs, special OEM contracts, etc.). Out of the box, the VMware Cost-Per-App Calculator provides a good level of flexibility by allowing users to specify six inputs:

  1. Number of applications to virtualize (between 10 and 1,000 VMs)
  2. Virtualization server type (low-end option $5,000, mid-range option $8,000)
  3. VMware Infrastructure 3 Edition (Foundation, Standard, Enterprise)
  4. Virtualization management deployment (in VMs or on physical servers)
  5. Cost of electricity
  6. Cost of DC space

In addition, we also provide full disclosure of our assumptions and methodology so that people can adapt calculations to their specific case.

 

To learn more about the VMware Cost Per Application Calculator and how to use it check out this video.

Feel free to leave your feedback on how to improve our tool as a comment to this blog.

Memory Overcommit – Real life examples from VMware customers

Memory overcommit, aka the ability of VMware ESX to provision memory to virtual machines in excess of what is physically available on a host, has been a topic of discussion in virtualization blogs for quite some time (e.g., "More on Memory Overcommitment") and apparently still is (e.g., "VMware vs. Microsoft: Why Memory Overcommitment is Useful in Production and Why Microsoft Denies it" and "Microsoft responds to VMware's ability to overcommit memory").

Given the benefits of memory overcommit and the fact that today only VMware ESX/ESXi offers it as a standard feature, it is understandable that other vendors try to downplay it, claiming it is irrelevant, dangerous, and not used in production environments. Microsoft's position on the topic is particularly interesting… or confusing, I should say. On one side, Bob Muglia, Microsoft VP, confirmed in an interview the usefulness of memory overcommit, announcing plans to add it to their hypervisor some time in the future (have you heard this line before?); on the other side, they don't miss an opportunity to speak against it. James O'Neill, also from Microsoft, even challenged us in his blog to provide a reference for a customer actually using it in production, promising in return to make a charitable donation of $270 to an organization of our choice.

Anyway, internally at VMware we certainly have no doubts about the importance and effectiveness of memory overcommit, but we felt that after all this back-and-forth among vendors, and after all the confusion from Microsoft as to whether it is or isn't important and is or isn't on the Hyper-V roadmap, it might be more interesting for you to hear directly from our customers. Therefore, the bulk of this post documents a survey of memory overcommit usage among VMware customers. You'll hear directly from VMware users about how they leverage ESX memory overcommit in their production datacenters, with no impact on performance, to increase VM density and further reduce VMware's already low cost per application – the most relevant metric of virtualization TCO.

(Side note: I bet MSFT will no longer question the value of overcommit once they are finally able to list it as an upcoming Hyper-V feature.)

Before jumping into the survey results, I think a few clarifications are necessary.

What is memory overcommit?

Here I won't go into all the granular, technical details of how memory overcommit works, because there is already a ton of great literature available that explains what it is and how it works (e.g., "The Role of Memory in VMware ESX Server 3").

However, there are a couple points that I’d like to make regarding the functionality of and requirements for Memory Overcommit.

Memory Overcommit: Required Components

Memory overcommit is the combination of three key ingredients:

  1. Transparent memory page sharing
  2. Balloon driver
  3. Optimized algorithms in the hypervisor kernel

These three elements must all be present and work together seamlessly. One alone is not enough, contrary to what some vendors would like people to believe (see "Ballooning is more than enough to do memory overcommit on Xen, Oracle says"). To date, only VMware ESX has all the necessary components; it has had them since 2001 and has continued to improve them ever since.

Memory Overcommit: Security Impact

Transparent memory page sharing de-duplicates memory by sharing identical pages among VMs. In doing so, it marks the shared pages "read-only" at the physical RAM level. If a VM tries to write to a shared page, ESX gets a callback and creates a private copy of the page for the VM that wants to write, while the other VMs continue to use the original shared page. Marking the page read-only ensures the technology is secure: one VM cannot affect any other VM. If you need additional assurance of memory overcommit's security, keep in mind that VMware ESX, with its memory page sharing feature, is the only hypervisor on the market that has earned a Common Criteria Evaluation Assurance Level 4 (EAL4+) under the CSEC Common Criteria Evaluation and Certification Scheme (CCS). Therefore, only VMware ESX is approved for use in "sensitive, government computing environments that demand the strictest security."
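For readers who like to see mechanisms in code, here is a conceptual toy model in Python – emphatically not VMware's implementation – of the copy-on-write behavior described above: identical pages are stored once, and a write breaks the sharing for just that VM:

```python
# Toy model of transparent page sharing with copy-on-write.
# Real hypervisors work on physical page frames and verify full page
# contents after a hash match; this sketch only illustrates the idea.
import hashlib

class PageSharingHost:
    def __init__(self):
        self.shared = {}        # content hash -> page content (stored once)
        self.vm_pages = {}      # (vm, virtual page #) -> content hash

    def map_page(self, vm, vpage, content: bytes):
        """Map a VM page; identical content across VMs shares one stored page."""
        digest = hashlib.sha256(content).hexdigest()
        self.shared.setdefault(digest, content)    # kept read-only
        self.vm_pages[(vm, vpage)] = digest

    def write_page(self, vm, vpage, new_content: bytes):
        """On write, break sharing: this VM gets a private copy (copy-on-write)."""
        self.map_page(vm, vpage, new_content)      # other VMs keep the old page

    def physical_pages_used(self):
        return len(set(self.vm_pages.values()))

host = PageSharingHost()
zero_page = bytes(4096)
for vm in ("vm1", "vm2", "vm3"):
    host.map_page(vm, 0, zero_page)                # e.g., identical OS pages
print(host.physical_pages_used())                  # 1 page backs all three VMs
host.write_page("vm2", 0, b"x" * 4096)             # vm2 writes -> private copy
print(host.physical_pages_used())                  # now 2 pages are in use
```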

Why is memory overcommit important?

Memory overcommit enables customers to achieve higher VM density per host, increasing consolidation ratios and providing a more efficient scale-up/scale-out model. Ultimately this translates into substantial savings and a lower cost per application than alternative solutions offer, as Eric Horschman shows in his blog post.

While the declining cost of memory could suggest that hypervisors with no memory overcommit can get away without it, in reality throwing more memory at the problem is not a sustainable solution for a few reasons:

  • The number of VMs deployed grows over time

  • Going forward, systems will be even more memory-constrained than today, because the number of CPU cores per server will increase considerably faster than memory capacity. In 2011 a two-socket system is expected to be capable of 64 logical CPUs and 256GB of RAM, whereas today the same system is probably capable of 8 logical CPUs and 64GB of RAM – that is, memory per logical CPU would drop from 8GB to 4GB. This means that a hypervisor's ability to manage memory efficiently will be an even more critical factor in minimizing the number of servers required to run applications and ensuring efficient scalability.

  • Memory capacity requirements aren't determined only by application workloads, but also by a number of valuable IT services, such as high availability, zero-downtime system maintenance, power management and rapid system provisioning. Virtualization solutions that don't allow memory overcommit corner customers into a lose-lose situation: either reduce system utilization or don't provide the service. Thanks to memory overcommit, our customers tell us they were able to reduce their dependence on available physical resources, avoid unnecessary purchases, and improve infrastructure utilization (see below for a few examples of how VMware customers use memory overcommit).

Enough with the clarifications – let's move on to the customer survey…

We conducted an online survey of 110 VMware customers essentially asking them three questions:

  1. Do you use memory overcommit?
  2. Do you use memory overcommit in test/dev, production or both?
  3. What is your virtual-to-physical memory ratio per ESX host (i.e., overcommit ratio)?

Here are the results:

1) 57% answered they are using memory overcommit

[Chart: respondents using memory overcommit (yes/no)]

……so much for “nobody uses it”

2) Of the 57% who answered yes, 87% said they use it in both production and test/dev, 2% only in production, and 11% only in test/dev

[Chart: where memory overcommit is used – production vs. test/dev]

……so much for “nobody uses it in production”

3) Finally, plotting the virtual-to-physical ratios on a chart shows what usage looks like. Virtual-to-physical memory ratios ranged from 1.14 to 3 (average 1.8, median 1.75). 75% of respondents use memory overcommit ratios of 1.5 or higher, and 37% use a ratio of 2 or higher.

[Chart: distribution of virtual-to-physical memory ratios]

……so much for "memory overcommit ratios must be low"

What the chart can't show is that, based on our findings, companies at the low end of the memory overcommit spectrum tend to be recent VMware customers, while those at the high end tend to be long-standing VMware customers. This looks very similar to what we have seen with other VMware technologies such as VMotion: once people try it and see how well it works, they want to extract its full potential.

I believe this data clearly demonstrates that VMware customers use memory overcommit in production systems and do so with high virtual-to-physical ratios.

Finally, here is what a few customers who use memory overcommit in production have to say about it:

Kadlec Medical Center – Large 188-bed hospital in southern Washington State with over 270 medical staff members and over 10,000 annual patient admissions.

"Memory overcommit is one of the unique and powerful features of VMware ESX that we leverage every day in our production environment. Thanks to memory overcommit, we were able to increase the consolidation of our production environment by over 50%, maximizing utilization without giving up the performance of our production systems. We appreciate that VMware makes it available to customers as a standard feature of ESX" – Tim Harper, Sr. System Analyst, Kadlec Medical Center

WTC Communications – regional phone, cable, Internet provider in Kansas

"A small business like ours derived tremendous benefits from the ability of VMware ESX to overcommit memory. We cannot afford the big IT budget of a large enterprise, so we must get the most out of our production servers while guaranteeing SLAs with our customers. This is exactly what VMware ESX memory overcommit allowed us to achieve. We were able to consolidate 35 production virtual machines (both Linux and Windows) on just 3 Dell PowerEdge 2850 servers with 8GB RAM each. Typically we run our production servers at an average ratio of 1.25 virtual-to-physical memory, however during maintenance operations, the ratio increases to 1.88 as we VMotion VMs out of the host that undergoes maintenance completely transparently to the users. Memory overcommit adds unparallel flexibility to our infrastructure and saves us a lot of money not just by allowing higher consolidation, but also by eliminating the need for spare capacity to perform routine maintenance operations. Memory overcommit is a fully automated feature of ESX and it is extremely simple to use. It is really a no brainier.” — Jim Jones, Network Administrator, WTC Communications

U.S. Department of Energy – Savannah River

"Our virtualization effort began 4 years ago, and we have made great strides in server utilization since then. After upgrading to VI3, we took advantage of VMware memory overcommit. We now routinely overcommit memory at a 2:1 ratio in our production environments and have even reached 3:1 on occasion. We even run large applications such as Lotus Domino and SQL server 2008 in VMs but this has not been an issue – no performance impact. As a result, we fully trust VMware memory overcommit in our production environments. Our IT budget is tight so in the past we have had to wait over 6 months to receive a new server. By using memory over commit, we can now deploy a system in less than 30 minutes without waiting for a new server. This keeps our internal customers very happy," – Joseph Collins, Senior Systems Engineer, U.S. Department of Energy – Savannah River