
Monthly Archives: March 2009

Cost-Per-Application – The Right Way To Estimate The Acquisition Cost Of Virtualization

As you might have already heard from our press release earlier today (see today’s announcement), we have released the VMware Cost Per Application Calculator – an easy-to-use web tool that helps companies accurately estimate and compare the acquisition cost of virtualization. Understanding the true acquisition cost of a virtualization solution can be quite confusing these days, so in an effort to shed some light on the subject and reach reliable conclusions, we built a simple tool – the Cost Per Application Calculator – with the support of customers and industry analysts. The Calculator compares the acquisition cost of VMware Infrastructure 3 with that of Microsoft Windows Server 2008 with Hyper-V plus System Center, using the standard metric of “Cost per Application”.

Calculating acquisition costs by only looking at software licenses may be an easy thing to do, but it provides a simplistic and incomplete picture of reality because:

  • It does not account for VM density (i.e. the number of applications that can be run on a virtualization host) – higher VM density means fewer servers, less storage and networking, fewer guest operating system licenses, etc. 
  • It does not account for virtualization management cost (both software and hardware) – hypervisors are free (or nearly so), but management solutions are not.

Cost Per Application (see definition below) addresses both shortcomings while still keeping things simple. Refer to the Calculator itself for more detailed information about Cost Per Application.

Formula
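In essence (the full breakdown is documented in the Calculator itself), the metric divides the complete acquisition cost of the virtualized environment – server hardware, storage and networking, virtualization software, management software and hardware, and guest OS licenses – by the number of applications virtualized:

\[
\text{Cost per Application} = \frac{C_{\text{servers}} + C_{\text{storage/network}} + C_{\text{virt. software}} + C_{\text{management}} + C_{\text{guest OS}}}{N_{\text{applications}}}
\]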

Although this first version of the Calculator can be used only to compare VMware and Microsoft, Cost Per Application as a methodology can be applied to determine the acquisition cost of any virtualization offering.

It is important to point out that while VM density is critical to realizing increased savings from virtualization (see Why Isn’t Server Virtualization Saving Us More?, Forrester Research), not all solutions provide the same level of VM density. Third-party validated tests demonstrate that:

 

[Charts: Taneja Group third-party test results comparing VM density and application performance of VMware Infrastructure 3 and Windows Server 2008 with Hyper-V]

Based on these results, Taneja Group concludes that on average VMware Infrastructure 3 can safely run 50% more VMs per host than Windows Server 2008 with Hyper-V, while providing the same level of application performance. But, as the Cost per Application metric shows, you don’t need to run 50% more VMs on an ESX host to realize a lower cost per application with VMware VI3 Enterprise Edition. With only 1-2 more VMs per ESX host than on a Windows Server 2008 with Hyper-V host, VI3 Enterprise Edition is the lower-cost solution – for a whole lot more functionality.

 

Example – Virtualizing 100 applications at different consolidation ratios

[Chart: acquisition cost comparison for virtualizing 100 applications at different consolidation ratios]

Results may vary depending on the scenario; however, there are some general lessons to be learned: 

1) Even when choosing the VI 3 Enterprise Edition and assuming equal VM density, VMware’s solution is never three times more expensive than Microsoft’s offering. At equal consolidation ratios, VI 3 Enterprise is only marginally more expensive than Windows Server 2008 with Hyper-V – and it offers significantly more capabilities. Any one of these capabilities would, in its first year alone, generate enough operating expense savings to more than compensate for such a small premium in acquisition cost.

2) On average, with only 2 additional VMs per VMware host, the fully featured VMware and Microsoft solutions are at cost parity. In most cases, the lower-priced VI 3 Standard and VI 3 Foundation editions are at cost parity (or lower cost) even at equal VM density.

3) At a reasonable 50% higher consolidation ratio, even VI 3 Enterprise Edition is significantly less expensive than Microsoft’s offering – and of course it is more feature rich. Note that examples of real-life deployments show that VI 3 servers can scale to 2x the VM density of Hyper-V hosts.

Final note – the VMware Cost-Per-Application Calculator is not meant to be the be-all and end-all of cost analysis or to produce cost estimates exact to the last digit. Our goal is to help people start in the right direction and provide a more solid baseline for evaluating the acquisition cost of server virtualization. Clearly, it is impossible to keep things simple and, at the same time, account for everyone’s very specific situation (existing infrastructure, software ELAs, special OEM contracts, etc.). Out of the box, the VMware Cost-Per-App Calculator provides a good level of flexibility by allowing users to specify six inputs:

  1. Number of applications to virtualize (between 10 and 1,000 VMs)
  2. Virtualization server type (low-end option at $5,000, mid-range option at $8,000)
  3. VMware Infrastructure 3 edition (Foundation, Standard, Enterprise)
  4. Virtualization management deployment (in VMs or on physical servers)
  5. Cost of electricity
  6. Cost of data center space

We also provide full disclosure of our assumptions and methodology so that people can adapt the calculations to their specific case.
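For readers who want to sanity-check numbers outside the web tool, the core arithmetic is easy to reproduce. Below is a minimal sketch in Python; the prices, densities, and cost categories are illustrative placeholders, not VMware’s list prices or the Calculator’s exact methodology:

    import math

    def cost_per_application(num_apps, vms_per_host, server_cost,
                             sw_license_per_host, mgmt_cost,
                             power_space_per_host):
        """Total acquisition cost divided by the number of virtualized apps."""
        hosts = math.ceil(num_apps / vms_per_host)   # servers needed
        total = hosts * (server_cost + sw_license_per_host
                         + power_space_per_host) + mgmt_cost
        return total / num_apps

    # Illustrative only: platform B carries a higher per-host license but,
    # echoing the 50% density advantage above, runs 9 VMs per host vs. 6.
    a = cost_per_application(100, 6, server_cost=8000,
                             sw_license_per_host=2000,
                             mgmt_cost=10000, power_space_per_host=1500)
    b = cost_per_application(100, 9, server_cost=8000,
                             sw_license_per_host=6000,
                             mgmt_cost=10000, power_space_per_host=1500)
    print(f"A: ${a:,.0f}/app   B: ${b:,.0f}/app")
    # -> A: $2,055/app   B: $1,960/app

Even this toy model reproduces the lesson above: a higher per-host license cost is overcome by a modest VM density advantage.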

 

To learn more about the VMware Cost Per Application Calculator and how to use it, check out this video.

Feel free to leave your feedback on how to improve our tool as a comment to this blog.

Response to Brian Madden’s Blog Entry about Cost Savings of VDI

Background
At last month’s VMworld Europe conference, one of the sessions in the desktop track was focused on the topic of TCO for VMware View. (Click here to view the content: http://www.vmworld.com/docs/DOC-2782). Brian Madden, a blogger, attended the Partner Day version of this training session and subsequently issued a blog entry claiming that VMware was misleading customers by positioning VMware View (VDI) against traditional desktops, not against TS (Terminal Services).  (URL for full blog entry: http://www.brianmadden.com/blogs/brianmadden/archive/2009/02/23/how-vmware-is-misleading-everyone-about-the-cost-savings-of-vdi.aspx).  The following is VMware’s response to the claims made in Brian Madden’s blog entry.

Summary

  1. VMware does not position TCO for VMware View (VDI) against TS because customers select View solutions specifically to address use cases they have not been able to fully meet with traditional desktops and/or TS – e.g. delivering a full desktop replacement to users, not just access to point applications like email.
  2. TS can often provide greater user density on a server, but those applications must be validated to work well in a multi-user environment.  Without the benefits of isolation, HA, DRS, etc. that VDI brings with the VI3 platform, all users in a TS environment on a given server will be affected when a single user experiences an application issue, resulting in a higher TCO over time.
  3. When applications are not compatible with ThinApp, they can still be used in a VMware View Composer environment by being installed into the Master Image, with only a minor impact on total storage consumed and on the TCO calculations.

After criticizing VMware for comparing View with traditional desktops but not including TS in the discussion, Brian Madden proceeded to post another blog entry, taking an about-face on his stance, saying: “So positioning VDI against Terminal Server is somewhat of a losing battle that trivializes its larger potential. Why not let Terminal Server ‘win’ against VDI today, because when VDI is ready, it will not be about ‘VDI versus Terminal Server’ – it will be about ‘VDI versus traditional desktops.’”

What was claimed in Brian Madden’s blog and what is VMware’s response?

Excerpted from Brian Madden’s blog:
Misleading Tactic #1 – VMware compares VDI to traditional computing, yet ignores TS
The whole session was basically a cost savings analysis of VDI over traditional fat-client computing. Fundamentally I don't have a problem with that. I even agree with all the savings numbers that were presented. The BIG PROBLEM I have is that for EVERY POINT made in the "pro" VDI category, the exact same point could have been made in a "pro" Terminal Server category. So while I agree 100% that yes, VDI could have saved the amount of money the presenter was suggesting, I think that a Terminal Server-based solution could have saved EVEN MORE money.

The bottom line: VDI was a lower cost option than the traditional computing. But the presenter never mentioned the lowest cost option which was TS. And sure, there are certain cases where VDI is needed and where TS won't work, but the case studies presented in the session were not those kind of cases. TS would have worked fine for them and would have been much cheaper than VDI.

VMware Response:
True, the OPEX savings presented at VMworld Europe, and used in VMware’s online TCO/ROI calculator, are based on third-party analyst reports (Gartner and Forrester) comparing traditional desktops with server-based computing.

The real bottom line is that customers don’t consider a VDI solution because they think it will be cheaper than TS; rather, they consider VDI when TS does not meet their needs.  The footprint of TS is wide across enterprise organizations (though rarely deep), and even the customer who presented at VMworld Europe during the “Understanding TCO for View” session acknowledged he had hundreds of TS servers that he would keep around, and was considering VDI because TS didn’t enable him to replace the thick desktop he still needed to provide to users.  As he details in his slide from VMworld, TS is used in his organization – as in many others – to provide access to a bare set of applications, coexisting with thick clients to meet the full use case. [Slide 20 from the customer’s VMworld presentation]

In considering a VDI architecture, a true desktop replacement is possible – potentially at a higher cost than TS, but with the ability to deliver a full desktop to a user without compromising application compatibility, the user environment, or stability (TS lacks workload isolation). 
Another point: if Citrix felt TS was sufficient to meet all their customers’ requirements, they wouldn’t have bought a virtualization platform and launched their own VDI solution.

Excerpted from Brian Madden’s blog:
The biggest savings would be on the capital expenditure of the server hardware. The VMware VDI session listed $150-200 per user for server hardware. This was assuming a fairly standard Dell server with 6 to 8 users per core. Again, I agree 100%. HOWEVER, the exact same architecture based on Terminal Server could have what, 20 or 30 users per core? Even if we assume only 20 users per core, we're still looking at a 3x difference in hardware costs per user.

VMware Response:
Yes, TS is a great way to get high density on a small number of key applications – when those applications work well in a multi-user environment.  Customers move to VDI because Citrix/TS doesn’t work for all situations, and when it doesn’t, you still need a solution like VDI for the exceptions.  Many VDI customers fall into this category.

Excerpted from Brian Madden’s blog:
In fact one of the case studies used a customer who was going with Vista, so the presenter said that customer got even less than 6 users per core since they needed the performance. And that's a good point. If you have very intense apps, you can also dial-down the number of users on a Terminal Server too. (I'll take a Terminal Server with 6 users per core over a VDI solution with 6 users per core any day…)

VMware Response:
Really? You want to share one copy of Windows Server 2003 with a bunch of other folks running a crash-prone application, even if the consolidation ratio is the same?  TS does not have the workload isolation benefits of virtualization, so sharing resources for intense applications can result in instability and availability issues that result in more support calls and a higher TCO.  In contrast, virtual desktops are completely isolated as well as disaster recovery-enabled through the HA and DRS capabilities on the VI platform.

Excerpted from Brian Madden’s blog:
Misleading Tactic #2 – VMware assumes all apps will work with ThinApp
But here's the problem with ThinApp (and all other application virtualization products too, like App-V, InstallFree, etc.). THINAPP CANNOT VIRTUALIZE 100% OF YOUR APPS. Sure, it can do most of them. But it can't do 100%. So what happens when you decide to implement a VMware VDI solution and you build your whole cost analysis model around getting rid of supporting your desktop and app issues and everything, but then you learn that you can't put all of your apps into ThinApp?
1. Install the apps in the traditional way (manually or via SMS) into the VMs. This would work, but now you break your linked-clone disk savings (since the apps would either (a) be installed for all users in the Parent Image, (b) be installed for each user into the diff clones, or (c) you'd need to have multiple Parent Images). Either way you destroy your cost savings model.
2. Install the apps onto the local fat desktops. But now you have a MAJOR user experience problem since VMware View only runs in remote desktop mode (i.e. no published apps), so now your users have to switch back and forth between two desktops, and you'd have profile sync issues and all sorts of problems, ON TOP OF THE FACT that you're still supporting local desktops, again completely destroying your cost savings model.

VMware Response:
To reap the storage savings benefits and one-to-many image management architecture of View Composer, you do need to think about your applications in a different way.  If an application is not compatible with ThinApp, there are several options that still yield the end result of working with View Composer and enabling a single point of management. 

  • Category A:  A single Parent Image with View Composer and ThinApp.  To minimize storage utilization, applications should be validated for ThinApp, keeping applications installed directly on the Parent Image to a minimum. We typically see that about 80% of applications can be ThinApped without issue.  These ThinApp’ed applications are installed on a server and streamed to users on demand, making application management faster and simpler.
  • Category B:  Multiple Parent Images for different user groups.  For applications that are not compatible with ThinApp, you will need to install the application directly onto a separate Parent Image.  While you are now managing more than one linked-clone Parent for different user populations, you are still gaining considerable storage and management benefits with the one-to-many View Composer architecture.  Even with multiple Parents, the storage reduction numbers are compelling (see the sketch after this list).  Since only about 20% of applications are not compatible with ThinApp – typically those with specialized hardware requirements – the total number of Parent Images required in a View environment should be minimal.  The ability to have centralized distribution of these virtual applications, coupled with a few Parent Images, can easily yield storage reductions of greater than 75% compared to traditional VDI or desktop requirements.  Using small pools with each of these Parent Images delivers cost savings through reduced overall storage and management overhead.
  • Category C: If all else fails, for particularly troublesome applications, a full independent VM may be needed, but application management would still leverage traditional tools like SMS/Altiris.  The impact to the cost model would be additional storage for this population of full desktop VMs, but it should constitute the smallest population of desktops in the overall deployment.
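To see why even a handful of Parent Images still pays off, here is a back-of-the-envelope storage model (a minimal sketch with illustrative sizes – the 20 GB image and 2 GB delta figures are assumptions, not View Composer’s actual on-disk numbers):

    def storage_gb(num_desktops, parent_images, parent_gb=20.0, delta_gb=2.0):
        """Linked clones: a few shared Parent Images plus a small delta disk
        per desktop, versus a full OS disk for every desktop."""
        linked = parent_images * parent_gb + num_desktops * delta_gb
        full = num_desktops * parent_gb
        return linked, full

    linked, full = storage_gb(num_desktops=1000, parent_images=5)
    print(f"linked clones: {linked:,.0f} GB vs. full clones: {full:,.0f} GB "
          f"({1 - linked / full:.0%} less storage)")
    # -> linked clones: 2,100 GB vs. full clones: 20,000 GB (~89% less storage)

Even with five Parent Images instead of one, the per-desktop delta disks dominate the math, which is why the reduction stays well above the 75% figure cited above.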

A Big Step Backwards for Virtualization Benchmarking

Why There's Still a Benchmarking Clause in Our EULA

We have a regularly repeating discussion here at VMware regarding benchmarking that goes along these lines:

Executive A: It seems like most of the people writing about virtualization have figured out that performance benchmarks need some special treatment when used with hypervisors.  It appears that our performance and benchmarking best practices guidelines are making an impact.  They've been available for a while and we're not seeing as many articles with badly flawed tests as we used to.  You know, the tests with bizarre results that come from errors like trying to measure network performance when the CPUs are saturated, or timing benchmark runs using the VM's system clock, or measuring a VM's disk I/O when everything is cached in host memory.  Perhaps it's finally time to drop the clause in our EULA that requires VMware review of performance tests before publication.

Executive B: That would be great!  We respond to every request for a benchmark review and we work with the submitters to improve their test processes, but VMware still gets criticized by competitors who claim we use that clause in our EULA to unfairly block publication of any ESX benchmarks that might not be favorable to VMware.  Even vendors whose benchmarks have been approved by us complain that it's an unreasonable restriction.  If we drop the clause, then maybe everyone will stop complaining and, since it seems people now understand how to benchmark a virtualization solution, we won't see as many botched tests and misleading results.

Executive A: OK, then it's agreed — we'll drop the EULA benchmark clause in our next release.

And then something like this gets published, causing us to lose faith once again in the benchmarking wisdom of certain members of the virtualization community, and we're back to keeping the clause in our EULA.

Bad Benchmarking in Action

To summarize, the bad news was a hypervisor benchmark published by Virtualization Review that showed ESX trailing the other guys in some tests and leading in others.  It was a benchmark unlike any we'd seen before and it left us scratching our heads because there were so few details and the results made no sense whatsoever. Of course, Microsoft didn't let the benchmark's flaws stop them from linking to the article and claiming it as proof that Hyper-V performs better than other hypervisors.  As near as we can tell, the Virtualization Review test consisted of a bunch of VMs each running a PC burn-in test program along with a database VM running a SQL Server script.  To be fair to Virtualization Review, they had given us a heads-up some time ago that they would be running a test, and we gave them some initial cautions that weren't heeded, but we certainly never approved publication of the ESX test results.  Had we been given an opportunity to review the test plan and results, our performance experts would have had some long discussions with the author on a range of issues.

Take for instance the results of the third test in the series, as published in the article:

Test 3 Component             Hyper-V    XenServer    VMware ESX
CPU Operations (millions)      5,000        3,750         7,080
RAM Operations (millions)      1,080        1,250         1,250
Disk Operations (millions)       167          187           187
SQL Server (m:ss)               4:43         5:34          5:34

A cursory glance would suggest that one hypervisor demonstrated a performance win in this test. In fact, it is very difficult to draw any conclusions from these results.  We at VMware noticed that the ESX numbers reported for CPU Operations were about 42% greater than for Hyper-V and about 89% greater than for XenServer.  Is ESX really that good, and XenServer and Hyper-V really that bad?  We'd like to take credit for a win, but not with this flawed test.
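The arithmetic from the table bears that out:

\[
\frac{7080}{5000} - 1 \approx 42\%, \qquad \frac{7080}{3750} - 1 \approx 89\%
\]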

What’s happening here is that there are a wide variety of problems with this configuration – we found many of them during our inspection of the tests:

  • The fact that ESX is completing so many more CPU, memory, and disk operations than Hyper-V obviously means that cycles were being used on those components as opposed to SQL Server.  Which is the right place for the hypervisor to schedule resources?  It’s not possible to tell from the scarce details in the results.
  • Resource-intensive SQL Server deployments, whether virtual or native, are run with large pages enabled.  ESX supports this behavior but no other hypervisor does, and this test didn’t use that key application and OS feature.
  • The effects of data placement with respect to partition alignment were not planned for.  VMware has documented the impact of this oversight to be very significant in some cases.
  • The disk tests are based on Passmark’s load generation, which uses a test file in the guest operating system.  But the placement of that file, and its alignment with respect to the disk system, was not controlled in this test.
  • The SQL Server workload was custom built and has not been investigated, characterized, or understood by anyone in the industry. As a result, its sensitivity to memory, CPU, network and storage changes is totally unknown, and not documented by the author.  There are plenty of industry standard benchmarks to use with hypervisors, and the days of ad hoc benchmark tests have passed.  Virtual machines are fully capable of running the common benchmarks that users know and understand, like TPC, SPECweb and SPECjbb.  An even better test is VMmark – a well-rounded test of hypervisor performance that has been adopted by all major server vendors as the standard measurement of virtualization platforms – or the related SPECvirt benchmark under development by SPEC.
  • With ESX’s highest recorded storage throughput already measured at over 100,000 IOPS on hundreds of disks, this experiment’s use of an undocumented, but presumably very small, number of spindles would obviously result in a storage system bottleneck. Yet storage performance results vary by tremendous amounts. Clearly there's an inconsistency in the configuration.

We're Not Against Benchmarking – We’re Only Against Bad Benchmarking

Benchmarking is a difficult process fraught with error and complexity at every turn. It’s important for those attempting to analyze performance of systems to understand what they’re doing to avoid drawing the wrong conclusions or allowing their readers to do so. For those that would like help from VMware, we invite you to obtain engineering assistance from benchmark@vmware.com. And everyone can benefit from the recommendations in the Performance Best Practices and Benchmarking Guidelines paper.  Certainly the writers at Virtualization Review can.

Postscript: Chris Wolf of Burton Group commented on virtualization benchmarks in his blog. He points out the need for vendor-independent virtualization benchmarks as promised by programs like SPECvirt.  I couldn't agree more.  VMware got the ball rolling with VMmark, which is a public industry standard, and we're fully supporting the development of SPECvirt.