Home > Blogs > VMware End-User Computing Blog > Monthly Archives: April 2010

Single Server Scalability Revisited

A while back Citrix published a report on “Single Server Scalability,” and at the time I blogged about how, at VMware, we believe it doesn’t help customers much to publish just one piece of a complex puzzle – customers need guidance on the overall architecture of a desktop virtualization solution (such as what we provide in our VMware View reference architectures).
Over the past couple of weeks my technical curiosity got the best of me, and I decided to spend some time in the lab to see what results I might come up with.
By way of context: in that report Citrix claims they could host 130 VMs (as measured using Login Consultants’ LoginVSI tool) on a Dell R710 server with dual Xeon 5570 CPUs and 72GB of RAM.
I set out to replicate their testing as closely as possible in my own labs to see how VMware View stacks up.
Unfortunately, I did not have a system that was exactly the same as what Citrix used, but I had two that were close.
The first system I used was an IBM HS22 with the same Xeon 5570 2.93GHz processors that Citrix used in their testing. However, it had 96GB of RAM (compared to 72GB in the Citrix testing).
The second system was a Dell R710 (which is what Citrix had), but it featured the slower Xeon 5520 2.26GHz processors and only 48GB of RAM.

Following the procedure outlined in the Citrix paper, I downloaded LoginVSI Express 2.12 – however, it appears there is a bit of a difference between the code I used and what Citrix used.
In their documentation Citrix reported using the freeware version of the 2.0 workload, which performed the following tasks:

  • Outlook 2007: browse 10 messages and type a new message
  • Internet Explorer: browse to cached pages for VMware, MSFT, and Citrix
  • Word 2007: one instance to measure response time (9 times), one instance to review, edit, and print a random document
  • Solidata PDF Writer and Acrobat Reader: print the Word doc to PDF and review it
  • Excel 2007: open a large randomized spreadsheet and edit it
  • PowerPoint 2007: a random PPT is reviewed and edited
  • 3 breaks (40, 20, and 40 seconds) to emulate real-world usage

However, this task list is actually more in line with what was previously offered in LoginVSI 1.0 than with the task list included in LoginVSI 2.0.
Per the admin manual, the task list for LoginVSI 2.0 is actually:

  • Open Outlook 2007 and browse 10 email messages
  • Open IE and browse to a locally cached copy of BBC.com – this session is left open. A 2nd session is started and browses to wired.com, lonelyplanet.com, and a flash based website (gettheglass.com)
  • Two instances of Word 2007 are opened – one is left open to measure response time, the other is used to review and edit documents
  • The word document is printed to PDF using Bullzip PDF writer, and subsequently viewed in Acrobat Reader
  • A very large random spreadsheet is opened and edited in Excel 2007
  • Open, review, and edit a PPT in PowerPoint 2007
  • 7-zip command line is used to zip the output of the session

As you can see, there are some notable differences between the LoginVSI 2.0 task list and what Citrix outlines in their document (which appears to be the LoginVSI 1.0 task list). According to the LoginVSI 2.0 admin guide, the 2.0 workload “is approximately 35% more resource intensive than VSI 1.0”.
The other difference was that Citrix used a custom tool to simulate a user logging in every 15 seconds. They go on to hypothesize that increasing the delay between users would reduce the impact on the server and increase the number of VMs that could be run simultaneously. I would assume, then, that decreasing the delay should increase the load and reduce the number of VMs that can be hosted.
Unfortunately, the LoginVSI Express tool only allows three launch-interval choices – 10 seconds, 30 seconds, and 60 seconds.
In order to stay as conservative as possible I chose the 10-second option – which, according to Citrix’s hypothesis, should actually increase the load on the server (and that makes sense to me).
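To put those launch intervals in perspective, here is a quick back-of-the-envelope sketch (the session count is just for illustration) showing how much faster the 10-second interval piles logins onto the server compared with the 30- and 60-second options:

```python
# Rough sketch: how long it takes to launch N sessions at a given interval.
# A shorter interval compresses the login storm into less time, which is
# why the 10-second option is the most aggressive choice.

def ramp_up_seconds(sessions: int, interval_s: int) -> int:
    """Total time to start all sessions, launching one every interval_s seconds."""
    return (sessions - 1) * interval_s

for interval in (10, 30, 60):  # the three intervals LoginVSI Express offers
    minutes = ramp_up_seconds(160, interval) / 60
    print(f"{interval:2d}s interval -> 160 sessions launched over {minutes:.1f} minutes")
```

At 10 seconds the entire 160-session ramp-up completes in under half an hour, versus well over two hours at 60 seconds, so the server absorbs far more concurrent login activity.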
I did nothing to optimize VMware vSphere (I used the current production release of VMware View 4.01, which includes VMware vSphere 4 Update 1) or the target guest operating system (Windows XP SP3 configured with one vCPU and 512MB of RAM). Additionally, I used the default installation routines for all of the LoginVSI components.
So to recap, I ran a (hypothetically) more aggressive login interval with a script that appears to be 35% more intensive than what Citrix outlines in their paper using a default installation of all the software components. On both servers I ran three complete test iterations and averaged the results.
What did I find?
On the HS22 (E5570 2.93GHz with 96 GB RAM) running 200 instances of the LoginVSI Express 2.0 medium workload the LoginVSI analyzer reported that we could simultaneously run 170 sessions before performance started to degrade.
On the R710 (E5520 2.26GHz with 48 GB RAM) running 160 instances of the LoginVSI Express 2.0 medium workload the LoginVSI analyzer reported that we could simultaneously run 142 sessions before performance started to degrade.
To recap, Citrix claimed to be able to run only 130 simultaneous sessions on the R710 with dual Xeon 5570 2.93GHz CPUs and 72GB of RAM.
This table recaps (each server has two quad-core CPUs, i.e. eight cores):

                  Intel Xeon 5520 2.26GHz    Intel Xeon 5570 2.93GHz    Intel Xeon 5570 2.93GHz
                  (Dell R710, View)          (IBM HS22, View)           (Dell R710, XenDesktop)
    # of VMs      142                        170                        130
    VMs / Core    17.75                      21.25                      16.25
What does that mean for you? Well, it depends. :)
Asking how many sessions you can run on a single server is a lot like asking how many plants I can get in my car (I spent the weekend at Home Depot with my wife gathering greenery for the garden). The answer depends on how big your car is and how large the plants are.
The test results are very interesting in terms of how they compare to each other. For customers, the relative comparison matters not because it shows exactly what they can achieve in their own environment (with their own applications, network, etc.), but because it helps them understand the relative scale and density characteristics of the two platforms.
Having said all that, I think a reasonable conclusion is that, despite running a more aggressive login interval with a script that was potentially 35% more resource intensive (than what Citrix said they used), VMware View was able to host more virtual desktops than Citrix XenDesktop.
Have you done your own testing? If so, what have you found? I would appreciate hearing from any customers about their experiences.