
Author Archives: Vincent Lin

VMware vSphere 5.5 Performs Well Running Big Data Scenario with Greenplum

VMware recently released a white paper on the performance and best practices of running a Pivotal Greenplum database cluster in virtual machines. The paper reports the results of two studies. In each study, six physical machines are used in the Greenplum cluster. Three different big data workloads are run on the physical machines, and then on virtual machines in the same configuration.

One experiment compares a physical setup to a virtual configuration for running Greenplum segment servers, one per host. The response times of all the workloads in the virtual environment are within 6% of those measured in the physical environment.

Another test shows the performance impact of deploying multiple smaller virtual machines instead of a single large virtual machine on each segment host. The results show that, with vSphere 5.5, this configuration yields an 11% reduction in workload processing time compared to the same hardware configuration in the physical environment. The main performance gain occurs when each smaller virtual machine fits into a NUMA node on the physical host. For more information, please read the full paper: Greenplum Database Performance on VMware vSphere 5.5.
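
As a rough illustration of that sizing rule, the following Python sketch (with hypothetical host and VM sizes, not figures from the paper) checks whether a VM's vCPU count and memory fit within a single NUMA node:

```python
# Minimal sketch of the NUMA-fit check; host and VM sizes below are
# hypothetical examples, not configurations from the paper.

def fits_in_numa_node(vm_vcpus, vm_mem_gb, cores_per_node, node_mem_gb):
    """True if both the VM's vCPUs and its memory fit in one NUMA node,
    so the hypervisor can keep the VM on node-local memory."""
    return vm_vcpus <= cores_per_node and vm_mem_gb <= node_mem_gb

# Example host: each NUMA node has 8 cores and 64 GB of memory.
CORES_PER_NODE, NODE_MEM_GB = 8, 64

# One large VM spans both nodes; two smaller VMs each fit in one node.
print(fits_in_numa_node(16, 120, CORES_PER_NODE, NODE_MEM_GB))  # False
print(fits_in_numa_node(8, 60, CORES_PER_NODE, NODE_MEM_GB))    # True
```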

IBM solidDB Universal Cache on VMware vSphere 5.0

VMware recently released a white paper showing the performance scalability of virtualized IBM solidDB Universal Cache and IBM DB2 on the IBM System x3850 X5 server with MAX5, using VMware vSphere 5.0.  This paper shows that the virtualized IBM solidDB Universal Cache environment achieves excellent performance and scalability on a typical online transaction processing (OLTP) system in today’s enterprise.

The test results show superior performance when the solidDB Universal Cache feature is added to an existing application server configuration. Throughput in the solidDB Universal Cache environment increases by 315%, while response time is reduced by 59%.

Even the largest configuration, which scales out to five solidDB virtual machines, achieves 5.09 times the throughput of a single solidDB virtual machine. Response times remained under 20ms in all of the solidDB scale-out cases, which is key to maintaining a positive user experience.
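
For readers who want to reproduce these relative figures from raw measurements, the arithmetic is simple. A minimal Python sketch, using illustrative numbers rather than the paper's raw data:

```python
# Sketch of the percentage and scaling arithmetic used above;
# the baseline and measured values here are illustrative only.

def pct_change(baseline, measured):
    """Relative change of a measurement versus its baseline."""
    return (measured - baseline) / baseline * 100

# A 315% throughput gain means the cached setup runs at 4.15x baseline.
baseline_tps, cached_tps = 100.0, 415.0
print(f"throughput gain: {pct_change(baseline_tps, cached_tps):+.0f}%")

# Scale-out efficiency: five VMs delivering 5.09x one VM's throughput.
speedup, vm_count = 5.09, 5
print(f"scaling efficiency: {speedup / vm_count:.1%}")  # slightly superlinear
```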

For more information on this research, read the full paper: Performance Study of Virtualized IBM solidDB Universal Cache and IBM DB2 on the IBM System x3850 X5 server with MAX5, Using VMware vSphere 5.0.

Microsoft Exchange Server 2010 Performance on vSphere 5

A white paper has been published that examines how Microsoft Exchange Server 2010 performs on vSphere 5 in terms of scaling up (adding more virtual CPUs) and scaling out (adding more VMs). Having the choice to scale up or out while maintaining a positive user experience gives IT more flexibility to right-size system deployments and optimize total cost of ownership with respect to licensing and hardware purchases.

Testing shows the effectiveness of vSphere 5 at adding compute power by scaling up Exchange Server VMs, in increments, from 2 to 12 virtual CPUs. This allowed the total number of very heavy Exchange users to increase from 2,000 to 12,000 while Send Mail latency remained well within the range of acceptable user responsiveness. Processor utilization remained low, at about 15% of the total host processing capacity for 12,000 very heavy Exchange users.

Testing also shows that scaling out to eight Exchange Server VMs supports a workload of up to 16,000 very heavy users, with the load consuming only 32% of the ESXi host processing capacity.
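
The per-unit ratios implied by these results are easy to derive. A short Python sketch based on the user counts quoted above (the even per-VM split and the exact scale-up increments are assumptions for illustration):

```python
# Derived ratios from the numbers quoted above; the even split of
# users and CPU across VMs is an assumption for illustration.

# Scale-up: 2 to 12 vCPUs supported 2,000 to 12,000 very heavy users,
# i.e. roughly 1,000 users per vCPU.
for vcpus in (2, 4, 8, 12):  # example increments; the paper lists the exact steps
    print(f"{vcpus:>2} vCPUs -> ~{vcpus * 1000:,} users")

# Scale-out: 8 VMs supporting 16,000 users at 32% of host CPU.
users, vms, host_cpu_pct = 16_000, 8, 32
print(f"~{users // vms:,} users per VM, ~{host_cpu_pct / vms:.0f}% host CPU per VM")
```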

Additional tests were undertaken to show the performance improvements of vMotion and Storage vMotion in vSphere 5. vMotion migration time for a 4-vCPU Exchange mailbox server VM showed a 34% reduction in vSphere 5 over vSphere 4.1. Storage vMotion migration time for a 350GB database VMDK showed an 11% reduction in vSphere 5 over vSphere 4.1.

For the full paper, see Microsoft Exchange Server 2010 Performance on vSphere 5.

Exchange 2007 Performance on vSphere 4

VMware recently released a white paper showing the performance scalability of Exchange 2007 on VMware vSphere. This paper shows that vSphere 4.0 achieves excellent performance and scalability with regard to both scale-up (adding more vCPUs) and scale-out (adding more VMs). The results indicate that vSphere can easily support 4,000 heavy Exchange users with a single 8-vCPU VM, or 8,000 heavy Exchange users with multiple 2-vCPU or 4-vCPU VMs. While supporting these high user counts, the latencies of most of the virtualized Exchange configurations are half the recommended threshold (500 ms), with little overhead compared to physical.

Even the largest configuration, which supports 8,000 heavy users with 16 vCPUs on an 8-way server, provides an outstanding user experience. For the 8,000 heavy user mailbox configuration, the 95th percentile Send Mail latency is 273 ms with eight 2-vCPU VMs and 304 ms with four 4-vCPU VMs.
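
To make the metric concrete: a 95th percentile latency is the value below which 95% of the sampled transactions complete. A small Python sketch with synthetic latencies (not the benchmark's data) shows the nearest-rank computation:

```python
import random

# Synthetic Send Mail latencies in milliseconds; illustrative only,
# not data from the benchmark runs described above.
random.seed(42)
latencies_ms = sorted(max(0.0, random.gauss(150, 60)) for _ in range(10_000))

# Nearest-rank 95th percentile: 95% of samples fall at or below it.
p95 = latencies_ms[int(0.95 * len(latencies_ms)) - 1]
print(f"95th percentile Send Mail latency: {p95:.0f} ms")
```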

Figure: 95th Percentile Send Mail Latency (2 vCPU VM vs. 4 vCPU VM)

In addition to these low latencies, the paper shows that the 8,000-mailbox configuration consumes less than 60% of host CPU resources, which leaves room for further user growth and consolidation. The paper also shows that ESX provides consistent performance across all consolidated virtual machines. For example, the response times of the Exchange transactions in the eight 2-vCPU VM configuration were within 2% of each other. For more information on this research, read the full paper: Microsoft Exchange Server 2007 Performance on VMware vSphere.
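
A consistency claim like "within 2% of each other" can be checked with a one-line spread calculation. A minimal sketch, with hypothetical per-VM response times standing in for the paper's measurements:

```python
# Hypothetical mean transaction response times (ms) for the eight
# 2-vCPU VMs; stand-ins for the paper's actual measurements.
vm_response_ms = [251, 249, 253, 250, 248, 252, 250, 249]

# Relative spread between the slowest and fastest VM.
spread = (max(vm_response_ms) - min(vm_response_ms)) / min(vm_response_ms)
print(f"max spread across VMs: {spread:.1%}")  # ~2.0% -> consistent
```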