
Monthly Archives: April 2009

VMmark 1.1.1 Released

VMmark 1.1.1 was released earlier this week to address a few minor bugs in VMmark 1.1. In addition, we have been working closely with the members of the VMmark review panel and other partners to improve the data gathering necessary for a benchmark review. To that end, we have automated several data gathering tasks and increased the amount of supporting data required for official benchmark submissions. Beginning on June 1st, we will only accept benchmark submissions using VMmark 1.1.1. Those of you running VMmark 1.1 internally for testing with no plans to publish results can probably skip this upgrade. You can download the new version here.

Database performance shines on vSphere 4


VMware recently released a whitepaper, Virtualizing Performance-Critical Database Applications in VMware® vSphere™, that shows why vSphere 4 is an excellent platform for performance-critical database applications. The paper details performance experiments using an OLTP workload against an Oracle database. The results show that even at very high loads, benchmark throughput on vSphere 4 is 85% of native. The table below summarizes statistics that indicate the load placed on the system in the native and virtual machine configurations.

 

Table 1. Comparison of Native and Virtual Machine Benchmark Load Profiles

| Metric | Native | VM |
|---|---|---|
| Throughput in business transactions per minute | 293K | 250K |
| Disk IOPS | 71K | 60K |
| Disk megabytes/second | 305 MB/s | 258 MB/s |
| Network packets/second | 12K/s receive, 19K/s send | 10K/s receive, 17K/s send |
| Network bandwidth | 25 Mb/s receive, 66 Mb/s send | 21 Mb/s receive, 56 Mb/s send |

Scale-up ratios show that every doubling of vCPUs results in a 90% increase in throughput.

Figure 1. vSphere 4 vs. Native – throughput normalized to 2-vCPU, ESX 4.0.


These results are the outcome of numerous performance enhancements in vSphere 4, including added hardware support for memory virtualization, a more efficient and feature-rich storage stack, and significantly better CPU resource management. The net result is a 24% and 28% increase in throughput over ESX 3.5 for 2- and 4-vCPU VMs, respectively. Additionally, with support for 8-vCPU VMs, the maximum throughput achievable from a single VM is much higher in vSphere 4 than in ESX 3.5.


Figure 2. vSphere 4 vs. ESX 3.5 – throughput normalized to 2-vCPU, ESX 4.0.

 

vSphere 4 can handle loads far larger than those demanded by most Oracle database applications in production. Support for VMs with 8 vCPUs, near-linear scale-up, and a 24% performance boost over ESX 3.5 make vSphere 4 an excellent platform for virtualizing very high-end Oracle databases.

 

For details on the experiments and the performance enhancements in vSphere 4, please read the paper: Virtualizing Performance-Critical Database Applications in VMware® vSphere™.

Database Sizing Charts for vSphere 4.0

Many of our customers have databases running on proprietary hardware that is approaching end of life. These databases are often not considered candidates for virtualization because they run on large systems with many sockets, while x86 systems have fewer cores and virtual machines often have even lower vCPU limits. Advances in processor technology, however, often put the performance of a VM with a small number of virtual processors on par with many of these much larger systems.

In this document, we introduce capacity planning and sizing charts for databases in a virtual environment. The purpose of these charts is to aid in sizing databases that are being moved from a legacy RISC-based physical machine to a virtual machine running on a modern x86-based system with VMware vSphere.

We combined data from lab experiments and published throughput results to derive a VMware conversion metric for older RISC machines. The data should be used to identify database servers that are good candidates for virtualization, and then to perform first-order sizing for those virtual machines.

The following table provides an estimated capacity planning reference for aggregate database throughput compared to the performance of a virtualized database running on a reference system.

The reference system used for this chart is a two-socket, eight-core Intel Nehalem system, as measured in the “Virtualizing Performance-Critical Database Applications in vSphere 4” paper. The reference system delivers a VMware DBunit throughput of approximately 1000 per core, for a total capacity of roughly 8000 across 8 cores. We’ve assumed near-linear scalability in this first version of the tables.

The table may be used in two ways:

  1. To check whether the largest VMware virtual machine is large enough to accommodate the database from the physical RISC system. The “Number of RISC Processors…” column shows the number of sockets in the largest system that an 8-vCPU VMware VM can replace for that machine type. Items denoted with a * indicate that the virtual machine is larger than the biggest physical system in that range.
  2. As a sizing estimator. “VMware DBunits” is the number of processing units needed for each RISC CPU in the source physical system. To calculate how many virtual cores are required to replace a physical system, multiply the number of processors (sockets) in the physical system by the DBunit score, then divide the result by the processing capacity of the reference core (1038 for our Nehalem reference system).

As an example, a Sun E1290 system with eight 1.5GHz USIV+ processors would require 8 x 723 = 5784 DBunits. Dividing by the 1038-per-core reference capacity and rounding up, this works out to 6 cores, so a 6-vCPU VM would be required for equivalent processing capacity and headroom.

More often than not, the physical system is less than 100% utilized, because significant headroom is required. With virtualization, much less headroom is needed, since a virtual machine that needs more processing capacity can simply be reconfigured with more virtual processors. If we assume the source physical system is at most 35% utilized, the same database can run in a virtual environment using only about 2024 DBunits (0.35 x 5784), which fits in 2 vCPUs.
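To make the sizing arithmetic concrete, here is a minimal Python sketch of the estimator described above. It is only an illustration of the calculation, not part of the published charts; the 1038 DBunits-per-core reference value and the E1290 figures come from the text and table in this post, and the function name is our own.

```python
import math

REFERENCE_DBUNITS_PER_CORE = 1038  # Nehalem reference core from the paper


def required_vcpus(sockets: int, dbunits_per_processor: float,
                   utilization: float = 1.0) -> int:
    """First-order estimate of vCPUs needed to replace a RISC system.

    sockets: number of processors (sockets) in the source system
    dbunits_per_processor: 'VMware DBunits per RISC Processor' from the table
    utilization: peak utilization of the source system (1.0 = sized for 100%)
    """
    needed_dbunits = sockets * dbunits_per_processor * utilization
    return math.ceil(needed_dbunits / REFERENCE_DBUNITS_PER_CORE)


# Sun E1290 with eight 1.5GHz USIV+ processors (723 DBunits each):
print(required_vcpus(8, 723))        # 6 vCPUs when sized for 100% utilization
print(required_vcpus(8, 723, 0.35))  # 2 vCPUs if the source peaks at 35%
```

The same function can be applied to any row of the table below by plugging in the source system's socket count and per-processor DBunit score.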

This is the first public version of this table. I'll update it as we gather more information and feedback on its use.

Table of RISC Throughputs Compared to a Virtual Machine (Version 1.1, April 2009)

| Vendor | CPU and System Type | VMware DBunits per RISC Processor | Number of RISC Processors replaced by one 8-vCPU VM |
|---|---|---|---|
| Sun | Ultra Enterprise Server 450 – 250 MHz | 100 | 4* |
| Sun | e10k – 250 MHz | 48 | 64* |
| Sun | e10k – 464 MHz | 156 | 53 |
| Sun | E6000 – 250 MHz | 45 | 30* |
| Sun | E4500 – 464 MHz | 166 | 14* |
| Sun | E4800 – 1.2 GHz USIV | 402 | 14* |
| Sun | E6800 – 1.2 GHz USIV | 402 | 21 |
| Sun | E1280 – 1.2 GHz USIV | 402 | 21 |
| Sun | E4900 – 1.5 GHz USIV+ | 723 | 11 |
| Sun | E6900 – 1.5 GHz USIV+ | 723 | 11 |
| Sun | E1290 – 1.5 GHz USIV+ | 723 | 11 |
| Sun | E15k – 900 MHz | 166 | 50 |
| Sun | E15k – 1.2 GHz | 186 | 45 |
| Sun | E15k – 1.2 GHz USIV | 335 | 25 |
| Sun | E15k – 1.5 GHz USIV+ | 603 | 14 |
| IBM | IBM eServer pSeries p690, Power4, 1.3 GHz | 726 | 11 |
| HP | HP Superdome PA-RISC/875 MHz | 297 | 28 |
| HP | HP Integrity Superdome Server, Intel Itanium | 710 | 12 |

* Capped by the maximum number of physical CPUs supported by the system.