VMware VROOM! Blog


HPC Application Performance on ESX 4.1: Stream

Recently VMware has seen increased interest in migrating High Performance Computing (HPC) applications to virtualized environments. This is due to the many advantages virtualization brings to HPC, including consolidation, support for heterogeneous OSes, ease of application development, security, job migration, and cloud computing (all described here). Currently some subset of HPC applications virtualize well from a performance perspective. Our long-term goal is to extend this to all HPC apps, realizing that large-scale apps with the lowest latency and highest bandwidth requirements will be the most challenging. Users who run HPC apps are traditionally very sensitive to performance overhead, so it is important to quantify the performance cost of virtualization and properly weigh it against the advantages.

Compared to commercial apps (databases, web servers, and so on), which are VMware’s bread-and-butter, HPC apps place their own set of requirements on the platform (OS/hypervisor/hardware) in order to execute well. Two common ones are low-latency networking (since a single app is often spread across a cluster of machines) and high memory bandwidth. This article is the first in a series that will explore these and other aspects of HPC performance. Our goal will always be to determine what works, what doesn’t, and how to get more of the former.

The benchmark reported on here is Stream, a standard tool designed to measure memory bandwidth. It is a “worst case” micro-benchmark; real applications will not achieve higher memory bandwidth.

Configuration

All tests were performed on an HP DL380 with two Intel X5570 processors, 48 GB memory (12 × 4 GB DIMMs), and four 1-GbE NICs (Intel Pro/1000 PT Quad Port Server Adapter) connected to a switch. Both the guest and native OS are RHEL 5.5 x86_64. Hyper-threading is enabled in the BIOS, so 16 logical processors are available. Processors and memory are split between two NUMA nodes. A pre-GA lab version of ESX 4.1 was used, build 254859.

Test Results

The OpenMP version of Stream is used. It is built with gcc, enabling OpenMP with a compiler switch:

gcc -O2 -fopenmp stream.c -o stream

The number of simultaneous threads is controlled by an environment variable:

export OMP_NUM_THREADS=8

The array size (N) and number of iterations (NTIMES) are hard-wired in the code as N=10⁸ (for a single machine) and NTIMES=40. The large array size ensures that the processor cache provides little or no benefit. Stream reports maximum memory bandwidth in MB/s for four tests: copy, scale, add, and triad (see the above link for descriptions of these); here M stands for 1 million, not 2²⁰. Here are the native results, as a function of the number of threads:

Table 1. Native memory bandwidth, MB/s

Threads      1      2      4      8     16
Copy      6388  12163  20473  26957  26312
Scale     5231  10068  17208  25932  26530
Add       7070  13274  21481  29081  29622
Triad     6617  12505  21058  29328  29889

Note that the scaling starts to fall off after two threads and the memory links are essentially saturated at 8 threads. This is one reason why HPC apps often see little benefit from enabling Hyper-Threading. To achieve the maximum aggregate memory bandwidth in a virtualized environment, two virtual machines (VMs) with 8 vCPUs each were used. This is appropriate only for modeling apps that can be split across multiple machines. One instance of Stream with N=5×10⁷ was run in each VM simultaneously, so the total amount of memory accessed was the same as in the native test. The advanced configuration option preferHT=1 was used (see below). The bandwidths reported by the two VMs are summed to get the total. The results are shown in Table 2: slightly greater bandwidth than for the corresponding native cases.

Table 2. Virtualized total memory bandwidth, MB/s, 2 VMs, preferHT=1

Total Threads      2      4      8     16
Copy           12535  22536  27606  27104
Scale          10294  18824  26781  26537
Add            13578  24182  30676  30537
Triad          13070  23476  30449  30010
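For reference, preferHT is applied per VM as an advanced configuration option. In the VM's .vmx file it looks like the fragment below; the key name numa.vcpu.preferHT is taken from later VMware documentation and is assumed to apply to the build tested here as well, so check the documentation for your release.

```
numa.vcpu.preferHT = "TRUE"
```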

It is apparent that the Linux “first-touch” scheduling algorithm together with the simplicity of the Stream algorithm are enough to ensure that nearly all memory accesses in the native tests are “local” (that is, the processor each thread runs on and the memory it accesses both belong to the same NUMA node). In ESX 4.1 NUMA information is not passed to the guest OS and (by default) 8-vCPU VMs are scheduled across NUMA nodes in order to take advantage of more physical cores. This means that about half of memory accesses will be “remote” and that in the default configuration one or two VMs must produce significantly less bandwidth than the native tests. Setting preferHT=1 tells the ESX scheduler to count logical processors (hardware threads) instead of cores when determining if a given VM can fit on a NUMA node. In this case that forces both memory and CPU of an 8-vCPU VM to be scheduled on a single NUMA node. This guarantees all memory accesses are local and the aggregate bandwidth of two VMs can equal or exceed native bandwidth. Note that a single VM cannot match this bandwidth. It will get either half of it (because it’s using the resources of only one NUMA node), or about 70% (because half the memory accesses are remote). In both native and virtual environments, the maximum bandwidth of purely remote memory accesses is about half that of purely local. On machines with more NUMA nodes, remote memory bandwidth may be less and the importance of memory locality even greater.

Summary

In both native and virtualized environments, equivalent maximum memory bandwidth can be achieved as long as the application is written or configured to use only local memory. For native this means relying on the Linux “first-touch” scheduling algorithm (for simple apps) or implementing explicit mechanisms in the code (usually difficult if the code wasn’t designed for NUMA). For virtual a different mindset is needed: the application needs to be able to run across multiple machines, with each VM sized to fit on a NUMA node. On machines with hyper-threading enabled, preferHT=1 needs to be set for the larger VMs. If these requirements can be met, then a valuable feature of virtualization is that the app needs no NUMA awareness at all; NUMA scheduling is taken care of by the hypervisor (for all apps, not just for those where Linux is able to align threads and memory on the same NUMA node). For apps where these requirements can’t be met (ones that need a large single-instance OS), current development is focused on relaxing these requirements so that such VMs behave more like native, while retaining the above advantage for small VMs.

Next: NAMD

2 thoughts on “HPC Application Performance on ESX 4.1: Stream”

  1. PiroNet

    Hi Jeffrey,

    Can I say that memory-intensive workloads benefit from the vSphere NUMA-aware scheduler placing as many of the vCPUs as possible into a home node, that is, a socket with its cores and local memory?

    And that, conversely, CPU-intensive workloads would perform better with vCPUs spread across as many shared last-level caches (sockets) as possible, as used to be the case with ESX prior to 4.1?

    If true, how can I turn this earlier initial-placement mechanism back on in vSphere 4.1 and later? Perhaps by disabling NUMA in the BIOS for the entire host? I would rather do it per VM!

    Thx,
    Didier

  2. Pingback: HPC Application Performance on ESX 4.1: NAMD | VMware VROOM! Blog - VMware Blogs
