
Tag Archives: benchmarks

New Scheduler Option for vSphere 6.7 U2

Along with the recent release of VMware vSphere 6.7 U2, we published a new whitepaper that shows the performance of a new scheduler option that was included in the 6.7 U2 update.  We referred to this new scheduler option internally as the “sibling” scheduler, but the official name is the side-channel aware scheduler version 2, or SCAv2.  The whitepaper includes full details about SCAv1 and SCAv2, the L1TF security vulnerability that made them necessary, and the performance implications with several different workload types.  This blog is a brief overview of the key points, but we recommend that you check out the full document.

In August 2018, a security vulnerability known as L1TF, affecting systems with Intel processors, was disclosed, and patches and remediations were made available: Intel provided microcode updates for its processors, operating system patches were released, and VMware provided an update for vSphere. The full details of the vCenter and ESXi patches are in a VMware security advisory that links to individual KB articles.

The ESXi-provided patches included a side-channel aware scheduler (SCAv1) that mitigated the concurrent-context attack vector for L1TF. Once that mode was enabled, the scheduler would only schedule processes on one thread for each core. This mode impacted performance mostly from a capacity standpoint because the system was no longer able to use both hyper-threads on a core. A server that was already fully utilized and running at maximum capacity would see a decrease in capacity of up to approximately 30%. A server that was running at 75% of capacity would see a much smaller impact to performance, but CPU utilization would rise.

In vSphere 6.7 U2, the side-channel aware scheduler has been enhanced (SCAv2) with a new policy to allow hyper-threads to be used concurrently if both threads are running vCPU contexts from the same VM. In this way, L1TF side channels are constrained to not expose information across VM/VM or VM/hypervisor boundaries.
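
For readers who want to try it, the scheduler modes are selected with two VMkernel boot options described in VMware KB 55806. A minimal sketch of enabling SCAv2 from the ESXi shell follows (verify the option names and values against the KB for your build; a host reboot is required for the change to take effect):

esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE
esxcli system settings kernel set -s hyperthreadingMitigationIntraVM -v FALSE

Setting both options to TRUE selects SCAv1 instead, and setting hyperthreadingMitigation to FALSE restores the default scheduler.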

Performance testing with several different workloads measured the impact of both SCAv1 and SCAv2 relative to the default scheduler as the baseline. If SCAv1 or SCAv2 achieved the same performance as the baseline, its score would be 1.0; if it achieved 75% of the baseline performance, its score would be .75. The graphs show the performance impact at maximum server utilization and at a reduced load of approximately 75% utilization.

The charts show that the SCAv2 scheduler, represented by the third bar in each group, recovers a significant percentage of performance in all cases except the monster VM test case. That test used a single large Oracle database VM that consumed an entire 4-socket host with 192 vCPUs. In configurations where a single monster VM uses all the logical threads of the host, SCAv1 had a slight performance advantage over SCAv2 in our testing.

The reduced-load numbers show that at server usage levels of approximately 75%, the overall impact to performance is much lower. With SCAv2 and overall load below 75%, the largest performance impact measured in these tests was 11%. The SCAv2 scheduler option, available in vSphere 6.7 U2, provides better performance than SCAv1 in almost all cases.

For full details about the individual benchmark tests as well as more details about L1TF and VMware’s response to it, please see the full whitepaper and VMware KB 55806.

IoT Analytics Benchmark adds neural network–based deep learning with Keras and BigDL

The IoT Analytics Benchmark released last year dealt with an important Internet of Things use case—monitoring factory sensor data for impending failure conditions. This year, we are tackling an equally important use case—image classification. Whether used in facial recognition, license plate readers, inspection systems, or autonomous vehicles, neural network–based deep learning is making image detection and classification a viable technology.

As with the classic machine learning in the original IoT Analytics Benchmark code (built on the Spark Machine Learning Library), the new deep learning code first trains a model using pre-labeled images and then deploys that model to infer the classification of new images. For IoT, this inference step is the most important. Thus, the new programs, designated IoT Analytics Benchmark DL, use previously trained models (included in the kit) to demonstrate inferencing that can be performed at the edge (on small gateway systems) or in scaled-out Spark clusters.

The programs run Keras and Intel BigDL image classifiers against the CIFAR10 image set. For each type of classifier there are two programs: one sends the images as a series of encoded strings, and the other reads those strings, converts them back to images, and infers which of the 10 CIFAR10 classes each image belongs to. The Keras classifier is a Python-based, single-node program intended for an IoT edge gateway; the BigDL classifier is a Spark-based distributed program built on Intel's BigDL library. (On the dataset, also see Learning Multiple Layers of Features from Tiny Images, by Alex Krizhevsky.)
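
To make the protocol concrete, here is a rough, hypothetical sketch of the inference side. This is not the kit's actual infer_cifar.py: the label,base64 line format, the model path, and the preprocessing are all assumptions for illustration.

import base64, io, sys
import numpy as np
from PIL import Image
from tensorflow import keras

# Hypothetical model path; the kit ships its own pre-trained models
model = keras.models.load_model("cifar10_model.h5")
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

correct = total = 0
for line in sys.stdin:
    # Assumed wire format: "<label>,<base64-encoded PNG>" per line
    label, b64 = line.rstrip("\n").split(",", 1)
    img = Image.open(io.BytesIO(base64.b64decode(b64)))
    x = np.asarray(img, dtype="float32")[np.newaxis] / 255.0
    pred = classes[int(np.argmax(model.predict(x, verbose=0)))]
    total += 1
    correct += pred == label
    if total % 100 == 0:
        print(f"{total} images classified")
print(f"Inferenced {total} images, {correct} ({100.0 * correct / total:.1f}%) correctly classified")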

The CIFAR10 image set consists of 50,000 pre-labeled training images and 10,000 pre-labeled test images. Each image is a 32 x 32 color image from one of ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, or truck. For example, here's a ship, a frog, and a truck:

[Figure: sample CIFAR10 images of a ship, a frog, and a truck]
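
For readers who want to inspect the dataset directly, Keras can download it in one call (the benchmark kit handles data loading itself):

from tensorflow import keras

# Downloads CIFAR10 on first use; the shapes confirm the 50,000/10,000 split
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3)
print(x_test.shape)   # (10000, 32, 32, 3)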

Here’s what the Python-based Keras program looks like running a complex ResNet model on a small, virtualized edge gateway system:

First, the inference program is started on the VM on the edge gateway using a pre-trained ResNet model included in the kit:

[root@iotdemo ~]# nc -lk 10000 | python3 infer_cifar.py --modelPath cifar10_ResNet20v1_model_91470.h5
Using TensorFlow backend.
Loaded trained model cifar10_ResNet20v1_model_91470.h5
Start send program
2019-01-31T04:09:37Z: 100 images classified
...
2019-01-31T04:11:06Z: 1000 images classified
Inferenced 1000 images in 99.3 seconds or 10.1 images/second, with 916 or 91.6% correctly classified

Then, when the inference program prints out “Start send program”, the send program is started from a driver system, in this case the author’s Mac:

[djaffe@djaffe-a01 ~/code/neuralnetworks/BigDL]$ python3 send_images_cifar.py -s -i 100 -t 1000 | \
  nc 192.168.2.3 10000
Using TensorFlow backend.
2019-01-31T04:09:12Z: Loading and normalizing the CIFAR10 data
2019-01-31T04:09:22Z: Sending 100 images per second for a total of 1000 images with pixel mean
subtracted
2019-01-31T04:09:31Z: 100 images sent
...
2019-01-31T04:11:00Z: 1000 images sent
2019-01-31T04:11:00Z: Image stream ended
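
For completeness, a matching hypothetical sender sketch (again, not the kit's send_images_cifar.py; the label,base64 wire format and the simple sleep-based pacing are assumptions) could look like this, piped into nc as shown above:

import base64, io, sys, time
from PIL import Image
from tensorflow import keras

classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

# Stream 1,000 CIFAR10 test images at roughly 100 images per second
(_, _), (x_test, y_test) = keras.datasets.cifar10.load_data()
rate, total = 100, 1000
for i in range(total):
    buf = io.BytesIO()
    Image.fromarray(x_test[i]).save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()
    print(f"{classes[int(y_test[i][0])]},{b64}")
    sys.stdout.flush()
    time.sleep(1.0 / rate)
    if (i + 1) % 100 == 0:
        print(f"{i + 1} images sent", file=sys.stderr)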

We are planning to use the new workloads in several VMware projects. As always, please send us your feedback and contributions!

Introducing TPCx-HS Version 2 – An Industry Standard Benchmark for Apache Spark and Hadoop clusters deployed on premise or in the cloud

Since its release in August 2014, the TPCx-HS Hadoop benchmark has helped drive competition in the Big Data marketplace, generating 23 publications spanning 5 Hadoop distributions, 3 hardware vendors, 2 OS distributions, and 1 virtualization platform. By all measures, it has proven to be a successful industry standard benchmark for Hadoop systems. However, the Big Data landscape has changed rapidly over the last 30 months. Key technologies have matured while new ones have risen to prominence in an effort to keep pace with the exponential expansion of datasets. One such technology is Apache Spark.

According to a Big Data survey published by the Taneja Group, more than half of the respondents reported actively using Spark, with a notable increase in usage over the 12 months following the survey. Clearly, Spark is an important component of any Big Data pipeline today. Interestingly, but not surprisingly, there is also a significant trend toward deploying Spark in the cloud. What is driving this adoption of Spark? Predominantly, performance.

Today, with the widespread adoption of Spark and its integration into many commercial Big Data platform offerings, I believe there needs to be a straightforward, industry standard way in which Spark performance and price/performance can be objectively measured and verified. Just like TPCx-HS Version 1 for Hadoop, the workload needs to be well understood and the metrics easily relatable to the end user.

Continuing the Transaction Processing Performance Council's commitment to bringing relevant benchmarks to the industry, it is my pleasure to announce TPCx-HS Version 2 for Spark and Hadoop. In keeping with important industry trends, TPCx-HS Version 2 supports not only traditional on-premise deployments but also cloud deployments.

I envision that TPCx-HS will continue to be a useful benchmark standard for customers as they evaluate Big Data deployments in terms of performance and price/performance, and for vendors in demonstrating the competitiveness of their products.

 

Tariq Magdon-Ismail

(Chair, TPCx-HS Benchmark Committee)

 

Additional Information:  TPC Press Release

Weathervane, a benchmarking tool for virtualized infrastructure and the cloud, is now open source.

Weathervane is a performance benchmarking tool developed at VMware.  It lets you assess the performance of your virtualized or cloud environment by driving a load against a realistic application and capturing relevant performance metrics.  You might use it to compare the performance characteristics of two different environments, or to understand the performance impact of some change in an existing environment.

Weathervane is very flexible, allowing you to configure almost every aspect of a test, and yet is easy to use thanks to tools that help prepare your test environment and a powerful run harness that automates almost every aspect of your performance tests.  You can typically go from a fresh start to running performance tests with a large multi-tier application in a single day.

Weathervane supports a number of advanced capabilities, such as deploying multiple independent application instances, deploying application services in containers, driving variable loads, and allowing run-time configuration changes for measuring elasticity-related performance metrics.

Weathervane has been used extensively within VMware, and is now open source and available on GitHub at https://github.com/vmware/weathervane.
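
Getting started follows the usual GitHub flow. The clone step below is exact; the harness invocation and configuration file name are a sketch based on the project's documentation, so check the repository README for the current entry point:

git clone https://github.com/vmware/weathervane.git
cd weathervane
# Prepare the environment per the README, then launch a run from the
# run harness (invocation illustrative):
./weathervane.pl --configFile=weathervane.config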

The rest of this blog gives an overview of the primary features of Weathervane.


Machine Learning on vSphere 6 with Nvidia GPUs – Episode 2

by Hari Sivaraman, Uday Kurkure, and Lan Vu

In a previous blog [1], we looked at how machine learning workloads (MNIST and CIFAR-10) using TensorFlow, running in vSphere 6 VMs in an NVIDIA GRID configuration, cut training time from hours to minutes compared to the same system running without virtual GPUs.

Here, we extend our study to multiple workloads (3D CAD and machine learning) run at the same time vs. run independently on the same vSphere server.

This is episode 2 of a series of blogs on machine learning with vSphere.


New White Paper: Best Practices for Optimizing Big Data Performance on vSphere 6

A new white paper is available showing how to best deploy and configure vSphere for Big Data applications such as Hadoop and Spark. Hardware, software, and vSphere configuration parameters are documented, as well as tuning parameters for the operating system, Hadoop, and Spark.

The best practices were tested on a Dell 12-server cluster, with Hadoop installed on vSphere as well as on bare metal. Workloads for both Hadoop (TeraSort and TestDFSIO) and Spark (Support Vector Machines and Logistic Regression) were run on the cluster. The virtualized cluster outperformed the bare metal cluster by 5-10% for all MapReduce and Spark workloads with the exception of one Spark workload, which ran at parity. All workloads showed excellent scaling from 5 to 10 worker servers and from smaller to larger dataset sizes.
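
For context, the Hadoop workloads named above ship with the standard Hadoop examples jar. A typical (illustrative) TeraSort sequence, with paths and row counts chosen for the example, looks like this:

# Generate ~1 TB of input (10 billion 100-byte rows), sort it, then validate
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teragen 10000000000 /bench/tera-in
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar terasort /bench/tera-in /bench/tera-out
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar teravalidate /bench/tera-out /bench/tera-val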
