
Tag Archives: Performance

DRS Lens – A new UI dashboard for DRS

DRS Lens provides an alternative UI for a DRS-enabled cluster. It offers a simple yet powerful interface to monitor the cluster in real time and provides useful analyses to users. The UI comprises different dashboards, in the form of tabs, for each cluster being monitored.

Cluster Balance

This dashboard shows the cluster balance metric plotted over time against DRS runs, illustrating how DRS reacts to and tries to clear cluster imbalance each time it runs.

VM Happiness

This dashboard shows VM happiness for the first time in a UI. The chart summarizes which VMs in the cluster are happy and which are unhappy, based on user-defined thresholds. Users can then select individual VMs to view the performance metrics behind their happiness, such as CPU ready time and memory swap-in rate.
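As a rough illustration (this is not DRS Lens code; the metric names and threshold values are hypothetical), the happy/unhappy classification based on user-defined thresholds might look like:

```python
# Illustrative sketch: classify VMs as happy or unhappy against
# user-defined thresholds, as the dashboard's summary does.
# Threshold defaults and metric names are hypothetical.

def classify_vms(vm_metrics, max_ready_pct=5.0, max_swapin_kbps=0.0):
    """vm_metrics maps VM name -> {'cpu_ready_pct': ..., 'swapin_kbps': ...}.
    A VM counts as happy only if every metric is within its threshold."""
    happy, unhappy = [], []
    for name, m in vm_metrics.items():
        if m["cpu_ready_pct"] <= max_ready_pct and m["swapin_kbps"] <= max_swapin_kbps:
            happy.append(name)
        else:
            unhappy.append(name)
    return happy, unhappy

metrics = {
    "web-01": {"cpu_ready_pct": 1.2, "swapin_kbps": 0.0},
    "db-01":  {"cpu_ready_pct": 9.8, "swapin_kbps": 0.0},   # high CPU ready
    "app-01": {"cpu_ready_pct": 0.4, "swapin_kbps": 12.5},  # swapping
}
happy, unhappy = classify_vms(metrics)
print(f"{len(happy)} happy, {len(unhappy)} unhappy")  # 1 happy, 2 unhappy
```

Selecting an unhappy VM in the dashboard then shows which of these metrics crossed its threshold.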

vMotions

This dashboard provides a summary of the vMotions that happened in the cluster over time. For each DRS run period, vMotions are broken down into DRS-initiated and user-initiated. This helps users see how actively DRS has been working to resolve cluster imbalance, and whether vMotions outside of DRS's control may be affecting cluster balance.
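A minimal sketch of that per-period breakdown (not DRS Lens code; the event shape and timestamps are hypothetical) could bucket vMotion events by the DRS run that preceded them and tally each initiator:

```python
# Illustrative sketch: bucket vMotion events into DRS run periods and
# split each period's count by initiator ('drs' vs. 'user').
from collections import Counter
import bisect

def vmotion_breakdown(events, drs_run_times):
    """events: list of (timestamp, initiator) tuples.
    drs_run_times: sorted list of DRS run start timestamps.
    Returns {period_index: Counter}, where period -1 means before the first run."""
    breakdown = {}
    for ts, initiator in events:
        period = bisect.bisect_right(drs_run_times, ts) - 1
        breakdown.setdefault(period, Counter())[initiator] += 1
    return breakdown

runs = [100, 400, 700]  # times at which DRS ran
events = [(120, "drs"), (150, "user"), (430, "drs"), (710, "drs")]
print(vmotion_breakdown(events, runs))
```

A period with many user-initiated entries flags migrations happening outside DRS's control.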

Operations

This dashboard tracks the different operations (tasks, in vCenter Server) that happened in the cluster over time. Users can correlate task information from this dashboard with DRS load balancing and its effects shown in the other dashboards.

 

Users can download DRS Lens from the VMware Flings website.

The Extreme Performance Series at VMworld 2017

I’m excited to announce that the “Extreme Performance Series” is back for its 5th year, and with 7 additional sessions, it’s our largest year ever! These sessions are created and presented by VMware’s best and most distinguished performance engineers, principals, architects and gurus. You do not want to miss these advanced sessions.

Continue reading

Introducing VMmark3: A highly flexible and easily deployed benchmark for vSphere environments

VMmark 3.0, VMware's multi-host virtualization benchmark, is generally available here. VMmark3 is a free cluster-level benchmark that measures the performance, scalability, and power of virtualization platforms.

VMmark3 leverages much of the technology and design of previous VMmark generations. It continues to use a unique tile-based heterogeneous-workload application design, and it again deploys the platform-level workloads found in VMmark2, such as vMotion, Storage vMotion, and Clone & Deploy. In addition to incorporating new and updated application workloads and infrastructure operations, VMmark3 introduces a fully automated provisioning service that greatly reduces deployment complexity and time.

Figure 1: VMmark3

The VMmark3 Benchmark:

  • Allows accurate and reliable benchmarking of virtual data center performance and power consumption of host and storage components.
  • Allows heterogeneous workload comparisons between different virtualization platforms.
  • Allows the analysis of changes in hardware, software, and configuration within virtualization environments.

VMmark3 Application Workloads:

  • DVDstore3: The third-generation DVDstore benchmark is a complete online e-commerce test application with a back-end database component, a web application tier, and driver programs. The application simulates users logging into a web server and browsing a catalog of products using basic queries. VMmark3 utilizes DVDstore3 with four virtual machines: three Apache web-tier VMs and one MySQL database VM. One of the web servers delivers a constant load to the database, while the other two deliver a cyclical load to generate a bursty profile.
  • Weathervane: This is a highly scalable web application that contains a variety of support services working with a core application that simulates an online auction. Each VMmark3 tile contains two independent instances of the Weathervane Auction application, one static and one elastic, for a total of 14 VMs (8 static and 6 elastic). The elastic workload mimics self-scaling applications by periodically adding and removing an application server and web server throughout the benchmark run.
  • Standby: The standby server mimics a heartbeat server.
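The constant-plus-cyclical load shape described for the DVDstore3 drivers can be sketched as follows (the rates and cycle period here are hypothetical, chosen only to show the bursty aggregate):

```python
# Illustrative sketch of the DVDstore3 driver profile described above:
# one web server drives a constant load while the other two each drive
# a cyclical load, producing a bursty aggregate. Numbers are invented.
import math

def aggregate_load(t, constant=100, cyclical_peak=100, period=600):
    """Offered load (ops/s) at time t seconds into the run."""
    constant_ws = constant                         # web server 1: steady
    phase = math.sin(2 * math.pi * t / period)
    cyclical_ws = cyclical_peak * max(0.0, phase)  # web servers 2 and 3: on/off cycle
    return constant_ws + 2 * cyclical_ws

print(aggregate_load(150))  # crest of the cycle: 100 + 2*100 = 300.0
print(aggregate_load(450))  # trough: cyclical drivers idle, only the constant 100 remains
```

The burst-to-trough swing is what exercises the platform's ability to handle rapidly changing demand rather than a flat steady state.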

VMmark3 Infrastructure Operations:

  • vMotion: This infrastructure operation live migrates one of the Weathervane Auction RabbitMQ VMs in a round-robin fashion to simulate modern sysadmin operations.
  • Storage vMotion: For this operation, one of the Standby VMs is migrated to a user-specified maintenance partition and then, after a period of rest, returns to the original location.
  • XvMotion: This operation simultaneously migrates one of the DS3WebA VMs to an alternate host and a maintenance partition (a combined compute and storage migration). As with Storage vMotion, after a period of rest the VM returns to its original location.
  • Automated Load Balancing (DRS): VMmark requires that DRS be enabled and running to ensure typical rebalancing operations occur within the environment under test.
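The round-robin vMotion pattern described above can be sketched as a simple target selector (this is not VMmark code; host names are hypothetical): each cycle, the VM is live-migrated to the next host in the cluster, wrapping around at the end.

```python
# Illustrative sketch of the round-robin vMotion pattern: on each
# cycle, migrate the target VM to the next host, wrapping around.
from itertools import count

def round_robin_targets(hosts, current_host):
    """Yield the destination host for each successive vMotion cycle."""
    i = hosts.index(current_host)
    for step in count(1):
        yield hosts[(i + step) % len(hosts)]

hosts = ["esx-01", "esx-02", "esx-03"]
targets = round_robin_targets(hosts, "esx-01")
print([next(targets) for _ in range(4)])  # ['esx-02', 'esx-03', 'esx-01', 'esx-02']
```

Cycling through every host ensures the migration load is spread across the whole cluster rather than bouncing between one pair of hosts.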

VMmark3 Provisioning Service:

  • VMmark3 features a highly automated setup and tile-creation process that makes benchmark deployment fast and easy, with little to no manual intervention. The entire process is seeded from a single OVA and can be run unattended for tiles 0 through N. VMmark3 uses CentOS-based free or open-source software throughout, eliminating the need to purchase additional software licenses.

 

NEW VMworld 2017 Bootcamp – vSphere Advanced Performance Design, Configuration and Troubleshooting

New this year for VMworld 2017 in Las Vegas, we will be offering a pre-VMworld bootcamp focused on vSphere platform performance. Specific SQL and Oracle bootcamps will still be offered, but we have had many requests for a workload-agnostic program. This bootcamp will enable you to confidently support all your virtual workloads and give you an opportunity to directly interact with VMware Performance Engineering.

Continue reading

Introducing TPCx-HS Version 2 – An Industry Standard Benchmark for Apache Spark and Hadoop clusters deployed on premise or in the cloud

Since its release in August 2014, the TPCx-HS Hadoop benchmark has helped drive competition in the Big Data marketplace, generating 23 publications spanning 5 Hadoop distributions, 3 hardware vendors, 2 OS distributions, and 1 virtualization platform. By all measures, it has proven to be a successful industry standard benchmark for Hadoop systems. However, the Big Data landscape has rapidly changed over the last 30 months. Key technologies have matured while new ones have risen to prominence in an effort to keep pace with the exponential expansion of datasets. One such technology is Apache Spark.

According to a Big Data survey published by the Taneja Group, more than half of the respondents reported actively using Spark, with a notable increase in usage over the 12 months following the survey. Clearly, Spark is an important component of any Big Data pipeline today. Interestingly, but not surprisingly, there is also a significant trend towards deploying Spark in the cloud. What is driving this adoption of Spark? Predominantly, performance.

Today, with the widespread adoption of Spark and its integration into many commercial Big Data platform offerings, I believe there needs to be a straightforward, industry-standard way in which Spark performance and price/performance can be objectively measured and verified. Just like TPCx-HS Version 1 for Hadoop, the workload needs to be well understood and its metrics easily relatable to the end user.

Continuing the Transaction Processing Performance Council's commitment to bringing relevant benchmarks to the industry, it is my pleasure to announce TPCx-HS Version 2 for Spark and Hadoop. In keeping with important industry trends, TPCx-HS now supports not only traditional on-premise deployments but also the cloud.

I envision that TPCx-HS will continue to be a useful benchmark standard for customers as they evaluate Big Data deployments in terms of performance and price/performance, and for vendors in demonstrating the competitiveness of their products.

 

Tariq Magdon-Ismail

(Chair, TPCx-HS Benchmark Committee)

 

Additional Information:  TPC Press Release

Oracle Database Performance on vSphere 6.5 Monster Virtual Machines

We have just published a new whitepaper on the performance of Oracle databases on vSphere 6.5 monster virtual machines. We took a look at the performance of the largest virtual machines possible on the previous four generations of four-socket Intel-based servers. The results show how performance of these large virtual machines continues to scale with the increases and improvements in server hardware.


Oracle Database Monster VM Performance on vSphere 6.5 across 4 generations of Intel-based four-socket servers

In addition to vSphere 6.5 and the four-socket Intel-based servers, the testing used an IBM FlashSystem A9000 high-performance all-flash array. This array provided the extremely low latency that enabled the database virtual machines to reach such high levels of performance.

Please read the full paper, Oracle Monster Virtual Machine Performance on VMware vSphere 6.5, for details on hardware, software, test setup, results, and more cool graphs. The paper also covers the performance gain from Hyper-Threading, the performance effects of NUMA, and best practices for Oracle monster virtual machines. Because these best practices focus on monster virtual machines, we also recommend checking out the full Oracle Databases on VMware Best Practices Guide.

Some similar tests with Microsoft SQL Server monster virtual machines were also recently completed on vSphere 6.5 by my colleague David Morse. Please see his blog post and whitepaper for the full details.

This work on Oracle is in some ways a follow-up to Project Capstone from 2015 and the resulting whitepaper, Peeking at the Future with Giant Monster Virtual Machines. That project dealt with monster VM performance from a slightly different angle and might be interesting to those who are also interested in this paper and its results.

 

Capturing the Flag for Cloud IaaS Performance with VMware’s vSphere 6.5 and VIO 3.1 on Dell PowerEdge Servers

This week SPEC published a new SPEC Cloud™ IaaS 2016 result for a private cloud configuration built using VMware vSphere 6.5, VMware Integrated OpenStack 3.1 (VIO 3.1), and Dell PowerEdge servers. Working with VMware, Dell has pushed its lead in cloud performance even further. This time, the primary metric produced was a Scalability score of 78.5 @ 72 Application Instances (468 VMs). The Elasticity score was 87.4%.

VMware and Dell are active participants in SPEC and have contributed to the development of its industry standard benchmarks including SPEC Cloud IaaS 2016. Both organizations strongly support SPEC’s mission to provide a set of fair and realistic metrics on which to differentiate modern systems and technologies.

Continue reading

Weathervane, a benchmarking tool for virtualized infrastructure and the cloud, is now open source.

Weathervane is a performance benchmarking tool developed at VMware.  It lets you assess the performance of your virtualized or cloud environment by driving a load against a realistic application and capturing relevant performance metrics.  You might use it to compare the performance characteristics of two different environments, or to understand the performance impact of some change in an existing environment.

Weathervane is very flexible, allowing you to configure almost every aspect of a test, and yet is easy to use thanks to tools that help prepare your test environment and a powerful run harness that automates almost every aspect of your performance tests.  You can typically go from a fresh start to running performance tests with a large multi-tier application in a single day.

Weathervane supports a number of advanced capabilities, such as deploying multiple independent application instances, deploying application services in containers, driving variable loads, and allowing run-time configuration changes for measuring elasticity-related performance metrics.

Weathervane has been used extensively within VMware, and is now open source and available on GitHub at https://github.com/vmware/weathervane.

The rest of this blog gives an overview of the primary features of Weathervane.

Continue reading

vCenter 6.5 Performance: what does 6x mean?

At the VMworld 2016 Barcelona keynote, CTO Ray O’Farrell proudly presented the performance improvements in vCenter 6.5. He showed the following slide:


Slide from Ray O’Farrell’s keynote at VMworld 2016 Barcelona, showing 2x improvement in scale from 6.0 to 6.5 and 6x improvement in throughput from 5.5 to 6.5.

As a senior performance engineer who focuses on vCenter, and as one of the presenters of VMworld Session INF8108 (listed in the top-right corner of the slide above), I have received a number of questions regarding the “6x” and “2x scale” labels in the slide above. This blog is an attempt to explain these numbers by describing (at a high level) the performance improvements for vCenter in 6.5. I will focus specifically on the vCenter Appliance in this post.

Continue reading

SQL Server VM Performance with VMware vSphere 6.5

Achieving optimal SQL Server performance on vSphere has been a constant focus here at VMware; I've published past performance studies with vSphere 5.5 and 6.0, which showed excellent performance up to the maximum VM size supported at the time.

Since then, there have been quite a few changes! While this study uses a similar test methodology, it features an updated hypervisor (vSphere 6.5), database engine (SQL Server 2016), OLTP benchmark (DVD Store 3), and CPUs (Intel Xeon v4 processors with 24 cores per socket, codenamed Broadwell-EX).

Continue reading