The ways in which we use, design, deploy, and evaluate the performance of large-scale web applications have changed significantly in recent years. These changes have been driven by the increase in computing capacity and flexibility provided by virtualized and cloud-based computing infrastructures. The majority of these changes are not reflected in current web-application benchmarks.
Zephyr is a new web-application benchmark we have been developing as part of our work on optimizing the performance of VMware products for the next generation of cloud-scale applications. The goal of the Zephyr project has been to develop an application-level benchmark that captures the key characteristics of the workloads, design paradigms, deployment architectures, and performance metrics of the next generation of large-scale web applications. As we approach the initial release of Zephyr, we are starting to use it to understand performance across our product range. In this post, we will give an overview of Zephyr that will provide context for the performance results that we will be writing about over the coming months.
There have been many changes in usage patterns and development practices for large-scale web applications. The design and development of Zephyr has been driven by the goal of capturing these changes in a highly scalable benchmark that includes these key aspects:
- The effect of increased user interactivity and rich web interfaces on workload patterns
- New design patterns for decoupled and asynchronous services
- The use of multiple data sources for data with varying scalability and consistency requirements
- Flexible architectures that allow for deployment on a wide range of virtual and cloud-based infrastructures
The effect of increased user interactivity and rich web interfaces is one of the most important of these aspects. In current benchmarks, a user is represented by a single thread operating independently from other users. Contrast that with the way we interact with applications as diverse as social media and stock trading. Many user interactions, such as responding to a status update or selling shares of stock, are in direct response to the actions of other users. In addition, the current generation of script-rich web interfaces performs many operations asynchronously without any action from, or even awareness by, the user. Examples include web pages and rich client interfaces that update active friend lists, check for messages, or maintain stock tickers. This leads to a very different model of user behavior than the traditional single-threaded, click-and-think design used by existing benchmarks. As a result, one of the key design goals for Zephyr was to develop both a benchmark application and a workload generator that would allow us to capture the effect of these new workload patterns.
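To make the contrast concrete, here is a minimal sketch (not the actual Zephyr workload generator) of this user model: each simulated user runs a background "ticker" task that polls with no explicit user action, while the user's explicit actions are triggered by another user's bids rather than by an independent think-time loop. All names and timings are illustrative.

```python
import asyncio

async def background_poller(name, interval, stop, log):
    """Models script-driven activity (stock tickers, message checks)
    that runs with no explicit user action."""
    while not stop.is_set():
        log.append((name, "poll"))
        try:
            await asyncio.wait_for(stop.wait(), timeout=interval)
        except asyncio.TimeoutError:
            pass  # no stop signal yet; poll again

async def simulated_user(user_id, bid_event, log):
    """A user whose explicit actions are reactions to other users'
    bids, rather than a single-threaded click-and-think loop."""
    stop = asyncio.Event()
    poller = asyncio.create_task(
        background_poller(f"user{user_id}-ticker", 0.01, stop, log))
    for _ in range(3):
        await bid_event.wait()          # woken by another user's bid
        bid_event.clear()
        log.append((f"user{user_id}", "counter-bid"))
    stop.set()
    await poller

async def run_simulation():
    log = []
    bid_event = asyncio.Event()
    user = asyncio.create_task(simulated_user(1, bid_event, log))
    for _ in range(3):                  # a second user places three bids
        await asyncio.sleep(0.03)
        bid_event.set()
    await user
    return log
```

The point of the sketch is structural: a traditional benchmark driver would contain only the think-time loop, while this model interleaves reactive actions with autonomous background activity per user.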
An application-level benchmark typically consists of two main parts: the benchmark application and the workload driver. The application is selected and designed to represent characteristics and technology choices that are typical of a certain class of applications. The workload driver interacts with the benchmark application to simulate the behavior of typical users of the application. It also captures the performance metrics that are used to quantify the performance of the application/infrastructure combination. Some benchmarks, including Zephyr, also provide a run harness that assists in the set-up and automation of benchmark runs.
Zephyr’s benchmark application is LiveAuction, which is a web application for managing and hosting real-time auctions. An auction hosted by LiveAuction consists of a number of items that will be placed up for bid in a set order. Users are given only a limited time to bid before an item is sold and the next item is placed up for bid. When an item is up for bid, all users attending the auction are presented with a description and image of the item. Users see and respond to bids placed by other users. LiveAuction can support thousands of simultaneous auctions with large numbers of active users, with each user possibly attending multiple, simultaneous auctions. The figure below shows the browser application used to interact with the LiveAuction application. This figure shows the bidding screen for a user who is attending two auctions. The current item, bid, and bid status for each auction are updated in real-time in response to bids placed by other users.
In addition to managing live auctions, LiveAuction provides auction and item search, profile management, historical data queries, image management, auction management, and other services that would be required by a user of the application.
LiveAuction uses a scalable architecture that allows deployments to be easily sized for a large range of user loads. A full deployment of LiveAuction includes a wide variety of support services, such as load-balancing, caching, and messaging servers, as well as relational, NoSQL, and filesystem-based data stores supporting scalability for data with a variety of consistency requirements. The figure below shows a full deployment of LiveAuction and the Zephyr workload driver.
The following is a brief description of the role played by each tier.
TCP Load Balancers: The simulated users on the workload driver address the application through a set of IP addresses mapped to the application’s external hostname. The TCP load balancers jointly manage these IP addresses to ensure that they remain available in the event of a failure. The TCP load balancers distribute the load across the web servers while maintaining SSL/TLS session affinity.
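As a rough illustration of this tier (server names and addresses are hypothetical, and this is not Zephyr's shipped configuration), a TCP-mode load balancer with source-IP stickiness for SSL/TLS session affinity could look like the following HAProxy fragment:

```haproxy
# Hypothetical HAProxy fragment: TCP-mode balancing with source-IP
# stickiness so a client's SSL/TLS session stays on one web server.
frontend liveauction_https
    bind 192.0.2.10:443
    mode tcp
    default_backend web_servers

backend web_servers
    mode tcp
    balance source            # hash of client IP keeps session affinity
    server web1 10.0.0.11:443 check
    server web2 10.0.0.12:443 check
```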
Messaging Servers: The application nodes use the messaging backbone to distribute work and state-change information regarding active auctions.
Web Servers: The web servers terminate SSL, serve static content, act as load-balancing reverse proxies for the application servers, and provide a proxy cache for application content, such as images returned by the application servers.
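The three web-server roles described above can be sketched in an nginx configuration fragment like the one below. The paths, addresses, and cache names are illustrative assumptions, not Zephyr's actual settings.

```nginx
# Hypothetical nginx fragment: SSL termination, static content,
# reverse-proxy load balancing, and a proxy cache for app content.
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

upstream app_servers {
    server 10.0.0.21:8080;
    server 10.0.0.22:8080;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/liveauction.crt;
    ssl_certificate_key /etc/nginx/liveauction.key;

    location /static/ {
        root /var/www/liveauction;      # static content served directly
    }

    location / {
        proxy_pass http://app_servers;  # load-balance across app servers
        proxy_cache app_cache;          # cache content such as images
    }
}
```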
Application Servers: The application servers run Java servlet containers in which the application services are deployed. The LiveAuction application services use a stateless implementation with a RESTful interface that simplifies scaling.
Relational Database: The relational database is used for all data that is involved in transactions. This includes user account information, as well as auction, item, and high-bid data.
NoSQL Data Server: The NoSQL Document Store is used to store image metadata as well as activity data such as auction attendance information and bid records. It can also be used to store uploaded images. Using the NoSQL store as an image store allows the application to take advantage of its sharding capabilities to easily scale the I/O capacity for image storage.
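As a hedged illustration of the sharding capability mentioned above (the database and collection names are hypothetical, not Zephyr's actual schema), hashed sharding of an image collection in the MongoDB shell could look like:

```javascript
// Hypothetical mongosh commands; names are illustrative only.
sh.enableSharding("liveauction")
// A hashed shard key on _id spreads image documents, and therefore
// image I/O, evenly across the shards.
sh.shardCollection("liveauction.images", { _id: "hashed" })
```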
File Server: The file server is used exclusively to store item images uploaded by users. Note that the file server is optional, as the images can be stored and served from the NoSQL document store.
Zephyr currently includes configuration support for deploying LiveAuction using the following services:
- Virtual IP Address Management: Keepalived
- TCP Load Balancer: HAProxy
- Web Server: Apache Httpd and Nginx
- Application Server: Apache Tomcat with Ehcache for in-memory caching
- Messaging Server: RabbitMQ
- Relational Database: MySQL and PostgreSQL
- NoSQL Data Store: MongoDB
- Network Filesystem: NFS
Additional implementations will be supported in future releases.
Zephyr can be deployed with different subsets of the infrastructure and application services. For example, the figure below shows a minimal deployment of Zephyr with a single application server and the supporting data services. In this configuration, the application server performs the tasks handled by the web server in a larger deployment.
The Zephyr workload driver also monitors quality-of-service (QoS) metrics for both the LiveAuction application and the overall workload. The application-level QoS requirements are based on the 99th percentile response-times for the individual operations. An operation represents a single action performed by a user or embedded script, and may consist of multiple HTTP exchanges. The workload-level QoS requirements define the required mix of operations that must be performed by the users during the workload’s steady state. This mix must be consistent from run to run in order for the results to be comparable. In order for a run of the benchmark to pass, all QoS requirements must be satisfied.
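The pass/fail logic for the application-level QoS requirements can be sketched as follows. This is a simplified illustration using a nearest-rank percentile; the sample values and limits are invented for the example and are not Zephyr's actual thresholds.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value that is
    greater than or equal to p percent of all samples."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

def operation_passes(response_times_ms, limit_ms):
    """One operation passes if its 99th-percentile response time
    is within that operation's limit."""
    return percentile(response_times_ms, 99) <= limit_ms

def run_passes(per_operation):
    """A run passes only if every operation meets its QoS limit.
    per_operation maps operation name -> (samples_ms, limit_ms)."""
    return all(operation_passes(samples, limit)
               for samples, limit in per_operation.values())
```

Note that a p99 criterion is deliberately sensitive to tail latency: a single slow response in a hundred is enough to push an operation over its limit, and one failing operation fails the whole run.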
Zephyr also includes a run harness that automates most of the steps involved in configuring and running the benchmark. The harness takes as input a configuration file that describes the deployment configuration, the user load, and many service-specific tuning parameters. The harness is then able to power on virtual machines, configure and start the various software services, deploy the software components of LiveAuction, run the workload, and collect the results, as well as the log, configuration, and statistics files from all of the virtual machines and services. The harness also manages the tasks involved in loading and preparing the data in the data services before each run.
Scaling to large deployments is a key goal of Zephyr, so we will conclude with some initial scalability data that shows how well we are meeting that goal. There are many possible ways to scale up a deployment of LiveAuction. For the sake of providing a straightforward comparison, we will focus on scaling out the number of application server instances in an otherwise fixed deployment configuration. The CPU utilization of the application server is typically the performance bottleneck in a well-balanced LiveAuction deployment.
The figure below shows the logical layout of the VMs and services in this deployment. Physically, all VMs reside on the same network subnet on the vSphere hosts, which are connected by a 10Gb Ethernet switch.
The VMs in the LiveAuction deployment were distributed across three VMware vSphere 6 hosts. Table 1 gives the hardware details of the hosts.
| Host Name | Host Vendor/Model | Processors | Memory |
|-----------|-------------------|------------|--------|
| Host1 | Dell PowerEdge R720 | Intel® Xeon® CPU E5-2690 @ 2.90GHz, 8 Core, 16 Thread | |
| Host2 | Dell PowerEdge R720 | Intel® Xeon® CPU E5-2690 @ 2.90GHz, 8 Core, 16 Thread | |
| Host3 | Dell PowerEdge R720 | Intel® Xeon® CPU E5-2680 @ 2.70GHz, 8 Core, 16 Thread | |
Table 1. vSphere 6 hosts for LiveAuction deployment
Table 2 shows the configuration of the VMs, and their assignment to vSphere hosts. As the goal of these tests was to examine the scalability of the LiveAuction application, and not the characteristics of vSphere 6, we chose the VM sizing and assignment in part to avoid using more virtual CPUs than physical cores. While we did some tuning of the overall configuration, we did not necessarily obtain the optimal tuning for each of the service configurations. The configuration was chosen so that the application server was the bottleneck as far as possible within the restrictions of the available physical servers. In future posts, we will examine the tuning of the individual services, tradeoffs in deployment configurations, and best practices for deploying LiveAuction-like applications on vSphere.
| Service | Host | VM vCPUs (each) | VM Memory |
|---------|------|-----------------|-----------|
| Nginx 1, 2, and 3 | Host3 | 2 | 8GB |
| Tomcat 1, 3, 5, 7, and 9 | Host1 | 2 | 8GB |
| Tomcat 2, 4, 6, 8, and 10 | Host2 | 2 | 8GB |
| MongoDB 1 and 3 | Host2 | 1 | 32GB |
| MongoDB 2 and 4 | Host1 | 1 | 32GB |
Table 2. Virtual machine configuration
Figure 5 shows the peak load that can be supported by this deployment configuration as the number of application servers is scaled from one to ten. The peak load supported by a configuration is the maximum load at which the configuration can satisfy all of the QoS requirements of the workload. The dotted line shows linear scaling of the maximum load extrapolated from the single application server result. The actual scaling is essentially linear up to six application-server VMs. At that point, the overall utilization of the physical servers starts to affect the ability to maintain linear scaling. With seven application servers, the web-server tier becomes a scalability bottleneck, but there are not sufficient CPU cores available to add additional web servers.
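The comparison against the dotted line can be expressed as a simple scaling-efficiency calculation. The peak-load numbers below are invented for illustration only (they are not Zephyr's measured results); the structure of the calculation is the point.

```python
def linear_extrapolation(single_server_peak, n_servers):
    """The dotted reference line: perfect linear scaling
    extrapolated from the one-server peak load."""
    return single_server_peak * n_servers

def scaling_efficiency(measured_peaks):
    """measured_peaks[i] is the peak passing load with i+1 app
    servers. Efficiency is measured peak divided by the linear
    extrapolation from the one-server result."""
    base = measured_peaks[0]
    return [peak / linear_extrapolation(base, i + 1)
            for i, peak in enumerate(measured_peaks)]

# Illustrative numbers only: near-linear up to six servers,
# falling off as host utilization and the web tier saturate.
peaks = [1000, 1990, 2970, 3940, 4900, 5850, 6300]
efficiency = scaling_efficiency(peaks)
```

An efficiency near 1.0 means the configuration is tracking the dotted line; a drop at higher server counts signals a bottleneck elsewhere in the deployment, as the web tier does here at seven application servers.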
It would require additional infrastructure to determine how far the linear scaling could be extended. However, the current results provide strong evidence that with sufficient resources, Zephyr will be able to scale to support very large loads representing large numbers of users.
The discussion in this post has focused on the use of Zephyr as a traditional single-application benchmark with a focus on throughput and response-time performance metrics. However, that only scratches the surface of our future plans for Zephyr. We are currently working on extending Zephyr to capture more cloud-centric performance metrics. These fall into two broad categories that we call multi-tenancy metrics and elasticity metrics. Multi-tenancy metrics capture the performance characteristics of a cloud-deployed application in the presence of other applications co-located on the same physical resources. The relevant performance metrics include isolation and fairness along with the traditional throughput and response-time metrics. Elasticity metrics capture the performance characteristics of self-scaling applications in the presence of changing loads. It is also possible to study elasticity metrics in the context of multi-tenancy environments, thus examining the impact of shared resources on the ability of an application to scale in a timely manner to satisfy user demands. These are all exciting new areas of application performance, and we will have more to say about these subjects as we approach Zephyr 1.0.