Product Announcements

VMware Virtual SAN Performance Testing – Part I

As people begin to assess, design, build, and deploy VMware Virtual SAN based solutions for the first time, there is great curiosity about the performance to expect, and the results one can achieve, when utilizing Virtual SAN in specific configurations. Most customers run some type of benchmark in proof-of-concept environments to gauge the performance of VMware Virtual SAN in their environment. In working with customers and partners, we have seen a variety of methods used to benchmark and analyze Virtual SAN performance. To ease this process, we are developing guidance on how best to perform performance testing on Virtual SAN. This guidance will be presented in this four-part series.

Virtual SAN Performance Design Principles

Before we delve into performance testing methodology and configuration recommendations, let's first discuss some Virtual SAN performance concepts. Virtual SAN was purpose-built to be an ideal platform for server consolidation of virtualized workloads. A key design principle of Virtual SAN is to optimize for aggregate, consistent performance in a dynamic environment over the localized, individual performance simulated by many artificial benchmark tests. This adheres to the principle of vSphere and Virtual SAN enabling hyper-converged environments. One way in which Virtual SAN does this is by minimizing the I/O blender effect in virtualized environments.

The I/O blender effect is caused by multiple virtual machines simultaneously sending I/O to a storage subsystem, causing sequential I/O to become highly randomized. This can increase latency in your storage solution. Virtual SAN mitigates this through a hybrid design: a flash acceleration layer that acts as a read cache and write buffer, combined with spinning disk for capacity. In a properly sized Virtual SAN solution, the majority of reads and all writes are served by flash, allowing for excellent performance in environments with highly random I/O. When data does need to be destaged from the flash acceleration layer to spinning disk, the destaging operation consists predominantly of sequential I/O, efficiently taking advantage of the full I/O capability of the underlying spinning disk.

A second design principle used in optimizing Virtual SAN for aggregate consistent performance is not depending on data locality to guarantee performance. This concept is reviewed in-depth in the Understanding Data Locality in VMware Virtual SAN whitepaper. This is key as vSphere balances compute resources in an automated fashion through Distributed Resource Scheduler (DRS) initiated vMotion operations.

Selecting Your Proof-of-Concept Testbed

When you plan and design your Virtual SAN solution, start by choosing a solution building block that maps most closely to your expected performance requirements. Whether you build your own Virtual SAN solution (with guidance from the Virtual SAN Hardware Quick Reference Guide as a starting point), select a vendor-specific Ready Node option, or use a Virtual SAN based EVO:RAIL solution, the target platform depends on what fits your environment best. When building your own solution, you must adhere to the VMware Compatibility Guide (VCG) for Virtual SAN. This is the hardware compatibility list that acts as the source of truth defining the supported hardware with which you can build a Virtual SAN solution.

Performance Testing Methodology

The methodology described in this series can be utilized for any storage subsystem supported by vSphere, whether it be VMware Virtual SAN, another scale-out storage solution, or a traditional array. As we get into analyzing performance results, specific Virtual SAN tools (such as Virtual SAN Observer) will be recommended, used in combination with vSphere based tools such as esxtop and vscsiStats.
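For reference, Virtual SAN Observer is typically launched from within RVC on the vCenter Server. A hedged sketch of the workflow; the vCenter address, cluster inventory path, and port below are examples, not values from this post:

```shell
# Connect to RVC on the vCenter Server (credentials/host are placeholders).
rvc administrator@vcenter.example.com

# Then, inside the RVC session, start Observer's live web UI against the
# example cluster path; adjust the path to match your own inventory.
# vsan.observer ~/computers/VSAN-Cluster --run-webserver --force

# Browse to https://<vcenter>:8010 to view the live performance graphs.
```

Because Observer polls the cluster continuously, run it from an out-of-band location rather than on a host under test.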

For optimal performance during performance tests, we recommend 10 GbE uplinks that operate at line rate. For more information on network recommendations, see the recently published Virtual SAN Network Design Guide.

The next step is sizing your Virtual SAN solution adequately. To assist with sizing, we have developed the Virtual SAN sizing tool. For optimal performance, we recommend that the active working set of your virtual machines fit into the flash acceleration layer of Virtual SAN. But how do you measure the “active working set” size of a VM? From our experience, a good conservative estimate is 10% of your used capacity, not taking into account capacity utilized for the Failures To Tolerate redundancy policy.
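As a worked example of that 10% rule of thumb (the used-capacity figure below is an assumption for illustration, not a recommendation):

```shell
# Illustrative flash-tier sizing from the 10% working-set rule of thumb.
# 4000 GB of used VM capacity (before FTT copies) is an assumed figure.
used_capacity_gb=4000
working_set_gb=$(( used_capacity_gb * 10 / 100 ))   # conservative 10% estimate
echo "Size the flash tier to hold at least ${working_set_gb} GB"
```

Remember this estimate is against used capacity only; the extra replica capacity created by Failures To Tolerate does not count toward the working set.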

Performance Testing Tools

There are a number of tools that can be utilized to test the performance of storage subsystems. For testing scale-out storage systems on vSphere, we recommend VMware I/O Analyzer as a standard tool. I/O Analyzer is supplied as an easy-to-deploy virtual appliance and automates storage performance testing and analysis through a unified web interface that can be used to configure and deploy storage tests and view graphical results for those tests. I/O Analyzer can run either Iometer-based workloads or application trace replays. In this post we will focus on using I/O Analyzer with Iometer, and will delve into trace replay usage in parts II and III of this blog post series.

If you are evaluating another tool to test Virtual SAN, we recommend one that can issue multiple outstanding I/Os (OIO). For this reason, tools such as dd and SIO are not recommended for performance testing of Virtual SAN, as they can only be configured to run tests that sequentially issue a single OIO.
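The reasoning follows Little's Law: achievable IOPS is roughly outstanding I/Os divided by per-I/O latency, so a single-OIO tool hits a low ceiling regardless of how fast the storage is. A quick sketch; the 1 ms latency figure is an assumption for illustration:

```shell
# Little's Law: IOPS ceiling ~= outstanding I/Os / average latency.
latency_ms=1                                   # assumed average I/O latency
one_oio_iops=$(( 1 * 1000 / latency_ms ))      # dd/SIO style, 1 OIO
many_oio_iops=$(( 32 * 1000 / latency_ms ))    # Iometer-style, 32 OIO
echo "1 OIO ceiling: ${one_oio_iops} IOPS; 32 OIO ceiling: ${many_oio_iops} IOPS"
```

With only one OIO in flight, the test measures round-trip latency, not the aggregate throughput the cluster can deliver.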

SPBM Policy Configuration for Performance Testing

Virtual SAN supports the configuration of per-object policies that impact performance and availability. The policies applicable to performance testing include:

  • Flash Read Cache Reservation – In general we do not recommend utilizing this policy during performance testing of server workloads. Any reservation will reserve a portion of the 70% allocation of flash read cache to an object, whether it is needed or not. We recommend letting the Virtual SAN algorithms handle read cache reservation automatically. This policy is utilized by default for Horizon 6 with View when utilizing linked clones, but only for the Horizon replica object.
  •  Stripe Width – Specifically for performance testing, this policy may be used if you are utilizing a single virtual machine/VMDK to test performance. This allows that VMDK to be split into multiple components, and allows those components to be spread across your cluster. The recommendation is to set the stripe width equal to the number of nodes in the cluster, up to the maximum of 12. If you are performing scale-out performance tests with multiple virtual machines or multiple VMDKs, this policy is not recommended.
  •  Object Space Reservation – This policy is recommended to encourage even distribution of components throughout a Virtual SAN cluster, by forcing the VSAN algorithms to take into account the full size of the object when making placement decisions. Using this policy is similar to a lazy-zeroed thick disk format, and has no impact on performance beyond its influence on component placement and distribution.
  •  Failures To Tolerate – We recommend keeping the Failures To Tolerate setting that adheres to the availability needs of your environment. The default is FTT=1.
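For reference, default per-class VSAN policy values can also be inspected from the ESXi shell with esxcli. A hedged sketch only; the FTT and stripe width values shown are examples, and per-VM policies are normally managed through SPBM in the vSphere Web Client rather than here:

```shell
# Show the default VSAN policy for each object class (vdisk, vmnamespace, etc.).
esxcli vsan policy getdefault

# Example only: set a default vdisk policy of FTT=1 with stripe width 12.
esxcli vsan policy setdefault -c vdisk \
  -p '(("hostFailuresToTolerate" i1) ("stripeWidth" i12))'
```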

Iometer Testing Parameters

In the “How to Super Charge Your Virtual SAN Cluster” blog post, we mentioned two differing Iometer configurations. Below are those configurations and the rationale behind them. When testing your Virtual SAN solution, we recommend these two test workloads as a baseline to level-set the performance of your Virtual SAN solution and sanity-check the configuration and environment. Once these are complete, you may choose further testing using different Iometer configurations, application trace files, or specific application workloads, based on the application requirements of your environment and the level of testing you would like to perform.

  • 70/30 R/W (80% random) – a common industry-standard I/O profile

Recommended test duration: 2 hours; disregard results from the first hour to provide a warm-up period and achieve steady-state performance.

  • 100% R/W (100% random) – to achieve maximum performance and maximum IOPS, sanity-check SSD performance, and stress and validate the network

Recommended test duration: 1 hour; disregard results from the first 30 minutes to allow warm-up and achieve steady-state performance.

  • OIO per host – We recommend configuring workers so as not to exceed an aggregate of 256 OIO per host in scale-out testing. The optimal number of OIO per worker will differ depending on the number of simultaneous workers per host you choose to utilize.
  • Block Size – We recommend configuring the block size to mimic the predominant application profile of your environment. This is typically 4 KB in most virtualized environments.
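Putting those two recommendations together, the per-worker OIO setting follows directly from the 256-OIO-per-host ceiling. A quick sketch; the worker count below is an assumption, not a recommendation from this post:

```shell
# Derive the per-worker OIO setting from the 256-OIO-per-host ceiling.
max_oio_per_host=256
workers_per_host=8                                  # assumed Iometer workers per host
oio_per_worker=$(( max_oio_per_host / workers_per_host ))
echo "Configure each worker with ${oio_per_worker} outstanding I/Os"
```

If you double the workers per host, halve the OIO per worker so the aggregate stays at or below 256.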

I/O Analyzer for Scale-Out Testing

I/O Analyzer allows for easy scale-out testing of a Virtual SAN cluster. One “controller” I/O Analyzer VM can schedule tests on up to 512 I/O Analyzer VMs (called workers) across up to 32 hosts. We recommend reading the I/O Analyzer Installation and User’s Guide before deploying the appliance. A few items specific to Virtual SAN testing should be noted:

  • For scale-out testing of large Virtual SAN clusters, configure the I/O Analyzer controller virtual machine with 2 vCPUs and 4 cores, and increase the memory to 32 GB. We recommend placing the controller VM in a separate host or management cluster, apart from the Virtual SAN cluster that will house the I/O Analyzer worker VMs. Ideally, this management cluster will also co-locate an out-of-band vCenter Server Appliance to run RVC and Virtual SAN Observer for performance analysis.
  • The I/O Analyzer User’s Guide says to choose Thick Provision Eager Zeroed when deploying the appliance template, but this does not apply to Virtual SAN. Virtual SAN does not support eager-zeroed thick provisioning, as all objects are thin provisioned. Even when using the Object Space Reservation policy, the actual object behaves like a lazy-zeroed format, so there is no pre-allocation or warming of the bits.
  • Enable auto-login on the I/O Analyzer worker VM – When an I/O Analyzer controller or worker reboots, it cannot be used until a user logs in. To enable auto-login, use the following procedure on the initial worker before cloning additional worker I/O Analyzer VMs.
  1. Open a terminal on the I/O Analyzer worker VM and run the following command: sed -i 's/AUTOLOGIN=""/AUTOLOGIN="root"/g' /etc/sysconfig/displaymanager
  2. Verify auto-login by running the following command: grep 'AUTOLOGIN' /etc/sysconfig/displaymanager
  3. Expect to see AUTOLOGIN="root"
  4. Verify Auto-Login with a reboot
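If you want to confirm the substitution before touching the real configuration file, the steps above can be dry-run against a throwaway copy:

```shell
# Dry-run the auto-login edit on a temp file instead of
# /etc/sysconfig/displaymanager, then confirm the resulting value.
cfg=$(mktemp)
echo 'AUTOLOGIN=""' > "$cfg"
sed -i 's/AUTOLOGIN=""/AUTOLOGIN="root"/g' "$cfg"
result=$(grep 'AUTOLOGIN' "$cfg")
echo "$result"
rm -f "$cfg"
```

Once the grep shows AUTOLOGIN="root", run the same sed command against the real file on the worker VM.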

In the next post in this series, we will delve into utilizing application trace replays to allow for performance testing of real world applications. So stay tuned and happy performance testing!