VMware Virtual SAN Performance Testing – Part III

In the previous VMware Virtual SAN Performance Testing blog post, we reviewed the benefits of running performance tests with I/O trace files rather than synthetic workload tools such as Iometer to more accurately characterize the performance of a Virtual SAN cluster. The VMware I/O Analyzer includes pre-created trace files of specific application profiles that allow you to quickly perform scale-out testing with a mix of industry-standard workloads. But what if you want to characterize the performance of your existing vSphere virtualized environment within a new Virtual SAN configuration? This is where custom I/O trace replays can be useful.

Utilizing Custom Application Trace files

It is possible to capture the storage performance characteristics of real-time workloads on a vSphere host and replay the captured workloads for Virtual SAN scale-out testing. To do this, you can use the vscsiStats utility to create a trace of any application workload running within a vSphere virtual machine and then upload it to the VMware I/O Analyzer trace repository for replay. Below are the necessary steps.

Capturing I/O Trace files

To collect a trace of an application’s I/O workload using the vscsiStats utility on a vSphere host, follow these steps:

  1. Connect to your reference vSphere host with SSH (consult the following KB on how to enable SSH on a vSphere host).
  2. Reset the statistics by typing in the ESXi shell: vscsiStats -r
  3. Start collecting statistics and create a unique ID: vscsiStats -s -t -w <worldId> -i <handleId> (where <worldId> is the world ID of the virtual machine in which you will be running the workload and <handleId> is the identifier of the specific virtual disk you will be testing).
     NOTE: You can find <worldId> and <handleId> with the vscsiStats -l command. You can list the additional options of the vscsiStats utility with the vscsiStats -h command.
  4. Using the unique ID generated in the previous step, configure ESXi to capture the statistics in a disk file: logchannellogger <unique-id> <temporary-file-name>
  5. Run your application within the virtual machine identified by <worldId>.
  6. After the application run completes (or the trace collection window ends), return to the ESXi shell and stop the logchannellogger process by typing Ctrl-C.
  7. Stop the statistics collection: vscsiStats -x -w <worldId> -i <handleId>
  8. Convert the binary trace file to a .csv file: vscsiStats -e <temporary-file-name> > <trace-file-name.csv> (for example: vscsiStats -e testvm01 > testvm01.csv)
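The steps above can be sketched as a small shell function for repeat captures. This is a hedged sketch: the function name and arguments are hypothetical, vscsiStats and logchannellogger exist only in the ESXi shell (so the function prints a message and returns early anywhere else), and the logchannellogger step is left as a comment because it must run while the workload is active and is stopped interactively.

```shell
# Hypothetical wrapper around the capture steps above. vscsiStats and
# logchannellogger exist only in the ESXi shell, so the function bails
# out with a message when run anywhere else.
capture_trace() {
    world_id=$1    # world ID of the target VM (from: vscsiStats -l)
    handle_id=$2   # handle ID of the virtual disk under test (vscsiStats -l)
    trace_bin=$3   # temporary file that will hold the binary trace

    if ! command -v vscsiStats >/dev/null 2>&1; then
        echo "vscsiStats not found: run this in the ESXi shell"
        return 0   # nothing to do outside the ESXi shell
    fi

    vscsiStats -r                                    # reset statistics
    vscsiStats -s -t -w "$world_id" -i "$handle_id"  # start the trace; note the unique ID it prints
    # While the workload runs inside the VM, capture the trace channel to
    # disk, then stop the logger with Ctrl-C when the run completes:
    #   logchannellogger <unique-id> "$trace_bin"
    vscsiStats -x -w "$world_id" -i "$handle_id"     # stop statistics collection
    vscsiStats -e "$trace_bin" > "${trace_bin}.csv"  # convert the binary trace to CSV
}
```

Run outside ESXi, the function simply reports that vscsiStats is unavailable; on a host, it performs the reset/start/stop/convert sequence around your workload run.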

Using the above procedure, the I/O workload is captured in a binary file and converted to a .csv file. This .csv file can now be used by I/O Analyzer. You can copy the .csv file off the vSphere host with WinSCP to your local machine and then upload it to I/O Analyzer.
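If you prefer the command line to WinSCP, scp works as well once SSH is enabled on the host. The host name and trace path below are placeholders, not values from this setup; adjust them for your environment.

```shell
# Build the copy command to run from your local machine (host name and
# trace path are placeholders; adjust for your environment).
ESXI_HOST="esxi01.example.com"
TRACE_CSV="/tmp/testvm01.csv"
SCP_CMD="scp root@${ESXI_HOST}:${TRACE_CSV} ."
echo "$SCP_CMD"
```

Running the printed command pulls the .csv into the current directory on your workstation, ready for upload to the I/O Analyzer trace repository.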

Uploading to I/O Analyzer

Custom I/O trace files can be stored and analyzed using I/O Analyzer’s trace repository. To upload an I/O trace file to I/O Analyzer, follow the procedure documented in Part II of this blog series. Once the file is uploaded to the I/O Analyzer trace repository, you can view the characteristics of your captured trace from within I/O Analyzer.
Below is an example of a trace replay captured on a vSphere host running DVD Store backed by a SQL database. Once the trace is uploaded to I/O Analyzer, its characteristics can be viewed via a generated trace graph. The details of the replay below show that it requires a 29 GB Hard Drive 2, runs for 12 minutes, and has a 54/46 read/write I/O ratio.

Parameter                                     Value
Minimum Disk Size                             28.87 GB
ESX build                                     5.5 U2
DB disk                                       35 GB EZT VMDK
OS disk and log disk                          separate physical disk
Orders per minute achieved                    2,462
Trace Duration                                12 minutes
Trace Requests                                157,334
Trace Read / Write %                          54% / 46%
Approx. Working-set Size                      3.7 GB
Total Number of Cache Hits                    44,310 or 28%
Number of 4K-aligned IOs                      132,840
Number of Reads                               85,337 or 54%
Number of Read Cache Hits                     525 or 0.62%
Number of Read hits after a previous Read     505 or 0.59%
Number of Read hits after a previous Write    20 or 0.02%
Number of Writes                              71,997 or 46%
Number of Write Cache Hits                    43,785 or 61%
Number of Write hits after a previous Read    43,629 or 60%
Number of Write hits after a previous Write   43,629 or 0.21%
Logical Block Numbers                         2832-60548980
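The read/write split can be cross-checked from the request counts in the table above with a line of shell arithmetic. Writes are 45.8% of requests, which rounds to the 54/46 ratio quoted in the text but truncates to 45 under integer division:

```shell
# Cross-check the read/write split from the trace request counts in the table.
READS=85337
WRITES=71997
TOTAL=$((READS + WRITES))
echo "total requests: $TOTAL"             # 157334, matching "Trace Requests"
echo "read %:  $((100 * READS / TOTAL))"  # 54 (54.2% exactly)
echo "write %: $((100 * WRITES / TOTAL))" # 45 (45.8%, i.e. ~46% rounded)
```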

[Figure: DVD Store trace characterization graph from I/O Analyzer]

Trace Replay Caveats

While I/O trace replay generally characterizes the performance needs of a workload more accurately than synthetic I/O tools such as Iometer, it is still not as accurate as running the full application natively. This is because a real application issues I/Os based on how quickly its previous I/Os complete. So if an I/O trace was captured on a storage array with lower performance than Virtual SAN, Virtual SAN will probably demonstrate better performance during replay, but the replay will not necessarily reproduce the same I/O pattern the application would generate natively.

When weighing the effort needed to build a performance-testing testbed against the accuracy of the results, I/O trace files are a good middle ground between synthetic I/O generation tools and full-blown application build-outs for scale-out storage performance testing. In the next post in this series, we will look at some keys to analyzing Virtual SAN performance results.

  1. Virtual SAN Performance Testing Part I – Utilizing I/O Analyzer with Iometer
  2. Virtual SAN Performance Testing Part II – Utilizing I/O Analyzer with Application Trace Files
  3. Virtual SAN Performance Testing Part III – Utilizing Custom Application Trace files
  4. Virtual SAN Performance Testing Part IV – Analyzing Performance Results