
Tag Archives: vsan

vSAN Performance Diagnostics Now Shows “Specific Issues and Recommendations” for HCIBench

By Amitabha Banerjee and Abhishek Srivastava

The vSAN Performance Diagnostics feature, which helps customers optimize their benchmarks or their vSAN configurations to achieve the best possible performance, was first introduced in vSphere 6.5 U1. vSAN Performance Diagnostics is a "cloud-connected" feature and requires participation in the VMware Customer Experience Improvement Program (CEIP). Performance metrics and data are collected from the vSAN cluster and sent to the VMware Cloud, where they are analyzed; the results are sent back for display in the vCenter Client. These results are shown as performance issues, where each issue includes a description of the problem and a link to a KB article.

In this blog, we describe how vSAN Performance Diagnostics can be used with HCIBench and show the new feature in vSphere 6.7 U1 that provides HCIBench specific issues and recommendations.

What is HCIBench?

HCIBench (Hyper-converged Infrastructure Benchmark) is a standard benchmark that vSAN customers can use to evaluate the performance of their vSAN systems. HCIBench is an automation wrapper around the popular and proven VDbench open source benchmark tool that makes it easier to automate testing across an HCI cluster. HCIBench, available as a fling, simplifies and accelerates customer performance testing in a consistent and controlled way.


VMware’s AI-based Performance Tool Can Improve Itself Automatically

PerfPsychic, our AI-based performance analysis tool, improves its accuracy rate from 21% to 91% when debugging vSAN performance issues as it receives more data and training. Better still, PerfPsychic continuously improves itself, and the tuning procedure is automated. Let's examine how we achieve this in the following sections.

How to Improve AI Model Accuracy

Three elements have a huge impact on the training results for deep learning models: the amount of high-quality training data, reasonably configured hyperparameters that control the training process, and sufficient (but practical) training time. In the following examples, we use the same training and testing dataset as we presented in our previous blog.


VMware Speedily Resolves Customer Issues in vSAN Performance Using AI

We in VMware's Performance team create and maintain various tools to help troubleshoot customer issues. Among these is a new tool that uses artificial intelligence to determine storage problems from vast log data far more quickly: what used to take us days now takes seconds. PerfPsychic analyzes storage system performance and finds performance bottlenecks using deep learning algorithms.

Let's examine the benefit the artificial intelligence (AI) models in PerfPsychic bring when we troubleshoot vSAN performance issues. Our trained AI module takes less than 1 second to analyze a vSAN log and pinpoint performance bottlenecks at an accuracy rate of more than 91%. In contrast, when analyzed manually, an SR ticket on vSAN takes a seasoned performance engineer about one week to resolve, with durations ranging from 3 to 14 days. Moreover, AI also beats traditional analysis algorithms, raising the accuracy rate from around 80% to more than 90%.


How to correctly test the performance of Virtual SAN 6.2 deduplication feature

In VMware Virtual SAN 6.2, we introduced several features highly requested by customers, such as deduplication and compression. An overview of this feature can be found in the blog: Virtual SAN 6.2 – Deduplication And Compression Deep Dive.

The deduplication feature adds the most benefit to an all-flash Virtual SAN environment: although SSDs are more expensive than spinning disks, deduplication amortizes that cost by fitting more workloads onto the same SSD capacity. Therefore, our performance testing is performed on an all-flash Virtual SAN cluster with deduplication enabled.

When testing the performance of the deduplication feature for Virtual SAN, we observed the following:

  • Unexpected deduplication ratio
  • High device read latency in the capacity tier, even though the SSD is perfectly fine

In this blog, we discuss the reason behind these two issues and share our testing experience.


VMware Virtual SAN Stretched Cluster Best Practices White Paper

VMware Virtual SAN 6.1 introduced the concept of a stretched cluster, which allows a Virtual SAN customer to configure two geographically separated sites while synchronously replicating data between them. A technical white paper about Virtual SAN stretched cluster performance has now been published. The paper provides guidelines on how to get the best performance for applications deployed in a Virtual SAN stretched cluster environment.

The chart below, borrowed from the white paper, compares the performance of a Virtual SAN 6.1 stretched cluster deployment against a regular Virtual SAN cluster without any fault domains. A nine-node Virtual SAN stretched cluster is considered with two different inter-site latency configurations: 1ms and 5ms. The DVD Store benchmark is executed on four virtual machines on each host of the nine-node Virtual SAN stretched cluster. The DVD Store performance metrics of cumulative orders per minute in the cluster, read/write IOPS, and average latency are compared with a similar workload on the regular Virtual SAN cluster. Orders per minute (OPM) is lower by 3% and 6% for the 1ms and 5ms inter-site latency stretched clusters, respectively, compared to the regular Virtual SAN cluster.

Figure 1a.  DVD Store orders per minute in the cluster and guest IOPS comparison

Guest read/write IOPS and latency were also monitored. The read/write mix for the DVD Store workload is roughly 1/3 reads and 2/3 writes. Write latency shows a clear increasing trend as the inter-site latency grows, while read latency is only marginally impacted. As a result, the average latency increases from 2.4ms to 2.7ms and 5.1ms for the 1ms and 5ms inter-site latency configurations, respectively.

Figure 1b.  DVD Store latency comparison

These results demonstrate that the inter-site latency in a Virtual SAN stretched cluster deployment has a marginal performance impact on a commercial workload like DVD Store. More results are available in the white paper.

Virtual SAN 6.0 Performance with VMware VMmark

Virtual SAN is a storage solution that is fully integrated with VMware vSphere. Virtual SAN leverages flash technology to cache data and improve access times to and from the disks. We used VMware's VMmark 2.5 benchmark to evaluate the performance of running a variety of tier-1 application workloads together on Virtual SAN 6.0.

VMmark is a multi-host virtualization benchmark that uses varied application workloads and common datacenter operations to model the demands of the datacenter. Each VMmark tile contains a set of virtual machines running diverse application workloads as a unit of load. For more details, see the VMmark 2.5 overview.

 

Testing Methodology

VMmark 2.5 requires two datastores for its Storage vMotion workload, but Virtual SAN creates only a single datastore. To provide the secondary datastore, a Red Hat Enterprise Linux 7 virtual machine on a separate host acted as an iSCSI target, using Linux-IO Target (LIO).

 

Configuration

Systems Under Test:          8x Supermicro SuperStorage SSG-2027R-AR24 servers
CPUs (per server):           2x Intel Xeon E5-2670 v2 @ 2.50 GHz
Memory (per server):         256 GiB
Hypervisor:                  VMware vSphere 5.5 U2 and vSphere 6.0
Local Storage (per server):  3x 400GB Intel SSDSC2BA40 SSDs, 12x 900GB 10,000 RPM WD Xe SAS drives
Benchmarking Software:       VMware VMmark 2.5.2

 

Workload Characteristics

Storage performance is often measured in IOPS, or I/Os per second. Virtual SAN is a storage technology, so it is worthwhile to look at how many IOPS VMmark is generating.  The most disk-intensive workloads within VMmark are DVD Store 2 (also known as DS2), an E-Commerce workload, and the Microsoft Exchange 2007 mail server workload. The graphs below show the I/O profiles for these workloads, which would be identical regardless of storage type.

 Figure1

The DS2 database virtual machine shows a fairly balanced I/O profile of approximately 55% reads and 45% writes.

Microsoft Exchange, on the other hand, has a very write-intensive load, as shown below.

Figure2

Exchange sees nearly 95% writes, so the main benefit the SSDs provide is to serve as a write buffer.

The remaining application workloads have minimal disk I/Os, but do exert CPU and networking loads on the system.

 

Results

VMmark measures both the total throughput of each workload and the response time. The application workloads consist of Exchange, Olio (a Java workload that simulates Web 2.0 applications and measures their performance), and DVD Store 2. All workloads are driven at a fixed throughput level. A set of workloads is considered a tile, and the load is increased by running multiple tiles. With Virtual SAN 6.0, we could run up to 40 tiles with acceptable quality of service (QoS). Let's look at how each workload performed as the number of tiles increased.

DVD Store

There are 3 webserver frontends per DVD Store tile in VMmark. Each webserver is loaded with a different profile: one is a steady-state workload that runs at a set request rate throughout the test, while the other two are bursty in nature and run a 3-minute and a 4-minute load profile, respectively, every 5 minutes. DVD Store throughput, measured in orders per minute, varies depending on the load of the server; throughput decreases once the server becomes saturated.

Figure3

For this configuration, maximum throughput was achieved at 34 tiles, as shown by the graph above.  As the hosts become saturated, the throughput of each DVD Store tile falls, resulting in a total throughput decrease of 4% at 36 tiles. However, the benchmark still passes QoS at 40 tiles.
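The steady-state and bursty webserver load profiles described above amount to simple periodic on/off schedules. As a rough illustration (our own sketch, with a hypothetical helper name, not part of VMmark):

```python
def burst_active(t_minutes, burst_minutes, period_minutes=5):
    """True while a bursty load profile is in the 'on' phase of its period."""
    return (t_minutes % period_minutes) < burst_minutes

# One bursty webserver runs a 3-minute profile every 5 minutes, the other a
# 4-minute one; the steady-state webserver is simply always on.
for t in (0.5, 3.5, 4.5):
    print(t, burst_active(t, 3), burst_active(t, 4))
```

At t = 3.5 minutes, for example, only the 4-minute profile is still in its burst, which is how the two bursty frontends stay out of phase with each other.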

Olio and Exchange

Unlike DVD Store, the Olio and Exchange workloads operate at a constant throughput regardless of server load, shown in the table below:

Workload   Simulated Users   Load per Tile
Exchange   1000              320-330 Sendmail actions per minute
Olio       400               4500-4600 operations per minute

 

At 40 tiles, the VMmark clients send ~12,000 mail messages per minute, and the Olio webservers serve ~180,000 requests per minute.

As the load increases, the response time of Exchange and Olio increases, which makes them a good demonstration of the end-user experience at various load levels. A response time of over 500 milliseconds is considered to be an unacceptable user experience.

Figure4

As we saw with DVD Store, performance begins to change dramatically after 34 tiles as the cluster becomes saturated. This is mostly seen in the Exchange response time. At 40 tiles, the response time is over 300 milliseconds for the mail server workload, which is still within the 500-millisecond threshold for a good user experience. Olio shows a smaller increase in response time because it is more processor intensive, whereas Exchange depends on both CPU and disk performance.

Looking at Virtual SAN performance, we can get a picture of how much I/O is served by the storage at these load levels.  We can see that reads average around 2000 read I/Os per second:

Figure5

The Read Cache hit rate is 98-99% on all the hosts, so most of these reads are being serviced by the SSDs. Write performance is a bit more varied.

Figure6

We see a range of 5,000-10,000 write IOPS per node due to the write-intensive Exchange workload. Storage is nowhere close to saturation at these load levels: the magnetic disks see little more than 100 I/Os per second, while the SSDs see about 3,000-6,000 I/Os per second. These disks should be able to handle at least 10x this load level. The real bottleneck is CPU usage.
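The benefit of the high Read Cache hit rate noted above can be sketched with a simple weighted-average latency model (the function and the device latencies below are illustrative assumptions, not measurements from these tests):

```python
def effective_read_latency_ms(hit_rate, ssd_latency_ms, hdd_latency_ms):
    """Weighted-average read latency for a hybrid cache/capacity design.

    hit_rate: fraction of reads served by the SSD cache (0.0 to 1.0).
    """
    return hit_rate * ssd_latency_ms + (1.0 - hit_rate) * hdd_latency_ms

# Hypothetical device latencies: 0.2 ms for the SSD, 8 ms for a 10K RPM disk.
print(effective_read_latency_ms(0.99, 0.2, 8.0))  # ~0.28 ms at a 99% hit rate
print(effective_read_latency_ms(0.80, 0.2, 8.0))  # ~1.76 ms at an 80% hit rate
```

Even a modest drop in hit rate multiplies the effective read latency, which is why keeping the working set cached matters so much for a hybrid design.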

Looking at the CPU usage of the cluster, we can see that usage levels off at about 84% at 36 tiles. There is still some headroom, which explains why the Olio response times remain very acceptable.

Figure7

As mentioned above, Exchange performance depends on both CPU and storage. The additional CPU overhead that Virtual SAN imposes on disk I/O makes Exchange more sensitive to server load.

 

Performance Improvements in Virtual SAN 6.0 (vs. Virtual SAN 5.5)

The Virtual SAN 6.0 release incorporates many improvements to CPU efficiency, as well as other improvements. This translates to increased performance for VMmark.

VMmark performance increased substantially when we ran the tests with Virtual SAN 6.0 as opposed to Virtual SAN 5.5. The Virtual SAN 5.5 tests failed to pass QoS beyond 30 tiles, meaning that at least one workload failed to meet the application latency requirement. During the Virtual SAN 5.5 32-tile tests, one or more Exchange clients would report a Sendmail latency of over 500ms, which constitutes a QoS failure. Version 6.0 passed QoS at up to 40 tiles.

Figure8

Not only could more virtual machines be supported on Virtual SAN 6.0, but the throughput of the workloads increased as well. By comparing the VMmark scores (normalized to the 20-tile Virtual SAN 5.5 result), we can see the performance improvement of Virtual SAN 6.0.

Figure9

Virtual SAN 6.0 achieved a performance improvement of 24% while supporting 33% more virtual machines.

 

Conclusion

Using VMmark, we are able to run a variety of workloads to simulate applications in a production environment. We demonstrated that Virtual SAN is capable of achieving good performance running heterogeneous, real-world applications. The cluster of 8 hosts presented here shows good performance in VMmark through 40 tiles: ~12,000 mail messages per minute sent through Exchange, ~180,000 requests per minute served by the Olio webservers, and over 200,000 orders per minute processed on the DVD Store database. Additionally, we measured substantial performance improvements of Virtual SAN 6.0 over Virtual SAN 5.5.

 

VMware View Planner 3.5 and Use Cases

By Banit Agrawal and Nachiket Karmarkar

VMware View Planner 3.5 was recently released; it introduces a slew of new features, enhancements in user experience, and scalability improvements. In this blog, we present some of these new features and use cases. More details can be found in the whitepaper here.

In addition to retaining all the features available in VMware View Planner 3.0, View Planner 3.5 addresses the following new use cases:

  • Support for VMware Horizon 6 (RDSH session and application publishing)
  • Support for Windows 8.1 as desktops
  • Improved user experience
  • Audio-Video sync (AVBench)
  • Drag and Scroll workload (UEBench)
  • Support for Windows 7 as clients

In View Planner 3.5, we augment the capability of View Planner to quantify the user experience for user sessions and application remoting provided through remote desktop session hosts (RDSH) as a server farm. Starting with this release, Windows 8.1 is supported as a desktop guest OS and Windows 7 as a client guest OS.

New Interactive Workloads

We also introduced two advanced workloads: (1) Audio-Video sync (AVBench) and (2) Drag and Scroll workload (UEBench). AVBench determines audio fidelity in a distributed environment where audio and video streams are not tethered. The “Drag and Scroll” workload determines spatial and temporal variance by emulating user events like mouse click, scroll, and drag.

Fig. 1. Mouse-click and drag (UEBench)

As seen in Figure 1, a mouse event is sent to the desktop and the red and black image is dragged across and down the screen.


Fig. 2. Mouse-click and scroll (UEBench)

Similarly, Figure 2 depicts a mouse event sent to the scroll bar of an image that is scrolled up and down.

Better Run Status Reporting

As part of improving the user experience, the UI now tracks the current stage of a View Planner run and notifies the user through a color-coded box. The text inside the box is a clickable link that opens a pop-up giving deeper insight into that particular stage.


Fig. 3. View Planner run status shows the intermediate status of the run

Pre-Check Run Profile for Errors

A “check” button provides users a way to verify the correctness of their run-profile parameters.


Fig. 4. ‘Check’ button in Run & Reports tab in View Planner UI

In the past, users needed to optimize the parent VMs used for deploying client and desktop pools. View Planner 3.5 has automated these optimizations as part of installing the View Planner agent service. The agent installer also comes with a UI that tracks the current installer stage and highlights the results of the various installer stages.

Sample Use Cases

Single Host VDI Scaling

With this release, we have re-affirmed View Planner as an ideal tool for platform characterization in VDI scenarios. On a Cisco UCS C240 server, we started with a small number of desktops running the "standard benchmark profile" and increased the number until the Group A and Group B scores exceeded their thresholds. The results below demonstrate the scalability of a single UCS C240 server as a platform for VDI deployments.

Fig. 5. Single server characterization with hosted desktops for Cisco UCS C240

Single Host RDSH Scaling

We followed the best practices prescribed in the VMware Horizon 6 RDSH Performance & Best Practices whitepaper and set up a number of remote desktop session (RDS) servers that would fully consolidate a single UCS C240 vSphere server. We started with a small number of user sessions per core and increased the count until the Group A and Group B scores exceeded their thresholds. The results below demonstrate how View Planner can accurately gauge the scalability of a platform (Cisco UCS in this case) deployed in an RDS scenario.

Fig. 6. Single server characterization with RDS sessions for Cisco UCS C240

Storage Characterization

View Planner can also be used to characterize storage array performance. The ability of View Planner 3.5 to drive a workload across thousands of virtual desktops and process the results afterwards makes it an ideal candidate for validating storage array performance at scale. The results below, obtained on a Pure Storage FA-420 all-flash array, demonstrate the scalability of VDI desktops: View Planner 3.5 easily scaled to 3000 desktops.


Fig. 7. 3000 Desktops QoS results on Pure Storage FA-420 storage array

Custom Applications Scaling

In addition to characterizing platforms and storage arrays, the custom app framework can achieve targeted, application-specific VDI characterization. The following results show Visio as an example of a custom app scale study on an RDS deployment with a 4-vCPU, 10GB vRAM Windows 2008 server.

Fig. 8. Visio operation response times with View Planner 3.5 when scaling up application sessions

Other Use Cases

With a plethora of features, supported guest OSes, and configurations, it is no wonder that View Planner is capable of characterizing multiple VMware solutions and offerings that work in tandem with VMware Horizon. View Planner 3.5 can also be used to evaluate the following features, which are described in more detail in the whitepaper:

  • VMware Virtual SAN
  • VMware Horizon Mirage
  • VMware App Volumes

For more details about new features, use cases, test environment, and results, please refer to the View Planner 3.5 white paper here.

Virtual SAN 6.0 Performance: Scalability and Best Practices

A technical white paper about Virtual SAN performance has been published. This paper provides guidelines on how to get the best performance for applications deployed on a Virtual SAN cluster.

We used Iometer to generate several workloads that simulate various I/O encountered in Virtual SAN production environments. These are shown in the following table.

Type of I/O workload    Size (1KiB = 1024 bytes)   Mixed Ratio   Shows / Simulates
All Read                4KiB                       -             Maximum random read IOPS that a storage solution can deliver
Mixed Read/Write        4KiB                       70% / 30%     Typical commercial applications deployed in a VSAN cluster
Sequential Read         256KiB                     -             Video streaming from storage
Sequential Write        256KiB                     -             Copying bulk data to storage
Sequential Mixed R/W    256KiB                     70% / 30%     Simultaneous read/write copy from/to storage
In addition to these workloads, we studied Virtual SAN caching tier designs and the effect of Virtual SAN configuration parameters on the Virtual SAN test bed.

Virtual SAN 6.0 can be configured in two ways: Hybrid and All-Flash. Hybrid uses a combination of hard disks (HDDs) to provide storage and a flash tier (SSDs) to provide caching. The All-Flash solution uses all SSDs for storage and caching.

Tests show that the Hybrid Virtual SAN cluster performs extremely well when the working set is fully cached for random access workloads, and also for all sequential access workloads. The All-Flash Virtual SAN cluster, which performs well for random access workloads with large working sets, may be deployed in cases where the working set is too large to fit in a cache. All workloads scale linearly in both types of Virtual SAN clusters—as more hosts and more disk groups per host are added, Virtual SAN sees a corresponding increase in its ability to handle larger workloads. Virtual SAN offers an excellent way to scale up the cluster as performance requirements increase.

You can download Virtual SAN 6.0 Performance: Scalability and Best Practices from the VMware Performance & VMmark Community.

Web 2.0 Applications on VMware Virtual SAN

Here in VMware Performance Engineering, Virtual SAN is a hot topic. This storage solution leverages economical hardware compared to more expensive storage arrays and supports vSphere features such as vMotion, HA, and DRS. We have been testing Virtual SAN with a number of workloads to characterize its performance. In particular, we found that Web 2.0 applications, modeled with the Cloudstone benchmark, perform exceptionally well, with low application latency, on vSphere 5.5 with Virtual SAN. Here we give a quick glimpse of the test configuration and results; full details can be found in the recently published technical white paper about Web 2.0 applications on VMware Virtual SAN.

We ran the Cloudstone benchmark using pairs of Olio server and client virtual machines. Server virtual machines ran on a 3-host server cluster, whereas client virtual machines ran on a 3-node client cluster. Each Olio server virtual machine ran Ubuntu 10.04 with a MySQL database, an NGINX web server with PHP scripts, and a Tomcat application server. An Olio client virtual machine simulated typical Web 2.0 workloads by exercising 7 different types of user operations involving web file exchanges and database inquiries and transactions. Virtual SAN was configured on the server cluster. Web files, database files, and OS files were all on the Virtual SAN, with dedicated virtual disks to store the files separately.

fig1-blog

In the paper, we report test results that show Virtual SAN achieves good application latency. Each server-client virtual machine pair was pre-configured for 500 Olio users. We ran tests with 1500 and 7500 Olio users by using 3 and 15 pairs of virtual machines, respectively. We collected the average round-trip time of Olio operations. These operations were divided into frequent ones (EventDetail, HomePage, Login, and TagSearch) and less frequent ones (AddEvent, AddPerson, and PersonDetail) according to how often they were exercised in the tests.

The following graph shows the average round-trip times for various operations. The thresholds for these operations were defined in the passing criteria: 250 milliseconds for the frequent operations and 500 milliseconds for the less frequent ones. In the 15 VMs/7500 users case, the server cluster was at 70% CPU utilization, but the round-trip times remained well below the passing thresholds, as shown. The white paper also presents the 95th-percentile round-trip time results.
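The passing criteria described above boil down to a per-operation threshold check. The sketch below uses the operation names and thresholds from the post, but the function and the sample latencies are our own illustration, not the measured results:

```python
# Operation groups from the post; thresholds from the passing criteria.
FREQUENT = {"EventDetail", "HomePage", "Login", "TagSearch"}   # 250 ms limit
LESS_FREQUENT = {"AddEvent", "AddPerson", "PersonDetail"}      # 500 ms limit

def passes_qos(avg_rtt_ms):
    """avg_rtt_ms maps operation name -> average round-trip time in ms."""
    for op, rtt in avg_rtt_ms.items():
        limit = 250.0 if op in FREQUENT else 500.0
        if rtt > limit:
            return False
    return True

# Illustrative latencies only:
print(passes_qos({"HomePage": 90.0, "Login": 120.0, "AddPerson": 310.0}))  # True
```

Note that 310 ms passes here only because AddPerson is in the less frequent group with the looser 500 ms limit; the same latency on a frequent operation would fail.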

fig2-blog

For the full story of the 15 VMs/7500 Olio users test, including how we further stressed storage with the workload and the resulting measurements, see the white paper.

VDI Performance Benchmarking on VMware Virtual SAN 5.5

In the previous blog series, we presented VDI performance benchmarking results with the VMware Virtual SAN public beta. We have now announced the general availability of VMware Virtual SAN 5.5, which is part of VMware vSphere 5.5 U1 GA, and of VMware Horizon View 5.3.1, which supports Virtual SAN 5.5. In this blog, we present VDI performance benchmarking results with the Virtual SAN GA bits and highlight the CPU improvements and 16-node scaling results. With Virtual SAN 5.5 and the default policy, we could successfully run 1615 heavy VDI users (VDImark) out of the box on a 16-node Virtual SAN cluster, about 5% more consolidation than with the Virtual SAN public beta.

virtualsan-view-block-diagram

To simulate the VDI workload, which is typically CPU bound and sensitive to I/O, we use VMware View Planner 3.0.1. We run View Planner and consolidate as many heavy users as we can on a particular cluster configuration while meeting the quality of service (QoS) criteria; we call the resulting score VDImark. For the QoS criteria, View Planner operations are divided into three main groups: (1) Group A for interactive operations, (2) Group B for I/O operations, and (3) Group C for background operations. The score is determined separately for Group A and Group B user operations by calculating the 95th-percentile latency of all the operations in each group. The default thresholds are 1.0 second for Group A and 6.0 seconds for Group B. Please refer to the user guide and the run and reporting guides for more details. The scoring is based on several factors, such as the response times of the operations, compliance of the setup and configurations, and other factors.
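The Group A/Group B rule described above amounts to computing a 95th-percentile latency per group and comparing it against the threshold. A minimal sketch of that rule, using the default thresholds from the post (our own simplified reading, not View Planner code):

```python
def percentile_95(latencies):
    """95th-percentile latency via the nearest-rank method."""
    ordered = sorted(latencies)
    # nearest rank = ceil(0.95 * n), done in integer math; convert to 0-based
    rank = -(-95 * len(ordered) // 100) - 1
    return ordered[max(rank, 0)]

def qos_pass(group_a_latencies, group_b_latencies, a_limit=1.0, b_limit=6.0):
    """Default View Planner thresholds: 1.0 s (Group A), 6.0 s (Group B)."""
    return (percentile_95(group_a_latencies) <= a_limit and
            percentile_95(group_b_latencies) <= b_limit)

# Illustrative latency samples in seconds (not measured data):
group_a = [0.4] * 95 + [1.5] * 5   # 95th percentile lands at 0.4 s
group_b = [2.0] * 98 + [7.0] * 2   # 95th percentile lands at 2.0 s
print(qos_pass(group_a, group_b))  # True
```

Because the score uses the 95th percentile rather than the mean, a small tail of slow operations (the 1.5 s and 7 s samples above) does not fail the run as long as 95% of operations stay under the threshold.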

As discussed in the previous blog, we used the same experimental setup (shown below) where each Virtual SAN host has two disk groups and each disk group has one PCI-e solid-state drive (SSD) of 200GB and six 300GB 15k RPM SAS disks. We use default policy when provisioning the automated linked clones pool with VMware Horizon View for all our experiments.

virtualsan55-setup

CPU Improvements in Virtual SAN 5.5

There were several optimizations in Virtual SAN 5.5 compared to the previously available public beta version, and one of the most prominent is the reduction of Virtual SAN's CPU usage. To highlight the CPU improvements, we compare the View Planner score on Virtual SAN 5.5 (vSphere 5.5 U1) and the Virtual SAN public beta (vSphere 5.5). On a 3-node cluster, VDImark (the maximum number of desktop VMs that can run while passing the QoS criteria) was obtained for both Virtual SAN 5.5 and the Virtual SAN public beta; the results are shown below:

virtualsan55-3node

The results show that with Virtual SAN 5.5, we can scale up to 305 VMs on a 3-node cluster, which is about 5% more consolidation when compared with Virtual SAN public beta. This clearly highlights the new CPU improvements in Virtual SAN 5.5 as a higher number of desktop VMs can be consolidated on each host with a similar user experience.

Linear Scaling in VDI Performance

In the next set of experiments, we progressively increase the number of nodes in the Virtual SAN cluster to see how well VDI performance scales. We collect the VDImark score in 3-node, 5-node, 8-node, and 16-node configurations; the result is shown in the chart below.

virtualsan55-scaling

The chart illustrates that VDImark scales linearly as we increase the number of nodes in the Virtual SAN cluster, indicating good performance as nodes are added. As more nodes are added to the cluster, the number of heavy users that can be supported increases proportionately. With the Virtual SAN public beta, a workload of 95 heavy VDI users per host was achieved; now, due to the CPU improvements in Virtual SAN 5.5, we are able to achieve 101 to 102 heavy VDI users per host. On a 16-node cluster, a VDImark of 1615 was achieved, which corresponds to about 101 heavy VDI users per node.

To further illustrate the Group A and Group B response times, we show the average response time of individual operations for these runs for both Group A and Group B, as follows.

virtualsan55-groupA

As seen in the figure above, the average response times of the most interactive operations are less than one second, which is needed to provide a good end-user experience. Even all the way up to 16 nodes, we don't see much variance in the response times; they remain almost constant when scaling up. This clearly illustrates that, as we scale the number of VMs across larger Virtual SAN clusters, the user experience doesn't degrade.

virtualsan55-groupB

Group B is more sensitive to I/O and CPU usage than Group A, so its response times are more important here. The above figure shows how VDI performance scales in Virtual SAN. It is evident from the chart that there is little difference in the response times as the number of VMs is increased from 305 on a 3-node cluster to 1615 on a 16-node cluster. Hence, storage-sensitive VDI operations also scale well as we scale the Virtual SAN nodes from 3 to 16.

To summarize, the test results in this blog show:

  • 5% more VMs can be consolidated on a 3-node Virtual SAN cluster
  • When adding more nodes to the Virtual SAN cluster, the number of heavy users supported increases proportionately (linear scaling)
  • The response times of common user operations (such as opening and saving files, watching a video, and browsing the Web) remain fairly constant as more nodes with more VMs are added.

To see the previous blogs on VDI benchmarking with the Virtual SAN public beta, check the links below: