

VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3

In part 1 and part 2 of the VDI/VSAN benchmarking blog series, we presented VDI benchmark results on VSAN for 3-node, 5-node, 7-node, and 8-node cluster configurations. In this blog, we compare the VDI benchmarking performance of VSAN with an all-flash storage array. The intent of this experiment is not to compare the maximum IOPS you can achieve on these storage solutions; instead, we show how VSAN scales as we add more heavy VDI users. We found that VSAN can support a similar number of users as an all-flash array, even though VSAN also consumes host resources.

VDI workloads are characteristically CPU bound but sensitive to I/O, which makes View Planner a natural fit for this comparative study. We used VMware View Planner 3.0 for both VSAN and the all-flash SAN, consolidating as many heavy users as we could on a particular cluster configuration while still meeting the quality of service (QoS) criteria. We then compared the number of users each solution could support before running out of CPU, because I/O is not a bottleneck here. Since VSAN runs in the kernel and uses CPU on the host for its operations, we measured that overhead and found it to be quite minimal: we see no more than a 5% consolidation difference for a heavy-user run on VSAN compared to the all-flash array.
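To make the consolidation methodology concrete, the sketch below illustrates the basic sweep idea: keep adding heavy users until the QoS criteria fail, and report the highest passing count. This is only an illustration, not View Planner's actual implementation; the meets_qos callable, the starting count, and the step size are all hypothetical placeholders.

def find_vdimark(meets_qos, start=100, step=25):
    """Ramp up the heavy-user count until the QoS criteria fail;
    the highest passing count is the score for the configuration.

    meets_qos: hypothetical callable that runs the workload with the
    given number of users and returns True if the Group-A and Group-B
    thresholds (described below) were met.
    """
    users, best = start, 0
    while meets_qos(users):
        best = users
        users += step
    return best

# Example with a stand-in predicate (a real run drives actual desktop VMs):
print(find_vdimark(lambda users: users <= 767))  # prints 750 with these defaults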

As discussed in the previous blog, we used the same experimental setup, where each VSAN host has two disk groups and each disk group has one 200GB PCIe solid-state drive (SSD) and six 300GB 15K RPM SAS disks. We built a 7-node and an 8-node cluster and ran View Planner to get the VDImark™ score for both VSAN and the all-flash array. VDImark signifies the number of heavy users you can successfully run while meeting the QoS criteria for a system under test. The VDImark for both VSAN and the all-flash array is shown in the following figure.
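As a quick back-of-the-envelope check on this layout (a sketch based only on the disk counts above; in VSAN the SSDs act as a caching tier, and usable space is further reduced by the storage policy's replication overhead), the raw capacity and cache per cluster work out as follows:

# Per-host layout described above: 2 disk groups, each with one 200GB PCIe SSD
# (cache) and six 300GB 15K RPM SAS disks (capacity).
DISK_GROUPS_PER_HOST = 2
SSD_GB_PER_GROUP = 200
SAS_DISKS_PER_GROUP = 6
SAS_GB_PER_DISK = 300

def raw_capacity_gb(hosts):
    # Raw magnetic capacity contributed to the VSAN datastore.
    return hosts * DISK_GROUPS_PER_HOST * SAS_DISKS_PER_GROUP * SAS_GB_PER_DISK

def cache_gb(hosts):
    # Total flash cache across the cluster.
    return hosts * DISK_GROUPS_PER_HOST * SSD_GB_PER_GROUP

for nodes in (7, 8):
    print(f"{nodes}-node: {raw_capacity_gb(nodes) / 1000:.1f} TB raw disk, "
          f"{cache_gb(nodes) / 1000:.1f} TB flash cache")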

View Planner QoS (VDImark)

 

From the above chart, we see that VSAN can consolidate 677 heavy users (VDImark) for the 7-node cluster and 767 heavy users for the 8-node cluster. Compared to the all-flash array, we see no more than a 5% difference in user consolidation. To further illustrate the user experience, we show the average response times of individual Group-A and Group-B operations for these runs in the following figures.

Group-A Response Times

As seen in the figure above, for both VSAN and the all-flash array the average response times of the most interactive operations are less than one second, which is needed to provide a good end-user experience. As with user consolidation, the response times of Group-A operations on VSAN closely match what we saw with the all-flash array.

Group-B Response Times

Group-B operations are sensitive to both CPU and I/O, and 95% of them must complete in less than six seconds to meet the QoS criteria. From the above figure, we see that the average response time for most operations is within the threshold, and the response times on VSAN are similar to those of the all-flash array.
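As a concrete illustration of that check, the snippet below applies a nearest-rank 95th-percentile test against the six-second threshold; the sample values are made up for illustration and are not measurements from these runs.

import math

# Hypothetical Group-B response-time samples in seconds (not measured data).
group_b_samples = [2.1, 3.4, 2.8, 5.2, 4.9, 3.1, 2.6, 4.4, 3.8, 5.7]

def percentile_95(samples):
    # Nearest-rank 95th percentile of a list of response times.
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

p95 = percentile_95(group_b_samples)
print(f"Group-B 95th percentile: {p95:.1f}s -> {'PASS' if p95 < 6.0 else 'FAIL'}")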

To see the other parts of the VDI/VSAN benchmarking blog series, check the links below:
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 1
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 2
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3

 

13 thoughts on “VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3”

  1. Lee Reynolds

    Excellent series of posts. Can you say what type of all-flash array was tested? Also, what was the interconnect on the hosts (simply the minimum single 10Gb Ethernet link, or something different)?

    Thanks

  2. Ed Marchand

    Great information showing the comparisons, especially this blog.

    Just to clarify, was the comparison of VSAN to the flash array a comparison of
    a) the VSAN setup in Part 1 of the blog with
    b) a SAN-connected all-flash storage array, connected to the 7 or 8 “nodes” via a 10Gb Ethernet link, with all VM-related data and VMDKs on the flash array instead of on the disks and the SSD? Or was vSphere caching with the local SSD enabled, and then the disks replaced by the flash array?

    1. pradeep chikku

      Thank you.
      a) Yes, it is.
      b) The vSphere Storage Accelerator for View is enabled in both the all-flash SAN and VSAN experiments. vSphere Storage Accelerator (a.k.a. CBRC) is an in-memory caching solution. The host-side SSD is NOT used as a cache for the all-flash SAN experiments.

