vSAN Hyperconverged Infrastructure Technology Partners

DataStax Enterprise on VMware vSAN for Development

This blog was co-authored by Kathryn Erickson, Director of Strategic Partnerships at DataStax

After several months of diligent work, DataStax and VMware have reached another milestone: following the solution brief published at VMworld, the two companies have delivered a technical white paper covering DataStax Enterprise (DSE) on vSAN for development environments.

VMware and DataStax have jointly undertaken an extensive technical validation to demonstrate VMware vSAN™ as a storage platform for globally distributed cloud applications. In this first phase of the effort, DataStax and VMware jointly advocate the solution for test and development environments while working toward a future vSAN offering.

DataStax Enterprise (DSE) is powered by the best distribution of Apache Cassandra™. The new DataStax Enterprise on VMware vSAN for Development—Technical White Paper is now available on StorageHub.


This joint solution showcases VMware vSAN as a Hyper-Converged Infrastructure (HCI) platform for deploying DSE in a vSphere environment. This technical white paper:

  • Provides the solution architecture for deploying DSE in a vSAN cluster for development.
  • Measures performance when running DSE in a vSAN cluster, to the extent of the testing and cluster size described.
  • Evaluates the impact of different parameter settings in the performance testing.
  • Identifies the steps required to ensure resiliency and availability against various failures.
  • Provides best practice guidance.

Here are some highlights from the white paper that go beyond the earlier solution brief.

Test Setup

We created an 8-node all-flash vSphere and vSAN cluster and deployed a 16-node DSE cluster on it. We also deployed DataStax OpsCenter and eight DSE stress client VMs running Cassandra-stress on a separate hybrid vSAN cluster on the same VM network; typically, one stress client can generate enough workload to saturate two DSE nodes. We configured a separate storage cluster on the hybrid cluster to avoid any performance impact on the DSE cluster under test.

Figure 1. Solution Setup

To ensure continued data protection and availability of DSE during planned or unplanned downtime, a minimum of four nodes is recommended for the vSAN cluster, and an all-flash configuration is required for consistent performance and predictable latency.

In our solution validation, the hardware and software configurations are the same as those described in the solution brief. To simulate realistic conditions, we used data sets large enough that each node's data clearly exceeded its RAM capacity, loading a base data set of at least 500 GB per node.
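The cluster sizing described above can be sanity-checked with a quick calculation. This is just a sketch using the figures quoted in this post (16 DSE nodes, 500 GB of base data per node, and the rule of thumb that one stress client saturates about two DSE nodes):

```shell
# Sanity-check the test-bed sizing (figures from the setup described above).
dse_nodes=16
data_per_node_gb=500          # base data set loaded per DSE node
nodes_per_client=2            # one stress client saturates ~2 DSE nodes

total_data_tb=$(( dse_nodes * data_per_node_gb / 1000 ))
clients=$(( dse_nodes / nodes_per_client ))

echo "total base data: ${total_data_tb} TB"   # 8 TB across the cluster
echo "stress clients needed: ${clients}"      # matches the 8 client VMs
```

The result matches the test bed: 8 TB of base data in total, driven by eight Cassandra-stress client VMs.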


The eight client nodes all run Cassandra-stress, a built-in DSE benchmark tool used for workload testing.
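For readers unfamiliar with the tool, a Cassandra-stress invocation for a mixed read/write run looks roughly like the sketch below. The host names are placeholders and the exact options may differ from those used in the white paper, so the command is printed rather than executed here:

```shell
# Sketch of a Cassandra-stress mixed-workload invocation (hypothetical
# contact points; options illustrative, not the paper's exact settings).
STRESS_NODES="dse-node1,dse-node2"

CMD="cassandra-stress mixed ratio(write=1,read=1) duration=30m \
  cl=LOCAL_ONE -rate threads=140 -node ${STRESS_NODES}"

# Print rather than run, since this is a sketch:
echo "$CMD"
```

The `-rate threads=` option is the knob varied in the thread-count tests discussed later.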

Evaluate the Impact of VM Settings

In this test, write operations appeared to be CPU bound: DSE is highly concurrent and uses as many CPU cores as are available. The maximum vCPU count for a DSE VM depends on the number of physical CPU cores on the host.

We verified that the more vCPUs the DSE nodes have, the higher the data-loading throughput. We set the DSE VM memory to 64 GB and tested loading 1,000,000 records (RF=3, CL=LOCAL_ONE, TH=256) with 8, 12, and 24 vCPUs.
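A load run with those parameters can be expressed in Cassandra-stress roughly as follows. The contact point is a placeholder and the command is printed as a dry run rather than executed, since this is a sketch and not the paper's exact invocation:

```shell
# Sketch of the data-loading run above: 1,000,000 records, replication
# factor 3, consistency level LOCAL_ONE, 256 client threads.
# "dse-node1" is a hypothetical contact point.
LOAD_CMD="cassandra-stress write n=1000000 cl=LOCAL_ONE \
  -schema replication(factor=3) \
  -rate threads=256 \
  -node dse-node1"

echo "$LOAD_CMD"   # printed rather than executed in this sketch
```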

Figure 2.  Load Testing Performance with Different vCPU Counts

Tuning Java Resources

DSE runs on a Java virtual machine (JVM). Insufficient garbage collection (GC) throughput can introduce GC pauses that lead to high latency. To achieve the best performance, it is important to select the right garbage collector and heap size settings. In our solution, we used DSE version 5.1, which uses the G1 garbage collector by default. We followed the Set the heap size for optimal Java garbage collection in DataStax Enterprise guide to determine the optimum heap size.
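In DSE 5.1 these settings live in the jvm.options file. The fragment below is purely illustrative: the heap value is an example, not the setting used in the white paper, and the right size for a given node should come from the DataStax heap-sizing guide referenced above.

```
# Example jvm.options entries for the G1 collector (illustrative values only)

# Fix the heap: set min equal to max to avoid resizing pauses
-Xms16G
-Xmx16G

# Use the G1 garbage collector (the DSE 5.1 default)
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
```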

50% Write and 50% Read Performance Testing Results

We ran the stress test with various thread counts. Figure 3 shows that throughput increased with thread count: write IOPS was 71,350 with TH=100; 85,104 with TH=140, an increase of about 20%; and 90,212 with TH=175, an increase of about 26% compared with TH=100. Read IOPS results were similar.
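The percentage gains quoted above can be checked directly from the raw write IOPS figures:

```shell
# Compute the relative throughput gains from the raw write IOPS numbers.
awk 'BEGIN {
  base = 71350                                        # write IOPS at TH=100
  printf "TH=140: +%.1f%%\n", (85104 - base) / base * 100
  printf "TH=175: +%.1f%%\n", (90212 - base) / base * 100
}'
# prints TH=140: +19.3%
#        TH=175: +26.4%
```

Those work out to roughly the 20% and 26% increases cited in the paper.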

Figure 3.  50% Write and 50% Read Performance with Different Thread Counts

From the latency curves in Figure 4, the TH=175 read and write latency curves were much steeper, while the TH=140 latency curve was similar to that of TH=100. For users weighing latency together with throughput, TH=140 is the better setting.

Figure 4.  50% Write and 50% Read Latency with Different Thread Counts


Overall, deploying, running, and managing DSE applications on VMware vSAN provides high availability and predictable performance within the scenarios tested. All storage management moves into a single software stack, taking advantage of the security, operational simplicity, and cost-effectiveness of vSAN.

It is simple to expand using a scale-up or scale-out approach without incurring any downtime. With the joint efforts of VMware and DataStax, customers can deploy DSE clusters on vSAN for their modern cloud applications with ease and confidence in test and development environments. Check back for further developments around the future of this partnership, including vSAN enhancements designed for cloud applications that require data to be contextual, always on, real-time, distributed, and scalable.

To learn more, check out the full technical white paper here.

