A performance study is available that shows how best to deploy and configure vSphere 6.5 for Big Data applications such as Hadoop and Spark running on a cluster with fast processors, large memory, and all-flash storage (Non-Volatile Memory Express (NVMe) storage and solid-state disks). Hardware, software, and vSphere configuration parameters are documented, as well as tuning parameters for the operating system, Hadoop, and Spark.
We tested the best practices on a 13-server cluster, with Hadoop installed both on vSphere and on bare metal. We ran workloads for Hadoop (TeraSort and TestDFSIO) and for Spark Machine Learning Library routines (K-means clustering, Logistic Regression classification, and Random Forest decision trees) on the cluster. We tested virtualized configurations with 1, 2, and 4 VMs per host and compared them to a similar setup running on bare metal. Among the three virtualized configurations, 4 VMs per host ran fastest because it made the best use of the storage and kept the highest percentage of data transfers within a single server. The 4 VMs per host configuration also ran faster than bare metal on all Hadoop and Spark tests but one.
Here are the results for the TeraSort suite:

[Figure: TeraSort suite results]

And for Spark Random Forest decision trees:

[Figure: Spark Random Forest decision tree results]
Here are the best practices documented in the paper:
- Reserve about 5-6% of total server memory for ESXi, and use the remainder for the virtual machines. (For example, on a host with 512 GB of RAM, leave roughly 26-30 GB for ESXi.)
- Do not overcommit physical memory on any host server that is hosting Big Data workloads.
- Create one or more virtual machines per NUMA node, sized so that each VM fits within the node's cores and memory.
- Limit the number of disks per DataNode to maximize the utilization of each disk: 4 to 6 is a good starting point.
- Use eager-zeroed thick VMDKs along with the ext4 or xfs filesystem inside the guest (see the provisioning sketch after this list).
- Use the VMware Paravirtual SCSI (pvscsi) adapter for disk controllers, and spread the data disks across all 4 virtual SCSI controllers available in vSphere 6.5.
- Use the vmxnet3 network driver; configure virtual switches with MTU=9000 for jumbo frames.
- Configure the guest operating system for Hadoop performance, including enabling jumbo IP frames, reducing swappiness, and disabling transparent hugepage compaction (a tuning sketch follows this list).
- Place Hadoop controller roles, ZooKeeper, and journal nodes on three virtual machines for optimum performance and to enable high availability.
- Dedicate the worker nodes to run only the HDFS DataNode, YARN NodeManager, and Spark Executor roles.
- Run the Hive Metastore in a separate MySQL database (an illustrative hive-site.xml fragment appears below).
- Set the YARN cluster container memory and vcores to slightly overcommit both resources (illustrative yarn-site.xml values below).
- Adjust the task memory and vcore requirements to optimize the number of maps and reduces for each application.
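As a minimal sketch of the disk recommendations above (eager-zeroed thick VMDKs on pvscsi controllers, xfs inside the guest), the following assumes a datastore named datastore1, a VM folder hadoop-w1, and a guest device /dev/sdb; all of these names, and the 200 GB size, are illustrative assumptions rather than values from the paper.

```bash
# On the ESXi host: create a 200 GB eager-zeroed thick VMDK.
# Attach it to the VM on one of the 4 pvscsi controllers in the vSphere client.
vmkfstools -c 200G -d eagerzeroedthick /vmfs/volumes/datastore1/hadoop-w1/data1.vmdk

# Inside the guest: format the new disk with xfs and mount it as an HDFS
# data directory (the device name /dev/sdb is an assumption; check with lsblk).
mkfs -t xfs /dev/sdb
mkdir -p /grid/data1
mount /dev/sdb /grid/data1
echo '/dev/sdb /grid/data1 xfs defaults,noatime 0 0' >> /etc/fstab
```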
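A sketch of the guest OS tuning items, assuming a recent Linux guest and a NIC named eth0 (both assumptions; device names and how settings are persisted vary by distribution):

```bash
# Jumbo frames: match the guest NIC MTU to the MTU=9000 virtual switch.
ip link set dev eth0 mtu 9000

# Reduce swappiness so Hadoop and Spark heaps stay in memory
# (a low value such as 1 is used here for illustration).
sysctl -w vm.swappiness=1

# Disable transparent hugepage compaction, a known source of Hadoop stalls.
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```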
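The Hive Metastore item can be illustrated with a hive-site.xml fragment that points the Metastore at an external MySQL instance; the host name, database name, and user below are assumptions for illustration only.

```xml
<!-- hive-site.xml: point the Metastore at a separate MySQL database
     (connection details are illustrative). -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-db.example.com:3306/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
```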
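To make the last two list items concrete, here is a hypothetical configuration fragment for a worker VM with about 120 GB of RAM and 16 vCPUs; every value below is an illustrative assumption, not a figure from the paper, chosen only to show a slight overcommit of memory and vcores.

```xml
<!-- yarn-site.xml (per worker VM; illustrative values only) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- Slightly more than the memory actually free after the OS and
       Hadoop daemons, per the overcommit recommendation. -->
  <value>116736</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <!-- 16 vCPUs in the VM, offered as 20 vcores. -->
  <value>20</value>
</property>

<!-- mapred-site.xml (per-task sizing; tune per application) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
```

Dividing the container memory by the per-task memory gives the number of concurrent maps or reduces each node can run, which is the quantity the last list item tunes per application.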
All details are in the paper.