A new paper shows that paravirtual RDMA (PVRDMA) is a viable option for using remote direct memory access (RDMA) in vSphere for virtualized high performance computing (HPC). The usual alternative, PCI passthrough (also known as vSphere DirectPath I/O), doesn’t let you use typical vSphere features like high availability, DRS, vMotion, and others.
The paper first describes how to set up PVRDMA, with screenshots that step you through the process. It then compares the performance of PVRDMA with passthrough. Passthrough still performs better, but PVRDMA performs at an acceptable level while letting you use all the convenient features of vSphere virtualization.
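Once PVRDMA is set up, a quick sanity check is to confirm that the guest actually sees an RDMA device. The short C sketch below is not from the paper; it simply enumerates RDMA devices with the standard libibverbs API on a Linux guest (assuming the rdma-core/libibverbs development package is installed), and the device names it prints depend on your environment.

```c
/* pvrdma_check.c - a minimal sketch (not from the paper) that lists the
 * RDMA devices visible inside the guest, using the standard libibverbs API.
 * Build with: gcc pvrdma_check.c -o pvrdma_check -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found; PVRDMA may not be configured.\n");
        return 1;
    }

    /* Print the name of each RDMA device the guest can see. A VM with a
     * PVRDMA adapter should show at least one entry here. */
    for (int i = 0; i < num_devices; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```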
We conducted performance testing of the different technologies used for HPC on vSphere using OpenFOAM, popular open-source software for solving computational fluid dynamics (CFD) problems, including aerodynamics, simulation of industrial flows, combustion systems, and electronic design automation. OpenFOAM uses the message passing interface (MPI) for parallel distributed workloads. MPI is a standard for programming parallel HPC code across multiple processors, and it runs on a variety of machines, from small clusters to giant supercomputers.
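For readers unfamiliar with MPI, the sketch below (illustrative only, not the paper’s workload) shows the basic pattern solvers like OpenFOAM rely on: each process learns its rank, and all processes combine data with a collective operation over the interconnect, which on vSphere can be TCP, PVRDMA, or a passthrough RDMA device.

```c
/* mpi_hello.c - a minimal MPI sketch (illustrative only, not the paper's
 * workload). Each process reports its rank, then all ranks combine a value
 * with MPI_Allreduce, the kind of collective a CFD solver performs constantly.
 * Build and run with, e.g.: mpicc mpi_hello.c -o mpi_hello && mpirun -np 4 ./mpi_hello
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its own number; the sum is delivered to all. */
    int local = rank + 1;
    int sum = 0;
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d of %d: global sum = %d\n", rank, size, sum);

    MPI_Finalize();
    return 0;
}
```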
One test compares PVRDMA performance with passthrough and TCP as the number of VMs is scaled out. The results are shown below in figure 1. The chart shows that PVRDMA performance comes close to that of passthrough.
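If you want to compare the interconnect options in your own environment, a standard MPI ping-pong microbenchmark is a simple way to measure point-to-point latency between two VMs over TCP, PVRDMA, or passthrough. The sketch below is a generic example of that technique, not the benchmark used in the paper.

```c
/* pingpong.c - a generic MPI ping-pong latency sketch (not the paper's
 * benchmark). Rank 0 sends a small message to rank 1 and waits for the
 * reply; the average round-trip time reflects the interconnect's latency.
 * Run across two VMs with, e.g.: mpirun -np 2 -host vm1,vm2 ./pingpong
 */
#include <stdio.h>
#include <mpi.h>

#define ITERATIONS 1000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* Send one byte to rank 1 and wait for it to come back. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("average round-trip latency: %.2f us\n",
               elapsed / ITERATIONS * 1e6);

    MPI_Finalize();
    return 0;
}
```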
For more information about how to enable PVRDMA and to see how this solution performs, read the technical paper: “VMware Paravirtual RDMA for High Performance Computing.”