by Hari Sivaraman, Uday Kurkure, and Lan Vu
In a previous blog, we looked at how machine learning workloads (MNIST and CIFAR-10) using TensorFlow, running in vSphere 6 VMs in an NVIDIA GRID configuration, reduced training time from hours to minutes when compared to the same system running without virtual GPUs.
Here, we extend our study to multiple workloads—3D CAD and machine learning—run concurrently vs. run independently on the same vSphere server.
This is episode 2 of a series of blogs on machine learning with vSphere. Also see:
- Episode 1: Performance Results of Machine Learning with DirectPath I/O and GRID vGPU
- Episode 3: Performance Comparison of Native GPU to Virtualized GPU and Scalability of Virtualized GPUs for Machine Learning