Product Announcements

Democratize AI and ML for All Enterprises with VMware vSphere and NVIDIA AI Enterprise 3.0

Today, enterprises large and small want to leverage the huge potential of artificial intelligence (AI) and machine learning (ML). Many organizations already use AI and ML extensively to deliver real value, from reducing stock-outs in retail and improving supply chains to enhancing the accuracy of voice recognition. In fact, per Enterprise Strategy Group¹, “82% of organizations have seen value from their AI initiatives in as little as six months.”

However, the challenge for organizations is that AI and ML applications have been hard to deploy. These applications are often anchored in rapidly evolving, bleeding-edge code and lack proven approaches that can meet the rigors of enterprise-scale production environments.

To solve these challenges, VMware and NVIDIA have collaborated to unlock the power of AI for all enterprises by delivering an end-to-end enterprise platform optimized for AI workloads. This integrated platform delivers NVIDIA AI Enterprise, the best-in-class, end-to-end, secure, cloud-native suite of AI software running on VMware vSphere®.

Organizations can take on new challenges while increasing operational efficiency. NVIDIA AI Enterprise is optimized and certified for vSphere, the enterprise workload platform, and runs on NVIDIA-Certified Systems™, industry-leading accelerated servers. This AI-ready platform accelerates the speed at which developers can build and deploy AI and high-performance data analytics, enables organizations to scale modern workloads on the same vSphere infrastructure they’ve already invested in, and delivers enterprise-class manageability, security, and availability through familiar VMware tools.

Launch of NVIDIA AI Enterprise 3.0

With the launch of vSphere 8 earlier this year and today’s launch of NVIDIA AI Enterprise 3.0, this platform’s capabilities to deliver AI solutions have been greatly expanded. Let’s look at some of these expanded capabilities.

  • Higher scalability for complex AI/ML models with eight vGPUs per VM: AI/ML development teams can now scale the GPU resources available to a single VM with support for up to eight virtual GPUs (vGPUs) per VM – a 2x increase (see the configuration sketch after this list).
  • Improved performance with fractional vGPUs: With vSphere 8, multi-vGPU support now includes fractional vGPUs, so demanding workloads can be given exactly the GPU resources they need.
  • Enhanced operational efficiency:
      ◦ Support for automated VM placement by vSphere Distributed Resource Scheduler (DRS): VM placement on GPUs is greatly enhanced by making DRS aware of PCIe topology. DRS automatically chooses the NIC and GPU, or multiple GPUs, to improve performance.
      ◦ Support for device group capability: Device groups enable the aggregation of PCIe devices paired with each other at the hardware level, either using NVLink or through a common PCIe switch.
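To make the multi-vGPU capability above more concrete, here is a minimal Python sketch that uses the open-source pyVmomi SDK to attach two NVIDIA vGPU devices to an existing, powered-off VM. It is an illustrative sketch rather than an official procedure from this announcement: the vCenter address, credentials, VM name (“ai-training-vm”), and the vGPU profile string (“grid_a100-10c”) are placeholders that depend on your environment and GPU model, and the host must already be set up for NVIDIA vGPU. A fractional vGPU is generally expressed the same way, by choosing a profile that maps to a slice of a physical GPU.

    # Minimal, hypothetical sketch: attach two NVIDIA vGPU devices to a VM with pyVmomi.
    # Assumptions: pyVmomi is installed, the ESXi host is configured for NVIDIA vGPU,
    # the VM is powered off, and the host/VM/profile names below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_vm(content, name):
        """Return the first VM in the inventory whose name matches."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        return next(vm for vm in view.view if vm.name == name)

    def vgpu_device_spec(profile, key):
        """Build a device-add spec for one vGPU with the given profile.

        Distinct negative keys keep the new devices unique within a single
        reconfigure request; vCenter assigns real keys on completion.
        """
        backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo(vgpu=profile)
        device = vim.vm.device.VirtualPCIPassthrough(key=key, backing=backing)
        return vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=device)

    ctx = ssl._create_unverified_context()            # lab use only
    si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=ctx)
    try:
        vm = find_vm(si.RetrieveContent(), "ai-training-vm")   # placeholder VM
        # Two vGPUs on one VM; "grid_a100-10c" is a placeholder profile whose
        # exact name depends on the physical GPU and the vGPU software release.
        spec = vim.vm.ConfigSpec(deviceChange=[
            vgpu_device_spec("grid_a100-10c", key=-101),
            vgpu_device_spec("grid_a100-10c", key=-102),
        ])
        task = vm.ReconfigVM_Task(spec=spec)
        print("Reconfigure task submitted:", task.info.key)
    finally:
        Disconnect(si)

The same assignment can also be made interactively by adding PCI devices with NVIDIA vGPU profiles to the VM in the vSphere Client; the sketch above is simply a scriptable equivalent.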


Source

  1. Enterprise Strategy Group Master Survey Results, Supporting AI/ML Initiatives with a Modern Infrastructure Stack, May 2021.