Project Monterey – some choice quotes from our executives

Pat Gelsinger (CEO), Greg Lavender (SVP and CTO) and Chris Wolf (VP OCTO, ATG) have been key to the R&D work that led both to today’s Project Monterey announcement and to the underlying ESXi-Arm technology. Many will remember Greg’s demo of some early incubation concepts from last year, which Chris summed up well in his 4 Hypervisors + 1 Server = 0 Nesting blog post right after the VMworld 2019 keynote.

[Embedded media: VMworld 2019]

We’ve come far since VMworld 2019.

Kit Colbert (VP & CTO, CPBU) and Pere Monclus (VP & CTO, NSBU), in their session The Datacenter of the Future [HCP3004], showed a great demo of Project Monterey’s powers: exporting networking and storage services to virtualized and bare metal workloads with a clear trust/isolation boundary. In that demo, a vSAN array was made available to two separate hosts via NVMe. One of the hosts ran vSphere and a VM, while the other ran Linux. Neither knew anything about vSAN – it was just NVMe storage to them. The magic was all inside the Monterey-powered SmartNICs – in this case, an NVIDIA BlueField-based device.
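
That transparency is the whole point: the Linux host needed no vSAN software at all, because the vSAN-backed namespace simply showed up as another NVMe device. As a minimal sketch of what that host-side view looks like (standard Linux sysfs only, nothing VMware-specific – runnable on any Linux box with NVMe devices):

```python
# Enumerate NVMe controllers visible to a Linux host via sysfs.
# On a Monterey-style setup, a vSAN-backed namespace would appear
# here as just another NVMe device; no vSAN code runs on the host.
from pathlib import Path

SYS_NVME = Path("/sys/class/nvme")

def list_nvme_controllers():
    """Yield (controller name, model, serial) for each NVMe controller."""
    if not SYS_NVME.exists():
        return
    for ctrl in sorted(SYS_NVME.iterdir()):
        model = (ctrl / "model").read_text().strip() if (ctrl / "model").exists() else "?"
        serial = (ctrl / "serial").read_text().strip() if (ctrl / "serial").exists() else "?"
        yield ctrl.name, model, serial

if __name__ == "__main__":
    for name, model, serial in list_nvme_controllers():
        print(f"{name}: model={model} serial={serial}")
```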

No longer does compute infrastructure have to be tightly coupled to the implementation of storage and networking services. No longer does the I/O stack have to compete for server CPU with workloads. No longer do you have to choose between a bare metal silo and VCF. If you haven’t read Kit’s Project Monterey announcement yet, have a look!

Today Pat, Greg and Chris shared some interesting perspectives.

Pat spoke with John Furrier and Dave Vellante, co-hosts of theCUBE, during VMworld and had this to say about Monterey and the road ahead:

Pat: Monterey is a big deal. We’ve been leveraging the NIC, essentially virtual NICs, but we never leveraged the resources of the network adapters themselves. It sits in the right place in the sense that it is the network traffic cop, it is the place to do security acceleration, it is the place that enables IO bandwidth optimization across increasingly rich applications.

Greg was also interviewed by theCUBE about Monterey. Here’s a quote that stood out to me:

Greg: With any computing server in the data center, in a colo facility or even in the cloud, a large portion of the CPU resources, and even some memory resources, can get consumed by just processing the high volumes of I/O that’s going out to storage devices, communicating between the different parts of multi-tiered applications. So there’s an overhead that gets consumed in the core server CPU, even if it’s multi-core, multi-socket.

By offloading a lot of that I/O work onto the ARM core and taking advantage of hardware offloads there in those SmartNICs, you can offload that processing and free up even as much as 30% of the CPU of a multi-socket, multi-core server and give that back to the application so that the application gets the benefit of those extra compute and memory resources.
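
To put that figure in perspective, here’s a rough back-of-envelope calculation. The server configuration below is a hypothetical example of mine, not one from the interview:

```python
# Back-of-envelope: cores reclaimed by offloading I/O to a SmartNIC.
# Hypothetical 2-socket, 32-cores-per-socket server; 30% is the upper
# bound Greg cites for host CPU consumed by I/O processing.
sockets = 2
cores_per_socket = 32
io_overhead = 0.30  # fraction of host CPU spent on I/O

total_cores = sockets * cores_per_socket
reclaimed = total_cores * io_overhead

print(f"total cores: {total_cores}")                # total cores: 64
print(f"cores returned to apps: ~{reclaimed:.0f}")  # cores returned to apps: ~19
```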

Chris published a great summary of today’s announcements, but what I really liked was the approachable explanation of the three key use cases for Project Monterey:

1. Network performance and security: Consider running security services such as an L4-7 firewall on a SmartNIC, decoupling it from the host platform and achieving line-rate performance. Organizations can further isolate tenants by running independent workloads on SmartNICs, or even run multiple network functions side by side on the SmartNIC, isolated from one another by the hypervisor (e.g., ESXi on Arm).
2. Storage performance and dynamic composition: As with networking, you have new opportunities for combinations of scale-up and scale-out architectures by taking advantage of processors on SmartNICs to accelerate a variety of storage functions, such as compression and encryption. Project Monterey will also make it possible to scale storage on demand to meet performance or capacity requirements.
3. Bare metal workloads and composability: This is where Project Monterey really gets interesting. Imagine running the ESXi control plane on a SmartNIC, freeing all the x86 host cores to run other workloads, including bare metal ones. You can then run workloads on bare metal while still integrating them with core SDDC services such as VMware vSAN and NSX. From a flexibility perspective, these options take VMware Cloud Foundation to a new level in its ability to dynamically support a variety of hardware interfaces, composing infrastructure on demand (see the toy sketch after this list).
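
To make that third use case concrete, here’s a toy model of the composition idea. Every name in it is my own illustration, not a VMware API: infrastructure services run on the SmartNIC, and the host only ever sees standard device interfaces:

```python
# Toy model of Monterey-style composition. The SmartNIC runs the
# infrastructure control plane and services; the host consumes them
# through standard interfaces and keeps all x86 cores for workloads.
# All names here are illustrative, not a real VMware API.
from dataclasses import dataclass, field

@dataclass
class SmartNIC:
    """Hosts SDDC services, isolated from the x86 host."""
    services: dict = field(default_factory=dict)

    def enable(self, name: str, backend: str) -> None:
        # e.g. "storage" backed by vSAN, "networking" by NSX
        self.services[name] = backend

@dataclass
class BareMetalHost:
    nic: SmartNIC
    cores: int = 64  # every core stays available to the workload

    def attached_devices(self) -> dict:
        # The host sees only standard device interfaces (e.g. NVMe);
        # the backends live on the SmartNIC, behind the trust boundary.
        return {name: "standard device interface" for name in self.nic.services}

nic = SmartNIC()
nic.enable("storage", "vSAN exported over NVMe")
nic.enable("networking", "NSX L4-7 firewall")

host = BareMetalHost(nic=nic)
print(host.attached_devices())
# {'storage': 'standard device interface', 'networking': 'standard device interface'}
```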