Product Announcements

Introducing vSphere 8: The Enterprise Workload Platform

Workloads are growing everywhere

As organizations embrace cloud, they are turning a new chapter in the era of multi-cloud. Multi-cloud has quickly become the dominant deployment model. According to a 451 Research study, 75% of all enterprises have multi-cloud footprints(1). Many enterprises choose to run mission-critical workloads on-premises to take advantage of data locality, predictable workload performance, and low network latency. As larger masses of data accumulate in the enterprise, they tend to attract more local services and applications to minimize latency, increase throughput, and maximize workload performance.

Consumption model preferences are evolving

IDC predicts that by 2025, 60% of enterprises will fund projects through OpEx budgets(2). This trend is evident as IT organizations continue to embrace cloud-based consumption models for infrastructure services. Lines of business are increasingly making their own choices for infrastructure and services based on their unique needs. Software-as-a-service is quickly emerging as a central strategy that businesses are adopting to save time and reach their goals faster. In fact, Gartner expects worldwide SaaS spending to reach $208 billion by 2023(3).

Demand for infrastructure services is increasing

The volume and complexity of modern workloads are increasing, creating higher demand for the infrastructure services that provide key underlying functionality for these workloads. The increasing demand for software-defined infrastructure services places more strain on CPUs, leaving fewer compute cycles for workloads. Newer classes of hardware accelerators like Data Processing Units (DPUs, also known as SmartNICs) have been used successfully by hyperscalers to offload and accelerate infrastructure functions, freeing up CPU cycles to run workloads. As DPUs become more mainstream, there is an increasing number of hardware choices from vendors with varying sets of capabilities. IT infrastructure teams face the challenge of abstracting these hardware differences and presenting a uniform consumption interface to users.

Security threats to Data Center traffic are rising

Modern applications are increasingly microservices-based and drive up east-west network traffic within the data center. When it comes to securing the enterprise network, traditional firewalls focus solely on the network perimeter, with limited understanding of application topologies and behaviors. Traditional distributed firewalling approaches require host-based agents to secure the network within the data center, resulting in complex security architectures and operations. With workloads and infrastructure services sharing a domain, the chances of malicious actors penetrating the network perimeter and persisting in the network are higher.

Traditional approaches to meet the needs of next-gen infrastructure do not work

Adding new server capacity to meet the increasing demand for infrastructure results in higher total cost of ownership. Application-specific silos, such as GPU-based servers for running AI/ML workloads, do not scale well to run traditional IT workloads. The result is an inflexible architecture that adds operational complexity. In a converged domain where workloads run alongside infrastructure services, the CPU complex becomes a single point of failure, as low-level security exploits have shown in the recent past.

On top of it all, IT organizations are expected to deliver a uniform infrastructure consumption experience across multiple clouds to develop, deploy, and maintain high-performing applications in a cost-effective and secure manner.

What if you could supercharge workload performance while scaling infrastructure services across the enterprise and reduce the total cost of infrastructure?

What if you could enable development teams to accelerate their innovation and deliver IaaS services directly to their development environments?

What if the benefits of cloud were made available to on-premises workloads without requiring lift-and-shift?

What if you could centrally manage on-premises infrastructure and reduce the operational burden of maintenance?

Well, the time to wonder ‘what if’ is over.

Introducing VMware vSphere 8

VMware vSphere 8, the enterprise workload platform, brings the benefits of cloud to on-premises workloads, supercharges performance through DPUs and GPUs, and accelerates innovation with an enterprise-ready integrated Kubernetes runtime. Let’s get into more detail.

vSphere 8 ushers in a new era of heterogeneous computing by introducing Data Processing Units to Enterprises through VMware vSphere Distributed Services Engine. vSphere Distributed Services Engine is the next step in the evolution of cloud infrastructure for modern applications, in which the stewardship for running infrastructure services is distributed between the CPU and the DPU.

vSphere Distributed Services Engine modernizes cloud infrastructure into a distributed architecture enabled by DPUs to:

  • Meet the throughput and latency needs of modern distributed workloads by accelerating networking functions
  • Deliver best infrastructure price-performance by providing more CPU resources to workloads
  • Reduce operational overhead of DPU lifecycle management with integrated vSphere workflows

vSphere Distributed Services Engine preserves the existing Day-0, Day-1, and Day-2 vSphere experience that customers are familiar with. It is supported on a broad choice of DPUs from leading silicon vendors (NVIDIA and AMD) and server designs from OEMs (Dell, HPE).

Distributed Services Engine offloads and accelerates the vSphere Distributed Switch and NSX networking on the DPU, with additional services to follow in the future. Right away, this benefits customers running applications that demand high network bandwidth and fast cache access, such as in-memory databases.

In an internal benchmarking study, Redis running on a DPU-enabled host achieved 36% higher throughput along with a 27% reduction in transaction latency. In another scenario, a DPU-enabled host matched the performance of a non-DPU system while using 20% fewer CPU cores. These results show how vSphere 8 helps customers lower the total cost of computing and improve workload performance.

Moreover, with vSphere 8, IT admins can reduce the operational overhead of running DPUs in the infrastructure by leveraging integrated vSphere workflows for lifecycle management and performance monitoring.

We are thrilled to note that our technology partnerships have produced market-ready, end-to-end solutions. Today, Dell Technologies announced the launch of SmartDPU Software Solutions on the VxRail™ platform with a choice of DPUs from AMD Pensando and NVIDIA. HPE announced the launch of HPE ProLiant with VMware vSphere® Distributed Services Engine™ – based on DPUs from AMD Pensando. We are also excited about the rapid progress that other partners like Lenovo and Intel are making in bringing more solutions to market, providing even more choices for our customers.

When it comes to improving workload performance, vSphere 8 does not stop there. vSphere 8 improves AI/ML model training times and achieves unparalleled levels of performance for the most demanding and complex models by adding support for up to 32 NVIDIA GPU devices in passthrough mode – a 4x increase compared to vSphere 7.

Moreover, AI/ML development teams can now scale the GPU resources available to a single VM further, with support for up to 8 vGPUs per VM – a 2x increase compared to vSphere 7.
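For teams that automate VM configuration through the vSphere API, the sketch below shows one way a vGPU profile could be attached to a VM using the open-source pyVmomi library. It is a minimal sketch under stated assumptions: the profile name is a placeholder, and a connected, powered-off VM object is assumed to already be in hand; consult the vSphere documentation for the profiles your hosts actually expose.

```python
# Minimal pyVmomi sketch: attach an NVIDIA vGPU profile to an existing VM.
# Assumptions: `vm` is a pyVmomi VirtualMachine object obtained from an existing
# connection, the VM is powered off, and the profile name exists on the host.
from pyVmomi import vim

def add_vgpu_device(vm, vgpu_profile="grid_a100-8c"):   # profile name is illustrative
    # Backing that points the passthrough device at a vGPU profile
    backing = vim.vm.device.VirtualPCIPassthrough.VgpuBackingInfo(vgpu=vgpu_profile)
    device = vim.vm.device.VirtualPCIPassthrough(backing=backing)

    # Wrap the device in an "add" change and reconfigure the VM
    device_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=device,
    )
    config_spec = vim.vm.ConfigSpec(deviceChange=[device_spec])
    return vm.ReconfigVM_Task(spec=config_spec)
```

Repeating the device change would add further vGPU devices, up to the per-VM limits the platform supports.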

While Kubernetes has gained widespread adoption as the de facto container orchestration technology, IT organizations need a simple way to manage containers alongside VMs. That is why VMware created a streamlined Kubernetes management experience natively built into vSphere. With vSphere 8, VMware is delivering VMware Tanzu Kubernetes Grid 2.0 – designed to help IT teams and developers address the growing complexity of agile development environments. This latest release of Tanzu Kubernetes Grid adds new flexibility and control for cluster creation, open-source API alignment, and improved application lifecycle management capabilities.

DevOps teams spend a significant amount of time setting up Kubernetes clusters. Even where infrastructure services are readily accessible, they are designed to meet the needs of IT admins and are not necessarily integrated with the development environment. As a result, developers either rely on IT admins to provision developer services or stand up infrastructure silos of their own. With vSphere 8, DevOps teams can now access IaaS services (such as provisioning VMs, configuring networking, and setting up Tanzu Kubernetes Grid clusters) directly from the new Cloud Consumption Interface service. The Cloud Consumption Interface simplifies infrastructure setup across the vSphere estate through intuitive UIs and developer-friendly APIs, freeing up time for real development work.
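As an illustration of what API-driven consumption could look like from a developer's desk, here is a minimal Python sketch that requests a Tanzu Kubernetes Grid cluster over REST. The endpoint URL, payload fields, and token handling are hypothetical stand-ins for this sketch, not the documented Cloud Consumption Interface API.

```python
# Hypothetical sketch of requesting a TKG cluster through a developer-facing API.
# The endpoint path, payload fields, and token source are illustrative assumptions.
import requests

CCI_URL = "https://cci.example.com/api/v1/clusters"   # hypothetical endpoint
TOKEN = "…"                                           # supplied by your platform team

payload = {
    "name": "dev-team-cluster",
    "zone": "zone-1",
    "controlPlaneNodes": 1,
    "workerNodes": 3,
    "kubernetesVersion": "v1.23.8",
}

resp = requests.post(
    CCI_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Provisioning request accepted:", resp.json())
```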

 

Provisioning Infrastructure through Cloud Consumption Interface

With vSphere 8, TKG availability zones can be deployed across three vSphere clusters, which increases the overall resilience of containerized workloads to infrastructure failures. Simplified cluster creation has been made possible thanks to a new Cluster API feature called ClusterClass. Users can now create, scale, upgrade, and delete clusters, all from a single place. Provisioning, upgrading, and operating multiple Kubernetes clusters is now a simplified, declarative process. Carvel-based tooling has been introduced to allow application developers and platform operators to build on Kubernetes with confidence.
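To make the declarative model concrete, the following sketch uses the open-source Kubernetes Python client to submit a ClusterClass-based Cluster object through the Cluster API. The namespace, ClusterClass name, node-pool class, and Kubernetes version shown are assumptions for illustration; the field layout follows the upstream cluster.x-k8s.io/v1beta1 schema rather than any TKG-specific defaults.

```python
# Minimal sketch: create a ClusterClass-based Cluster via the Cluster API.
# Assumptions: kubectl context points at the management/Supervisor cluster, and
# the ClusterClass name, namespace, and version strings are placeholders.
from kubernetes import client, config

def create_tkg_cluster(name, namespace="dev-team"):
    config.load_kube_config()            # use the current kubectl context
    api = client.CustomObjectsApi()
    cluster = {
        "apiVersion": "cluster.x-k8s.io/v1beta1",
        "kind": "Cluster",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "topology": {
                "class": "tanzukubernetescluster",   # assumed ClusterClass name
                "version": "v1.23.8",                # assumed Kubernetes version
                "controlPlane": {"replicas": 1},
                "workers": {
                    "machineDeployments": [
                        {"class": "node-pool", "name": "workers", "replicas": 3}
                    ]
                },
            }
        },
    }
    # Submitting the object is all that is required; controllers reconcile the rest.
    return api.create_namespaced_custom_object(
        group="cluster.x-k8s.io",
        version="v1beta1",
        namespace=namespace,
        plural="clusters",
        body=cluster,
    )
```

Scaling or upgrading a cluster then becomes a matter of patching the same object, which is the essence of the declarative workflow.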

In June of this year, we launched VMware vSphere+, a new offering in the vSphere family that combines industry-leading cloud infrastructure technology, an enterprise-ready Kubernetes environment, and high-value cloud services to transform existing on-premises deployments into SaaS-enabled infrastructure. vSphere 8 takes these benefits several steps further.

With vSphere 8, IT admins can deploy the following add-on cloud services that protect workloads and optimize infrastructure, with more on the way.

The VMware Cloud Disaster Recovery add-on service improves the resilience of mission-critical workloads by protecting them from disasters and ransomware.

 

Protect workloads with VMware Cloud Disaster Recovery

The VMware vRealize Operations add-on service provides capacity planning and optimization, right-sizing your infrastructure to fit the current and future needs of your workloads.

 

Easily get details on CPU, memory and storage consumption

IT admins spend a lot of time upgrading and maintaining infrastructure. Though regular maintenance helps ensure uptime and availability, it takes time away from running business-critical applications. vSphere 8 shortens maintenance windows by pre-staging ESXi image downloads and upgrading hosts simultaneously, allowing teams to return to regular operations faster.
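The workflow behind that optimization can be sketched in a few lines of Python. The stage_image and remediate helpers below are hypothetical placeholders rather than vSphere APIs; the point is simply that staging happens before the maintenance window and remediation runs on several hosts at once.

```python
# Conceptual sketch only: stage_image() and remediate() are hypothetical helpers,
# not vSphere APIs. They stand in for "download the image ahead of time" and
# "enter maintenance mode, apply the staged image, reboot, exit".
from concurrent.futures import ThreadPoolExecutor

def stage_image(host, image):
    """Pre-stage the desired ESXi image on the host (hypothetical placeholder)."""
    ...

def remediate(host, image):
    """Apply the staged image to the host (hypothetical placeholder)."""
    ...

def upgrade_cluster(hosts, image, parallelism=4):
    # Staging happens outside the maintenance window, so it costs no downtime.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        list(pool.map(lambda h: stage_image(h, image), hosts))
    # Remediating several hosts at once shortens the overall window, bounded by
    # how much capacity the cluster can spare while hosts are in maintenance mode.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        list(pool.map(lambda h: remediate(h, image), hosts))
```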

With the growth of workloads on-premises and at the edge, initial placement and migration are critical to maximizing service availability, balancing utilization, and minimizing downtime. vSphere 8 gives vSphere Distributed Resource Scheduler and vMotion a major upgrade. Distributed Resource Scheduler now factors workload memory usage into placement decisions, so it can place workloads more optimally based on their memory needs.
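As a toy illustration of memory-aware placement (not VMware's actual DRS algorithm), the sketch below scores candidate hosts on both CPU and memory headroom and rejects hosts that cannot satisfy either demand.

```python
# Illustrative only: a toy placement scorer that, like the DRS behavior described
# above, considers a workload's memory demand as well as CPU when choosing a host.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu_mhz: int
    free_mem_mb: int

@dataclass
class Workload:
    cpu_mhz: int
    mem_mb: int

def pick_host(workload, hosts):
    """Return the host that keeps the most balanced headroom after placement."""
    def score(host):
        cpu_left = host.free_cpu_mhz - workload.cpu_mhz
        mem_left = host.free_mem_mb - workload.mem_mb
        if cpu_left < 0 or mem_left < 0:
            return float("-inf")          # host cannot satisfy the demand
        # Reward hosts with headroom on BOTH dimensions; a host with plenty of
        # CPU but scarce memory scores poorly.
        return min(cpu_left / max(host.free_cpu_mhz, 1),
                   mem_left / max(host.free_mem_mb, 1))
    return max(hosts, key=score)

hosts = [Host("esx-01", 8000, 16384), Host("esx-02", 20000, 4096)]
print(pick_host(Workload(cpu_mhz=2000, mem_mb=8192), hosts).name)   # -> esx-01
```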

vMotion now supports migration of VMs running on hosts that support Intel Scalable I/O Virtualization (Intel SIOV). Workloads can now simultaneously enjoy the benefits of SIOV passthrough performance and mobility across the vSphere infrastructure.

According to an IDC study, 65% of global GDP will be digitalized by 2022(4), and IDC expects the Global DataSphere to double between 2022 and 2026(5). As the footprint of computing continues to grow, many enterprises are starting to think about sustainable ways to operate infrastructure. VMware has taken the first step in helping enterprises develop sustainable computing strategies. vSphere 8 introduces Green Metrics, which help you track the power consumed by workloads and infrastructure operations. This is just the first step in helping customers realize opportunities to reduce their carbon footprint while meeting business objectives.
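To show how power telemetry of this kind can feed sustainability reporting, here is a back-of-the-envelope Python sketch that converts sampled power draw into energy and an emissions estimate. The sampling interval and grid emission factor are assumptions for illustration, not values reported by vSphere.

```python
# Back-of-the-envelope sketch: sampled power draw (watts) -> energy -> emissions.
# The sample interval and emission factor below are illustrative assumptions.
SAMPLE_INTERVAL_S = 20          # assumed sampling interval in seconds
GRID_KG_CO2_PER_KWH = 0.4       # assumed grid emission factor

def energy_kwh(power_samples_watts, interval_s=SAMPLE_INTERVAL_S):
    joules = sum(power_samples_watts) * interval_s
    return joules / 3_600_000    # 1 kWh = 3.6 million joules

samples = [180, 175, 190, 205, 185]   # watts drawn by a host over ~100 seconds
kwh = energy_kwh(samples)
print(f"{kwh:.4f} kWh, ~{kwh * GRID_KG_CO2_PER_KWH:.4f} kg CO2e")
```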

vSphere 8 Green Metrics

 

Learn more

Find out how vSphere 8 can supercharge workload performance, accelerate innovation for DevOps, and enhance operational efficiency – all while bringing the benefits of the cloud to on-premises workloads.

Sources

(1) Posey, Melanie. Voice of the Enterprise: Vendor Evaluations 2021 Cloud, Hosting & Managed Services. S&P Global. Published Feb 2022

(2) IDC, IDC FutureScape: Worldwide Future of Digital Infrastructure 2022 Predictions, Oct 2021, Doc # US47441321

(3) Gartner®, Gartner Forecasts Worldwide Public Cloud End-User Spending to Reach Nearly $500 Billion in 2022, Press Release, April 19, 2022

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

(4) IDC, IDC FutureScape: Worldwide Digital Transformation 2021 Predictions, Oct 2020, Doc # US46880818

(5) IDC, Worldwide IDC Global DataSphere Forecast, 2022–2026: Enterprise Organizations Driving Most of the Data Growth, May 2022, Doc # US49018922