Compute (vSphere)

What’s New in VMware vSphere 8 Update 3?

The enterprise workload engine.

It’s that time again! Time for another feature-rich update to vSphere 8: introducing vSphere 8 Update 3. Read our vSphere 8 Update 3 announcement.

This release builds on Update 2, Update 1, and the initial release. You can read up on those releases in the articles linked below.

Lifecycle Management

Keeping vSphere Updated

This is a quick overview of the main areas of lifecycle management in vSphere, with their existing features and the new 8 Update 3 capabilities highlighted.

vCenter Reduced Downtime

  • Patch and update vCenter with minimal downtime. This now includes support for all deployment topologies and the ability to automatically perform the switchover phase.

vSphere Lifecycle Manager

  • Manage the software, driver, and firmware stack for vSphere clusters and standalone hosts. This now includes Live Patch, enhanced image customization, and support for dual DPU configurations.

vSphere Configuration Profiles

  • Manage the configuration of vSphere clusters. vSphere 8 U3 adds support for clusters still using baselines (formerly Update Manager) that have not yet transitioned to cluster images with vSphere Lifecycle Manager.

Live Patch

With vSphere 8.0 Update 3, we can address critical bugs in the virtual machine execution environment (vmx) without needing to reboot or evacuate the entire host. Examples of fixes include those in the virtual devices space.

Virtual machines are fast-suspend-resumed (FSR) as part of the host remediation process. This is non-disruptive to most virtual machines.

A virtual machine FSR is a non-disruptive operation that is already used when adding or removing virtual hardware devices on powered-on virtual machines.

The vSphere Lifecycle Manager compliance scan will report virtual machines that are incompatible with FSR and the reason why.

Here is an example showing how a Live Patch is applied:

  1. Host enters partial maintenance mode
  2. New mount revision loaded
  3. New mount revision patched
  4. VMs fast-suspend-resume to consume patched mount revision

Some virtual machines are not compatible with FSR: VMs configured with vSphere Fault Tolerance, VMs configured with VM DirectPath I/O devices, and vSphere Pods (container pods).

These VMs must be manually remediated, either by migrating them with vSphere vMotion or by power cycling them, to pick up a new patched vmx instance.
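The vSphere Lifecycle Manager compliance scan is the authoritative check, but a quick inventory pass is easy to script. Here is a minimal PowerCLI sketch, assuming an existing Connect-VIServer session; it flags FT-enabled VMs and VMs with DirectPath I/O devices, and does not account for vSphere Pods:

```powershell
# Minimal sketch (not the official compliance scan): flag VMs likely to need
# manual remediation for Live Patch, i.e. FT-enabled VMs and VMs with
# DirectPath I/O (PCI passthrough) devices. vSphere Pods are not covered.
Get-VM | ForEach-Object {
    $vm  = $_
    $ft  = $vm.ExtensionData.Runtime.FaultToleranceState -ne 'notConfigured'
    $pci = $vm.ExtensionData.Config.Hardware.Device | Where-Object {
        $_.Backing -is [VMware.Vim.VirtualPCIPassthroughDeviceBackingInfo]
    }
    if ($ft -or $pci) {
        [PSCustomObject]@{
            VM     = $vm.Name
            Reason = $(if ($ft) { 'Fault Tolerance' } else { 'DirectPath I/O device' })
        }
    }
}
```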

Live Patch is not compatible with systems configured with TPM devices, or systems configured with DPUs using vSphere Distributed Services Engine.

Partial Maintenance Mode

Partial maintenance mode is an automatic state that each host will enter when performing a Live Patch remediation task.

This special state allows existing VMs to continue running, but it disallows creating new VMs on the host and migrating VMs to or from the host.

For more information on Live Patch, see this article.

https://blogs.vmware.com/cloud-foundation/2024/07/11/vmware-vsphere-live-patch/

Enhanced Image Customization

vSphere Lifecycle Manager images can be further customized in vSphere 8 Update 3.

In the base ESXi version, the VMware Host Client (ESXi UI) and ESXi VM Tools (VMware Tools) components can be deleted from the image.

When a Vendor Addon is present, certain components belonging to the vendor addon can also be omitted from the final image. This also includes the ability to retain an existing driver version rather than adopting a newer driver in a newer vendor addon bundle. 

Customers should validate with the vendor that retaining the existing driver is supported.

This allows final images to be reduced in size by removing some non-essential components. It is useful in remote and edge use cases because it reduces the overall image payload that has to be transmitted over the network to ESXi hosts.
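A cluster’s image content is defined by its software specification, which can be inspected through the vSphere Automation REST API. Below is a minimal sketch; the vCenter name and cluster ID (domain-c8) are placeholders, and since component removal is performed by editing a software draft, confirm the exact payload against the vSphere API reference:

```powershell
# Read the current cluster image: base image, vendor add-on, and components.
# vcenter.example.com and domain-c8 are placeholders.
$vc    = 'vcenter.example.com'
$cred  = Get-Credential
$basic = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes(
             "$($cred.UserName):$($cred.GetNetworkCredential().Password)"))

# Create an API session token.
$token = Invoke-RestMethod -Method Post -Uri "https://$vc/api/session" `
                           -Headers @{ Authorization = "Basic $basic" }

# Fetch the software specification applied to the cluster.
$spec = Invoke-RestMethod -Uri "https://$vc/api/esx/settings/clusters/domain-c8/software" `
                          -Headers @{ 'vmware-api-session-id' = $token }
$spec.base_image.version   # the ESXi base image in use
$spec.components           # components that can be trimmed or pinned
```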

Dual DPU Support

vSphere Lifecycle Manager in vSphere 8 Update 3 includes support for dual DPU configurations. Similar to single DPU configurations, vSphere Lifecycle Manager remediates the ESXi version on both DPUs and ensures everything is kept at the same version.

vCenter Reduced Downtime Update

vCenter Reduced Downtime Update supports all vCenter deployment topologies.

  • Self-managed: vCenter VM is managed by itself.
  • Non self-managed: vCenter VM is managed by a different vCenter.
  • Enhanced Linked Mode: Two or more vCenter instances participating in the same SSO domain.
  • vCenter HA: vCenter instances configured for vCenter High Availability.

Automatic switchover is available when performing updates to vCenter using reduced downtime update. The switchover phase begins immediately and incurs approximately 2-5 minutes of service downtime.

You can still initiate the switchover phase manually for control over exactly when the switchover, and its brief downtime, will occur.

For more information on vCenter Reduced Downtime Update, see these articles.

https://core.vmware.com/blog/vcenter-reduced-downtime-update-vsphere-8-u2

https://blogs.vmware.com/cloud-foundation/2024/07/11/vcenter-reduced-downtime-update-in-vmware-vsphere-8-update-3/

vSphere Hardware

Dual DPU Support with vSphere Distributed Services Engine

vSphere 8 Update 3 adds dual DPU support to vSphere Distributed Services Engine. Dual DPUs can be used in two configurations.

High Availability DPU Configuration

The first configuration utilizes two DPUs in an active/standby high availability pair. This configuration provides redundancy in the event one of the DPUs fails.

In the HA configuration, both DPUs are assigned to the same NSX backed vSphere Distributed Switch.

For example, DPU-1 is attached to vmnic0 and vmnic1 of the vSphere Distributed Switch and DPU-2 is attached to vmnic2 and vmnic3 of the same vSphere Distributed Switch.
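For reference, this uplink-to-switch wiring can also be inspected or scripted with standard PowerCLI networking cmdlets; the host and switch names below are placeholders:

```powershell
# List the physical adapters (vmnics) on a host, then attach the uplinks
# backed by DPU-1 to the NSX-backed distributed switch. All names are
# placeholders for illustration.
$vmhost = Get-VMHost -Name 'esx01.example.com'
Get-VMHostNetworkAdapter -VMHost $vmhost -Physical | Select-Object Name, Mac

$vds = Get-VDSwitch -Name 'NSX-VDS'
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic (
    Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0, vmnic1)
```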

Increase Network Offload Capacity

The second configuration utilizes two independent DPUs. Each DPU is attached to a separate vSphere Distributed Switch.

There is no failover between DPUs in this configuration. Essentially, it behaves like two single DPU configurations on one host: each DPU attaches to its own vSphere Distributed Switch, increasing offload capacity per ESXi host.

Intel® Xeon® CPU Max Series Support

Take advantage of the latest in hardware acceleration from Intel for high-performance computing workloads running on vSphere.

Accelerate AI/ML workloads and other high-performance computing (HPC) application demands with Intel Xeon CPU Max Series.

Intel Xeon CPU Max Series leverages high-bandwidth memory (HBM) embedded within the CPU itself.

The Intel Sapphire Rapids generation (including non-HBM SKUs) includes four discrete on-chip accelerators.

Currently, Intel has developed and provided two native vSphere drivers, specifically for QAT (QuickAssist Technology) and DLB (Dynamic Load Balancer).

vSphere with GPUs

GPU Profiles in vSphere 8

In earlier vSphere versions, all NVIDIA vGPU workloads on an ESXi host had to use the same vGPU profile type and GPU memory size. That is no longer the case.

Now you can assign workloads with different vGPU profile types to the same physical GPU, helping to better consume the GPU resources. Memory sizes of the profiles can also differ (new in vSphere 8 Update 3). 

The GPU Media Engine (ME) can also be assigned to a vGPU profile (new in vSphere 8 Update 3). 

In previous releases, the GPU Media Engine was only available when consuming the entire physical GPU. Now the Media Engine can be presented to smaller MIG (Multi-Instance GPU) profiles.

In current hardware, there is typically only one Media Engine per GPU, and only one vGPU profile on a given physical GPU can utilize it; the Media Engine cannot be shared between multiple vGPU profiles or vGPU VMs using the same physical GPU.
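To confirm which vGPU profiles are in use, for example when mixing profile types on one host, the profile name can be read from each VM’s PCI passthrough device backing. A minimal PowerCLI sketch, assuming a connected vCenter session:

```powershell
# Report the NVIDIA vGPU profile assigned to each powered-on VM.
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } | ForEach-Object {
    $vm = $_
    $vm.ExtensionData.Config.Hardware.Device | Where-Object {
        $_.Backing -is [VMware.Vim.VirtualPCIPassthroughVmiopBackingInfo]
    } | ForEach-Object {
        [PSCustomObject]@{ VM = $vm.Name; vGPUProfile = $_.Backing.Vgpu }
    }
}
```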

Cluster Level GPU Monitoring

See GPU compute and GPU memory consumption at-a-glance in the vSphere Client. 

The vSphere Client displays a new tile on the cluster summary tab showing an overview of GPU resources currently being used and the total physical GPU devices available to the cluster.

The cluster performance overview charts display a historical and real-time view of GPU compute and GPU memory utilization of the cluster.

vSphere DRS Settings for Passthrough Devices (vGPU)

Easily activate VM mobility for vGPU enabled virtual machines.

DRS behavior for vGPU enabled virtual machines can be easily controlled in the cluster DRS settings.

Enable automatic DRS migrations of vGPU enabled VMs and define the acceptable VM stun time limit for those migrations.

VM mobility of vGPU enabled VMs streamlines lifecycle management of GPU enabled clusters by allowing for automatic evacuation of vGPU VMs from hosts during remediation events.
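These settings live in the cluster’s DRS configuration in the vSphere Client. For scripted setups, the same kind of change can be applied as a cluster advanced option with PowerCLI; the option name below is illustrative, so confirm the exact key for your release against the vSphere documentation:

```powershell
# Illustrative sketch: set a DRS advanced option on a cluster with PowerCLI.
# 'PassthroughDrsAutomation' has been documented for enabling DRS automation
# of passthrough/vGPU VMs; verify the option name for your vSphere version.
$cluster = Get-Cluster -Name 'GPU-Cluster'      # placeholder cluster name
New-AdvancedSetting -Entity $cluster -Name 'PassthroughDrsAutomation' `
                    -Value 1 -Confirm:$false
```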

Availability & Resilience

Embedded vSphere Cluster Service

vSphere Cluster Service (vCLS) is rearchitected to use fewer resources, remove the storage footprint, and eliminate issues associated with vCLS deployment.

Embedded vCLS VMs have no storage footprint and run entirely in host memory. The ESXi host spins up the Embedded vCLS VM(s) directly. There is no OVF deployment pushed from vCenter and EAM (ESX Agent Manager) is no longer involved.

The number of vCLS VMs per cluster has also been reduced from up to three VMs to two VMs when using Embedded vCLS. A single-node cluster uses a single Embedded vCLS VM, and clusters of two or more hosts use two Embedded vCLS VMs.

You can easily identify the Cluster Service type from the summary tab of the vSphere cluster.

A cluster using the new Embedded vCLS reports as such; a vSphere 8 U2 cluster, for example, reports a Cluster Service type of “vCLS”.

For more information on Embedded vSphere Cluster Service, see these articles.

https://core.vmware.com/blog/embedded-vsphere-cluster-services-overview

https://core.vmware.com/resource/embedded-vsphere-cluster-services-deep-dive

vSphere Fault Tolerance Metro Cluster Support

Virtual machines configured with vSphere Fault Tolerance now support stretched/metro clusters.

Simply check the box for “Enable Metro Cluster Fault Tolerance” when activating Fault Tolerance on the VM and choose the appropriate Host Group.

The primary FT VM is placed in the selected host group’s site, and the secondary VM is automatically placed at the opposite site.

If the host running the primary FT VM fails, the secondary FT VM will take over as expected. Another host, within the same site as the failed FT VM, is selected to re-establish FT so that two-site placement persists.

If an entire site fails, the affected VMs keep running without FT protection until the failed site recovers.

Workloads

CPU C-State Virtualization

Energy efficiency is very important for telco and vRAN (virtualized radio access network) infrastructure. vSphere 8 Update 3 allows physical CPU C-states to be virtualized and managed from within workloads.

Workloads can request that physical cores enter power-saving modes, such as C-state 6, when applications and processes are idle.

The CPU can be reactivated for maximum performance when the workload requests it. This requires Cascade Lake or newer Intel CPUs and the intel_idle driver in the guest OS.
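From inside a Linux guest, you can confirm that the intel_idle driver is active and see which C-states are exposed. These are standard Linux sysfs paths; the commands are shown in PowerShell for consistency, but cat works just as well:

```powershell
# Inside a Linux guest: confirm the active cpuidle driver and exposed C-states.
Get-Content /sys/devices/system/cpu/cpuidle/current_driver     # expect: intel_idle
Get-Content /sys/devices/system/cpu/cpu0/cpuidle/state*/name   # e.g. POLL, C1, C6
```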

Customize Virtual Hardware when deploying from Content Library

OVF/OVA templates deployed from a content library can have their hardware customized during the deployment wizard instead of post-deployment.

This streamlines customization of appliance and template virtual hardware and ensures the desired hardware components are added and configured for the workload.
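The wizard now covers this interactively. For scripted deployments, a comparable flow in PowerCLI deploys from the library and adjusts hardware in one pass; all names below are placeholders:

```powershell
# Deploy a VM from a content library item, then adjust its virtual hardware.
$item = Get-ContentLibraryItem -Name 'web-appliance'        # placeholder item
$vm   = New-VM -Name 'web01' -ContentLibraryItem $item `
               -VMHost (Get-VMHost -Name 'esx01.example.com') `
               -Datastore (Get-Datastore -Name 'vsanDatastore')
Set-VM -VM $vm -NumCpu 4 -MemoryGB 16 -Confirm:$false
```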

Virtual Machine Disabled Operations

First- and third-party solutions disable certain vSphere operations, such as migration, during certain activities. For example, a VM backup solution might disable a VM’s ability to migrate using vMotion while the backup task is in progress, to prevent the task from failing.

The disabled method or operation should be reactivated once the task completes. However, under certain circumstances, the method might not be reactivated.

In vSphere 8 Update 3, administrators can easily reactivate operations from the vSphere Client.
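The vSphere API also exposes which methods are currently disabled on a VM, which is handy for spotting leftovers before reactivating them in the vSphere Client. A minimal PowerCLI sketch; note that some methods are legitimately disabled by VM state, so a lingering migration method on a powered-on VM is the signal to look for:

```powershell
# Inspect which API methods are currently disabled on a VM; a stuck
# 'MigrateVM_Task' or 'RelocateVM_Task' entry means vMotion is blocked.
$vm = Get-VM -Name 'db01'                      # placeholder VM name
$vm.ExtensionData.DisabledMethod | Sort-Object

# Flag powered-on VMs whose migration method is disabled.
Get-VM | Where-Object {
    $_.PowerState -eq 'PoweredOn' -and
    $_.ExtensionData.DisabledMethod -contains 'MigrateVM_Task'
} | Select-Object Name
```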

Security & Compliance

PingFederate Support in vSphere Identity Federation

vSphere now supports PingFederate as an external identity provider.

Throughout the lifespan of vSphere 7 and 8, we have been adding more ways to introduce modern authentication to vSphere. The latest addition is support for PingFederate, joining Entra ID, Okta, and ADFS to make vSphere very flexible in dealing with identity and access control.

TLS & Cipher Suite Profile Support

Quickly configure best-practice, modern TLS ciphers using a profile-based approach via the API, Configuration Profiles, or PowerCLI.

Now there’s an easy way to configure the ciphers that will pass your audit, by just enabling a TLS profile. 

There’s only one profile right now, called NIST_2024, and you can set it with an API call, through Configuration Profiles, or through a PowerCLI script, which, frankly, is the easiest. 

There are examples in the Security Configuration Guide for vSphere 8 on how to do this. And you will need to restart your ESXi host to have it take effect, so you can do it right before you patch!
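As a sketch of what the PowerCLI route can look like (the authoritative commands are in the Security Configuration Guide; the esxcli namespace used here is an assumption to verify there):

```powershell
# Illustrative only: apply the NIST_2024 TLS profile to a host via esxcli
# through PowerCLI. Verify the exact esxcli namespace/command against the
# Security Configuration Guide for vSphere 8 before use.
$vmhost = Get-VMHost -Name 'esx01.example.com'   # placeholder host name
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$esxcli.system.tls.server.set.Invoke(@{ profile = 'NIST_2024' })  # assumed namespace

# The host must be restarted for the profile to take effect.
Set-VMHost -VMHost $vmhost -State Maintenance
Restart-VMHost -VMHost $vmhost -Confirm:$false
```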

Security Configuration Guides & Baselines

The Security Configuration Guide is updated for vSphere 8 Update 3 and includes guidance for vSAN 8 Update 3.

Turn on data-in-transit protections, turn on data-at-rest encryption, and you’re done. In fact, that’s a huge difference between vSAN and other storage solutions: it’s so very easy to turn advanced security on. It’s a checkbox, not a multi-year project.

The SCG has a few new things in it, from the new TLS profiles, to some guidance about controlling VM boot options. 

It also has easy-to-use comparisons between the STIG guidance, PCI DSS 4.0 guidance, and the baseline, so you can see exactly where the baseline differs from those compliance frameworks.

And there are now scripts to audit and remediate the majority of what’s in the Guide, so customers setting up new environments can get things done quickly, and auditors can capture the output for their records, too.

vSphere IaaS Control Plane

For everything new in the vSphere IaaS control plane, see this article: What’s New in vSphere 8 Update 3 for vSphere IaaS control plane?

Core Storage

For everything new in vSphere core storage, see this article: What’s new with vSphere Core Storage?

Reminders

Plan for Upgrades

vSphere 8 was initially released in October 2022 and saw two significant updates over the course of its first year. Continuing that trend of feature-rich updates, vSphere 8 Update 3 dropped on June 25, 2024. It’s a good time to remind you that vSphere 7 is planned to enter end of general support (EOGS) in 2025, so it’s time to start planning upgrades and/or migrations to vSphere 8 if you have not already done so.

Check out the vSphere 8 upgrade activity path and Best Practices for Patching VMware vSphere articles for more.

Deprecations & Removals

Over time, features are deprecated in vSphere as technology changes and we adapt to customers’ needs. In addition to previously announced deprecations and removals in vSphere 8, Update 3 announces the deprecation of vSphere Trust Authority and of Storage DRS and Storage I/O Control with respect to I/O latency. See the vCenter and ESXi 8 Update 3 release notes for the complete list of deprecations.

Deprecated features remain supported in vSphere 8 but will not be supported in a future major version.