Memory Tiering reduces cost while increasing resource utilization. It was introduced as a tech preview in vSphere 8.0U3 and was very well received by customers (see vSphere Memory Tiering – Tech Preview in vSphere 8.0U3 – VMware Cloud Foundation (VCF)). Customer feedback focused on data resiliency, security, and flexibility in host and VM configurations and controls. With the launch of VCF 9.0, these concerns have been addressed. Memory Tiering is now a production-ready solution, including DRS and vMotion awareness, improved performance, an improved default 1:1 DRAM:NVMe ratio, and many more improvements that deliver a robust feature.
Extensive internal testing at Broadcom showed that Memory Tiering can provide up to 40% TCO savings for most workloads, while also unlocking 25% – 30% more CPU cores for workloads. Less cost and more resources – who doesn’t want that? Lastly, better VM consolidation ratios can also mean fewer servers, or more VMs per server.

Memory Tiering delivers these and many more benefits by using NVMe devices as a second tier of memory, increasing your memory footprint by up to 4x while leveraging existing server slots for inexpensive devices like NVMe. There are several key differences between the Tech Preview release and the production-ready release in VCF 9.0. Let’s take a look at those enhancements.

Mixed Cluster
Memory Tiering can be configured on all the hosts in a cluster, or only on a subset of them. There are many reasons to do this: you may want to test on one host with a handful of VMs, only a few hosts may have open slots for NVMe devices, or you may only be approved to procure a small number of drives. The good thing is, we support all of these scenarios and many more, to meet customers where they are. You can choose some hosts, or go all in.
Redundancy
Redundancy is always top of mind in architecture designs – I can’t say I’ve ever seen a design with only one NIC per server. For storage devices, redundancy is easily introduced with RAID, and that is exactly what we are delivering: Memory Tiering can consume two or more NVMe devices in a hardware RAID configuration to provide redundancy in case of device failure.
DRS Support
DRS has been around for quite some time, and I still think of it as magic. This is a feature most customers can’t live without. We worked really hard to build intelligence into the Memory Tiering algorithm, so it not only sees and understands the state of memory pages but also handles those pages appropriately across the cluster.
DRAM:NVMe – New Ratio
In vSphere 8.0U3 we introduced Memory Tiering as a Tech Preview so customers could test the feature. However, the default ratio at the time was 4:1, meaning 4 parts DRAM to 1 part NVMe. That translates into a memory increase of only 25%, and even though it sounds small, a price comparison of a 25% memory increase with DRAM vs. NVMe shows how big a deal this is.
In VCF 9.0, after all the performance improvements that were made, we are changing the default ratio. The default DRAM:NVMe ratio is now 1:1 – yes, that is a 2x increase in memory by default, and the ratio is customizable based on your workloads and needs. This means that if you have ESX hosts with 1TB of DRAM and you leverage Memory Tiering, you can end up with hosts with 2TB of memory. Because the setting is customizable, and some workloads such as VDI can take great advantage of this feature, you can configure ratios of up to 1:4 and quadruple your memory footprint for a very low cost.
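To make the ratio arithmetic concrete, here is a minimal sketch (plain Python for illustration – not a VMware API) that computes the effective memory of a host from its DRAM size and a DRAM:NVMe ratio:

```python
def effective_memory_gb(dram_gb: float, dram_parts: int, nvme_parts: int) -> float:
    """Total tiered memory = DRAM + NVMe, where the NVMe tier is sized
    from the DRAM:NVMe ratio (dram_parts : nvme_parts)."""
    nvme_gb = dram_gb * nvme_parts / dram_parts
    return dram_gb + nvme_gb

# Tech Preview default (4:1): 1 TB DRAM + 256 GB NVMe -> a 25% increase
print(effective_memory_gb(1024, 4, 1))  # 1280.0
# VCF 9.0 default (1:1): 1 TB DRAM + 1 TB NVMe -> double the memory
print(effective_memory_gb(1024, 1, 1))  # 2048.0
```

The same function shows why the default change matters: moving from 4:1 to 1:1 takes the same 1TB host from 1.25x to 2x of its DRAM capacity.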
Other Improvements
There are many other improvements in Memory Tiering with VCF 9.0. Overall performance improvements across the board make this solution robust, flexible, redundant, and secure. On the security front, we also introduced encryption for Memory Tiering at both the VM level and the host level: VM memory pages can be encrypted per VM, or for all VMs on a host, with a simple, easy-to-configure approach.
Assessing Eligibility
How do I get started? How do I know if my workloads are good candidates for Memory Tiering?
Customers should consider the following factors when deciding to deploy Memory Tiering.
Active Memory
Memory Tiering is ideal for environments with high consumed memory (memory allocated to all VMs, >50%) but low active memory (memory actively used by workloads at any point, <50% of total DRAM).
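As a quick back-of-the-envelope check, that rule of thumb can be sketched as follows (plain Python; the 50% thresholds are the guideline values above, not an official sizing tool):

```python
def is_tiering_candidate(consumed_pct: float, active_pct: float) -> bool:
    """Rule of thumb: good candidates have high consumed memory (>50%)
    but a low active working set (<50% of total DRAM)."""
    return consumed_pct > 50 and active_pct < 50

print(is_tiering_candidate(consumed_pct=85, active_pct=30))  # True:  lots of cold pages to tier
print(is_tiering_candidate(consumed_pct=90, active_pct=70))  # False: working set is too hot
print(is_tiering_candidate(consumed_pct=40, active_pct=20))  # False: DRAM is not constrained
```

Workloads that fail the check are not excluded from Memory Tiering; they simply stand to gain less, since hot pages stay in DRAM.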
The screenshot below shows how Active Memory and DRAM Capacity can be monitored using vCenter:

NVMe device
There are performance and endurance guidelines for supported drives, with more than 1,500 options listed in the Broadcom (VMware) Compatibility Guide. NVMe drives in form factors like E3.S are pluggable and can often be added using available slots on servers such as the Dell PowerEdge below (Dell PowerEdge R760 Rack Server | Dell United States). We highly recommend that customers consult the Broadcom Compatibility Guide and select recommended devices to ensure workload performance.

Memory Tiering reduces cost while increasing resource utilization, and its future is bright. Many more enhancements are already in the works for a better experience and even more benefits. In the coming months, more information about Memory Tiering will be available to you.
***
Ready to get hands-on with VMware Cloud Foundation 9.0? Dive into the newest features in a live environment with Hands-on Labs that cover platform fundamentals, automation workflows, operational best practices, and the latest vSphere functionality for VCF 9.0.