
NVMe Memory Tiering Design and Sizing on VMware Cloud Foundation 9 Part 5: Deployment Scenarios (Greenfield, Brownfield, Nested Lab)

In this part of the blog series, I want to cover what changes when you enable Memory Tiering in different scenarios. Although the core process remains the same, each scenario has details that require extra attention and planning, and knowing them up front can save you time and effort. When we talk about greenfield scenarios, we refer to brand new VMware Cloud Foundation (VCF) deployments, including new hardware and a new configuration for the whole stack. Brownfield scenarios cover configuring Memory Tiering on an existing VCF environment. Lastly, I want to include lab scenarios, since I have seen mixed statements about whether they are supported; I will cover that at the end of this blog post.

Greenfield Deployments

Let’s start with the configuration process for greenfield environments. In Part 4, I covered how VMware vSAN and Memory Tiering are compatible and can co-exist in the same cluster. I also highlighted something important to be aware of during greenfield deployments of VCF. As of VCF 9.0, enabling Memory Tiering is a “Day 2” operation: you first deploy VCF, and only then can you configure Memory Tiering. During the VCF deployment workflow you will notice there is no option to enable Memory Tiering (yet), although you can enable vSAN. How you handle the NVMe device dedicated to Memory Tiering will dictate the steps necessary to get that device presented for its configuration.

If all the NVMe devices for both vSAN and Memory Tiering are present during the VCF deployment, chances are vSAN will auto-claim all of the drives (including the NVMe device you have allocated for Memory Tiering). In that case, you would have to remove the drive from vSAN post-configuration, erase its partitions, and then start your Memory Tiering configuration. This step was covered in Part 4.
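If you want to confirm from PowerCLI that the drive you just released really is clean before you start the Memory Tiering configuration, a minimal sketch along these lines can help. The host name and device identifier below are placeholders, the removal from vSAN itself is the procedure from Part 4, and any partitions still listed here need to be erased (for example, through the vSphere Client) first.

```powershell
# Minimal sketch: verify the NVMe device released from vSAN has no leftover partitions.
# Placeholders: the host name and device identifier are examples, not real values.
$vmHost = Get-VMHost -Name "esx01.lab.local"
$esxcli = Get-EsxCli -VMHost $vmHost -V2

# List any partitions still present on the device. An empty result means the
# drive is clean and ready for the Memory Tiering configuration; otherwise,
# erase the partitions (vSphere Client) before proceeding.
$esxcli.storage.core.device.partition.list.Invoke(
    @{ device = "t10.NVMe____EXAMPLE_DEVICE_IDENTIFIER" })
```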

The other approach is to remove the Memory Tiering device from the server (or simply not install it yet) and add it back after VCF has been deployed. This way, you won’t risk vSAN auto-claiming the NVMe device intended for Memory Tiering. Although this is not a major hurdle, it is still good to know what will happen and why, so you can quickly allocate the resources needed for the Memory Tiering configuration.

Brownfield Deployments

Brownfield scenarios are a bit easier as VVF/VCF is already configured; however, vSAN may or may not have been enabled yet.

If vSAN is not enabled yet, you will want to disable the auto-claim feature, go through the vSAN configuration, and manually select your devices (excluding the NVMe device for Memory Tiering). Everything is done in the UI, using a procedure that has been around for years, and it guarantees that the Memory Tiering NVMe device stays available for configuration. The detailed process is documented in TechDocs.

If vSAN is already enabled, I’m going to assume the NVMe device for Memory Tiering was just purchased and is ready to install. So all we need to do is add it to the host, make sure it shows up properly as an NVMe device, and confirm it has no existing partitions. This is probably the easiest scenario, and the most common one.
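If you prefer to double-check from the command line rather than the vSphere Client, a quick sketch like the following (the host name is a placeholder) lists the local devices so you can confirm the new drive is detected; the same partition check shown in the greenfield section applies here as well.

```powershell
# Minimal sketch: confirm the freshly installed NVMe drive is visible to the host.
# Placeholder: the host name is an example, not a real value.
$vmHost = Get-VMHost -Name "esx01.lab.local"
$esxcli = Get-EsxCli -VMHost $vmHost -V2

# List local storage devices and pick out the NVMe drives by their device name.
$esxcli.storage.core.device.list.Invoke() |
    Where-Object { $_.Device -like "*NVMe*" } |
    Select-Object Device, Model, Size
```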

Lab Deployments

Now let’s talk about the long-awaited lab scenario. For a bare-metal type of lab, where ESX runs as a single layer and there are no nested environments, the same greenfield and brownfield principles apply. Speaking of nested environments, I’ve seen blogs and comments stating that nested Memory Tiering is not supported. Well, it is and it is not.

When we talk about nested environments, we are referring to two layers of ESX. The outer layer is ESX installed on the hardware (a normal setup), and the inner layer is composed of virtual machines running ESX and acting as physical hosts. Memory Tiering CAN be enabled in the inner (nested) layer, and all configuration parameters work fine. What we are doing is taking a datastore and creating a virtual hard disk of type NVMe to present to the VM that acts as a nested host. Although we do see an NVMe device on the nested host and can configure Memory Tiering, the backing storage is really whatever devices make up the selected datastore. You can configure Memory Tiering, and the nested hosts are able to see hot and active pages, but don’t expect any real performance given the backing storage components. Does it work? YES, but only in lab environments.
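For reference, here is a minimal PowerCLI sketch of how that virtual NVMe disk can be presented to the nested host. PowerCLI has no native cmdlet for NVMe controllers, so this drops down to the vSphere API; the VM name and capacity are placeholders, and this is only one way of doing it (the same result can be achieved in the vSphere Client by adding an NVMe controller and a new hard disk to the nested ESX VM).

```powershell
# Minimal sketch: add a virtual NVMe controller plus a disk to a nested ESX VM.
# Placeholders: the VM name and capacity are examples; adjust for your environment.
$vm   = Get-VM -Name "nested-esx-01"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec

# Device change 1: the virtual NVMe controller (temporary negative key).
$ctrlSpec           = New-Object VMware.Vim.VirtualDeviceConfigSpec
$ctrlSpec.Operation = [VMware.Vim.VirtualDeviceConfigSpecOperation]::add
$ctrl               = New-Object VMware.Vim.VirtualNVMEController
$ctrl.Key           = -101
$ctrl.BusNumber     = 0
$ctrlSpec.Device    = $ctrl

# Device change 2: a thin-provisioned disk attached to that controller. The
# VMDK is created on the VM's datastore, which becomes the backing storage.
$diskSpec               = New-Object VMware.Vim.VirtualDeviceConfigSpec
$diskSpec.Operation     = [VMware.Vim.VirtualDeviceConfigSpecOperation]::add
$diskSpec.FileOperation = [VMware.Vim.VirtualDeviceConfigSpecFileOperation]::create
$disk                   = New-Object VMware.Vim.VirtualDisk
$disk.Key               = -102
$disk.ControllerKey     = -101
$disk.UnitNumber        = 0
$disk.CapacityInKB      = 128GB / 1KB
$backing                 = New-Object VMware.Vim.VirtualDiskFlatVer2BackingInfo
$backing.DiskMode        = "persistent"
$backing.ThinProvisioned = $true
$backing.FileName        = ""          # empty name = create the VMDK with the VM
$disk.Backing            = $backing
$diskSpec.Device         = $disk

$spec.DeviceChange = @($ctrlSpec, $diskSpec)
$vm.ExtensionData.ReconfigVM($spec)    # the nested host then sees an NVMe device
```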

Testing in a lab environment is very useful for walking through the configuration steps and understanding how the configuration works and which advanced parameters can be set. This is a great way to prepare (practice) for deploying in production, or even just to get familiar with the feature for certification exam purposes.

What about the outer layer? Well, that is the one that is not supported in VCF 9.0, as the outer ESX layer has no visibility into the inner layer and cannot see the memory activeness of the VMs running there; in essence, it would be trying to see through the nested layer to the VM (inception). That is the main difference (without going too deep into the technical details).

So if you are curious about testing Memory Tiering, and all you have is a nested environment, you can configure Memory Tiering and any of the advanced parameters. It is neat to see how a few configuration steps can add 100% more memory to the hosts; it reminds me of the first time I tested vMotion and was in awe of this magical feature.
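As a small illustration of where that “100% more memory” comes from, the DRAM-to-NVMe ratio is controlled by a host advanced setting. The sketch below assumes the setting is still called Mem.TierNvmePct, which was the name used before VCF 9.0; verify the current name in TechDocs before changing anything, and note the host name is a placeholder.

```powershell
# Minimal sketch: size the NVMe tier at 100% of DRAM (i.e. double the usable memory).
# Assumption: the advanced setting is still named Mem.TierNvmePct -- verify in TechDocs.
$vmHost = Get-VMHost -Name "nested-esx-01.lab.local"   # placeholder host name

Get-AdvancedSetting -Entity $vmHost -Name "Mem.TierNvmePct" |
    Set-AdvancedSetting -Value 100 -Confirm:$false
```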

In earlier blogs, I’ve mentioned that you can configure the NVMe partitions with ESXCLI commands, PowerCLI, and even scripts, and I promised a script to configure the partitions. So I’m delivering on that promise, with the caveat and direct warning that you run the script at your own risk, and it may not work in your environment depending on your configuration. I am not a scripting guru, just dangerous enough to avoid manual work. Consider this script nothing more than an example of how the process can be automated, not a supported automation solution. Also, the script does not erase partitions for you (I’ve tried), so make sure you do that before running it. As always, test first.

There are some variables that you will need to change to make it work with your environment:

$vCenter = “your vCenter FQDN or IP” (line 27)

$clusterName = “your cluster name” (line 28)
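To give you an idea of the overall shape (this is not the script referenced above, just a rough illustration), a script like that typically connects to vCenter, loops over the hosts in the cluster, and runs the Memory Tiering ESXCLI commands on each one. The esxcli namespaces and parameter names below follow the pre-GA syntax and are assumptions on my part; the device path is a placeholder, and the partitions must already have been erased.

```powershell
# Rough illustration only -- not the script from this post. Assumptions: the
# pre-GA esxcli syntax for Memory Tiering (system settings kernel / tierdevice)
# still applies, and the NVMe device path is a placeholder you must replace.
Import-Module VMware.PowerCLI

$vCenter     = "your vCenter FQDN or IP"
$clusterName = "your cluster name"

Connect-VIServer -Server $vCenter

foreach ($vmHost in (Get-Cluster -Name $clusterName | Get-VMHost)) {
    $esxcli = Get-EsxCli -VMHost $vmHost -V2

    # Enable the Memory Tiering kernel setting (takes effect after a host reboot).
    $esxcli.system.settings.kernel.set.Invoke(@{ setting = "MemoryTiering"; value = "TRUE" })

    # Create the tier partition on the NVMe device reserved for Memory Tiering.
    $devicePath = "/vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE_IDENTIFIER"
    $esxcli.system.tierdevice.create.Invoke(@{ device = $devicePath })
}

Disconnect-VIServer -Server $vCenter -Confirm:$false
```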

Blog series:

PART 1: Prerequisites and Hardware Compatibility

PART 2: Designing for Security, Redundancy, and Scalability

PART 3: Sizing for Success

PART 4: vSAN Compatibility and Storage Considerations

Additional information on Memory Tiering

Deployment Considerations

Technical Overview

