
Sizing for VMware Cloud on AWS

Our sizer and TCO tool helps you estimate the number of hosts required to run VMs in VMware Cloud on AWS – but sometimes a ballpark figure just won’t do. Sizing is a bit of an art, and in this blog post we take you through 7 steps to get the most accurate data possible, so you can make an informed cloud migration decision.

 

The VMware Cloud on AWS sizer and TCO tool helps customers estimate the number of hosts required to run their VMs in a VMware Cloud on AWS environment. The typical user wants to migrate workloads from on-premises to the cloud and needs to estimate how many hosts will be required. To get acquainted with the basics of the sizer and TCO tool, please see: How to get a VMware Cloud On AWS Sizing and TCO Model.

If you are simply looking to get a ballpark VMware Cloud on AWS sizing estimate, it’s usually sufficient to enter your on-premises numbers and accept all of the sizer defaults for the numbers you don’t have. This is often all that is needed in the beginning stages of a cloud migration project. But if you want more accuracy, you will need to do more work.

To get the most precise sizing estimate, it is important to first figure out if your workloads are storage, memory or CPU bound. This will be the driving factor for the sizing calculation. In subsequent blogs, we will dive into the details of each of these possibilities. This article describes the general model to use for sizing, irrespective of what is driving the sizer calculation.

 

The Sizing Process

Sizing is often a bit of an art. It is important to take into account as much information as possible, make reasonable assumptions and then distill all of it into a model that leads to reasonable conclusions. It is virtually impossible to be exact when sizing, so the best we can do is produce an estimated range with a fair amount of certainty. The following is a series of steps you can take to accurately size your VMware Cloud on AWS deployment.

It goes without saying that the first thing we need to do is determine which workloads we plan to move to the cloud. There are a variety of ways to collect information about workloads, from vRealize Operations to other tools like Live Optics and RVTools. Whichever way you choose to get the data, the more detail you can provide, the more accurate the sizer’s results will be. Below is a screenshot that shows the input values required. If you don’t have a particular value, you can use one of the VMC Sizer defaults.

 

Step 1: Collect on-premises data
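
If you are working from a raw inventory export rather than vRealize Operations, a small script can roll the per-VM data up into the totals the sizer asks for. The sketch below assumes an RVTools-style CSV with VM, CPUs, Memory (MiB) and Provisioned MiB columns; the file name, column names and units are placeholders to adapt to whatever your tool actually produces.

```python
# Minimal sketch: roll up an RVTools-style per-VM export into sizer inputs.
# Assumed columns: VM, CPUs, Memory (MiB), Provisioned MiB -- adjust to your export.
import csv

def summarize(path):
    vms, vcpus, mem_gib, storage_gib = 0, 0, 0.0, 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            vms += 1
            vcpus += int(row["CPUs"])
            mem_gib += float(row["Memory"]) / 1024            # MiB -> GiB
            storage_gib += float(row["Provisioned MiB"]) / 1024
    return {"VMs": vms, "vCPUs": vcpus,
            "Memory GiB": round(mem_gib), "Storage GiB": round(storage_gib)}

if __name__ == "__main__":
    print(summarize("rvtools_vinfo.csv"))   # hypothetical export file name
```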

 

Step 2: Categorize workloads

Different types of workloads have different characteristics. For example, your databases will likely behave differently than your application workloads. Averaging across workloads with different properties increases variance, which can lead to a less precise sizing estimate. To tighten up the sizing, it’s generally a good idea to categorize workloads based on their characteristics. In this case, we would split databases and applications into separate profiles.

You can enter each workload category as a separate Profile in the VMC Sizer. As you can see below, there is an Applications profile that is different from the Databases profile. You can enter different workload CPU, storage and memory values for each profile.
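
To make the same idea concrete in script form, here is a minimal sketch that splits an inventory into Databases and Applications profiles and totals each group separately, just as you would enter them into the sizer. The name-prefix rule used to classify VMs is purely a placeholder for whatever tagging or naming convention you actually use.

```python
from collections import defaultdict

# Minimal sketch: split VMs into "Databases" and "Applications" profiles and
# total each group separately, mirroring separate profiles in the sizer.
# The name-prefix rule is a placeholder for your real tagging convention.
vms = [
    {"name": "db-orders-01", "vcpus": 8, "mem_gib": 64, "storage_gib": 900},
    {"name": "app-web-01",   "vcpus": 4, "mem_gib": 16, "storage_gib": 120},
    {"name": "app-web-02",   "vcpus": 4, "mem_gib": 16, "storage_gib": 120},
]

profiles = defaultdict(lambda: {"vms": 0, "vcpus": 0, "mem_gib": 0, "storage_gib": 0})
for vm in vms:
    profile = "Databases" if vm["name"].startswith("db") else "Applications"
    totals = profiles[profile]
    totals["vms"] += 1
    totals["vcpus"] += vm["vcpus"]
    totals["mem_gib"] += vm["mem_gib"]
    totals["storage_gib"] += vm["storage_gib"]

for name, totals in profiles.items():
    print(name, totals)
```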

Step 3: Size using on-premises data

Now that you have your on-premises data, you can get a good sizing estimate. Unless there is a good reason to manually set the RAID level, just choose auto-auto for Host failures to tolerate and Fault Tolerance Method. This will let the sizer choose the optimal settings that meet SLA requirements. Then, simply enter the workload values for each profile.

On the recommendation page, you will see an estimated number of hosts required to run your workloads along with a lot of other information about the CPU, storage and memory usage in VMware Cloud on AWS. We could stop here and just use this value, but there are a few more things we can do to increase confidence in our sizing.
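
If you want a back-of-the-envelope feel for what drives that host count, the sketch below picks whichever resource (CPU, memory or storage) needs the most hosts. The per-host capacity figures are deliberately left as placeholders; usable capacity depends on your host type, slack settings and storage policy, and the sizer’s recommendation page remains the authoritative calculation.

```python
import math

def rough_host_count(workload, host):
    """Back-of-the-envelope estimate: whichever resource (CPU, memory or
    storage) needs the most hosts drives the result. Placeholder inputs only;
    the sizer remains the authoritative calculation."""
    return math.ceil(max(
        workload["vcpus"] / host["usable_vcpus"],
        workload["mem_gib"] / host["usable_mem_gib"],
        workload["storage_gib"] / host["usable_storage_gib"],
    ))

# Example workload totals and illustrative per-host capacities (not real host specs):
workload = {"vcpus": 800, "mem_gib": 4000, "storage_gib": 90000}
host = {"usable_vcpus": 200, "usable_mem_gib": 480, "usable_storage_gib": 9000}
print(rough_host_count(workload, host))   # -> 10 hosts; storage bound in this example
```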

 

Step 4: Consider Architecture and Scale

In the previous step, the sizer assumed that you want to maximize the number of VMs per SDDC. But this may not be what you want to do. Instead, you may want to split your workloads across multiple SDDCs based on application type. Or, if you are close to the configuration maximums for an SDDC, you may want to give yourself some headroom and split your workloads across two SDDCs.

The reason this is important is that there is management overhead to run vCenter, NSX, vSAN, etc. in each SDDC. You can find the overhead cost on the Recommendation page of the sizer tool:

  • A total of 32 vCPUs, 104 GB of memory and 3898 GB of storage is considered to be the overhead consumed by Management Virtual Machines

Obviously, if you have two SDDCs, the management overhead should be factored in twice. Also, if you don’t max out an SDDC, you may not pack in quite as many VMs per host. Currently, the way to manually split VMs into multiple SDDCs in the VMC Sizer is to run it twice, once for each SDDC. 

Keep in mind that this step is optional. The sizer will automatically add Management VM overhead for you for as many SDDCs as you need. But it will assume that you want to pack in as many VMs as possible into each SDDC, which may not be what you want.
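
To see why the split matters, the sketch below subtracts the management overhead once per SDDC before reporting the capacity left for workloads. Only the overhead figures (32 vCPUs, 104 GB of memory, 3898 GB of storage) come from the sizer output above; the per-host capacities are illustrative placeholders.

```python
# Illustrative only: management overhead is paid once per SDDC, so splitting the
# same hosts across two SDDCs leaves less capacity for workloads.
MGMT_OVERHEAD = {"vcpus": 32, "mem_gib": 104, "storage_gib": 3898}   # per SDDC (from sizer output)

def workload_capacity(hosts_per_sddc, sddcs, host):
    """Raw capacity of all hosts minus management overhead per SDDC."""
    return {res: hosts_per_sddc * sddcs * host[res] - sddcs * MGMT_OVERHEAD[res]
            for res in MGMT_OVERHEAD}

host = {"vcpus": 200, "mem_gib": 480, "storage_gib": 9000}   # placeholder per-host figures

print(workload_capacity(10, 1, host))   # ten hosts, one SDDC: overhead counted once
print(workload_capacity(5, 2, host))    # same ten hosts, two SDDCs: overhead counted twice
```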

Step 5: Determine the Upper Bound

Our next step is to get an upper bound estimate for our sizing. We do this to get a sense of how far off our estimate may be. First, figure out if you are CPU, storage or memory bound. This is easy to do, because the sizer tool tells you on the Recommendation output page:

Next, set the limiting factor (in this case storage) to the most conservative value that still makes sense for you. For example, you might set Dedup=1 to assume no benefit from dedup and compression.
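
As a purely illustrative example of how much the conservative setting can move the result for a storage-bound workload, the sketch below compares an assumed 1.5x dedup and compression ratio with Dedup=1. The 90 TB demand, the 9 TB of usable capacity per host and the 1.5x ratio are all assumptions made up for the example, not sizer defaults.

```python
import math

provisioned_tb = 90        # assumed storage demand for the example
usable_per_host_tb = 9     # assumed usable vSAN capacity per host

for dedup_ratio in (1.5, 1.0):   # expected savings vs. no savings at all (Dedup=1)
    hosts = math.ceil(provisioned_tb / dedup_ratio / usable_per_host_tb)
    print(f"dedup/compression ratio {dedup_ratio}: {hosts} hosts")
# -> 7 hosts with 1.5x savings, 10 hosts with Dedup=1 (the upper bound)
```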

Now you will have a range for your sizing estimate:

  1. A reasonable estimate based on your on-premises data.
  2. An upper bound sizing estimate based on the limiting factor that drives the sizer calculation.

In the next step, you can put your assumptions to the test by actually running some sample workloads in the cloud.

Step 6: Test some sample workloads

You already have on-premises data about your workloads, so you already have solid information for what you should expect in VMware Cloud on AWS. To get more confidence that your workloads behave similarly in the cloud, you can take a small sample of applications, migrate them to a small VMC cluster, as small as a single host, and see what happens. Are you seeing a similar utilization pattern in the cloud as in your on-premises environment? How many VMs can you fit in a VMware Cloud on AWS host before you start experiencing problems?

Getting real data from the actual VMC target environment will give you much more confidence about the assumptions you are using to size your deployment.
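
One simple way to make that comparison concrete is to export average utilization for the pilot VMs from both environments and line the numbers up per VM. The sketch below assumes two hypothetical CSV exports with vm and cpu_avg_pct columns; the file names and columns stand in for whatever your monitoring tool produces.

```python
import csv

def load_cpu_avgs(path):
    """Read a hypothetical per-VM export with columns: vm, cpu_avg_pct."""
    with open(path, newline="") as f:
        return {row["vm"]: float(row["cpu_avg_pct"]) for row in csv.DictReader(f)}

onprem = load_cpu_avgs("onprem_cpu.csv")      # placeholder file names
cloud = load_cpu_avgs("vmc_pilot_cpu.csv")

for vm in sorted(onprem.keys() & cloud.keys()):
    delta = cloud[vm] - onprem[vm]
    print(f"{vm}: on-prem {onprem[vm]:.0f}%, VMC {cloud[vm]:.0f}%, delta {delta:+.0f}%")
```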

Step 7: Consider tradeoffs and make your decision

You now have a sizing estimate based on your on-premises data, an estimate from your upper bound calculation and an estimate based on data from running a small experiment in VMware Cloud on AWS. If your experiment showed similar behavior as on-premises, you can be fairly confident with your on-premises assumptions. On the other hand, your experiments may lead you closer to your upper bound values. In either case, you are now armed with a lot more data to have confidence in your sizing estimate.

 

Final Thoughts: Consider tradeoffs

Sizing is an estimation based on assumptions. Assumptions are rarely perfect; thus, you cannot expect that the sizing estimate will be perfect. Consider instead:

  • Do you prefer to overestimate or underestimate?

If you overestimate, you will artificially inflate the cost of the project. If you underestimate, you may run into budget issues in the future.

Consider also that, unlike your on-premises environment, hosts in VMware Cloud on AWS can be added in a matter of minutes. That is the power of elasticity in the cloud. If you underestimate the number of hosts you need, or if you need more hosts for any reason in the future, you can quickly add hosts. And if you find that you don’t need as many hosts as you estimated, you can remove hosts just as easily.

There are different strategies for purchasing host subscriptions and only you know which strategy works best for your case. Given your on-premises workload data, the upper bound calculation for hosts and the actual data from running sample workloads in VMware Cloud on AWS, you should have ample information to make a good decision.