
Troubleshooting Performance Comparisons (with cheat sheet)

When a workload is moved from physical to virtual (or from another virtual platform/system) and someone reports a performance issue, this is how I approach diagnosing it and identifying why performance differed.

Let me start by saying that with current infrastructure and the latest versions of vSphere, there should be little or no performance difference between physical and virtual.  Very rarely is a true issue or an incompatible application identified.  The most common reason people see a problem in a comparison is one of the following:

  • A poorly conceived performance test
  • A misconfiguration within the hardware/software stack
  • Hardware differences between the tests

While this might seem obvious, it happens far too frequently in my experience.  As a result, I approach these situations by doing two things.  First, I explain and encourage the adoption of my golden rules for comparisons.  Second, I use the cheat sheet below to document details of the environment, which usually surfaces some sort of difference.

Here are my golden rules for performance comparisons:

#1 Ensure the Comparison is Apples to Apples

The most important rule when performing a comparison is to ensure the two systems, or environments, are identical!  Any difference between them can cause a wide variation in performance results and therefore invalidate the comparison.  This is where most clients get tripped up.  The most accurate methodology is to use one physical server and run the performance test twice: the first time on bare metal and the second time with only the addition of vSphere.  This helps ensure everything is identical – processors, BIOS configuration, storage and network integration, etc.  This is the type of methodology the VMware Performance Engineering team uses.
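To make the "identical environments" check concrete, here is a minimal sketch of recording the key facts about each test environment and diffing them before trusting any results. The field names and values are hypothetical examples for illustration, not a fixed schema:

```python
# Sketch: diff two environment descriptions to confirm an apples-to-apples test.
# Field names and values below are invented examples, not a required schema.

def diff_configs(baseline: dict, candidate: dict) -> dict:
    """Return every key whose value differs between the two test environments."""
    keys = set(baseline) | set(candidate)
    return {
        k: (baseline.get(k, "<missing>"), candidate.get(k, "<missing>"))
        for k in keys
        if baseline.get(k) != candidate.get(k)
    }

physical = {
    "cpu_model": "Intel Xeon Gold 6248",
    "bios_power_profile": "max_performance",
    "nic_speed_gbps": 25,
    "storage_lun": "LUN-07",
}
virtual = {
    "cpu_model": "Intel Xeon Gold 6248",
    "bios_power_profile": "balanced",  # a silent difference that invalidates the test
    "nic_speed_gbps": 25,
    "storage_lun": "LUN-12",
}

for key, (before, after) in sorted(diff_configs(physical, virtual).items()):
    print(f"{key}: {before} != {after}")
```

Any non-empty diff means the comparison is not yet apples to apples; either eliminate the difference or expect it to show up in the results.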

I understand that sometimes differences between environments are necessary.  For example, when clients are moving between CPU vendors (e.g., AMD to Intel).  But one needs to realize that performance will be different in that case, which makes a comparison much more difficult.  That’s why it’s important we adhere to rule #2.

#2 Define Valid Performance Comparisons

For any comparison, we want to define success criteria that reflect what’s really important to the application experience.  During performance testing, many people get bogged down in measuring things like CPU utilization or storage throughput.  While those may be important as secondary measures, I suggest starting with more appropriate application KPIs such as transactions per second, user latency, or batch processing times.  Remember, we are trying to compare application performance, so we need to compare application statistics and not necessarily resource consumption.
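As a sketch of what "compare application statistics" can look like in practice, here is one way to turn per-transaction latency samples into the KPIs mentioned above. The sample data and the ten-second window are invented for illustration:

```python
# Sketch: compute application-level KPIs from per-transaction latency samples
# (times in seconds).  The sample data below is invented for illustration.
from statistics import mean

def kpis(latencies_s, window_s):
    """Throughput and latency KPIs over one measurement window."""
    ordered = sorted(latencies_s)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # simple nearest-rank p95
    return {
        "transactions_per_sec": len(latencies_s) / window_s,
        "mean_latency_s": mean(latencies_s),
        "p95_latency_s": p95,
    }

# Compare the same KPI set for both tests rather than raw CPU counters.
bare_metal = kpis([0.10, 0.12, 0.11, 0.30, 0.13] * 20, window_s=10)
virtualized = kpis([0.11, 0.12, 0.12, 0.31, 0.13] * 20, window_s=10)
print(bare_metal)
print(virtualized)
```

The point is that both tests are judged on the same user-facing numbers; CPU and storage counters only come in later to explain a KPI gap.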

#3 Don’t Overcommit Resources (initially)

To ensure the cleanest, noise-free performance data to compare, we want to remove the potential effects of technologies that support resource overcommitment.  While most of these, like transparent page sharing, have a negligible impact, a good practice is to simplify the test and ensure resources are not overcommitted.  The easiest way to achieve this is to use a dedicated host, reducing potential noisy-neighbour effects, or to set full reservations for the virtual machine being examined.

#4 Pay Special Attention to Storage

Storage is the bottleneck I most often see affecting my clients’ comparisons (though it doesn’t need to be).  Spend extra time and effort here to understand the storage configuration and ensure it is identical between the two tests.  Even using a different LUN for a test can impact the results, because it might be configured differently, share spindles differently, use a different tier of storage, or take a different path to the host.  This is often the dimension where we need to dig deep, and the payoff is a very performant application.
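One quick way to catch a storage-side difference between two tests is to compare I/O latency samples from each run before debating application numbers. A minimal sketch, with an illustrative tolerance and invented sample data:

```python
# Sketch: flag a storage-side difference between two tests by comparing
# per-I/O latency samples in milliseconds.  Data and threshold are illustrative.

def storage_differs(baseline_ms, candidate_ms, tolerance=1.25):
    """True if candidate mean I/O latency exceeds baseline by more than tolerance x."""
    base = sum(baseline_ms) / len(baseline_ms)
    cand = sum(candidate_ms) / len(candidate_ms)
    return cand > base * tolerance

# The same LUN should look alike; a LUN on a slower tier shows up immediately.
same_lun = storage_differs([1.2, 1.4, 1.3], [1.3, 1.2, 1.4])
other_tier = storage_differs([1.2, 1.4, 1.3], [4.8, 5.2, 5.0])
print(same_lun, other_tier)
```

If this flags a difference, fix the storage configuration first; comparing application KPIs on mismatched storage only measures the storage gap.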

More details on troubleshooting storage: here, here, and here

As outlined above, here is the cheat sheet I use to collect information about two environments and their subsequent performance tests.  While this list of questions is not comprehensive, and isn’t meant to be, it’s a simple method that identifies the key differences 95%+ of the time.

Cheat sheet: PerfCheck_CheatSheet_v1

Once you identify and understand the differences between systems or environments, you can either explain the performance gap or modify the environment and reproduce results that are a true comparison.  This true comparison usually shows that the application is running the same as, or better than, on its previous platform, and that there was never a performance issue, just a gap in configuration or understanding.  Don’t you love when things resolve themselves?

VMware is also here to help, so if you’re still experiencing a performance issue, please reach out to our support teams.

I’m always interested in your feedback on how you troubleshoot performance comparison problems, and in ideas for augmenting this simple cheat sheet.