New Fling: DRS Doctor

DRS is a very powerful vSphere feature that has been around for years. It constantly monitors host performance to ensure that VM demand is satisfied. When DRS determines that another host in the cluster could better suit a VM, a migration recommendation is generated, and the VM is vMotioned to that host. DRS looks at several aspects of VM performance, such as CPU ready time, CPU utilization, active memory, and swapped memory, to make intelligent placement decisions.
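At a high level, DRS expresses cluster balance as a "current host load standard deviation" that it compares against a target value derived from the migration threshold slider. The sketch below is a deliberately simplified toy model of that idea (real DRS works from per-VM entitlements and demand, and the function names and threshold value here are my own assumptions, not DRS internals):

```python
import statistics

def cluster_imbalance(host_loads):
    """Toy model of DRS's 'current host load standard deviation'.

    host_loads: per-host normalized load (demand / capacity), e.g. 0.0-1.0.
    Real DRS derives these values from VM entitlements and demand; this
    sketch only illustrates the shape of the calculation.
    """
    return statistics.pstdev(host_loads)

def is_balanced(host_loads, target_stdev=0.2):
    # target_stdev stands in for the "target host load standard deviation"
    # shown in the vSphere UI; 0.2 is an arbitrary illustrative value.
    return cluster_imbalance(host_loads) <= target_stdev
```

With evenly loaded hosts the deviation is zero and the cluster reads as balanced; skew one host's load high and the deviation crosses the target, which is the condition that prompts DRS to generate migration recommendations.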

Historically, though, DRS has been a black box. This decision information can be extracted from DrmDump files, but doing so requires engineering support to decode and interpret them…until now.

DRS Doctor is a new fling that aims to fix this, allowing the vSphere administrator to diagnose DRS behavior without engineering support. This is great when you just want to dig in a little deeper and understand why DRS made a decision to move a virtual machine.

DRS Doctor records information about the state of the cluster, the advanced settings applied, the workload distribution, the virtual machine entitlements, performance demand, the recommended DRS moves, and more. Even better, DRS Doctor writes all this data into a log file that requires no special tools to read.

If you want to give it a try, check it out on the Flings site.

Here is the output of a DRS Doctor log from my lab, captured on a CentOS 6.5 VM running inside the cluster. This should give you an idea of the powerful features of DRS and give you comfort that it's always doing everything it can to make VMs happy.

This log series starts with the cluster in an idle state, where DRS considered the cluster balanced. I then started up a group of virtual machines, each with varying degrees of CPU and memory demand, to purposely throw the cluster off balance. I waited for DRS to detect this imbalance and then correct it. As you read through the log files, please look for my comments in #BOLD. For simplicity and readability, I have concatenated three log files to show the progression. As DRS Doctor runs, it creates a new log file every five minutes, one per DRS iteration.
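Since a new plain-text log appears every five minutes, it can be handy to grab the most recent one programmatically. A minimal sketch, assuming the logs land in a single directory with a `.log` extension (the directory layout and extension are assumptions, so adjust to wherever you configure DRS Doctor to write):

```python
import glob
import os

def latest_drs_doctor_log(log_dir):
    """Return the path of the newest log file in log_dir, or None.

    Assumes DRS Doctor writes one plain-text '*.log' file per iteration
    into log_dir; both the layout and extension are assumptions here.
    """
    logs = glob.glob(os.path.join(log_dir, "*.log"))
    return max(logs, key=os.path.getmtime) if logs else None
```

Because the output is plain text, the same directory can just as easily be watched with `tail -f` or searched with `grep` for the comment markers you care about.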

#The start of log file 1. DRS Doctor will output any affinity/anti-affinity rules here. It also shows any advanced settings that are applied to the cluster. The target balance value is the same as shown in the UI. This is based on the Migration Threshold value that is configured on the cluster.

#This is when I started a group of VMs in a very short period of time.

#DRS detects a change in the cluster balance state and reports that it’s become imbalanced.

#Powering on even more VMs.

#DRS detects even more imbalance. Things are starting to get crazy, but that's expected with the number of VMs started in such short order.

#DRS begins to create and apply recommendations. This contains a lot of great information. We can see that the priority rating for the recommendation is 4 and that the reason for migrating was due to CPU.

#This next section is money. This is the source of the main inputs that go into DRS placement/migration recommendations: entitlements and demand. Here you get a dump of every VM on every host in the cluster, showing the current entitlement, demand, ready time, active memory, entitled memory, and any swapping. It's glorious!

(Sidebar: To better understand the data, it probably wouldn't hurt to revisit the definitions of some of these metrics, summarized below.)

  • CPU Entitled: CPU resources devoted by the ESXi scheduler.
  • CPU Demand: The amount of CPU resources a virtual machine would use if there were no CPU contention or CPU limit.
  • CPU Used: Amount of actively used virtual CPU. This is the host’s view of the CPU usage, not the guest operating system view.
  • CPU Ready Time: Percentage of time that the virtual machine was ready, but could not get scheduled to run on the physical CPU.
  • MEM Entitled: Amount of host physical memory the virtual machine is entitled to, as determined by the ESX scheduler.
  • MEM Active: Amount of guest “physical” memory actively used.
  • MEM Shared: Amount of guest “physical” memory shared with other virtual machines (through the VMkernel’s transparent page-sharing mechanism, a RAM de-duplication technique). Includes amount of zero memory area.
  • MEM Swapped: Current amount of guest physical memory swapped out to the virtual machine swap file by the VMkernel.
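One wrinkle with CPU ready time: vCenter's performance charts report it as a summation in milliseconds, not as the percentage described above. VMware's documented conversion divides the summation by the chart interval. A small helper (the function name and default are mine; the 20-second default matches vCenter's real-time charts):

```python
def cpu_ready_percent(ready_ms, interval_s=20):
    """Convert a CPU ready summation (milliseconds) from a vCenter
    performance chart into a percentage of the sample interval.

    The 20-second default matches vCenter's real-time charts; use the
    appropriate interval for historical rollups.
    """
    return (ready_ms / (interval_s * 1000)) * 100
```

For example, a real-time chart value of 1000 ms works out to 5% ready time over the 20-second sample.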

#This section provides a nice summary for what DRS did, and more importantly WHY it happened. No more mystery.

#Here is the start of the second log file (the next DRS iteration). As you can see, the cluster is still imbalanced. At this point all the newly powered-on VMs are starting to calm down. DRS will continue to check the state and make recommendations.

#Start of the final log file.

#Balance is getting better. Almost there.

#BOOM! After things started settling down, everything came back into balance.

If you ever wanted to peek under the covers of DRS, I encourage you to download and try this fling. If you find this information valuable, or have suggestions and other feedback, please post your thoughts on the Flings page for DRS Doctor.