By Frank Denneman, Sr. Technical Marketing Architect
Recently a customer asked whether VM-Host affinity rules and VM-VM anti-affinity rules can be combined, and what the impact and caveats of this particular configuration are.
In this scenario, pairs of virtual machines run an application that is clustered at the application level. VM1 and VM2 provide the service of App-cluster1; similar clusters are configured on VM3-VM4 and VM5-VM6. The compute cluster contains six ESXi hosts. The customer requires each app-cluster to be contained on its own hosts: during normal operations no two app-clusters should share an ESXi host, and the virtual machines within an app-cluster cannot share the same ESXi host.
Virtual Machine to Host affinity groups
The first step is to create and configure the VM-Host affinity groups. A virtual machine DRS group is created for each App-cluster, and a host DRS group is created that contains the hosts on which the app-cluster will run. Let's zoom in on the configuration used for App-Cluster1.
Step 1: Create a virtual machine DRS group and add VM1 and VM2.
Step 2: Create a host DRS group and add ESXi-01 and ESXi-02.
Step 3: Create the rule: select both the VM group and the host group, and select the appropriate rule type ("Should run on hosts in group" or "Must run on hosts in group").
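As an illustrative sketch (this is not the vSphere API, and the group names are made-up examples), the three steps above amount to the following configuration objects for App-Cluster1:

```python
from dataclasses import dataclass

# Minimal model of the DRS objects created in steps 1-3.
# Group names are illustrative, not vSphere identifiers.

@dataclass
class VmGroup:
    name: str
    vms: list

@dataclass
class HostGroup:
    name: str
    hosts: list

@dataclass
class VmHostRule:
    vm_group: VmGroup
    host_group: HostGroup
    mandatory: bool  # True = "Must run on", False = "Should run on"

vm_group = VmGroup("AppCluster1-VMs", ["VM1", "VM2"])
host_group = HostGroup("AppCluster1-Hosts", ["ESXi-01", "ESXi-02"])
rule = VmHostRule(vm_group, host_group, mandatory=True)
```

The same pattern repeats for App-cluster2 (VM3-VM4) and App-cluster3 (VM5-VM6), each with its own pair of hosts.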
The first interesting question is which rule type to select. The answer depends on the requirement. During normal operations DRS honors both rule types; however, HA is not aware of "should" rules, and HA can power on a virtual machine on a host outside the host DRS group after a host failure. If this is undesirable, select the rule type designated "Must run on". DRS and HA cannot violate this rule, and if both hosts are unavailable the virtual machines will not be able to run. This rule type also impacts maintenance mode once the VM-VM anti-affinity rule is created, which is covered in the last part of this article.
VM-VM anti-affinity rule
After the VM-Host rule is configured, a VM-VM anti-affinity rule needs to be defined.
The rule restricts virtual machine load-balancing and initial placement operations, as DRS honors VM-VM anti-affinity rules at all times. In essence, VM-VM anti-affinity rules can be considered "hard" rules, similar to Must rules. Because the VM-VM anti-affinity rule is a DRS setting, HA might power on a virtual machine on the same ESXi host as the other virtual machine, violating the rule; DRS will correct the violation during its first invocation. Since DRS will not violate the VM-VM anti-affinity rule, the selected VM-Host rule determines the level of portability of the virtual machines.
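The violation-and-correction behavior can be sketched with a hypothetical helper (not a vSphere API call) that, given a VM-to-host placement, reports which anti-affinity pairs currently share a host:

```python
# Hypothetical helper: return the anti-affinity pairs that currently
# share a host -- the violations DRS would correct on its next invocation.
def violated_pairs(placement, anti_affinity_pairs):
    return [(a, b) for a, b in anti_affinity_pairs
            if a in placement and placement[a] == placement.get(b)]

# After a host failure, HA might restart VM2 next to VM1:
placement = {"VM1": "ESXi-01", "VM2": "ESXi-01"}
print(violated_pairs(placement, [("VM1", "VM2")]))  # [('VM1', 'VM2')]

# Once DRS migrates VM2 away, the rule is satisfied again:
placement["VM2"] = "ESXi-02"
print(violated_pairs(placement, [("VM1", "VM2")]))  # []
```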
VM-VM anti-affinity and should run on rule combination and maintenance mode
If a host listed in the host group is placed into maintenance mode, DRS will violate the "should" rule, as it cannot violate the VM-VM anti-affinity rule. This behavior preserves the resilience of the app-cluster, as the virtual machines are never located on the same host.
VM-VM anti-affinity and must run on rule combination and maintenance mode
Hosts listed in a must rule are incorporated into the compatibility list of a virtual machine; all other hosts are perceived as incompatible. This fundamentally reduces the cluster size and configuration to ESXi-01 and ESXi-02 in the case of VM1 and VM2. If one of these hosts is placed into maintenance mode, DRS cannot violate the VM-VM anti-affinity rule, and as the compatibility list is reduced to two hosts, the virtual machine has nowhere to go. In this particular scenario, the maintenance mode operation will fail.
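The compatibility-list reasoning can be sketched in Python (a hypothetical model, not the vSphere API): a VM can only evacuate to a host that is in its compatibility list, is not the host entering maintenance mode, and is not running its anti-affinity partner.

```python
# Hypothetical sketch of evacuation candidates during maintenance mode.
# A "must" rule shrinks the compatible-host list to the host group;
# the anti-affinity rule blocks any host running the partner VM.
def evacuation_targets(vm, compatible_hosts, placement, anti_affinity_pairs):
    partners = {b if a == vm else a
                for a, b in anti_affinity_pairs if vm in (a, b)}
    blocked = {placement[p] for p in partners if p in placement}
    current = placement[vm]
    return [h for h in compatible_hosts if h != current and h not in blocked]

placement = {"VM1": "ESXi-01", "VM2": "ESXi-02"}
anti = [("VM1", "VM2")]

# Must rule: compatibility list is just the two hosts in the host group,
# so evacuating VM1 from ESXi-01 has no valid target and maintenance mode stalls.
print(evacuation_targets("VM1", ["ESXi-01", "ESXi-02"], placement, anti))  # []

# Should rule (violable by DRS): all six cluster hosts are compatible.
all_hosts = ["ESXi-0%d" % i for i in range(1, 7)]
print(evacuation_targets("VM1", all_hosts, placement, anti))
# ['ESXi-03', 'ESXi-04', 'ESXi-05', 'ESXi-06']
```

The empty result for the must-rule case is exactly why the maintenance mode operation fails, while the should-rule case always leaves DRS somewhere to move the VM.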
This configuration results in extra operational overhead: if a host needs to be placed into maintenance mode, the VM-Host rule must either be changed to a "should" rule or be disabled temporarily.
Load balance options
Affinity rules offer a great way to control and enhance the resiliency of virtual machines and their services; however, affinity rules do limit the load-balancing options of DRS. DRS clusters support the use of affinity rules and combinations of multiple rule types, but be aware that creating a large number of rules, with their dependencies and restrictions, may create a load-balancing gridlock situation.