In one of my sessions at VMworld 2018 in Las Vegas (#HCI1475QU – “Demystifying vSAN for the Traditional Storage Administrator”), a question came up about how vSAN reacts when multiple hosts are entered into maintenance mode (EMM) with limited resources. While the practice is discouraged, the answer depends on the storage policy rules used for the objects in the vSAN cluster. It was similar to another question I had received, and I thought it deserved further explanation.
vSAN administrators placing a host in a vSAN cluster into maintenance mode will be presented with one of three options: “Full data migration,” “Ensure accessibility,” and “No data migration.” This post will focus specifically on the “Ensure accessibility” option: the selection most commonly used for host restarts and updates. The key area of interest is what occurs when there are a limited number of physical hosts, or fault domains. For simplicity, the examples below will use a 30GB VMDK object in a 5-host vSAN cluster with a level of failures to tolerate (FTT) of 1, and will compare behaviors between a failure tolerance method (FTM) of RAID-1 and RAID-5.
Understanding the “Ensure accessibility” option
Prior to entering the host into maintenance mode, vSAN will determine whether enough of the components that comprise an object will remain available for the object to stay accessible after the host completes the EMM process. By selecting “Ensure accessibility,” the user is choosing to accept a lesser level of resilience for the benefit of minimizing time and data movement, while still maintaining the availability of the object.
Even if other hosts are available to resynchronize the data and maintain full resilience, choosing “Ensure accessibility” will not trigger any additional resynchronization activity as long as the object remains fully available after the host is decommissioned. vSAN’s object manager watches this state and waits 60 minutes (by default) before initiating any resynchronizations to regain the level of resilience originally assigned by the policy.
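For those who prefer to script this, the same behavior can be driven through the vSphere API. Below is a minimal pyVmomi sketch (hostnames and credentials are placeholders) that enters a host into maintenance mode using the vSAN decommission mode behind “Ensure accessibility,” and reads the host advanced option that backs the 60-minute repair delay. Treat it as an illustration, not an official procedure.

```python
# Minimal pyVmomi sketch: EMM with "Ensure accessibility" and reading the repair delay.
# Hostnames and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the ESXi host to be placed into maintenance mode.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-01.example.com")

# "Ensure accessibility" corresponds to the ensureObjectAccessibility decommission mode.
spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="ensureObjectAccessibility"))
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=False,
                                           maintenanceSpec=spec))

# The 60-minute window is the object repair delay, exposed as a host advanced option.
delay = host.configManager.advancedOption.QueryOptions("VSAN.ClomRepairDelay")[0]
print(f"Object repair delay: {delay.value} minutes")

Disconnect(si)
```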
EMM “Ensure accessibility” option from a fully available cluster
In Figure 1, we see the behavior when a host holding a component of a RAID-1 object enters maintenance mode with “Ensure accessibility.” In this case, vSAN recognized there was no need to make any adjustments, as the data would remain available, but less resilient.
Figure 1. RAID-1 object. EMM event with fully available cluster
Unlike a RAID-1 mirror, where resilience is provided by creating a copy of an object (sometimes referred to as a “replica”), a RAID-5 stripe in vSAN is composed of at least 4 components spread across 4 hosts. All 4 components contain data and parity, and the layout has an implied FTT of 1. Just as with the example in Figure 1, vSAN understands that an object remains accessible when a host containing a single component of that RAID-5 object enters maintenance mode, and will not perform any resynchronization to regain full resilience until the 60-minute time window has expired. In a 5-host vSAN cluster with an object using RAID-5, an EMM event would look like what is shown in Figure 2. vSAN would recognize that there was no need to make any adjustments, as the data remains available, but less resilient.
Figure 2. RAID-5 object. EMM event with fully available cluster
In both cases, “Ensure accessibility” resulted in no data movement, yet availability of the object was maintained.
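To make the comparison concrete, here is a small, simplified Python illustration of the two layouts for the 30GB example object. It is not vSAN’s actual placement or quorum logic (which also involves votes and witness components); it only shows that with FTT=1, either layout tolerates exactly one unavailable host.

```python
# Simplified illustration (not vSAN's real placement or quorum logic) of the
# FTT=1 layouts used in this post's examples.
VMDK_GB = 30  # example object size from this post

RAID1_FTT1 = {"type": "RAID-1", "components": 2, "capacity_factor": 2.0,  "tolerates": 1}
RAID5_FTT1 = {"type": "RAID-5", "components": 4, "capacity_factor": 4 / 3, "tolerates": 1}

def accessible(layout, hosts_unavailable):
    # Simplified: one component per host; the object stays accessible while the
    # number of unavailable hosts does not exceed the failures it tolerates.
    return hosts_unavailable <= layout["tolerates"]

for layout in (RAID1_FTT1, RAID5_FTT1):
    print(f'{layout["type"]}: ~{VMDK_GB * layout["capacity_factor"]:.0f} GB raw consumed, '
          f'accessible with one host in EMM: {accessible(layout, 1)}')
# Both layouts remain accessible, just with no spare resilience left.
```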
EMM “Ensure accessibility” option with resilience already degraded but other resources available
If an additional host is entered into maintenance mode using “Ensure accessibility,” vSAN determines whether data needs to be moved to remain available, and if so it moves the components of that replica to another host in the cluster. Figure 3 shows this behavior with RAID-1, where resilience is already degraded, but there are other hosts available on which the object can reside.
Figure 3. RAID-1 object. EMM event with resilience degraded but other resources available
In the case of a RAID-5 stripe, vSAN would move the affected component onto a free host, keeping the striped object available, although still degraded in its resilience.
Figure 4. RAID-5 object. EMM event with resilience degraded but other resources available
In these examples, “Ensure accessibility” resulted in data movement to maintain availability of the object. Availability could be maintained because there were other hosts available for the data to reside on.
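If you want a quick scripted sanity check before entering another host into maintenance mode, something like the pyVmomi sketch below can count how many hosts remain as potential placement targets. The cluster name is a placeholder, and this is only a rough pre-check, not the evaluation vSAN itself performs.

```python
# Rough pre-check (not vSAN's own evaluation) of how many hosts remain usable as
# placement targets in a cluster. "content" is the ServiceContent from a pyVmomi
# SmartConnect session, as in the earlier sketch; the cluster name is a placeholder.
from pyVmomi import vim

def usable_hosts(content, cluster_name="vSAN-Cluster"):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    # Hosts that are connected and not already in maintenance mode can still
    # receive components if vSAN needs to move data to keep an object accessible.
    usable = [h for h in cluster.host
              if h.runtime.connectionState == "connected"
              and not h.runtime.inMaintenanceMode]
    return len(usable), len(cluster.host)

# Example usage: an FTT=1 RAID-1 object needs 3 usable hosts to regain full
# resilience, and a RAID-5 object needs 4.
# usable, total = usable_hosts(content)
# print(f"{usable} of {total} hosts available as placement targets")
```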
EMM “Ensure accessibility” option with resilience already degraded and insufficient resources
The following two examples are strictly for learning how vSAN handles data in this extreme case, and are not procedures that would be used in a production environment. They simply examine the behavior that vSAN employs to maintain availability when resilience is already degraded because other hosts are in maintenance mode, or when working with extremely small clusters.
A RAID-1 object and a RAID-5 object behave differently under these conditions. For a RAID-1 configuration, Figure 5 shows no change in availability of the data, as the remaining copy of the object can reside on a single host.
Figure 5. RAID-1 object. EMM event with resilience degraded and insufficient resources
With RAID-5 (and RAID-6) objects, vSAN will honor the instruction of “Ensure accessibility,” and it does so in a unique way. vSAN realizes it cannot keep the object available with just two of the four components remaining, so prior to completing the EMM event, it rebuilds that object into a single RAID-0 object. Figure 6 illustrates this result.
Figure 6. RAID-5 object. EMM event with resilience degraded and insufficient resources
For this object assigned RAID-5, Monitor > vSAN > Virtual Objects shows that the object reports an effective protection level of RAID-0. In this example, the object consisted of just a single component (as it was only 30GB in size), but a much larger object could be composed of multiple components.
Figure 7. RAID-5 object rebuilt to RAID-0 when choosing “Ensure accessibility” with insufficient resources
Note here that vSAN did NOT change the intended policy of RAID-5. In this case, it is simply following the instruction to make sure the data remains available. When the additional hosts come back online, vSAN will resynchronize the data back to the parity-striped layout to satisfy the intent of the applied policy. The “RAID-0” label reports the effective “no resilience” condition the object is in, not the desired condition defined by the storage policy.
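The arithmetic behind this decision is simple enough to sketch out. The following is a back-of-the-envelope illustration, not vSAN’s internal logic, of why two reachable components are not enough for a 3+1 stripe.

```python
# Back-of-the-envelope check (not vSAN's internal logic) of why the object is
# consolidated before the EMM completes in this scenario.
raid5_components = 4                   # 3+1 stripe spread across 4 hosts
min_for_access = raid5_components - 1  # any 3 of the 4 components can serve or reconstruct the data

components_after_emm = 2               # only 2 of the 4 would remain reachable after the EMM
if components_after_emm < min_for_access:
    # vSAN rebuilds the data as an unprotected (RAID-0) layout on the hosts that stay
    # online before completing the EMM, so the object remains accessible. The assigned
    # policy is still RAID-5; only the reported effective protection level is RAID-0.
    print("Rebuild to RAID-0 on the remaining hosts before the host enters maintenance mode")
```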
Operational Considerations
This example demonstrates, in a simplified way, how vSAN manages a RAID-5 stripe when taking down multiple hosts with limited resources. Entering multiple hosts into maintenance mode in a single vSAN cluster, especially with limited resources, is discouraged. Doing so can introduce unnecessary resynchronization traffic, reduce critical storage capacity resources, and leave too few fault domains to satisfy storage policies. These are some of the reasons why VMware Update Manager (VUM) prevents parallel remediation of hosts in a vSAN cluster.
Running a cluster with no more than one host in maintenance mode is a good practice for any type of maintenance activity. Pair this with the suggested practice of running a vSAN cluster with at least one more host than the minimum required by the storage policies, and vSAN will be able to automatically regain its assigned levels of resilience during planned or unplanned outages.
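As a quick reference, the sketch below lists the commonly cited minimum host counts for these policies alongside that suggested one-host headroom.

```python
# Minimum hosts (or fault domains) required by common vSAN policies, plus the
# suggested one-host headroom so resilience can be rebuilt during an outage or EMM.
MIN_HOSTS = {
    "RAID-1, FTT=1": 3,   # 2 replicas + 1 witness
    "RAID-5, FTT=1": 4,   # 3+1 data/parity components
    "RAID-1, FTT=2": 5,   # 3 replicas + 2 witnesses
    "RAID-6, FTT=2": 6,   # 4+2 data/parity components
}
for policy, minimum in MIN_HOSTS.items():
    print(f"{policy}: minimum {minimum} hosts, suggested {minimum + 1} for self-healing")
```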
Summary
When entering a host into maintenance mode, choosing “Ensure accessibility” should be viewed as a flexible way to accommodate host updates and restarts. Planned events such as maintenance mode activities and unplanned events such as host outages may make the effective storage policy condition different from the assigned policy. vSAN constantly monitors this, and when resources become available to fulfill the rules of the policy, it will adjust the data accordingly.