By now you should be well aware that one of the major storage and resource management enhancements in vSphere 5.0 is Storage DRS. What was one of the motivations behind developing this feature? For some time we have had the Distributed Resource Scheduler (DRS) feature in vSphere, which managed the initial placement and load balancing of virtual machines based on CPU and memory utilization. However, there was still the possibility that VMs would land on the same datastore, and even if that datastore was nearing capacity, or VM performance on it was degrading, there was nothing in DRS to prevent further VMs being placed there. Storage DRS addresses this by selecting the best datastore for initial placement, and by using Storage vMotion to migrate virtual machines between datastores when capacity or I/O latency becomes an issue.
In previous postings, I discussed initial placement and load balancing based on datastore capacity and I/O latency. However, there is another cool feature of Storage DRS that I haven't yet discussed: the affinity and anti-affinity rules. These rules are conceptually very similar to the affinity and anti-affinity rules that you might find in DRS. They work by keeping VMs together on the same datastore or apart on different datastores, in much the same way that the rules in DRS keep VMs together on the same host or apart on separate hosts. In DRS, you might have separated your primary and secondary DNS servers using an anti-affinity rule; that way, if one ESX host failed and brought down one of the DNS servers, the other DNS server would still be running on another host in the cluster. However, there was nothing to stop both the primary and secondary DNS servers from residing on the same datastore, and if that datastore failed, so did both servers. Now, with Storage DRS anti-affinity rules, you can keep these DNS servers (or any other primary/secondary servers) on different datastores.
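If you would rather script this than click through the vSphere Client, here is a minimal sketch using pyVmomi, the Python SDK for the vSphere API. The vCenter address, credentials, datastore cluster name, and VM names are all invented for illustration, and the inventory lookup is deliberately simplified, with no error handling:

import ssl

from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical vCenter and credentials, for illustration only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find_object(content, vimtype, name):
    # Walk the inventory for the first managed object with this name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

# Hypothetical datastore cluster (storage pod) and DNS server VMs.
pod = find_object(content, vim.StoragePod, 'DatastoreCluster-01')
dns1 = find_object(content, vim.VirtualMachine, 'dns-primary')
dns2 = find_object(content, vim.VirtualMachine, 'dns-secondary')

# VM anti-affinity rule: Storage DRS keeps these two VMs on
# different datastores within the datastore cluster.
rule = vim.cluster.AntiAffinityRuleSpec(
    name='separate-dns-servers', enabled=True, vm=[dns1, dns2])

pod_spec = vim.storageDrs.PodConfigSpec(
    rule=[vim.cluster.RuleSpec(operation='add', info=rule)])
sdrs_spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)

task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=sdrs_spec, modify=True)

The call returns a task object, which you can monitor for completion in the usual way.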
However, there is another significant aspect of Storage DRS affinity and anti-affinity rules: the ability to automatically keep Virtual Machine Disks (VMDKs) together on the same datastore or apart on different datastores. By default, a VM's VMDKs are placed together on the same datastore. So why might I want to place VMDKs on different datastores? Well, one example that I thought of is that some of our customers build mirrored or RAID volumes in the guest OS. In this case, you would want to make sure that the primary volume and its replica are kept on different datastores; if both sides of the mirror were on the same datastore, and that datastore failed, you would lose both sides of the mirror.
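To illustrate, here is a continuation of the earlier pyVmomi sketch, reusing the connection and the find_object() helper from above. Again, the VM and rule names are invented; the sketch defines a VMDK anti-affinity rule that keeps two of a VM's disks, say the two sides of an in-guest mirror, on different datastores:

# Hypothetical VM whose guest OS mirrors data across two VMDKs.
vm = find_object(content, vim.VirtualMachine, 'db-mirrored')

# Collect the device keys of the VM's first two virtual disks;
# VMDK anti-affinity rules reference disks by device key.
disk_keys = [dev.key for dev in vm.config.hardware.device
             if isinstance(dev, vim.vm.device.VirtualDisk)][:2]

# VMDK anti-affinity rule: Storage DRS keeps the listed disks on
# different datastores within the datastore cluster.
disk_rule = vim.storageDrs.VirtualDiskAntiAffinityRuleSpec(
    name='split-mirror-vmdks', enabled=True, diskId=disk_keys)

# intraVmAffinity=False overrides the default "keep VMDKs
# together" behavior for this VM.
vm_info = vim.storageDrs.VmConfigInfo(
    vm=vm, enabled=True, intraVmAffinity=False,
    intraVmAntiAffinity=disk_rule)
vm_spec = vim.storageDrs.VmConfigSpec(operation='add', info=vm_info)
sdrs_spec = vim.storageDrs.ConfigSpec(vmConfigSpec=[vm_spec])

task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=sdrs_spec, modify=True)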
This is yet another reason why Storage DRS is one of the most highly regarded features in vSphere 5.0.
Get notified of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage