By: Vladan Seget, vExpert
VMware vSphere can be leveraged to ensure application availability, server availability, or site availability. In this post I'll look at different scenarios that might fit your organization.
While it's quite easy to protect against hardware failures with High Availability (HA), there are scenarios where you also need application-level protection.
It’s all about granularity
VMware vSphere App High Availability (HA) – Applications First!
When it comes to granularity, you might want to protect your application first. Suppose your application crashes under heavy load but the VM itself keeps running fine: from the hypervisor's point of view nothing is wrong, so vSphere HA won't trigger an HA event and the VM won't get restarted. Meanwhile, users start complaining that their application is unavailable. How do you deal with that?
Application High Availability to the rescue! vSphere App HA works in conjunction with vFabric Hyperic Server, so you must install and configure a Hyperic server in order to use vSphere App HA, and you must deploy a small agent to each VM you want monitored. The whole solution is managed through a plug-in available only in the vSphere Web Client.
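The decision logic can be sketched roughly as follows. This is a simplified illustration only, not the actual Hyperic agent code; the probe and restart callbacks are hypothetical placeholders standing in for a real health check and service restart.

```python
# Sketch of app-level availability: the VM stays up, but the monitored
# service inside it is probed and restarted on failure. Illustrative only.

def check_app(probe):
    """Return True if the application answers its health probe."""
    try:
        return bool(probe())
    except Exception:
        return False

def remediate(probe, restart_service, max_restarts=3):
    """Restart the application (not the VM) until the probe succeeds."""
    for _ in range(max_restarts):
        if check_app(probe):
            return "healthy"
        restart_service()          # app-level action; the VM keeps running
    return "escalate"              # e.g. hand off to VM-level remediation

# Toy demo: a service that is down until its first restart.
state = {"up": False}
result = remediate(lambda: state["up"], lambda: state.update(up=True))
```

The key point the sketch captures is the granularity: remediation acts on the service first, and only escalates to the VM layer when app-level restarts don't help.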
Not all enterprise applications are supported for monitoring. Here is the list of currently supported applications:
- Microsoft SQL Server 2005, 2008, 2008 R2, 2012
- Tomcat 6.0, 7.0
- vFabric tc Server Runtime 6.0, 7.0
- IIS 6.0, 7.0, 8.0
- Apache HTTP Server 1.3, 2.0, 2.2
There are two prerequisites for installing Application HA on your cluster: activate VM and Application Monitoring (see below), and create an IP pool for the subnet where you want to deploy the product.
App HA can protect your applications, but what happens when the underlying VM itself has a problem? That's where the usual High Availability mechanism kicks in. HA was completely revamped in vSphere 5.0, and vSphere 5.5 brought even more options when it comes to affinity rules (or rather, anti-affinity rules).
VMware High Availability (HA)
HA can save your bacon, that's for sure. The main purpose of virtualization is to abstract the hardware so that VMs can run on any compatible host in an HA cluster.
Problem with your hardware? No problem, HA kicks in and restarts the VM on another host in the cluster.
Since vSphere 5.0, every host in the cluster runs a single agent called FDM (Fault Domain Manager). One host takes the role of Master; the agents on the other hosts are essentially Slaves, but any of them can become Master in the event the master host fails.
The FDM master keeps track of all hosts that are members of the cluster, and adding or removing hosts refreshes this list as well. Now you might be thinking, "What if the master fails?" In that case an election process takes place (this was not the case in vSphere 4), and the host with access to the greatest number of datastores is elected master. Why datastores? Because they serve as the secondary communication channel between hosts. There are other considerations for a slave to be elected master as well.
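The election criterion described above can be sketched like this. It's a deliberately naive illustration, not FDM's actual algorithm; the host and datastore names are made up, and the real product applies further tie-breaking criteria.

```python
# Illustrative sketch: on master failure, prefer the surviving host that
# can see the most datastores, since datastores serve as the secondary
# communication channel. Not the real FDM election algorithm.

def elect_master(hosts):
    """hosts: dict mapping host name -> set of accessible datastore names.
    Returns the host with access to the most datastores."""
    # Real FDM breaks ties with additional criteria; here the first
    # best host in iteration order simply wins.
    return max(hosts, key=lambda h: len(hosts[h]))

cluster = {
    "esx01": {"ds1", "ds2"},
    "esx02": {"ds1", "ds2", "ds3"},   # sees the most datastores
    "esx03": {"ds2"},
}
new_master = elect_master(cluster)    # -> "esx02"
```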
In a DRS-enabled cluster running vSphere 5.0 or 5.1, a VM restarted by HA would first be placed on a host and then, if an anti-affinity rule required it, be moved to another host via vMotion (see the left picture). Since vSphere 5.5 this has changed: the VM is started directly on a host that satisfies the anti-affinity rule, where it should be.
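The post-5.5 behavior, picking a restart host that already complies with anti-affinity rules instead of fixing placement afterwards with vMotion, can be sketched as follows. All names are illustrative and this is not the actual HA/DRS placement code.

```python
# Sketch of anti-affinity-aware restart placement (vSphere 5.5 behavior
# described above): choose a host where the VM would not share a host
# with any anti-affinity peer. Illustrative only.

def place_vm(vm, hosts, anti_affinity_groups):
    """hosts: dict mapping host name -> set of VMs currently running there.
    anti_affinity_groups: list of sets of VMs that must not share a host.
    Returns the first compliant host, or None if none exists."""
    peers = set()
    for group in anti_affinity_groups:
        if vm in group:
            peers |= group - {vm}
    for host, running in hosts.items():
        if not peers & running:   # no anti-affinity peer on this host
            return host
    return None                   # no compliant host available

hosts = {"esx01": {"web-a"}, "esx02": {"db-a"}, "esx03": set()}
rules = [{"web-a", "web-b"}]      # web-a and web-b must not share a host
target = place_vm("web-b", hosts, rules)   # -> "esx02"
```

Before 5.5, the equivalent flow would place the VM on any host first and rely on DRS to vMotion it to a compliant host afterwards.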