Product Announcements

Winner of Cycle 7 on Virtualizing Tier 1 Applications

Congratulations to Jason Nash for his blog entry on virtualizing Tier 1 applications. Jason has the distinct honor of being the first two-time winner of this VMware vSphere blogging contest. Jason's complete post can be seen below or by going to the following link:

Why Isn’t Your Data Center 100% Virtualized?

January 9, 2010 by nashwj 

I understand that may not be a fair question. In many cases there are things that just can't be virtualized, and I don't mean for performance reasons. I'm talking about non-x86 workloads and applications that depend on specialized hardware. Don't forget about the dreaded dongle that some apps still require!

One thing that I find very interesting to discuss with customers is the limit of their comfort level with virtualization: at what point in their application tiering do they decide that something can't, or shouldn't, be virtualized? It's really not much of a secret that I'm a big proponent of virtualization, and taking it as far as you can is something I find myself preaching a lot. I do it for a number of reasons, and I'm starting to see more and more people follow a similar train of thought.

From what I've observed there is usually a common migration path to virtualization in an organization. I refer to it as a three-step progression:

  1. Consolidation 
  2. Cool Features 
  3. Disaster Recovery 

Several years ago I was a Network Manager at a mid-sized company. Like most, we were in the midst of serious server sprawl and needed to do something about it. Just saying "No" didn't seem to work. We still had a rack full of 1U HP DL360 servers for varying tasks and groups. There were several for accounting apps that couldn't run on the same system due to application conflicts, a couple with other apps that had Java conflicts, and even more for groups that just didn't want to share resources or weren't comfortable with it. All of these systems would sit at 5% utilization all day long, sucking up power (that we didn't have) and eating into cooling (that we had even less of). This was the reason we first dipped into virtualization, and I refer to this as the consolidation phase. It's the way to contain server sprawl, and doing it on low-tier applications means you aren't risking anything major.

We still see a lot of companies in the midst of the consolidation phase, but ultimately they move into the Cool Features phase. By this point they have virtualized the low-tier apps and started to see the benefit of VMware. They can now VMotion machines around and do maintenance without downtime. They like VMware HA for redundancy and FT even more. Storage VMotion allows for easy storage migrations, again with no downtime. They also get comfortable managing, backing up, and working with VMware at this level. They start to think, "Now, wouldn't it be cool to just VMotion the Exchange server to another host for maintenance instead of that 8-hour downtime on a weekend?" But they are scared. Things like Exchange and SQL worry them.

The final stage is the Disaster Recovery stage. I have several customers in this stage right now and it's something I talk about a lot. In fact, I did a keynote on this very subject at the Carolinas VMware Summit in the summer. What really pushes people to the next level isn't core VMware functionality, it's Site Recovery Manager. They start looking hard at their DR strategy and what they need to do to simplify it. They get a taste of SRM and see how easy it makes DR planning and, more importantly, testing. They see that they can test their DR plan any time they want without impacting production, without taking days to build an environment and then days again after the test to tear it down. Those Tier 2, 3, and 4 apps take no time at all in the plan, but those pesky Tier 1 apps still have an inch-thick playbook to cover each time the plan is tested. There are people out there running a single VM on a single ESX server just for this capability. They get the abstraction and portability of virtual machines while still making sure that super-app gets all the resources it wants.

So what is stopping you from virtualizing those Tier 1 applications? If you say performance, I ask you to check again. In most cases people are scared about I/O performance under any virtualization product. Look at this white paper by VMware: a single vSphere server can do 350K IOPS! If you have an application that needs more than that on a single server, I'd like to see it. There is also a great comparison showing Oracle running natively against Oracle under VMware, on a blog that's a very good source of performance-related information.

So why do we see people shy away from virtualizing Tier 1 apps? They don't have the information they need to feel comfortable doing it. One thing we do at the start of any engagement is gather information, and sometimes a lot of it. We have excellent tools to examine a customer's applications and see what performance requirements they have. Too many times we see people just P2V a large app and hit serious performance problems because they didn't do the work ahead of time. VMware's own Capacity Planner tool, which partners can use, is really good at looking at servers to gather CPU, memory, and I/O requirements. With this information you can architect your environment to handle any load. That's the key. You have to build a good architecture before you start virtualizing these heavy-hitter applications, and it's often something that gets overlooked. Virtualization has become common, and with common comes complacency. When people get complacent they overlook the details that make or break a new deployment.
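To make the sizing step concrete, here is a minimal sketch of the kind of arithmetic you do with the data a tool like Capacity Planner collects: sum each source server's peak CPU, memory, and I/O demand, then divide by per-host capacity with headroom and an HA spare. All server names, figures, and host specs below are illustrative assumptions, not measurements from any real environment.

```python
import math

# Hypothetical per-server peaks gathered during the assessment phase:
# (name, peak CPU MHz, peak memory MB, peak IOPS) -- illustrative numbers.
servers = [
    ("sql01",  6000, 16384, 1200),
    ("exch01", 4500, 24576,  900),
    ("ora01",  8000, 32768, 2500),
]

# Assumed capacity of one ESX host (2 sockets x 4 cores x 2.6 GHz, 96 GB).
HOST_CPU_MHZ = 2 * 4 * 2600
HOST_MEM_MB = 96 * 1024
HEADROOM = 0.75  # plan to run each host at no more than 75% of capacity

total_cpu = sum(s[1] for s in servers)
total_mem = sum(s[2] for s in servers)
total_iops = sum(s[3] for s in servers)

# Size the cluster on whichever resource is the bottleneck, plus an HA spare.
hosts_for_cpu = math.ceil(total_cpu / (HOST_CPU_MHZ * HEADROOM))
hosts_for_mem = math.ceil(total_mem / (HOST_MEM_MB * HEADROOM))
hosts_needed = max(hosts_for_cpu, hosts_for_mem) + 1

print(f"Peak demand: {total_cpu} MHz CPU, {total_mem} MB RAM, {total_iops} IOPS")
print(f"Hosts needed (including HA spare): {hosts_needed}")
```

The point isn't the specific numbers; it's that you run this math before the P2V, not after the application falls over.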

Once you have the information you need and the requirements for your applications, you can start specifying the equipment and I/O infrastructure. We have customers now going full speed with 10Gb connectivity and Fibre Channel over Ethernet (FCoE). They do this to give those really high-end applications the I/O that they need. While most people will read that and think "We can't possibly afford that!", they need to look at what it really costs them to deploy applications in the legacy model. If your standard ESX deployment uses six or eight 1Gb Ethernet connections and two or four 4Gb Fibre Channel connections, what is that costing you in switches, cabling, power, cooling, and management? You will find that these new consolidated fabric solutions are not much, if any, more expensive than deploying more of these split-fabric infrastructures.
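A quick back-of-the-envelope comparison makes the split-fabric vs. consolidated-fabric argument easy to run for your own environment. The per-port prices and port counts below are placeholder assumptions for illustration only; plug in your own quotes.

```python
# Per-host connectivity for each design: port counts and assumed unit costs.
LEGACY = {
    "1GbE port":   {"count": 8, "cost": 150},   # eight Ethernet uplinks
    "4Gb FC port": {"count": 4, "cost": 800},   # four Fibre Channel links
}
CONSOLIDATED = {
    "10Gb FCoE port": {"count": 2, "cost": 2000},  # two converged links
}

def per_host(fabric):
    """Return (total port cost, cable count) for one host."""
    cost = sum(p["count"] * p["cost"] for p in fabric.values())
    cables = sum(p["count"] for p in fabric.values())
    return cost, cables

legacy_cost, legacy_cables = per_host(LEGACY)
fcoe_cost, fcoe_cables = per_host(CONSOLIDATED)
print(f"Split fabric:  ${legacy_cost}/host over {legacy_cables} cables")
print(f"Consolidated:  ${fcoe_cost}/host over {fcoe_cables} cables")
```

Even with made-up prices, the shape of the result is the point: the consolidated design lands in the same cost neighborhood while cutting the cable count, and therefore the switching, power, and management overhead, dramatically.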

In the majority of organizations the Tier 1 apps are SQL, Oracle, and Exchange-based services. What people miss is that these really aren't I/O heavy. Sure, they can do a LOT of small transactions, but that's not a problem for VMware or even "legacy" Fibre Channel connectivity. Be smart when moving those systems to VMware by planning your I/O, CPU, and memory, but also pay attention to your disk layout. Another common problem we see is a Tier 1 application being thrown on a datastore in use by other VMs and causing a problem. It's also common to see back-end spindles shared, so even though the administrator has put the application on a low-use datastore, it's still contending for the same spindles. Gathering good performance requirements and a well-planned architecture will stop that problem well before anything gets deployed.
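The spindle-contention point can be checked with simple arithmetic before deployment: compare the combined peak IOPS of every VM landing on a RAID group against what its spindles can actually deliver once the RAID write penalty is applied. The per-spindle ratings, RAID penalties, and VM demands below are rough rule-of-thumb assumptions for illustration.

```python
# Rough rule-of-thumb IOPS per spindle by drive type (assumed values).
SPINDLE_IOPS = {"15k FC": 180, "10k FC": 140, "7.2k SATA": 80}

def raid_group_capacity(disk_type, spindles, write_ratio, raid_penalty):
    """Usable front-end IOPS for a RAID group: each front-end write
    costs `raid_penalty` back-end I/Os (e.g. 2 for RAID 10, 4 for RAID 5)."""
    raw = SPINDLE_IOPS[disk_type] * spindles
    return raw / (write_ratio * raid_penalty + (1 - write_ratio))

# Eight 15k spindles in RAID 10 with a 30%-write workload -- assumed figures.
capacity = raid_group_capacity("15k FC", 8, write_ratio=0.3, raid_penalty=2)

# Peak IOPS of the VMs whose datastores share these spindles.
vm_demand = {"sql01": 1200, "fileserver": 150, "web01": 60}
total = sum(vm_demand.values())

print(f"RAID group supports ~{capacity:.0f} IOPS; VMs demand {total}")
if total > capacity:
    print("Overcommitted: move the heavy hitter to dedicated spindles")
```

In this made-up example the "low-use" datastore looks fine on its own, but the shared spindles are overcommitted, which is exactly the failure mode described above.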

So, in conclusion, get moving on those Tier 1 apps. If you aren’t sure how to gather reliable data on performance requirements get with a good VMware and storage partner. They can make the difference between a successful deployment and one where you spend your nights tracking down performance issues.
