By Massimo Re Ferre', Staff Systems Engineer – vCloud Architect

There have been a number of discussions in the industry in the last few years about whether hypervisors are (becoming) a commodity and whether the value is (or will be) largely driven by the management and automation tools on top of them. To be honest, I have conflicting sentiments about this. On one hand I tend to agree. If you look at how the industry is shaping pricing schemas around these products, that's the general impression: all major hypervisors are free, and by that definition one could argue that they are a commodity.

On the other hand, this doesn't really match my definition of commodity. I'd define a commodity technology as one that has reached a "plateau of innovation," where there is very little to differentiate it from comparable competitor technologies. This pattern typically drives prices down and adoption up (in a virtuous cycle) because users focus more on costs than on technology differentiation. The PC industry is a good example of this pattern.

Is this what is happening with hypervisor technologies? Hell no. I think there is no one on this planet who thinks that deploying OS images on dedicated physical servers is faster, more flexible, and in general better than deploying them on a virtualized host. Yet virtualization usage in the industry is broad but not deep: it usually hovers around 30 percent (on average) within most organizations. And these technologies are widely available for free (ESXi, Hyper-V, XenServer and KVM)!

So if everybody agrees that there is a problem with the current physical server deployment model, and that there are free technologies available to download from the Internet that can address the problem, why are organizations confident enough to put only 30 percent of their workloads on these hypervisors? Can someone explain this? My take is that there may be a number of concerns around support and licensing. But the industry has matured and made huge progress on this front in the last few years (Oracle being one of the few exceptions, unfortunately). I bet that a large chunk of that 70 percent of server deployments is not virtualized simply because of technology concerns such as stability, performance, scalability, security and so forth. Where there are technology concerns or limitations, there is space for innovation (or for education to raise awareness).

The fact that the industry is moving to a model where the hypervisor is free and the management tools are the source of revenue tells a partial story to me. The technology story behind the scenes is quite different. The reality is that there are multiple ways to look at hypervisors and their use cases. If you view the hypervisor as the thin software layer that allows you to consolidate five servers on a single box… well, I am with you. At 10 km/hour there is little difference between a Ferrari and a Fiat (even though the Ferrari is still damn cool). If you, instead, view the hypervisor as the foundation for private and public clouds, where multi-tenancy, security, flexibility, performance consistency and predictability, integrity and scalability are not optional characteristics… well, then there is a difference indeed.

You may argue that you can achieve most of these characteristics using the proper management and automation tools that sit on top of bare-metal hypervisors. But the fact is that the policies at the management layer are only as good and reliable as the hypervisor used to implement and enforce them. Yes, you could put a Ferrari engine in a Fiat and have the best driver (Michael Schumacher, or rather Fernando Alonso) pushing it at 330 km/hour! And everything may be great up until the moment when you hit the brakes and find out that it will take you 1,500 meters to stop (if you don't hit a wall first).

Similarly, one could say that the real "value" of an airplane is its cockpit, with all the automation that goes into it. Again, you can put the autopilot on and all is good, but at the end of the day the autopilot (and all the other automation technologies in the cockpit) only instructs the "basic" airplane technologies (thrust reversers, flaps, etc.) to do the real job. And I can assure you that you will want these technologies to be as good, reliable and secure as possible! Always remember that it's not the autopilot and all the slick automation that happens in the cockpit that keeps you flying at 33,000 feet – it's the wings.

I am mixing metaphors here and perhaps digressing. Going back to our lovely "commodity" hypervisors discussion, one of the things that has always shocked me is how powerful the networking subsystem inside ESX is. It's just amazing. Out-of-the-box and easy-to-use support for distributed virtual switches, redundancy (at both the physical and logical level), multiple failover and balancing algorithms on a per-PortGroup basis, traffic shaping, security built in via the VMsafe APIs, and tons of other parameters and features that you can leverage and tune based on your specific requirements. And what you have seen so far is really just the foundation of what's happening in terms of injecting more cloud-oriented and multi-tenancy support. We are working on some cool stuff that will be coming out in the future that is just amazing. I personally spent the last three months digging into those things and the potential there is phenomenal. I can't talk about this in detail today, but it's pretty clear that we are not talking about just setting up 10 Windows VMs on a physical server and letting them connect to a flat L2 segment sharing a single Ethernet cable. I can't wait to talk more about what we have in the works and to prove to you that, just like you can't build a castle on sand, you can't build an Enterprise Cloud on a limited hypervisor.
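To make the per-PortGroup failover and balancing point a bit more concrete, here is a minimal, purely illustrative Python sketch (not VMware code; the `PortGroupTeaming` class and its method names are invented for this post) of the idea behind a teaming policy like "route based on originating virtual port ID": each virtual port is deterministically mapped to one physical uplink, and traffic quietly moves to the surviving uplinks when one goes down.

```python
# Illustrative sketch only -- not VMware code. It mimics the concept behind
# a vSwitch NIC teaming policy: pin each virtual port to an uplink, and
# fail over to the remaining healthy uplinks when a link dies.

class PortGroupTeaming:
    def __init__(self, uplinks):
        self.uplinks = list(uplinks)   # physical NICs, in configured order
        self.failed = set()            # uplinks currently link-down

    def _healthy(self):
        # Only uplinks that still have link are eligible to carry traffic.
        return [u for u in self.uplinks if u not in self.failed]

    def uplink_for(self, virtual_port_id):
        """Pick the uplink for a VM's virtual port (simple modulo mapping)."""
        healthy = self._healthy()
        if not healthy:
            raise RuntimeError("no healthy uplinks left")
        return healthy[virtual_port_id % len(healthy)]

    def fail(self, uplink):
        # Simulate a pulled cable or dead switch port.
        self.failed.add(uplink)


team = PortGroupTeaming(["vmnic0", "vmnic1"])
print(team.uplink_for(0))   # vmnic0
print(team.uplink_for(1))   # vmnic1
team.fail("vmnic0")
print(team.uplink_for(0))   # vmnic1 -- traffic fails over automatically
```

The real hypervisor implementation obviously does far more (beacon probing, link-state tracking, load-based balancing, shaping), but the sketch shows why this logic belongs in the hypervisor itself: a management tool can only declare the policy, while the layer that forwards every frame has to enforce it.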