By Massimo Re Ferre', Staff Systems Engineer – vCloud Architect
As I mentioned in my previous post, I started working on virtualization technologies years ago. It was around 2003 when I started talking, at public events, about what one could achieve using VMware ESX (which at that time was the only VMware offering for the enterprise market). I still remember the very first two questions I got asked at one of those events that year. The first one was, "Wow, does it really work?" Answer: "Yes, it does indeed." The second question I got asked was, "Can I virtualize SAP?" The answer in 2003 was a no-brainer, and it was something like, "We don't want you to virtualize the SAP instance. We want you to virtualize the 20-plus infrastructure servers you have sitting around it that support that SAP instance, because they are what cause you so much trouble."
For the next event, I decided that I should anticipate the "what is virtualization good for?" and "where do I start with it?" types of questions, so I built the following slides to give the audience a rough idea of where (and why!) these technologies would fit.
For years I have pitched a typical datacenter deployment as a pyramid on its side where, on the left, we have many instances of dynamic, non-critical, non-resource-intensive workloads. Test and development environments are a good example. As we move to the right, workloads become less dynamic, more critical, and more resource intensive. The SAP instance above would be a good example of what sits at the other end of this spectrum. In the middle we have a broad mix of infrastructure, tier 2, and tier 3 workloads, each of which comes with various infrastructure requirements.
As you can tell from my graphics above, the virtualization adoption model I was suggesting was pretty straightforward: "start from the left, move to the right, and stop where you like." This slide was built in 2004 and could still be used in 2010. I think this adoption model made tons of sense at that time for a few specific reasons:
1) Organizations were losing control of the left part because of the many little workloads that were popping up every other day without any sort of governance (virtualization helped a lot with consolidation and containment);
2) Organizations were not dynamic enough on the left part because the deployment lead time for physical servers was too long (virtualization helped a lot with the concept of "your new server is three clicks of the mouse away");
3) Organizations were happy to introduce new, innovative technologies on the left part because it was less critical than the right side of the pyramid.
In a way, this was a win-win. The advantages of this solution were an excellent fit for the characteristics of the dynamic workloads on the left side, and its limitations (limited enterprise maturity, with the associated risks) weren't really an issue for those types of non-critical workloads. Well, you know what happened next. End users started this "journey," and there are now many organizations running SAP virtualized.
That was the picture in 2003. How about now in 2010?
Since I started working more closely on public IaaS cloud aspects, I have heard many concerns and doubts that remind me of those questions I was getting back in the early years of this century. Can I move my core business application out there in the cloud? How can I ensure that my own customers' data are protected? Well, I am sorry to rain on the parade but, honestly, I don't believe these will be the first workloads to move into the public IaaS cloud.
First, there is a technology argument. We are still talking, by and large, about early offerings in the public cloud space. Similar to what happened with ESX and the overall virtualization ramp-up, we will see technical improvements in public cloud offerings that will make it easier to migrate critical workloads as these platforms mature. This doesn't mean ESX wasn't initially an enterprise-grade product. In fact, I worked with a number of customers that were moving relatively important workloads onto that platform, but arguably vSphere is a better and more mature technology.
Beyond that, we can't ignore another, probably more important, fact: organizations will want to take the time to learn what the public cloud is and will gradually move workloads there. Most of them recognize the value of doing so, in the same way they recognized the value of VMware ESX 1.0 when they first saw it. That doesn't mean they jumped onto it overnight to migrate their core apps.
No matter how good the technology is (and while there is room for improvement, it is good indeed), it will take time. You may want to call it "fear of the unknown" or "risk management," but we need to accept it for what it is. You will probably see me using these slides again in 2010. I will just need to change the title to "The Public Cloud (likely) Adoption Curve."
Massimo,
Nice post, and I agree with you about the adoption curve for the public cloud. I was curious, though, about your thoughts on the adoption rate of the Private Cloud.
Massimo,
Agreed that the public cloud adoption model resembles in some ways what we saw with virtualization. There are also new challenges and dimensions in the cloud space (particularly in the public cloud), such as security, interoperability, control, etc. These factors/dimensions need to be "understood" by all the stakeholders and need to become solid components of the public cloud's inner nature.
In fact, there are still some (organisational, not technical, imho) issues that are not addressed even in many "old style" 😉 virtualized environments (vsprawl is one, which needs an organisational – not technical – shift in order to be controlled, e.g. with chargeback and the like).
The "fear of the unknown" becomes "official" when we talk about public clouds, because there's an "official" need to know the security policies/practices/etc. applied by the cloud provider. Again, it reminds me of the first virtualized workloads, where the user/customer was scared that his app wasn't given enough horsepower, etc. As we all know, far from the truth.
So, while virtualization could be introduced in companies in a hidden fashion (management may not care if it's a p or v server – which is dead wrong, btw), public clouds most often don't have this kind of luxury, thus requiring a much-needed mindshift in the way the whole IT infrastructure is conceived.
But again… luckily, it’s inevitable 🙂
@Steve,
Spot on. I meant to touch on Private Clouds as well, but I wanted to keep this post short. Let's see if I can give you an answer that makes sense without telling you too much of the stuff I can't talk about publicly. 🙂
In short, the answer is: moving to Private Clouds won't have some of the challenges associated with Public Clouds, but it won't be fully transparent either. Clouds (be they Private or Public) have the "end-user experience" in mind as a priority, and this sometimes conflicts with some of the deployment policies we see today in internal virtualized environments. A good example is the fact that clouds completely hide the layout of physical resources, and most enterprises are just not ready for that. There have been, for instance, tons of discussions about how to expose "storage tiering" to end users, something that you can't easily do if you want to preserve a certain level of transparency and ease of use. Is this a "limitation" or a clever way to get rid of legacy setup practices, which sometimes tend to be over-engineered? Your mileage may vary.
Another point to consider is that Cloud, by its very nature, injects an additional level of abstraction that may conflict with some management and automation products currently in use, which need to be "adjusted" in order to understand the new "virtual constructs." Note that Cloud-related technologies do not sit transparently on the side of a virtualized infrastructure the way most management and automation products do today. Cloud technologies are intimately bound to the virtualization infrastructure and usually sit on top of it, in-path between the infrastructure underneath and the end users trying to consume infrastructure resources. This makes integration with satellite products particularly challenging.
Since this comment is becoming longer than the post itself I’d better stop here. I’ll try to clarify this in a future post.
@PJ,
Agreed.
History does repeat itself. I remember a few other buzzwords IT used to describe the cloud – "on-demand computing" and "grid computing" were used heavily over the past 10 years. It is true that marketing plays a big role in how a "cloud" offering is spun… there are as many cloud definitions and offerings today as there are grains of sand in Silicon Valley.
The cloud is not infinite (nor are the utilities we use) – there's a level where storage, compute, and network become constrained and very expensive. Depending on what resources are available to a business, and what infrastructure designs are in place, the cloud will take on different forms.
Mike Flaherty
http://www.onlinetech.com