
Monthly Archives: August 2010

Getting to Know vCloud (Part II)

Matthew D. Sarrel, Sarrel Group

Previously, I wrote about my experimentation with Terremark's vCloud Express. I found it insanely easy to create a VM from one of Terremark's templates. Yet, as we know, simply creating VMs isn't the end of the story.

I had a bunch of false starts trying to connect directly to my servers.  The best documentation I could find is here.

The steps involved are basically to download and install the SSL VPN client and then RDP (or SSH) into the VM directly. It sounds easy enough, but there's more to it than that. The right way to do this is to click the VPN Connect button on the Servers page. Much to my chagrin, this appears to work only in IE. I added the vCloud and the SSL VPN servers to my Trusted Sites:

[Screenshot: adding the vCloud and SSL VPN servers to Trusted Sites in IE]

Then I installed an ActiveX component, upgraded Java, and installed the SSL VPN client.

[Screenshot: installing the ActiveX component and SSL VPN client]

I ultimately arrived at this screen – remember it, because it means you're good to go.

[Screenshot: the SSL VPN client connected]

I launched the RDP client and connected to the IP address (you can find this by clicking on the server; the "Detected IP" is shown at the bottom of the screen). And voilà, I was sitting at my desk controlling the mouse and keyboard of a VM located somewhere out in the cloud.

[Screenshot: an RDP session to the cloud VM]
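If you end up making this hop over and over, the RDP step is easy to script. Here's a minimal sketch in Python that launches the Windows Remote Desktop client against a VM; the address is a made-up stand-in for whatever "Detected IP" your server shows, and it assumes the SSL VPN tunnel is already connected:

```python
import subprocess

# Stand-in for the "Detected IP" shown on the server's detail pane.
detected_ip = "10.112.0.5"  # hypothetical address

# Launch the Windows Remote Desktop client (mstsc) against the VM.
# The SSL VPN client must already be connected, as described above.
subprocess.run(["mstsc", f"/v:{detected_ip}"], check=True)
```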

From here on it's pretty straightforward. I can install software either by downloading it directly to the server or by sharing my local drive with the server. Alternatively, I could connect to the server by selecting it and clicking the "Connect" button. This basically launches a remote session in the browser (again, IE only).

[Screenshot: the browser-based remote console session]

It's important to use the CD/DVD drop-down list to mount and install VMware Tools. I could also use the same drop-down list to mount an .ISO or map a local drive – very useful when installing software for test purposes.

If you're just getting started on vCloud, let us know how it's going and share your own experiences. I'll be happy to post examples of common challenges, use cases, and lessons learned as you ramp up your working knowledge of the public cloud!

Matthew D. Sarrel (or Matt Sarrel) is executive director of Sarrel Group, a technology product testing, editorial services, and technical marketing consulting company. He also holds editorial positions at pcmag.com, eWeek, GigaOM, and Allbusiness.com, and blogs at TopTechDog.

 

Tipping Points: the Social Aspect of Cloud Computing

Steve Jin, VMware R&D

This entry was reposted from DoubleCloud.org, a blog for architects and developers on virtualization and cloud computing.

Many people already know the book “The Tipping Point: How Little Things Can Make a Big Difference.” According to the author Malcolm Gladwell, tipping points are “the levels at which the momentum for change becomes unstoppable.” He defines the term as sociological and uses it to explain what he calls sociological epidemics.

Three Rules of Epidemics

In his book, Gladwell laid out the “three rules of epidemics” as follows:

1) The Law of the Few.
“The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” The author categorized people into Connectors who link us up with the world; Mavens who are “people we rely upon to connect us with new information”; and Sales People who are charismatic persuaders.

2) The Stickiness Factor
The specific content of a message that renders its impact memorable.

3) The Power of Context
Human behavior is sensitive to and strongly influenced by its environment.

Although the research comes from sociology, I think it applies to technology as well. After all, technology is social. Just think about social networks like Facebook, and the recent success of Apple’s iPad.

If you want your technology to be a huge success, you cannot ignore its social side. In the end, it is human beings who make decisions regarding any technology adoption or product purchase.

Two Tipping Points

Cloud computing has become a buzzword. It has also climbed the CIO’s priority list from near the bottom to second place in a recent survey. While we’ve made huge progress in terms of marketing, we haven’t yet seen the tipping point of cloud computing in reality.

Unlike the Hush Puppies shoes discussed in the book, technologies like cloud computing are far more expensive to adopt and they require much more expertise than buying a pair of shoes.

This uniqueness of cloud computing results in an interesting phenomenon – it has two tipping points. The first tipping point is with marketing. As an industry, we are pretty good at creating buzz around new technologies. No other industry is as eager as high tech to embrace the “next big thing.” And we have totally embraced the marketing concept of cloud computing!

The second tipping point is with adoption in production. It may or may not come after the first tipping point. After the first tipping point, people may give cloud computing a try with pilot projects. At this tipping point, it becomes absolutely critical that the technology brings real value to businesses, or adoption dies.

There are examples in technology where the second tipping point arrived and then reversed due to the absence of sufficient value. One of the best examples, in my opinion, was EJB (Enterprise JavaBeans): it was widely adopted but then faded away because it failed to deliver sufficient value.

Tipping Points for Cloud Computing

With so many people talking about cloud computing, we have obviously reached the first tipping point, marketing. But we haven't yet seen the massive adoption that marks the second tipping point.

Where will businesses find sufficient value in cloud computing? Cost savings are definitely one source, but certainly not the only one. As hardware costs keep dropping, capital spending on servers may become less of a concern for most businesses. Instead, they will look to cloud computing for flexibility, control, and growth. If we look at the three rules of social epidemics from a technology perspective, value defined by these attributes will be driven by people. At the second tipping point for cloud computing, breakthrough adoption is now up to the Connectors, Mavens, and Sales People.

With so many companies and organizations in the cloud movement, we are sure to have many Connectors, Mavens, and Sales People who may have different understandings of cloud computing, and different agendas driven by company interests. We need to collaborate effectively with each other toward the bigger goal: showing real-world value to customers. Only then will we see the second tipping point that we all look forward to.

Steve Jin is author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and is the chief blogger at DoubleCloud.org.



Evolve or Die: Sys Admins and the Public Cloud

By David Davis

A few years back, most enterprise infrastructure administrators and designers were struggling to fully grasp the ramifications that server, application, desktop, network, and storage virtualization would have on their datacenters. Today, many of those same enterprise administrators have become champions of virtualization. Now they are facing a new technology wave that they are challenged to understand and champion in their organizations. This technology change that VMware (and its partners) are advocating is cloud computing (specifically, the vCloud initiative).

Recently, I attended a VMware User Group (VMUG) session and caught VMware's Mike DiPetrillo (mikedipetrillo.com) in a session entitled "All about (v)Cloud". His full-time job is traveling to enterprise customers and VMware technology partners, discussing VMware's strategy around cloud computing.

In Mike's presentation (which I recorded here for the VMware YouTube channel), I was a bit surprised at his brutal message to every system admin out there – improve your skills or become obsolete (evolve or die!). More specifically, Mike said that cloud computing and virtualization are the "new normal" and you need to accept it. Admins of all types need to evolve into "infrastructure admins" so they can not only SEE the bigger picture of what's going on in the data center but also administer and troubleshoot it. Then, as your internal/private cloud (potentially) moves to a public cloud, you need to be ready: accept this change and be prepared to move on to public cloud-based IT projects with real ROI.

While it may seem like common sense to some, Mike said that "if all you can do is provision SAN LUNs all day long, when cloud computing comes to your company, you'll be out of a job". Mike said that the real push for companies to adopt cloud computing is going to come from the top down, and the justification will be to turn infrastructure costs over to a company that can do it for less money, freeing up existing IT staff to work on IT projects that will have a real ROI for the company.

(Download Mike’s Presentation “All About (v)Cloud” here)

As a former IT Manager at a medium-sized company, I have a lot of personal experience trying to persuade 25-year company employees that they need to accept the new technology change to "(INSERT NEW TECHNOLOGY)". Many times, that change wasn't accepted until it was forced upon them (WordPerfect 5.1 was taken away, or COBOL programs were no longer supported). For some IT admins, the change to virtualization and cloud computing may be forced on you. But for the smart admins out there, I encourage you to take Mike's advice:

1. Move to 100% virtualization at your company;

2. Become an "Infrastructure Admin", not just a singularly focused Admin;

3. Accept that your datacenter infrastructure will likely one day be moved to "the cloud" and you will be re-tasked with working on IT tasks that have a huge ROI instead of just "keeping the servers up".

Another excellent post about dealing with change related to cloud computing came from Stu Miniman of Wikibon (formerly of EMC), who posted The Elephant, The Rider and The Path to Cloud Computing. In this post, Stu talked about the book "Switch: How to Change Things When Change Is Hard" and how it relates to cloud computing.

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal (www.TrainSignal.com). He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com, including the popular vSphere video training package. Learn more about David at his blog (www.VMwareVideos.com) or on Twitter (www.Twitter.com/davidmdavis), and check out a sample of his VMware vSphere video training course from TrainSignal.com!

 

VMworld 2010: Eight Cloud Computing Sessions You Can’t Afford to Miss

By David Davis

 

VMware's vCloud initiative is tremendously important for the future of the company and its 1,000+ partners. That new focus "on the cloud" is evident in this year's VMworld 2010 conference – four of the eight session tracks at VMworld this year are related to cloud computing! Let's explore what these cloud-oriented tracks are about and what the "not to miss" sessions are in each of these "cloud tracks".

At VMworld 2010, three of the four cloud-related session tracks cover the private cloud (that's an internal, company-owned infrastructure running vSphere) and one track covers the hybrid/public cloud. However, the number of sessions covering the hybrid/public cloud (15) is still small in comparison to the number covering the private cloud (43). This makes sense, as many enterprises are awaiting further developments and want more education on cloud computing.

Whether you'll be at VMworld or watching sessions later over the web, here are the eight cloud computing sessions you don't want to miss:

Private Cloud Business Continuity

BC7773 VMware Site Recovery Manager: Misconceptions and Misconfigurations with Mike Laverick. Mike literally "wrote the book" on SRM (see his book "Administering VMware Site Recovery Manager" at his website) and is a well-known expert on the topic. Mike runs the website RTFM.ed.co.uk and is a VMware Certified Instructor (VCI) and a vExpert.

BC7803 Planning and Designing an HA Cluster that Maximizes VM Uptime with Duncan Epping. Duncan works for VMware's professional services organization (PSO), sits on the VCDX defense panel, and is a vExpert. His website, Yellow-Bricks.com, is a well-known source for advanced vSphere information. I am sure his HA clustering session will have some "insider tips" that are not to be missed.

BC8537 VMware Data Protection Roadmap with Azmir Mohammad. One of the few "VMworld experts", Azmir is a Sr. Product Manager for VMware Data Recovery. To learn about the future of VMware Data Protection, I want to see what Azmir has to say.

Private Cloud Management

MA7140 vCloud Architecture Design Strategies and Design Considerations featuring John Arrasjid and Pang Chen, who are both on the VCDX panel and principal architects/consultants at VMware. If any session can tell us what the vCloud architecture and design will look like, you are likely to find it here.

MA8092 Cloud Futures: The Infrastructure Authority featuring Chris Wolf of Gartner. Chris is well known for being an unbiased virtualization and cloud expert. As Research Vice-President at Gartner, he is looked to for direction on these topics by Fortune 100/500 companies around the world, so I am interested in his unbiased view on these topics.

Hybrid/Public Cloud

PC6940 Networking Best Practices for vCloud featuring Charles Cano and Mike DiPetrillo. While I haven't had the pleasure of meeting Charles, I have met and listened to Mike speak. Mike is a VMware Global Cloud Architect – a title you don't hear very often. I have heard Mike speak about VMware's vCloud in general, so I am interested in his take on these "vCloud best practices".

Private Cloud Security

SE8098 Private Cloud Security: Vendor Secrets and Hypervisor Competitive Differences with Chris Wolf. Again, any session with Chris is a "must attend" in my book, as I am confident I will leave with expert-level, unbiased information (I also like the intriguing title of "vendor secrets").

Technology Partner / Sponsors

SP9645 IDC Says, "Don't Move to the Cloud" featuring Ben Goodman and Richard Whitehead of Novell. While I don't know these fellows personally, and I admit that I normally wouldn't attend anything put on by Novell (no offense, Novell), these guys have a killer session topic and I am very interested in attending this session.

And finally, there is even a lab that covers the vCloud API (LAB16 Lab: VMware vCloud API). If you want to learn more about the cloud, why not get some hands-on time with the API that will make it all work?

There are lots of great sessions on cloud computing at VMworld 2010. For my list, I tried to select only the sessions where I knew the speaker personally. My apologies to all the great cloud computing sessions that I didn't have room to mention.

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal (www.TrainSignal.com). He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com, including the popular vSphere video training package. Learn more about David at his blog (www.VMwareVideos.com) or on Twitter (www.Twitter.com/davidmdavis), and check out a sample of his VMware vSphere video training course from TrainSignal.com!

 

Thoughts on VMworld 7.0 and Virtualization 2.0

Massimo Re Ferre', Staff Systems Engineer – vCloud Architect

Yes, that's right: it's the 7th year of VMworld. The event started years back as a small gathering of a few hundred geeks. At least that is what VMware was expecting; in fact, almost 1,500 individuals showed up in San Diego in 2004 for the inaugural show. I am proud to be one of those first attendees. At that time you could breathe the "geeky" spirit of the event and sense how such a small and simple concept would change the way we do computing down the road. Yeah, you take an industry-standard server, you install this little piece of software, and you can install two or more standard operating systems. Boom! You've changed the world forever. There are times you listen to someone and have a "WOW" moment – I still remember the moment when, in a meeting with a big bank a few years back, I was telling a Veritas architect that we couldn't install their HA clustering software because it was incompatible with vMotion. When I explained to him what vMotion was, he laughed at me, saying I had probably "misunderstood" what that technology could do, because it was simply impossible to move a Windows instance on the fly from one server to another. Yeah, sure. Welcome to virtualization (1.0).

Then there was a period when there weren't many "WOWs!" Sure, there were a few (VMware Fault Tolerance comes to mind), but the titanic effect that the first wave of virtualization technologies delivered... we just haven't seen it again (yet). And there are good reasons for that. As virtualization became more widely adopted, customers were obviously looking for more enterprise management surrounding that first wave of high-potential technologies. This was the time when VMware concentrated more on the management side of things. I have never seen anyone step out of an ITIL class screaming, "Gee, this is the coolest thing in the world, I want to go home and read more about it RIGHT NOW!"; similarly, I anticipated a certain level of "ah, OK, interesting" when VMware announced a string of new technologies in that management area (I can think of CapacityIQ and SRM, not to mention the whole Ionix portfolio VMware recently acquired from EMC). Don't get me wrong: as much as none of the hypervisor geeks are going to "WOW" over the Ionix portfolio, we all understand very well that without such tools and an overall solid management strategy, VMware wouldn't be considered for what it would like to be considered, which is clearly not (just) the cool technology provider that keeps a geek up at night. That (alone) is not what can bring VMware to the heart of the data center. Funny enough, I've had a blog post in my drafts for more than a year whose title was "Virtualization is no longer sexy, it's just useful". More or less this is what I'd have discussed here, so I'll go ahead and delete the draft now.

This year it's different, though. VMworld 7.0 (i.e. VMworld 2010) is going to go back to some core geeky types of discussions around cloud technologies. While some of the cloud-related discussions are still around management (which needs to be, because we want cloud to resonate with the enterprise as well), there are other "cloudy" topics really for the geek at heart. I am thinking about the concept of "location independence" that cloud will bring to the table. That's something I am going to touch on during my session at VMworld: Cloud 101: What's Real, What's Relevant for Enterprise IT, and What Role Does VMware Play. This session is really geared towards introducing the cloud concepts, and certainly one of the most interesting concepts about cloud is that you could run your workloads... well... in the cloud! Where else?! If you are still wondering what this whole cloud thing is, come to this session and you won't be disappointed. And if you are, that's fine. Just don't fill out the feedback form. :-)

If everything goes well, I may even be able to show you something during that breakout. I can't promise it will be as "WOW!" as vMotion, but rest assured it's going to be more fun than an ITIL class! This is, in a way, Virtualization 2.0 being presented at VMworld 7.0!

Joking aside, the only problem I see is that some of these concepts (and technologies) are a bit hard to digest the first time you face them. At least that is what happened to me when I joined the vCloud team roughly six months ago. That's the part I am struggling with at the moment: we are having some internal discussions on how to lay out a few sessions, and we are debating how to best present the concepts and the products. You don't want to be too kindergarten, but at the same time you don't want to go too deep and lose the audience in the first 30 seconds. Challenging.

That's all I wanted to say for today. If you are a geek, you may find VMworld 2010 a fun show. And if you are around, stop by and say "ciao".

Massimo.

Getting to Know vCloud

Matthew D. Sarrel, Sarrel Group

Lately I've been experimenting with vCloud as it is offered through Terremark. Although I've been testing software on virtual machines since 1999, this is my first foray into running VMs in the cloud. And I love it.

I spend most of my day testing software for reviews in blogs, websites, and magazines – plus the testing that we at Sarrel Group do directly for software developers. Virtual machines play a major role in my test operations. We have a number of servers running VMware, and the VMs themselves live on a mix of NAS and SAN boxes. This is a huge leap forward from the old racks and racks of physical machines we used to have in the lab at PC Magazine (at one point we had 512 PCs in one lab and 128 in another).

Enter vCloud. Now I don't even have to consolidate VMs onto my servers. I just build them in the cloud, run my tests, and then delete them. In the past two weeks I've learned that I can be just as productive with NO hardware as I was with 640 PCs. Even I, jaded, battle-scarred tech warrior that I am, feel a bit giddy with such power and flexibility at my disposal.

Getting Started with Terremark

The Terremark interface (available via HTTPS) is set up to organize virtual machines (or servers) into rows and groups. Basically, these let you build a grid of VMs so you can keep track of them. You can see the options below:


[Screenshot: row and group options in the Terremark interface]

Right now I don't have to organize my VMs in the cloud because I'm just playing around, but I can see how it would be helpful to create rows for each test project and then group VMs together by function within each row.

Clicking on the "create server" button gave me several choices:



[Screenshot: the "create server" template choices]

This is the stunningly simple part. Using drop-down boxes, I could choose between templates for either a server OS or a server OS plus a database. Then I could choose between Windows and Linux. Once I selected Windows or Linux, I could choose from a variety of server OSes. However, I do a lot of testing on desktop OSes. A typical test project might involve only one or two servers and a dozen workstations. I was disappointed to see that there are no templates for workstation OSes. If I want to install a workstation OS, I can choose "create a blank server" instead, upload an ISO, and install from that. I'll provide more detail about how to do this in a future post.

From here it's really a matter of a few clicks to deploy my new server VM. After clicking next, I could choose how many VPUs and how much RAM I wanted. This plays into billing (you're charged for the resources you use), and that's clearly spelled out in the description.


[Screenshot: choosing VPUs and RAM]

After clicking next, I named the server, created an administrator password, and chose an IP address from the pool provided.



[Screenshot: server name, administrator password, and IP address settings]

From here I clicked next, reviewed my settings, and then deployed the VM. A new VM appeared in my server list. And that's it. In the amount of time it took you to read this (and way less time than it took me to write it), I deployed a new server VM. And if I wanted to, I could blow it away even faster just by right-clicking and selecting delete.
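If you eventually want to automate this create-test-delete churn rather than click through it, the same workflow maps naturally onto a REST-style provisioning call. The sketch below is purely illustrative – the endpoint, field names, and token are all hypothetical, not Terremark's actual vCloud Express API:

```python
import requests  # third-party HTTP client (pip install requests)

API = "https://cloud.example.com/api"           # hypothetical endpoint
HEADERS = {"Authorization": "Bearer MY_TOKEN"}  # hypothetical auth token

# Create a server from a template, mirroring the choices made in the
# UI walkthrough above: template, VPUs, RAM, name, password, and IP.
spec = {
    "template": "windows-2008-web",  # hypothetical template name
    "vpus": 1,
    "ram_mb": 1024,
    "name": "test-vm-01",
    "admin_password": "S3cret!",
    "ip": "10.112.0.5",
}
resp = requests.post(f"{API}/servers", json=spec, headers=HEADERS)
resp.raise_for_status()
server_id = resp.json()["id"]

# ...run the tests against the new VM...

# Blow the VM away when the test run is done.
requests.delete(f"{API}/servers/{server_id}", headers=HEADERS).raise_for_status()
```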

I see a lot of promise here, especially because I have to test while traveling. I have a ton of bandwidth between my lab and the internet, and I typically VPN and then RDP or VNC into my test VMs. Or, if I'm working on a project where performance measurement matters, I'll box up the server and carry it with me. The remote access is OK, but carrying a server with me is, to be honest, pretty lame.

Please join me in a short prayer that vCloud Express lives up to its claims and I never have to schlep a server across the country again.

Matthew D. Sarrel (or Matt Sarrel if we're being familiar) is executive director of Sarrel Group, a technology product testing, editorial services, and technical marketing consulting company. He also holds editorial positions at pcmag.com, eWeek, GigaOM, and Allbusiness.com, and blogs at TopTechDog.

Intercloud vs. Internet: What’s Missing in Cloud Computing?

Steve Jin, VMware R&D

This entry was reposted from DoubleCloud.org, a blog for architects and developers on virtualization and cloud computing.

As more and more clouds go live, it's time to think about how they will need to interconnect and interact. InterCloud is a new term coined for cloud computing, just as Internet was for networking.

Vint Cerf, the "father" of the Internet, said recently that the cloud is much like networking in 1973, when computer networks couldn't connect or interact. He called for open standards for cloud computing so that the InterCloud can become a reality.

It's hard to design standards when people are still trying to reach a consensus on what a cloud is in the first place! The good news is that as an industry we went through a similar process with the Internet, so we can learn from that experience.

The idea is simple: look at the basic building blocks we have for the Internet and think about their equivalents for the InterCloud. Believe it or not, the InterCloud and the Internet share many common characteristics. The following table summarizes some of them.

 

                     Networking                 Cloud Computing
Content              Data                       Computing workload
Cornerstone          Ethernet                   Virtualization
Format               ASN.1                      OVF + ?
End point            Host                       Hypervisor
Protocols            OSI, TCP/IP, etc.          ?
Directory service    DNS                        ?
ID                   IP address/host name       ?
Resource locator     URI/URL                    ?
Interconnectivity    Internet                   InterCloud
Killer use cases     Email, Web                 System provisioning, Dynamic scaling

 

Let me emphasize the key difference between the Internet and the InterCloud: the Internet moves data, while the InterCloud moves computing workloads. With virtualization and other high-level virtual machine technology, a computing workload is essentially data. Viewed in this fashion, the InterCloud can leverage the Internet as a high-level application.
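To make that "workload is essentially data" point concrete: once a VM is captured as an OVF package, moving it between clouds is just a file transfer over ordinary Internet protocols. A minimal sketch, assuming illustrative file names and a made-up destination URL (an OVA is simply a tar archive of the OVF descriptor and its disks):

```python
import tarfile
import requests  # third-party HTTP client (pip install requests)

# Bundle an exported VM (OVF descriptor + disk image) into a single
# archive. An OVA is just a tar file; these file names are illustrative.
with tarfile.open("workload.ova", "w") as ova:
    ova.add("my-vm.ovf")
    ova.add("my-vm-disk1.vmdk")

# Ship the workload across the wire like any other data.
with open("workload.ova", "rb") as payload:
    resp = requests.put("https://cloud.example.com/upload/workload.ova",
                        data=payload)
    resp.raise_for_status()
```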

As you can see from the table above, there remain many question marks where we do not yet have InterCloud equivalents to the Internet's building blocks. That is something we need to think about and work on going forward.

Let me know what you think. Do you have suggestions for equivalents?

Steve Jin is author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and is the chief blogger at DoubleCloud.org.

Vertically Complete Systems with Data Openness: IT’s Next Big Trend?

Steve Jin, VMware R&D

This entry was reposted from DoubleCloud.org, a blog for architects and developers on virtualization and cloud computing.

IBM recently announced a re-organization of its software and hardware business units. The previously separate business units were merged into one – the Systems and Software Group, led by former software chief Steve Mills.

You may recall that IBM did not have a dedicated software group until Lou Gerstner created one 15 years ago to centralize all the software businesses into one business unit. That unit has been IBM's most profitable business. Before that, IBM offered all its software as add-ons to systems like the 390 and AS/400.

Now can we expect IBM to offer hardware systems as add-ons to its software solutions?

Although companies constantly re-organize to streamline their business execution, this reorganization does indicate a big trend happening in the IT industry: computer vendors are striving to own vertically complete stacks, from hardware all the way up to business applications.

IBM is not alone in this trend. Oracle acquired Sun Microsystems for its SPARC servers, Solaris OS, Java, MySQL database, and even tape storage. Together with its database and the business applications acquired from PeopleSoft, Siebel, and many other acquisitions, Oracle is now yet another "IBM" that controls a vertically complete stack. We can expect more IBM-like business model clones to emerge in the coming years.

Is this trend good for customers?

Openness vs. User Experience

Openness means choice, flexibility, and often lower initial cost. But it can also mean sleepless nights putting different IT components together. When there is an issue with your application or solution, whom do you call? Your hardware vendor, OS vendor, middleware vendors, application vendors, or systems integrator? It's hard to pinpoint anyone on the list. This is a pain for customers. Customers like having one throat to choke.

Looking back 30 years, we can mostly blame IBM, because it sold customers most of their computer and software systems. IBM owned all the responsibility for fixing any problem. At least until customers felt a different pain: you have no choice, and therefore no power, in negotiating with Big Blue.

Then customers began to move to the client-server model of computing and open systems with open software. This strategy started to prosper in the 1990s, and openness has been the winning market strategy ever since.

With the openness problem solved, the solution that gave customers more negotiating leverage also shifted the pain back to them. Today we have so many systems, open or not, that the effort required to make them work together is a really big pain. This hurts the user experience.

In the end, users just need applications to support the business. The application is the end, and everything below it is the means. Whatever can do the job better and more economically wins. This is the fundamental truth behind the trend toward vertically complete systems.

Software Openness vs. Data Openness

With everyone in the game being vertically complete, how can vendors differentiate again? And how should customers choose vendors? For customers, the stakes in choosing the right vendor are even higher than before, because they are standardizing their systems on one vendor at a time. Switching costs become the biggest pain and expense.

Still, customers would love to have the flexibility and choice to switch systems. How do they do this?

I think the key is the data. No matter how different systems are, they have to persist data and be able to restart from the point where they shut down. If two systems from two different vendors understand the same set of data, customers can still switch easily from one system to the other.

With complicated systems, many kinds of data may be persisted, including the output of data processing, customization settings, user preferences, and more. It takes time and effort to convert them all from one system to another, and some of the data may never be carried over. For example, if one vendor's algorithm has no equivalent in the other's system, then its tuning parameters are lost.

Virtualized Vertically Complete Systems

With virtualization, vertically complete systems can, and should, be re-defined. For one thing, hardware virtualization provides an abstraction of the hardware, so that your vertically complete system can be packaged as a set of files. This is what I call a virtualized vertically complete system.

The benefits are profound: you can move your vertically complete systems wherever works best, for whatever reason – reliability, performance, cost savings, and so on. This is extremely important in the cloud age, where you cannot move your physical servers around, but you certainly can move your virtualized servers.

The data openness theme doesn't change in the cloud environment. The bar may be even higher, because you may want to change systems more frequently in the cloud than inside an enterprise.

Conclusion

The industry is experiencing a shift from openness to a vertically complete hardware/software appliance approach. Big vendors are re-aligning their operations for this trend.

For customers, it is mixed news. They may get a better user experience but pay the price in vendor lock-in. To best protect their IT investments, customers should demand data openness to preserve their ability to switch systems down the road. Data openness is the only solution to this problem.

In the cloud age, the virtualized vertically complete system inherits the benefits of vertically complete systems, with the additional flexibility and portability that are crucial for cloud computing. It's IT's next big trend.

Steve Jin is author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and is the chief blogger at DoubleCloud.org.

A Big Cloud Challenge: Cross Stack Portability

Steve Jin, VMware R&D

This entry was reposted from DoubleCloud.org, a blog for architects and developers on virtualization and cloud computing.

When you think of portability in cloud computing, you think of how to move application code, data, and workloads. These are mostly horizontal movements within the same level of the software stack – from one IaaS to another, or from one PaaS to another.

There is a more interesting and potentially very important movement that I would describe as "cross stack" portability. Today we don't see cross stack portability unless we re-write the application, which is not what I cover here (although it could be a good business opportunity for companies to explore). Rather, I am talking about how to move your application built on a PaaS to an IaaS vendor, or even to a private cloud. I call it cross stack because the application is moved up or down to a different level in the software stack.

In this blog, I'll focus on portability without code change. I'll discuss three conversions: from PaaS to IaaS, from SaaS to IaaS, and from IaaS to PaaS. Mathematically we could have other conversions – say, from IaaS to SaaS – but those are either not that interesting or not that practical, so I won't cover them here.

From PaaS to IaaS

Here the challenge is how to re-create the PaaS layer on the IaaS. If you use standardized middleware, you should be able to do this mostly by installing exactly the same, or a compatible, set of middleware as used by the PaaS vendor.

The tricky part is that PaaS vendors sometimes recommend that you use a generic service, such as their data services, which may not be easily reproduced anywhere else for technical and business reasons. If you avoid these services, your application's portability increases.
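As a toy illustration of that "exactly the same or compatible middleware" requirement, a pre-flight check on the replacement IaaS machines could diff the versions you install against the stack the PaaS vendor documents. Both stacks below are made up for the example:

```python
# Hypothetical middleware stack documented by the PaaS vendor.
paas_stack = {"java": "1.6.0", "tomcat": "6.0.29", "mysql": "5.1.50"}

# Versions actually installed on the replacement IaaS virtual machines.
iaas_stack = {"java": "1.6.0", "tomcat": "6.0.29", "mysql": "5.1.48"}

# Flag any component whose version drifts from the PaaS baseline.
for name, wanted in paas_stack.items():
    got = iaas_stack.get(name, "missing")
    if got != wanted:
        print(f"{name}: PaaS uses {wanted}, IaaS has {got} - verify compatibility")
```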

From IaaS to PaaS

You have a lot of flexibility in building an application on IaaS. Once you get virtual machines from the IaaS, you can install and configure your application in whatever way you choose. This flexibility may or may not translate to the PaaS.

You have to research middleware compatibility yourself if you want to move your application up the stack. If you find a cloud vendor who can accommodate you, your portability may be good; otherwise you have to re-write your application with some level of code re-use.

The problem is that many PaaS vendors may not even tell you what combination of middleware, at what versions, they use today. Some of them may well use proprietary software that you cannot find anywhere else.

If you have cross stack portability in mind while designing a new system, you can choose the combination of middleware that gives you the best chance of moving your application to PaaS vendors.

From SaaS to IaaS

This is the case in which you want to move your SaaS application in house, behind your firewall. Typically, SaaS vendors provide applications to many customers at the same time; it's this multi-tenancy that helps them achieve economies of scale.

Technically, you could move a SaaS application by converting the full software stack to virtual machines and moving them back into the enterprise. It's the scale of SaaS applications that may make this method impractical. By design, SaaS applications incorporate many architectural elements for scalability that may not be needed for a single enterprise application, and these elements are too hard or too expensive to reproduce inside a typical enterprise.

It's all about design. You can design a SaaS application with cross stack portability in mind, and that can make the move much less painful. Zimbra, for instance, handles the problem gracefully, so that it works both for service providers and for small businesses.

The Challenge

The challenge for the PaaS/SaaS vendors is really simple: can you build a virtual machine, or a set of virtual machines, with all the middleware and applications built in? Companies could use them in house (still paying license fees where applicable) for development and even production. When needed, companies could choose to move them to an external IaaS or to your PaaS/SaaS services. The choice would rest with the users, not with you.

This may sound crazy, but I bet it is really something companies would love to see.

PaaS and SaaS vendors, are you up to the challenge?

Steve Jin is author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and is the chief blogger at DoubleCloud.org.