
Monthly Archives: July 2010

BlueLock and VMware vCloud Express Announce Cloud Monkey Finalists!

BlueLock and VMware have
announced the finalists for the vCloud
Express Cloud Monkey contest. The four finalists, selected by online voters
along with BlueLock and VMware, each receive a FlipCam and are now eligible to
compete for the Grand Prize – an engraved iPad!

The Cloud Monkey finalists
will now be using their new FlipCam to create a short recognition video
documenting their use case, which will be highlighted on VMwareTV. The Grand Prize winner will be selected by BlueLock and VMware and announced on September 6, 2010.

The Cloud Monkey contest
kicked off June 16 and concluded July 7, 2010. BlueLock and VMware encouraged
BlueLock vCloud Express beta users to submit the most interesting use cases or
applications they developed on the service. The four finalists were determined
based on cloud applicability, creativity, time savings, and cost savings.

The four finalists are:

* David Demlow and his use case, Off-site Backup with On-Demand Recovery in the Cloud;

* Classroom on the Cloud from James Lofquist;

* Public Cloud Prototyping for Proof of Concepts from Scott Turner;

* Matt R.’s Mail Server.

Congratulations to all of
our finalists. Stay tuned for the FlipCam videos from our Cloud Monkey
finalists and look for them to be posted on the vCloud blog. For more
information on Cloud Monkeys, and to view all of the submissions, check out the
BlueLock contest page. Thanks to all who
entered, voted and participated in the contest. To keep up with all of the
latest information on vCloud, please follow us on Twitter and Facebook!

 

Cloud Architecture Design: Should it be Top-Down or Bottom-Up?

Steve
Jin, VMware R&D

This entry was reposted from DoubleCloud.org,
a blog for architects and developers on
virtualization and cloud computing.

In my last blog, I discussed how to optimize workloads across the cloud. This is based on the assumption that you already have an existing infrastructure. What if you don’t have an existing cloud infrastructure but would like to design one from scratch? Here is what you should be thinking about to get the most from your new cloud.

First let’s take a look at other types of
infrastructures – say a road. When you design a new road, you have to collect
data such as population densities around the area, people’s working schedules,
what types of vehicles will run on the road, and so on. With that information,
you can decide how many lanes you want, what kind of road surface is required,
and so on. You don’t just make up the design specification from scratch, and
lay down eight-lane freeways everywhere.

The same process applies in designing cloud
infrastructure. Unfortunately this is not what we see often today.

Top-down approach

In my previous blog, I said the infrastructure is a means and an application is the end. We need to drive the design of the cloud architecture from the application perspective. This is what I call the top-down approach.

The benefit of the top-down approach is the system efficiency you gain: you create just enough infrastructure for your applications to run without any waste. This gives you the best ROI from a business perspective.

The top-down approach works perfectly for
enterprise private clouds. For service providers, however, it may or may not
work. If the service provider offers specialized services such as online
storage where the application workload pattern is known, then you should
definitely follow the top-down approach.

Bottom-up approach

For service providers who provide generic
services, it’s hard, if not impossible, to know customer workload patterns in
advance. In these cases, you are best served by following a bottom-up approach.
That means designing the cloud infrastructure based on typical applications.

When new applications come in, just mix
them based on their workload patterns as described in this blog. In so doing,
you may still achieve good workload balancing and the best business ROI.

How to go top-down

When using a top-down approach, you want to analyze the workload patterns and quantify them in terms of CPU, memory, networking, storage, and so on. If you cannot easily infer the numbers, pick a similar system, measure it, and then adjust your own design based on the scale ratio.

With the workload numbers in hand, you can translate them into infrastructure-level requirements. To play it safe, you want some allowance for unusual cases. On the networking side, it’s not purely about bandwidth; it’s also about a good topology design that moves network traffic efficiently.
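To make this concrete, here is a minimal sizing sketch in Python. The per-application figures and the 30 percent headroom are hypothetical; the point is simply that each resource dimension is summed across applications and padded with an allowance before being turned into a hardware specification.

    # Minimal top-down sizing sketch: aggregate per-application workload
    # estimates into infrastructure requirements, with headroom for
    # unusual cases. All figures and the 30% headroom are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        cpu_ghz: float       # sustained CPU demand
        memory_gb: float
        network_mbps: float
        storage_gb: float

    apps = [
        Workload("web-tier",   cpu_ghz=40, memory_gb=64,  network_mbps=800, storage_gb=200),
        Workload("database",   cpu_ghz=24, memory_gb=256, network_mbps=200, storage_gb=4000),
        Workload("batch-jobs", cpu_ghz=80, memory_gb=128, network_mbps=100, storage_gb=1000),
    ]

    HEADROOM = 1.3  # allowance for unusual cases

    def size_infrastructure(workloads, headroom=HEADROOM):
        """Sum each resource dimension across applications and add headroom."""
        totals = {"cpu_ghz": 0.0, "memory_gb": 0.0, "network_mbps": 0.0, "storage_gb": 0.0}
        for w in workloads:
            totals["cpu_ghz"] += w.cpu_ghz
            totals["memory_gb"] += w.memory_gb
            totals["network_mbps"] += w.network_mbps
            totals["storage_gb"] += w.storage_gb
        return {k: v * headroom for k, v in totals.items()}

    print(size_infrastructure(apps))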

Now, don’t forget another very important
dimension of workload pattern – timing. If the same workloads and their peaks
are evenly distributed over time, you should be fine. To best design a private
cloud, you have to consider the time element carefully. In fact, in many
enterprises there are some pretty clear patterns with workload over the time.
For example, the accounting system will peak at the close of each fiscal
quarter.
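The toy example below, with made-up hourly profiles, shows why timing matters: two workloads with the same peak demand need far less combined capacity when their peaks do not overlap.

    # Toy illustration of the timing dimension. The hourly profiles are
    # invented for illustration; the accounting load peaks at a different
    # time than the reporting load.

    accounting = [10, 10, 10, 10, 80, 90, 80, 10]
    reporting  = [80, 90, 80, 10, 10, 10, 10, 10]

    combined = [a + r for a, r in zip(accounting, reporting)]

    naive_capacity  = max(accounting) + max(reporting)  # size for both peaks at once
    actual_capacity = max(combined)                     # size for the real combined peak

    print(f"sum of individual peaks: {naive_capacity}")   # 180
    print(f"peak of combined load:   {actual_capacity}")  # 100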

Summary

Although cloud infrastructure design is
mainly about computing infrastructure, we should drive the design from the
applications that run on the infrastructure. This is what I call the top-down
approach.

When the application workload patterns are
unknown, you can go with a bottom-up approach. Even so, you still want to
balance the workload at runtime by mixing complementary applications together
on the same physical resources.

For the private cloud and specialized
public cloud, the top-down approach is preferred. For generic service
providers, bottom-up is usually best. Let me know what you think by commenting
below!

Steve Jin is the author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and the chief blogger at DoubleCloud.org.

Workload Optimization: Is It a Must-have for Cloud Computing?

Steve Jin, VMware R&D

 

This entry was reposted from DoubleCloud.org, a blog for architects and
developers on virtualization and cloud computing.

Cloud computing hasn’t changed the nature
of computing – it just changed provisioning and management. That’s important to
remember because workloads in the cloud are very much similar to what we see in
traditional computing infrastructures. To get the most out of your investment
in cloud services or in your own physical IT infrastructure, you need to
understand how to optimize workloads.

Workload categorization

Typical computing workloads involve four basic parts: computation, memory, networking, and storage. Almost all applications exercise all four parts, but rarely in balanced proportions.

Now let’s quickly review the essential
categories of application workloads:

* CPU-intensive workloads. These applications include scientific computation with significant data crunching, encryption and decryption, compression and decompression, and so forth;

* Memory-intensive workloads. These applications include in-memory caching servers, in-memory database servers, and so forth;

* Network-intensive workloads. These applications are typically Web servers, as well as network load balancers, and so forth;

* Storage-intensive workloads. These applications typically involve file serving, data mining applications, and so forth.
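In practice, a workload lands in one of these categories according to whichever resource it stresses most. A small sketch of that classification, using hypothetical normalized utilization figures, follows:

    # Classify a workload by its dominant resource, using normalized
    # utilization figures (0.0-1.0). The sample numbers are hypothetical.

    def dominant_resource(utilization):
        """Return the resource dimension with the highest relative utilization."""
        return max(utilization, key=utilization.get)

    file_server = {"cpu": 0.15, "memory": 0.30, "network": 0.40, "storage": 0.90}
    cache_node  = {"cpu": 0.25, "memory": 0.85, "network": 0.50, "storage": 0.05}

    print(dominant_resource(file_server))  # storage
    print(dominant_resource(cache_node))   # memory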

What is the problem?

Although cloud computing is supposed to provide unlimited capacity, any specific workload still has to run on a single server or on a cluster of machines, most of which are virtualized.

With the unbalanced nature of individual workloads
in mind, the last thing you want to do is to have the same category of
workloads running on the same set of physical servers that have limited resources.
For example, you don’t want to run all of your file server virtual machines on
one physical server competing for storage IO, and leave their CPU cycles largely
idle. This creates resource competition on the one hand, and resource waste on
the other hand.

Another important aspect is timing. If workloads share the same pattern but their peaks are evenly distributed over time, the overall load is still balanced.

So, what is the solution?

If you already have an infrastructure in place, you can simply mix the different types of workloads together. In this way, you get balanced utilization of the physical resources across CPU, memory, networking, and storage. More importantly, you can stretch the limits by hosting more workloads on the same investment in servers and system software.
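Here is a minimal placement sketch of that idea. The hosts, the normalized capacities, and the greedy policy (put each new workload on the host with the most headroom in that workload's dominant resource) are simplifying assumptions, not a description of any particular scheduler:

    # Mixing complementary workloads: place each new workload on the host
    # with the most remaining capacity in the workload's dominant resource.
    # Hosts, capacities, and workload figures are hypothetical.

    RESOURCES = ("cpu", "memory", "network", "storage")

    hosts = {
        "esx-01": {"cpu": 1.0, "memory": 1.0, "network": 1.0, "storage": 1.0},
        "esx-02": {"cpu": 1.0, "memory": 1.0, "network": 1.0, "storage": 1.0},
    }

    def place(workload, hosts):
        """Greedy placement: pick the host with the most headroom in the
        workload's dominant resource, then deduct its demand from every
        dimension of that host."""
        dominant = max(RESOURCES, key=lambda r: workload[r])
        best = max(hosts, key=lambda h: hosts[h][dominant])
        for r in RESOURCES:
            hosts[best][r] -= workload[r]
        return best

    workloads = {
        "file-srv-1": {"cpu": 0.10, "memory": 0.15, "network": 0.20, "storage": 0.60},
        "file-srv-2": {"cpu": 0.10, "memory": 0.15, "network": 0.20, "storage": 0.60},
        "cruncher-1": {"cpu": 0.70, "memory": 0.20, "network": 0.05, "storage": 0.05},
        "cruncher-2": {"cpu": 0.70, "memory": 0.20, "network": 0.05, "storage": 0.05},
    }

    placement = {name: place(w, hosts) for name, w in workloads.items()}
    print(placement)  # each host ends up with one file server and one CPU cruncher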

This is not a big deal for cloud users, because they still get the same resources under typical cloud service level agreements (SLAs). For service providers, it is a big deal, because a higher ratio of applications to physical investment directly affects the margins of the business. Balanced utilization of workloads can therefore give them a competitive edge over other service providers.

This is much easier said than done. Implementation
is everything. You have to collect enough information on the workload patterns
of all the applications and then calculate the best distribution of these
applications. Based on what you find, you can re-allocate existing applications
using live migration technologies such as vMotion, Storage vMotion, and so on.
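As a rough sketch of what such a calculation might look like, the following finds hosts that exceed a utilization threshold on some resource and proposes moving the heaviest consumer of that resource to the host least loaded on it. The data layout, the threshold, and the heuristic are assumptions for illustration; each proposed move would then be carried out with vMotion or Storage vMotion.

    # Compute a simple rebalancing plan from collected workload data.
    # The 0.85 threshold and the heuristic are illustrative assumptions.

    THRESHOLD = 0.85
    RESOURCES = ("cpu", "memory", "network", "storage")

    def host_utilization(vms):
        """Sum each resource dimension over the VMs running on one host."""
        return {r: sum(vm[r] for vm in vms.values()) for r in RESOURCES}

    def rebalance_plan(cluster):
        """Return a list of (vm, source_host, target_host) migration proposals."""
        plan = []
        for host, vms in cluster.items():
            agg = host_utilization(vms)
            hot = max(RESOURCES, key=lambda r: agg[r])
            if agg[hot] <= THRESHOLD:
                continue
            vm = max(vms, key=lambda name: vms[name][hot])       # heaviest consumer
            target = min(cluster, key=lambda h: host_utilization(cluster[h])[hot])
            if target != host:
                plan.append((vm, host, target))
        return plan

    cluster = {
        "esx-01": {"db-1": {"cpu": 0.2, "memory": 0.5, "network": 0.1, "storage": 0.3},
                   "db-2": {"cpu": 0.2, "memory": 0.5, "network": 0.1, "storage": 0.3}},
        "esx-02": {"web-1": {"cpu": 0.3, "memory": 0.2, "network": 0.3, "storage": 0.1}},
    }

    print(rebalance_plan(cluster))  # [('db-1', 'esx-01', 'esx-02')]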

The algorithm described above is a simple
one, and does not take into account other elements such as workload distribution
over time, isolation of multi-tenants, security and compliance, the criticality
of applications, or system backup. To get this workload optimization to work well
in real life, you have to think through all of these other factors as well.

With the workload optimization system in place, you can then consult it for every new provisioning request. And when the workloads are no longer balanced, you can recalculate and redistribute them for the best utilization. Ta da!

What’s next?

Now, what if you haven’t set up your
infrastructure yet? No problem, in fact you will have more flexibility. I’ll
show you how to do this successfully in my next blog.

Steve Jin is the author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and the chief blogger at DoubleCloud.org.

VMware vCloud wants your opinion. Take our online survey!

Cloud computing gets a lot
of buzz today. But when people talk about cloud, do they mean IaaS, PaaS or
SaaS? Who knows? Here at the vCloud group at VMware, we’re curious to hear what
you think and why you use (or don’t use) a public cloud service. We’re also
interested in hearing what applications and workloads you’re running in a
public cloud.

To gather your feedback, we’ve
created a quick public cloud survey: http://bit.ly/publiccloudsurvey.
Our survey asks why you chose your provider, the type of workloads you’re
running, if you use intermediaries with your cloud solution, and what you
perceive as the biggest benefits or concerns when it comes to cloud.

The survey only takes about
10-15 minutes to complete, and the first 100 participants receive a $5
Starbucks gift card in the mail. Please note, this survey is open to ALL public
cloud users, not just VMware customers.

We look forward to hearing
from you! For more information on VMware vCloud, check us out on Twitter and Facebook.

The Lingua Franca of Security in the Cloud

By Michael Haines, Sr. vCloud Architect (Security)

 

I am sure you are not surprised to hear that
'Security' in the Cloud is one of the hottest issues for organizations wanting
to architect and deploy a Cloud solution and service offering. So, where do you
begin? In this blog I will highlight some of the potential security concerns
you should understand when architecting and offering a Cloud solution or
service:

 

- Physical Security

 

Physical security, though often overlooked, is fundamental to an organization’s overall security, and is therefore a great place to start. After all, if attackers can walk off with a hard disk or a server, they have (at the very least) denied you availability. If a co-worker throws away a DVD containing proprietary information that a criminal could recover, then confidentiality has been lost. If a disgruntled employee can access a key database and change amounts, values, or data, integrity has been lost.

 

- Access Control

 

Access control is a key component of security because it helps to keep unauthorized users out. It is part of what is known as the triple-A process of authentication, authorization, and accountability. Authentication systems based on passwords have been used for many years, and today many organizations even enforce two-factor authentication. Security administrators in the Cloud have more to worry about than just authentication, though. Most employees now have multiple accounts, but there is a way to consolidate them using single sign-on (SSO). I will explain more later in this blog, but it is key.

 

- Security Models

 

The security architecture and model mainly deal with
hardware, software, security controls, and documentation. When hardware is
designed, it needs to be built to specific standards that should provide
mechanisms to protect the confidentiality, integrity, and availability of the
data. The operating systems that will run on the hardware must also be designed
in such a way as to ensure security. Building secure hardware and operating
systems is just the start! Both vendors and customers need to have a way to
verify that hardware and software perform as stated and that both the vendor
and client can rate these systems and have some level of assurance that such
systems will function in a known manner. This is the purpose of evaluation
criteria, which allows the parties involved to have a level of assurance.
Although a robust security model is a good place to start, providing real
security architecture requires that you also have the ability to control
processes and applications.

 

- Network Security

 

In the telecommunications and network security area,
we need to understand both network communications and network security. This
area covers many, many aspects, including TCP/IP, LAN, WAN, wireless
networking, and related security controls to name but a few. Also understanding
the data communication process and how it relates to network security is very
important. Knowledge of remote access, firewalls, network switches, and network
protocols is a must. Understanding network security plays a key role in
preventing network-based attacks. Security should be implemented in layers to
erect several barriers against attackers. A good example of network access
control is a firewall. The firewall can act as a choke point to control traffic
as it ingresses and egresses the network. Another network access control is the
DMZ (demilitarized zone), which establishes a safe zone for internal and
external users to work. Typically, the DMZ contains devices accessible to
Internet traffic, such as Web (HTTP) servers, FTP servers, SMTP (e-mail)
servers and DNS servers.

 

- Application and Systems Security

 

Software plays a key role in the productivity of most
organizations, yet our acceptance of it is different from everything else we
tend to deal with. For example, if you were to buy an item from a manufacturer
that had a defective component, you would expect the manufacturer to recall the
item in question. However, if a user purchases a software product, the
purchaser has little or no recourse. The purchaser could potentially wait for a
patch to be released, or wait for an upgrade, or more drastically just purchase
an alternative vendor’s product! It is imperative that applications are written
well as they are an essential element in providing good security.

 

- Regulatory Compliance

 

Identifying an organization’s information assets is key to providing information security and risk management. This step will also identify critical pieces of security information, as well as policies, procedures, and guidelines. To an organization this is very important, as it lays out how the organization manages its security policies and practices. In turn, these are used as a roadmap that demonstrates the level and amount of governance an organization possesses.

 

- Information Security: Confidentiality, Integrity
and Availability

 

Confidentiality, integrity, and availability define the basic building blocks of any good security initiative. Security threats, attacks, and vulnerabilities are a very serious issue for your organization. Attacks aimed at confidentiality and integrity can give an attacker access to your data; attacks on availability can take the form of denial-of-service (DoS) attacks. These attackers generally follow an attack methodology and pose a real threat. Websites including Yahoo and eBay have been shut down by persistent distributed denial-of-service (DDoS) attacks, which are similar to DoS attacks except that the attack is executed from multiple, distributed agent devices.

 

We also need to take a brief look at what the
information security objectives are. This is a process that organizations rely
on, and is designed to identify, measure, control, and manage the risks to
information and information systems. The five phases of this process are continuous.
The goal of the risk assessment phase is to provide recommendations to develop
a reliable and cost-effective security strategy. Within an organization, some
data will always be at risk.

 

Risk assessors must prioritize risks to determine the
confidentiality level of the data as well as the likelihood and consequences of
the data ending up in the wrong hands. The effectiveness of current controls must
also be evaluated during this phase. These risk ratings are then used to recommend appropriate, cost-effective security controls.

 

The next phase of the process is strategy
development. During this phase, security managers use the information provided
from the risk assessment report to develop a plan to mitigate risks and comply
with internal and external policies and requirements. The security strategy
includes methods for preventing, detecting, and responding to security events.

 

Following are the basic steps involved to develop a
security strategy:

 

1. Assess the Security Risks

2. Develop a Security Strategy

3. Implement your Security Controls

4. Monitor your Security Environment

5. Analyse and further Update your Security Strategy

 

These are the basic security principles you need to
be aware of when looking at security in the Cloud. The following also gives you
an insight into a few of the most common vulnerabilities in the Cloud:

 

- Unsecured Network Interfaces and Networks

- Excess Privileges

- Mis-configuration or Poor Management

- Un-patched Vulnerabilities

 

As I mentioned earlier, one of the areas that is
seeing continued interest and exposure is the ability to provide single sign-on
(SSO) functionality. In this context, what if your organization has an
application that allows access to only authorized users? All users must enter
their credentials to get access to the application in question. While they are
using the application, they find links to other applications. Well, this
presents a potential issue as when they try to access these other links, they
discover they must enter their credentials again to access another application.
Of course, users get very annoyed by this behavior and having to enter their
credentials multiple times. With single sign-on (SSO), users log on to the parent application one time and automatically gain access to the other applications.
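To make the idea tangible, here is a deliberately minimal sketch of token-based SSO: the parent application issues a signed token after a single login, and other applications verify that token instead of prompting for credentials again. Real deployments use standards such as SAML or Kerberos; the shared secret and token format below are purely illustrative.

    # Minimal SSO illustration: one login produces a signed token that
    # other applications can verify. Secret and format are hypothetical.

    import hmac, hashlib, time

    SHARED_SECRET = b"hypothetical-shared-secret"

    def issue_token(username):
        """Parent application: sign the username and an expiry timestamp."""
        expires = str(int(time.time()) + 3600)
        payload = f"{username}|{expires}"
        sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}|{sig}"

    def verify_token(token):
        """Any participating application: check the signature and expiry."""
        username, expires, sig = token.rsplit("|", 2)
        payload = f"{username}|{expires}"
        expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected) or int(expires) < time.time():
            return None
        return username

    token = issue_token("mhaines")   # the user authenticates once
    print(verify_token(token))       # other apps accept the token: 'mhaines'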

 

Summary

 

VMware is very serious about the Cloud and security,
and we provide resources that are available on-line in the areas of security
and compliance. For more information on how to stay up-to-date on securing your
virtual infrastructure, please take a look at the following:

 

Security: http://vmware.com/go/security

- Hardening best practices

- Implementation guidelines

VMware Security Center: http://www.vmware.com/security/

- Security blog and white papers

- Advisories

- Alerts

- Certifications and validations

Compliance: http://vmware.com/go/compliance

- Partner solutions

- Advice and recommendations

Operations: http://viops.vmware.com

- Peer-contributed content

When Should You Use the Cloud? Example Use Cases

By Steve Jin, VMware R&D

This entry was reposted from DoubleCloud, a blog for architects and developers on virtualization and cloud computing.

In my last post, I discussed when not to use cloud services. Basically, you should avoid the cloud for your organization’s core-competency IT systems. Remember, cloud computing is not a silver bullet for everything.

Today I want to share the stories from the other side: when
you should use cloud services. As a rule of thumb, you use cloud services for
your non-core competency IT systems. But, what are the typical non-core
competency systems?

There could be many cases in which you can use cloud
services. Let me go through some of them by sharing example use cases:

Outsourcing projects. If something is outsourced, most likely you don’t consider it a core competency of your business. You can then get the full benefit that public cloud services bring. You can easily set up a workspace that is accessible to both your employees and your contractors, and it’s more secure than opening up your own infrastructure to contractors.

Pilot projects. You want to try something new and don’t want to be limited by a capital budget that has no room for infrastructure experiments. By definition, when you start a pilot project you don’t know whether it will work or not, so it’s natural to “rent” the required infrastructure from service providers. This is especially true for pilot projects that would require buying lots of machines you couldn’t repurpose if the pilot falls through.

Temporary projects. These are projects that run for a short period of time, so it doesn’t make sense to buy the infrastructure. It’s like renting furniture instead of buying it when staging your house for sale. When the house is sold, you ask the staging company to remove the furniture, and you pay based on the furniture and the total rental time, exactly the same idea as cloud services.

Demo and training scenarios. You may have a pre-sales event for which you want a full demo for a week. Ditto for a training class. These are perfect use cases for leveraging cloud computing.

Extension for dramatic workloads. Notice that I said dramatic, not dynamic, workloads. Not every application has a steady workload pattern over time. Some applications experience 100 times their normal workload at peak times. For example, a website that needs 1,000 servers for uploading photos during Super Bowl weekend might only require a handful the rest of the time. Would you invest in buying 1,000 servers that are used one weekend a year? I doubt it. That is a perfect use case for cloud services.

You may have other scenarios that are good candidates for cloud services. Please feel free to share them in the comments below.

VMware engineer Steve Jin is the author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and the chief blogger at DoubleCloud.org.

 

 

When NOT to Use the Cloud?

By Steve Jin, VMware engineer

This entry was
reposted from DoubleCloud, a
blog for architects and developers on virtualization and cloud computing.

During the July 4th long weekend, I got the chance to read the book “Delivering Happiness” by Tony Hsieh. It’s a great book with many great ideas and lessons he learned from LinkExchange and Zappos.

So, how does this relate to cloud
computing?

Here’s what Tony wrote…

“It was a valuable lesson. We learned that
we should never outsource our core competency. As an e-commerce company, we
should have considered warehousing to be our core competency from the
beginning. Outsourcing that to a third party and trusting that they would care
about our customers as much as we would was one of our biggest mistakes. If we
hadn’t reacted quickly, it would have eventually destroyed Zappos.”

In this paragraph Tony summarized the
lesson from contracting eLogistics for inventory services in Kentucky, which turned
out to be a mess and almost killed Zappos when cash flow became a big issue.

From a business perspective, cloud services are not much different from inventory services: both are about outsourcing. The high-tech nature of the cloud doesn’t change the business nature of cloud services. What happened to Zappos could potentially happen to any cloud customer.

The question your business needs to ask,
and answer, becomes “Is IT your core competency?” Or, more specifically, “What
part of IT is – and is not – your core competency?”

At face value, they seem like easy
questions. When you dig down to the details of your business, however, you may
find surprising or even counter-intuitive answers.

Let me give you an example.

Most of us know PayPal, now part of eBay, as an online payment service company. You may think its core competency is the web site itself. That’s true, but the most important technical competency is not the web site. According to founder Max Levchin, it’s the ability to judge risk and protect against fraud. That is why they built a software package called IGOR, which brought fraud down to one-tenth of a percent. Without this core competency, they would have quickly gone out of business, just like competitors such as MoneyMail, which had 25 percent fraud and burned through cash too quickly.

To decide whether an IT system is your core
competency, consider these questions:

Is your system a secret weapon in your competition with others in the market? By a secret weapon, I mean: does IT give your business a competitive advantage in increasing sales, lowering costs, or creating goodwill in the community?

Will you lose big money when the system is offline? You should look both internally, at the impact on daily operations and planning, and externally, at the impact on sales and customer satisfaction. Some of these impacts can be subtle.

Are you OK with the system being breached?
That includes illegal access, data corruption, and data stolen by malicious
hackers. This may involve cash losses and direct legal consequences in the short
term, as well as loss of customer trust in the long term.

Will you have enough control of your systems
to fulfill different business unit needs? IT systems have to support the growth
of the business. Do you need to have control of the systems in terms of
software configurations, maintenance, support, SLAs, and so forth? The last
thing you want is for your systems to become a hurdle to your growth.

Once you decide whether a piece of your IT
is a core competency or not, you can plan accordingly. In future blogs, I will
talk about when you should consider using cloud services.

VMware engineer Steve Jin is the author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and the chief blogger at DoubleCloud.org.

System Provisioning in Cloud Computing: From Theory to Tooling (Part II)

By Steve Jin, VMware engineer

 This entry was reposted from DoubleCloud, a blog for architects
and developers on virtualization and cloud computing.


What’s Different in Cloud Computing?

If you look at the variety of software at different levels of the stack, you will find that the lower you go in the stack, the fewer choices there are. When you move up to middleware, you have more choices, and at the application layer the number of possibilities grows exponentially.

If we draw this as a diagram, it looks like an inverted pyramid:

[Figure: inverted pyramid of software variety, from the few operating system choices at the narrow bottom to the many applications at the wide top]

Installing an OS or Cloning from template?

With the use of virtualization in cloud computing, the demand for OS provisioning tools will be much lower than before. For one thing, the OS plays a less important role for applications: most of the time it doesn’t matter which OS is there, as long as there is one. That makes it possible to consolidate OSes into several standardized flavors.

On the other hand, virtual machines are really just a bunch of files. You can save the standardized OSes as templates, or machine images in Amazon terms. When you provision a new machine, the OS comes with it, so there is little need to install it yourself.
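As one possible illustration, here is a minimal sketch of provisioning a new machine by cloning a template rather than installing an OS, written against the open-source pyVmomi vSphere bindings; the vCenter address, credentials, and template name are hypothetical placeholders.

    # Clone a new VM from an existing template so the OS comes along for
    # free. Connection details and names are hypothetical placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    def find_vm(name):
        """Look up a VM or template by name in the inventory."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        try:
            return next(vm for vm in view.view if vm.name == name)
        finally:
            view.Destroy()

    template = find_vm("centos-gold-template")
    spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=True)

    # The clone arrives with its OS already in place; no installation step.
    task = template.Clone(folder=template.parent, name="web-01", spec=spec)

    Disconnect(si)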

Still, service providers may see many
different types of standard OSes even though each user just uses a handful. The
service providers therefore still need to use OS provisioning tools to accommodate
all the different requirements of their customers.

What about the extra storage space? Good question. Overall, the footprint of the OS images is relatively small and manageable for service providers, who can share the same images across different users. Also, modern storage de-duplication software has significantly reduced actual storage use, especially when the OSes are mostly the same (for example, Linux OSes of different flavors and versions).

When it comes to the format of virtual machine templates, you want to follow the OVF specification, which is a DMTF standard supported by the major players.

How About Middleware?

When it comes to middleware, things can get
tricky. There are definitely more combinations than with
operating systems. It may or may not be a good idea to pre-install the middleware inside
a machine template.

The deciding factor is really how much you use the particular middleware. More often than not, you have a pre-defined set of middleware for a particular purpose, for example building Java-based enterprise Web applications. You can install all the middleware components in a virtual machine image. It’s not only easier to deploy; it can also be tested and used as a gold image to standardize on within the organization. That also saves time maintaining, patching, and upgrading the related middleware, which lowers costs and improves system security.

There are certain cases in which you have a very specific combination of middleware. Given the rarity of usage, you may either use configuration tools or install it manually; decide by whichever is more convenient. When you do something just once, you probably won’t bother learning tools like Puppet. But if you are already familiar with one of these tools, why not use it?

Applications?

Applications have the most variety in terms of both numbers and functionality. In reality, you may not want to create a virtual machine image for each application. That’s especially true when you are still developing the application: you don’t yet have a stable application from which to build a virtual machine image.

Can you wait for the stable release of the application? In theory, it’s possible. In reality, it’s usually not an option: you have to deploy and test it in the staging environment continuously, on every daily build or even every check-in.

To facilitate this continuous integration and deployment, you want to use an automation framework, as I mentioned earlier. Besides application provisioning, you can also automate testing with the same framework.
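As a small, hedged sketch of what such automation might look like, the following pushes a freshly built artifact to a staging VM over SSH with the paramiko library and runs a smoke test; the host, paths, and commands are hypothetical, and a real setup would more likely drive similar steps from a CI server or a framework such as Puppet.

    # Minimal deploy-and-test automation against a staging VM. The host,
    # paths, service name, and health URL are hypothetical placeholders.

    import paramiko

    def deploy_and_test(host, user, key_file, artifact, remote_path):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_file)
        try:
            # Push the freshly built artifact to the staging machine.
            sftp = client.open_sftp()
            sftp.put(artifact, remote_path)
            sftp.close()

            # Restart the service and run a smoke test in one shot.
            cmd = "sudo service myapp restart && curl -sf http://localhost:8080/health"
            _, stdout, _ = client.exec_command(cmd)
            return stdout.channel.recv_exit_status() == 0
        finally:
            client.close()

    if deploy_and_test("staging.example.com", "deploy", "/home/deploy/.ssh/id_rsa",
                       "build/myapp.war", "/opt/myapp/myapp.war"):
        print("deployment passed the smoke test")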

Could life really be this easy? Yes, that is the benefit of using a framework. However, no pain, no gain: the pain is setting up the framework and writing and testing the commands. Once the process is in place, the installation procedure is not easy to change, yet freezing it is not an option for most projects, especially in the early stages, so you have to maintain the automation as the procedure evolves.

As I mentioned in my previous blog on cloud application architecture, you want to design your applications to be as stateless as possible so that you can standardize them for massive deployment. If that is the case, you can build a virtual machine with your application installed on the fly. If you choose this approach, you should incorporate building the virtual machine into your release process, and deploy that virtual machine instead of pushing applications to virtual machines. Tools like VMware Studio can assist you with that.

Summary

System
provisioning in cloud computing is an important aspect of system architecture,
application design and operation. As virtualization gets more and more popular
as the enabling technology for cloud computing, we will see more use of virtual
machine images/templates for system provisioning, including operating systems
and middleware. The standardized templates provide not only operational
efficiency but also high quality, pre-qualified software stacks for building
cloud applications.

Application provisioning
frameworks will still play an important role in system provisioning and other
activities such as lifecycle management, automated testing, and more. It’s not
only a tool for system operation, but also an essential tool for continuous
system integration.

VMware engineer Steve Jin is the author of VMware VI & vSphere SDK (Prentice Hall), founder of the open source VI Java API, and the chief blogger at DoubleCloud.org.