
Monthly Archives: January 2011

Economics of Cloud Computing – A Different Angle

Massimo Re Ferre’, Staff Systems Engineer – vCloud Architect

A homonym (and anonymous) friend of mine I used to work with in a previous IT life sent me a document exploring cloud economics from a slightly different angle than usual. We often talk about this topic in terms of elasticity, CAPEX vs. OPEX, PAYG (pay-as-you-go) cost models and the like. In this case he talks about the economics of clouds as a function of the costs of "knowing stuff". I found it pretty interesting and thought it was worth sharing.


"From an economic point of view, the model of cloud computing is the latest incarnation of the benefits achieved by providers of IT services as a consequence of the specialization they have attained.

We consider that this business model grows in the market through technology developments in the following stages: innovation, standardization, commoditization, falling prices and spreading.


The keyword for understanding this concept is specialization. It is through this ability, refined over time, that it is possible to lower the average cost of maintaining the data center, at least for medium- and large-sized customers.

Many of you know very well that many items contribute to data center costs. Hereafter we focus our attention on transaction costs, that is, the research costs to find better prices among suppliers, the costs associated with negotiating and executing each transaction, and the cost of technology scouting.

An important contribution to transaction costs is due to uncertainty about the future. In 1937, economist Ronald Coase, who won the 1991 Nobel Prize in Economics for his discovery and clarification of the significance of transaction costs for the institutional structure and functioning of the economy, wrote that it is good to internalize transaction costs as much as possible within the borders of the company.

Coase explained that, below the threshold of its sustainability, every company should try to carry out every transaction internally; but as complexity increases, the company is likely to become inefficient. Indeed, the threshold of sustainability, once reached, marks the limit of the process of internalizing transactions; in other words, the optimal size of the company. If a company grows beyond that limit, the resulting increase in its size may imply diminishing returns on investment and therefore make it more and more expensive to carry out additional transactions within the company. At that point it is better to find opportunities in the market.

Today, the complexity of integrating hardware (servers, storage and networking) with software applications is pushing up transaction costs. In this context the cloud model, from an economic standpoint, is a way to reduce these costs. With this perspective, we observe the evolution of IT through an analogy: the internationalization of trading. Let's assume that a country is comparable to an IT company which must decide whether to develop and run a service internally, or buy it externally. To keep things simple, let's look at what happened to international trading and compare the results to the IT industry, in order to clarify this vision.

We start from the father of modern economics, Adam Smith, who as early as 1776 wrote about the efficiency achieved through the specialization of labor (some of you may know Adam Smith's Pin Factory story).

In detail, Smith argued that if a foreign country can supply something (a commodity) at a cost cheaper than another country could spend to produce it, then it would be better to buy it from the foreign country and focus attention on other tasks where competitive advantage can be created.

With the growth of the demand for IT services from other departments of the company, it becomes necessary to reach a higher level of standardization; only in this way can we lower the cost of producing that service.
As long as the marginal cost of internal (domestic) production is less than the average cost of the outsourced service, companies are likely to avoid exploring the opportunities offered by the market.

But a specialized supplier is always looking to maximize economies of scale, and when the company evaluates the difference in labor costs (make vs. buy), it may turn out to be more convenient to buy the external service.
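The make-vs-buy threshold described above can be reduced to a simple comparison. The sketch below uses hypothetical numbers, purely to illustrate how a provider's falling average price can flip the decision:

```python
# Illustrative make-vs-buy comparison (hypothetical numbers, not from the essay).
# A company keeps a service in-house only while its internal marginal cost
# per unit stays below the provider's average price per unit.

def prefer_in_house(internal_marginal_cost: float, provider_average_price: float) -> bool:
    """Return True when internal production is still the cheaper option."""
    return internal_marginal_cost < provider_average_price

# As the specialized supplier's economies of scale push its price down,
# the same internal cost can move the decision from "make" to "buy".
print(prefer_in_house(8.0, 10.0))  # provider still more expensive: make
print(prefer_in_house(8.0, 6.5))   # provider now cheaper: buy
```

The point of the essay is that the provider's side of the inequality keeps shrinking with specialization, while the internal side also carries the transaction costs discussed earlier.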

In this case we are faced with a "mature" service, that is, one highly standardized and thus very competitive.

In conclusion, it can be argued that each organization has a different level of specialization and hence a different cost to develop a given service. So each company will specialize in developing services in the fields where it has the greatest relative advantages (or the smallest relative disadvantages).

It is clear that only part of the IT business is undertaking this journey; for now it is a phenomenon to be studied in perspective. It should not be seen as a catastrophe: the enormous gain in productivity will have beneficial effects throughout the IT industry thanks to the gain in efficiency and profitability for the various companies.

Today the benefits of the cloud model begin to emerge, supported by economic theory. As Victor Hugo said: "You can resist an invading army; you cannot resist an idea whose time has come.""


I found this writing pretty interesting. There are a few concepts, such as "simplification" and "standardization", that are usually discussed in the industry, but here there is a "business" spin that I found pretty intriguing. It's like knowing that you need something but not knowing why. This piece gets into some aspects of that "why". Of course it only scratches the surface.

The other thing that caught my attention is this "specialization" concept. Talking further with the source, he commented that it's also a function of time. That is to say, the effort of developing and running something internally is not a one-shot deal. It's rather a continuous tuning and innovation that needs to occur, given the pace at which the IT industry is moving, so the "sustainability over time" of the innovating effort is key when evaluating the make vs. buy decision.


Cloud Architecture Patterns: VM Template

By Steve Jin, VMware R&D


Standardize new virtual machine provisioning with templates


Creational pattern


It’s been a pain to create new virtual machines with the right software installed and configured properly. You can always use tools like KickStart to automatically install the operating system and then install other software as needed. But configuring such an environment is not trivial, and it takes a long time from start to finish.

With the rise of virtualization, more virtual machines are provisioned (and decommissioned) than ever before. Installing each new virtual machine from scratch is not the ideal solution.


While virtualization highlights the provisioning problem, it also offers an easy solution: the virtual machine template. You can install and configure every piece of software you will need in a template, and clone it into new instances whenever needed. It's not only easier but also much faster.


With this approach, the challenge becomes how to customize the cloned virtual machine – you don't want a new template for every possible minor variation. These variations could be settings like IP address, virtual devices, memory, disk space, and so on. These are common changes you would like to make, but it all depends on the capabilities of the underlying hypervisor. Features vary across hypervisors, but not by much on the basics.

Things get complicated when:

1. You have dramatically different sets of software that cannot easily be captured in a single template. You can always have a "one-for-all" template that includes the superset of software, which definitely eases management. The downside is that it requires extra disk space, and more importantly, the extra software may expose vulnerabilities you otherwise wouldn't have to worry about.

2. You have to upgrade and patch software. For each patch/upgrade you will have a new template, which is good for operations if you always clone from the latest templates, but it may mean you have to test and certify all of your applications against the new patch/upgrade. If you decide to support older versions of the software mixture, you will have many more templates to manage and more disk space to consume.

Template hierarchy

In general, disk space is not a big concern given advances in storage technologies like de-duplication, which can save a lot when the templates are mostly the same. Still, extra templates drain more money from your budget.

When designing templates that change frequently, you want to consider a hierarchical structure. At the root, you can have a template with the common set of software. Under the root template, you can have a delta template with extra software. When extracted from the template repository, the delta template is combined with the root to form the final template. It's very much like an OO hierarchy, where a child type inherits from the parent type.

The hierarchy is not limited to two layers; you can extend it to multiple layers in accordance with your software hierarchy. You'll need a detailed analysis of your software to do this, of course.
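The inheritance-like combination of root and delta templates can be sketched in a few lines. The template names and software sets below are hypothetical, just to show how "extracting" a template walks up the hierarchy and merges each layer's software:

```python
# Minimal sketch of a template hierarchy: each delta template names its
# parent, and resolving a template merges the software sets of all its
# ancestors, like a child type inheriting from a parent type.
# Template names and contents are illustrative only.

TEMPLATES = {
    "base":    {"parent": None,   "software": {"centos", "openssh"}},
    "web":     {"parent": "base", "software": {"httpd"}},
    "web-php": {"parent": "web",  "software": {"php"}},
}

def resolve(name):
    """Combine a template with all its ancestors into the final template."""
    software = set()
    while name is not None:
        entry = TEMPLATES[name]
        software |= entry["software"]
        name = entry["parent"]
    return software

print(sorted(resolve("web-php")))  # root + intermediate + delta combined
```

A real repository would merge disk images or package manifests rather than Python sets, but the lookup structure is the same at any depth.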

Template authoring

Although you can install everything manually, it's highly recommended that you automate the process with shell scripts or configuration tools like Puppet or Chef. These tools not only help you repeat the process easily whenever needed, but also help you avoid mistakes even when you use the template only once. Configuration tools are complementary to templates, with extra features like continuous configuration compliance checking and enforcement.

The script can also serve as metadata for the template, explaining what gets in and what does not. You don't want to examine the disk image to find out what's included in the template.
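The dual role described above – automation that doubles as metadata – can be sketched as follows. The step list and the `runner` hook are hypothetical; a real script would shell out to an installer or hand the steps to a tool like Puppet or Chef:

```python
# Sketch of scripted template authoring. Because the build steps live in
# code, the script itself is a readable manifest of what the template
# contains - no need to inspect the disk image.

BUILD_STEPS = [
    "install centos-base",
    "install openssh-server",
    "configure sshd PermitRootLogin=no",
]

def build_template(runner=print):
    """Replay every step; `runner` stands in for the real installer call."""
    for step in BUILD_STEPS:
        runner(step)
    return BUILD_STEPS  # the step list doubles as the template's metadata

manifest = build_template()
```

Running the same step list every time is what makes the process repeatable and mistake-resistant, even for a one-off template.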


Creating templates has many benefits, as discussed above. It also creates a management burden – you now have one more thing to manage! It's not only about storing the templates, but also about designing them for efficiency and managing their lifecycles.

Known Uses

VMware vSphere has virtual machine templates that can be provisioned into new VM instances with a fair amount of customization, not only for Linux but also for Windows. Amazon EC2, which is based on Xen, has a similar template called an AMI (Amazon Machine Image) from which new virtual machines can be deployed. To standardize the VM template, the DMTF has released the OVF standard, which has been widely adopted.

Related Patterns

VM Factory: creates new instances based on VM templates.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

Testing Virtacore vCloud Express Beta

Matthew D. Sarrel, Sarrel Group

I’ve been playing around with the newly unveiled beta of vCloud Express from Virtacore for about a week now and this is some pretty cool stuff. I can’t really do all that much right now because the beta doesn’t offer the full functionality that the service will have, but there’s enough to see that the foundation is being laid for what will ultimately become an extremely valuable service.

According to Virtacore, there are between 600 and 700 beta users. They’re planning on stopping the beta and going into production in February.

First, the caveats. I actually like the way Virtacore spells all of this out in an email they sent when they approved my application to join the beta:

As a friendly reminder, this is a beta test. While we're confident the system and platform are stable, we do not make any guarantees to that effect. We recommend you not run production or other mission critical applications on the test platform. Other things to note:

· All data will be wiped clean at the end of the beta on January 28, 2011. Virtacore will have no way to recover that data, so be sure to export any necessary information before that date.

· During the beta we will only have Centos-based cloud servers available. Additional platforms will be available with the full release.

· Each participant will be able to create up to 5 servers. 

· Support will be provided through the ‘Feedback’ area of the console. Support hours are 9AM – 9PM EST, Monday – Friday.

During the test, there will not be any charge for the cloud servers created. Some features within the vCloud control panel are still under development and not functional at this time.  This includes bandwidth/service monitoring, and creating your own templates.  We hope to have these features enabled later during the beta period.

That email also contained a temporary username and password for use during the beta. I logged in and created a new server within minutes.  For now, there’s not all that much I could do, but there are some interesting things I saw in the management console.


If I had to point out what I think is the most exciting thing, I'd call your attention to the tab that says "My Private Cloud". Virtacore tells me this is there because they are going to provide not only the "public" multitenant environment but also a dedicated infrastructure hosted at their data centers. A company could use the private cloud to run mission critical workloads or satisfy specific security and compliance requirements. More interestingly, you could have your development and QA environments run in the public cloud, and then when the application is ready to go live you could move it to the private cloud simply by dragging and dropping it within the management GUI.

Also, the history button (not the history eraser button à la Ren and Stimpy) will show a full audit trail. This is important because you can see who did what and when in your cloud. This is another feature that the other vCloud Express offerings I've seen lack. I think that Virtacore's decision to include a full audit trail (and a private cloud) is an indication of their intention to build an enterprise-quality cloud offering.

I'm looking forward to playing more with Virtacore vCloud Express, especially when it comes out of beta. I hope you're looking forward to reading about it.

Matthew D. Sarrel (or Matt Sarrel) is executive director of Sarrel Group, a technology product testing, editorial services, and technical marketing consulting company.  He also holds editorial positions at pcmag.com, eweek, GigaOM, and Allbusiness.com, and blogs at TopTechDog.

Cloud Architecture Patterns: Cloud Broker

By Steve Jin, VMware R&D


Provide a single point of contact and management for multiple cloud service providers and maximize the benefits of leveraging multiple external clouds.




When you are buying and selling stocks or other securities, you hire a broker to execute the trade on your behalf. One reason for that is convenience. You don’t need to take care of the details of placing orders and working with multiple stock exchanges, and whatever else is required to trade securities.

How about working with multiple cloud service providers? For sure, you can go online to any cloud provider as long as you have your credit card ready. But is the service provider the best fit for your requirements? Do you have a backup plan if you are not satisfied with your service provider? Can you easily switch among your service providers to minimize cost or maximize flexibility? If you are not sure, you may then need something like a cloud broker.


A cloud broker is software that helps users and companies get the benefits of external cloud services. Depending on your requirements, it could be offered as a product so that you can install it inside your enterprise or as a service for which you pay as you go.


Although the cloud market is not as dynamic as the stock market, it does change from time to time. So you will need the most recent market data to make the best decisions as a customer. In that sense, it's better to consume the broker as a service rather than as a product that you may need to update periodically.

Technically, a cloud broker is able to:

1. Work seamlessly with different cloud service providers on behalf of customers. This includes taking care of system provisioning, monitoring, billing, etc. In some sense, it's like service aggregation.

2. Ideally, move workloads among the service providers. No longer are you locked in with a particular service provider.

3. Maximize performance/price ratio of cloud services by shuffling workloads among the providers.

4. Scale VMs beyond one service provider that may not have enough resources. Who says the cloud is unlimited? In theory it is, but in reality every service provider has a limit, even if you don't normally hit it.

With these considerations in mind, the challenge is providing a unified way for customers to use different service providers, be it Amazon, Rackspace, Terremark, or whomever. The tricky part is that while searching for the best deals among the various services, you want to keep the key differentiators of the providers so that you can leverage their comparative advantages when needed. This is a tough tradeoff to make.
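The placement decision at the heart of that tradeoff can be sketched simply: pick the cheapest provider among those that still satisfy a workload's requirements, so that differentiators are filtered on rather than flattened away. Provider names, prices, and capability tags below are invented for illustration:

```python
# Minimal broker placement sketch under assumed data: choose the cheapest
# provider whose capabilities cover what the workload needs, preserving
# each provider's differentiators instead of treating all clouds as equal.

PROVIDERS = {
    "alpha": {"price_per_hour": 0.10, "capabilities": {"linux", "windows"}},
    "beta":  {"price_per_hour": 0.08, "capabilities": {"linux"}},
    "gamma": {"price_per_hour": 0.12, "capabilities": {"linux", "gpu"}},
}

def place_workload(required):
    """Return the cheapest provider whose capabilities cover `required`."""
    candidates = [
        (info["price_per_hour"], name)
        for name, info in PROVIDERS.items()
        if required <= info["capabilities"]
    ]
    if not candidates:
        raise ValueError("no provider satisfies the workload")
    return min(candidates)[1]

print(place_workload({"linux"}))         # cheapest generalist wins
print(place_workload({"linux", "gpu"}))  # a differentiator narrows the field
```

A real broker would add provisioning, monitoring, and billing behind the same interface, but the filter-then-minimize structure is the core of keeping provider differentiators usable.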


Use a cloud broker pattern to:

1. Maximize the benefits of leveraging external service providers;

2. Have unified interfaces while keeping the flexibility of multiple service providers;

3. Scale beyond one service provider whenever needed.


The cloud broker is a layer of indirection between you and the service providers who do the real work, so the quality of the brokerage service affects you. You will also pay for the software or service, and that expense may or may not offset the benefits; it depends on your size, planning, execution, and so on.

With regard to lock-in, you would be free from any specific service provider. But will you be locked in with a cloud broker? Certainly that is possible, too. So how can you avoid broker lock-in? Working with multiple brokers is one way, but it definitely takes effort.

Known Uses

One option is Appirio CloudWorks, a product designed for cloud brokerage. I expect more competing products and services to enter the market as the cloud service market matures.

Related Patterns

Façade VM: the cloud broker pattern is very similar in topology but serves customers rather than service providers. Functionality-wise, they are different as well.

This article was originally posted on www.doublecloud.org. Visit the site for more information on virtualization, cloud computing, and other enterprise technologies.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.