
Tag Archives: enterprise

Getting Started with vCloud Director, VMware Remote Console Proxy and VMware Remote Console Plugin (Part 2)

By Michael Haines, vCloud Architect

In Part 1 of this post, I introduced the components of the VMware Remote Console Proxy and how they work with vCloud Director. In Part 2, I'll cover resiliency of the remote console proxy, what ports are required, operations available to the user, and run through troubleshooting and tools.

7. How Resilient is the Remote Console Proxy?

The VMware Remote Console Proxy Plugin actually opens four different connections to the Console Proxy for a single Remote Console session (three for the vCenter server, one for the ESX/ESXi server). The Console Proxy is stateless, and therefore the Load Balancer can route each of those four connections to different Console Proxies and the system would still work perfectly fine. If a connection is dropped for some reason (e.g. the Console Proxy dies), then the VMware Remote Console Proxy automatically tries to re-establish it and the Load Balancer can route it through a different Console Proxy. The user should not notice anything but a small delay.
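That reconnect behaviour can be sketched as a simple retry loop. This is an illustrative Python sketch, not actual VMRC plugin code: `connect` stands in for whatever re-opens one of the four HTTPS connections, and the attempt/delay values are arbitrary assumptions.

```python
import time

def with_reconnect(connect, max_attempts=4, delay=1.0):
    """Call connect(); if the connection drops (e.g. the Console Proxy
    behind the Load Balancer dies), retry so that a fresh attempt can be
    routed to a different, healthy Console Proxy. Because the Console
    Proxy is stateless, any proxy can serve the re-established session;
    the user should only notice a small delay."""
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)
```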

8. What ports are required to be open for the Remote Console Proxy to function correctly?

The vCD Console Proxy Plugin communicates with the vCD server only on port 443 via the Console Proxy. That is the only port that is required to be open and accessible from the Internet.

9. What operations are available to a user of the Remote Console Proxy?

The following functionality is exposed by the VMware Remote Console Proxy and presented and consumed in VMware vCloud Director:

  • If the Virtual Machine (VM) is powered off and you connect the VMware Remote Console Proxy to it, it will implicitly power it on.
  • Power Off the Virtual Machine (VM).
  • Reset the Virtual Machine (VM).
  • Suspend the Virtual Machine (VM).
  • If the Virtual Machine (VM) is suspended, it will implicitly resume it on initial connection, similar to power on.
  • Remote MKS interactions, including grabbing / un-grabbing input, and explicitly sending Ctrl+Alt+Del to the guest.
  • Full screen support.
  • Remote device support for CD-ROMs and floppies that exist on the client machine, whether physical or backed by file images.
  • Technically, it also lets you connect these devices on the server side.
  • If the Virtual Machine (VM) temporarily disconnects, such as during a revert to snapshot, or the MKS connection is lost, the VMware Remote Console will try to re-connect.

The VMware Remote Console Proxy's behaviour is to automatically power on or resume the Virtual Machine (VM) if it is powered off or suspended when you open a console to it. However, vCD allows you to open a console to a Virtual Machine (VM) only if the VM is powered on. This behaviour is enforced by vCD, not by the VMware Remote Console Proxy.

10. Troubleshooting

Here are some of the most common things to check if for some reason you are not able to use the Remote Console Proxy:

  • Does `netstat -nape | grep 443` show a java process listening on the IP Address that you specified as the (internal) console proxy IP Address when running the configure script? This should also show port 443 open on the HTTP service IP.
  • Can you telnet to port 443, 902, and 903 on the vCenter server from the vCD server?
  • Can you telnet to the public console proxy IP on port 443 from outside of your Load Balancer?
  • Do the VMware Remote Console Plugin logs, which exist on the client, tell you anything? On Windows clients, the VMware Remote Console Plugin logs are typically found in C:\Documents and Settings\LOGINUSER\Local Settings\Temp\vmware-LOGINUSER. The file names follow the pattern of vmware-LOGINUSER-VMWAREVMRCPID.log and vmware-VMWAREVMRCPID-mks-LOGINUSER-VMWAREREMOTEMKSPID.log. On Unix clients, the VMware Remote Console Plugin logs are typically found in /tmp/vmware-LOGINUSER. Again, the file names follow the pattern vmrc-VMWAREVMRCPID.log and VMWAREVMRCPID-mks VMWAREREMOTEMKSPID.log.
  • Do the Console Proxy logs tell you anything? The Console Proxy logs all its activity in the same way as the rest of the vCD server. Its logs can therefore be found in <vCD dir>/logs/vcloud-container-info.log and vcloud-container-debug.log.

11. Tools

The following Python script can be used to get a Virtual Machine (VM) console through the VMware Remote Console Proxy. The script expects curl and vmware-vmrc to be in your path, and has currently been tested on Linux and Windows. The latest version of the script, 1.0.2, works on Windows by using Python and curl compiled for Windows; on Windows we have tested it both with and without the Cygwin shell. You may see the following error when you run Vmrc-mks.py:

$ python Vmrc-mks.py 
Traceback (most recent call last):
  File "Vmrc-mks.py", line 13, in <module>
    import argparse
ImportError: No module named argparse

This is because the script requires Python 2.6 or later and the argparse package, which is not always installed with the default distribution. The argparse package can be found at:

The script performs the following:

  • Logs in to vCD
  • Requests a screen ticket
  • Parses the mks://… URL and extracts the host, VM moref, and ticket
  • Decodes the URL-encoded ticket
  • Passes the proper arguments to vmrc
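The URL-parsing and ticket-decoding steps can be sketched in a few lines of Python. The exact layout of the mks:// URL assumed here (host, then VM moref, then the encoded ticket in the path) and the sample ticket string are illustrative assumptions; consult the actual script for the authoritative parsing.

```python
from urllib.parse import unquote, urlparse

def parse_mks_url(mks_url):
    """Split an mks:// screen-ticket URL into host, VM moref, and the
    URL-decoded ticket. The path layout is an assumption for illustration."""
    parsed = urlparse(mks_url)
    host = parsed.netloc
    # Assume the path carries the VM moref followed by the encoded ticket.
    moref, encoded_ticket = parsed.path.strip("/").split("/", 1)
    return host, moref, unquote(encoded_ticket)

# Hypothetical example URL (host and ticket are made up):
host, moref, ticket = parse_mks_url(
    "mks://vcd-cell.example.com/vm-265481682/cst-ABC%2F123%3D--tkt")
print(host, moref, ticket)  # the %2F / %3D escapes decode to "/" and "="
```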

Script help:

usage: vmrc-mks.py [-h] [-v] -c CREDENTIALS [--curl CURL] [--vmrc VMRC]
                   acquire_ticket_uri

Script to process and acquire the VM screen ticket from the vCloud REST API.

Positional arguments:

  acquire_ticket_uri    URI of the acquireTicket action, e.g.
                        https://<vCD Cell Host>/api/v1.0/vApp/vm-265481682/screen/action/acquireTicket

Optional arguments:

  -h, --help            show this help message and exit
  -v, --verbose         verbose messages
  -c CREDENTIALS        credentials in format user@Org:password
  --curl CURL           point to curl executable, default is "curl"
  --vmrc VMRC           point to vmrc executable, default is "vmware-vmrc"

Example invocation:

vmrc-mks.py -c vcloud@system:vcloud

Providing the VMware Remote Console client (VMRC) functionality in vCD allows you to access multiple vCenter and ESX/ESXi servers from a single location. This is useful, as it allows cloud providers to "hide" the structure of the vCenter servers and ESX/ESXi servers where the Virtual Machines (VMs) are located. In addition, the VMware Remote Console clients are stateless and therefore provide both scalability and availability.

Getting Started with vCloud Director, VMware Remote Console Proxy and VMware Remote Console Plugin

By Michael Haines, vCloud Architect

The Remote Console Proxy allows a tenant (a vSphere Remote Console client) using VMware vCloud Director (vCD) to access a vApp (VM) and open and present the console of that vApp (VM).

So, why do we need to have a Remote Console Proxy at all? In the context of VMware vCloud Director, we have a product that is designed to work over the public Internet. For security reasons, we need an intermediate communication module, the Remote Console Proxy. The security benefit for the Service Provider is that they do not have to put their vSphere infrastructure directly on the Internet, and the consumers (tenants) of such a service gain access through an additional security layer such as a firewall.

Here’s How it Works:

The VMware Remote Console clients (also referred to as VMRC) communicate with the vSphere servers using a custom protocol developed by VMware. However, cloud users will more than likely be behind corporate firewalls that limit the possible network communication only to selected protocols and thus custom protocols would not be allowed. To resolve this limitation, the Remote Console Proxy allows the entire Remote Console communication to be tunnelled over the HTTPS protocol. This approach uses the standard port 443 and can be passed via corporate HTTPS proxies if needed.
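The tunnelling trick itself relies on the standard HTTP CONNECT method. The sketch below is a generic illustration rather than VMRC code: it shows how a client asks a corporate HTTPS proxy to open a raw tunnel; once the proxy answers 200, the custom protocol flows through the socket as opaque encrypted traffic on port 443.

```python
import socket

def open_https_tunnel(proxy_host, proxy_port, dest_host, dest_port=443):
    """Ask an HTTP proxy to open a raw tunnel to dest_host:dest_port via
    CONNECT. Returns the connected socket; the caller then speaks its own
    (TLS-wrapped) protocol through it."""
    sock = socket.create_connection((proxy_host, proxy_port))
    request = (f"CONNECT {dest_host}:{dest_port} HTTP/1.1\r\n"
               f"Host: {dest_host}:{dest_port}\r\n\r\n")
    sock.sendall(request.encode("ascii"))
    status_line = sock.recv(4096).decode("ascii", "replace").splitlines()[0]
    if " 200 " not in status_line + " ":
        sock.close()
        raise ConnectionError(f"proxy refused tunnel: {status_line}")
    return sock  # tunnel established; custom traffic flows through here
```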

Below I answer the top 11 questions about the key components of VMware vCloud Director and the Remote Console Proxy that you need to understand.

1. What are the components of Console Proxy (CP)?

VMware Remote Console Plugin

The VMware Remote Console Plugin provides the scripted interface to launch the VMware Remote Console. The plugin is provided as an ActiveX control for IE and a Mozilla plugin for Firefox. It launches vmware-vmrc.exe on Windows and vmware-vmrc on Unix, which renders the Virtual Machine (VM) console. vCloud Director uses client-side JavaScript to call into the plugin. The VMware Remote Console Plugin also queries the proxy settings from the browser. Note: the VMware Remote Console Plugin caches browser proxy settings, so if you make changes you need to restart the browser for the plugin to reload the settings. The VMware vCloud Director Remote Console is the vSphere 4.1 Remote Console plus vCloud-specific features (browser proxy support and HTTPS tunnelling for MKS traffic). vCD allows you to open the console of a Virtual Machine (VM) only if the VM is powered on; vCD, and not the VMware Remote Console, enforces this behaviour.

VMware Remote Console

The VMware Remote Console can be thought of as a lightweight VIM client (VIM stands for VI Management, where VI is VMware Infrastructure or Virtual Infrastructure) that, in this case, is spawned by VMware vCloud Director. It provides MKS and device interactions with a single remote Virtual Machine (VM) that lives on a vSphere server. (For virtual machines residing on a VMware ESX/ESXi server, console connections are offered through the VMware MKS ActiveX control, which is downloaded through a secure channel when you try to view a live VMware virtual machine.) As such, the Remote Console can talk to a Virtual Machine (VM) residing directly on an ESX/ESXi server or go through the vCenter server. It also has some limited management functionality for the Virtual Machine (VM) it is connected to.

Note: The Remote Console only operates on powered-on Virtual Machines (VMs). This is enforced by vCD, which means that vCD does not allow you to open the console of a powered-off Virtual Machine (VM). However, the Remote Console itself does allow opening the console of a powered-off VM; its behaviour is to automatically power it on.

Remote Console Proxy (CP)

The Remote Console Proxy performs three distinct functions:

a) Provides a single entry point. A VMware vCloud Director installation works with a large number of vCenter servers and ESX/ESXi servers and therefore the Virtual Machines (VM) can be located on many different hosts. The vCD clients are not aware of that however – they communicate only with the Console Proxy in order to open Remote Consoles. It is the only visible entry point for Remote Console communication from the viewpoint of the vCD clients. The Console Proxy is responsible for redirecting the requests to the correct vCenter server and ESX/ESXi servers.

b) HTTPS communication. The VMware vCloud Director clients communicate with the Console Proxy only via HTTPS on port 443. This communication can be channelled through a client's HTTPS proxy as well if needed. The Console Proxy converts the incoming HTTPS communication to the protocols specific to the vCenter server and ESX/ESXi servers.

c) Security. The Console Proxy provides an additional layer of VMware vCloud Director specific security on top of the standard vCenter server security. The Console Proxy assists with the protection of customer Virtual Machines (VMs) in a multi-tenant environment. In this case it ensures that a client in one organization does not get access to the Virtual Machines (VMs) of another organization. The Console Proxy also protects the vCenter and vSphere servers from denial of service attacks: the Console Proxy relays connections to the vCenter and ESX/ESXi servers only for clients that have already authenticated to the VMware vCloud Director server. Other clients are denied access, and as a result the vSphere servers cannot be subjected to connections from anonymous users.

Cipher Suites

The Console Proxy accepts only FIPS-140 compliant cipher suites. These suites use only "Triple-DES (3DES)", a symmetric cipher derived from the "Data Encryption Standard (DES)" and based on a 64-bit block, and "AES", the "Advanced Encryption Standard", which is also a symmetric algorithm.

The following is the specific list of supported Console Proxy cipher suites:

  • TLSv1:DHE-RSA-AES128-SHA – ENABLED – STRONG 128 bits
  • TLSv1:DES-CBC3-SHA – ENABLED – STRONG 168 bits
  • TLSv1:AES128-SHA – ENABLED – STRONG 128 bits
  • SSLv3:DHE-RSA-AES128-SHA – ENABLED – STRONG 128 bits
  • SSLv3:DES-CBC3-SHA – ENABLED – STRONG 168 bits
  • SSLv3:AES128-SHA – ENABLED – STRONG 128 bits
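To see what such a restriction looks like in practice, here is a small Python sketch that builds a client TLS context limited to (roughly) the suites above. The cipher names are standard OpenSSL names; note that recent OpenSSL builds gate the 3DES/SHA-1 suites behind a lower security level, hence the `@SECLEVEL=0` assumption. This is an illustration, not the Console Proxy's actual configuration mechanism.

```python
import ssl

# OpenSSL names corresponding to the suites listed above. On recent OpenSSL
# builds the 3DES and SHA-1 suites are disabled at the default security
# level, so @SECLEVEL=0 is appended to make them selectable at all.
CONSOLE_PROXY_CIPHERS = "DHE-RSA-AES128-SHA:DES-CBC3-SHA:AES128-SHA:@SECLEVEL=0"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers(CONSOLE_PROXY_CIPHERS)

# get_ciphers() reports what the context will actually negotiate
# (TLS 1.3 suites are always listed separately by OpenSSL).
names = {c["name"] for c in ctx.get_ciphers()}
print(sorted(names & {"DHE-RSA-AES128-SHA", "DES-CBC3-SHA", "AES128-SHA"}))
```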

Note: A description of FIPS-140 can be found here:


The Remote Console Proxy logs the important events in the logs/vcloud-container-info.log (in the vCD cell). Detailed information about its operations is logged in logs/vcloud-container-debug.log (if it is enabled).

2. How does the Remote Console Proxy Work?

The Remote Console Proxy runs as a process on the VMware vCloud Director Cell and communicates with the vCenter server on port 443 and with the ESX/ESXi host on ports 902 and 903. The VMware Remote Console Plugin, which runs in the client browser, communicates with the Remote Console Proxy only on port 443, tunnelling the MKS traffic (the 902/903 traffic) over HTTPS to the Console Proxy.

In terms of the flow, a user creates an HTTPS session to the VMware vCloud Director Portal via a load balancer (and is authenticated, etc.). Note that the MKS traffic is already encrypted at this point, so we do not need to encrypt it again: the initial handshake is SSL, and the traffic after that is encrypted in a custom way.

The VMware Remote Console Plugin talks only to a single address, that of the Console Proxy or its Load Balancer, and only on port 443, possibly through a Load Balancer. If present, the Load Balancer directs the incoming HTTPS connections only to the Console Proxy. It is the Console Proxy's responsibility to direct the connection to the correct vCenter server or ESX/ESXi server and to convert the HTTPS connections to MKS connections on ports 902/903 if needed.

Note that the VMware vCloud Director Remote Console Plugin detects whether the client has an HTTPS proxy (using the IE settings), and talks to the Console Proxy through that HTTPS proxy if there is one.

As you will note from the above, the vSphere Remote Console and the vCD Remote Console Proxy functionality work differently. The differences are:

  • The vCD Remote Console Proxy plugin uses the proxy settings configured in the browser to connect to the Remote Console Proxy.
  • The Remote Console Proxy presents itself as a vCenter server and ESX/ESXi server to the vCD Remote Console Plugin.
  • The ticket used by the vCD Remote Console Proxy is augmented to figure out which real vCenter server and ESX/ESXi server it needs to communicate with.

3. How does VMware vCloud Director actually Remote Console to the ESX/ESXi Console?

The flow of operations goes as follows:

  • The VMware Remote Console Proxy logs in to the vCenter server using the clone ticket that was provided to it.
  • The VMware Remote Console Proxy acquires an MKS ticket for the Virtual Machine (VM) from the vCenter server and gets the address of the ESX/ESXi where the VM is running.

The VMware Remote Console Proxy then connects to the ESX/ESXi host using the MKS ticket. On ESXi this happens in one pass, only on port 902. On ESX, the VMware Remote Console Proxy first gets another MKS ticket on port 902 and then connects on port 903; only then does it establish a VNC-like connection that allows it to display the remote console of the Virtual Machine (VM).

4. How does the Remote Console Proxy Work without a Load Balancer?

Diagram: vCD-Console-Proxy-Architecture without a LB:


The browser interacts with the VMware vCloud Director Portal using the HTTP service and requests a console session ticket. The VMware vCloud Director Cell responds to the client with a console session ticket that includes the IP Address of the console proxy to connect to. The VMware Remote Console Plugin then connects to that IP Address from the console session ticket. If you do not have the VMware Remote Console Plugin installed, you will be prompted to install it (please see the diagram vCD-Remote-Console-Plugin). Once the VMware Remote Console Plugin is installed on the client browser, the console session starts between the VMware Remote Console Plugin and the Console Proxy.

Diagram: vCD-Remote-Console-Plugin:


5. How does the Remote Console Proxy Work with Load Balancing HTTP Traffic?

Diagram: vCD-Console-Proxy-Architecture with a LB:


In this scenario, the browser interacts with the VMware vCloud Director Portal using the HTTP service, but this time the communication is performed through the Load Balancer, and again requests a console session ticket. The VMware vCloud Director Cell responds to the client with a console session ticket that includes an IP Address to connect to. The VMware Remote Console Plugin then connects directly to that IP Address from the console session ticket. If you do not have the VMware Remote Console Plugin installed, you will be prompted to install it (please see the diagram vCD-Remote-Console-Plugin). Once the VMware Remote Console Plugin is installed on the client browser, the console session starts between the VMware Remote Console Plugin and the Console Proxy. In this scenario, all HTTP traffic is load balanced.

6. How does the Remote Console Proxy Work with the HTTP and Remote Console Proxy Load Balancer?

Diagram: vCD-Console-Proxy-Architecture with HTTP and Load Balancer:


In the above architecture, the browser interacts with the HTTP service through the load balancer and requests a console session ticket. On acquiring the session ticket, the vCD cell responds with a ticket that includes the IP address of one of the console proxies to connect to. The VMware Remote Console Plugin then connects to that external IP from the ticket. Finally, the console session takes place between the VMware Remote Console Plugin and the console proxy through the load balancer.

In Part 2 of this post, I’ll cover resiliency of the remote console proxy, what ports are required, operations available to the user, and run through troubleshooting and tools.

When Your Data Center Has Packed Its Bags

By John Ellis, Chief vCloud Architect at BlueLock


I recently grabbed lunch with a friend of mine whose company had just moved their data center several states away.

Physical Data Center Migrations such as these are truly epic quests: one hopes that servers come off the truck in roughly the same shape they possessed when they were placed on it. Once the gear is brought into the new data center, you try to re-assemble all the building blocks in the same manner as the original data center.

Carefully and in the proper order, one powers on servers one-by-one hoping that disks haven't been jostled and all networks were reconstructed correctly. Of course the data center reconstruction didn't go entirely to plan and the poor guy spent most of his holiday trying to get his services to start up once again. Physical servers and networks can be very delicate items that require a good deal of precision to move, no matter if it is three yards or three states.

Virtual Datacenter Migrations

In October I had to perform a similar, albeit smaller, move migrating the infrastructure for our development team into our brand new datacenter. Forty servers running our collaboration environment, testing environment, quality assurance, pre-production and rapid prototyping needed to move along with the independent networks and firewalls for each. While there were many dependencies to manage between servers and a long list of LAN restrictions and security policies, I was still able to perform the entire migration within four hours. When I was ready to power everything up I didn't have to reconfigure a single application.

What made my October migration so smooth was not based on any planning or forethought at all – I had not performed much of either. The migration I managed dealt entirely with cloud-based infrastructure while my friend was dealing with bare-bones physical hardware. While he moved disks and chassis, I just moved bits and bytes.

There are several facets of vCloud Datacenter that afford organizations greater IT agility, but the function that has saved me the most sweat and tears has been the portability between vCloud Datacenters. I can now organize all my inter-dependent servers together as virtual applications, bound by their networks and the security policies that define them.

VMware vCloud Director in Action

These vApps reside as the primary components of vCloud Director and can easily be exported in the standard Open Virtualization Format (OVF). If I ever need to move my QA environment to a new data center (or a new country) I can easily freeze-dry my virtual machines, networks, firewall rules, guest customization rules, hardware definitions and even end-user licensing agreements into a readily portable format and transfer it wherever I wish. No more loading trucks with servers, no more plugging in CAT5e cables. I now move only virtual server files to a new home whenever I feel the need. I can even move servers onto my local laptop for testing by loading the OVFs into VMware Workstation.

Portability is just one requirement of agile IT. A nimble infrastructure requires rapid provisioning and flexible management. vCloud Datacenter enables this flexibility through two interfaces: the vCloud Director management console and the vCloud API.
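As a sketch of what that portability looks like in code, the snippet below reads the VirtualSystem entries out of a heavily trimmed, hypothetical OVF descriptor using Python's standard library. Real vCloud Director exports carry far more sections (networks, EULAs, guest customization), and the VM names here are made up.

```python
import xml.etree.ElementTree as ET

# A trimmed, illustrative OVF envelope; real exports are much richer.
OVF_SAMPLE = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="qa-web-01"/>
  <VirtualSystem ovf:id="qa-db-01"/>
</Envelope>"""

OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

def list_virtual_systems(ovf_xml):
    """Return the ovf:id of every VirtualSystem in an OVF descriptor."""
    root = ET.fromstring(ovf_xml)
    return [vs.get(f"{OVF_NS}id") for vs in root.iter(f"{OVF_NS}VirtualSystem")]

print(list_virtual_systems(OVF_SAMPLE))  # prints ['qa-web-01', 'qa-db-01']
```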

For my next post, let's walk through an example of how much more quickly one can have new hardware running within vCloud Datacenter.

VMware Partner Exchange 2011 – Get Ready!

By David Davis

One of my favorite yearly events (besides VMworld) is VMware Partner Exchange (PEX). Much more intimate and more "top secret" than VMworld, the partner-only event offers unique advantages.

Who Attends VMware Partner Exchange?

PEX doesn't have as many attendees as VMworld and I’ve found it to be lower stress and a more concentrated dosage of VMware Kool-Aid than its larger cousin.

First, VMware Partner Exchange is only open to VMware partners. Large partners like EMC, Cisco, Dell, VCE, NetApp, and HP have their own training days before PEX begins (called "pre-conference boot-camps"). Many sessions and keynotes revolve around becoming the most successful (and profitable) VMware partner you can be.

If you are a VMware partner then there is no doubt that you need to be in Orlando, Florida between February 8-11.

If you aren't a VMware partner but you or your company sells, supports, integrates, develops related software, or manufactures related hardware, then you need to first become a partner and then make sure you attend this invaluable event. There is even an option to become a free VMware partner called a Technology Alliance Partner (TAP).

Finally, if you are a VMware customer then continue to learn more about PEX, and ask your local VMware partner about PEX when they return. Also, make sure you have VMworld 2011 in Las Vegas on your calendar for August 29-September 1, 2011 because that is the event for you.

Five Reasons to Attend VMware Partner Exchange

If your company is or can be a VMware partner then here are five reasons why you need to attend PEX:

1. Learn how to make your company more successful – you'll find keynotes and sessions that educate, empower, and motivate you to be the best partner you can be (more on sessions below).

2. Gain the training on new technology that you must have to be successful – prior to the conference you'll find VMware training boot-camps that cover vSphere Design, View 4.5 Design Best Practices, and desktop virtualization. Additionally, you'll be able to perform VMware labs to gain the hands-on experience you'll need to implement VMware's products quicker and easier.

3. Network with the best in the virtualization world – VMware executives like Paul Maritz and Steve Herrod will be there, as well as well-known names in the virtualization world like John Arrasjid (VCDX001), Duncan Epping (VCDX007), and John Troyer (VMware Social Media Evangelist).

4. Get certified – VMware will be offering a free certification training boot-camp on the VMware Technical Sales Professional (VTSP) and there are special sessions on VCP and VCDX. Last year, I paid $150 extra to attend John Arrasjid's VCDX preparation workshop (which is available this year as well). Plus, you can attempt your VCP, VCAP-DCA, or VCAP-DCD certification exam, onsite at PEX. Learn more about certifications at PEX here.

5. Go to Disneyworld – Okay, don't tell your boss about this one but PEX is inside the Disneyworld campus and if you have an extra $80 you could go to one of the parks before or after the conference. (Remember – it's the happiest place on earth.)

Must See PEX Sessions

I've already built my schedule for PEX and you'll see it below. While I have booked every session that I possibly can, there are some speakers and sessions that I consider "must see". They are:

  • John Arrasjid – covering vCloud BC/DR solutions (TECH-BC-202) & vCloud Architecture Design Workshop (TECH-CLD-201)
  • David Hill – Private vCloud Architecture Technical Deepdive (TECH-CLD-300)
  • John Troyer – Social Media Optimization for Marketing (SAL-MKT-200)
  • Brenton Badger – Tier 1 Applications in VMware vCloud Director (TECH-CLD-203)
  • Pang Chen – VMware vCloud and vCloud Director 101 (TECH-CLD-101)
  • Thomas Christensen – Don't be afraid of the Cloud (SAL-CLD-100)
  • Ryan Sweet – vCloud APIs bringing new integration partner opportunities (TECH-CLD-204)
  • And many more vCloud-related sessions.


The public content catalog for PEX 2011 is located here.

See you there!

I'll be staying at the Disney Coronado hotel for PEX from February 7-11. For anyone reading this who is attending – I would be glad to meet you there. I'll be attending as much training and as many sessions as I can, but I also plan to make time to do some labs and network with as many people as I can. Additionally, I'll be blogging and doing video interviews. If you are at PEX and would like to meet or do an interview, let me know via Twitter (@davidmdavis). Look for more upcoming posts on VMware PEX 2011!

Learn more and register for VMware Partner Exchange 2011 here.

Economics of Cloud Computing – A Different Angle

Massimo Re Ferre’, Staff Systems Engineer – vCloud Architect

A homonym (and anonymous) friend of mine I used to work with in a previous IT life sent me a document exploring cloud economics from a slightly different angle than usual. We often talk about this topic in the scope of elasticity, CAPEX vs. OPEX, PAYG (pay as you go) cost models and things like that. In this case he talks about the economics of clouds as a function of the costs of "knowing stuff". I found it pretty interesting and I thought it was worth sharing.


"From an economic point of view, the model of cloud computing is the latest incarnation of the benefits achieved by providers of IT services as a consequence of the specialization they have achieved.

We consider that this business model grows in the market through technology developments in the following stages: innovation, standardization, commoditization, falling prices, and diffusion.


The keyword to understand this concept is specialization. It is through this ability, refined over time, that it is possible to lower the average cost of maintaining the data center, at least for medium and large customers.

Many of you know very well that many items contribute to data center costs. Hereafter we focus our attention on transaction costs: the research costs of finding better prices among suppliers, the costs associated with negotiating and executing each transaction, and the cost of technology scouting.

An important contribution to transaction costs comes from uncertainty about the future. In 1937, the economist Ronald Coase, who received the Nobel Prize in Economics in 1991 for his discovery and clarification of the significance of transaction costs for the institutional structure and functioning of the economy, wrote that it is good to internalize transaction costs as much as possible within the borders of the company.

Coase explained that, below the threshold of its sustainability, every company should try to carry out every transaction internally, but as complexity increases the company is likely to turn inefficient. Indeed, once the threshold of sustainability is reached, it marks the limit of the process of internalizing transactions; in other words, the optimal size of the company. If a company grows beyond that limit, the resulting increase in its size may imply diminishing returns on investments and therefore make it more and more expensive to carry out additional transactions within the company. At that point it is better to look for opportunities in the market.

Today, the complexity of integrating hardware (servers, storage and networking) with software applications is pushing up transaction costs. In this context the cloud model, from an economic standpoint, is a way to reduce these costs. With this perspective, we can observe the evolution of IT with an analogy: the internationalization of trade. Let's assume that a country is comparable to an IT company which must decide whether to develop and run a service internally, or buy it externally. To keep things simple, let's look at what happened to international trade and compare the results to the IT industry, in order to clarify this vision.

We start from the father of modern economics, Adam Smith, who in 1776 already wrote about the efficiency achieved through specialization of labor (some of you may know the Adam Smith’s Pin Factory story).

In detail, Smith argued that if a foreign country can supply a commodity at a cost cheaper than another country could spend to produce it, then it is better to buy it from the foreign country and focus attention on other tasks where a competitive advantage can be created.

With the growth in demand for IT services from other departments of the company, it becomes necessary to reach a higher level of standardization; only in this way can we lower the cost of producing that service.
As long as the marginal cost of internal (domestic) production is less than the average cost of the outsourced service, companies are likely to avoid exploring the opportunities offered by the market.

But a specialized supplier is always looking to maximize economies of scale, and when the company evaluates the difference in labor costs (make vs. buy), it may turn out to be more convenient to buy the external service.

In this case we are faced with a "mature" service, that is, one that is highly standardized, and thus very competitive.

In conclusion, it can be argued that each organization has a different level of specialization and hence a different cost to develop a given service. So each company will specialize in developing services in the field where it has the greatest relative advantages (or smallest relative disadvantages).

It is clear that only part of the IT business is undertaking this journey; for now it is a phenomenon to be studied in perspective. It should not be seen as a catastrophe: the enormous gain in productivity will have beneficial effects throughout the IT industry due to the gains in efficiency and profitability for the various companies.

Today the benefits of the cloud model begin to emerge, supported by economic theory. Victor Hugo said: "You can resist an invading army; you cannot resist an idea whose time has come."


I found this writing pretty interesting. There are a few concepts, such as "simplification" and "standardization", that are usually discussed in the industry, but here there is a "business" spin that I found pretty intriguing. It's like knowing that you need something but not knowing why. This piece gets into some aspects of that "why". Of course it only scratches the surface.

The other thing that caught my attention is this "specialization" concept. Talking further with the source, he commented that it's also a function of time. That is to say, the effort of developing and running something internally is not a one-shot deal; it's rather a continuous tuning and innovation that needs to occur given the pace at which the IT industry is moving. So the "sustainability over time" of the innovating effort is key to evaluating the make vs. buy decision.


Cloud Architecture Patterns: VM Template

By Steve Jin, VMware R&D


Standardize new virtual machine provisioning with templates


Creational pattern


It's always been a pain to create new virtual machines with the right software installed and configured properly. You can use tools like Kickstart to automatically install the operating system and then install other software as needed. But configuring such an environment is not trivial, and it takes a long time from start to finish.

With the rise of virtualization, more virtual machines are provisioned (and decommissioned) than ever before. Installing each new virtual machine from scratch is not the ideal solution.


While virtualization highlights the provisioning problem, it also offers an easy solution: the virtual machine template. You can install and configure every piece of software you will need in a template, and clone it to new instances whenever needed. It's not only easier but also much faster.


With this approach, the challenge now becomes how to customize the cloned virtual machine – you don't want a new template for every possible minor variation. These variations can be settings like the IP address, virtual devices, memory, disk space, and so on. These are common changes you would like to make, but what is possible depends on the capabilities of the underlying hypervisor. Different hypervisors' features may vary, but not by much on the basics.
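To make the clone-then-customize idea concrete, here is a minimal Python sketch. All names here are hypothetical illustrations, not any hypervisor's actual API: the template carries the shared settings, and only the per-instance variations are overridden at clone time.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VMSpec:
    """Hypothetical VM settings that typically vary per clone."""
    name: str
    ip: str
    memory_mb: int = 1024
    disk_gb: int = 20

def clone_from_template(template: VMSpec, **overrides) -> VMSpec:
    """Clone the template, changing only the per-instance settings."""
    return replace(template, **overrides)

# Shared configuration lives in the template; IP and name vary per clone.
template = VMSpec(name="web-template", ip="0.0.0.0", memory_mb=2048)
vm = clone_from_template(template, name="web-01", ip="10.0.0.11")
```

The clone inherits the template's memory and disk sizing while getting its own identity, which is exactly the split between "template" and "customization" described above.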

Things get complicated when:

1. You have dramatically different sets of software that cannot easily be captured in a single template. You can always have a "one-for-all" template that includes the superset of software; it definitely eases management. The downside is that it requires extra disk space, and more importantly, the extra software may expose vulnerabilities you otherwise wouldn't have to worry about.

2. You have to upgrade and patch software. For each patch/upgrade you will have a new template, which is good for operations if you always clone from the latest template. It may mean you have to test and certify all of your applications with the new patch/upgrade. If you decide to support older versions of the software mixture, you will have many more templates to manage, as well as more disk space to pay for.

Template hierarchy

In general, disk space is not a big concern given advances in storage technologies like de-duplication, which can save a lot when the templates are mostly the same. Still, extra templates drain more money from your budget.

When designing templates that change frequently, you want to consider a hierarchical structure. At the root, you have a template with the common set of software. Under the root template, you can have a delta template with extra software. When extracted from the template repository, the delta template is combined with the root to form the final template. It's very much like an OO hierarchy, where a child type inherits from its parent type.

The hierarchy is not limited to two layers; you can extend it to multiple layers in accordance with your software hierarchy. You'll need a detailed analysis of your software to do this, of course.
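As a rough illustration of the idea (not any vendor's actual template format), the root-plus-delta combination can be modeled as a layered merge, with later layers overriding earlier ones, just as a child type overrides its parent:

```python
def resolve_template(*layers):
    """Combine a root template with delta layers into a final template.
    Later layers win, mirroring child-overrides-parent inheritance."""
    final = {}
    for layer in layers:
        final.update(layer)
    return final

# Hypothetical software manifests: package name -> version.
root = {"os": "centos-5", "agent": "1.0"}
web_delta = {"httpd": "2.2", "agent": "1.1"}  # adds a web server, upgrades the agent

final = resolve_template(root, web_delta)
```

Extending to three or more layers is just a matter of passing more deltas, which is why the analysis of your software hierarchy matters: each layer boundary should match a real grouping of software.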

Template authoring

Although you can install everything manually, it's highly recommended that you automate the process with shell scripts or configuration tools like Puppet or Chef. These tools not only make the process easy to repeat whenever needed, but also help you avoid mistakes even when you use the template only once. Configuration tools are complementary to templates, with extra features like continuous configuration compliance checking and enforcement.

The script can also serve as metadata for the template that explains what gets in and what does not. You don’t want to examine the disk image to find out what’s included in the template.
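A minimal sketch of that idea, assuming a hypothetical yum-based CentOS template: the manifest drives the install commands, and because it records exactly what goes into the image, it doubles as the template's metadata.

```python
# Hypothetical manifest: (package, version) pairs baked into the template.
MANIFEST = [
    ("httpd", "2.2"),
    ("postgresql", "8.4"),
]

def install_commands(manifest):
    """Render the install steps from the manifest. Reading the manifest
    tells you what is in the template without inspecting the disk image."""
    return ["yum install -y %s-%s" % (pkg, ver) for pkg, ver in manifest]
```

In a real authoring pipeline these commands would be executed (or handed to a tool like Puppet or Chef) inside the template VM before it is sealed; here they are only rendered as strings.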


Creating templates has many benefits, as discussed above. It also creates a management burden – you now have one more thing to manage! It's not only about storing the templates, but also about designing them for efficiency and managing their lifecycles.

Known Uses

VMware vSphere has virtual machine templates that can be provisioned into new VM instances with a fair amount of customization, not only for Linux but also for Windows. Amazon EC2, which is based on Xen, has a similar template called the AMI (Amazon Machine Image), from which new virtual machines can be deployed. To standardize the VM template, the DMTF has released the OVF standard, which has been widely adopted.

Related Patterns

VM Factory: create new instances based on VM templates.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

Testing Virtacore vCloud Express Beta

Matthew D. Sarrel, Sarrel Group

I’ve been playing around with the newly unveiled beta of vCloud Express from Virtacore for about a week now and this is some pretty cool stuff. I can’t really do all that much right now because the beta doesn’t offer the full functionality that the service will have, but there’s enough to see that the foundation is being laid for what will ultimately become an extremely valuable service.

According to Virtacore, there are between 600 and 700 beta users. They’re planning on stopping the beta and going into production in February.

First, the caveats. I actually like the way Virtacore spells all of this out in an email they sent when they approved my application to join the beta:

As a friendly reminder, this is a beta test. While we're confident the system and platform are stable, we do not make any guarantees to that effect. We recommend you not run production or other mission critical applications on the test platform. Other things to note:

· All data will be wiped clean at the end of the beta on January 28, 2011. Virtacore will have no way to recover that data, so be sure to export any necessary information before that date.

· During the beta we will only have Centos-based cloud servers available. Additional platforms will be available with the full release.

· Each participant will be able to create up to 5 servers. 

· Support will be provided through the ‘Feedback’ area of the console. Support hours are 9AM – 9PM EST, Monday – Friday.

During the test, there will not be any charge for the cloud servers created. Some features within the vCloud control panel are still under development and not functional at this time.  This includes bandwidth/service monitoring, and creating your own templates.  We hope to have these features enabled later during the beta period.

That email also contained a temporary username and password for use during the beta. I logged in and created a new server within minutes.  For now, there’s not all that much I could do, but there are some interesting things I saw in the management console.


If I could point out what I think is the most exciting thing, I'd call your attention to the tab that says "My Private Cloud". Virtacore tells me that this is there because they are going to provide not only the "public" multitenant environment but also a dedicated infrastructure hosted at their data centers. A company could use the private cloud to run mission-critical workloads or satisfy specific security and compliance requirements. More interestingly, you could have your development and QA environments run in the public cloud and then, when the application is ready to go live, move it to the private cloud simply by dragging and dropping it within the management GUI.

Also, the history button (not the history eraser button à la Ren and Stimpy) shows a full audit trail. This is important because you can see who did what, and when, in your cloud. This is another feature that the other vCloud Express offerings I've seen lack. I think Virtacore's decision to include a full audit trail (and a private cloud) is an indication of their intention to build an enterprise-quality cloud offering.

I'm looking forward to playing more with Virtacore vCloud Express, especially when it comes out of beta. I hope you're looking forward to reading about it.

Matthew D. Sarrel (or Matt Sarrel) is executive director of Sarrel Group, a technology product testing, editorial services, and technical marketing consulting company.  He also holds editorial positions at pcmag.com, eweek, GigaOM, and Allbusiness.com, and blogs at TopTechDog.

Connecting Terremark vCloud Express VM to the Internet and Installing Apps

By David Davis

In part one of this series, I covered why vCloud Express is so appealing for SMBs and large enterprises alike. From there, I showed you how to get started with vCloud Express by creating your first virtual machine. Once we had our VM up and running, there were still a few tasks left to perform. You'll want to configure outbound Internet access, inbound VPN access, and inbound Internet access, and install your applications. Let me show you how.

Outbound Internet Access

With Terremark vCloud Express, you are provided a block of private IP addresses for your "virtual data center". In my case, I was given a private IP network. As you can see in the graphic below, I assigned one of those private IPs to my first server.


As I add more servers, they would be able to communicate on this internal LAN by default.

But what if I want to download applications or patches from the Internet? Does this VM on the private LAN have outbound Internet access? The answer is yes, by default outbound NAT'ed Internet access is configured. I went to the Network tab in vCloud Express and could see my assigned public Internet address, as you see in the graphic below.


However, there is no INBOUND Internet access to my new VM, just as you would have with a home/SMB NAT router.

Inbound VPN Access

Once your VM is up and running, the first thing you will want to do is connect to it via RDP to configure it and begin installing your applications. This is available by default if you use Terremark's VPN Connect – a private SSL VPN. Of course, this VPN can be used for more than just RDP to a Windows server. You could use the SSL VPN for SSH or SCP to a Linux VM, or FTP to any VM to transfer the apps that you need to install.

To connect to your virtual datacenter via the SSL VPN, click VPN Connect.

A new browser window will popup and you will sign in to the Cisco SSL VPN client (make sure that you select SSL VPN from the drop-down menu).

If this is the first time you have used it, this will launch the Cisco AnyConnect VPN Client. When you are done, you'll have a new Windows system tray icon for this client. If you click on it, you can see your VPN IP address and status.

Once connected, you are on the same network as your vCloud VM. That means that you can RDP to the IP address of the VM, like I did here.


Notice how I'm connected via RDP to the same private 10-net IP address that my VM was assigned, above.

Inbound Internet Access

At some point, you will want to configure inbound Internet access to your vCloud VMs. This could be just to RDP to a public IP address for management but, more than likely, it is to allow your new VM to be, for example, a public Internet web server (or any other application port you would want to open).

To do this, you'll go to the Network tab and you need to do two things:

1. Create a new Service

2. Create a new Node

In my mind, the service is the NAT rule allowing traffic inbound, and the node is the server (or servers) joined to that NAT rule to complete the inbound access. Note that these are NOT firewall rules. There is a whole separate configuration in Terremark vCloud Express called Security Services, which is essentially your virtual data center's firewall configuration.
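A toy model of this service/node relationship may help; the names and IPs below are made up for illustration (203.0.113.x is a documentation address range), and Terremark's actual API is not shown here:

```python
# service = inbound NAT rule (public IP, protocol, port);
# node = backend server joined to that rule.
services = {}

def create_service(name, public_ip, protocol, port):
    """Register an inbound rule; it accepts no traffic for a server yet."""
    services[name] = {"public": (public_ip, protocol, port), "nodes": []}

def create_node(service_name, server_ip, server_port):
    """Join a backend server to an existing service, completing the path."""
    services[service_name]["nodes"].append((server_ip, server_port))

# Mirror the RDP walk-through: public TCP/3389 mapped to a private VM.
create_service("RDP", "203.0.113.10", "TCP", 3389)
create_node("RDP", "10.0.0.5", 3389)
```

The point the model makes is the same as the text: a service with no node routes nowhere, and only the service-plus-node pair completes inbound access.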

Let's say that we want to configure inbound public RDP access to our new VM. This way, we don't have to connect to the VPN before we can use RDP (note that this could be a security concern for some). To do this, we would first create a new service by clicking on Create Service in the Network tab.


From here, the Create Internet Service wizard comes up. I opted to use the default Internet IP (but I could have requested a new one). Then I chose TCP as my protocol and port 3389, as that is the port for RDP. I named the service, you guessed it, "RDP". I accepted that I will have to pay $0.01 more per hour for inbound network traffic from the Internet, and here's what it looked like:


Next, to be able to actually RDP, I needed to map this service to a Node. I selected the new Internet service and clicked on Create Node.


This brought up the Create Node Service wizard where I filled out the server name, server IP, and server port, as you see below.


And here is what we have…


Finally, I tested this inbound RDP NAT by using Remote Desktop on my local PC and going to the external Internet IP address I mapped to my internal private IP. Here are the results of my RDP attempt:


Notice the IP address that I was able to RDP to after I created the service, created the node, and mapped them together.

You would want to administer the vCloud VM either through RDP to the private IP using the SSL VPN or via RDP to the public IP address.

Installing Applications in your vCloud Express VM

So how do I get applications on my vCloud VM? You could have chosen to install your Windows VM with SQL Server already installed – that's one option. It doesn't appear that local or remote ISO mounting is supported at this time. The recommended way to get apps onto your VM is to FTP (or use some other file transfer protocol) to the VM and then install them from there. I can see installing an FTP server on the VM (opening up the service and mapping it to a node), FTP'ing ISO files to the VM, mounting them with something like Daemon Tools, and installing from there.

Besides downloading applications directly from the software provider's website on the Internet, another option to copy applications over to the new VM is to use RDP and map your local drives. From there you could copy or run anything (but, of course the performance is going to be very limiting). Here is a network drive from my house, mapped through RDP, as seen by the vCloud Express VM.


From here, I could copy over Veeam Backup (an application that I had on my local network drive) and install it in the vCloud VM. However, with the upload speed of my local Internet connection, it would be much faster to just download that application from the software company to the vCloud VM, directly (I was actually getting 6MB per second download speeds from the Internet to the vCloud VM – amazing!)


Some of you may be wondering what this is costing me. Well, I've only been trying this for 24 hours but I checked my bill and, so far, I have only built up $0.70 in charges (yes that is 70 cents) and I'm sure that I can afford that 🙂


Plus, I really like the resource utilization screen…


In summary, I can see so many uses for this "VM in the cloud". I could use it to replicate and store data from my local storage, I could run my lab environment inside it, or use it as an Internet web server. I'm sure that IT admins out there have a long list of use cases for this easy to use and affordable virtual environment. What about you?

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal. He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com including the popular vSphere video training package. Learn more about David at his blog or on Twitter and check out a sample of his VMware vSphere video training course from TrainSignal.com.

Cloud Architecture Patterns: Façade VM

Steve Jin, VMware R&D


Provide a single point of contact for a large-scale system consisting of many virtual machines so that they are viewed as one giant VM from outside



Also Known As

Giant VM


When a system becomes big, you need multiple VMs to support the workload. For ease of use, external users don't want to manage separate connections to each of the virtual machines. Who wants to remember a list of IPs or DNS names for a service? Nor can you expect your users to pick the least-busy VM so that workloads stay balanced across your cluster. And to scale your application when the overall workload increases, you want a seamless way to add new capacity without notifying others.

Finally, if you offer a public service, you don’t want to allocate a public IP address for each of your VMs. These days, public IPs are scarce resources and may cost you money.


To solve these problems, you want to designate a VM as the façade of your cluster of VMs. From an external perspective, users or applications can see only one giant virtual machine providing service.

The façade VM gets allocated a well-known IP address and registered with DNS servers. If it’s a publicly available service, the façade VM gets one public IP address.

Behind the façade VM, each of the other VMs is assigned a private IP address and made known to the façade VM. When a request comes in, the façade VM can quickly process it and forward it to backend VMs for further processing, as shown in Figure 1.


Figure 1 Façade VM and backend VMs

As you can see from Figure 1, two participants are involved:

  • Façade VM: its IP and service ports are well known to everyone;
  • Backend VM (worker VM): these play different roles, fed with data from the façade VM up front. There can be many instances, all hidden from external view.

The criteria for deciding which backend VM to forward a request to include:

1. The functionality. The backend servers can be divided into different roles to serve specific requests. The service can be identified by a port number for fast processing. Ideally the servers should be uniform for easy development and management.

2. The workload. Among VMs with the same role, the façade VM can send a request to the least-busy VM. This can be based simply on the total number of outstanding requests, not necessarily the real VM workload you could find from the hypervisor hosting it. Most of the time, this approach is good enough.
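The least-busy heuristic can be sketched in a few lines; this illustrates the request-counting idea only, not production load-balancer code, and the backend addresses are invented for the example:

```python
class FacadeVM:
    """Route each request to the backend with the fewest outstanding
    requests -- a simple least-busy heuristic based on counts alone."""

    def __init__(self, backends):
        self.active = {b: 0 for b in backends}  # backend -> in-flight requests

    def route(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def done(self, backend):
        """Call when a backend finishes a request."""
        self.active[backend] -= 1

facade = FacadeVM(["10.0.0.11", "10.0.0.12"])
```

A real façade would also need the health monitoring described below (dropping dead backends and registering replacements), which this sketch omits.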

For load balancing, the façade VM can delegate the traffic routing work to existing high performance load balancer appliances in data centers while still maintaining the management responsibilities.

The façade VM should monitor the health of the backend VMs. When any of them dies, the façade VM should remove it and add a new VM.


Use the façade VM to:

  • Group VMs together like a single giant VM;
  • Simplify user experience of VM clusters;
  • Achieve better availability of your application;
  • Balance workloads to different VMs running in the backend;
  • Scale your application seamlessly by adding more backend VMs;
  • Save public IP addresses.


While it's a good idea to have a single point of contact, it's also a risk, as it can become a single point of failure. For mission-critical applications, you will want a hot standby for the façade VM. At a minimum, you need automatic restart for the façade VM.

For public services, it can be hard for developers and system administrators to directly access the backend VMs – for example, using SSH. The related ports are mostly closed for security reasons. Even if they are open, you cannot connect to a particular backend server, because there is no one-to-one mapping from public IPs to the private IP of the VM you're interested in. In that case, you most likely have to use a VPN so that you can "see" the private IP addresses of all the backend VMs.

Known Uses

Most cloud service providers offer load balancing features so that you can design the façade VM easily without affecting performance. Amazon EC2, for example, offers load balancing features.

Not all load balancing mechanisms are the same. Terremark vCloud Express, for example, provides a unique feature in which you can map each port of a public IP address to a group of virtual machines. This allows maximum use of a public IP address.

Related Patterns

VM Pool: you can get VMs from VM pool during peak hours and return them during off peak hours.

Stateless VM: you can leverage this for the backend servers.

This article was originally posted on www.doublecloud.org. Visit the site for more information on virtualization, cloud computing, and other enterprise technologies.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

Thinking Differently about Scalability with Cloud Computing

The following post is the first in a series on cloud computing and the enterprise by John Ellis, Chief Architect for vCloud at BlueLock.


As a software architect, developing solutions in the public cloud has made my life much easier. I can spin up new servers in mere minutes and occupy only ethereal, virtual real estate. I can only imagine how huge an eye roll I would have received for a capital expense report spanning fourteen physical servers – but today I can spin up fourteen virtual servers and never leave the comfort of my operational budget.


Scalability in the Cloud

The move from physical to virtual makes economic sense by casting aside physical hardware, yet it also makes design and development sense when considering infrastructure architecture. Since the number of servers you can provision is no longer bound by the physical size of your server closet, a whole new way of scaling your infrastructure can be unlocked.

Public cloud hosting providers such as BlueLock offer a unique value to the overwhelming majority of organizations that develop their own software today, regardless of whether the software is for internal use only or offered as a solution to the outside world. To really build an infrastructure that can efficiently scale in the cloud, one should look past the traditional methods of horizontal scaling or vertical scaling. Efficient infrastructure scaling in the cloud is achieved by performing both horizontal and vertical scaling, using what John Allspaw of Flickr coined “diagonal scaling.” 


Beefing Up with Vertical Scaling

Vertical scaling is the process of beefing up a server by adding more CPUs, more memory or faster disks. Additional CPUs may help batch jobs execute faster. More memory may allow your local cache regions to grow and fetch more data quickly. Faster disks may reduce I/O times and speed random access times. Scaling vertically in this way allows you to speed up individual applications and single threads without having to fight with the latency or coordination overhead associated with massive parallelism. Depending on the nature of the application, however, adding more hardware can quickly have diminishing returns. At a certain point adding another ounce of hardware doesn't provide any performance benefit, leaving us to find another way to scale our system and add more throughput. 


Multiplying with Horizontal Scaling

Horizontal scaling grants us more throughput at the cost of complexity. A simple example is a web site that just serves up static content; if one server can handle 10 page requests per second, then load-balancing between two servers should provide 20 page requests per second. Things get a bit more sticky as you farm out work to application servers that might need to manage some sort of stateful task, but the theory is still the same. Adding more concurrently running servers empowers you to execute more concurrent workloads. 
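The back-of-the-envelope math from the static-content example can be written down directly; the efficiency factor is my own addition, a placeholder for the coordination overhead that stateful workloads introduce:

```python
def throughput(per_server_rps, servers, efficiency=1.0):
    """Estimate aggregate requests/second under horizontal scaling.
    efficiency=1.0 is the ideal linear case (stateless static content);
    lower values model coordination overhead for stateful workloads."""
    return per_server_rps * servers * efficiency
```

So two stateless servers at 10 requests/second give 20, while four servers handling sticky sessions at 90% efficiency would give 36 rather than the ideal 40.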


Growing Large with Diagonal Scaling

Scaling diagonally by enabling both vertical and horizontal scalability is something unique about clouds powered by VMware vCloud Director. No longer do I have to worry that my server is too small or underpowered. Instead, I can dynamically grow my servers until I hit the limit of diminishing returns. Once my servers can grow no further, I just clone the same server over and over again to handle more concurrent requests. This kind of flexibility is invaluable – it removes the fear that we might make the wrong scalability decision at the onset of our architecture layout. It's nearly impossible to truly understand how an application will need to scale in the future – especially when your developers haven't written the application yet! 
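The grow-then-clone rule of thumb can be sketched as a toy decision function; the 75% utilization threshold and the vCPU ceiling standing in for "diminishing returns" are arbitrary placeholders, not recommendations:

```python
def scale_decision(cpu_util, vcpus, max_vcpus):
    """Diagonal scaling sketch: scale up (vertically) until the VM hits
    its ceiling of useful hardware, then scale out (horizontally)."""
    if cpu_util < 0.75:
        return "no-op"        # headroom remains; do nothing
    if vcpus < max_vcpus:
        return "scale-up"     # add vCPUs/RAM to this VM
    return "scale-out"        # clone another instance of this VM
```

A real autoscaler would of course look at more than CPU, but the shape of the decision – vertical first, horizontal once vertical gains flatten – is the diagonal-scaling idea.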


Designing to Scale: The Creation

As an example let us envision a simple, n-tier web application with a HTTP server and a second, Spring-powered tcServer. We may initially size the HTTP server to have only 512 MB of RAM with 2 vCPUs, while the application server resides within 1 GB of RAM and 1 CPU. Our application is young, no one is really using it aside from potential investors and (since we picked a fantastic public cloud provider) we just want to pay a minimum amount to start. 


Building to Scale: The Launch

Our application goes into public beta and users begin to sign up. We bump up the number of concurrent threads our HTTP server is allowed to spin up and hot-add another 512 MB of RAM. CPU usage on our application server goes from 10% to 50%, so we add another vCPU to allow our thread pool to spread out a bit.


A few weeks later our developers add local caching to our application, so we expand the memory from 1 GB of RAM to 16 GB of RAM. The servers are now larger and handling requests nicely; in fact the application really doesn't need to use more than 12 GB at any point in time. 


Growing to Scale: The Rush

One month later our application is about to be featured on a prominent talk show. The application needs to handle five times the number of concurrent users…before lunch. Adding more hardware won't help us meet the throughput we require, but luckily our savvy developers wrote the application so that multiple instances can share sessions and work together. Within minutes the application can scale horizontally by cloning our HTTP and application servers six times over. Since the clones are exact duplicates of the existing servers with brand-new IP addresses, there is no need to re-configure them to have them ready to go.



Ready, Set, Grow

Working with a public cloud provider allows your infrastructure to scale in a way that best fits your application. There is no “one size fits all” strategy to infrastructure architecture – you need to expand alongside the unique personality of the applications you host. Infrastructure powered by a VMware vCloud Datacenter partner can help take away the worry about locking yourself into a single scalability pattern – letting you grow along with your application's own unique needs.