
Monthly Archives: November 2010

Connecting Terremark vCloud Express VM to the Internet and Installing Apps

By David Davis

In part one of this series, I covered why vCloud Express is so appealing for SMBs and large enterprises alike. From there, I showed you how to get started using vCloud Express by creating your first virtual machine. Once the VM is up and running, there are still a few tasks to perform: you'll want to configure outbound Internet access, inbound VPN access, and inbound Internet access, and then install your applications. Let me show you how.

Outbound Internet Access

With Terremark vCloud Express, you are provided a block of private IP addresses for your "virtual data center". In my case, I was assigned a private IP network. As you can see in the graphic below, I assigned one of those private IPs to my first server.


As I add more servers, they will be able to communicate on this internal LAN by default.

But what if I want to download applications or patches from the Internet? Does this VM on the private LAN have outbound Internet access? The answer is yes, by default outbound NAT'ed Internet access is configured. I went to the Network tab in vCloud Express and could see my assigned public Internet address, as you see in the graphic below.


However, there is no INBOUND Internet access to my new VM, just as you would have with a home/SMB NAT router.

Inbound VPN Access

Once your VM is up and running, the first thing you will want to do is connect to it via RDP to configure it and begin installing your applications. This is actually available by default if you use Terremark's VPN Connect – a private SSL VPN. Of course, this VPN could be used for other things besides just RDP to a Windows server. You could use the SSL VPN for SSH or SCP to a Linux VM, or FTP to any VM to transfer the apps that you need to install.

To connect to your virtual datacenter via the SSL VPN, click VPN Connect.

A new browser window will pop up and you will sign in to the Cisco SSL VPN client (make sure that you select SSL VPN from the drop-down menu).

If this is the first time you have used it, this will launch the Cisco AnyConnect VPN Client. When you are done, you'll have a new Windows system tray icon for this client. If you click on it, you can see your VPN IP address and status.

Once connected, you are on the same network as your vCloud VM. That means that you can RDP to the IP address of the VM, like I did here.


Notice how I'm connected via RDP to the same private 10-net IP address that my VM was assigned, above.

Inbound Internet Access

At some point, you will want to configure inbound Internet access to your vCloud VMs. This could be just to RDP to a public IP address for management but, more likely, it is to let your new VM act as, for example, a public Internet web server (or to open any other application port you need).

To do this, go to the Network tab, where you need to do two things:

1. Create a new Service

2. Create a new Node

In my mind, the service is the NAT rule allowing traffic inbound and the node is the server (or servers) that will be joined to the NAT rule to complete the inbound access. Note that these are NOT firewall rules. There is a whole separate configuration in Terremark vCloud Express called Security Services which is essentially your virtual data center firewall configuration.
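To make that relationship concrete, here is a toy Python model of the two concepts. This is my own illustration, not Terremark's API; the class, names, and IP addresses are all placeholders:

```python
# Toy model of the Network-tab concepts: an Internet Service is the inbound
# NAT rule (public IP + protocol + port), and Nodes are the internal servers
# joined to it. All names and addresses below are made up.
from dataclasses import dataclass, field

@dataclass
class InternetService:
    name: str
    public_ip: str
    protocol: str
    port: int
    nodes: list = field(default_factory=list)  # (server_name, private_ip, port)

    def add_node(self, server_name, private_ip, port):
        # Joining a node to the service completes the inbound path.
        self.nodes.append((server_name, private_ip, port))

    def route(self):
        """Each tuple maps public_ip:port to a node's private_ip:port."""
        return [(self.public_ip, self.port, ip, p) for _, ip, p in self.nodes]

rdp = InternetService("RDP", "203.0.113.10", "TCP", 3389)
rdp.add_node("web01", "10.0.0.5", 3389)
```

The point is simply that the service carries the public side of the NAT rule while the node carries the private side; neither is a firewall rule.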

Let's say that we want to configure inbound public RDP access to our new VM. This way, we don't have to connect to the VPN before we can use RDP (note that this could be a security concern for some). To do this, we would first create a new service by clicking on Create Service in the Network tab.


From here, the Create Internet Service wizard comes up. I opted to use the default Internet IP (but I could have requested a new one). Then, I chose TCP as my protocol and port 3389, as that is the port for RDP. I named the service, you guessed it, "RDP". I accepted that I would have to pay $0.01 more per hour for inbound network traffic from the Internet, and here's what it looked like:


Next, to be able to actually RDP, I needed to map this service to a Node. I selected the new Internet service and clicked on Create Node.


This brought up the Create Node Service wizard where I filled out the server name, server IP, and server port, as you see below.


And here is what we have…


Finally, I tested this inbound RDP NAT by using Remote Desktop on my local PC and going to the external Internet IP address I mapped to my internal private IP. Here are the results of my RDP attempt:


Notice the IP address that I was able to RDP to after I created the service, created the node, and mapped them together.
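If you'd rather script this reachability test than launch the Remote Desktop client each time, a plain TCP probe of the mapped port does the job. The sketch below uses only the Python standard library; the public IP in the comment is a placeholder for whatever Internet IP your service was assigned:

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical public IP from the Internet Service you created):
# port_open("203.0.113.10", 3389) should return True once the service
# and node are mapped together, and False before.
```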

You would want to administer the vCloud VM either through RDP to the private IP using the SSL VPN or via RDP to the public IP address.

Installing Applications in your vCloud Express VM

So how do I get applications onto my vCloud VM? One option is to have chosen a Windows template with SQL Server already installed. Local and remote ISO mounting don't appear to be supported at this time, so the recommended way to get apps onto your VM is to transfer them with FTP (or some other file transfer protocol) and install them from there. For instance, you could install an FTP server on the VM (opening up the service and mapping it to a node), FTP ISO files to the VM, mount them with something like Daemon Tools, and install from there.

Besides downloading applications directly from the software provider's website on the Internet, another option to copy applications over to the new VM is to use RDP and map your local drives. From there you could copy or run anything (though, of course, the performance is going to be very limited). Here is a network drive from my house, mapped through RDP, as seen by the vCloud Express VM.


From here, I could copy over Veeam Backup (an application that I had on my local network drive) and install it in the vCloud VM. However, with the upload speed of my local Internet connection, it would be much faster to just download that application from the software company to the vCloud VM directly (I was actually getting 6MB per second download speeds from the Internet to the vCloud VM – amazing!).


Some of you may be wondering what this is costing me. Well, I've only been trying this for 24 hours but I checked my bill and, so far, I have only built up $0.70 in charges (yes that is 70 cents) and I'm sure that I can afford that 🙂


Plus, I really like the resource utilization screen…


In summary, I can see so many uses for this "VM in the cloud". I could use it to replicate and store data from my local storage, I could run my lab environment inside it, or use it as an Internet web server. I'm sure that IT admins out there have a long list of use cases for this easy to use and affordable virtual environment. What about you?

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal. He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com including the popular vSphere video training package. Learn more about David at his blog or on Twitter and check out a sample of his VMware vSphere video training course from TrainSignal.com.

Cloud Architecture Patterns: App VM

Steve Jin, VMware R&D


Provide a packaged software stack as a Platform-as-a-Service (PaaS) platform for running applications



Also Known As



We all know the three types of cloud services: Infrastructure-as-a-Service (IaaS), PaaS, and Software-as-a-Service (SaaS). If you want to leverage PaaS, you have to choose one of the PaaS service providers, like Google or Microsoft. Leveraging an external PaaS has its own benefits.

What if you want to keep your applications running in-house but still enjoy the benefits of PaaS? Today you don’t have much choice. Google, for example, does not sell its App Engine as a product that you can install and run on premise. You have to run it on the Google cloud.


The solution is clear and even easy: build your own PaaS. Yes, that’s right! Although you won’t get the same software stacks as your service providers, you can easily install software stacks from either open source or commercial products onto virtual machines. Following the same naming convention as App Servers, I refer to these virtual machines as “App VMs.”

Note that you may have applications written in different programming languages/architectures/frameworks (for example, LAMP – Linux/Apache/MySQL/PHP), or J2EE, and therefore you should create different virtual machine templates accordingly. By the same token, you may need software stacks like a database, messaging server, and a directory server. For a complete PaaS platform, you should create different VM templates or reuse existing ones.

With many different software configurations in place, you should test the templates thoroughly before deploying them. You also need to manage and patch these templates and maintain and evolve them over time.

When you need to run your application on your own PaaS, you can simply provision these virtual machines and inject your application code there. It will require more management on the infrastructure than using external service providers.

Building your PaaS on VMs allows you full control over the software stack: what goes there and with what configurations. It also gives your applications much more portability than a typical service provider PaaS does. For one thing, you can move your VMs around more easily than you can move typical PaaS applications, because the underlying touch points of VMs are mostly x86 instructions. That means you can easily move VMs among different x86-based hypervisors, either inside or outside of an enterprise.

With the portability provided by this App VM pattern, you can also consider building App VMs and running them at IaaS service providers. You can move these VMs back to the enterprise whenever you want without rewriting your applications.


Consider App VM pattern when you:

1.   Want to construct your own PaaS platform with full flexibility and control over your software stack;

2.   Cannot find a PaaS service available that satisfies your need;

3.   Need truly portable applications that run across both private clouds and public clouds.


The biggest challenge of this pattern is provisioning and management. With so many potential virtual machines, you have to manage and patch them effectively. When the scale of your system reaches a certain size, management may become a big burden.

If you decide to run an App VM with IaaS providers, you also need to check whether the service providers offer easy solutions for moving these VMs back to the enterprise.

Known Uses

It’s pretty common for people to use IaaS to run applications. There are also pre-packaged virtual machine templates with software installed, like the IBM WebSphere AMI on Amazon.

For enterprise use, VMware demoed CloudTools, which can provision J2EE App VMs, at VMworld 2009. CloudTools was originally created by Chris Richardson for Amazon EC2, and I ported it to vSphere.

Related Patterns

VM Factory: it can help provision the App VMs.

VM Pool: it can help pool the App VMs for fast provisioning.

Façade VM: helps build a large scale PaaS platform with App VM.


This article was originally posted on www.doublecloud.org. Visit the site for more information on virtualization, cloud computing, and other enterprise technologies.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

vCloud Express: Step by Step

By David Davis

In my post Cloud Adoption for SMBs and End Users – Easy and Affordable, I talked about how it makes perfect sense that SMBs move to the cloud. vCloud Express, offered by a number of providers, is an ideal service for SMBs (and enterprises alike) because it's quick, easy, and pay-as-you-go on a credit card.

It had been some time since I tried out vCloud Express so I was thankful when recently I had the opportunity to try out vCloud Express from Terremark. Quickly, I found out that vCloud Express had grown up a lot since I last saw it. Before I show you how to get started with vCloud Express, here are a few things that you should know:

  • vCloud Express is no-commitment & pay as you go with a credit card.
  • vCloud Express is designed to be easy to use (which you'll see below).
  • Unlike Amazon EC2, Terremark vCloud Express is VMware-based, supports more than 450 guest operating systems, supports up to 8-way 16GB VMs, supports Windows 2008 and SQL 2008, offers hardware load balancing and fiber-attached persistent storage.
  • Prices start at 3.6 cents per computing hour.

With that out of the way, let me show you how you can get started with vCloud Express and try it for yourself.

Honestly, I have used a lot of online services and Terremark has done a great job of making "Infrastructure as a Service / IaaS" super-easy. Here it is, step by step:

1. Go to the vCloud Express from Terremark page and click Order Now to go to the signup page.

2. Fill out the New User Signup & activate your account.

3. At this point, you'll need to provide a credit card to Terremark to bill your per-hour usage to.

4. When you Sign In, you'll be brought to the Resources page so click on Servers to get started creating your first server.

5. At this point, you have a number of options. You can create Rows and Groups to help organize servers if you'll have more than a couple of them. Minimally, though, if you're just going to create one server like I am, you can select either Create Server or Create Blank Server. The difference between the two is that "Create Server" creates a new server from pre-built templates, where "Create Blank Server" does what it says and creates an empty VM in which you would install your own OS. In my case, I want to demonstrate a VM with a pre-built OS (a template), so we'll choose Create Server. (Note that we could even create a server with an OS and a SQL database.)


6. This brings up the Create Server Wizard that will guide us through the process. First we need to specify the type of VM (OS, OS & Database, or Cohesive FT). I specified OS only, then set my OS to Windows 2008 Standard R2 64-bit. The only servers I saw with additional monthly fees were the SQL database servers.


7. Next, I had to specify the number of virtual processors (VPU) and the amount of RAM that I wanted this server to have. Notice how as the CPU and RAM rises, so does the cost per hour of this VM (also add in the cost for the virtual hard drive).


8. From here, I specified the server name, admin password, and IP settings.
9. Next, I had to specify what row and group this server should be contained in (I created new rows and groups then named them whatever I wanted).


10. Finally, I reviewed what we were about to deploy (including the associated costs), opted to power on the server, and accepted the license agreement.
At this point, I was told that the new server could take up to 45 minutes to be created; however, after just 5 minutes my new Windows server in the cloud was ready to be used.


11. Next, select the server and click Connect. You will likely have to install the VMware MKS plug-in, as I did, to use the console. I did have some trouble connecting to the server console; however, I was successful when using Firefox, installing the MKS plug-in as directed, and connecting to the VPN with VPN Connect (an SSL VPN that required me to install the Cisco AnyConnect VPN Client). Here's what my web console looked like:


12. From the server console, I updated the VMware Tools by mounting the provided ISO, installing, and rebooting.

Note that the web-based server console isn't recommended for daily administration, only for getting the server up and running to the point that you can connect to it via RDP.

After only about 15 minutes of using vCloud Express, I had a working Windows 2008 R2 server with VMware Tools installed. But what remains?

  • Configure outbound and inbound Internet access
  • Install your applications

I will cover these in a separate vCloud blog post so look for part 2.

In summary, think about this: never before could you have a new Windows or Linux server up and running on the Internet in under 15 minutes, paying only a few cents per hour for the resources you use. vCloud Express is revolutionary in its simplicity, affordability, and ease of use.

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal. He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com including the popular vSphere video training package. Learn more about David at his blog or on Twitter and check out a sample of his VMware vSphere video training course from TrainSignal.com.

Cloud Architecture Patterns: Façade VM

Steve Jin, VMware R&D


Provide a single point of contact for a large-scale system consisting of many virtual machines so that they are viewed as one giant VM from outside



Also Known As

Giant VM


When a system becomes big, you need multiple VMs to support the workload. For ease of use, external users don’t want to manage a separate connection to each of the virtual machines. Who wants to remember a list of IP addresses or DNS names for a service? Also, you cannot expect your users to pick the least-busy VM to balance the workload across your cluster of VMs. And to scale your application when your overall workload increases, you want a seamless way of adding new capacity without notifying others.

Finally, if you offer a public service, you don’t want to allocate a public IP address for each of your VMs. These days, public IPs are scarce resources and may cost you money.


To solve these problems, you want to designate a VM as the façade of your cluster of VMs. From an external perspective, users or applications can see only one giant virtual machine providing service.

The façade VM gets allocated a well-known IP address and registered with DNS servers. If it’s a publicly available service, the façade VM gets one public IP address.

Behind the façade VM, each of the other VMs is assigned a private IP address and made known to the façade VM. When a request comes in, the façade VM can quickly process it and forward it to backend VMs for further processing, as shown in Figure 1.


Figure 1 Façade VM and backend VMs

As you can see from Figure 1, two participants are involved:

  • Façade VM: its IP and service ports are well known to everyone;
  • Backend VM (worker VM): these play different roles, receiving data fed from the façade VM in front. There may be many instances, all hidden from external view.

The criteria that decide which backend VM server to forward to include:

1. The functionality. The backend servers can be divided into different roles to serve specific requests. The service can be identified by a port number for fast processing. Ideally the servers should be uniform, for easy development and management.

2. The workload. For VMs with the same role, the façade VM can send a request to the least-busy VM. This can be based simply on the total number of requests, not necessarily the real VM workload you could find from the hypervisor hosting it. Most of the time, this approach should be good enough.

For load balancing, the façade VM can delegate the traffic routing work to existing high performance load balancer appliances in data centers while still maintaining the management responsibilities.

The façade VM should monitor the health of the backend VMs. When any of them dies, the façade VM should remove it and add a new VM.
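Putting the workload-based selection and the health monitoring together, the façade's dispatch logic can be sketched in a few lines of Python. This is an illustrative toy, not any product's implementation; the backend names and the in-flight request counter are assumptions:

```python
class FacadeVM:
    """Toy façade: tracks outstanding requests per backend and forwards
    each new request to the least-busy backend still considered healthy."""

    def __init__(self, backends):
        # backend name -> number of in-flight requests (proxy for workload,
        # per criterion 2 above: request counts, not hypervisor metrics)
        self.active = {b: 0 for b in backends}

    def pick_backend(self):
        # Least-busy selection by in-flight request count.
        return min(self.active, key=self.active.get)

    def dispatch(self, request):
        backend = self.pick_backend()
        self.active[backend] += 1
        return backend  # a real façade would forward `request` here

    def complete(self, backend):
        self.active[backend] -= 1

    def mark_dead(self, backend, replacement):
        # Health monitor: remove a failed backend and register a fresh VM.
        self.active.pop(backend, None)
        self.active[replacement] = 0
```

A usage round-trip: dispatch two requests, lose a backend, and the next pick lands on its idle replacement.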


Use the façade VM to:

  • Group VMs together like a single giant VM;
  • Simplify user experience of VM clusters;
  • Achieve better availability of your application;
  • Balance workloads to different VMs running in the backend;
  • Scale your application seamlessly by adding more backend VMs;
  • Save public IP addresses.


While it’s a good idea to have a single point of contact, it’s also a risk as it could be a single point of failure. For mission critical applications, you will want to have a hot standby backup for the façade VM. At a minimum, you need an auto restart for the façade VM.

For public services, it is hard for developers and system administrators to access the backend VMs directly (for example, using SSH). The related ports are mostly not open, for security reasons. Even if they are open, you cannot connect to a particular backend server because there is no one-to-one mapping from public IPs to the private IP of the VM you are interested in. In that case, you most likely have to use a VPN so that you can “see” the private IP addresses of all backend VMs.

Known Uses

Most cloud service providers offer load balancing features so that you can design the façade VM easily without affecting performance. Amazon EC2, for example, offers load balancing features.

Not all load balancing mechanisms are the same. Terremark vCloud Express, for example, provides a unique feature in which you can map each port of a public IP address to a group of virtual machines. This allows maximum use of a public IP address.

Related Patterns

VM Pool: you can get VMs from VM pool during peak hours and return them during off peak hours.

Stateless VM: you can leverage this for the backend servers.

This article was originally posted on www.doublecloud.org. Visit the site for more information on virtualization, cloud computing, and other enterprise technologies.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

Thinking Differently about Scalability with Cloud Computing

The following post is the first in a series on cloud computing and the enterprise by John Ellis, Chief Architect for vCloud at BlueLock.


As a software architect, developing solutions in the public cloud has made my life much easier. I can spin up new servers in mere minutes and occupy only ethereal, virtual real estate. I can only imagine how huge of an eye roll I would have received from a capital expense report spanning fourteen physical servers – but today I can spin up fourteen virtual servers and never leave the comfort of my operational budget. 


Scalability in the Cloud

The move from physical to virtual makes economic sense by casting aside physical hardware, yet it also makes design and development sense when considering infrastructure architecture. Since the number of servers you can provision is no longer bound by the physical size of your server closet, a whole new way of scaling your infrastructure can be unlocked.

Public cloud hosting providers such as BlueLock offer a unique value to the overwhelming majority of organizations that develop their own software today, regardless of whether the software is for internal use only or offered as a solution to the outside world. To really build an infrastructure that can efficiently scale in the cloud, one should look past the traditional methods of horizontal scaling or vertical scaling. Efficient infrastructure scaling in the cloud is achieved by performing both horizontal and vertical scaling, using what John Allspaw of Flickr coined “diagonal scaling.” 


Beefing Up with Vertical Scaling

Vertical scaling is the process of beefing up a server by adding more CPUs, more memory or faster disks. Additional CPUs may help batch jobs execute faster. More memory may allow your local cache regions to grow and fetch more data quickly. Faster disks may reduce I/O times and speed random access times. Scaling vertically in this way allows you to speed up individual applications and single threads without having to fight with the latency or coordination overhead associated with massive parallelism. Depending on the nature of the application, however, adding more hardware can quickly have diminishing returns. At a certain point adding another ounce of hardware doesn't provide any performance benefit, leaving us to find another way to scale our system and add more throughput. 


Multiplying with Horizontal Scaling

Horizontal scaling grants us more throughput at the cost of complexity. A simple example is a web site that just serves up static content; if one server can handle 10 page requests per second, then load-balancing between two servers should provide 20 page requests per second. Things get a bit more sticky as you farm out work to application servers that might need to manage some sort of stateful task, but the theory is still the same. Adding more concurrently running servers empowers you to execute more concurrent workloads. 


Growing Large with Diagonal Scaling

Scaling diagonally by enabling both vertical and horizontal scalability is something unique about clouds powered by VMware vCloud Director. No longer do I have to worry that my server is too small or underpowered. Instead, I can dynamically grow my servers until I hit the limit of diminishing returns. Once my servers can grow no further, I just clone the same server over and over again to handle more concurrent requests. This kind of flexibility is invaluable – it removes the fear that we might make the wrong scalability decision at the onset of our architecture layout. It's nearly impossible to truly understand how an application will need to scale in the future – especially when your developers haven't written the application yet! 
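To make the idea concrete, here is a toy Python planner for diagonal scaling. The per-GB throughput constant and the 16 GB vertical cap are made-up numbers, purely to illustrate "grow vertically until diminishing returns, then clone":

```python
def diagonal_scale(vm_specs, demand_rps, per_gb_rps=50, max_gb=16):
    """Illustrative diagonal-scaling planner.

    vm_specs:   list of RAM sizes in GB, one entry per VM.
    demand_rps: requests/sec the cluster must serve.
    per_gb_rps: assumed throughput gained per GB (invented constant).
    max_gb:     point of diminishing returns for vertical growth.
    Returns the list of VM sizes after scaling.
    """
    capacity = lambda specs: sum(gb * per_gb_rps for gb in specs)
    specs = list(vm_specs)
    while capacity(specs) < demand_rps:
        # Vertical first: grow the smallest VM if it is below the cap.
        idx = min(range(len(specs)), key=lambda i: specs[i])
        if specs[idx] < max_gb:
            specs[idx] += 1
        else:
            # Every VM is maxed out: clone another full-size VM (horizontal).
            specs.append(max_gb)
    return specs
```

For example, a single 1 GB VM facing 100 rps just grows vertically to 2 GB, while a maxed-out 16 GB VM facing 1000 rps gets a second 16 GB clone.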


Designing to Scale: The Creation

As an example, let us envision a simple n-tier web application with an HTTP server and a second, Spring-powered tcServer. We may initially size the HTTP server to have only 512 MB of RAM and 2 vCPUs, while the application server resides within 1 GB of RAM and 1 vCPU. Our application is young, no one is really using it aside from potential investors, and (since we picked a fantastic public cloud provider) we just want to pay a minimal amount to start.


Building to Scale: The Launch

Our application goes into public beta and users begin to sign up. We bump up the number of concurrent threads our HTTP server is allowed to spin up and hot-add another 512 MB of RAM. CPU usage on our application server goes from 10% to 50%, so we add another vCPU to allow our thread pool to spread out a bit.


A few weeks later our developers add local caching to our application, so we expand the memory from 1 GB of RAM to 16 GB of RAM. The servers are now larger and handling requests nicely; in fact the application really doesn't need to use more than 12 GB at any point in time. 


Growing to Scale: The Rush

One month later our application is about to be featured on a prominent talk show. The application needs to handle five times the number of concurrent users…before lunch. Adding additional hardware won't help us scale to meet the throughput we require, but luckily our savvy developers wrote our application so that multiple instances can share sessions and work together. Within minutes the application can scale horizontally by cloning our HTTP and application servers six times over. Since our clones are exact duplicates of our existing servers with brand-new IP addresses, there is no need to re-configure the servers to have them ready to go.



Ready, Set, Grow

Working with a public cloud provider allows your infrastructure to scale in a way that best fits your application. There is no “one size fits all” strategy to infrastructure architecture – you need to expand alongside the unique personality of the applications you host. Infrastructure powered by a VMware vCloud Datacenter partner can help take away the worry about locking yourself into a single scalability pattern – letting you grow along with your application's own unique needs.

Cloud Architecture Patterns: Stateless VM

Steve Jin, VMware R&D


Ensure a virtual machine does not carry a permanent state so that it can be easily provisioned, migrated, and managed in the cloud.

Also Known As

Disposable VM




Virtualization is the cornerstone for cloud computing, especially at the infrastructure level. With many virtual machines created, managing them becomes a big challenge.

Among these challenges are system provisioning, backup, archiving, and patching of different virtual machines. These administrative tasks consume lots of CAPEX and OPEX.

We need a better way to architect applications for the cloud.


Making a VM stateless solves a lot of problems. For one thing, you force applications to save data outside of the virtual machine; you no longer need to back up the virtual machine, only the data. It also makes system provisioning easier because instances don't need to be differentiated. When a stateless VM crashes, for whatever reason, you don't lose much: just add a new virtual machine and voila!

A stateless VM is not a VM without any local data. More often than not, it does require some local data or a local cache for better performance, but these data don't need to be persisted. In some cases, a stateless VM can have additional software installed or data pulled in from a known repository. This process should be fully automated with self-starter scripts, or managed by an external installer.

Once a stateless VM goes live, it should discover all the related services for persisting data. The stateless VM relies on its environment, including directory services, data services, and so on, to work effectively.
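A self-starter script for such a VM might look like the following sketch. The configuration keys and service URLs are hypothetical; the point is only that the VM writes nothing outside disposable scratch space and learns about persistence services from its environment:

```python
import json
import tempfile
from pathlib import Path

def bootstrap(config: dict) -> dict:
    """Self-starter logic for a stateless VM: take role-specific config
    (pulled from a known repository before this runs), discover the external
    services that hold all persistent state, and cache only to scratch space.

    The keys 'database_url' and 'mq_url' are invented for this illustration.
    """
    # Disposable local cache; nothing here survives the VM, by design.
    scratch = Path(tempfile.mkdtemp(prefix="appvm-"))

    # Everything stateful lives outside the VM.
    services = {
        "database": config["database_url"],
        "message_queue": config["mq_url"],
    }

    # Local copies are a cache only, never the system of record.
    (scratch / "services.json").write_text(json.dumps(services))
    return services
```

Because the VM holds no unique state, throwing it away and booting a fresh clone through the same bootstrap path yields an equivalent instance.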

With stateless VMs, you can improve mobility inside an enterprise and ease transfer to the public cloud. For one thing, you need to transfer a VM image once and only once; after that, you just pass around a pre-agreed VM catalog ID that can be no more than several digits. Pretty easy.

When your application runs into problems, instead of diagnosing the problem you just remove the problematic VMs and add new virtual machines. With this capability, you can also easily scale out your applications by adding new VM instances as you need them.

Last but not least, software upgrades and patches. It has been a big pain to upgrade and patch software in large deployments: you have to do it on each individual machine, virtual or not. With stateless VMs, you only need to patch the template, and new virtual machines will pick it up seamlessly.


Use stateless VMs to:

  • Standardize virtual machines with minimal variation;
  • Scale out the applications with uniform VMs;
  • Reduce storage consumption and archiving efforts;
  • Ease the software upgrading and patching process;
  • Simplify virtual machine provisioning and lifecycle management;
  • Improve VM mobility within and across clouds. You no longer need to transfer VM images, just a VM catalog ID;
  • Reduce the chances of being infected by a virus.


Stateless VM is great but not universally applicable. The consequences of using stateless VMs include:

  • You are forced to separate data from code which may mean more work than otherwise;
  • Application data has to be persisted in different locations in different ways. You may need additional logic for better data availability;
  • Customization has to be injected at runtime, preferably pulled by the individual virtual machines, rather than pushed by a central server;
  • It may complicate simple/small applications by separating different concerns;
  • A delay or breakdown of the network might result in loss of data.

Known Uses

Amazon EC2 is a good example of the stateless VM pattern. After a virtual machine is provisioned, you can install anything on it, but none of it persists after the virtual machine is powered off. To save data, you have to use either S3 or EBS, which forces you to architect your applications differently.

Related Patterns

VM Pool: a stateless VM may or may not be recycled back into a pool, depending on whether whatever was added to it after creation can be reset or cleared.

VM Factory: making VMs stateless eases the job of a VM factory because it does not need to track differences among VMs. When you need a virtual machine, you can just pick any one.

This article was originally posted on www.doublecloud.org. Visit the site for more information on virtualization, cloud computing, and other enterprise technologies.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

Cloud Architecture Patterns: Aspectual Centralization

Steve Jin, VMware R&D


Separate concerns in large-scale computing by leveraging different types of services in the cloud




The history of computing reveals different eras, from mainframe to client/server to Web computing. With mainframes, computing is contained within the boundary of the mainframe. With client/server and Web computing, we see the separation of presentation from data. In all of these computing models, the data is owned and maintained by individual applications, and the IT staff who run those applications are responsible for backing up and maintaining the data.

With the rise of cloud computing, I see a new trend that will fundamentally change the game and push productivity to all new levels. I call this “Aspectual Centralization” (AC). This is as important to cloud architecture as Model-View-Controller (MVC) is to software architecture.


With AC, different aspects of an application are extracted out and delegated to centralized services: data services, messaging services, logging services, and so on.
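A minimal sketch of this delegation, with DataService and LogService as hypothetical placeholders for real centralized services (e.g., S3/SimpleDB or SQS clients); the application class keeps only business logic:

```python
# Aspectual Centralization sketch: the app owns no storage or logging;
# those aspects live behind narrow interfaces to centralized services.

class DataService:
    """Centralized data aspect; the app never manages storage or backup."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class LogService:
    """Centralized logging aspect."""
    def __init__(self):
        self.lines = []
    def log(self, msg):
        self.lines.append(msg)

class OrderApp:
    """Pure business logic; aspects are injected, not owned."""
    def __init__(self, data, logger):
        self.data, self.logger = data, logger
    def place_order(self, order_id, item):
        self.data.put(order_id, item)
        self.logger.log(f"order {order_id}: {item}")
        return self.data.get(order_id)

app = OrderApp(DataService(), LogService())
print(app.place_order("o-1", "vSphere license"))  # -> vSphere license
```

Swapping the in-memory placeholders for remote clients changes nothing in OrderApp, which is the point: the aspect's owner, not the application, handles backup, scaling, and maintenance.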

This extraction and delegation process has two profound impacts on application development and deployment. First, application architects and developers are freed from designing these different services, and can really focus on the application business logic instead. Having said that, you still need to design your data schemas.

Second, the IT staff who run the application no longer need to worry about data backup. Backup and maintenance are still required, but the responsibility shifts to the staff maintaining the centralized data services.

As you can see, this accelerates the IT trend of staff specialization with commensurate gains in productivity.

This specialization works best for big IT shops. But what about smaller ones? For one thing, your IT staff may have to assume multiple roles, but clearly defined roles and responsibilities will improve productivity there as well. Where possible, you can also leverage external service providers for data services.

For implementation, you can offer centralized services with traditional physical servers, or go with virtual servers. In the latter case, you will have dedicated virtual machines for these services.


Use AC to:

1. Centralize common services and manage them effectively and consistently;

2. Separate basic infrastructure concerns from application development;

3. Build applications based on enhanced service-oriented-architecture and enforce it across the enterprise;

4. Improve the efficiency of IT administration and thereby reduce OPEX.


While gaining many benefits from aspectual centralization, you may face these challenges:

1. Initial investment in enterprise-wide services and re-factoring of existing systems may require additional CAPEX. This could be mitigated by using external service providers or virtualized infrastructure.

2. Performance of individual applications may be affected by remote access to other services. Local caching may be required, which can complicate the system and defeat the original purpose.

3. It’s harder to draw the boundary of an individual application once it is interwoven with the shared infrastructure. Moving a particular application around inside the enterprise is easy, but moving it elsewhere may be hard without moving the whole service infrastructure along with it.

Known Uses

Several service providers offer data services, such as Amazon’s S3/SimpleDB, Google’s BigTable, and Microsoft’s Windows Azure Storage/SQL Azure, as well as messaging services like Amazon Simple Queue Service (SQS). You can use these services from applications running in the providers’ data centers or in your own.

For enterprises, it has long been accepted practice to share databases across different applications, but generally not as common services. The AC architecture pattern will be widely adopted as we move toward the IT-as-a-Service paradigm.

Related Patterns

Stateless VM: You can use stateless VM but still persist your data with AC.

This article was originally posted on www.doublecloud.org. Visit the site for more information on virtualization, cloud computing, and other enterprise technologies.

Author: Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.

Interview with EMC’s Scott Lowe on Cloud Computing and vCloud Director

By David Davis

Over the years of attending four VMworlds, one Partner Exchange (PEX), and numerous VMUGs, I have been able to meet some of the most amazing and distinguished people in the virtualization industry. Certainly Scott Lowe is one of those people that I have been honored to meet. Not only is Scott always friendly to talk to but, when he speaks about virtualization and infrastructure, it is immediately evident that his knowledge on the topic is vast. Additionally, Scott's book and certification as a VCDX document his breadth and depth of knowledge in architecting virtual infrastructures.


For those who don't know Scott Lowe, here is his Bio:

An industry veteran of almost 17 years, Scott brings a range of technical skills combined with outstanding communication skills. In addition to VCDX, Scott is a VCP2, VCP3, VCP4, VSP4, and VTSP4, and holds industry certifications from Cisco, EMC, and Microsoft. Scott's blog (blog.scottlowe.org) is consistently recognized as one of the top technical virtualization blogs, and Scott was a very early contributor to the Planet V12n blog aggregator. Scott is also a published author; his first book, Mastering VMware vSphere 4, is a best seller. Scott currently works with EMC Corporation as a technology consultant specializing in EMC, Cisco, and VMware solutions. For his work supporting the VMware community, Scott was also awarded a VMware vExpert award in both 2009 and 2010.

I am thankful to Scott for agreeing to be interviewed about Cloud Computing, vCloud Director, and his role in this ecosystem as a vSpecialist at EMC.

Question #1: What is your take on the VMware vCloud Director announcement, as an EMC vSpecialist?

We try really hard, as vSpecialists, to make sure that we stay on the ball about developments that involve cloud computing as defined by VMware. This means that we work closely with VMware to ensure that we understand their direction and how we can, from EMC’s perspective, help support their efforts. In that regard, the VMware vCloud Director announcement was not a surprise (not that it was a surprise to very many people); we’d been working with pre-release code for quite some time. It was great to see VMware finally be able to unveil the code and the functionality they’d been working on for so long. VMware vCloud Director is a necessary step toward enabling organizations to treat their IT assets in a more fluid way than they do today. That fluidity is, in turn, necessary in order for organizations to be able to build a private cloud and move to more of a cloud computing operational model.

Question #2: How do EMC Vblocks and VMware's vCloud initiative fit together?

EMC Vblocks and vCloud are very complementary. Both are focused on helping IT organizations move to that idea of IT as a Service, or embracing a cloud computing operational model. These products are not competitive, but designed to work hand-in-hand. Vblocks are intended to help organizations quickly deploy infrastructure (compute, memory, network, storage) and VMware’s vCloud initiative, including vCloud Director, are intended to help organizations quickly deploy workloads on running infrastructure. They are like two sides of the same coin: one deals with infrastructure and the other deals with the workloads running on top of that infrastructure.

Question #3: Will Vblocks and Cloud Computing help small and medium-size businesses?

It’s really about their infrastructure needs. For organizations that are small right now but growing rapidly, a Vblock might make a lot of sense because it gives the organization a fairly clear view of how much capacity they have available to them as they grow. The same goes for medium-sized businesses. In addition, Vblocks might be attractive because they eliminate a lot of the guesswork that can be involved in building your own private cloud infrastructure. In a Vblock, the VCE Coalition has tested the components to ensure that everything works as expected in as predictable a fashion as possible.

Indirectly, Vblocks can also impact small- to medium-sized businesses through service providers. As service providers adopt the Vblock model of provisioning infrastructure and then begin to leverage software like UIM (Unified Infrastructure Manager) and vCloud Director, it makes the cloud more available to small- and medium-sized businesses. At least, I think so.

Question #4: What is an EMC VPLEX and how will it make cloud computing possible?

Personally, I consider EMC VPLEX to be a key component in building both private and hybrid clouds (hybrid clouds being a mix of private and public clouds). VMware has been talking about moving workloads into the cloud for a while now, but until the arrival of VPLEX no one had the answer for how to handle storage. With VPLEX, EMC can enable customers and service providers to provide active/active read-write storage at two locations simultaneously, and it’s this functionality that is leveraged by vMotion over distance to truly make it possible for workloads to burst into the cloud. This functionality is going to get even more exciting in the near future when EMC adds asynchronous functionality to VPLEX.

Question #5: How do you see cloud computing evolving over the next few years? (i.e., what is missing today that you hope to see later?)

That’s a really open-ended question, but I’ll do my best to answer it. I think we will see continued efforts and development poured into addressing some of the key issues that still surround the broad adoption of cloud computing: networking challenges, storage, security, and multi-tenancy. VMware will obviously focus on the areas that are core to their mission, as will EMC and Cisco; between the three organizations, the VCE Coalition has all these areas pretty well covered. It’s going to be an exciting time over the next few years!

Question #6: Can you recommend any resources to learn about EMC, cloud computing, and vCloud Director?

Well, without being too self-serving, I did publish a collection of vCloud Director-related links on my site recently, so check that out. Almost all of the top VMware-focused bloggers have been writing about vCloud Director, so be sure to check out sites from bloggers like Duncan Epping, Frank Denneman, Hany Michael, and others. I’m sure there are many, many more that I did not mention! Of course, as a blogger I’m a bit biased, but leveraging bloggers’ knowledge is also a great way to learn more about EMC and cloud computing as well.

Again, a big thank you to Scott Lowe for agreeing to be interviewed for my VMware vCloud blog, and I hope that you will check out his blog at blog.scottlowe.org.

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal. He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com including the popular vSphere video training package. Learn more about David at his blog or on Twitter and check out a sample of his VMware vSphere video training course from TrainSignal.com!

Two Men and a Cloud – Start Packing!

By David Davis

Fellow vExpert Jon Owings is a friend of mine who runs a website called "2 VCPs and a Truck" and goes by @2VCPs on Twitter. Jon's website has good how-to articles for VMware vSphere admins, but I think there is more to his site name than just a joke. If you aren't familiar with it, "2 VCPs and a Truck" is a play on a well-known home moving company in the USA, "2 Men and a Truck". Certainly the "2 VCPs and a Truck" are the movers that have been performing P2V consolidation – moving physical servers to virtual servers. So how does this relate to "the cloud"?


With the move to server virtualization maturing, now comes the move to the cloud. Enterprises and service providers alike are warming up to the idea of creating hybrid clouds (and some are already doing it). Like any house move, this move will require an old house, a new house, a truck, and movers. Let's look at the move to the cloud just as we would any house move.

Why are we moving to begin with?

I think that the most difficult question that enterprise IT admins face related to the cloud is "why do we need to make this move at all?" Mike DePetrillo said that if you are a cloud provider, you don't try to sell cloud computing to IT Admins – you go directly to the CFO or CEO. This is an important point.

Enterprises make the move to the cloud for business reasons – not IT reasons. The cloud offers a change from businesses having to lay out capital expenditures (capex) to a monthly operational expense (opex) model. This makes sense to the CFO and CEO. It is the same reason that companies might hire contractors and pay them more than full-time employees.

Another reason to move to the cloud is business agility. Perhaps your IT group struggles to keep up with demand during the Christmas retail season or during registration at a university.

Finally, businesses move to the cloud to refocus staff on more critical company projects that offer real ROI – not just "keeping the servers up."

In other words, you do what you do best (your business) and let a cloud provider do what they do best (keep the servers up). This is the same reason that you likely don't bake your own bread or sew your own clothes – you let someone do it who can do it quicker, easier, more reliably, and cheaper than you can, which frees you to do what you do best (like go to work or spend time with your family).

WANTED: Cloud Movers – Must Have Vast Knowledge of IT Skills

Just like moving your house, moving servers can be a lot of work – more work than one person or group wants to take on. So how do you move? Likely, you need help. You'll need people who know the virtual infrastructure, people that know the applications, and people that know the end users. If the CFO or CIO is the person opting to make this move, they will have to champion the project with these disparate groups of people in order to make the move a reality.

Certainly these "movers" need skills – cloud skills. They may know vSphere inside and out but the cloud is a new animal. The cloud empowers the users to participate more in IT and the lifecycle of a server or app. These movers must have a wide breadth of IT skills. The guy who only knows COBOL (sorry Mr. Dinosaur – surely I'm offending someone) isn't going to be too helpful when it comes to moving to the cloud. The movers need not only vSphere skills but also networking skills, application knowledge, business knowledge, performance knowledge, and troubleshooting knowledge. They need to know vSphere, the SAN, the network, the OS, and the Apps. Someone has to work with the cloud provider to communicate the business's needs related to IT. Someone has to assist the cloud provider in migrating these VMs and their apps.

Finally, don't forget the cloud providers. They need "movers" too. Their movers need to work with your movers to get the move done. But let's leave the topic of cloud provider expertise to another blog post and move on to "the trucks".

Where are the Trucks?

You could move your house across town without trucks, sure, but moving trucks make the job tremendously easier. Trucks are the tools that allow the movers to do their job quickly, efficiently, and at a reasonable cost.

In cloud computing, the "trucks" are the software and hardware that link your current infrastructure (the private cloud) to cloud providers (the public cloud) to create a hybrid cloud. That hybrid cloud will allow you to move your virtual servers (and their apps) between the private and public sides. And if your cloud provider isn't living up to its promises, you also want to be able to use your "trucks" to quickly move your VMs to another provider.

In my mind, these "trucks" are still maturing. Certainly you could shut down your VMs, copy them to a provider, and power them back up. However, what you really need are intelligent connections that allow you to securely move running applications between private and public clouds. Recently, I wrote about Afore Solutions CloudLink. BlueLock has CloudConnector, which works with VMware vCloud Director, and other cloud providers offer their own solutions. Whatever solution you use, you want to make sure that you aren't stuck in "Hotel California", as VMware CEO Paul Maritz pointed out at VMware Partner Exchange 2010 (you can check in but you can't check out = lock-in).

Start Packing!

If you aren't planning a move to the cloud today, the least that you should do is "start packing". What I mean by that is to get your IT infrastructure as virtualized and efficient as it can be. Try to achieve 100% virtualization. Know your applications and end users. Understand your IT costs to the point that you can put a monthly price tag on each virtual machine. Streamline, consolidate, and document what you have now. By doing these things, you will not only make your IT more efficient and save money but also be prepared should your company consider a move to the cloud. Finally, educate yourself to the point that you are an "enterprise admin" ready to be a cloud mover, so that you aren't left out of the cloud equation completely.

Where are you in the cloud moving process? Learning, packing, or moving? I welcome your comments!

David Davis is a VMware Evangelist and vSphere Video Training Author for Train Signal. He has achieved CCIE, VCP, CISSP, and vExpert level status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com including the popular vSphere video training package. Learn more about David at his blog or on Twitter and check out a sample of his VMware vSphere video training course from TrainSignal.com!

Cloud Architecture Patterns: VM Pool

Steve Jin, VMware R&D

This entry was reposted from DoubleCloud.org, a blog for architects and developers on virtualization and cloud computing.


Provide a mechanism to fast provision virtual machines (VMs) and manage their lifecycles by maintaining a pool of virtual machines.




Virtual machines can be expensive to create; provisioning a new one takes several minutes. Technologies like linked clones and storage offloading can speed up the process, but it still takes time, and even these approaches do not help in use cases that demand instant provisioning.


It’s generally a good practice to pool resources that are expensive to create. In programming, you pool threads and check them out on demand; when a thread is done, you check it back in to the pool. This is what most Web servers do for high performance.

You can leverage the same idea for VM provisioning. You create new virtual machines and put them into a pool. When there is a new request, you just check out one virtual machine from the pool. The following diagram shows how it works.


Notice that there are two participants here:

1) The VM pool manager, which checks VMs in and out, indexes the pool's contents, and searches for available VMs;
2) The VMs parked in the pool, ready to be checked out.

Depending on how you use the pool, you can keep VMs powered on and ready to work at any time. Alternatively, you can keep the VMs powered off and power them on after being checked out. The difference is that the latter approach saves energy but takes longer. It’s a tradeoff you have to make in your projects.

A checked-out virtual machine may still need extra work before it is ready for your purpose. For example, you may have to change its IP address, register it with a load balancer, etc.

There are several decision points for a VM pool in real projects. They are mostly trade-offs you have to make in the design process.

Fixed size or dynamic?

A VM pool keeps a certain number of virtual machines. The checkout algorithm can be either first-in-first-out (FIFO) or first-in-last-out (FILO); it doesn’t matter to the users as long as you keep a reasonable number of VMs available. In either case, the pool manager has to track the VMs parked there.

The number of VMs can be fixed or dynamic. If it’s fixed, you create and check in a new VM whenever one is checked out. Otherwise, you can let the pool shrink, or optionally create a new VM when the count drops to a predefined minimum.
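A minimal pool sketch, assuming a hypothetical create_vm() factory call standing in for the slow provisioning step: it checks VMs out FIFO and replenishes when the count drops below a minimum.

```python
# VM pool sketch: FIFO checkout with a dynamic minimum size.
# create_vm() simulates the expensive provisioning call; in reality it
# takes minutes, so it runs ahead of demand, not on the critical path.

from collections import deque

def create_vm(next_id=[0]):
    next_id[0] += 1
    return {"id": next_id[0], "powered_on": True}

class VMPool:
    def __init__(self, size=3, minimum=1):
        self.minimum = minimum
        self._vms = deque(create_vm() for _ in range(size))

    def check_out(self):
        """FIFO checkout; replenish once the pool dips below minimum."""
        vm = self._vms.popleft()
        if len(self._vms) < self.minimum:
            self._vms.append(create_vm())  # would be an async factory call
        return vm

    def check_in(self, vm):
        """Return a recycled (cleaned) VM to the pool."""
        self._vms.append(vm)

    def __len__(self):
        return len(self._vms)

pool = VMPool(size=2, minimum=1)
first = pool.check_out()        # oldest VM leaves first (FIFO)
print(first["id"], len(pool))   # -> 1 1
```

Swapping deque.popleft() for deque.pop() would turn the same pool into a FILO pool, which, as noted above, makes no difference to the consumer.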

New or Recycle?

Newly checked-in virtual machines can come from two sources. One is a VM factory, where you create new ones. The other is to “recycle” virtual machines that are no longer needed. The latter may or may not be feasible, depending on how “dirty” a virtual machine is after use: you may need to run cleanup scripts or shut the VMs down before checking them back in, even if they are “recyclable.”

Single pool or multiple pools?

If you need only one type of virtual machine, a single pool gives you easy management and efficiency. But if you need multiple types of virtual machines, you may need multiple VM pools, each holding one type.

For simplicity, you may want to stick with the single-pool approach. If your VMs share a good common denominator, you can create a VM template based on it and pool that. The remaining work becomes installing additional software packages on the fly, which may or may not save you time depending on the complexity of the VMs.
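The multi-pool variant can be sketched as a dictionary keyed by template name; the template names here are hypothetical, and each pool is simplified to a plain list of VM records:

```python
# Multi-pool sketch: one pool per VM template type.
# Template names and VM records are illustrative placeholders.

pools = {
    "web-template": [{"id": 1}, {"id": 2}],
    "db-template":  [{"id": 3}],
}

def check_out(template):
    """Pick any VM from the pool matching the requested template."""
    pool = pools.get(template)
    if not pool:
        raise LookupError(f"no VMs available for {template!r}")
    return pool.pop()

vm = check_out("db-template")
print(vm["id"])  # -> 3
```

The manager's job grows only by one lookup, but every extra template type means another pool of idle VMs to keep stocked, which is the cost the paragraph above weighs.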


When you want fast or instant provisioning, you end up using extra resources, such as disk space for the virtual machines parked in the pool. The amount depends on the size of the pool. And depending on whether you keep VMs on hot standby, you may also use extra power for powered-on VMs that are just waiting.

By understanding your priorities, you can tune these parameters and minimize the costs to a reasonable, but still good enough, level for your projects.

Known Uses

In 2009, I worked on a cloud demo project for the VMworld keynotes. Obviously, you cannot make your audience wait three minutes after each command is issued.

To handle this, I designed a very simple VM pool holding pre-created VMs on hot standby. After the deployment command is issued, a VM is checked out and injected with new code. Within seconds, you see the newly deployed application up and running.

At the same time, a backend process is kicked off to create a new virtual machine for future usage. It still takes about three minutes in total at backend, but no one cares because the application itself is up and running in seconds. The demos turned out to be a huge success.

Related Patterns

  1. VM Factory: you can use a VM factory to create new virtual machines before adding them to the pool.
  2. Stateless VM: if the VMs in the pool are stateless, they require much less management work. You don't need to track differences among VMs–just pick any one from a single pool!
  3. Template VM: you can create different templates for significantly different VMs.

Steve Jin is the author of VMware VI and vSphere SDK (Prentice Hall), creator of VMware vSphere Java API. For future articles, please subscribe to Email or RSS, and follow on Twitter.