VMware is aware of today’s announcement of multiple vulnerabilities in the OpenSSL library and is reviewing their impact on vCloud Hybrid Service. Full details for all VMware products and services are tracked in the VMware Knowledge Base Article 2079783. We will post information specific to vCHS here on the vCloud Blog, and provide updates with more information as we have it.
By: Chris Colotti
There have been a number of questions regarding use cases for VMware vCloud Hybrid Service when used for IaaS, DaaS, and Disaster Recovery as independent services. However, it has never all been pulled together into a single architecture example. I have set out on a long venture to design and build a full-scale architecture with the help of a few other business units, most notably Kris Boyd in the End User Computing group, who works on EUC Solutions. The architecture will ultimately pull together the following components:
- On Premises vSphere
- Horizon View Desktops
- VMware vCloud Hybrid Service Extended Applications (IaaS)
  - Exchange Web
  - SharePoint Web
  - Wiki Web interfaces
  - Horizon View Security Services
- VMware Horizon DaaS on vCloud Hybrid Service
  - Cloud based desktops for Disaster Recovery
  - Access to failed over applications in the cloud
- VMware vCloud Hybrid Service – Disaster Recovery
  - Protect on premises applications
  - Connect with DaaS desktops
  - Cloud based desktops for Disaster Recovery
In order to achieve a full-scale setup, the first thing needed was to establish all the VPN networking and connection points between the clouds and to the on-premises data center. The diagram below shows the 50,000-foot view of the combined clouds and data center connectivity so you can get an idea of what we have going on. For each component we will show examples of the firewall rules and VPN connections. We will also drill down into each solution's individual data centers so you can see the "double-click" aspect as we build out this series of blog posts.
Note: We are making the assumption that the reader already understands the basic means to access their vCloud Hybrid Service resources.
Additionally, many firewall rules were created and tested both on and off premises to ensure the first piece of infrastructure, Active Directory, was communicating and passing native replication. This is no different from the previous document I wrote about building your Enterprise IT Hybrid Data Center and extending your data center to vCHS. The purpose here was to make sure all the endpoints were connected ahead of time. Since the basic idea is covered in that paper, we will not review the details of the connections and how to create them; this is simply an expansion of the original concept to a larger scope. For our purposes we used VPN, but for a production setup Direct Connect would be ideal to minimize latency.
For the purposes of this setup we also have used DYN.com for many of our external DNS needs and some traffic management, as well as F5 Networks Global Traffic Managers for DNS load balancing. We wanted to make sure our external DNS name space (VMTM.org) was managed through other services instead of being self-hosted.
Part 1 – Extending VMware Horizon View Secure Access
Now that we had the overall infrastructure in place, it was simply time to start building out the various components and applications. The requirements for the Horizon View setup were as follows:
WDC (On Premises) Data Center
- View Desktops
- View Connection Server
- View Security Server in the DMZ
- Active Directory
Las Vegas (vCHS IaaS) Data Center
- View Security Server in the DMZ
Essentially, all the private View components are hosted on premises in WDC, on separate network segments for the connection server, the desktops, and the Internet-facing security server. When you double-click into the data centers, below are the diagrams of each.
On Premises Data Center:
Note: The F5 Big IP pictured is only being used for DNS load balancing so the servers on the public network are still being accessed via the Edge Gateway’s Public IP addresses.
vCHS Las Vegas Data Center:
Since the two sites were already connected via VPN, we just needed to write the firewall rules for the Horizon View application itself. We first configured the vCloud Hybrid Service firewall with the following rules so that the security server would connect properly to both the connection server and the desktops, per the View documentation. We also needed to ensure that public access to the security servers on the Internet was made available through various SNAT and DNAT rules.
Below are some of the examples of those rules from the vCloud Hybrid Service Side.
Here are some examples of the rules we have on premises, also using a vCloud Networking and Security Gateway.
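As a rough sketch of what these rule sets encode (the port numbers follow the View documentation, but the rule names and network labels here are hypothetical placeholders, not our actual gateway configuration), the default-deny logic looks something like this:

```python
# Illustrative model of the Horizon View firewall rules described above.
# Ports follow the View documentation; names and zones are hypothetical.
RULES = [
    # Internet -> Security Server (DMZ)
    {"name": "https-in",   "src": "any", "dst": "dmz",      "proto": "tcp", "port": 443},
    {"name": "pcoip-tcp",  "src": "any", "dst": "dmz",      "proto": "tcp", "port": 4172},
    {"name": "pcoip-udp",  "src": "any", "dst": "dmz",      "proto": "udp", "port": 4172},
    # Security Server -> Connection Server (traverses the VPN)
    {"name": "ajp13",      "src": "dmz", "dst": "internal", "proto": "tcp", "port": 8009},
    {"name": "jms",        "src": "dmz", "dst": "internal", "proto": "tcp", "port": 4001},
    # Security Server -> Desktops
    {"name": "pcoip-desk", "src": "dmz", "dst": "desktops", "proto": "tcp", "port": 4172},
]

def is_allowed(src, dst, proto, port):
    """Return True if any rule permits the flow; everything else is denied."""
    return any(
        (r["src"] in ("any", src)) and r["dst"] == dst
        and r["proto"] == proto and r["port"] == port
        for r in RULES
    )
```

The same exercise, done against the real rule tables in both gateways, is a useful sanity check before testing the security servers end to end.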
If you have not already figured it out, writing firewall rules takes some knowledge and thought, so it may not be for the faint of heart, and you may need to talk to your networking folks. In our case we control all the access to this environment and all the public IP addresses. In a previous life I was also a firewall admin, so I have the added advantage of being able to write and understand all these rules.
Once all the rules were in place, we tested each individual security server via a public DNS A record. We first ensured the on-premises server was working, then flipped the A record to the vCHS-based server and tested again. Once we were satisfied both were working properly, we moved to the final stage of load balancing the two servers together for redundancy.
Load Balancing the Servers
The real magic here is that you now need a global load balancing solution to manage the two external IP addresses that correspond to the two View Security Servers. Since they are not in the same data center, the easiest way to do this is with a solution that offers DNS-based load balancing. We utilized both F5 Global Traffic Managers and the DYN.com Traffic Manager hosted solution.
We simply created a CNAME in external DNS for view.vmtm.org, which we can toggle between the F5 GTMs and the DYN traffic manager solution. The F5s are set up with one on premises in WDC and the other in the Sterling, Virginia Virtual Private Cloud.
The external DNS CNAME simply resolves to one service or the other with a 30-second TTL so we can move it back and forth. The setup and configuration of the actual traffic management differ between the two solutions, but the concept is basically the same: the main DNS record resolves to a subdomain, which in turn is configured to point at one of the two View Security Servers. The user only ever needs to know the main external DNS name. Below you can see some of the IP records.
- view.vmtm.org = Primary CNAME; points to either view.dyn.vmtm.org OR view.wip.vmtm.org
- view.dyn.vmtm.org = DYN.com Traffic Manager Pool

F5 GTM Settings

- wip.vmtm.org = F5 GTM DNS authoritative managed name space
- view.wip.vmtm.org = F5 GTM Wide IP management pool
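To make the chain concrete, here is a toy model of the resolution path using the records above; the toggle logic is purely an illustration of the concept, not a real resolver or the actual DYN/F5 behavior:

```python
# Toy model of the DNS layout: view.vmtm.org is a CNAME we can toggle
# between the DYN.com pool and the F5 GTM wide IP. Values illustrative.
PRIMARY = "view.vmtm.org"

ZONE = {
    "view.dyn.vmtm.org": "dyn-traffic-manager-pool",   # authoritative in DYN.com
    "view.wip.vmtm.org": "f5-gtm-wide-ip-pool",        # authoritative in the F5 GTM
}

def resolve(active="dyn"):
    """Follow the primary CNAME to whichever service is currently active.

    The 30-second TTL on the real record is what makes this toggle quick."""
    cname = {"dyn": "view.dyn.vmtm.org", "wip": "view.wip.vmtm.org"}[active]
    return ZONE[cname]
```

Either target then hands back the public IP of one of the two security servers, so the user only ever types view.vmtm.org.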
What we are not going into detail on here is how all the DNS is configured to make this work; that will be another post or whitepaper. Needless to say, you need a solid understanding of DNS records and management, since some domains are authoritative in DYN.com and another is authoritative in the F5. This is all documented by F5 in this article about delegating subdomains.
Understanding the Traffic Flow
Once you have this all configured, it is important to understand a bit of the traffic flow. The function of the Security Servers is to proxy a user's connection to the desktop. This means that if a user connects to the on-premises security server via the load balancing, they are local to the desktop. If they connect to the vCHS-based security server, their desktop connection will traverse the VPN or Direct Connect link; this is where Direct Connect minimizes latency. The availability aspect of having one security server on premises and the other off premises, however, allows you to control the load balancing for maintenance or Internet bandwidth purposes.
Now that this is up and running, the next stage will be to incorporate VMware DaaS on vCloud Hybrid Service into the mix and see how we can combine the client experience of connecting to DaaS for the purposes of Disaster Recovery in the cloud. The reason for all this is that we needed to simulate client connections to real applications to really show off and appreciate the experience. It is my belief that for Disaster Recovery to work well, you have to account for the user's client connections. Not every application is web based, which is why we will be standing up some local applications and using vCloud Hybrid Service Disaster Recovery to fail them over; ultimately the user will not even know, once they connect using the View client to the DaaS setup, which in turn can reach the failed-over application. You need to be able to see the forest for the trees a little bit here as we post the next set of pieces.
Chris is a Principal Technical Marketing Architect with the vCloud Hybrid Services team with over 10 years of experience working with IT hardware and software solutions. He holds a Bachelor of Science Degree in Information Systems from Daniel Webster College. Prior to VMware he served a Fortune 1000 company in southern NH as a Systems Architect/Administrator, architecting VMware solutions to support new application deployments. At VMware, in the role of Consulting Architect, Chris has guided partners as well as customers in establishing a VMware practice and consulted on multiple customer projects ranging from datacenter migrations to long-term residency architecture support. Currently, Chris is working on the newest VMware vCloud Hybrid Service solutions and architectures for vSphere customers wishing to migrate to the VMware hybrid cloud. Chris is also a VMware Certified Design Expert (VCDX #37).
By Brandon Sweeney, Vice President U.S. Mid-Market Businesses, VMware
VMware’s partner community has long been integral to the success of our business, and they continue to be key players as we deliver on the VMware Hybrid Cloud vision.
Today, we are doing even more, expanding our partner network to drive hybrid cloud adoption. Importantly, we are backing up this commitment with significant channel investments, helping partners sell an easy-to-deploy disaster recovery solution on vCloud Hybrid Service.
To help our partners capitalize on the growing cloud market and drive the greatest benefit to their customers, this week VMware is launching a vCloud Hybrid Service program with enablement, incentives, promotions and enhanced support. Our U.S. and UK partner community, which includes Ingram Micro, Tech Data, Arrow, and Avnet, as well as hundreds of corporate resellers and VARs, can now deliver VMware vCloud Hybrid Service with new capabilities that are not offered elsewhere in the market.
In addition to the partner incentives, VMware is providing all end-user customers with a price promotion for a three- or 12-month Virtual Private Cloud, vCloud Hybrid Service – Disaster Recovery, and Dedicated Cloud. Details on the partner promotion, including up to a month of free service, are available here.
VMware is fully committed to the success of our partners and is now launching an even broader set of channel incentives to help monetize cloud opportunities. Effective June 1, 2014, vCloud Hybrid Service distribution and reseller competency partners in the US can participate in incentives focused on delivering vCloud Hybrid Service – Disaster Recovery, Virtual Private Cloud and Dedicated Cloud to their Mid-Market customers through the Mid-Market Cloud Surge Program.
We have created the VMware Hybrid Cloud Solution Competency to provide partners with the foundational training required to deliver the VMware vCloud Hybrid Service. This will help them successfully sell cloud services and gain influence with their customers in cloud discussions. In addition to the competency, VMware also created solution enablement toolkits on vCloud Hybrid Service, and vCloud Hybrid Service – Disaster Recovery to enable partners to quickly learn, market, and sell these solutions to their customers.
Added Value – Disaster Recovery
How can a partner position a mission-critical cloud solution to its customers? They can begin with the new VMware vCloud Hybrid Service – Disaster Recovery solution. No matter what market a customer is in, having an effective, affordable and well thought-out disaster recovery plan is crucial to ongoing success.
VMware vCloud Hybrid Service – Disaster Recovery is a new cloud-based disaster recovery (DR) service that provides a continuously available recovery site for VMware virtualized data centers and seamlessly extends DR to the public cloud. Using this service offering, VMware partners can distinguish themselves by offering a simple and cost-effective DR solution, allowing customers to use the cloud to deliver business value without wrestling with the operational complexity and incompatibility inherent in other public clouds.
By leveraging vCloud Hybrid Service and the new vCloud Hybrid Service – Disaster Recovery solution, our partners can provide customers with unparalleled cloud solutions and receive a limited time incremental benefit from VMware. As the first to market with a DR-specific packaged offering with support – we are excited to see how this translates into success for our partner community.
For VMware partners – what are you waiting for? The cloud surge is happening now, and we want you to dive on in!
Learn more on Cloud Surge by logging on to Partner Central.
Brandon Sweeney is VMware’s Vice President U.S. Mid-Market. Brandon’s team is responsible for both customer acquisition and serving existing customers in the sub 1000 employee segment via VMware’s mid-market sales teams, channel partner ecosystem, marketing and sales operations teams. In addition to driving day to day business operations, Brandon leads and develops the overall business strategy for the segment. Brandon joined VMware 9 years ago. He was most recently Vice President of the Americas Partner Organization (APO) with responsibility for driving revenue and share across all Partner Routes to Market including OEMs, Resellers, Distributors and SI/SOs for all customer segments.
It was a busy week for the cloud Twitterverse: trashing the new Gartner Cloud IaaS Magic Quadrant was de rigueur early in the week, because everyone knows that IT is dead and all applications are going to be re-written to run on one of the clouds in the leaders' quadrant. Then, on Friday, AWS announced a management plug-in for VMware vCenter, prompting a rush of schadenfreude, as this clearly means that VMware is "going to disappear" as all existing apps are going to be imported into an operationally incompatible cloud environment with no resiliency for existing apps and highly inconsistent performance (for all apps).
Both of these things can’t be simultaneously true, of course. But there are kernels of truth in both narratives that underscore the importance of a hybrid cloud strategy: public cloud has to evolve to support both existing and cloud-native applications, blend together on-prem and off-prem deployments, and address the key challenge of production application deployment, which is operations. The single most expensive part of any cloud deployment is the operations team, whether you’re doing DevOps or waterfall, or anything in between.
My colleague Chris Wolf has written a clear and lucid blog post on why this is the case, which I encourage you to read. Allowing IT organizations to extend their current modus operandi into a complementary, compatible public IaaS service is central to the capabilities of vCloud Hybrid Service. That means being able to operate immediately with the apps you have and the team you have, jump-starting the journey that is IaaS adoption. Your destination may very well look radically different from where you are today – you want to change how you operate to take advantage of the flexibility that cloud provides – but that doesn't mean the journey has to be filled with giant leaps, downtime, re-writing everything multiple times, and huge operational risk.
I would also argue that when writing cloud-native applications, you want to write them once and not have to rewrite large chunks of your app if you choose to deploy somewhere else. When developing for hybrid cloud, the choice of deployment venue can be deferred until the later stages of the process. When developing for a pure public cloud, you choose the deployment venue at the beginning, and it’s very expensive to change your mind later. It’s no coincidence that there’s no “Export” button in AWS’s new vCenter plug-in.
And so yes, I believe the best place to run a VMware virtualized app is VMware’s cloud, which you would expect me to say. But not because it’s super simple to bring your existing VMs to it, but because it allows you to get started with the operations you have today, and gives you unparalleled operational flexibility tomorrow.
We’re having a little fun kicking off a campaign this week in partnership with The Onion, but we have a serious point behind the antics. Your business is trying to do big things, and do them in a hurry, and sometimes business leaders don’t view IT as the strategic partner to get them to their destination.
We believe by embracing the hybrid cloud, we can change that relationship, to make IT the “it” crowd, the people who really make things happen. That’s because a hybrid cloud strategy gives IT all the freedom of the public cloud with the manageability and security you expect from your existing data center or private cloud. It can make you seem like a miracle worker.
The fact is there’s a lot of value in a well-run enterprise data center. Despite the fact it can take a degree in archeology to manage these existing environments, “keeping the lights on” is an essential function the enterprise data center does well. It’s tremendously reliable, provides predictable performance and often can be a better choice for compliance and overall business strategy than putting corporate data in the hands of someone else.
Yet there’s a good reason for the growth of cloud computing. Seizing new business opportunities and responding to unexpected challenges is often better achieved by leveraging the extreme flexibility of the public cloud. It’s why the public cloud is expected to hold 70-80 percent of cloud workloads this year, according to Gartner.
As a result, we believe we’re in the midst of a broader shift in the role of IT. We’re shifting from “keeping the lights on” – installing and managing servers, storage and networks – to using a hybrid model to enable IT to become a broker of cloud services, with responsibility for governance across public and private spheres, on-premise and off. The technology now exists to bridge these two worlds seamlessly.
Quite frankly, we believe 2014 will be the year most IT organizations get serious about hybrid cloud as an agility driver for the business. Today, most IT leaders recognize how transformative cloud computing can be, but haven’t defined a cloud strategy that’s right for them. We can help.
Watch www.becometheITdepartment.com over the next month for insights and research on the move to hybrid cloud.
The cloud holds much promise for enterprises of all sizes, including the promise of improved business agility, lower IT costs and much more. However, medium to large businesses see the “leap” to the public cloud as having many issues, especially maintaining the same security standards when moving business-critical workloads to the cloud.
That’s why today we’re happy to announce the AFORE CloudLink Encryption solution for vCloud Hybrid Service, making it easier for customers to secure and protect their sensitive application data in the public cloud. AFORE CloudLink for vCloud Hybrid Service was designed for the hybrid cloud, so you can protect your workloads wherever they reside, either on-premises or on vCloud Hybrid Service.
VMware understands the issues that most of our customers face when it comes to moving to the cloud, which is why we created vCloud Hybrid Service, a cloud offering that provides a straightforward way for users to embrace the public cloud by seamlessly extending their data centers. vCloud Hybrid Service enables customers to move to a public cloud while leveraging the same VMware tools they have been using for years. This allows users to embrace the cloud without the need to re-educate staff, deploy new tools or re-architect their applications.
However, as customers move forward with cloud adoption, their applications that contain sensitive data, including intellectual property, business sensitive plans and customer and employee personal information, need to meet strict regulatory compliance or corporate security policies. In addition, customers are concerned about security in shared cloud infrastructure and relinquishing control of their data to cloud providers outside of their environment.
AFORE CloudLink for vCloud Hybrid Service addresses these concerns by providing the following benefits:
- Mitigates compliance risk by securing sensitive data in the cloud
- Allows users to maintain control of their encryption keys
- Application and OS agnostic, requiring no change to user workloads
- Easy to deploy and manage, with seamless extension to the cloud
CloudLink is your answer to data security and regulatory compliance, allowing users to embrace vCloud Hybrid Service with ease and confidence. Not only does it provide the data protection you need, but it is straightforward to deploy and manage.
For more information about vCloud Hybrid Service, visit vCloud.VMware.com.
By: Hany Michael
In this blog post, I will try to provide a practical approach to building your first hybrid cloud using various VMware solutions and services. In a nutshell, we will use vSphere for the private/on-prem cloud while leveraging the vCloud Hybrid Service (vCHS) for the public/off-prem cloud. We will then bridge both clouds with a VPN tunnel using NSX (or vCNS, whichever applies to you). Last but definitely not least, we will stand up vCloud Automation Center (vCAC) as an abstraction layer above these two clouds for automation and policy-based provisioning of workloads in a true hybrid cloud model.
This is not going to be a detailed how-to guide, but rather an architecture discussion with some good practices that I have seen from my real-world experience in the field. The important thing to note also is that everything you will read in this post has been tested and proven to work.
As you can see in the architecture diagram above, we have four main areas to look at. Starting from the bottom up, we have NSX for vSphere running a site-to-site VPN tunnel to an Edge Gateway on vCHS; this bridges communication between the two clouds, and I will explain why in just a bit. Next, on the left, we have the traditional vSphere infrastructure that NSX is connected to. On the right side we have our Virtual Private Cloud (VPC) on vCHS; this could also be a Dedicated Cloud on vCHS, depending on your business requirements. Lastly, at the top of the architecture, you can find vCAC as a single entry point to this hybrid cloud, with policy-based configurations mapped to your business groups. Now let's examine all that in detail.
The Business Driver:
Before we take a technical deep dive into all these components, let us stop first for a moment to understand the business driver behind this architecture. It is important to note that I did not base this solution on a fictitious scenario; this is a real-world requirement from one of my large customers in the financial sector. Their main challenge is that they cannot predict the timing or size of new projects for which IT must provide compute resources. Another challenge is that these requirements are so dynamic in nature that their Operations team cannot handle them using traditional VM provisioning on vSphere. These projects run through three main phases: Test/Dev, UAT and Production. The second and third phases can be handled internally, since the Ops team will have the time and exact requirements to plan for them. It is the first phase that is unpredictable and requires the greatest agility to respond to. These Test/Dev VMs are not an isolated type of workload that can just be spun up on any public cloud: the customer needs communication back and forth with internal infrastructure services (like AD, DNS, DB, antivirus, etc.) and even corporate apps/databases that cannot be hosted off-prem. Lastly, the templates used to spin up these workloads in phase one must be based on internally maintained corporate images, not those available on the public cloud.
The Networking Infrastructure:
Now to the interesting part. We will start here with the underlying network infrastructure required for cross-site communications. As I mentioned in the previous section, the workloads provisioned on the cloud (be it internal or external) must be able to communicate with the on-prem infrastructure services. To achieve that, we set up a site-to-site VPN tunnel between two gateways. Those gateways, in our case, are NSX 6.0 for vSphere sitting on-prem and, on the other end, an Edge Gateway running on vCHS. You can find many tutorials on the Internet that describe this part in detail. Please note that you can use vCNS (part of the vCloud Suite) or any hardware-based VPN gateway instead of NSX; the choice is really yours. With NSX (or vCNS), you will need at least two interfaces, one internal to the corp network fabric and the other external to the public Internet. The latter needs to be set with a public IP, or NAT'ed in a DMZ, so that it can receive VPN connection requests from the Internet (in our case, from the Edge Gateway in vCHS). I wouldn't personally hesitate to leverage NSX here due to the incredible flexibility it offers in design and deployment. If you haven't already done so, please take the time to review my blog posts on NSX as an SDN gateway on the Internet: http://www.hypervizor.com/nsx/
On the other end, we will need to configure the VPN information on the vCHS Edge Gateway. At the time of this writing, these configuration parameters are not exposed in the vCHS UI itself, so you will have to jump into your vCloud Director UI and manage the Edge Gateway from there. Nevertheless, it is quite straightforward and almost identical to what you configured on your NSX end.
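Conceptually, the two ends mirror each other. A small sketch of that "settings must match" idea follows; the addresses and parameter values are illustrative placeholders, not our actual tunnel configuration:

```python
# Sketch of the site-to-site VPN symmetry: IKE settings must agree on
# both gateways, and each side's peer address must point at the other's
# public endpoint. All values below are made up for illustration.
def tunnel_compatible(local, peer):
    """Check that two gateway configs can form one tunnel."""
    shared = ("encryption", "dh_group", "psk")
    return (all(local[k] == peer[k] for k in shared)
            and local["peer_ip"] == peer["public_ip"]
            and peer["peer_ip"] == local["public_ip"])

nsx_end  = {"public_ip": "203.0.113.10",  "peer_ip": "198.51.100.20",
            "encryption": "aes256", "dh_group": 2, "psk": "example-shared-secret"}
vchs_end = {"public_ip": "198.51.100.20", "peer_ip": "203.0.113.10",
            "encryption": "aes256", "dh_group": 2, "psk": "example-shared-secret"}
```

Most tunnel-establishment failures I see in the field come down to one of these fields differing between the two ends.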
The Compute Resources
Now that we have set up our network piece and our traffic is flowing back and forth between the two clouds, it is time to construct our compute resources. On the private cloud side, this would typically be a vSphere cluster, which we will designate here as "Production" to run the workloads in phase three. This could also host the UAT workloads, or we could simply allocate another cluster for those. The important part is that the NSX Edge must have a network interface/route to the VM networks on that cluster, shown in the architecture diagram as 192.168.110.0/24.
On the other side, assuming we have a VPC subscription on vCHS, we already get an Org-vDC with 5GHz of CPU (burstable to 10GHz) and 20GB of memory to start with. That will be our compute resource on the public cloud side; it can, of course, be scaled up on demand. As with our on-prem vSphere cluster, this Org-vDC must have a routed Organization Network acting as the internal interface of the Edge device, illustrated in the architecture diagram as 172.16.1.0/24.
Before we go to the provisioning part, let us stop for a minute to address the requirement of content synchronization across the clouds. As I mentioned earlier in the business requirements section, we need to maintain the core templates on premises and have them synchronized (or pushed out) to the public cloud for consumption through the blueprints, which we will talk about in just a bit. To do that, we just need to set up vCloud Connector (vCC) on our private cloud side only. vCC comes in two components, Server and Node. The vCC Server is a one-time setup that will typically sit in your management cluster; it acts as a hub (if you will) to coordinate workload migrations and template sync across your private and public clouds. A Node needs to be deployed for each internal endpoint (e.g. vSphere) or public cloud (e.g. vCHS). In the case of the latter, it is already set up for you; you just need to change the port number from 443 to 8443 in your vCD Organization URL when connecting to it, and you will be all set. Again, there are resources out there on the Internet that explain these configurations in detail.
Once you have your vCC Server and Node components set up on-prem, you need to “Subscribe” to the template location to sync that content out to your public cloud endpoint. It is as simple as that.
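The port change mentioned above is a one-character edit, but it is easy to get wrong by hand. A tiny sketch of it (the org URL below is a made-up example; substitute your own vCD Organization URL):

```python
from urllib.parse import urlparse, urlunparse

def vcc_node_url(org_url):
    """Rewrite a vCD Organization URL to use the vCC node port (443 -> 8443).

    The hostname/path are whatever your vCHS org URL happens to be."""
    parts = urlparse(org_url)
    return urlunparse(parts._replace(netloc=parts.hostname + ":8443"))

# Hypothetical org URL, for illustration only:
node = vcc_node_url("https://p1v1-vcd.vchs.vmware.net/cloud/org/MyOrg")
```

That resulting URL is what you point the on-prem vCC Server at when registering the vCHS endpoint.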
vCloud Automation Center
This is where everything comes together to form our first true hybrid cloud. In vCAC, we first need to add two endpoints: the local vCenter Server that is managing the Production Cluster, and the vCloud Director of our VPC subscription on vCHS. I will not go through the exact configuration steps; there are already some excellent and detailed videos on YouTube from VMware Technical Marketing. Below are some guidelines specific to our architecture:
- Reservation Policies: You will need to create two Reservation Policies, one for the local compute cluster running on vSphere and the second for the remote compute resources running on vCHS.
- Blueprints: You will need to create two different Blueprints, one for the local/prod VMs and the other for the remote/dev VMs. Each one should point to the relevant Reservation Policy you set in the previous point.
- Templates: The templates you set in the Blueprints above are selected from vSphere and vCHS respectively. Both will be identical, since they are synced by vCloud Connector as described in an earlier section.
- Customization: On the private cloud, you can leverage any vCenter Customization Specification to customize your template during provisioning. On the vCHS side, you do not have to specify one, as vCD takes care of this for you.
- Network Profiles: You will maintain both the external and internal IP addressing through the Network Profile configurations. You do not have to work around IP address conflicts, even on your public cloud, since both vCAC and vCD keep track of the consumed IPs.
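The relationships among these pieces can be sketched as follows; every name here is a hypothetical placeholder for illustration, not an actual vCAC object:

```python
# Sketch of how vCAC blueprints, reservation policies, and endpoints
# relate in this architecture. Names are invented placeholders.
RESERVATION_POLICIES = {
    "local-prod": {"endpoint": "vcenter",  "resource": "Production Cluster"},
    "remote-dev": {"endpoint": "vcd-vchs", "resource": "VPC Org-vDC"},
}

BLUEPRINTS = {
    "prod-vm": {"template": "corp-template-vsphere", "policy": "local-prod"},
    "dev-vm":  {"template": "corp-template-vchs",    "policy": "remote-dev"},
}

def placement(blueprint):
    """Resolve a blueprint to the endpoint its reservation policy targets."""
    policy = BLUEPRINTS[blueprint]["policy"]
    return RESERVATION_POLICIES[policy]["endpoint"]
```

A request against the dev blueprint thus lands on the vCHS Org-vDC, while the prod blueprint lands on the on-prem cluster, without the requester ever choosing an endpoint directly.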
Putting it all together
The last piece in this architecture is to create the entitlements for each Blueprint/Service and the relevant approval processes. What is even more interesting is what you can do to delegate specific, precise capabilities to your internal users, or even to external contractors or consultants working on your projects. Imagine, as an example, that a new SharePoint consultant is on site, ready to start the three-phase process of Test/Dev, UAT and Prod. You simply assign him an account in AD, give him access to the vCAC portal, and entitle him to create VMs only on the public side of your hybrid cloud, still controlled, of course, by your approval process. Once the consultant has his VM(s), he can connect to your on-prem infrastructure services per the security policies you defined on NSX. Say that during his work he wants to create a snapshot before upgrading a specific component of his software: no problem, he has that permission right from his vCAC portal, with no need to bother your Ops team. Or say he messed something up so badly that he just wants to start fresh: still no issue, he can "reprovision" his VM and have it rebuilt from scratch in a matter of minutes. In this case he doesn't need another approval (unless you define that), since he already got it the first time; imagine how much time you have saved him, and your team, on operational tasks like this. And have I mentioned that this very consultant can even access the console of his VM(s) on vCHS through the VMRC without ever touching vCD or knowing anything about it?
We have just scratched the surface here. There is so much you can do with such a flexible architecture and agile infrastructure, defined and driven by your very own business requirements.
Hany Michael is a Lead Architect in the SEMEA Professional Services Organization at VMware and a CTO Ambassador. Hany has extensive experience architecting and building private and public clouds across EMEA for large enterprises and service providers. In his spare time he also blogs about various technical subjects, and he is well known in the industry for his architecture-based diagrams, which for the past six years he has used to present highly complex architectural subjects in a form that is easy to grasp and understand.
By David Hill
Recently, VMware vCloud Hybrid Service released a new storage tier offering named Standard Storage. Standard Storage is a low-cost storage option available as part of the Dedicated and Virtual Private Cloud services. In the announcement blog article, we discussed the use cases for the different storage tiers. In this article, we will look at when you would want to use both storage tiers at the same time, essentially configuring a Virtual Machine with multiple virtual disks on different storage tiers.
In the original article we discussed database servers. Let’s revisit this use case:
A database server predominantly has higher read/write requests and requires increased storage performance. The data is not static; it is extremely fluid, due to the nature of the application. With SSD-Accelerated Storage in vCloud Hybrid Service, an end user gets better performance to support the high data change rates within database applications.
When running database servers, you want higher performance from your storage tier, but the downside is that you pay more for that tier. What is needed is the ability to split your virtual disks across different storage tiers.
Why would we want to do this?
Let’s say we have a database server that needs 100GB for the OS plus application installation and 700GB for the database itself. We only need the higher performance for the 700GB, so we could lower our costs if we split the virtual disks and placed them on the different tiers, essentially putting the OS/Application virtual disk on Standard Storage (cheaper) and the database virtual disk on SSD Accelerated Storage (higher performance). By doing this, the costs are reduced as we are paying a lower price for the 100GB disk.
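The savings are simple arithmetic. The per-GB prices below are hypothetical, purely for illustration (consult the vCHS price list for real figures), but they show how the split pays off:

```python
# Hypothetical per-GB monthly prices, in cents, for illustration only.
SSD_CENTS_PER_GB = 30   # SSD-Accelerated Storage (assumed price)
STD_CENTS_PER_GB = 10   # Standard Storage (assumed price)

os_disk_gb, db_disk_gb = 100, 700

# Option A: everything on SSD-Accelerated Storage.
all_ssd = (os_disk_gb + db_disk_gb) * SSD_CENTS_PER_GB / 100

# Option B: OS/application disk on Standard, database disk on SSD.
split = (os_disk_gb * STD_CENTS_PER_GB + db_disk_gb * SSD_CENTS_PER_GB) / 100

print(f"all SSD: ${all_ssd:.2f}/mo, split: ${split:.2f}/mo, "
      f"saving: ${all_ssd - split:.2f}/mo")
```

Whatever the actual tier pricing, the structure is the same: you only pay the SSD premium on the 700GB that actually needs it.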
The diagram above shows at a high level how this is achieved. By creating separate virtual disks, we can place these on the different storage tiers. This is extremely easy to do. Once the virtual machine is created, you can add another virtual disk by changing the settings on the virtual machine.
Let’s take a look at how we do this:
1. Open the Virtual Machine details within the UI and go to the Settings tab.
2. Click "Storage Allocation".
3. The Virtual Machine is configured.
That’s it. All done. The virtual machine now has two virtual disks configured with different storage characteristics.
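The resulting configuration can be sketched as a simple data model; the class names here are hypothetical, just to show the shape of a VM whose disks sit on different tiers:

```python
from dataclasses import dataclass

@dataclass
class VirtualDisk:
    size_gb: int
    storage_profile: str   # the storage tier this disk is placed on

@dataclass
class VirtualMachine:
    name: str
    disks: list

    def add_disk(self, size_gb, storage_profile):
        # Mirrors the UI flow above: edit the VM settings, add a disk,
        # and choose its storage allocation independently of the others.
        self.disks.append(VirtualDisk(size_gb, storage_profile))

# Database server: OS/app disk on Standard, database disk on SSD.
vm = VirtualMachine("db-server", [VirtualDisk(100, "Standard")])
vm.add_disk(700, "SSD-Accelerated")
print([(d.size_gb, d.storage_profile) for d in vm.disks])
```

Each disk carries its own storage profile, which is exactly what lets the billing and performance characteristics differ per disk within one VM.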
As you can see from the steps above, it is extremely easy to utilize multiple storage profiles within VMware vCloud Hybrid Service, ultimately allowing you to manage both the costs and the capabilities of your cloud workloads.
David Hill is currently a Senior Technical Marketing Architect in the Hybrid Cloud Business Unit. David has been a self-employed IT Consultant and Architect for around 15 years, working on projects for large consultancies and financial institutions. Dave blogs at his personal blog, www.davidhill.co, where he hopes to provide readers with an informative reference site when designing/deploying or troubleshooting virtualisation and cloud technologies.
Attention VMware Service Provider partners! In case you missed it, last month we announced new improvements to the vCloud Usage Meter with the launch of version 3.3. We've enhanced the "Monthly Usage" report to improve reporting efficiency and accuracy, down to the precise number of VMs hosted and under management – providing valuable information to improve future VSPP product offerings.
One such reporting improvement includes Usage Meter 3.3’s ability to report product usage data directly into the VSPP Business Portal. This functionality streamlines the monthly usage reporting cycle for product and end user data (though Service Providers will still need to approve the data prior to submission of monthly reports to VSPP Aggregators).
Other key features of the release include:
- Introduction of business intelligence reporting along with the monthly product report;
- Support for direct reporting to the VSPP business portal;
- Support for vCloud Automation Center;
- Refreshed versions of supported products;
- Expanded API support;
- And product improvements.
VMware vCloud Usage Meter 3.3 offers Service Providers security, compliance, reliability, and lowered operational expenses in reporting product usage. If you haven’t already done so, download vCloud Usage Meter 3.3 by visiting the community page. For more information, our VMware vCloud Usage Meter reference guides, both User and API, have also been updated to reflect the changes in 3.3.
On May 20th at 9:30am PST, our next vmLIVE session will show how Service Providers can leverage VMware vSphere Data Protection (VDP) to deliver profitable cloud backup and recovery services to customers. VDP features WAN-optimized, encrypted replication, efficient backup deduplication, and advanced recovery technology.
With vSphere Data Protection, you get:
- Efficient, Reliable Backup
- Faster, Assured Recovery
- Simple, Integrated Management
- Application-Aware for Physical and Virtual
Don’t miss this vmLIVE to learn how Service Providers are using VDP solutions to provide cloud-based offsite data protection to grow revenues and acquire customers!
Want to learn more? Register for the May 20 vmLIVE today.
For more information about VMware vCloud Service Providers, visit vCloudProviders.VMware.com.