
Monthly Archives: February 2013

The Journey From Virtualization to Cloud – Highlights from #cloudtalk

Last Tuesday, we hosted our #cloudtalk on the journey from virtualization to cloud. Special thanks to everyone who participated in the chat for making it a lively and provocative discussion. We also wanted to thank Kurt Milne (@kurtmilne) and Bryan Bond (@VMJedi) for co-hosting the chat with us!

The discussion started off with the question, “What does the journey from virtualization to cloud mean to you?”

@millardjk was the first to chime in, stating that virtualization is a datacenter without automation, self-service or elasticity, while cloud brings all three with it. @tcrawford suggested that cloud is a maturity beyond virtualization in the progression of resource management. @jtimdodd stated that going from virtualization to cloud was going from an internal virtual infrastructure to an external environment that can scale on demand.

Several others chimed in with their views, including @Dana_Gardner, who noted that going from virtualization to cloud means taking a utilization benefit and turning it into an IT transformation/strategy. @maishsk brought up a very interesting point, stating that virtualization is a consolidation/migration of workloads while cloud is more about process and culture, which @jakerobinson agreed with. All seemed to agree after Kurt stated that “Cloud requires letting go of some traditional IT ops practices”. @jamesurquhart built upon Kurt’s view, making the point that cloud also means adopting new IT practices and skills.

We then asked participants if there were any key decisions one should look at when considering making the move from virtualization to cloud.

Co-host @VMJedi made a great point, claiming that while automation is important, getting out of the hardware maintenance and upgrade business is a huge driver in the decision to move from virtualization to cloud. @Dana_Gardner talked about how organizations must decide whether to support a class of requirements all at once, whether they want to build a repeatable fabric, and whether apps have to align to it. @lmacvittie discussed how decisions must balance control and agility – some things need control, while others do not. The decision to be made is to figure out which applications or processes need that control, and to let go of what does not.

The conversation shifted as soon as we asked participants whether they had taken steps from virtualization to cloud and, if so, what roadblocks or challenges they encountered.

Our co-host @VMJedi shared that flexible scalability “in house” is becoming increasingly difficult without the agility to maneuver changes rapidly. @tcrawford suggested that too many companies look at the move from virtualization to cloud as a tech swap, and in doing so miss core changes and significant opportunities. @kelvinpapp shared a similar sentiment: the biggest challenge is dismissing the perception that cloud equals a loss of control, and he suggested that organizations should instead view cloud as an opportunity. Almost all agreed that one of the main difficulties for companies is finding the opportunity and value in the move from virtualization to cloud.

@davidmdavis then asked participants what exactly is stopping companies from using hybrid cloud. @joshcoen stepped in and answered that sometimes the company environment just does not allow for it – there may be disparate sites and latencies higher than one second.

Security popped up as a roadblock on the move from virtualization to cloud, as well as being a potential issue stopping companies from using hybrid cloud. This brought us to ask what the best practices in approaching cloud security are.

@jgershater noted that security is a shared responsibility – the provider secures the premises and firewall, while the customer secures the app and VM. @kurtmilne brought up how every IT shop tends to think its security is above average and needs a reality check. He went on to say that organizations need to recognize private and public resource pools, and that IT is responsible for many activities that can impact security posture. @Dana_Gardner said that one of the best practices for cloud security is to focus on access control over perimeter control, which @lmacvittie agreed with, also adding app and data control as important focus areas. @jamesurquhart agreed with both, stating to “layer them turtles, but get those turtles talking to each other.”

We then asked participants, “When crafting cloud strategy, how do users decide what to focus their POC on?”

Co-host @VMJedi shared that in eMeter’s own POC, he included security, performance and ease of deployment. @KongYang answered that strategy should always be predicated on solving customer issues and addressing customer needs. He went on to say that the customer should always be top of mind. @Dana_Gardner said that the proof-of-concept should show ROI, saying that he isn’t sure it is a success without a demonstrated and repeatable economic benefit.

Later, we asked how users select cloud providers that align with their cloud vision or strategy.

@lmacvittie said that when selecting a cloud provider, organizations should ask several questions and talk to other organizations using the providers on their list, which @KongYang agreed with. @KongYang also recommended trying before you buy, as well as verifying the SLA before committing. @maishsk cited portability: checking to see how easy it is to move workloads in and out of the cloud.

Co-host @kurtmilne posed one of the final questions, asking what inning we’re in, as far as IT Operations transformation for new SDDC and Cloud Operations practices.

The general populace of the chat seemed to agree that the game is nowhere near finished. @maishsk said we are only in the bottom of the third inning. @shawncarey went as far as to say that the game is just getting started, with players still warming up! @Dana_Gardner agreed with Shawn, saying we’re in the pre-game stage, only getting to the locker room and putting equipment on.

@GeorgeReese even got his two cents in towards the end of #cloudtalk, telling the chat that approval processes kill when it comes to cloud, and that if you need a PO, it isn’t cloud.

Thank you to everybody who listened or participated in our #cloudtalk, and stay tuned for details on our next #cloudtalk! In the meantime, be sure to check out our Google+ Hangout on the Software-Defined Datacenter today at 10am PT! Feel free to tweet us at @vCloud with any questions or feedback!

New VMware Cloud Credits Purchasing Program Provides an Easy On-Ramp to the Cloud

By: Geoff Thompson, Director, VMware Cloud Credit Strategy

Today, we’re excited to announce the new VMware Cloud Credits Purchasing Program – a way for customers to take advantage of public or hybrid cloud as a key component of their comprehensive IT strategy and work with VMware vCloud Service Providers more effectively.

Customers can purchase VMware Cloud Credits from VMware Solution Provider Partners and redeem them over time with approved VMware Service Provider Partners. Through the VMware Cloud Credits Program, customers work closely with their Solution Provider partner to identify potential workloads and estimate the credits required to deploy each workload in the cloud via an authorized VMware vCloud Service Provider Partner. The program enables customers to apply their credits as needed based on business requirements, and provides a very effective mechanism to control cloud spend and manage their providers. In addition, VMware Cloud Credits are managed inside MyVMware, giving customers a single pane of glass to view their perpetual licenses along with all public and hybrid cloud spend.

VMware Cloud Credits are redeemable for Infrastructure-as-a-Service (IaaS) offerings from approved VMware vCloud Service Providers – redeemable IaaS offerings include:

  • IaaS Compute Services – CPU, RAM and Storage for each VM;
  • IaaS Networking Services – IP addresses and bandwidth for each VM;
  • Operating System Licensing – OS license for each VM;
  • Network & Security Software Add-ons – Software firewalls, load balancers and anti-virus;
  • IaaS Monitoring and Support – Packaged support for IaaS solution.

VMware Cloud Credits benefit VMware Solution Provider Partners by enabling them to strengthen their strategic value, and offer a full range of cloud solutions to augment perpetual license sales offerings. Additionally, vCloud Service Providers and Solution Providers can take advantage of VMware Cloud Credits to help deepen their relationships with existing customers by working together to identify potential public or hybrid cloud workloads, as well as open up avenues for new customers through additional products and value-added cloud services.

Check out the Cloud Credits page for more information. For future updates, be sure to follow @vCloud and @VMwareSP on Twitter!

Demystifying the Software-Defined Datacenter – Join the Google+ Hangout!

Looking for new ways to transform and empower your organization’s IT department? The concept of the software-defined datacenter (SDDC) promises to abstract the datacenter from its underlying hardware – thereby enabling your IT department to connect and configure computing resources in new, powerful ways.

But what does this mean for you? Join VMware’s Google+ Hangout this Thursday, February 28th at 10am PT, as our panel of experts discusses the obstacles, drawbacks and opportunities companies and users may face as they make the leap beyond virtualization and cloud to a software-defined datacenter.

Other topics we’ll be covering during the chat include:

  • How to build the foundation for SDDC
  • Common obstacles to avoid
  • Strategies for realizing the full benefits of cloud
  • Predictions for how SDDC will evolve and impact IT in the future

For IT professionals preparing to redefine the way IT delivers services, this Google+ Hangout will help illustrate how the SDDC delivers greater agility, speed and innovation while positioning IT as an innovative business unit.

A Google+ account is necessary to post questions to the panel; however, you can still watch the live video stream without one. Click here to sign up for an account.

More on our panel of experts…

Moderator:

JJ Digeronimo is a tech executive, entrepreneur and author. She is currently a strategic initiator in cloud computing and software-defined datacenters. She is a multifaceted talent with a passion for technology, which enables her to quickly align business obstacles with solutions that encompass skilled people, quality technology and redefined processes.

The panel of experts will include:

Jeff Byrne is a Senior Analyst and Consultant at Taneja Group and previously served as VP of Marketing and Corporate Strategy at VMware. He currently provides consulting to a variety of virtualization, cloud and storage providers in areas such as strategy development, competitive assessments, and go-to-market initiatives.

Michael Leeper is the Director of Global Technology at Columbia Sportswear, one of the customers we featured last year in our “Another VMware Cloud” campaign.

Angelo Luciani is a Network Specialist at The Canadian Depository for Securities Limited and a vExpert, involved in the entire IT value chain of discovery, design and delivery. He possesses a strong and successful background working with stakeholders to develop virtual architecture frameworks that align strategy, processes and IT assets with business goals.

Mark Sarago is a Business Solution Strategist in Accelerate Advisory Services at VMware. Mark has more than 30 years of IT experience. He provides collaborative services to global customers to help them define and communicate their IT strategy with strong alignment to business goals and measures.

We hope to see you there! Be sure to register for the Google+ Hangout, and follow @vCloud and @VMwareSP for future updates!

Device Management With Puppet

By: Nan Liu, Senior Systems Engineer at VMware.

This is a repost from Nan’s personal blog.

In Puppet 2.7, one of the new features added was device management. In this initial release, only a small number of Cisco switches were supported. Overall the capabilities weren’t really significant; however, the concept shifted people’s perception of the boundaries of configuration management. All of a sudden Puppet didn’t end at the operating system – it extended to black boxes that were previously thought to be a bridge too far.

This slowly spawned a flurry of activity exploring network devices, load-balancers, and storage.

The benefits of having the entire infrastructure automated with a single tool chain under version control are indisputable. A Software-Defined Data Center is not complete until you close the management gaps, whether in your network or your storage. With that said, there are still some limitations with puppet device. Currently, the device command only supports communication with a single device at a time. This is fine when the device is self-contained, but in some instances it’s necessary to interact with a series of devices to perform a meaningful task. For this reason, we developed the transport resource to support multiple connections in a single Puppet execution. This is not a substitute for orchestrating a chain of events, but rather a way to group resources that interact with different devices:


transport { 'ssh':
  username => 'root',
  password => 'p@ss',
  server   => '192.168.1.10',
  # supports connection options from Net::SSH:
  options  => { 'port' => 10022 },
}

transport { 'rest':
  username => 'admin',
  password => 'secret!',
  server   => '192.168.1.11',
}

A transport is shared and reused across several resources, and any custom type/provider can leverage the transport connectivity:


# this is a mockup
remote_service { 'ntp':
  ensure    => running,
  transport => Transport['ssh'],
}

Here’s example debug output showing an ssh connection to 192.168.1.10 being established and reused to perform a series of activities.


debug: PuppetX::Puppetlabs::Transport::Ssh initializing connection to: 192.168.1.10
debug: Executing on 192.168.1.10:
vpxd_servicecfg eula read
debug: Execution result:
VC_EULA_STATUS=1
VC_CFG_RESULT=0

debug: Executing on 192.168.1.10:
vpxd_servicecfg db read
debug: Execution result:
VC_DB_TYPE=embedded
VC_DB_SERVER=
VC_DB_SERVER_PORT=
VC_DB_INSTANCE=
VC_DB_USER=
VC_DB_SCHEMA_VERSION=
VC_CFG_RESULT=0

debug: Executing on 192.168.1.10:
vpxd_servicecfg sso read
debug: Execution result:
SSO_TYPE=embedded
SSO_LS_LOCATION=https://192.168.1.10:7444/lookupservice/sdk
SSO_DB_TYPE=embedded
SSO_DB_SERVER=localhost
SSO_DB_SERVER_PORT=5432
SSO_DB_INSTANCE=ssodb
SSO_DB_USER=ssod
SSO_DB_PASSWORD=
VC_CFG_RESULT=0

debug: Executing on 192.168.1.10:
vpxd_servicecfg service status
debug: Execution result:
VC_SERVICE_STATUS=1
VC_CFG_RESULT=0

debug: Finishing transaction 2205823080
debug: Closing PuppetX::Puppetlabs::Transport::Ssh connection to: 192.168.1.10
debug: Storing state
debug: Stored state in 0.07 seconds
notice: Finished catalog run in 2.90 seconds

In the example above, a single ssh connection was established at the beginning and closed at the end of the session. This was one of the roadblocks Jeremy Schulman brought up when developing NETCONF sessions for network devices. Originally I suggested a resource responsible for closing the connections, with automatic dependencies on the correct network resources, since there was no API to do this in the provider2. However, this doesn’t take resource failures into consideration, which results in dangling open connections. Ultimately, I went back to Puppet::Transaction and added the connection cleanup at the end of the evaluate method:


module Puppet
  class Transaction
    alias_method :evaluate_original, :evaluate

    def evaluate
      evaluate_original
      PuppetX::Puppetlabs::Transport.cleanup
    end
  end
end

This isn’t the cleanest solution, since it monkey patches Puppet’s internals and does not trap user termination of Puppet, but it is more reliable than depending on every resource succeeding. Thanks again to Jeremy’s original work with transaction.rb, which led me down this path. For now this is a workaround while we wait for Puppet Labs to come up with a formal solution to #3946.
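One refinement worth considering (a sketch of mine, not part of the patch above) is to put the cleanup in an ensure clause, so it also runs when a resource raises mid-transaction. The Transport and Transaction classes below are simplified stand-ins for the real Puppet internals, used only to demonstrate the alias-and-ensure pattern:

```ruby
# Sketch: the same alias_method monkey patch, but with an ensure clause so
# transport cleanup runs even when resource evaluation raises an exception.
# Transport and Transaction are mock stand-ins, not the real Puppet classes.
module Transport
  @open = []
  class << self
    attr_reader :open

    def connect(name)
      @open << name
    end

    def cleanup
      @open.clear
    end
  end
end

class Transaction
  def evaluate
    Transport.connect('ssh')
    raise 'resource failure' # simulate a resource failing mid-run
  end

  alias_method :evaluate_original, :evaluate

  def evaluate
    evaluate_original
  ensure
    Transport.cleanup # runs on success and failure alike
  end
end

begin
  Transaction.new.evaluate
rescue RuntimeError => e
  puts "run failed (#{e.message}), but connections were still closed"
end
```

The failure still propagates to the caller, but no connection is left dangling.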

Next week at VMware Partner Exchange (PEX 2013), Nick Weaver and Carl Caum will present more about Puppet and how it works with VMware products in session VPN1298. This is just a sneak preview, so please attend to get the complete picture. Stay tuned – I will provide more technical details about what we are using Puppet for next week, after the PEX announcement.

  1. Juniper has gone as far as embedding a Puppet agent in the device.

  2. See #3946. Work on Puppet long enough and you’ll find tickets indicating Dan has explored these dark corners before anyone else.

Take Charge at VMware PEX 2013!

One of the most anticipated VMware events of the year has arrived – VMware Partner Exchange (PEX) 2013! This year, the theme of PEX is “Take Charge”. From taking charge of your business and learning how to expand it by cross-selling VMware solutions and services, to taking charge of your future and exploring best practices for acquiring new customers, this year’s PEX is gearing up to be an informative opportunity for our VMware Service Providers and Partners alike. We hope you “Take Charge” of all the fantastic opportunities and knowledge you can gain from this year’s VMware Partner Exchange!

Solutions Exchange

Drop by the Solutions Exchange to network with peers and VMware experts face-to-face!

Hours for the Solutions Exchange:
Monday, February 25, 2013 – 5:00pm to 7:00pm
Tuesday, February 26, 2013 – 11:00am to 6:00pm
Wednesday, February 27, 2013 – 11:00am to 6:00pm

Be sure to also visit our vCloud Service Provider partners in the Service Provider Pavilion!

  • AirVM Inc. – Booth #V7
  • EarthLink – Booth #V5
  • Hosting – Booth #V2
  • Iland Internet Solutions – Booth #V11
  • PeakColo – Booth #100
  • Savvis – Booth #302
  • US Signal Company – Booth #V8
  • Verizon Terremark – Booth #V12

Must-See Sessions

Along with the general information sessions about VMware, here are the must-see sessions for vCloud Service Providers:

VSPP Sales Presentations

CI1511 – Disaster Recovery to the Cloud – Solutions for Service Providers
Thursday, February 28, 10:15-11:15am

VMware’s Business Continuity/Disaster Recovery solution provides Service Providers and Partners the path to help customers take the first step towards a Public Cloud.

VPN1307 – How to Sell and Deliver Services Powered by VMware vCloud
Wednesday, February 27, 11am-12pm

This session covers the VMware cloud strategy and the opportunities for service providers and partners to build a business on VMware’s vCloud Services platform.

VPN1424 – White labeling vCloud Services with VSPP Partners
Thursday, February 28, 10:15-11:15am

The hybrid/public cloud is rapidly changing the landscape for the VMware partner community, in particular for our Solution Providers and Distributors. To enter the cloud market, these partners need to decide whether to build a cloud infrastructure themselves or white label one from our existing vCloud Service Providers. White labeling is often the best and quickest option, as partners can take advantage of the Service Provider’s core business and add value and services on top of it.

VSPP Technical Presentations

CI1149 – Virtualizing Business Critical Applications – Best Practices for vSphere
Wednesday, February 27, 11am-12pm

Take a vSphere platform deep dive into performance features, tuning and troubleshooting with a VCDX.

CI1411 – vCloud Director 5.1 Install, Configure and Manage Boot Camp
Saturday, February 23, 8:30am-5:30pm
Sunday, February 24, 8:30am-5:30pm
Monday, February 25, 8:30am-5:30pm

VSPP Marketing Presentations

MGT1514 – Increase Margins and Deliver Value Added Services with VMware Management Offerings for Service Providers
Thursday, February 28, 11:30am-12:30pm

This session will give an overview of the exciting new management product offerings available to Service Providers, including a new vCloud Service Provider Bundle.

VPN1298 – Next Generation of Cloud Configuration Automation Using Puppet and VMware
Wednesday, February 27, 2-3pm

Learn how to automatically provision and configure cloud instances from zero to fully operational in minutes, automating VMware infrastructure. This session will also highlight benefits for DevOps including quick deployment of critical updates, like security patches, across hundreds of servers in seconds, and pro-active initiation of Puppet runs to update configurations and report changes.

VPN1299 – Cloud Ignition: Enables Solution Providers to Sell Public and Hybrid vCloud Solutions Through Service Providers
Tuesday, February 26, 3:30-4:30pm

Learn about a new VMware program that will enable Solution Providers to effectively sell public and hybrid cloud services in partnership with VMware’s vCloud Service Provider ecosystem. This session will release details of a program designed to bridge the gap between customer datacenter solutions and public IaaS.

VPN1323 – VMware Service Provider Program Momentum and Expansion
Wednesday, February 27, 12:30-1:30pm

This session is for partners who are new to the public cloud concept as well as partners who are currently in the program and want to hear about our expanding benefits and product portfolio.

Evening Events

PEX is a great opportunity to learn how to identify customer needs effectively or network with VMware experts and executives, but be sure to also take note of all the great evening events taking place at PEX! From Tweetups to the VMware Partner Appreciation Party featuring alt-rock mainstay Third Eye Blind, here’s everything you need to know:

Monday, February 25
5:00-7:00pm: Welcome Reception – Sponsored by EMC
Kick off VMware Partner Exchange 2013 at the Welcome Reception. The Welcome Reception is a great opportunity to explore the Solutions Exchange, check out cool products and solutions, and interact with peers, partners and VMware teams.

Tweetups
Sign up for the #VMwareTweetup, taking place 5:30-7:30pm in the Hang Space of the Solutions Exchange (at the same time as the Welcome Reception), to network with peers and to learn about VMware Link, the new social collaboration platform for VMware Partners! Later, you can also join the #PEXTweetup, an “unofficial” offsite sponsored tweetup for the community.

Tuesday, February 26
4:30-6:30pm: Hall Crawl
Grab a drink and discover new technologies while connecting with new partners and other attendees in the Solutions Exchange!

Wednesday, February 27
7:30-10:30pm: VMware Partner Appreciation Party
Join your colleagues at the Partner Appreciation Lounge in the Mandalay Ballroom! The evening will kick off with the club sounds of DJ Mike Attack and a lounge-style buffet, beer and wine. Then later, Third Eye Blind will take the stage with hits like “Jumper”, “Semi-Charmed Life”, and “Graduate”!

We’re looking forward to VMware Partner Exchange 2013, and we hope you are too! Be sure to follow us at @vCloud and @VMwareSP for future updates!

Maximize Visibility of VMware Data with the vCloud Usage Meter 3.0.2

As a vCloud Service Provider, it’s important to always monitor your VMware product consumption. The vCloud Usage Meter, available as a free virtual appliance to all Service Provider Partners who are reporting on the vCloud Service Provider Bundles, automates the collection of VMware product usage data and makes it simple for Service Providers to gather data and generate reports to support the VSPP billing process.

The latest version of the vCloud Usage Meter (3.0.2) includes increased stability as well as enhancements to integration, monitoring, billing processing, reporting, memory utilization, and performance.  The 3.0.2 patch release adds support for vCenter Operations Manager 5.6 and vCenter Site Recovery Manager 5.1 along with fixes in vCenter Site Recovery Manager 5.0 to address agent based replication errors in some installations.

Learn how to best measure, monitor and report your VMware data with these vCloud Usage Meter How-to videos:

Licenses and Automatic Reporting

This video covers how to create license keys and group license keys under specific sets, as well as how to introduce the three licensing categories (and how to properly set them up to avoid billing discrepancies), and set up an automatic reporting section to automatically generate reports and email them to an aggregator or elsewhere.

Management

Learn how to enter your Service Provider details for generating reports, email reports automatically, add/edit multiple vCenter servers and pair them with Site Recovery Managers as needed, set up a time in the vCloud Integration Manager to pull vRAM usage data from vCenter Servers, and configure an outgoing email server to receive alerts and reports.  

Reporting

Become a reporting guru! Learn how to create and differentiate between billing reports, usage reports, customer summary reports, license summary reports, product reports, and customer product reports.

Working with Customers and Rules 

See how to create customers both manually and by importing information from a selected file, edit and delete customers, create customer rules manually or by filtering and selecting a vSphere inventory object, and update Customer Summary Reports according to the customer rules you’ve set up.

By the time you’re finished with our how-to videos, you’ll be an expert with the vCloud Usage Meter. You can find the how-to videos on the Service Provider Learning Path on Partner Central’s Partner University.

From the Infrastructure as a Service (IaaS) path, choose the System Administrator role then the Service Provider IAAS Usage Meter System Administrator link.  Be sure to click on each of the video links in order to “Add to myLibrary”.  You can then watch the videos by accessing them through the “Enrollments” link in the “My Education” tab.

Click here to download the vCloud Usage Meter 3.0.2 virtual appliance today.  For more details on vCloud Usage Meter 3.0.2, please download our FAQ or visit the vCloud Usage Meter Community.

Be sure to also follow @VMwareSP and @vCloud on Twitter for future updates!

VMware vCenter vs. vCloud Director

By: David Davis

Confused about the difference between vCenter and vCloud Director? You’re not alone. Sure, if you are a vSphere admin at a large enterprise you already know what vCloud Director is and are likely either evaluating it or have implemented it. However, there are many IT pros out there who work at small businesses who are still trying to understand why they need virtualization or, if they are already using vSphere, why they need vCenter. There are many other IT pros at medium-sized companies who already use vCenter but don’t really understand what vCloud Director is or why they need it on top of vCenter.

Comparing vCenter and vCloud Director

Being a visual person, I find VMware’s cloud infrastructure solution diagram best shows where vCenter fits in relative to vCloud Director.

Figure 1 – vCloud Director Solution Diagram

As the diagram shows, vCenter is what manages your vSphere virtual infrastructure hosts, virtual machines, and virtual resources (CPU, memory, storage, and network).

As you can see from the diagram, vCloud Director is at a higher level in the cloud infrastructure. vCloud Director (vCD) can traverse multiple virtual infrastructures and further abstracts the virtual infrastructure resources into software-defined storage, networking, security, and availability.

Users of vCD don’t have to know about vSphere or vCenter. They don’t have to know about hosts, clusters, or resource pools. They don’t have to worry about what other VMs are on their host or if there are cluster resources available to run their VMs. Finally, they don’t have to worry about whether their VMs are secure or if they will be available. All of these things are abstracted away by vCloud Director such that all the vCD user needs to worry about is their organization’s virtual datacenter.

Instead of managing a virtual data center in vCenter (comprised of clusters, resource pools, hosts, and VMs), vCD users manage vCD organizations, made up of organization virtual datacenters, org users, org policies, and org catalogs.

The View From vCenter

To make this comparison another way, look at the typical administrator screen used by an admin in vCenter.

Figure 2 – vCenter Admin View

From this view, you can see virtual data centers (not to be confused with organizational datacenters in vCloud), clusters, hosts, virtual machines, and vApps. However, vCloud Director doesn’t know about any of these things until you tell it. Even then, it won’t go deeply into monitoring and managing these types of objects (as that’s not its job).

The View From vCloud Director

The other side of this proverbial “coin” is vCloud Director. Here’s what it looks like from the perspective of the typical vCD organization admin:

Figure 3 – vCloud Director Organization Admin View 

From this view, the vCD user (not admin, in this case) sees their vApps, organization catalogs, users, and the option to manage their virtual data center.

The vCD admin, on the other hand, has a much larger view, but still from a very different perspective than the vCenter admin:

Figure 4 – vCloud Director Administrator View 

As you can see, the vCD admin sees multiple organizations and can control the cloud resources, including cells, provider vDCs, and networks. Additionally, vCD admins can see vSphere resources, such as vCenter servers, resource pools, hosts, datastores, and port groups, but cannot view or administer those vSphere resources at the level required (and offered by vCenter).

vCenter vs. vCloud Director – Summary

In the end, what you need to know is that vCenter is required to administer your virtual infrastructure, but it doesn’t create a cloud. The first piece required to create your cloud is vCloud Director. vCD will talk to your vCenter server(s), but certain tasks will have to be done first in vCenter, such as creating an HA/DRS cluster, configuring the distributed virtual switch, and adding hosts. Bottom line: to create a cloud you’ll need both vCenter and vCD. Do all companies need vCD? That’s a question for another blog post.

In my opinion, the best way to learn about any tool is to try it! Download the vCloud Suite (including vSphere 5.1, vCenter 5.1, and vCloud Director 5.1) for 60 days at no cost.

For future updates, be sure to follow @vCloud and @VMwareSP on Twitter!

David Davis is a VMware Evangelist and vSphere Video Training Author for TrainSignal. He has achieved CCIE, VCP, CISSP, and vExpert status over his 15+ years in the IT industry. David has authored hundreds of articles on the Internet and nine different video training courses for TrainSignal.com, including the popular vSphere 5 and vCloud Director video training courses. Learn more about David at his blog or on Twitter, and check out a sample of his VMware vSphere video training course from TrainSignal.com.

Customize vCloud Director 5.1 Login Style and Logo

By: Andrea Siviero, PSO Solutions Architect at VMware

On the long journey to getting a “cloud” up and running, the customers I’ve worked with at some point reach the moment when customization of the portal is needed – and of course the first step is putting the company logo on the login page.

With vCloud Director 1.5.x it was an easy step: just upload your image on the branding option and presto, you have your own logo on vCD 1.5 login page, like this:

…with a logo in top left login:

Of course styling has to be completed, but with your own logo you can start feeling at home ;-)

With vCloud Director 5.1, the look and feel has been improved compared to previous releases, but customers I’m working with have found it harder than previous releases to customize the login page, especially the logo.

The first thing that you’ll notice is that changing the logo option in the branding menu will NOT affect the login page, but only the small logo at the bottom of the page AFTER the user has logged on:

The login page itself won’t show your company logo, only the VMware logo in the top left corner:

Customers are of course disappointed and ask, “Where is my logo, dude?”

On the VMware Knowledge Base, you will find 2 CSS files:

  • cloud-director-51-login-template.css
  • cloud-director-51-template.css

So the next step is to start from cloud-director-51-login-template.css. You can upload this as a cloud administrator in the panel shown above. When you are done, the logo and the background should already be gone. You need to close and re-open the browser to see the changes.

I’m not a web designer and this article is not a CSS/HTML tutorial, but I’ll show you how I’ve been able to customize the login page in an easy way and obtain a fully customized login page like this:

I’ve used 2 very simple tools:

  • Firefox with the embedded Web Developer Tools – I’m using the latest version available (18.0.2), but I think any other modern browser has similar tools.
  • TextWrangler (a free text editor with language markup highlighting)

With the Inspect and Style tools of Firefox, you can graphically select objects on the login page and customize their CSS style using the embedded Style option – the changes apply dynamically, so you can see the effect of every setting.

Once happy with the markup, just copy it into your custom CSS file and upload it to vCloud Director using the login page theme option. For example, you can customize the CSS that you previously downloaded from the KB article.

The company logo can be placed on any publicly available web server, so there is no need to modify files on vCloud Director itself – the changes are fully supported and can be reverted with a single click.

So keep calm and let the vCloud styling begin ;-)

Below is a copy and paste of the modified CSS that I created, which produces the customized login page you see above:

The Journey From Virtualization to the Cloud Within a Software Defined Datacenter – Join Us For Our Next #cloudtalk!

Analysts predict that 2013 will see a continued increase in enterprise cloud adoption, but we want to know – is your organization ready to take the leap from virtualization to the cloud? Oftentimes people assume virtualization is synonymous with cloud computing, but virtualization is actually just a crucial first step in achieving the full range of business benefits that cloud computing offers within a Software Defined Datacenter.

For our next #cloudtalk on Tuesday, February 19th at 11am PT, we’d like to invite our service providers, partners, and the larger cloud community to share your personal experiences in moving your organization, or your customers, from virtualization to the cloud within a Software Defined Data Center. What are the main challenges organizations face when moving from virtualization to cloud? What applications are best suited for a cloud environment? We plan to discuss these questions and more during the one-hour chat.

Co-hosting the chat with us will be Kurt Milne (@kurtmilne), Director of Cloud Ops Marketing at VMware and co-author of the book, “Visible Ops Private Cloud: From Virtualization to Private Cloud in 4 Practical Steps,” as well as Bryan Bond (@VMjedi), Senior Systems Administrator at eMeter, one of our Another VMware Cloud customers. Kurt and Bryan will be on the chat to answer questions and share best practices and their personal experiences on moving from virtualization to the cloud.

Here’s how to participate in #cloudtalk:

  • Follow the #cloudtalk hashtag (via TweetChat, TweetGrid, TweetDeck or another Twitter client) and watch the real-time stream.
  • On Tuesday, February 19th at 11am PT, @vCloud will pose a few questions using the #cloudtalk hashtag to get the conversation rolling.
  • Tag your tweets with the #cloudtalk hashtag. @reply other participants and react to their questions, comments, thoughts via #cloudtalk. Engage!
  • #cloudtalk should last about an hour.
  • RSVP for #cloudtalk on our twtvite!

In the meantime, feel free to tweet at us (@vCloud) with any questions. Look forward to having you join us on Tuesday the 19th for #cloudtalk!

Backup and Restore of vCloud Director Consumer Workloads

By: Massimo Re Ferre’, vCloud Architect

This is a repost from Massimo’s personal blog, IT 2.0 – Next Generation IT Infrastructures.

Backup and restore (of consumer workloads) in a vCloud Director environment is a hot topic. When you deal with Pets (vs. Cattle), it is important that you take care of your lovely little friends – that is, your workloads. Part of taking care of them includes backing them up regularly and, more importantly, restoring them when needed.

The industry has achieved a high level of maturity in terms of best practices (and tooling) for backing up and restoring workloads running on vSphere virtual infrastructures. By introducing an additional layer on top of vSphere (vCD), we broke, so to speak, some of those tools and many of the best practices. Even more challenging, we introduced concepts that didn’t exist before in a virtualization scenario (cloud providers and cloud consumers).

People tend to give a crisp yes / no answer when faced with the question “Can you back up and restore workloads running in vCloud Director?” I think the matter is more complex than that. It really boils down to what you want to do (more on this later).

I was tasked (I actually volunteered) to double-click on this. Admittedly, I started this effort with a short-sighted view along the lines of “let’s find out which backup and restore tools integrate with vCloud Director.” As I started to lay out the content, it became very clear that I was chasing micro-details without a clear picture of the potential macro-architectures. So I started to lay out the context, and I thought that making it public would help gather more feedback and valuable input on how to proceed. What you will see next is (more or less) part of the content I am working on. It goes without saying that these are the informal rants of a single cloud architect. This is not a VMware paper (as is) and you shouldn’t refer to it as such when pointing to this blog post.

Introduction to the vCloud Director Storage Layout

The figure below shows a high level view of the vCloud Director storage architecture.

There are a lot of considerations missing in the picture above in terms of how the storage stack is constructed in vCloud Director 5.1 (for example Storage Profiles, Provider vDCs, vSphere clusters, etc.) but there is enough information to describe the backup and restore process (and associated challenges).

First of all, the picture depicts the multi-tenant nature of vCloud Director, where a single datastore/LUN (and host, for that matter) can be securely shared among different tenants (aka organizations).

vCloud Director presents a certain amount of (abstracted) storage to the tenant as a property of the organization vDC (aka Org vDC) the tenant has been assigned to. The tenant can consume that storage by creating VM disks as a property of a VM. The tenant does not care where that abstracted pool of storage resources comes from.

Another important thing to notice in this simplified diagram is the fact that different actors can access the same resources at different levels. For example:

  • A tenant can access and can manipulate resources in its organization vDC whereas a cloud administrator can manipulate all resources across all tenants

  • A tenant can access a file on the VM file system by means of a Guest OS operation whereas a cloud administrator can access the same file mounting the VMDK at the ESXi host level

  • A tenant can perform limited manipulation on VMDK files via the vCloud APIs (e.g. independent disks, new in vCD 5.1) whereas the cloud administrator can fully manipulate them using traditional vSphere mechanisms
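As a concrete illustration of the last point, here is a minimal Python sketch of how a tenant-side script might build the request body for the vCloud API 5.1 independent-disk attach call. The endpoint path and media types shown in the comments are my recollection of the vCD 5.1 API and should be verified against the official API reference before use.

```python
# Sketch: building the request body for the vCloud API 5.1 "attach
# independent disk" operation. Media types and URL paths are assumptions
# based on the vCD 5.1 API docs; verify them against the API reference.
import xml.etree.ElementTree as ET

VCLOUD_NS = "http://www.vmware.com/vcloud/v1.5"

def attach_disk_body(disk_href: str) -> str:
    """Return the DiskAttachOrDetachParams XML for a given independent disk."""
    ET.register_namespace("", VCLOUD_NS)
    params = ET.Element("{%s}DiskAttachOrDetachParams" % VCLOUD_NS)
    ET.SubElement(params, "{%s}Disk" % VCLOUD_NS,
                  {"href": disk_href,
                   "type": "application/vnd.vmware.vcloud.disk+xml"})
    return ET.tostring(params, encoding="unicode")

body = attach_disk_body("https://vcloud.example.com/api/disk/1234")
# This body would then be POSTed to .../vApp/vm-{id}/disk/action/attach with
# Content-Type: application/vnd.vmware.vcloud.diskAttachOrDetachParams+xml
print(body)
```

The detach call takes the same body against a `.../disk/action/detach` URL, which is what makes the attach/detach mirroring pattern in the bullet list above scriptable by the tenant.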

Infrastructure Visibility

This parameter, later used to characterize backup and recovery solutions, describes the level of access a given individual may have in a vCloud Director stack.

vCD uses a role-based model to assign proper rights to users. In the context of this document we will divide the cloud world into two macro roles: providers and consumers.

In vCD language, they are the cloud administrator and the organization administrator.

Note: We will consider roles like vApp user and vApp author to be subsets of the organization administrator role and, as such, to have slightly more limited visibility than the latter. We will simply treat the organization administrator as the cloud consumer.

We introduce here two key concepts in cloud operations. These may be relevant in general for cloud but they are indeed very relevant for vCD cloud deployments.

These concepts are above-water visibility and below-water visibility. The water line alluded to here is the line that separates cloud tenants from cloud administrators.

It is important for cloud administrators and cloud consumers to pay attention to this parameter (visibility), because it determines whether a given backup solution they are (respectively) building or consuming is available out of the box, without customizations, on any vCloud Director deployment.

“Above-water” Visibility

With above-water visibility (or consumer space) we refer to all of the operations that can be performed by a vCD tenant (specifically, by an organization administrator) with an out-of-the-box vCD. The emphasis here is on vanilla and out of the box.

These are all standard operations that any vCD tenant can perform regardless of the vCloud Director implementation (private or public that is).

This is a list of operations that, for example, an organization administrator can do above-water:

  • Creating a “backup server” inside the tenant to back up the files (inside the Guest OS) of the production VMs locally

  • Manually copying vApps either in the same PvDC or in different PvDCs

  • Programmatically copying vApps either in the same PvDC or in different PvDCs

  • Leveraging independent disks to attach / detach VMDK files to stateless VMs

  • Leveraging independent disks (through attach / detach) to create Guest OS mirrors of production VMs.

Many of these approaches are typical of “design for fail” cloud models and don’t usually fly very well with customers with an enterprise mindset.

Also, the lack of an out-of-the-box object storage service in vCD limits the above-water backup and recovery use cases. A workaround is to set up a proxy inside the tenant that can back up to a third-party public object storage service.

For example, an object storage service can be configured as a target in some traditional backup and restore tools, and some third-party public object storage services provide appliances (aka storage gateways) that can act as a proxy between a private set of servers and the public object storage service.

All of the above is considered above-water since this is something the tenant can implement without any interaction with the cloud provider and, more importantly, without any particular vCloud Director customization or extension.

This applies to any vCloud Director based cloud instance.

“Below-water” Visibility

Describing below-water visibility (or provider space) is fairly easy because it is, essentially, full visibility into the cloud stack. It is only available to the cloud administrator and, assuming the vCloud Director administrator also administers the infrastructure underpinning it (which is often the case), it includes visibility into a variety of tools and layers, including, obviously, the vCenter Servers.

The cloud administrator owns the entire stack and can perform any operation at any level in the stack. This is obviously true within the boundaries of what is supported by the integration of the various products in the vCloud Suite.

There are, for example, tasks that the cloud administrator can perform at a lower level but that are not supported, as they may break the layers above. Some of these tasks include (source: vCAT 3.0.2):

  • Editing virtual machine properties

  • Renaming a virtual machine

  • Disabling DRS

  • Deleting or renaming resource pools

  • Changing networking properties

  • Renaming datastores

  • Changing or renaming folders.

In the context of backup and recovery of consumer workloads, operating at this level of the stack requires careful planning by the cloud administrator.

This is a list of operations that, for example, a cloud administrator can theoretically do below-water:

  • Backing up / restoring files inside tenants via VMware VADP

  • Backing up / restoring VMDKs inside tenants via VMware VADP

  • Backing up / restoring VMs inside tenants via VMware VADP

  • Backing up / restoring vCloud vApps inside tenants via VMware VADP

  • Other object manipulations aimed at saving the state of those objects using vCenter administration-level access.

Some of the operations above, particularly the restore of vCloud objects, require particular attention and best practices.

Most vCloud implementations will vary below-water. This is true for many operations, but it is certainly true for backup and recovery. While there is a set of basic core functionalities a cloud admin can perform using VMware tools at this layer, most implementations will be complemented by specific backup and restore software products and, perhaps, particular configurations of those products.

So while we consider the above-water zone to be consistent and standard across all vCloud Director deployments, we anticipate the below-water zone to be specific and peculiar for every deployment.

Backup and Restore levels

This is the second parameter that we will use later to characterize and segment backup and recovery solutions.

This is straightforward and describes the “what” in the backup and restore equation. What objects do tenants need to back up (and be able to restore)?

These objects and levels are discussed below in this section. The following picture summarizes them graphically.

File Level

This is the most atomic object in the cloud consumer space that the tenant may want to (and can) back up and restore. It can’t get more granular than that. There isn’t a lot to say about it: a file inside a Guest OS file system is just a file.

Disk Level

This refers to the VMDK file associated with a given VM. It’s fair to see the VMDK as the drive of the VM. Note that by backing up the VMDK you are essentially backing up the entire on-disk state of that Guest OS. In Microsoft Windows parlance, it’s like backing up the entire C:\ drive.
The relationship between the VMDK and the files discussed above is 1:many.

VM level

This object includes the VMDK content as well as the metadata describing the virtual machine. A VM is really the collection of the content of the (virtual) disks plus the surrounding data that describes the characteristics of the VM (number of vCPUs, amount of memory, number of vNICs, etc.). This information is saved in the .vmx file (which sits next to the VMDK file, in the same folder).
The relationship between the VM and the VMDK can be 1:many (limits apply, albeit it is often 1:1).

vApp level

This object describes the service (or the workload). A vApp is usually referred to as a collection of VMs, but there is more to it than that. A vApp includes information such as vApp networks (and the associated network and security settings), the VMs’ start and stop order, etc.
vCD vApp metadata and vCD VM metadata are also part of the properties of the vApp.
The relationship between the vApp and the VM can be 1:many (limits apply).
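The four levels above and their 1:many relationships can be sketched as a simple data model. This is illustrative Python only (the class and field names are mine, not a VMware API):

```python
# Illustrative model of the backup/restore object levels and their
# 1:many relationships (file -> disk -> VM -> vApp).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vmdk:                  # Disk level: one VMDK holds many files
    name: str
    files: List[str] = field(default_factory=list)

@dataclass
class Vm:                    # VM level: disk content plus .vmx-style metadata
    name: str
    num_vcpus: int
    memory_mb: int
    disks: List[Vmdk] = field(default_factory=list)   # 1:many, often 1:1

@dataclass
class VApp:                  # vApp level: VMs plus vApp networks, start order...
    name: str
    vapp_networks: List[str] = field(default_factory=list)
    vms: List[Vm] = field(default_factory=list)        # 1:many

web = Vm("web01", num_vcpus=2, memory_mb=4096,
         disks=[Vmdk("web01.vmdk", files=["/etc/nginx/nginx.conf"])])
shop = VApp("shop", vapp_networks=["vapp-net-1"], vms=[web])
```

Restoring at any level implicitly restores everything nested below it, which is why the choice of level matters so much when characterizing a solution.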

Managed Service Vs. Self Service

This is the last parameter that we will use to characterize a backup and restore solution for vCloud Director consumer workloads.

At first this may sound like a duplicate of the above-water and below-water segmentation but it is not.

The infrastructure visibility parameter speaks more to the implementation of the cloud environment and the out of the box capabilities.

This segmentation speaks more to the operational aspect of performing backup and recovery of consumer workloads.

While it would be easy to map the above-water concept to self-service and the below-water concept to managed services, the reality may be more complex.

For example a given cloud service provider may offer managed services using above-water capabilities.

Or, even more interesting, a cloud consumer could get a self-service experience built on below-water capabilities (by means of third-party portals or API extensions that the cloud administrator exposes to the tenant and that are not available out of the box with a vanilla vCloud Director setup).

Cloud Provider Managed Service

This is the scenario where the cloud administrator owns the operational aspects of backing up (regularly) and restoring (on an as-needed basis) consumer workloads on behalf of the cloud consumer.

This is true regardless of:

  • Whether the cloud administrator uses an above-water (less likely) or a below-water (more likely) strategy

  • What level of backup and restore is required (file, disk, VM or vApp)

In this scenario the cloud administrator usually has a set of policies in place to back up the consumer workloads (depending on the agreed SLAs), and the cloud administrator’s personnel perform the restores. Depending on the contract in place, this could happen without consumer interaction, or the consumer could trigger the restore by opening a ticket with the cloud service provider.
In this scenario the self-service aspect of cloud is not leveraged.

Cloud Consumer Self Service

In this scenario the tenant is fully in control of the backup and restore operations.

This is true regardless of:

  • Whether the cloud consumer uses an above-water or a below-water strategy

  • What level of backup and restore is required (file, disk, VM or vApp)

There is typically no interaction between the cloud administrator and the tenant and every backup and restore operational aspect is available to the cloud consumer.

Note that the nature of backup operations may vary depending on the implementation details.

For example in an above-water backup and restore strategy the tenants are responsible for building and consuming their own solution.

However, when a tenant is consuming, in self-service, a below-water solution implemented by the cloud service provider, backup operations may be driven by:

  • Pre-defined policies (e.g. all vApps placed in a given virtual datacenter will have a pre-defined backup policy)

  • Self-service policies (e.g. the tenant can interactively assign vApps to particular policies interacting with the cloud via third party service portals or API extensions)

Backup and Restore: Solutions Characterization

Why is this important? Ideally every backup and restore solution we will discuss in the context of this document can be characterized by this triplet we have defined:

  • Where? (above-water or below-water)

  • What? (files, disks, VMs or vApps)

  • Who? (tenant self-service or provider managed services)

The triplet above isn’t meant to describe the inner technical details of any backup and restore product. However, it is very useful to describe the outer characteristics of any backup and restore solution.

Ideally, before talking about the actual implementation, cloud architects should be able to characterize a solution by the where / what / who parameters.

This is true for architects building clouds (e.g. “our vCloud Director based backup and restore strategy will allow tenants to restore VMs and vApps by opening a ticket with us. We will then leverage some of our below-water features not exposed to the tenants”).

Similarly, architects consuming clouds should be able to query potential cloud service providers about their backup and restore services using this framework (e.g. “we are looking for a vCloud Director based service that would allow us to restore files, disks and VMs in self-service leveraging below-water features”).
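The where / what / who triplet can itself be written down as a tiny data model, which makes it easy to compare solutions side by side. This is an illustrative sketch with hypothetical names, not part of any VMware tooling:

```python
# Sketch of the where / what / who characterization triplet.
from dataclasses import dataclass
from enum import Enum

class Where(Enum):
    ABOVE_WATER = "above-water"
    BELOW_WATER = "below-water"

class What(Enum):
    FILE = "file"
    DISK = "disk"
    VM = "VM"
    VAPP = "vApp"

class Who(Enum):
    SELF_SERVICE = "tenant self-service"
    MANAGED = "provider managed service"

@dataclass(frozen=True)
class BackupSolution:
    where: Where
    what: frozenset          # the set of What levels the solution covers
    who: Who

# The provider example quoted above: tenants restore VMs and vApps by
# opening a ticket, backed by below-water features.
managed = BackupSolution(Where.BELOW_WATER,
                         frozenset({What.VM, What.VAPP}),
                         Who.MANAGED)

def covers(sol: BackupSolution, level: What) -> bool:
    return level in sol.what

print(covers(managed, What.VM))      # True
print(covers(managed, What.FILE))    # False
```

A consumer's requirement ("files, disks and VMs in self-service") then becomes a simple membership check against a provider's advertised triplet.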

Note that, for the most part, the infrastructure visibility aspect (below-water, above-water) isn’t something a consumer would usually call out as a “requirement.” Ideally the consumer would always want something to be above-water, because that means the solution could be implemented on any vCD-based cloud should they choose another cloud provider. The reason a tenant may specifically ask for below-water functionality is that they have enough know-how of the vCloud stack to require a particular, more efficient solution than what they could achieve above-water.

In summary, we have introduced the concepts of above-water and below-water visibility.

We have then introduced the list of objects that could be a target for backup and restore operations.

Last but not least we have introduced the notion of self-service and managed services.

The following picture represents a self-service solution.

The following picture represents a managed services solution.

That’s all (I can disclose). This is the framework I have been working on lately. As often happens to me, I can’t tackle a very simple problem without having to put it into the bigger picture to contextualize it. Sorry about that.

While I do understand that many people are interested in “does backup product xyz talk to the vCloud APIs”, I fear a simple yes or no doesn’t cut it and doesn’t put those people in a position to build a proper backup and restore solution for their vCloud Director based cloud.

Now, the next challenge is how to lay out (in a meaningful way) the research and unstructured work I have been doing to double-click on actual solutions. What I have in mind right now (subject to change) is to describe in greater detail a certain number of solutions and architectures (4? 6? 10?) that could be considered the most common and best practices, and characterize each of them with the where / what / who framework discussed above.

This would let VMware customers and partners come up with their own additional solutions / combinations that they could characterize with the same framework. Just a thought at the moment.

Any comment or feedback that you may have, I am all ears.

Massimo.

Massimo currently works at VMware as a Staff Systems Engineer, vCloud Architect. He works with Service Providers and Outsourcers to help them shape their Public Cloud services roadmap based on VMware cloud technologies. Massimo also blogs about Next Generation IT Infrastructures on his personal blog, IT 2.0.