
vCAC Property Dictionary: Customize Service Requests with Dynamic Menus

[originally posted on virtualjad.com]

In a previous post I discussed the benefits of using vCloud Automation Center's Property Dictionary to add input options during the application request process. This is one of the quickest ways to add some flair (and serious functionality) to the application request and gives users a little more granularity in the service selection process. The Property Dictionary – and custom properties in general – also helps drive down the number of Blueprints thanks to the logic that can be baked right into the process.

Let’s review (from previous post)

In addition to creating a custom property, which can trigger external actions (workflows), you can also create a property definition that utilizes vCAC's built-in reserved custom properties, which can take the user's input and apply it to the existing custom property as an answer file of sorts. For example, a drop-down list that presents the networks available to a given Provisioning Group and allows users to select a preferred network. The property dictionary can also be used to build relationships between parent and child definitions to provide more dynamic and nested functionality — the user selects a location ("Datacenter A", parent) and, based on that selection, only appropriate networks ("NetA", "NetB", "NetC", children) dynamically become available. The result is an application that gets deployed to Datacenter A using Network B. Throw a storage selection option in there with the same Datacenter relationship rule and now you've got a fine balance of policy-based controls and a dynamic user experience.

Use the step-by-step instructions in this post to build this exact use case in your implementation of vCloud Automation Center. The assumption here is that you've got a working vCAC environment. If not, start with my vCAC 5.1 Detailed Installation Guide. Also, these steps are all built on version 5.1 but will also work on 5.2. Here's a shot of our starting point – plain old "out of the box" request…

…continue reading at virtualjad.com

vCAC and it’s Security Capabilities

[originally posted on www.commondenial.com]

I believe that vCAC is one of the ways of putting the "yes" back into innovation, versus the "no" that always seems to come out of security people. Innovation gets thwarted because we feel like systems are out of our control, and when they are, we don't know what they are doing. We have to ask the network team to give us insight. Now, with vCAC, we (the security people) get that security back because we can establish governance and control ourselves.

Governance in itself provides control, but by adding the ability to require approvals, enforce separation of duties, and limit the actions available on individual and multi-machine systems, you gain even more control. You have the ability to implement your corporate and IT policies within vCAC, and that is powerful. The pain with security in the IT realm (yes, in IT, not in the security department) is that they don't want us in there. They feel like we are smothering them. With vCAC, we can take some of that pain away. We now have the ability to work together to develop the right systems.

Just one of the many examples is the set of security attributes added to the machines. These actions can be defined on an individual basis. The screenshot below identifies some of the operations that can be run against the virtual blueprint. As you can see, it gives a lot of options and takes away a lot of options.

Now some may think that those options are just plain security, and I get that. Truly I do, but this isn't RBAC; these are operations you can take against the virtual machine. This goes deeper, making the virtual machine(s) the "identity". You can't avoid the governance and the control. You can't ignore the fact that I can provide a limited or large number of systems that I have "blessed" to specific groups of people and allow them to request those systems themselves. If they want to make CPU, memory, and/or storage changes, I can provide that. If I want the requests to be approved, I can provide that. If I want to reclaim those machines, I can do that. From a security viewpoint, I feel vCAC can do so much more than people give it credit for. This is how we start bridging the gap between IT and security; this is how we bring them together.
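
To make that concrete, here is a conceptual sketch in plain PowerShell (not vCAC's actual policy engine or API; the group names and operation names are hypothetical) of the kind of per-group entitlement and approval logic being described:

# Conceptual sketch of entitlement-style governance -- not vCAC code.
# Group names and operation names are hypothetical.
$entitlements = @{
    'Developers' = @{
        AllowedOperations = @('PowerOn', 'PowerOff', 'Reboot', 'CreateSnapshot')
        RequiresApproval  = $false
    }
    'QA' = @{
        AllowedOperations = @('PowerOn', 'PowerOff', 'Reconfigure')
        RequiresApproval  = $true    # CPU/memory/storage changes route to an approver first
    }
}

function Test-OperationAllowed {
    param(
        [string]$Group,
        [string]$Operation
    )

    $policy = $entitlements[$Group]
    if (-not $policy) { return "Denied: group '$Group' has no entitlements" }
    if ($policy.AllowedOperations -notcontains $Operation) { return "Denied: '$Operation' is not entitled for '$Group'" }
    if ($policy.RequiresApproval) { return "Allowed: '$Operation' is queued for approval" }
    return "Allowed: '$Operation' runs immediately"
}

# A QA user asking to reconfigure a machine: entitled, but routed through approval.
Test-OperationAllowed -Group 'QA' -Operation 'Reconfigure'

The point isn't the code; it's that the virtual machine becomes the object the policy is written against, which is exactly the "identity" idea above.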

Follow me on twitter @banksek

Free vCloud Networking and Security Training

[originally posted on commondenial.com]

I wanted to make sure that everyone was aware of the free one-hour training that VMware is providing on vCloud Networking and Security. It is a great opportunity to check out and understand the product.

Overview of class: The vCloud Networking and Security Fundamentals course covers VMware's vision of a Software Defined Datacenter (SDDC) and explains key features of the vCloud Networking and Security Suite. This course also examines various methods for implementing vCloud Networking and Security.

Objective of class:

  • Communicate VMware’s vision of a SDDC
  • Explain workload networking and security requirements
  • Describe vCloud Networking and Security components
  • Examine vCloud Networking and Security implementations

Outline of class:

vCloud Networking and Security Vision describes the challenges that vCloud Networking and Security addresses, its key concepts, and the customer benefits.

vCloud Networking and Security Overview explains workload networking and security requirements, describes vCloud Networking and Security components, and explains vCloud Networking and Security purchasing options.

vCloud Networking and Security Customer Use Cases examines how customers implemented vCloud Networking and Security into their environments.

ENROLL: http://mylearn.vmware.com/mgrReg/courses.cfm?ui=www_edu&a=det&id_course=173000

Use vCloud Automation Center’s Property Dictionary to Customize Service Requests

[originally posted on virtualjad.com]

As I’ve eluded to on more than one occasion, VMware’s vCloud Automation Center (vCAC) is more than just a cloud portal. It is a solution designed to take defined business policy and requirements and apply them to the underlying IT systems, providing a governance model that delivers infrastructure-as-a-service (IaaS) with business agility in mind. Once defined, those policies are applied to vCAC’s individual policy definitions to build a “mesh policy” that provide the governance and controls for self-service, automation, and lifecycle management. The result is a finely-tuned service deployment model that defines the applications (blueprints), where they can be deployed, who can deploy them, and under which circumstances they are (or aren’t) allowed to be deployed. More than just a cloud portal.

vCAC 5.1 provides a ton of this capability "out of the box", but you can also layer on a tremendous amount of additional functionality using built-in control concepts, custom properties, and native integration with external tools such as PowerShell, vCenter Orchestrator (vCO), and others. The possibilities are immense. Those of you who are familiar with vCO will immediately realize the power of that last statement. If you're not familiar with vCO, you should stop reading this, download/deploy the vCO appliance, and make it your best friend…then come back and finish reading. Any workflow available in vCO can be initiated during a vCAC service request. vCAC's extensibility options — utilizing the built-in Design Center and/or Cloud Development Kit (CDK) add-on — take it to a whole other level of customization and automation. Well-defined use cases and a solid implementation strategy are key when you head down the extensibility path. I will cover more on extensibility and custom use cases in future posts. For now, I'm going to focus on one of vCAC's built-in concepts that can be used to customize service provisioning options, reduce the number of managed objects (blueprints), and add a nice touch to the user experience…with as few point-and-clicks as possible! What I'm referring to is vCAC's built-in Property Dictionary feature.

The Property Dictionary

From the vCAC 5.1 What’s New Guide (p. 2-77):

The property dictionary feature, introduced in release 4.5, enables an enterprise administrator to provide a more robust user interface for custom properties that a machine owner enters at request time.

Properties are used throughout the product to provide settings for many features. When users request new machines they are prompted for any required properties. Enterprise administrators or provisioning group managers designate which properties are required by selecting the Prompt User option on the blueprint or build profile. By default, the Confirm Machine Request page displays the literal name of the property as a required text box and does not provide any validation other than that a value has been entered.

The property dictionary allows you to define characteristics of properties that are used to tailor the behavior of the request user interface…

(give the “what’s-new” guide a read if you haven’t done so already)

You use the Property Dictionary function to build a Property Definition, which is the logic behind each action. Property definitions can be created for custom properties that require user input during the service request process and can, for example, trigger an external action (e.g. a workflow) that completes a given set of tasks and reports back to vCAC when finished. Can you say "Software-Defined Datacenter"?

Some additional uses of the Property Dictionary include:

  • Allowing users to select specific resources that are otherwise hidden (e.g. overriding resource reservation policies to allow users to select a specific datastore, network, or cluster)
  • Creating property names and descriptions that make sense and can be read in plain English
  • Adding pop-up tool tips to explain each required item
  • Customizing the order in which required fields are displayed
  • Making an otherwise required field no longer required

You can also create property definitions that utilize vCAC's built-in reserved custom properties, which can take the user's input (or selection) and apply it to the existing custom property as an answer file of sorts. For example, you can define a drop-down menu that lists all the networks available to a given Provisioning Group (via that group's resource reservation) and allow the user to select a preferred network. Once the request is approved, that application is deployed to the selected network. You can also build relationships between parent and child definitions to provide more dynamic and nested functionality — the user selects a datacenter ("Datacenter A", parent) and, based on that selection, only appropriate networks ("NetA", "NetB", "NetC", children) become available. The result is an application that gets deployed to Datacenter A using Network B. Throw a storage selection option in there with the same Datacenter relationship rule and now you've got a fine balance of policy-based controls and a dynamic user experience.
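
To make the parent/child relationship concrete, here is a small conceptual sketch in PowerShell (not vCAC code; the relationship itself is defined in the Property Dictionary, and the datacenter and network names are hypothetical) of the filtering logic the request form ends up performing:

# Conceptual sketch of the parent/child filtering behavior -- not vCAC code.
# Datacenter and network names are hypothetical.
$networksByDatacenter = @{
    'DatacenterA' = @('NetA', 'NetB', 'NetC')
    'DatacenterB' = @('NetD', 'NetE', 'NetF')
}

function Get-ChildNetworkOptions {
    param(
        [Parameter(Mandatory = $true)]
        [string]$Datacenter    # value of the parent property selected by the requester
    )

    if (-not $networksByDatacenter.ContainsKey($Datacenter)) {
        throw "No networks are published for datacenter '$Datacenter'."
    }

    # Only these values should appear in the child drop-down.
    return $networksByDatacenter[$Datacenter]
}

# A requester picks DatacenterA, so only NetA, NetB, and NetC are offered.
Get-ChildNetworkOptions -Datacenter 'DatacenterA'

Whatever the user ultimately selects is written to the machine's reserved custom property, which is the "answer file" behavior described above.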

Sounds like a good use case to me! — my next post will provide detailed configuration steps for enabling this exact scenario.  Stay tuned…

++++
@virtualjad

vCloud Suite 5.1 Solution Upgrade Guide

By now you’ve probably heard all the hype around the 5.1 releases of VMware’s vSphere and vCloud platforms – and the vCloud 5.1 Suite, which bundles the latest versions of several VMware key IaaS-focused technologies and delivers a comprehensive cloud solution.  The suite comes in 3 flavors – Standard, Advanced, and Enterprise.

If you’re an existing (active) customer of any of these products, there’s an upgrade and/or entitlement path to the suite for you – and it’s highly recommended that you take advantage of it.  Or, at the very least, you can upgrade your individual products to 5.1 as you ponder the suite.  Whether or not you choose to upgrade and take advantage of the latest and greatest features is up to you.  But if you’re looking for increased scale, performance, efficiency, and capability while taking advantage of end-to-end advancements in VMware’s leading cloud technologies, then I would place upgrade at the top of your to-do list.  (some of my peers suggest I’m drinking the Kool-Aid via fire hose….really?).  Learn more about the suite here: http://www.vmware.com/products/datacenter-virtualization/vcloud-suite/overview.html.

The attached guide will walk you through, in detail, the upgrade steps and procedures for moving to vCloud Suite 5.1.

Upgrade Overview

Speaking of upgrades – and to get back on topic – I thought it would be beneficial to publish a how-to guide of sorts to help with upgrading from previous versions of the core infrastructure stack to version 5.1, taking into consideration the many co-dependencies of an active cloud deployment (VMware's pubs and guides cover the process for individual products in plenty of detail, but not so much as a whole solution…yet).

I’ll specifically focus on upgrading from previous (pre-5.1) versions to 5.1.  The approach will go something like this (in this order):

  1. vCloud Director 1.5.x -> vCloud Director 5.1
  2. vShield Manager 5.0.x -> vCloud Networking & Security 5.1
  3. vCenter 5.0.x (windows) -> vCenter 5.1 + required add-ons
  4. vSphere (ESXi) 5.0.x -> ESXi 5.1
  5. vShield Edge (vSE’s) 5.0.x -> Edge Gateway 5.1

Note: Many issues encountered during the upgrade are attributable to a lack of planning, upgrading components out of order, or skipping steps.  To ensure a successful upgrade and continuity of services, it is critical that the steps highlighted in this document are followed closely.  In other words, avoid shortcuts!
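
Before you start, it's also worth taking a quick inventory of what you're actually running so you can confirm your starting point against the order above. Here is a minimal PowerCLI sketch; the vCenter name is a placeholder:

# Minimal pre-flight inventory sketch (PowerCLI). The vCenter name is a placeholder.
Connect-VIServer -Server 'vcenter01.lab.local'

# Current vCenter version and build
$global:DefaultVIServer | Select-Object Name, Version, Build

# Current ESXi host versions and builds
Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name

Disconnect-VIServer -Confirm:$false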

Things to Consider

Before we get started, let's set expectations and discuss some caveats.  At first glance, upgrading to a "dot" release doesn't seem that significant, but if you have followed VMware's versioning strategy in the past, you'll know that a ".1" release is typically a major update that adds a significant set of capabilities and functionality.  This upgrade is no exception.  And with that come several considerations…

  • Take advantage of snapshots – take one of every VM you're touching and make sure you have good backups of the configs and any associated databases (a PowerCLI sketch follows this list).
  • Understand the implications of upgrading your vCenter server, especially in environments that have other products and 3rd-party solutions installed that depend on it (see: VMware View).
  • If you plan on migrating from vCenter Server on Windows to the vCenter Server Virtual Appliance (VCVA), this guide isn’t going to help you much.  The upgrade procedure to follow is for a Windows-installed vCenter.  But, by all means, download the VCVA and give it a run – works great.  Just note that you can’t currently migrate from one platform to another.
  • vCenter Server 5.1 adds a significant set of new features, some of which will require special attention during this upgrade…specifically the new single sign-on (SSO) function.  To ensure the upgrade goes smoothly, be sure to follow the installation steps IN ORDER.  This ensures all service dependencies will be in place as new features are installed.
  • vCloud Director 5.1 is backwards compatible with vSphere 5.0.x, but not the other way around.  You can upgrade vCD and sub-components now and wait to bring vCenter and ESXi up to 5.1 later…just understand this may limit some of the new features in vCD that depend on vSphere 5.1.  See VMware's compatibility matrix for more details: http://partnerweb.vmware.com/comp_guide2/sim/interop_matrix.php
  • The upgrade procedure for vCloud Director 1.5.x highlighted in this doc assumes a single instance (cell) is installed.  As such, upgrading the only cell will result in a vCD outage (not the running vApps, just vCD UI access).  See VMware’s guide, vcd_51_install.pdf, to upgrade a multi-cell environment…it’s just a few extra steps.
  • The new version of vCloud Connector (2.0) is not yet available (as of this writing).  While the majority of the cloud suite’s components have been upgraded and converged on 5.1 versioning, the latest vCC 2.0 appliance is expected to go live by the end of the year.  If you’re currently using vCC 1.x, upgrading to vCD 5.1 may break it.  Stay tuned for the 2.0 release if this is something you depend on.
  • If you’re running vCenter Operations (vCOps) 5.0.x and are planning on upgrading the rest of your environment, you might as well take the time to update your appliance to version 5.0.3 to take advantage of some minor new enhancements that will compliment the vCloud 5.1 suite – vCenter Operations Suite 5.6 was announced at VMworld Barcelona and will be available for download/upgrade soon.

Now that we’ve got that out of the way, let’s get upgrading!  Upgrading to 5.1 is not difficult, but it does take some planning, cautions (see above), and an organized approach to ensure all goes well…especially in a production environment.  Speaking of that, here’s my disclaimer:

DISCLAIMER: This document is not an ‘official’ VMware publication, nor are the author (that’s me) or VMware responsible for any outcomes, outages, late nights in the datacenter, or complete system meltdowns.  This document and its forthcoming upgrade procedures were created as reference material to help you get your environment upgraded so you can enjoy all the wonders of the vCloud 5.1 Suite.  As a precaution, do this in a test/dev environment prior to attempting the process in a production deployment.  On the other hand, please feel free to share your successful upgrade stories!

Download the full guide here: vCloud 5.1 Suite Solution Upgrade Guide v1.1

++++

@virtualjad


Connecting Clouds

For those organizations on the journey of transforming their datacenters to meet the demands of a modern IT consumption model, it's easy to envision what cloud euphoria could/should look like.  That's mostly because vision is quite cheap – all it takes is a little imagination (maybe), a few Google queries, several visits by your favorite vendor(s), and perhaps a top-down mandate or two.  The problem is that execution can break the bank if the vision is not in line with the organization's core objectives.  It's easy to get carried away in the planning stages with all the options, gizmos, and cloudy widgets out there – often delaying the project and creating budget shortfalls.  Cloud:Fail.  But this journey doesn't have to be difficult (or horrendously expensive).  Finding the right solution is half the battle…just don't go gluing together several disparate products that were never intended to commingle, burning time and money trying to integrate them.  Sure, you might eventually achieve something that resembles a cloud, but you're guaranteed to hit several unnecessary pain points on the way.

Of course I’m not suggesting putting all your eggs in one vendor’s basket guarantees success.  Nor am I suggesting that VMware’s basket is the only one that provides everything you’ll ever need for a successful cloud deployment.  In fact, VMware prides itself with an enormous (and growing) partner ecosystem that provides unique approaches and technologies to cloudy problems and beyond.  What I am suggesting, however, is the need to pick and choose wisely.  Well integrated clouds = well functioning clouds = happy clouds and happy customers.  Integration means common frameworks and interfaces, extensible API’s, automation via orchestration, app portability across clouds, and technologies that are purpose-built for the job(s) at hand.  And as a bonus, integration can mean leveraging what you already have – an infrastructure awaiting the transformation of a lifetime.  That’s right, the cloud journey should not be a rip-and-replace proposition.

There’s another major component to this – while I spend the majority of my time helping organizations and federal agencies adopt the cloud and transform their infrastructures, there’s often something else on the customer’s mind that can’t be ignored.  It’s a long-term strategy delivered in nine datacenter-shattering words: “I want to get out of the infrastructure business”.   I’m hearing this more often than not and it cannot be ignored.  What they are referring to is the need to eventually shift workloads to public clouds rather than continue to invest in their own infrastructures.  This strategy makes perfect sense.  As the adoption of public cloud services increases, more and more CIO’s are finding new comfort levels in handing over their apps and workloads to trusted cloud providers, albeit slowly.  But this also introduces new challenges.  How does an organization well on its way to delivering an enterprise/private cloud to the business ensure that future adoption of public clouds does not mean starting from scratch?  What about managing and securing those workloads just as you would in the private cloud?  Public cloud providers need to be an extension of your private cloud, giving you the freedom of application placement, the ability to migrate workloads back and forth, and providing single-pane-of-glass visibility into all workloads and all clouds.  This endeavor requires the right planning, tools, and frameworks to be successful.

Here are the top “asks” from customers currently on, or getting ready to start, this journey (in no particular order):

  • Private cloud now…public cloud later (or both…now)
  • Workload portability (across clouds / cloud providers)
  • A holistic management approach
  • End-to-end visibility
  • Dynamic security
  • Cloud-worthy scalability

If any of this is resonating, then you're probably in a similar situation.  CIOs are pushing the deployment of private clouds while simultaneously considering public cloud options.  Therefore the solution needs to deliver everything we know and love about the private cloud while laying down the framework for public cloud expansion.  The problem is that not many solutions out there can do this.  Public cloud providers often run proprietary frameworks and management tools to keep costs low, and private cloud solutions are generally focused on just that (being private).

Enter VMware.

VMware has put a lot of effort into leveraging the success of vSphere – the cloud's critical foundation – to take a controlling lead up the software stack and deliver a cloud solution for both private and public (i.e. hybrid) clouds.  And through the VMware Service Provider Program (VSPP), it has also enabled a new generation of cloud service providers that build their offerings using the same vCloud frameworks available to enterprises.  As a result, each and every one of these vCloud-powered service providers instantly becomes a possible extension of your private cloud, placing the power of the hybrid cloud – and all the "asks" above – at your fingertips.

Here’s what that looks like from a 1,00ft view…

CIM Stack

Let's review this diagram:

1 – Physical Infrastructure: commodity compute, storage, and network infrastructure.

2 – vSphere Virtualization: hardware abstraction layer and cloud foundation.  Delivers physical compute, storage, and networks as resource pools, datastores, and portgroups (or dvPortgroups).

3 – Provider Virtual Datacenter (PvDC) and Organizational Virtual Datacenter (OvDC): delivered by vCloud Director as the first layer of cloud abstraction.  Resources are simply consumed as capacity and delivered on demand.

4 – vCenter Orchestrator: key technology for cloud integration, automation, and orchestration across native and 3rd-party solutions.

5 – vCenter Operations: holistic management framework for visibility into performance, capacity, compliance, and overall health.

6 – Security & Compliance: dynamic, policy-based security and compliance tools across clouds using vShield Edge and vCenter Configuration Manager (vCM).

7 – VMware Service Manager for Cloud Provisioning (VSM-CP): self-service web portal and business process engine tying it all together.  Integrates with vCO for mega automation.

8 – vCloud Connector (vCC): single-pane-of-glass control of clouds and workloads.  Enables workload portability to/from private and public vClouds and traditional vSphere environments.

Last but not least is the very important question of "openness" in the cloud (don't get me started on heterogeneous hypervisors!).  VMware spearheaded the OVF standard several years ago, which has been adopted by the industry as a whole as a means of migrating vSphere-based workloads to non-vSphere hypervisors (and the clouds above them) with metadata intact.  In fact, OVF remains a key technology in hybrid cloud scenarios and is an integral part of workload portability across clouds.  OVF gives customers the ability to move workloads in and out of vSphere and vCloud environments and into other solutions that support the standard.  Just beware of solutions that will happily accept OVF workloads but not so happily give them back (warning: the majority won't).
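
For the vSphere side of that portability story, here is a minimal PowerCLI sketch of an OVF round trip, assuming the Export-VApp and Import-VApp cmdlets and using placeholder names and paths throughout (vCloud Connector and the vCloud API handle the cloud-to-cloud moves described above):

# OVF round-trip sketch (PowerCLI). Server, vApp, host, datastore names, and paths are placeholders.
Connect-VIServer -Server 'vcenter01.lab.local'

# Export a vApp (generally powered off) to an OVF package -- its metadata travels with it.
$vapp = Get-VApp -Name 'MyThreeTierApp'
Export-VApp -VApp $vapp -Destination 'C:\exports\' -Format Ovf

# Later, or at another site, import the same package onto a target host.
$targetHost = Get-VMHost -Name 'esxi-02.lab.local'
Import-VApp -Source 'C:\exports\MyThreeTierApp\MyThreeTierApp.ovf' -VMHost $targetHost -Datastore (Get-Datastore -Name 'datastore1')

Disconnect-VIServer -Confirm:$false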

The end result: cloud goodness, happy CIO’s, and streamlined IT.  How’s that for a differentiator?

++++

@virtualjad


Heterogeneous Foundations for Cloud: Simply Overrated

Let me start by making a statement that you may or may not agree with – being heterogeneous is often a problem in need of a solution…not a strategy. Allow me to explain…

I spend a lot of time discussing VMware's vCloud solution stack with many different customers, each with varying objectives when it comes to their cloud journey. The majority of them fall under two groups – Group A) those who know what they want and where to get it, and Group B) those who think they know what they want and have been shopping for the "right" solution since before cloud hit the mainstream – one "cloud bake-off" after another while changing requirements in real time. Can you guess which ones meet their objectives first? Hint: it's the same group that delivers IaaS to their enterprise and/or customers using proven technologies and trusted relationships in the time it takes the other to host a bake-off.

For group A the requirements are straightforward – deliver me a solution (and technology) that meets or exceeds all the characteristics of cloud [see: defining the cloud] so I can transform my infrastructure and deliver next-generation IT to the business. Sound familiar? It should, because this is where the great majority is – whether they accept it with open arms or are trying to meet agency mandates (or both). These are the organizations that understand the value of a COTS solution that promises to reduce cost, complexity, and time to market. These are the folks that consider what has worked so incredibly well in the past and stick with it. They look at the foundation that has built their virtualized infrastructures and helped them achieve unprecedented levels of efficiency, availability, and manageability. These are vSphere customers (did you see that coming?). Remember what the very first characteristic (and prerequisite) of Cloud is – Pooling of Resources. More than 80% of the virtualized world is running vSphere as their hypervisor of choice. In fact, a new VM is powered up on vSphere every 6 seconds and there are more VMs in (v)Motion than there are planes in the sky at any moment. There is no question that VMware's flagship hypervisor has changed the way we do IT – a hypervisor that has earned the right and reputation to be your cloud's foundation…vCloud's foundation.

But not everyone gets it (enter group B). These are the folks that set requirements they think are intended to benefit the business or customer and end up burning resources, money, and time in the process. My job is to look at the business' objectives, understand their unique requirements, propose a solution, and help determine the resulting architecture. But every once in a while a customer throws out a requirement that just doesn't make sense…and I feel, as a trusted advisor, it is my responsibility to make sure they understand the impact of such requirements.

This brings me to the topic of this post and the most often misguided “requirement” out there: “my cloud needs to support heterogeneous hypervisors”.

Say what!? Heterogeneous hypervisors? I’ll just put this out there – VMware’s cloud framework (specifically vCloud Director) does not support heterogeneous hypervisors – and for a very good reason! What benefit will this provide when there's an opportunity to build this baby from the ground-up? Let me be clear about one thing – the need to support a heterogeneous anything is a problem and not an effective business strategy. Heterogeneity often occurs when IT merges – whether that’s in a datacenter consolidation, business merger, bankrupt vendor, whatever. The business typically wants to save existing investments and needs a new way to manage those assets in a centralized/consolidated manner. A great example of this exists in the storage world – as datacenters were consolidated and several different flavors of storage subsystems were expected to play together, storage virtualization solutions were needed to make it so. There are several solutions out there to choose from – IBM SAN Volume Controller (SVC) or NetApp V-Series just to name a couple. Bottom line is the organization gained a heterogeneous storage environment and needed a solution to bring it all together to achieve centralized management. Although there are solutions available to help (some better than others), they are really just a band-aid and still result in everything you’d expect from such a situation:

  • increased complexity
  • added learning curve
  • masking of core/native capabilities
  • increased operations and management costs
  • reduced efficiencies
  • additional management layers
  • increased opportunity for failure
  • lots of finger-pointing when all hell breaks loose

These are all results of a problem. Organizations rarely choose to add complexity, cost, risk, etc. to their infrastructures but instead employ available technologies to help reduce the pain of such a situation. However, as the environment scales, these same organizations do choose to scale the native capacity first in an effort to avoid making the problem worse (e.g. adding storage capacity that natively integrates with the front-end solution).

When it comes to building a cloud infrastructure, most organizations are early in the design and planning process and have an opportunity to employ proven technologies and gain seamless integration, high efficiencies, centralized management, etc. all on top of a solid foundation. The key word here is foundation (i.e. the hypervisor) – the most critical component in this architecture. Why would any organization choose to take the heterogeneous approach and deal with the added risks when so much is at stake?

And finally, for all of you out there who suggest that not supporting a heterogeneous foundation creates cloud vendor lock-in (this happens to be the #1 argument), I only have this to say: regardless of who you trust to be your hypervisor, your best bet is to select a solution that provides an open and extensible framework, exposes APIs for seamless infrastructure integration, and has the trust and reputation your business or customer needs. I won't name names…but there's only one.

++++

@virtualjad


Transform IT With Cloud: GovConnection.com Podcast

I recently had an opportunity to record a Podcast with one of VMware's valued channel partners, GovConnection.com.  During the Podcast I addressed several questions regarding the adoption of cloud infrastructures in the Federal Government.

Topics included:

  • cloud adoption rates across federal organizations
  • cloud technology drivers (why cloud?)
  • the advantages of building out a cloud infrastructure vs. traditional IT
  • recommended steps for getting started (how cloud?)
  • how VMware solutions align themselves with this IT evolution

Take a listen @ GovConnection's Cloud Computing Technical Library (http://www.govconnection.com/IPA/PM/Solutions/TechnologyLibrary/Cloud.htm)

(go to "Transform IT With Cloud" and select "Listen Now")

or listen now: Jad_VMWare_GOV

 Enjoy! — feedback welcome

 

++++

@virtualjad

Gov’t Agencies Taking the Cloud Journey

This week I had the distinct pleasure of joining a panel of cloud industry experts for the AFCEA Belvoir Industry Days conference at Washington National Harbor's Gaylord Resort to discuss the hot topics of cloud computing in front of hundreds of attendees representing several federal agencies (notably the US Army).  The panel was moderated by GSA CIO, Casey Coleman, and included experts representing Lockheed Martin, CSC, Octo Consulting Group and — best of all — VMware.

To kick things off, each panelist had 5 minutes for opening remarks and to provide some insight on their organization's perspective on cloud…call it a 5-minute elevator pitch.  For my part, I shared VMware's cloud vision of transforming IT as we know it and the journey through this transformation — an approach to cloud that is broken up into three measurable stages:

  1. IT Production – early stage virtualization to reach new infrastructure and cost efficiencies.
  2. Business Production – realizing the value of all that is gained by virtualizing "low-hanging" applications in stage 1 — increased availability and performance, app agility, centralized management, etc. — to drive the virtualization of business-critical applications while setting a solid foundation for cloud computing.
  3. IT as a Service (ITaaS) – reaping the benefits of the first two stages and laying down the framework of a modern cloud architecture, which ultimately leads to business agility.

The first panel question was teed up by Ms. Coleman, which was enough to fuel additional questions from the 300+ audience for the remainder of the 1-hr session.  After each panelist shared their thoughts on each of the questions, I couldn't help but notice the recurring theme: Security and Compliance in the cloud.  The panel shared several views and opinions on this often-touchy topic.  Here are a few highlights of these and other important questions along with my responses (not necessarily in this order and all paraphrased, of course)…

+++
Q: How will I know my agency is ready for cloud?
A: Do IT and business agility intrigue you?  Understanding the industry-accepted characteristics of cloud — pooling, elasticity, automation, self-service, etc. (see: NIST) — and all that it promises will often trigger a need to move along on the journey.  But agencies are approaching the journey in many different ways. Some are eager to achieve the goal of business agility — and are quickly ramping up to get there — while others are simply following the guidelines of Vivek Kundra's Cloud First mandate but struggling to lay down the groundwork to get there.  Regardless of why you need/want cloud, how prepared your agency is will make the journey affordable, achievable, and worthwhile.

Q: How do I evolve from traditional IT to IT as a Service and the cloud?
A: First and foremost, setting a solid foundation for the cloud — just like you would for a house — is a critical first step in the journey (resource pooling: a key prerequisite).  For VMware's customers, that means achieving high levels of virtualization and efficiency through vSphere.  For any organization that is stuck in the IT Production phase (20-30% virtualized), that means taking the necessary steps to move to the Business Production phase and increasing those levels of virtualization to 60% or greater on an optimized virtual infrastructure.

Q: How is compliance and security addressed in the cloud?
A: We first have to understand what changes as we shift from static workloads protected by physical perimeter security devices to an environment where they run virtually on shared infrastructure — possibly across multiple datacenters — and are free to be elastic, portable, and dynamic.  This shift requires a fundamentally new approach.  From a VMware perspective, security and compliance are addressed using a set of technologies and management tools that provide end-to-end compliance and security in depth.  This includes the ability to provide dynamic network segmentation and protection in the cloud; providing secure multi-tenancy through frameworks and adaptive [virtual] security devices built for this era; a governance model that makes sense of all actions (and interactions); and a compliance and control engine that addresses these issues within a single workload or entire clouds at a time.  Only with these tools and tight integration with the surrounding frameworks can you provide a level of compliance for workloads small and big, connected or not, and still be able to deliver all that we strive to achieve in the cloud.

Q: Workload portability is critical — how is this achieved in the cloud?
A: We're constantly referring to the need for elasticity and portability in the cloud.  These terms refer to the ability to move workloads between cloud infrastructures for reasons including capacity, performance, security, availability, cost, and other business factors.  VMware addresses these key characteristics by implementing technologies that allow a cloud user to shift workloads across cloud infrastructures — between any combination of private, public, or traditional virtualized environments — and achieve true hybrid cloud capabilities.  With these tools at their fingertips, consumers are presented with a "single pane of glass" interface that allows them to move and manipulate workloads across all vCloud-powered clouds for whatever purpose.

Q: How about cloud interoperability?
A: Interoperability is key.  Most agencies that dive into the realm of all things cloud quickly realize that not all clouds are made equal — far from it!  This can be a big problem — the journey to cloud doesn't have to be polluted with warning signs and speed bumps.  VMware spearheaded the Open Virtualization Format (OVF), which has received industry-wide acceptance, is an ANSI standard for portability, and is supported by several partners and competitors alike.  With OVF, customers are able to import/export workloads and associated metadata to/from a variety of virtualization and cloud platforms.  VMware is also a big believer in open APIs — the vCloud APIs in this case — to enable streamlined management and control of workloads across clouds.  VMware uses these technologies natively to enable portability across vClouds (public/private/hybrid) and to/from vSphere environments.  This means that your on-premise private vCloud will deliver interoperability with vCloud-powered service providers and allow you to deploy, run, manage, and secure workloads across these common frameworks.

There are gotchas — understand that the objective here is to provide a means of moving your applications based on the requirements of the business or the unique characteristics of a given application.  Interoperability needs to be a two-way (at least) road…beware of the service providers that are happy to receive (import) an OVF workload but not give you the tools to get it back.  We call this the "Hotel California" model.  When all sources and destinations provide a common set of frameworks and API's, this issue goes away and streamlined management ensues.
+++

I certainly enjoyed learning each panelist's position — there were many common approaches, though not always, which keeps it interesting!  All in all, the audience questions were great, the panelists were often in sync, and we all demonstrated a [mostly] unified approach to the cloud journey.

++++
@virtualjad