
Tag Archives: service_providers

Heterogeneous Foundations for Cloud: Simply Overrated

Let me start by making a statement that you may or may not agree with – being heterogeneous is often a problem in need of a solution…not a strategy. Allow me to explain…

I spend a lot of time discussing VMware’s vCloud solution stack with many different customers, each with varying objectives when it comes to their cloud journey. The majority of them fall into two groups – Group A) those who know what they want and where to get it, and Group B) those who think they know what they want and have been shopping for the “right” solution since before cloud hit the mainstream – one “cloud bake-off” after another while changing requirements in real time. Can you guess which group meets its objectives first? Hint: it’s the same group that delivers IaaS to their enterprise and/or customers using proven technologies and trusted relationships in the time it takes the other to host a bake-off.

For group A the requirements are straightforward – deliver me a solution (and technology) that meets or exceeds all the characteristics of cloud [see: defining the cloud] so I can transform my infrastructure and deliver next-generation IT to the business. Sound familiar? It should, because this is where the great majority is – whether they accept it with open arms or are trying to meet agency mandates (or both). These are the organizations that understand the value of a COTS solution that promises to reduce cost, complexity, and time to market. These are the folks that consider what has worked so incredibly well in the past and stick with it. They look at the foundation that has built their virtualized infrastructures and helped them achieve unprecedented levels of efficiency, availability, and manageability. These are vSphere customers (did you see that coming?). Remember the very first characteristic (and prerequisite) of cloud – pooling of resources. More than 80% of the virtualized world is running vSphere as its hypervisor of choice. In fact, a new VM is powered up on vSphere every 6 seconds, and there are more VMs in (v)Motion than there are planes in the sky at any moment. There is no question that VMware’s flagship hypervisor has changed the way we do IT – a hypervisor that has earned the right and reputation to be your cloud’s foundation…vCloud’s foundation.

But not everyone gets it (enter group B). These are the folks that set requirements they think will benefit the business or customer and end up burning resources, money, and time in the process. My job is to look at the business’ objectives, understand their unique requirements, propose a solution, and help determine the resulting architecture. But every once in a while a customer throws out a requirement that just doesn’t make sense…and I feel, as a trusted advisor, it is my responsibility to make sure they understand the impact of such requirements.

This brings me to the topic of this post and the most frequently misguided “requirement” out there: “my cloud needs to support heterogeneous hypervisors”.

Say what!? Heterogeneous hypervisors? I’ll just put this out there – VMware’s cloud framework (specifically vCloud Director) does not support heterogeneous hypervisors – and for a very good reason! What benefit would this provide when there’s an opportunity to build this baby from the ground up? Let me be clear about one thing – the need to support a heterogeneous anything is a problem, not an effective business strategy. Heterogeneity often occurs when IT merges – whether through a datacenter consolidation, a business merger, a bankrupt vendor, whatever. The business typically wants to preserve existing investments and needs a new way to manage those assets in a centralized/consolidated manner. A great example of this exists in the storage world – as datacenters were consolidated and several different flavors of storage subsystems were expected to play together, storage virtualization solutions were needed to make it so. There are several solutions out there to choose from – IBM SAN Volume Controller (SVC) or NetApp V-Series, just to name a couple. Bottom line: the organization gained a heterogeneous storage environment and needed a solution to bring it all together to achieve centralized management. Although there are solutions available to help (some better than others), they are really just a band-aid and still result in everything you’d expect from such a situation:

  • increased complexity
  • added learning curve
  • masking of core/native capabilities
  • increased operations and management costs
  • reduced efficiencies
  • additional management layers
  • increased opportunity for failure
  • lots of finger-pointing when all hell breaks loose

These are all results of a problem. Organizations rarely choose to add complexity, cost, risk, etc. to their infrastructures; instead, they employ available technologies to help reduce the pain of such a situation. However, as the environment scales, these same organizations do choose to scale the native capacity first in an effort to avoid making the problem worse (e.g., adding storage capacity that natively integrates with the front-end solution).

When it comes to building a cloud infrastructure, most organizations are early in the design and planning process and have an opportunity to employ proven technologies and gain seamless integration, high efficiencies, centralized management, etc. all on top of a solid foundation. The key word here is foundation (i.e. the hypervisor) – the most critical component in this architecture. Why would any organization choose to take the heterogeneous approach and deal with the added risks when so much is at stake?

And finally, for all of you out there who suggest that not supporting a heterogeneous foundation creates cloud vendor lock-in (this happens to be the #1 argument), I only have this to say: regardless of who you trust to be your hypervisor, your best bet is to select a solution that provides an open and extensible framework, exposes APIs for seamless infrastructure integration, and has the trust and reputation your business or customer needs. I won’t name names…but there’s only one.
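
To make the “open and extensible framework” point a little more concrete, here is a minimal sketch (not an official example) of talking to a vCloud-style REST endpoint from Python. The hostname is hypothetical, and the /api/versions request shown is the vCloud API’s unauthenticated version-discovery call – details can vary by release.

```python
# Minimal sketch: ask a vCloud endpoint which API versions it supports.
# The hostname below is hypothetical; /api/versions is the vCloud API's
# unauthenticated version-discovery call.
import requests  # third-party HTTP client

VCD_HOST = "vcloud.example.gov"  # hypothetical vCloud Director endpoint


def list_supported_api_versions(host: str) -> str:
    """Return the XML document listing the API versions the endpoint exposes."""
    resp = requests.get(f"https://{host}/api/versions", timeout=10)
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    print(list_supported_api_versions(VCD_HOST))
```

Anything that speaks HTTP can consume that same interface, which is exactly the kind of openness that takes the lock-in argument off the table.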

++++

@virtualjad


Gov’t Agencies Taking the Cloud Journey

This week I had the distinct pleasure of joining a panel of cloud industry experts at the AFCEA Belvoir Industry Days conference at Washington National Harbor's Gaylord Resort to discuss the hot topics of cloud computing in front of hundreds of attendees representing several federal agencies (notably the US Army). The panel was moderated by GSA CIO Casey Coleman and included experts representing Lockheed Martin, CSC, Octo Consulting Group and — best of all — VMware.

To kick things off, each panelist had 5 minutes for opening remarks and to provide some insight on their organization's perspective on cloud…call it a 5-minute elevator pitch.  For my part, I shared VMware's cloud vision of transforming IT as we know it and the journey through this transformation — an approach to cloud that is broken up into three measurable stages:

  1. IT Production – early stage virtualization to reach new infrastructure and cost efficiencies.
  2. Business Production – realizing the value of all that is gained by virtualizing "low hanging" applications in stage 1 — increased availability and performance, app agility, centralized management, etc — to drive the virtualization of business critical applications while setting a solid foundation for cloud computing.
  3. IT as a Service (ITaaS) – reaping the benefits of the first two stages and laying down the framework of a modern cloud architecture, which ultimately leads to business agility.

The first panel question was teed up by Ms. Coleman, which was enough to fuel additional questions from the 300+ audience for the remainder of the 1-hour session. After each panelist shared their thoughts on each of the questions, I couldn't help but notice the recurring theme: Security and Compliance in the cloud. The panel shared several views and opinions on this often-touchy topic. Here are a few highlights of these and other important questions along with my responses (not necessarily in this order and all paraphrased, of course)…

+++
Q: How will I know my agency is ready for cloud?
A: Do IT and business agility intrigue you? Understanding the industry-accepted characteristics of cloud — pooling, elasticity, automation, self-service, etc. (see: NIST) — and all that it promises will often trigger a need to move along on the journey. But agencies are approaching the journey in many different ways. Some are eager to achieve the goal of business agility — and are quickly ramping up to get there — while others are simply following the guidelines of Vivek Kundra's Cloud First mandate but struggling to lay down the groundwork to get there. Regardless of why you need/want cloud, how prepared your agency is will determine whether the journey is affordable, achievable, and worthwhile.

Q: How do I evolve from traditional IT to IT as a Service and the cloud?
A: First and foremost, setting a solid foundation for the cloud — just like you would for a house — is a critical first step (resource pooling: a key prerequisite) in the journey. For VMware's customers, that means achieving high levels of virtualization and efficiency through vSphere. For any organization that is stuck in the IT Production phase (20-30% virtualized), that means taking the necessary steps to move to the Business Production phase and increase those levels of virtualization to 60% or greater on an optimized virtual infrastructure.

Q: How is compliance and security addressed in the cloud?
A: We first have to understand what changes as we shift from static workloads protected by physical perimeter security devices to an environment where they run virtually on shared infrastructure — possibly across multiple datacenters — and are free to be elastic, portable, and dynamic. This shift requires a fundamentally new approach. From a VMware perspective, security and compliance are addressed using a set of technologies and management tools to provide end-to-end compliance and security in depth. This includes the ability to provide dynamic network segmentation and protection in the cloud; secure multi-tenancy through frameworks and adaptive [virtual] security devices built for this era; a governance model that makes sense of all actions (and interactions); and a compliance and control engine that addresses these issues within a single workload or entire clouds at a time. Only with these tools and tight integration with the surrounding frameworks can you provide a level of compliance for workloads big and small, connected or not, and still deliver all that we strive to achieve in the cloud.

Q: Workload portability is critical — how is this achieved in the cloud?
A: We're constantly referring to the need for elasticity and portability in the cloud. These terms refer to the ability to move workloads between cloud infrastructures for reasons including capacity, performance, security, availability, cost, and other business factors. VMware addresses these key characteristics by implementing technologies that allow a cloud user to shift workloads across cloud infrastructures — between any combination of private, public, or traditional virtualized environments — and achieve true hybrid cloud capabilities. With these tools at their fingertips, consumers are presented with a "single pane of glass" interface that allows them to move and manipulate workloads across all vCloud-powered clouds for whatever purpose.
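
For the API-minded, here is a rough sketch of what driving that kind of workload visibility programmatically might look like against a vCloud endpoint. The hostname, org, and credentials are placeholders, and the Accept version header and x-vcloud-authorization token follow vCloud Director-era API conventions, so specifics will vary by release.

```python
# Rough sketch: authenticate to a vCloud API endpoint and list vApp records
# via the query service. Hostname, org, and credentials are placeholders.
import requests

HOST = "vcloud.example.gov"               # hypothetical vCloud endpoint
ACCEPT = "application/*+xml;version=5.1"  # assumed API version header


def login(user_at_org: str, password: str) -> str:
    """POST /api/sessions with Basic auth and return the session token."""
    resp = requests.post(
        f"https://{HOST}/api/sessions",
        auth=(user_at_org, password),
        headers={"Accept": ACCEPT},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.headers["x-vcloud-authorization"]


def list_vapps(token: str) -> str:
    """GET the query service for vApp records; returns the raw XML page."""
    resp = requests.get(
        f"https://{HOST}/api/query",
        params={"type": "vApp"},
        headers={"Accept": ACCEPT, "x-vcloud-authorization": token},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    session_token = login("consumer@MyOrg", "changeme")  # placeholder credentials
    print(list_vapps(session_token))
```

Because private, public, and hybrid vClouds expose the same API surface, the same few calls work regardless of where the workload happens to live.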

Q: How about cloud interoperability?
A: Interoperability is key. Most agencies that dive into the realm of all things cloud quickly realize that not all clouds are made equal — far from it! This can be a big problem — the journey to cloud doesn't have to be polluted with warning signs and speed bumps. VMware spearheaded the Open Virtualization Format (OVF), which has received industry-wide acceptance, is an ANSI standard for portability, and is supported by several partners and competitors alike. With OVF, customers are able to import/export workloads and their associated metadata to/from a variety of virtualization and cloud platforms. VMware is also a big believer in open APIs — the vCloud API in this case — to enable streamlined management and control of workloads across clouds. VMware uses these technologies natively to enable portability across vClouds (pub/priv/hybrid) and to/from vSphere environments. This means that your on-premise private vCloud will deliver interoperability with vCloud-powered service providers and allow you to deploy, run, manage, and secure workloads across these common frameworks.
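
As a small illustration of the portable metadata OVF carries, the sketch below reads an exported descriptor and lists the virtual systems it declares. The filename is hypothetical; the namespace URI is the standard DMTF OVF envelope namespace.

```python
# Illustrative sketch: list the virtual systems declared in an OVF descriptor.
# The descriptor filename is hypothetical; the namespace is the DMTF OVF
# envelope namespace used by OVF 1.x packages.
import xml.etree.ElementTree as ET

OVF_NS = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}


def list_virtual_systems(descriptor_path: str) -> list:
    """Return the ovf:id of each VirtualSystem element in the descriptor."""
    envelope = ET.parse(descriptor_path).getroot()
    systems = envelope.findall(".//ovf:VirtualSystem", OVF_NS)
    id_attr = f"{{{OVF_NS['ovf']}}}id"
    return [vs.get(id_attr, "<unnamed>") for vs in systems]


if __name__ == "__main__":
    for name in list_virtual_systems("exported_vapp.ovf"):  # hypothetical file
        print(name)
```

Because that descriptor is just standards-based XML, any platform that honors OVF can read the same metadata on import, which is the whole point of the two-way road described below.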

There are gotchas — understand that the objective here is to provide a means of moving your applications based on the requirements of the business or the unique characteristics of a given application. Interoperability needs to be a two-way (at least) road…beware of service providers that are happy to receive (import) an OVF workload but won't give you the tools to get it back out. We call this the "Hotel California" model. When all sources and destinations provide a common set of frameworks and APIs, this issue goes away and streamlined management ensues.
+++

I certainly enjoyed learning the position of each panelist — many common approaches, though not always, which keeps it interesting! All in all, the audience questions were great, the panelists were often in sync, and we all demonstrated a [mostly] unified approach to the cloud journey.

++++
@virtualjad