
Tag Archives: heterogeneous

Connecting Clouds

For those organizations on the journey of transforming their datacenters to meet the demands of a modern IT consumption model, it’s easy to envision what cloud euphoria could (or should) look like.  That’s mostly because vision is cheap – all it takes is a little imagination, a few Google queries, several visits from your favorite vendor(s), and perhaps a top-down mandate or two.  The problem is that execution can break the bank if the vision is not in line with the organization’s core objectives.  It’s easy to get carried away in the planning stages with all the options, gizmos, and cloudy widgets out there – often delaying the project and creating budget shortfalls.  Cloud:Fail.  But this journey doesn’t have to be difficult (or horrendously expensive).  Finding the right solution is half the battle…just don’t glue together several disparate products that were never intended to comingle, then burn time and money trying to integrate them.  Sure, you might eventually achieve something that resembles a cloud, but you’re guaranteed to hit several unnecessary pain points along the way.

Of course, I’m not suggesting that putting all your eggs in one vendor’s basket guarantees success.  Nor am I suggesting that VMware’s basket is the only one that provides everything you’ll ever need for a successful cloud deployment.  In fact, VMware prides itself on an enormous (and growing) partner ecosystem that brings unique approaches and technologies to cloudy problems and beyond.  What I am suggesting, however, is the need to pick and choose wisely.  Well-integrated clouds = well-functioning clouds = happy clouds and happy customers.  Integration means common frameworks and interfaces, extensible APIs, automation via orchestration, app portability across clouds, and technologies that are purpose-built for the job(s) at hand.  And as a bonus, integration can mean leveraging what you already have – an infrastructure awaiting the transformation of a lifetime.  That’s right: the cloud journey should not be a rip-and-replace proposition.

There’s another major component to this – while I spend the majority of my time helping organizations and federal agencies adopt the cloud and transform their infrastructures, there’s often something else on the customer’s mind that can’t be ignored.  It’s a long-term strategy delivered in nine datacenter-shattering words: “I want to get out of the infrastructure business.”  I’m hearing this more and more often.  What these customers are referring to is the need to eventually shift workloads to public clouds rather than continue to invest in their own infrastructures.  This strategy makes perfect sense.  As the adoption of public cloud services increases, more and more CIOs are finding new comfort levels in handing over their apps and workloads to trusted cloud providers, albeit slowly.  But this also introduces new challenges.  How does an organization well on its way to delivering an enterprise/private cloud to the business ensure that future adoption of public clouds doesn’t mean starting from scratch?  What about managing and securing those workloads just as you would in the private cloud?  Public cloud providers need to be an extension of your private cloud, giving you the freedom of application placement, the ability to migrate workloads back and forth, and single-pane-of-glass visibility into all workloads and all clouds.  This endeavor requires the right planning, tools, and frameworks to be successful.

Here are the top “asks” from customers currently on, or getting ready to start, this journey (in no particular order):

  • Private cloud now…public cloud later (or both…now)
  • Workload portability (across clouds / cloud providers)
  • A holistic management approach
  • End-to-end visibility
  • Dynamic security
  • Cloud-worthy scalability

If any of this is resonating, then you’re probably in a similar situation.  CIOs are pushing the deployment of private clouds while simultaneously considering public cloud options.  Therefore, the solution needs to deliver everything we know and love of the private cloud while laying down the framework for public cloud expansion.  The problem is that not many solutions out there can do this.  Public cloud providers often run proprietary frameworks and management tools to keep costs low, while private cloud solutions are generally focused on just that – being private.

Enter VMware.

VMware has put a lot of effort into leveraging the success of vSphere – the cloud’s critical foundation – to take a commanding lead up the software stack and deliver a cloud solution for both private and public (i.e. hybrid) clouds.  And through the VMware Service Provider Program (VSPP), it has also enabled a new generation of cloud service providers that build their offerings on the same vCloud frameworks available to enterprises.  As a result, each and every one of these vCloud-powered service providers instantly becomes a possible extension of your private cloud, placing the power of the hybrid cloud – and all the “asks” above – at your fingertips.

Here’s what that looks like from a 10,000ft view…

  CIM Stack

  Let’s review this diagram:

1 – Physical Infrastructure: commodity compute, storage, and network infrastructure.

2 – vSphere Virtualization: hardware abstraction layer and cloud foundation.  Delivers physical compute, storage, and networks as resource pools, datastores, and portgroups (or dvPortgroups).

3 – Provider Virtual Datacenter (PvDC) and Organizational Virtual Datacenter (OvDC): delivered by vCloud Director as the first layer of cloud abstraction.  Resources are simply consumed as capacity and delivered on demand.

4 – vCenter Orchestrator: key technology for cloud integration, automation, and orchestration across native and 3rd-party solutions.

5 – vCenter Operations: holistic management framework for visibility into performance, capacity, compliance, and overall health.

6 – Security & Compliance: dynamic, policy-based security and compliance tools across clouds using vShield Edge and vCenter Configuration Manager (vCM).

7 – VMware Service Manager for Cloud Provisioning (VSM-CP): self-service web portal and business process engine tying it all together.  Integrates with vCO for mega automation.

8 – vCloud Connector (vCC): single-pane-of-glass control of clouds and workloads.  Enables workload portability to/from private and public vClouds and traditional vSphere environments.
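To make the “extensible APIs” point concrete: vCloud Director exposes a RESTful API in which a client authenticates with organization-qualified credentials (“user@org”) and pins an API version via the Accept header.  Here’s a minimal sketch of building such a login request with the Python standard library – the hostname, organization, and credentials are hypothetical, and the version string assumes the vCloud API of this generation:

```python
import base64

def build_vcloud_session_request(host, user, org, password, api_version="1.5"):
    """Build the URL and headers for a vCloud Director login request.

    vCloud Director uses HTTP Basic auth with the username qualified
    by the organization ("user@org"), and the client selects an API
    version through the Accept header.
    """
    url = "https://%s/api/sessions" % host
    credentials = "%s@%s:%s" % (user, org, password)
    token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
    headers = {
        "Authorization": "Basic %s" % token,
        "Accept": "application/*+xml;version=%s" % api_version,
    }
    return url, headers

# Hypothetical host and org, for illustration only:
url, headers = build_vcloud_session_request(
    "vcloud.example.gov", "admin", "MyOrg", "s3cret")
print(url)  # https://vcloud.example.gov/api/sessions
```

The session token returned in the login response is then carried on subsequent API calls – and this is exactly the kind of hook that vCO and third-party tools latch onto for orchestration.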

Last but not least is the very important question of “openness” in the cloud (don’t get me started on heterogeneous hypervisors!).  VMware spearheaded the OVF standard several years ago, and it has since been adopted by the industry as a whole as a means of migrating vSphere-based workloads to non-vSphere hypervisors (and the clouds above them) with metadata intact.  In fact, OVF remains a key technology in hybrid cloud scenarios and is an integral part of workload portability across clouds.  OVF gives customers the ability to move workloads into and out of vSphere and vCloud environments and into other solutions that support the standard.  Just beware of solutions that will happily accept OVF workloads but not so happily give them back (warning: the majority won’t).
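Part of what makes OVF work for portability is that a package is just an XML descriptor plus its disks, so the metadata traveling with a workload is machine-readable on either side of a migration.  A minimal sketch of inspecting one with the Python standard library – the descriptor below is a pared-down illustration with a made-up VM name; real OVF descriptors also carry disk, network, and virtual hardware sections:

```python
import xml.etree.ElementTree as ET

# A pared-down OVF 1.0 descriptor (illustrative only).
OVF_DESCRIPTOR = """<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="web-frontend">
    <Info>A single virtual machine</Info>
    <Name>web-frontend</Name>
  </VirtualSystem>
</Envelope>"""

# ElementTree addresses namespaced tags/attributes as "{uri}name".
OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

def list_virtual_systems(descriptor_xml):
    """Return the ovf:id of each VirtualSystem in an OVF descriptor."""
    root = ET.fromstring(descriptor_xml)
    return [vs.get(OVF_NS + "id")
            for vs in root.findall(OVF_NS + "VirtualSystem")]

print(list_virtual_systems(OVF_DESCRIPTOR))  # ['web-frontend']
```

Any platform that honors the standard can read (and, ideally, write) the same descriptor – which is why it matters whether a solution will give OVF workloads back.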

The end result: cloud goodness, happy CIOs, and streamlined IT.  How’s that for a differentiator?




Heterogeneous Foundations for Cloud: Simply Overrated

Let me start by making a statement that you may or may not agree with – being heterogeneous is often a problem in need of a solution…not a strategy. Allow me to explain…

I spend a lot of time discussing VMware’s vCloud solution stack with many different customers, each with varying objectives when it comes to their cloud journey. The majority of them fall into two groups – Group A: those who know what they want and where to get it, and Group B: those who think they know what they want and have been shopping for the “right” solution since before cloud hit the mainstream – one “cloud bake-off” after another while changing requirements in real time. Can you guess which group meets its objectives first? Hint: it’s the same group that delivers IaaS to their enterprise and/or customers using proven technologies and trusted relationships in the time it takes the other to host a bake-off.

For Group A the requirements are straightforward – deliver me a solution (and technology) that meets or exceeds all the characteristics of cloud [see: defining the cloud] so I can transform my infrastructure and deliver next-generation IT to the business. Sound familiar? It should, because this is where the great majority is – whether they accept it with open arms or are trying to meet agency mandates (or both). These are the organizations that understand the value of a COTS solution that promises to reduce cost, complexity, and time to market. These are the folks that consider what has worked so incredibly well in the past and stick with it. They look at the foundation that has built their virtualized infrastructures and helped them achieve unprecedented levels of efficiency, availability, and manageability. These are vSphere customers (did you see that coming?). Remember what the very first characteristic (and prerequisite) of cloud is – pooling of resources. More than 80% of the virtualized world is running vSphere as their hypervisor of choice. In fact, a new VM is powered up on vSphere every 6 seconds, and there are more VMs in (v)Motion than there are planes in the sky at any moment. There is no question that VMware’s flagship hypervisor has changed the way we do IT – a hypervisor that has earned the right and the reputation to be your cloud’s foundation…vCloud’s foundation.

But not everyone gets it (enter Group B). These are the folks that set requirements they think will benefit the business or customer and end up burning resources, money, and time in the process. My job is to look at the business’ objectives, understand their unique requirements, propose a solution, and help determine the resulting architecture. But every once in a while a customer throws out a requirement that just doesn’t make sense…and I feel, as a trusted advisor, that it is my responsibility to make sure they understand the impact of such requirements.

This brings me to the topic of this post and the most often misguided “requirement” out there: “my cloud needs to support heterogeneous hypervisors”.

Say what!? Heterogeneous hypervisors? I’ll just put this out there – VMware’s cloud framework (specifically vCloud Director) does not support heterogeneous hypervisors – and for a very good reason! What benefit would it provide when there’s an opportunity to build this baby from the ground up? Let me be clear about one thing – the need to support a heterogeneous anything is a problem, not an effective business strategy. Heterogeneity often occurs when IT merges – whether through a datacenter consolidation, a business merger, a bankrupt vendor, whatever. The business typically wants to preserve existing investments and needs a new way to manage those assets in a centralized/consolidated manner. A great example exists in the storage world – as datacenters were consolidated and several different flavors of storage subsystems were expected to play together, storage virtualization solutions were needed to make it so. There are several solutions out there to choose from – IBM SAN Volume Controller (SVC) or NetApp V-Series, just to name a couple. The bottom line is that the organization gained a heterogeneous storage environment and needed a solution to bring it all together to achieve centralized management. Although there are solutions available to help (some better than others), they are really just a band-aid and still result in everything you’d expect from such a situation:

  • increased complexity
  • added learning curve
  • masking of core/native capabilities
  • increased operations and management costs
  • reduced efficiencies
  • additional management layers
  • increased opportunity for failure
  • lots of finger-pointing when all hell breaks loose

These are all results of a problem. Organizations rarely choose to add complexity, cost, risk, etc. to their infrastructures; instead, they employ available technologies to help reduce the pain of such a situation. However, as the environment scales, these same organizations do choose to scale the native capacity first in an effort to avoid making the problem worse (e.g. adding storage capacity that natively integrates with the front-end solution).

When it comes to building a cloud infrastructure, most organizations are early in the design and planning process and have an opportunity to employ proven technologies and gain seamless integration, high efficiencies, centralized management, etc. all on top of a solid foundation. The key word here is foundation (i.e. the hypervisor) – the most critical component in this architecture. Why would any organization choose to take the heterogeneous approach and deal with the added risks when so much is at stake?

And finally, for all of you out there who suggest that not supporting a heterogeneous foundation creates cloud vendor lock-in (this happens to be the #1 argument), I have only this to say: regardless of who you trust to be your hypervisor, your best bet is to select a solution that provides an open and extensible framework, exposes APIs for seamless infrastructure integration, and has the trust and reputation your business or customer needs. I won’t name names…but there’s only one.


