
SDDC: Changing Organizational Cultures

By Tim Jones

I like to think of SDDC as “service-driven data center” in addition to “software-defined data center.” The vision for SDDC expands beyond technical implementation, encompassing the transformation from IT shop to service provider and from cost center to business enabler. The idea of “service-driven” opens the conversation to include the business logic that drives how the entire service is offered. Organizations have to consider the business processes that form the basis of what to automate. They must define the roles required to support both the infrastructure and the automation. There are financial models and financial maturity necessary to drive behavior on both the customer and the service provider side. And finally, the service definitions should be derived from use cases that enable customers to use the technology and define what the infrastructure should support.

When you think through all of the above, you’re really redefining how you do business, which requires a certain amount of cultural change across the entire organization. If you don’t change the thinking about how and why you offer the technology, then you will introduce new problems alongside the problems you were trying to alleviate. (Of course, the same problems will happen faster and will be delivered automatically.)

I liken the advance to SDDC to the shift that occurred when VMware first introduced x86 virtualization. The move to reclaim resources previously wasted on physical servers, by deploying multiple virtual machines, gathered momentum very quickly. But based on my experiences, the companies that truly benefited were those that implemented new processes for server requisitioning. They worked with their customers to help them understand that they no longer needed to buy today what they might need in three years, because resources could be easily added in a virtual environment.

The successful IT shops actively managed their environments to ensure that resources weren’t wasted on unnecessary servers. They also anticipated future customer needs and planned ahead. These same shops understood the need to train support staff to manage the virtualized environment efficiently, with quick response times and personal service that matched the technology advances. They instituted a “virtualization first” mentality to drive more cost savings and extend the benefits of virtualization to the broadest possible audience. And they evangelized. They believed in the benefits virtualization offered and helped change the culture of their IT shops and the business they supported from the bottom up.

The IT shops that didn’t achieve these things ended up with VM sprawl and over-sized virtual machines designed as if they were physical servers. The environment became as expensive or more expensive than the physical-server-only environment it replaced.

The same types of things will happen with this next shift from virtualized servers to virtualized, automated infrastructure. The ability for users to deploy virtual machines without IT intervention requires strict controls around chargeback and lifecycle management. Security vulnerabilities creep in when systems aren’t added to monitoring or virus-scanning applications. Time and effort—which equate to cost—are wasted when IT continues to design services without engaging the business. Instead of shadow IT, you end up with shadow applications or platforms that self-service users create because what they need isn’t offered.

The primary way to avoid these mistakes is to remake the culture of IT—and by extension the business—to support the broader vision of offering ITaaS and not just IaaS.

Tim Jones is business transformation architect with VMware Accelerate Advisory Services and is based in California. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

Forensic IT: Discover Issues Before Your End Users Do

by Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

If you’ve ever watched five-year-olds playing a soccer game, there is very little strategy: all the kids swarm the field and chase the ball trying to score a goal.

Most IT departments take a similar sort of “swarming” approach to service incidents and problems when they occur.

For most of my career, IT has been a reactive business: we waited until there was a problem and then scrambled very well to solve it. We were tactical problem solvers operating in a reactive mode, and monitoring focused on availability and capturing degradation in services rather than being proactive and predictive, analyzing patterns to stay ahead of problems. In the new world of IT as a service, where expectations are very different, that model no longer works.

New and emerging forensics capabilities give IT the tools to be proactive and predictive—to focus on quality of service and end-user satisfaction, which is a must in the cloud era.

Forensics: A new role for IT
As an example, new network forensics tools that monitor and analyze network traffic may seem a natural fit for network engineers, but at VMware we found the required skillsets to be quite different. We need people who have an inquisitive mindset — a sort of “network detective” who thinks like a data analyst and can look at different patterns and diagnostics to find problems before they’re reported or surface as user impact.

Those in newly created IT forensic roles may have a different set of skills than a typical IT technologist. They may not even be technology subject matter experts, but they may be more like data scientists, who can find patterns and string together clues to find the root of potential problems.
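To make that concrete, here is a minimal sketch (in Python) of the kind of pattern analysis such a role might automate. It is illustrative only: the metric, window size, and z-score threshold are assumptions, not VMware tooling. The idea is to flag a deviation from a learned baseline before users feel it.

```python
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    """Flag metric samples that deviate sharply from a rolling baseline."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # rolling baseline window
        self.threshold = threshold           # z-score that counts as anomalous

    def observe(self, value):
        """Return True if value is anomalous versus the baseline seen so far."""
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                return True  # keep the outlier out of the baseline
        self.samples.append(value)
        return False

detector = BaselineDetector(window=30, threshold=3.0)
for latency_ms in [42, 40, 44, 41, 43, 40, 39, 42, 180]:  # last sample drifts
    if detector.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms deviates from the rolling baseline")
```

A real deployment would watch many such signals at once, but the mindset is the same: investigate the deviation before a ticket is ever opened.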

Adding this new type of role to the IT organization most definitely presents challenges, as it goes against the way IT has typically been done. But this shift to a new way of delivering service, moving from the traditional swarm model to a more predictive, forensics-driven model, means a new way of thinking about problem solving. Most importantly, forensics has the potential to create a significant reduction in service impact and to maintain a high level of service availability and quality.

Quality of service and reducing end user friction
Every time an end user has to stop and depend on another human to fix an IT problem, it’s a friction point. Consumers have come to expect always-on, 100 percent uptime, and they don’t want to take the time to open a ticket or pause and create a dependency on another human to solve their need. As IT organizations, we need to focus more on the user experience and quality of service—today’s norm of being available 100 percent of the time is table stakes.

With everything connected to the “cloud,” it’s even more important for IT to be proactive and predictive about potential service issues. Applications pull from different systems and processes across the enterprise and across clouds. Without the right analysis tools, IT can’t understand the global user experience and where potential friction points may be occurring. In most cases, IT finds out about a poor quality-of-service experience when users complain — perhaps even publicly on their social networks. Unless we get in front of possible issues and take an outside-in, customer-oriented view, we’re headed for lots of complaints around quality of service.

At VMware, we have seen a significant reduction in overall service impact since using network forensics, and we’re keeping our internal customers productive. Focusing on quality of service and finding people with the right skillsets to fill the associated roles has us unearthing problems long before our end users experience so much as a glitch.

———-
Follow @VMwareCloudOps and @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

What I Learned from VMware’s Internal Private Cloud Deployment

By Kurt Milne

For seven years as an industry analyst, I studied top-performing IT organizations to figure out what made them best-in-class. And after studying 30 private cloud early adopters in 2011, I co-authored a book about how to deploy private cloud.

But after joining VMware last year, I’ve had the opportunity to spend six months working closely with VMware’s IT team to get an in-depth understanding of our internal private vCloud Suite deployment.

In this multi-part blog series, I’ll write about what I’ve learned.

Lesson learned – The most important thing I learned, and what really reframed much of my thinking about IT spending, is that VMware IT invested in our private cloud strategy to increase business agility.  And that effort drastically lowered our costs.

Breaking it down:

1. We made a strategic decision to try something different.

Over the years, I’ve studied companies that use every form of IT budget squeezing there is. With a “cut till it hurts” or a “cut until something important breaks” approach, the primary objective of lowering IT budgets is often achieved, but it leaves IT hamstrung and unable to meet the needs of the business. An unbalanced focus on cost cutting reduces IT’s ability to deliver. That in turn lowers business perception of IT value, which further focuses efforts on cost cutting. Define “death spiral.”

VMware didn’t follow that path when we decided to invest in private cloud. We justified our “Project OneCloud” based on the belief that the traditional way of growing IT capabilities wouldn’t scale to meet our growth objectives. We have doubled revenue and headcount many times over the last 10 years. The IT executive team had the insight to realize that a linear approach of increasing capacity by buying more boxes and adding more headcount would not support business needs as we double in size yet again. We are no longer a startup. We have grown up as a company. We had to try a different approach.

Apparently VMware IT is not alone in this thinking. “IT Under Pressure,” a McKinsey Global Survey, shows a marked shift in 2013 toward using IT to improve business effectiveness and efficiency, not just to manage costs.

2. Effective service design drove adoption.

What really enabled our private cloud success was broad adoption. Private cloud demands a commitment and investment that require broad adoption to justify the cost and effort. The promise of delivering IT services the same old way at lower cost didn’t drive adoption. What drove adoption was a new operating model focused on delivering and consuming IT as a service: abstracting infrastructure delivered as basic compute, network, and storage as a service, then designing IT services for specific groups of consumers that allowed them to get what they needed, when they needed it. That included application stacks, dev/test environments, and any other business function that depends on IT infrastructure (almost all do in the mobile-cloud era). We strove to eliminate the need to call IT, and also eliminated tickets between functional groups within IT.

Ten different business functions — from sales, marketing, and product delivery, to support and training — have moved their workloads to the cloud. Many have their own service catalog with a focused set of services as a front end on the private cloud. Many have their own operations teams that monitor and support the automation and processes built on top of infrastructure services.

Carefully designing IT services, then giving people access to get what they need when they need it without having to call IT, is key to success.

3. Broad adoption drove down costs via scale economies.

We started with one business group deploying sales demos and put their work in a service catalog front end on the private cloud. Then we expanded, onboarding other functional groups to the cloud. One trick: develop a relationship with procurement. Any time someone orders hardware within the company, get in front of the order and see if they will deploy on the private cloud instead.

Make IT customers’ jobs easier. Accelerate their time to desired results. Build trust by setting realistic expectations, then delivering on them.

Three primary milestones:

  1. Once we onboarded a few key tenants and got to ~10,000 VMs in our cloud, we lowered cost per general purpose VM by roughly 50 percent. With a new infrastructure as a service model that allowed consumers to “outsource infrastructure” to our central cloud team — and at a much lower cost per VM — word got out, and multiple other business groups wanted to move to the cloud.
  2. Once we onboarded another handful of tenants and got to ~50,000 VMs in our private cloud, we lowered cost per general purpose VM by another 50 percent. We were surprised by how fast demand grew and how fast we scaled from 10,000 to 50,000 VMs.
  3. We are “all in” and now on track to meet our goal of having around 95 percent of all our corporate workloads in private or hybrid cloud (vCloud Hybrid Service) – for a total of around 80,000 to 90,000 VMs. We expect cost per VM to drop another 50 percent.

So we set out to increase agility and better meet the needs of the business, delivered services that made IT consumers’ jobs easier, and as a result we dropped our cost per VM by ~85 percent.
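A quick sanity check on that math, under the stated assumption that each of the three milestones cut cost per general-purpose VM by roughly half:

```python
# Halvings compound multiplicatively, not additively, which is why three
# ~50% cuts land near the ~85 percent total reduction cited above.
cost = 1.0                      # normalized starting cost per VM
for _ in range(3):              # ~10K VMs, ~50K VMs, then "all in"
    cost *= 0.5                 # each milestone roughly halved the cost
print(f"remaining cost: {cost:.3f} of original")   # 0.125
print(f"total reduction: {1 - cost:.1%}")          # 87.5%, i.e. roughly 85%
```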

Key takeaways:

  • Our private cloud goal was to reshape IT to better meet revenue growth objectives.
  • We transformed IT to deliver IT services in a way that abstracted the infrastructure layer and allowed various business teams to “outsource infrastructure.”
  • Ten different internal business groups have moved workloads to private cloud.
  • Less focus on infrastructure and easy access to personalized services made it easier for IT service consumers to do their jobs and focus more on their customers.
  • A new operating model for IT and effective service design drove adoption.
  • Broad adoption drove down costs by ~85 percent.

Below are links to two short videos of VMware IT executives sharing their lessons learned related to cost and agility. In my next post, I’ll talk about what I learned about a new operating model for IT.

—-
Follow @VMwareCloudOps and @kurtmilne on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.


Top 5 Ways IT Can Stay Relevant in the Cloud Era

By Jason Stevenson

You have heard it said that “rock and roll is dead.” The same could soon be said about your IT department.

External pressures are driving an extinction of the IT department. Today’s business users are becoming more and more savvy, growing up with all kinds of technology in both the home and the office. Desk-side computing is dying off quickly, being left behind by technologies—like tablets, smart phones, and even wearable technology like eyeglasses and wristwatches—that make your employees more mobile and agile. This type of technology doesn’t typically need desk-side support, and when users are frustrated enough to need “human” help, they look for services such as Kindle’s Mayday for instant assistance specific to the device or service they are having issues with.

The movement toward mobility and agility naturally drives organizations toward more cloud-based services, and software as a service (SaaS) rather than customized applications. This means that as time goes on, storage infrastructure, compute infrastructure, network infrastructure, and the data center will become less and less relevant.

Today’s business managers need to move at the speed of technology, and often consider the IT department a hindrance more than anything else. So how do you reverse that trend?

1. Become a “service provider” or be left in the dust.

By shifting your focus from technology to service management, you act as a broker of the cloud services available in the marketplace. In order to “run IT like a business,” the IT department needs to have a clear picture of the services it provides and how these services create value for its customers.

Through full transformation of your organizational structure, core processes, and management tools, you are able to facilitate, combine, and enhance cloud services to add business value.

2. People are paramount. Your IT org chart should match your service-oriented approach.

Where your organization today is likely heavily invested in operational resources, a more mature organization will be needed. Your IT department will no longer deliver technology but will become a truly service-oriented organization, and it must be resourced appropriately.

You may find your organization chart actually reflecting a service lifecycle with service strategy at the top of the pyramid and service design, transition, operation, and improvement on the bottom of the pyramid. Consider how your services will flow naturally through the bottom of your organization chart, moving left-to-right/cradle-to-grave.

3. Perfect your processes. Even the greatest technology means nothing if there isn’t a solid plan in place to deliver it.

If you are still thinking technology is the solution, it’s time to get your head out of the clouds; or better said, into the clouds. An IT service provider that will stand the test of time will have robust processes in place for:

  • Relationship and Demand Management
  • Portfolio and Finance Management
  • Supplier Management
  • Change, Configuration, Release, and Validation Management
  • Portal Management
  • Reporting Management

Without these processes in place your customers will never truly understand or appreciate the value of the technology you are providing.

4. Technology changes CONSTANTLY. Accept that and adopt it as part of your strategy moving forward.

Rather than blather on about technology, talking about what’s hot and what’s not, accept the simple fact that technology changes every day in real and profound ways. You must avoid resistance to change and embrace innovation when it makes good sense from a cost and risk perspective.

At this moment, technology is clearly trending toward attributes like mobility, community, utility, and self-service. Your customers (the business) are better informed than ever of these trends and are expecting their business to move at the speed of technology. Be an enabler of these new trends, not a roadblock.

5. Your customers (the business) will either love you or leave you. Focus on their experience with IT above all else.

You must provide service that meets not only user requirements but also user expectations. Users want to feel empowered and immediately gratified. Become the preferred method of engaging services. Provide a positive user experience that users want to engage with, not just one they have to. Become a trusted advisor, aware of both business and IT trends. Be impressive when it comes to managing suppliers and vendors, and be an excellent project manager.

You must deliver services in such a way that your users are left not merely satisfied, but actually exhilarated by the service they received. Otherwise, you are not adding value and cannot compete.

Your focus must truly shift from technology to services. And, you must begin this journey now, before becoming obsolete like so much of the technology you have retired from your environments. Be willing to look outside your organization for solutions, and realize the paradigm shift from technology design to service delivery.

—–

Jason Stevenson is a transformation consultant with VMware Accelerate Advisory Services. He is based in Michigan and one of only seven certified ITIL Masters in the U.S.

Provide Transparency with an ITBM Service Costing Process

By Khalid Hakim

Last week I wrote about the growing need for IT to provide cost transparency to the business, especially to support its transition to a service provider or broker model. I also outlined some of the problems caused by opaque, decentralized costing strategies.

At VMware, we rely on an IT Business Management (ITBM) Service Costing Process (SCP) to help customers run IT like a business. Behind the acronyms lies a powerful tool that allows IT to validate its expenditures and solidify its role as a business leader. Here are four areas where our ITBM SCP solution helps address the challenges I outlined last week.

1. Service-based cost models
The ITBM SCP helps your organization establish a well-defined, repeatable, and consistent service-costing process with clear roles and responsibilities. This includes engaging the IT and finance teams to create and possibly mature your service-based cost model to encapsulate both technical and business services. Once developed, a service-based cost allocation strategy is signed off on by all involved departments to ensure standardization across the IT organization.

Using the SCP methodology also helps standardize how costs should be classified based on IT Financial Management (ITFM) and ITBM management principles, along with finance department policies. By implementing a full-service cost model, IT helps explain the cost of its services and deliverables, eliminates random cost allocation, and ensures more effective cost optimization efforts.
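As a simple illustration of what a service-based cost model does (a hypothetical Python sketch, not the SCP tooling itself; the cost pools, services, and allocation percentages are invented), costs roll up from shared pools to end-to-end services via explicit allocation drivers, rather than being spread arbitrarily across departments:

```python
# Illustrative service-based costing: every dollar in each shared pool is
# claimed by an explicit driver, so a service owner can trace the number.
cost_pools = {"compute": 500_000, "storage": 200_000, "support_staff": 300_000}

# Allocation drivers: what share of each pool a service consumes.
allocation = {
    "email":     {"compute": 0.20, "storage": 0.50, "support_staff": 0.30},
    "crm":       {"compute": 0.50, "storage": 0.30, "support_staff": 0.40},
    "analytics": {"compute": 0.30, "storage": 0.20, "support_staff": 0.30},
}

def service_cost(service):
    """Sum this service's driver-based share of every cost pool."""
    shares = allocation[service]
    return sum(cost_pools[pool] * shares.get(pool, 0.0) for pool in cost_pools)

for svc in allocation:
    print(f"{svc:<10} annual cost: ${service_cost(svc):,.0f}")
```

Because the drivers for each pool sum to 100 percent, the service costs reconcile exactly to total spend, which is what lets a service owner defend the number in front of finance.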

2. True cost transparency
An ITBM SCP approach encourages service-based cost models to be built in a collaborative way that provides IT with internal cost transparency, which can then be shared externally with your executive and line-of-business stakeholders and customers. A series of workshops helps cost an end-to-end service using a number of use cases and alternative scenarios to come up with a service-specific cost structure that best fits your organization and business needs. This in turn helps avoid over- or under-costed services. The SCP empowers service managers and owners to defend their numbers more confidently and helps shift IT’s image from “always expensive” to “always valuable.”

3. Value-driven approach
Our SCP methodology is supported by an Agile approach, which provides coverage to all IT services, along with more reliable data sources and processes. The iterative, phased approach delivers quick wins, ensuring that value is immediately recognized by all your stakeholders. This in turn helps you rebrand IT as a value creator rather than a cost center.

4. Improved ITBM maturity
The SCP solution targets service managers and owners (in addition to IT financial managers) via a series of knowledge transfers, educational workshops, and discussions to provide the background needed to manage IT as a business and optimize ITFM processes. It also raises the awareness of the finance team, since their view is typically limited to non-service management accounting. Altogether, this helps elevate the investment planning process to a more service-oriented approach that drives higher IT and business value. Moreover, it helps define the success metrics required to sustain the SCP and ensure strategy continuity.

In all these ways, the ITBM Service Costing Process can help your IT organization understand its costs, increase efficiency, identify areas of improvement, and provide the transparency necessary to help the business continue to see IT as a value creator.

——-
Khalid Hakim is an operations architect with the VMware Operations Transformation global practice. You can follow him on Twitter @KhalidHakim47.

An ITBM Service Costing Process is Key to IT Transformation

By Khalid Hakim

As more businesses recognize the integral role IT plays in the overall success of the enterprise, executive and business stakeholders have higher expectations of IT’s performance and its ability to prove its value. Providing cost transparency back to the business is key to meeting those expectations.

That is why today’s IT organization needs an in-depth understanding of the costs associated with delivering IT services, enabling each service manager/owner to defend his or her numbers from a service angle (not from an expense code or a department/project budget) and thus improve the overall perception of IT service value.

This highlights the need for a new management discipline that provides a framework to deliver IT as a service and manage the business of IT: IT Business Management (ITBM). Yet many IT leaders do not have the support, knowledge, or bandwidth needed to implement an effective ITBM practice, with its core focus on minimizing IT costs while maximizing business value.

When I’m working with customers, I use VMware’s ITBM Service Costing Process (SCP) to facilitate a modular service-based costing approach that offers ease in manageability and operability. In my next post I’ll dig into the details of how the SCP solution is used as well as the benefits and business value it addresses. But first, I want to clarify the far-reaching repercussions of failing to implement these processes.

Common challenges facing IT
The biggest problem for today’s IT organizations is not insufficient funds or a lack of financial management skills, but rather that IT planning, budgeting, costing, allocating, and pricing are all based on by-department cost management.

Traditional IT costing methods don’t explicitly call out service-based value structures and bills. They focus instead on costs associated with technology component purchases, projects implemented, cost-code totals, department costs, and customer allocations of these non-value-add cost elements.

These situations create a host of business issues for IT:

  • Failure to understand the costs of IT deliverables — Not all service managers are able to understand their end-to-end service costs and defend their expenses, due to the lack of true service views, including service catalogs and definitions, as well as service-based cost models.
  • Arbitrary cost cutting and budget shrinking decisions — Management often looks at expense lists from cost codes or a totals view, not from a service-based view that enables top management to see a holistic path to savings.
  • Random cost allocation — IT’s cost allocation is typically based on policies and guidelines set by the finance department that are usually technically driven and don’t reflect the full value of IT.
  • Overstated or understated service costs — IT service cost calculations may include superfluous cost elements or exclude key ones. This is caused by the lack of a well-defined, service-based costing standard across IT, which results in services that can’t be compared “apples-to-apples” with outside service providers.
  • The “IT is always expensive” perception — Service managers and owners can’t confidently defend their numbers, which results in a common perception that IT is expensive.
  • Lack of trust and value realization — Due to the lack of value-centric conversations and full service-based cost transparency, talks tend to focus on numbers instead of the true value delivered to the business. As long as services are not being managed as a business, customers will continue to question what their money is buying.
  • Data does not support meaningful decisions — One of the biggest challenges IT faces without an ITBM SCP is unreliable and inaccurate financial data related to IT assets.
  • Poor budget processes or lack of budget clarity — The traditional IT budgeting process limits IT’s view of its own capabilities and creates uncertainties and inefficiencies in day-to-day operations. Running IT like a business requires budgets based on service demands rather than expense codes.
  • Limited financial and business management background — Financial management is not stressed across the IT organization; instead it is seen as a specialized role important only for ITFM managers. Service managers and IT staff generally lack the basic financial management background that could provide them with important insights.

But there is good news for the IT organization. Check back, and I’ll share more details about the ITBM SCP solution and the four key areas in which it addresses these challenges.
——-

Khalid Hakim is an operations architect with the VMware Operations Transformation global practice. You can follow him on Twitter @KhalidHakim47.

Using vCloud Suite to Streamline DevOps

By: Jennifer Galvin

A few weeks ago I was discussing mobile app development and deployment with a friend. This particular friend works for a company that develops mobile applications for all platforms on a contract-by-contract basis. It’s a good business. But one of their key challenges is the time and effort required to set up a client’s development and test environment before development can start. Multiple platforms need to be provisioned, and development and testing tools that may be unique to each platform must be installed. This often means maintaining large teams with specialized skills and a broad range of dev/test environments.

I have always been aware that VMware’s vCloud Suite can speed up deployment of applications (even complex application stacks), but I didn’t know if long setup times were common in the mobile application business. So I started to ask around:

“What was the shortest time possible it would take for your development teams to make a minor change to a mobile application, on ALL mobile platforms – Android, iPhone, Windows, Blackberry, etc.?”

The answers ranged between “months” and “never.”

Sometime later, after presenting VMware’s Software-Defined Datacenter vision to a tech meetup in Washington, D.C., a gentleman approached me to discuss the question I had posed. While he liked the SDDC vision, he wondered if I knew of a way to use vCloud Suite and software-controlled everything to speed up mobile development. So I decided to sketch out how the blueprint and automated provisioning capabilities of the vCloud Suite could help speed up application development on multiple platforms.

First, let’s figure out why this is so hard in the first place – after all, mobile development SDKs are frameworks, and while it takes a developer to write an app, the SDK is still doing a lot of the heavy lifting. So why is this still taking so long? As it turns out, there are some major obstacles to deal with:

  • Mobile applications always need a server-side application to test against: mobile applications interact with server-side applications, and unless your server-side application is already a stable, multi-tenant application that can deal with the extra performance drain of 32 developers running amok (and you don’t mind upsetting your existing customers), you’re going to need to point them at a completely separate environment.
  • The server-side application is complex and lengthy to deploy: A 3-tier web application with infrastructure (including networking and storage), scrubbed production database data to provide some working data, and front-end load balancing is the same kind of deployment you did when the application initially went into production. You’re not going to start development on your application any time soon unless this process speeds up (and gets more automated).

Let’s solve these problems by getting a copy of the application (and a copy of production-scrubbed data) out into a new testing area so the developers can get access to it, fast. vCloud Suite provides a framework for server-side application developers to express the application’s deployment as a blueprint, capable of deploying not just the code but all the properties needed to automate the deployment, and of consuming capacity from on-premises resources as well as from the public cloud. That means that when it comes time to deploy a new copy (with the database refreshed and available), it’s as easy as a single click of a button.
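To illustrate the blueprint idea, here is a hypothetical Python sketch of deployment-as-data (not the actual vCloud Suite blueprint format or API; the tier names and templates are invented). The whole server-side stack is declared once, and standing up a fresh copy becomes a single call:

```python
# Toy illustration of deployment-as-a-blueprint: the entire server-side
# stack is declared as data, so a fresh test copy is one function call.
blueprint = {
    "name": "mobile-backend-test",
    "tiers": [
        {"role": "load-balancer", "count": 1},
        {"role": "web", "count": 2, "image": "web-server-template"},
        {"role": "app", "count": 2, "image": "app-server-template"},
        {"role": "db",  "count": 1, "image": "db-template",
         "post_provision": ["restore-scrubbed-prod-data"]},
    ],
    "network": {"isolated": True},   # keep testers away from production
}

def deploy(bp):
    """Walk the blueprint and 'provision' each tier in order."""
    print(f"Deploying {bp['name']} (isolated={bp['network']['isolated']})")
    for tier in bp["tiers"]:
        for i in range(tier["count"]):
            print(f"  provisioned {tier['role']}-{i} "
                  f"from {tier.get('image', 'default')}")
        for step in tier.get("post_provision", []):
            print(f"  ran post-provision step: {step}")

deploy(blueprint)   # the "single click"
```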

Since the underlying infrastructure is virtualized, compute resources are reallocated or migrated to make room for the new server-side application. Other testing environments can even be briefly powered down so that this testing (which is our top priority) can occur.

Anyone can deploy the application, and what used to take hours and teams of engineers can now be done by one person. However, we are still aiming to deploy this on all mobile platforms. In order to put all of our developers on this challenge, we first need to ensure they have the right tools and configurations. In the mobile world, that means more than just installing a few software packages and adjusting some settings. In some cases, that could mean you need new desktops, with entirely different operating systems.

Not every mobile vendor offers an SDK on all operating systems, and in fact, there isn’t one operating system that’s common to the top selling mobile phones today.

For example, you may only develop iOS applications using Xcode, which runs only on Mac OS X. Both Windows and Android rely on an SDK compatible with Windows, and each has dependencies on external libraries to function (especially Android). Many developers favor MacBooks running VMware Fusion to accommodate all of these different environments, but what if you decide that, to re-write the application quickly, you need some temporary contractors? Those contractors are going to need those development environments, with the right SDKs and testing tools.

This is also where vCloud Suite shines. It provides Desktop as a Service to those new contractors. The same platform that allowed us to provision the entire server-side application allows us to provision any client-side resources they might need.

By provisioning all of the infrastructure at once, we are now ready to redevelop our mobile app. We can spend developer time on development and testing, making the app the best it can be, instead of wasting resources on deploying work environments.


Now, let’s think back to that challenge I laid out earlier. Once you start deploying your applications using VMware’s vCloud Suite, how long will it take to improve your mobile applications across all platforms? I bet we’re not measuring that time in months any longer. Instead, mobile applications are improved in just a week or two.

Your call to action is clear:

  • Implement vCloud Suite on top of your existing infrastructure and public cloud deployments.
  • Streamline your application development process by using vCloud Suite to deploy both server- and client-side applications, dev and test environments, dev and test tools, and sample databases – for all platforms – at the click of a button.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

A Critical Balance of Roles Must Be in Place in the Cloud Center of Excellence

By: Pierre Moncassin

There is a rather subtle balance required to make a cloud organization effective – and, as I was reminded recently, it is easy to overlook it.

One key requirement for running a private cloud infrastructure is to establish a dedicated team, i.e., a Cloud Center of Excellence. As a whole, this group will act as an internal service provider in charge of all technical and functional aspects of the cloud, while also dealing with the user-facing aspects of the service.

But there is an important dividing line within that group: the Center of Excellence itself is divided between Tenant Operations and Infrastructure Operations. Striking a balance between these teams is critical to a well-functioning cloud. If that balance is missing, you may encounter significant inefficiencies. Let me show you how that happened to two IT organizations I talked with recently.

First, where is that balance exactly?

If we look back at that Cloud Operating Model (described in detail in ‘Organizing for the Cloud‘), we have not one, but two teams working together: Tenant Operations and Infrastructure Operations.

In a nutshell, Tenant Operations own the ‘customer-facing’ role. They want to work closely with end-users. They want to innovate and add value. They are the ‘public face’ of the Cloud Center of Excellence.

On the other side, Infrastructure Ops only have to deal with one customer – Tenant Operations. In addition, they also have to handle hardware, vendor relationships, and generally the ‘nuts and bolts’ of the private cloud infrastructure.

Cloud Operating Model
But why do we need a balance between two separate teams? Let’s see what can happen when that balance is missing, with two real-life IT organizations I met a little while back. For simplicity I will call them A and B – both large corporate entities.

When I met Organization A, it had only a ‘shell’ Tenant Operations function. In other words, their cloud team was almost exclusively focused on infrastructure. The result? Unsurprisingly, they scored fairly high on standardization and technical service levels. End users either accepted a highly standardized offering or had to jump through hoops to negotiate exceptions – neither option was quite satisfactory. Overall, Organization A struggled to add recognizable value for their end users: “we are seen as a commodity.” They lacked a well-developed Tenant Operations function.

Organization B had the opposite challenge. They belonged to a global technology group that majors on large-scale software development. Application development leaders could practically set the rules about what infrastructure could be provisioned. Because each consumer group wielded so much influence, there was practically a separate Tenant Operations team for each software unit.

In contrast, there was no distinguishable Infrastructure Ops function. Each Tenant Operations team could dictate separate requirements. The overall infrastructure architecture lacked standardization – which risked defeating the purpose of a cloud approach in the first place. With the balance tilted towards Tenant Operations, Organization B probably scored highest on customer satisfaction – but only as long as customers did not have to bear the full cost of non-standard infrastructure.

******

In sum, having two functionally distinct teams (Tenant and Infrastructure) is not just a convenient arrangement, but a necessity for operating a private cloud effectively. There should be ongoing discussion and even negotiation between the two teams and their leaders.

In order to foster this dual structure, I would recommend:

  1. Define a charter for both teams that clearly outlines their relative roles and ‘rules of engagement.’
  2. Make clear that each team’s overall objectives are aligned, although their roles are different. That could be reflected through management objectives for the leaders of each team. However, this also requires some governance to be in place to give them the means to resolve their disagreements.
  3. To help customers realize the benefits of standardization, consider introducing service costing (if not already in place) – so that the consumer may realize the cost of customization.

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Assembling Your Cloud’s Dream Team

By: Pierre Moncassin

Putting together a Cloud Center of Excellence (COE) is not about recruiting ‘super-heroes’ – it is a matter of balancing skills and exploiting learning opportunities.

On several occasions, I’ve heard customers who are embarking on the journey to the cloud ask: “How exactly do you go about putting together a ‘dream team’ capable of launching and delivering cloud services in my organization?” VMware Cloud Operations’ advice is fairly straightforward: put together a core team, known as the Cloud Center of Excellence, as early as possible. These team members will need a broad range of skills across cloud management tools, virtualization, networking, and storage, as well as solid process and organizational knowledge. Not to mention sound business acumen, as they will be expected to work closely with the business lines (far more so than traditional IT silos).

This is why, at first sight, the list of skills can seem daunting. It need not be. The good news is that there is no need to try to recruit ‘super-heroes’ with impossibly long resumes. The secret is to balance skills and to take advantage of several important opportunities to build them.

***

First let’s have a closer look at the skills profiles for the Cloud COE as described in our whitepaper ‘Organizing for the Cloud.’ I won’t go into the specifics of each role, but as a starter, here are some core technical skills required (list not exhaustive):

  • Virtualization technologies: vSphere
  • Provisioning: a combination of vCAC and VCD
  • Workflow automation: VCO
  • Configuration and compliance: VCM
  • Monitoring/event management: VC OPS
  • Applications, storage, and virtual networks

But the team also needs members with a broad understanding of processes and systems engineering principles, customer-facing service development skills, and a leader with sound knowledge of financial and business principles.

Few organizations will have individuals with all these skills ready from day one – but fortunately they do not need to. A cloud COE is a structure that will grow over time, and the same applies to the skills base. For example, vCO scripting skills might be required at some stage in order to develop advanced automation scripts – but that level of automation might not be needed until the second or third phase of the cloud implementation, after the workflows are established. Some planning is needed, however, to have the skills available when they are required.

Make the most out of on-site experts:

Organizations usually start their cloud journey with a project team as a transitional structure. They generally have consultants from VMware or a consultancy partner on-site working alongside them. This offers an excellent opportunity both to accelerate the cloud project and to allow internal hires to absorb critical skills from those experts. However – and this is an important caveat – the knowledge transfer needs to be intentional. Organizations can’t expect a transfer of knowledge and skills to happen entirely unprompted. Internal teams may not always have the availability or training to absorb cloud-related skills ‘spontaneously’ during the project. Ad hoc team members often have emergencies from their ‘day job’ (i.e., their business-as-usual responsibilities) that interrupt their work with the on-site experts. So I advise planning knowledge exchanges early in the project. That will ensure that external vendors and consultants train the internal staff, and in turn, project team members can then transfer their knowledge to the permanent cloud team.

Get formal training:

Along with informal on-site knowledge transfer, it can be a good idea to plan formal classroom-based VMware training and certifications. Compared to a project-based knowledge exchange, formal training generally provides a deeper understanding of the fundamentals, and is also valuable to employees from a personal development point of view. Team members may have additional motivation to attend formal courses that are recognized in the industry, especially if it leads to a recognized qualification such as VMware Certified Professional (VCP).

Build skills during the pilot project:

Many cloud projects begin with a pilot phase, in which a pilot (i.e., a prototype installation) is deployed with cut-down functionality. This is a great opportunity to build skills in a ‘safe’ environment. Core team members get the chance to familiarize themselves with both the new technology and the stakeholders. For example, a Service Catalog becomes far more real once potential users and administrators can see and touch the provisioning functions with a tool like vCloud Automation Center. For technical specialists, the pilot can be a chance to learn new technologies and overcome any fear of change. Building a prototype early in the cloud project can also give teams the opportunity to play around with tools and explore their capabilities.

A summary of how your IT organization can structure its Cloud Center of Excellence and prepare it for success:

  1. Plan to build up skills over time. Not all technical skills are required in depth from day one. Rather, look at a blend of technical skills that will grow and evolve as the cloud organization matures. The cloud team is a ‘learning organization.’
  2. Plan ahead. Schedule formal and informal knowledge transfers – including formal training – between internal staff and external vendors and consultants.
  3. Make the most out of a pilot project. Create a safe learning environment where team members and stakeholders can acquire skills at their own pace.

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

From “Routine” Patch Management to Compliance-as-a-Service: How Cloud Transforms a Costly Maintenance Function into an Innovative Value-Adding Service

By: Pierre Moncassin

You might not think of Patch Management as the coolest part of running an IT operation (cloud or non-cloud). It is a challenging, often time-consuming part of keeping infrastructure secure. And in many industries, it is a critical dependency for meeting regulatory requirements. As a result, IT managers can be forgiven for thinking of it as hard work that just needs to get done.

But recent discussions with global vCloud customers have given me a new perspective on the Patch Management function. These customers’ service owners have quickly grasped some new possibilities that come with an automated cloud infrastructure, one of which involves offering patch management services to their end users. While this is not a new concept for managed service providers, it’s a step change for an internal IT department – and one that turns Patch Management into a pretty exciting role.

Let us look at the key elements of that change:

To start with, we now see the patching toolset as a key component of end-user service. There is no point in offering patching services in a cloud environment without reliable automation to deliver them – the core toolset for patch reporting is of course VMware’s vCenter Configuration Manager (VCM), complemented by vCenter Orchestrator for automating specific remediation workflows.

But beyond automated patch-level checking and deployment, VCM also opens the possibility of adapting compliance reports for each user. Without going into too much detail here, it’s relatively straightforward to configure VCM in a ‘service-aware’ structure. VCM allows administrators to define virtual machine groups that match, for example, the virtual servers assigned to a specific division. Filters can be further set to extract only the patch information relating to that end user. Then your VCM reports can be exported into customer-facing reporting tools to provide customized compliance reports.
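The shape of the idea, independent of any particular tool: group machines by the tenant that owns them, filter the global patch data down to that group, and emit a per-tenant view. Here is a minimal Python sketch under those assumptions (the record format and field names are invented, not the VCM API):

```python
# Illustrative only: group VMs by owning tenant, then report that tenant's
# patch compliance as a customer-facing summary.
patch_records = [
    {"vm": "web-01", "tenant": "finance",   "patch": "KB501", "installed": True},
    {"vm": "web-02", "tenant": "finance",   "patch": "KB501", "installed": False},
    {"vm": "app-01", "tenant": "marketing", "patch": "KB501", "installed": True},
    {"vm": "app-02", "tenant": "marketing", "patch": "KB502", "installed": True},
]

def compliance_report(tenant):
    """Filter the global patch data down to one tenant's view."""
    rows = [r for r in patch_records if r["tenant"] == tenant]
    compliant = sum(r["installed"] for r in rows)
    return {
        "tenant": tenant,
        "vms_checked": len(rows),
        "compliance_pct": 100.0 * compliant / len(rows) if rows else 0.0,
        "missing": [r["vm"] for r in rows if not r["installed"]],
    }

print(compliance_report("finance"))
# {'tenant': 'finance', 'vms_checked': 2, 'compliance_pct': 50.0, 'missing': ['web-02']}
```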

Make that switch to a service mindset. Of course, bringing the idea of ‘service’ into the patching activity means a mindset shift. It implies service levels and well-defined expectations on both sides. And it invites a financial perspective, too: the cost of delivering the services can be evaluated and potentially communicated back to the consumer. As the cloud organization matures, it may consider charging for those specific services.

Moving to a service approach for patching, then, can be a stepping-stone towards delivering further value-added services to the end user: not just routine patching, but other compliance services that can become more and more visible. Again, patching has moved well beyond its traditional role.

Next, integrate patching with self-service capabilities. As with any on-demand service, patching-as-a-service will need to be published in the Service Catalog. In all likelihood, patch management would be offered as an option complementing another service (e.g., server or application provisioning). There are many ways to publish such a service – on the service portal, for example (if there is such a dedicated portal), or directly within vCloud Automation Center (vCAC). In vCAC, a patch management service could be made available either at provisioning time or, potentially, at runtime when a machine is already running (vCAC, for example, can issue a ticket to the service desk to make the request).

Beyond the Service Catalog, there is also interesting integration potential if patching requirements are ‘pre-built’ into the vCAC blueprint. In a nutshell, vCAC can be configured to select the patching option that will be applied by vCenter Configuration Manager at runtime. In my view, that type of integration has considerable potential – potential I’ll explore in a separate blog in the near future.
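A small sketch of what such a pre-built integration could look like, purely illustrative (the blueprint fields and the provision function are hypothetical, not actual vCAC or VCM configuration): the patch policy is declared alongside the machine definition, so the patching option travels with the blueprint and is registered as the machine is provisioned.

```python
# Hypothetical sketch: a patch policy rides along inside the blueprint,
# so every machine provisioned from it is born with a compliance baseline.
blueprint = {
    "name": "finance-web-server",
    "image": "web-server-template",
    "patch_policy": {
        "baseline": "critical-and-security",  # which patch set to enforce
        "schedule": "weekly",                 # how often to re-check
        "auto_remediate": True,               # fix drift without a ticket
    },
}

def provision(bp):
    print(f"Provisioning {bp['name']} from {bp['image']}")
    policy = bp.get("patch_policy")
    if policy:
        # In a real integration, this is where the configuration-management
        # tool would be told which baseline to enforce on the new machine.
        print(f"  registered baseline '{policy['baseline']}' "
              f"({policy['schedule']}, auto_remediate={policy['auto_remediate']})")

provision(blueprint)
```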

Lastly, communicate. It’s an obvious part of a service mindset, but also one that’s easy to overlook: if you are adopting this new way of looking at patch management, you need to ensure that two-way communication takes place with your end users, whether to define the service, to publish it, or during its delivery. This is an extended, if not entirely new, function for the service owners.

In sum:

Patch Management – often associated with ‘hard work’ done in the background – is transformed with Cloud Operations:

  • ‘Hard work’ becomes ‘service design work’ – a common theme across VMware Cloud Operations
  • Team focus shifts from ‘keeping the servers running’ to a more creative activity closely engaged with consumers – offering a new, value-adding service
  • Patching services set the foundation for more comprehensive services under the umbrella of Compliance-as-a-service
  • Technical integrations between self-service provisioning and patch management can be leveraged to open new avenues for self-service automation

Follow @VMwareCloudOps and @Moncassin on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.