
Tag Archives: IT as a service

6 Processes You Should Automate to Provide IT-as-a-Service

By Kai Holthaus

IT-as-a-Service (ITaaS) is one of the current paradigm shifts in managing IT organizations and service delivery. It represents an “always-on” approach to services, where IT services are available to customers and users almost instantly, allowing unprecedented flexibility on the business side with regards to using IT services to enable business processes.

This brave new world requires a higher degree of automation and orchestration than is common in today’s IT organizations. This blog post describes some of the new areas of automation IT managers need to think about.

1&2) Event Management and Incident Management

This is the area where automation and orchestration got their start – automated tools and workflow to monitor whether servers, networks, storage—even applications—are still available and performing the way they should be. An analysis should be performed to study whether events, when detected, could be handled in an automated fashion, ideally before the condition causes an actual incident.

If an incident already happened, incident models can be defined and automated, implementing self-healing techniques to resolve the incident. In this case, an incident record must be created and updated as part of executing the incident model. Also, it may be advisable to review the number of incident models executed within a given time period, to determine if a problem investigation should be started.

It is important to note that when a workflow makes these kinds of changes in an automatic fashion, at the very least the configuration management system must be updated per the organization’s policies.
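To make the pattern concrete, here is a minimal sketch in Python of how an event handler might invoke a pre-defined incident model, record the incident, update the configuration management system, and flag recurring conditions for problem investigation. Every class, function, and threshold here is a hypothetical placeholder rather than any specific monitoring or ITSM product's API.

```python
"""Minimal sketch of automated event/incident handling (illustrative only)."""
from collections import Counter
from dataclasses import dataclass


@dataclass
class Event:
    source_ci: str   # configuration item that raised the event
    condition: str   # e.g. "disk_nearly_full", "service_down"


# Incident models: named, pre-approved remediation routines (self-healing).
def expand_disk(ci: str) -> bool:
    print(f"expanding disk on {ci}")
    return True


def restart_service(ci: str) -> bool:
    print(f"restarting service on {ci}")
    return True


INCIDENT_MODELS = {
    "disk_nearly_full": expand_disk,
    "service_down": restart_service,
}

model_executions = Counter()   # how often each model has run
PROBLEM_THRESHOLD = 3          # repeated executions trigger a problem investigation


def handle_event(event: Event) -> None:
    model = INCIDENT_MODELS.get(event.condition)
    if model is None:
        print(f"no model for {event.condition}: route to a human queue")
        return
    # Record the incident before acting, per policy.
    print(f"incident opened: {event.condition} on {event.source_ci}")
    resolved = model(event.source_ci)
    print(f"incident {'resolved' if resolved else 'escalated'} automatically")
    # Keep the configuration management system in step with the change.
    print(f"CMDB updated for {event.source_ci}")
    model_executions[event.condition] += 1
    if model_executions[event.condition] >= PROBLEM_THRESHOLD:
        print(f"raise problem investigation: {event.condition} is recurring")


if __name__ == "__main__":
    for _ in range(3):
        handle_event(Event("db-server-01", "disk_nearly_full"))
```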

3) Request Fulfillment

Automation and orchestration tools are removing the manual element from request fulfillment. Examples include:

  • Requests for new virtual machines, databases, additional storage space or other infrastructure
  • Requests for end-user devices and accessories
  • Requests for end-user software
  • Requests for access to a virtual desktop image (VDI) or delivery of an application to a VDI

Fulfillment workflows can be automated to minimize human interaction. Such human interaction can often be reduced to the approval step, as required.

Again, it is important that the configuration management system gets updated per the organization’s policies as part of these workflows.
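As a rough illustration of the request-fulfillment pattern described above, the sketch below routes catalog requests through an optional approval gate, a provisioning stub, and a CMDB update. The catalog items, approval rule, and provisioning call are illustrative assumptions, not a particular tool's interface.

```python
"""Sketch of an automated request-fulfillment workflow (illustrative only)."""
from dataclasses import dataclass

# Catalog item -> whether an approval step is required before fulfillment.
CATALOG = {
    "virtual_machine": True,
    "additional_storage": False,
    "end_user_software": False,
    "vdi_access": True,
}


@dataclass
class ServiceRequest:
    requester: str
    item: str
    approved: bool = False


def provision(item: str, requester: str) -> str:
    # Placeholder for calls into the provisioning/orchestration layer.
    return f"{item} provisioned for {requester}"


def fulfill(request: ServiceRequest) -> None:
    needs_approval = CATALOG.get(request.item)
    if needs_approval is None:
        print(f"{request.item}: not in catalog, route to manual handling")
        return
    if needs_approval and not request.approved:
        print(f"{request.item}: waiting for approval (the only human step)")
        return
    print(provision(request.item, request.requester))
    # The configuration management system is updated as part of the workflow.
    print(f"CMDB record created/updated for {request.item} ({request.requester})")


if __name__ == "__main__":
    fulfill(ServiceRequest("asmith", "additional_storage"))
    fulfill(ServiceRequest("asmith", "virtual_machine"))                  # held for approval
    fulfill(ServiceRequest("asmith", "virtual_machine", approved=True))   # fulfilled
```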

4&5) Change and Configuration Management

Technology today already allows the automation of IT processes that usually require change requests, approvals, implementation plans, and change reviews. For instance, virtual machine hypervisors and management software such as vSphere can automatically move virtual machines from one physical host to another in a way that is completely transparent to the user.

Besides automating change, the configuration management system should be automatically updated so that support personnel always have accurate information available when incidents need to be resolved.
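A minimal sketch of that idea follows: an automated VM migration is recorded as a pre-approved standard change and the configuration management system is updated in the same step. The event hook and record formats are assumptions for illustration only, not a vendor API.

```python
"""Sketch: recording an automated VM migration as a standard change
and keeping the CMDB current (all structures are illustrative)."""
from datetime import datetime, timezone

cmdb = {  # simplified configuration management system: VM -> host
    "vm-app-01": "esx-host-a",
    "vm-db-02": "esx-host-a",
}
change_log = []


def on_vm_migrated(vm: str, src_host: str, dst_host: str) -> None:
    """Called when the platform moves a VM automatically (e.g., for load balancing)."""
    # 1. Record a pre-approved standard change, since no human approval is needed.
    change_log.append({
        "type": "standard_change",
        "summary": f"{vm} migrated {src_host} -> {dst_host}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    # 2. Update the CMDB so support staff always see the VM's real location.
    cmdb[vm] = dst_host


if __name__ == "__main__":
    on_vm_migrated("vm-app-01", "esx-host-a", "esx-host-b")
    print(cmdb["vm-app-01"])            # esx-host-b
    print(change_log[-1]["summary"])    # audit trail of the automated change
```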

6) Continuous Deployment

The examples provided so far for automating activities in an IT organization were operations-focused. However, automation should also be considered in other areas, such as DevOps.

Automation and orchestration tools can define, manage, and automate existing release processes, configuring workflow tasks and governance policies used to build, test, and deploy software at each stage of the delivery processes. The automation can also model existing gating rules between the different stages of the process. In addition, automation ensures the correct version of the software is being deployed in the correct environments. This includes integrating with existing code management systems, such as version control, testing, or bug tracking solutions, as well as change management and configuration management procedures.
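The gating idea can be sketched in a few lines: each pipeline stage advances only when its gates pass, and the same build version is promoted from one environment to the next. Stage names, gates, and the deploy step below are illustrative assumptions, not a specific CI/CD tool's configuration.

```python
"""Sketch of a release pipeline with gating rules between stages (illustrative)."""


def unit_tests_pass(build: dict) -> bool:
    return build.get("unit_tests") == "passed"


def integration_tests_pass(build: dict) -> bool:
    return build.get("integration_tests") == "passed"


def change_approved(build: dict) -> bool:
    return build.get("change_record") == "approved"


# Each stage advances only if every gate for that stage returns True.
PIPELINE = [
    ("dev",     [unit_tests_pass]),
    ("staging", [integration_tests_pass]),
    ("prod",    [integration_tests_pass, change_approved]),
]


def deploy(build: dict) -> None:
    for stage, gates in PIPELINE:
        failed = [g.__name__ for g in gates if not g(build)]
        if failed:
            print(f"stopped before {stage}: gates failed {failed}")
            return
        # Ensure the intended version lands in the intended environment.
        print(f"deployed version {build['version']} to {stage}")


if __name__ == "__main__":
    deploy({"version": "2.4.1", "unit_tests": "passed",
            "integration_tests": "passed", "change_record": "approved"})
```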

In an ITaaS model, automation is no longer optional. To fulfill the promise of an always-on IT service provider—and remain the preferred service provider of your customers—consider automating these and other processes.


Kai Holthaus is a delivery manager with VMware Operations Transformation Services and is based in Oregon.

Why Service Owners Are Integral to IT-as-a-Service Delivery

By Kai Holthaus

The Service Owner Role

The service owner role is central for an IT organization that is operating IT as a service (ITaaS). Why? Because the service owner is accountable for delivering services to customers and users, and accountabilities include:

  • To act as prime customer contact for all service-related enquiries and issues
  • To ensure that the ongoing service delivery and support meet agreed customer requirements
  • To identify opportunities for service improvements, discuss with the customer, and raise the request for change (RFC) for assessment if appropriate
  • To liaise with the appropriate process owners throughout the service management lifecycle
  • To solicit required data, statistics and reports for analysis, and to facilitate effective service monitoring and performance
  • To be accountable to the IT director or service management director for the delivery of the service

Please note that I emphasize  “accountability” instead of “responsibility.” The service owner is accountable, meaning they set the goals and oversee the execution.  The actual execution is performed by individuals or functions that have the “responsibility” for each activity.

Let’s take a closer look…

The service owner is the main escalation point for all service-related compliments, complaints, and other issues. You can think of the service owner as a sports coach, directing how the team should play a particular game, but not really participating by playing in the game. As the coach is accountable to the team’s owner for the team’s success, so is the service owner accountable to the customer(s) and the service management director for ongoing quality of the service.

Responsibilities Throughout the Service Lifecycle

The service owner role has accountabilities in each of the five lifecycle stages, as defined by ITIL:

  • Service strategy
  • Service design
  • Service transition
  • Service operation
  • Continual service improvement

I recommend to my clients that they assign a service owner very early in the lifecycle, so that there is a single point of accountability throughout the service’s creation and life. If we compare this to the product world, a service owner is like a product manager at an automotive company who is accountable as the new car model is designed, developed, and built—and ultimately for the satisfaction of the car’s buyers.

Shifting to an ITaaS Model

While the idea of the service owner role is just as valid in an ITaaS world as it is in a more traditional IT service provider world, there are a few important differences.

Service Owners Must Enable a Faster Time to Market
Moving to an ITaaS model typically requires faster development and release cycles than in a more traditional model. This is usually accomplished by moving to an Agile development model, such as Scrum. Using such models means that the full set of requirements for a service to be released will not be available at the time when development starts. Instead, development begins with the best set of requirements available at the time, relying on future development and release cycles to address missing requirements.

The certainty of receiving a fully defined set of utility and warranty for a service is exchanged for more rapid improvement of the service. Service design and transition activities are executed in a spiral model rather than a waterfall approach.

The service owner in an Agile environment becomes the Scrum product owner, representing the view of the customer in the Scrum model. As the product owner, the service owner is responsible for the pipeline of customer requirements driving the development of the service. Business decisions on whether to advance the service through another round of development and release are based on the information available at the time.

Service Owners Need a Better Grasp on Future Demand
Some services, particularly infrastructure services such as providing CPU power or storage, become utility-type services, comparable to the utility services everybody experiences at home, such as electrical power, natural gas, or water. Instead of provisioning dedicated infrastructure at the time of service development or deployment, service owners must ensure there is enough capacity when needed; storage, for example, should be instantly available when required, similar to water flowing immediately when you turn on the faucet at home. This requires a much better understanding of future demand, patterns of business activity, and user profiles than is typically the case today.
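As a purely illustrative sketch of that demand awareness, the snippet below compares a naive forecast of storage consumption against provisioned capacity; the figures, the headroom threshold, and the linear extrapolation are invented for the example.

```python
"""Illustrative check of storage headroom against forecast demand
(all figures and the naive linear forecast are made up)."""

provisioned_tb = 500                       # storage currently available
monthly_usage_tb = [310, 325, 345, 360]    # observed pattern of business activity


def forecast_next(usage: list[float]) -> float:
    """Naive linear extrapolation from the recent growth trend."""
    growth = (usage[-1] - usage[0]) / (len(usage) - 1)
    return usage[-1] + growth


expected = forecast_next(monthly_usage_tb)
headroom = provisioned_tb - expected
print(f"forecast demand next month: {expected:.0f} TB, headroom: {headroom:.0f} TB")
if headroom / provisioned_tb < 0.20:
    print("expand capacity now so storage stays instantly available when requested")
```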

Service Owners Will Give up Some Control to Enable Automation
Due to the nature of ITaaS, service owners will be required to give up some control over the configuration of the service. For example, automation tools already move virtual machines from one physical host to another based on current workloads, without any human control. To fully deliver on the ITaaS promise, this type of automation must increase. Increased automation will require either defining more changes as standard changes, which can be implemented without approval (and in this case, automatically, after the tool has recorded the change), or giving up change control completely and letting the tools handle changes. Such automation tools can also automatically update the configuration management system, so that valid information will always be available.

How Will Services Operate in the Cloud?
While today’s services are largely delivered from in-house data centers, the ITaaS model makes full use of hybrid and public clouds.  Service owners must understand the ramifications of moving parts of the service infrastructure (or even the entire infrastructure) into the public cloud. This requires a better understanding of required service levels, and what will happen if cloud providers experience incidents or even disasters.

To conclude, the specific accountabilities associated with the service owner role don’t change dramatically when an IT organization moves to an ITaaS model. Primarily, a service owner will need to shift from defining architectures and infrastructure as part of the service design to defining the service in terms of requirements, and necessary service levels for supporting services. Also, traditional controls over the infrastructure may no longer apply when automation is used to fully deliver on the promise of ITaaS.

While the shift in the service owner’s responsibilities isn’t dramatic when transforming to an ITaaS model, the importance of the role grows significantly.  If you haven’t already explored implementing or expanding this role as you transform to deliver ITaaS, be sure to include this as part of your roadmap for moving forward.

=====
Kai Holthaus is a senior transformation consultant with VMware Accelerate Transformation Services and is based in Oregon.

It’s Time for IT to Come Out of the Shadows

Chances are shadow IT is happening right now at your company. No longer content waiting for their companies’ IT help, today’s employees are taking matters into their own hands by finding and using their own technology to solve work challenges as they arise—a trend that likely isn’t fading into the shadows anytime soon.


What I Learned from VMware’s Internal Private Cloud Deployment

By Kurt Milne

For seven years as an industry analyst, I studied top-performing IT organizations to figure out what made them best-in-class. And after studying 30 private cloud early adopters in 2011, I co-authored a book about how to deploy private cloud.

But after joining VMware last year, I’ve had the opportunity to spend six months working closely with VMware’s IT team to get an in-depth understanding of our internal private vCloud Suite deployment.

In this multi-part blog series, I’ll write about what I’ve learned.

Lesson learned – The most important thing I learned, and what really reframed much of my thinking about IT spending, is that VMware IT invested in our private cloud strategy to increase business agility.  And that effort drastically lowered our costs.

Breaking it down:

1. We made a strategic decision to try something different.

Over the years, I’ve studied companies that use every form of squeezing IT budgets there is. But what happens with a “cut till it hurts” or a “cut until something important breaks” approach is that the primary objective of lowering IT budgets is often achieved. But it also leaves IT hamstrung and unable to meet the needs of the business. An unbalanced focus on cost cutting reduces IT’s ability to deliver. That in turn lowers business perception of IT value, which further focuses efforts on cost cutting. Define “death spiral.”

VMware didn’t follow that path when we decided to invest in private cloud. We justified our “Project OneCloud” based on the belief that the traditional way of growing IT capabilities wouldn’t scale to meet our growth objectives. We have doubled revenue and headcount many times over the last 10 years. The IT executive team had the insight to realize that a linear approach of increasing capacity by buying more boxes and adding more headcount would not support business needs as we double in size yet again. We are no longer a startup. We have grown up as a company. We had to try a different approach.

Apparently VMware IT is not alone in this thinking. The IT Under Pressure: McKinsey Global Survey results show a marked shift in 2013, with IT organizations using IT to improve business effectiveness and efficiency, not just manage costs.

2. Effective service design drove adoption.

What really enabled our private cloud success was broad adoption. The commitment and investment that private cloud requires can only be justified by broad adoption. The promise of delivering IT services the same old way at lower cost didn’t drive adoption. What drove adoption was a new operating model focused on delivering and consuming IT as a service: abstracting infrastructure into basic compute, network, and storage delivered as a service, then designing IT services for specific groups of consumers so they could get what they needed, when they needed it. That included application stacks, dev/test environments, and any other business function that depends on IT infrastructure (almost all do in the mobile-cloud era). We strove to eliminate the need to call IT, and also eliminated tickets between functional groups within IT.

Ten different business functions — from sales, marketing, and product delivery, to support and training — have moved their workloads to the cloud. Many have their own service catalog with a focused set of services as a front end to the private cloud. Many have their own operations teams that monitor and support the automation and processes built on top of infrastructure services.

Carefully designing IT services, then giving people access to get what they need when they need it without having to call IT, is key to success.

3. Broad adoption drove down costs via scale economies.

We started with one business group deploying sales demos and put their work in a service catalog front end on the private cloud. Then we expanded by onboarding other functional groups to the cloud. One trick: develop a relationship with procurement. Any time someone orders hardware within the company, get in front of the order and see if they will deploy on the private cloud instead.

Make IT customers’ jobs easier. Accelerate their time to desired results. Build trust by setting realistic expectations, then delivering per expectation.

Three primary milestones:

  1. Once we onboarded a few key tenants and got to ~10,000 VMs in our cloud, we lowered cost per general purpose VM by roughly 50 percent. With a new infrastructure as a service model that allowed consumers to “outsource infrastructure” to our central cloud team — and at a much lower cost per VM — word got out, and multiple other business groups wanted to move to the cloud.
  2. Once we onboarded another handful of tenants and got to ~50,000 VMs in our private cloud, we lowered cost per general purpose VM by another 50 percent. We were surprised by how fast demand grew and how fast we scaled from 10,000 to 50,000 VMs.
  3. We are “all in” and now on track to meet our goal of having around 95 percent of all our corporate workloads in private or hybrid cloud (vCloud Hybrid Service) – for a total of around 80,000 to 90,000 VMs. We expect cost per VM to drop another 50 percent.

So we set out to increase agility and better meet the needs of the business, delivered services that made IT consumers’ jobs easier, and as a result we dropped our cost per VM by ~85 percent.
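The arithmetic behind that figure is simply the compounding of the three milestones; the sketch below applies three successive 50 percent reductions to an arbitrary starting cost index.

```python
# Three successive ~50 percent reductions compound to roughly an 85-90 percent
# total drop in cost per VM. The starting index of 100 is arbitrary.
cost_per_vm = 100.0
for milestone in ("~10,000 VMs", "~50,000 VMs", "80,000-90,000 VMs (expected)"):
    cost_per_vm *= 0.5
    print(f"{milestone}: cost per VM index = {cost_per_vm:.1f}")

total_reduction = 1 - cost_per_vm / 100.0
print(f"cumulative reduction: {total_reduction:.0%}")  # in line with the ~85 percent cited
```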

Key takeaways:

  • Our private cloud goal was to reshape IT to better meet revenue growth objectives.
  • We transformed IT to deliver IT services in a way that abstracted the infrastructure layer and allowed various business teams to “outsource infrastructure.”
  • Ten different internal business groups have moved workloads to private cloud.
  • Less focus on infrastructure and easy access to personalized services made it easier for IT service consumers to do their jobs and focus more on their customers.
  • A new operating model for IT and effective service design drove adoption.
  • Broad adoption drove down costs. By ~85 percent.

Below are links to two short videos of VMware IT executives sharing their lessons learned related to cost and agility. In my next post, I’ll talk about what I learned about a new operating model for IT.

—-
Follow @VMwareCloudOps and @kurtmilne on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.


A VMware Perspective on IT as a Service, Part 3: Agility, How to Measure it, and Keep Improving it Over Time

By: Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

In this series of posts, I’m offering a VMware Corporate IT perspective on the journey to IT as a Service, looking at how we made the change ourselves, sharing some of the many benefits that ITaaS is bringing us, and offering some insights on how – if you’re considering taking the plunge – you might successfully make the transition yourself.

I started out by offering my take on the journey to IT as a Service. Then I shared the story of how our applications operations group transformed its operations by shifting to the Software Defined Data Center. This time, I’ll explain how we think of agility at VMware.

If I had to pick just two reasons why we decided to transition to IT as a Service (ITaaS) at VMware, it was that:

a) It would let IT deliver at the speed of business,

and

b) It would make IT a game changer – by providing business transformation through IT transformation, we’d be helping the business scale and grow.

But if our vision embraced getting IT to run at the speed of business, delighting our users every day, and turning IT into an innovation center and not a roadblock, then agility, efficiency, and an ITaaS mindset were the engines that would get us there.

Importantly, though, we put agility first. Many businesses think of IT as a cost center and put efficiency and cost first. While the “cut ‘till it hurts” approach can drive down costs and make efficiency metrics look better, it can also leave IT hamstrung and unable to innovate and support changing business needs. Our big insight was that investments that significantly improved agility ALSO resulted in higher efficiency and service quality gains. By leading with agility, we could achieve both.

How We Think of Agility

When we think about agility, it’s on three levels. We’re aiming for:

  1. Zero demand. Every time anyone has to make an IT request, that’s a potential point of friction. They’ve had to stop what they’re doing and ask for something to be done, and then wait even longer for a response before they can move on. So our goal has been to completely remove their need to ask in the first place. You’d be surprised how much low-hanging fruit there is here. There are lots of diamonds in the back-end data, as long as IT studies it with a mindset of elimination rather than just resolution speed and SLA improvements.
  2. More self-service. Self-service is another key component to agility and ITaaS. The alternative is that our customers have to go to a help desk, the last place they should go, and wait for someone to get back to them through some kind of queued ticketing system. Taking IT out of the equation and giving users the ability to self-serve significantly increases speed-to-solution and user satisfaction.
  3. The ability to serve complex requests. If you have made progress on the first two, the calls and tickets you do get will by default feature more complex or lower-volume requests. So what’s the best way to respond to them? Here’s how we do it: We’ll see, for example, that on average it’s taking n days to deliver or solve service request x. Then we ask (if we cannot eliminate or provide self-service): how can we slice up these requests, automate workflow and individual tasks, and reduce handoffs as much as possible? It’s not a one-time deal; it’s an incremental, ongoing quest to reduce the time-to-deliver on the demand that comes to IT. Hiring or moving people to focus on this, rather than volume-based hiring to solve requests, is far more likely to have a transformative impact on IT and its agility in serving customer needs.

Measuring Agility

The shorter we can make the delivery time for any service, even the most complex, the better off our customers are in terms of being able to get what they need to do their job. So how do we measure our responsiveness?

Our goals here are very aggressive. We’re looking to reduce the time it takes to deliver a particular service from months, days, and weeks, to hours, minutes, and seconds, and in some cases eliminate the need altogether.

Specific examples of metrics I like to use to gauge my organization’s agility include:

  • The number of service requests that have been completely eliminated, either by resolving the underlying need at its root or by fully automating solutions
  • The number of service requests that are self-served
  • If no automation or elimination is possible, the cost and speed to deliver the solution, measured from the time the user creates the request
  • The services offered through the ‘outside-in’ IT Service Catalogue

I am not a fan of time-based SLAs because I think they promote the wrong behavior – customer satisfaction, elimination, automation, and self-service are far more meaningful than time-based SLAs.
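Purely as an illustration, the measures above could be pulled from request data along these lines; the record fields and figures are invented for the example.

```python
"""Illustrative calculation of the agility measures discussed above
(the request records and their fields are invented)."""
from statistics import mean

requests = [
    {"type": "password_reset", "outcome": "eliminated",  "hours_to_deliver": 0.0},
    {"type": "new_vm",         "outcome": "self_served", "hours_to_deliver": 0.5},
    {"type": "new_vm",         "outcome": "self_served", "hours_to_deliver": 0.4},
    {"type": "custom_report",  "outcome": "manual",      "hours_to_deliver": 72.0},
]

eliminated = sum(r["outcome"] == "eliminated" for r in requests)
self_served = sum(r["outcome"] == "self_served" for r in requests)
manual = [r["hours_to_deliver"] for r in requests if r["outcome"] == "manual"]

print(f"requests eliminated: {eliminated}")
print(f"requests self-served: {self_served}")
if manual:
    # Speed-to-deliver, measured from the time the user created the request.
    print(f"mean hours to deliver remaining manual requests: {mean(manual):.1f}")
```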

More fundamentally, IT should always be trying for an outside-in view and not an inside-out view. All too often, Service Catalogues are based on IT defining its services and how a customer can “order them” vs. offering the services the “customer needs and wants.”

Predicting Demand in a Ticketless World

Done right, agility means offering services on-demand before the customer even makes a request. It’s a nice idea, but how does IT predict what issues might come up so that it can preemptively have solutions ready before they’re needed?

At VMware, one approach has been to significantly increase focus and investment on having the right forensics in place. This lets us go from reactive, to proactive, to predictive, and to be very aware of everything that is going on in the environment.

In addition, we monitor internal social communications for sentiment and issues that might be brewing. Today we see a lot of activity on our internal Socialcast collaboration site, which is helping us get ahead of issues as well as have a more intimate relationship with our users.

Then we add transparency. Internally, we have a portal that shares the quality of service delivery at any given point. So if we’re seeing degradation in a network connection, for example, or a quality of service issue with a particular application, anybody in the company can see what we know, what we’re looking at, and what we’re communicating about the issue.

Key Takeaways:

  • Agility, efficiency, and organizational mindset are at the heart of the movement to ITaaS.
  • Agility inevitably drives increased efficiency, but efficiency alone won’t lead to meaningful agility gains.
  • At VMware, we think of agility in customer-centric terms: zero demand, automation, and self-service.
  • We measure agility in terms of speed and cost.
  • Forensics and transparency are key to predicting demand, and identifying and communicating issues.
  • Without true agility, it is very painful and costly for organizations to scale and to remain nimble at the same time.
  • The power of automation plus the power of a self-service mindset lets IT organizations help scale the business in a customer-centric, cost-effective way.

You can find parts 1 and 2 of this series here and here. Next time, I’ll explain what it took to stand up and run our own internal private cloud with ~50k VMs.

Follow @VMwareCloudOps and @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

A VMware Perspective on IT as a Service, Part 2: An In-house Example of IT Transformation

By: Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

In this series of posts, I’m offering a VMware Corporate IT perspective on the journey to IT as a Service (ITaaS), looking at how we adopted the movement ourselves, sharing some of the many benefits that ITaaS is bringing us, and offering some insights on how – if you’re considering taking the plunge – you might successfully make the transition yourself.

Last time, I outlined the context for the movement to IT as a Service – one that suggests we’re now at a point where IT can no longer hide behind the complexity of IT environments, and where IT organizations need to deliver on new consumer expectations of service delivery if they are to have the agility and efficiency to deliver at the speed of business.

Today, I’m going to share the story of one of the functional IT groups at VMware – our Applications Operations group – that has transformed by shifting to a focus on agility and automation, with game changing results. If you’re curious to learn more, check out the full case study, or a short summary video here.

A Problem with Process

Here’s what happened: The Cloud Operations group within VMware corporate IT oversees the support of a portfolio of 200+ business applications. The application operations team (AppOps) provisions and manages very complex SDLC development and test environments for a team of 600+ global developers and quality assurance engineers who work on the VMware program portfolio.

By the middle of 2012, the AppOps team realized that it faced a serious issue with provisioning these environments.

As things stood, their processes were:

  • Slow – Manually provisioning a dev/test SDLC instance for a full enterprise applications ecosystem was taking in the range of 4-6 weeks per instance,
  • Disruptive – Hundreds of developers had to wait for a reliable new instance for extended periods of time, multiple times during the lifecycle,
  • Risky – Cascading delays created risk, keeping other portfolio projects from being able to start and/or complete on time, potentially costing millions of dollars in delays,
  • Inconsistent – Quality and lead times were unpredictable, varying with schedule complexity, different outcomes from manually repeated processes, and the capacity and availability of team members distributed around the globe.

The knock-on impact of a delay was very costly. Every time a new environment experienced delays, developers were idle and millions of dollars were at stake. This made portfolio planning inordinately difficult. We could have shrunk the portfolio and slowed the delivery of business critical programs in response, but that was unacceptable given our overall corporate growth objectives.

Not surprisingly, IT was under considerable pressure to increase its agility, speed, and throughput.

Not the Easy Fix

Clearly, AppOps needed to reduce provisioning times and increase schedule predictability and service quality.

One way to do that would have been to try and improve the efficiency of the large “human middleware” they already had in place, applying lean methodologies and trying to be as “efficient” as possible when executing standard repeatable tasks.

However, a thorough process review made it clear that more than a continuous efficiency program was required. The primary issue was that they were scheduling and managing a large number of people who were performing, for the most part, skilled but repeatable tasks. Even with an improved provisioning process, the human-middleware problem would never fully go away, as speed and predictability could never reach the desired goals.

Instead, the AppOps group chose to completely replace and automate its provisioning process using a VMware on-premises private cloud based on the software-defined data center. This would completely automate SDLC instance provisioning, using blueprints, policies, and the automation and management capabilities of the VMware vCloud® Suite and other adjacent tools.
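To make blueprint-driven provisioning concrete, here is a highly simplified sketch; the blueprint structure, component list, and provisioning function are assumptions for illustration, not vCloud Suite's actual API or VMware's internal blueprints.

```python
"""Simplified sketch of blueprint-driven SDLC environment provisioning
(the blueprint format and provisioning logic are illustrative only)."""

SDLC_BLUEPRINT = {
    "name": "full-enterprise-dev-test",
    "policies": {"lease_days": 30, "owner_chargeback": True},
    "components": [
        {"role": "erp-app",    "vms": 4, "template": "app-server"},
        {"role": "erp-db",     "vms": 2, "template": "db-server"},
        {"role": "middleware", "vms": 3, "template": "mw-server"},
        {"role": "portal",     "vms": 2, "template": "web-server"},
    ],
}


def provision_instance(blueprint: dict, requester: str) -> dict:
    """Stand up every component described in the blueprint (placeholder logic)."""
    deployed = []
    for component in blueprint["components"]:
        for i in range(component["vms"]):
            deployed.append(f"{component['role']}-{i + 1} ({component['template']})")
    return {
        "requester": requester,
        "lease_days": blueprint["policies"]["lease_days"],
        "vms": deployed,
    }


if __name__ == "__main__":
    instance = provision_instance(SDLC_BLUEPRINT, "dev-team-emea")
    print(f"{len(instance['vms'])} VMs provisioned, "
          f"lease {instance['lease_days']} days, charged to {instance['requester']}")
```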

If they were to succeed, two factors would be critical:

  • Ambitious, long-term objectives. To be successful, any solution needed to be game changing – instead of making incremental improvements to the existing process, AppOps was looking to turn a process that traditionally took 4-6 weeks into one taking just a matter of hours. Solving this problem required a radically different approach that was built from the ground up.
  • An available private cloud. VMware had already deployed, at scale, its private cloud (called ‘Project OneCloud’), delivering infrastructure-as-a-service (IaaS) capabilities for internal use. With vCloud Suite’s automation and management capabilities, the private cloud could host all non-production SDLC instances – eliminating the need for lengthy hardware provisioning cycles.

By late 2012, the AppOps team was ready to start building the new, automated and streamlined provisioning platform, setting itself the goal of deploying all Dev/Test SDLC instances within 24 hours of a request.

Doing this meant driving transformation in three areas:

  • Architecture – Shifting from a traditional virtualized data center environment to an SDDC private cloud and deploying cloud management with automation capabilities to provision complex SDLC environments. Each instance contains over 30 applications, including the company’s full ERP, custom applications, portals, middleware, IDM, BI, web servers, app servers, integrations, databases, and more.
  • Operations – Converting manual, time consuming processes to an end-to-end, automated scripted process with blueprint-based provisioning. Key employee transitions would include investments in change-management and supporting employees through training and education, moving them to more value-added and meaningful roles in the new cloud operating model.
  • Financial – Moving from a project-capex based infrastructure funding model to a service-opex consumption and chargeback model. Instead of incurring costs for building and maintaining infrastructure to support the virtual machines, IT could pass the cost of workloads to individual project requestors. In turn, because of the ability to provision quickly and provide transparent opex service costs, there has been a marked increase in de-provisioning of instances, which has in turn increased infrastructure utilization and reduced spend on net-new infrastructure.

The Payoff and Business Benefit

Phase one of the project – deploying basic automated provisioning and management capabilities – has now been completed. 2,800 virtual machines that support dev/test instances have been transitioned to the new OneCloud environment, resulting in game-changing benefits:

  • Reduced provisioning time from 4-6 weeks to 36 hours: on track to achieve goal of <24 hours,
  • Increased productivity of 600 developers by as much as 20 percent,
  • Improved service quality so that AppOps can now consistently say “Yes” to all project requests in the time required,
  • Saved the business $6M per year in infrastructure and operating costs,
  • Moved people to higher-order, more meaningful IT roles, e.g. blueprinting and automation design.

Phase two will focus on further enhancing automation and management capabilities and transitioning more pre-production environments to the private cloud.

Lessons Learned

  • Agility investments are self-sustaining. Investing in increased agility yields significant additional benefits, such as substantially reduced operating and infrastructure costs and increased service quality.
  • vCloud Suite is a full solution. The AppOps team implemented vCloud Suite to automate provisioning and management of SDLC instances. Out-of-box functionality let them automate and manage a wide range of core tasks. The availability of SDKs and APIs let them deliver additional automation and management functionality through adjacent tools.
  • On-demand capabilities change IT service consumption. SDLC instances are no longer viewed with the same risk outlook as before. Where developers and applications owners formerly felt the need to keep an instance open for multiple and/or on-going projects, AppOps can now release those instances back into the provisioning pool in a “disposable infrastructure” service consumption model.
  • APIs replace ticketing and late-night meetings. A service catalog and API calls help IT clarify and simplify communication about the services AppOps delivers and what its customers can expect in return. Efficiency has replaced the time-consuming, difficult, and highly-variable task of scheduling and coordinating work between multiple, globally distributed teams.

Key Takeaway:

The VMware corporate IT organization decided to invest in improving agility, and, as a byproduct, not only increased service speed and quality, but also dramatically lowered IT infrastructure and operating costs.

Next time, I’ll look at agility: how we measure it and how we keep continuously improving. In Part 4, I’ll explain what it took to stand up and run our own internal private cloud, which so far includes ~50k VMs.


Follow @VMwareCloudOps & @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

A VMware Perspective on IT as a Service, Part 1: The Journey

By: Paul Chapman, VMware Vice President Global Infrastructure and Cloud Operations

At VMware, we now live and breathe IT as a Service (ITaaS). It’s a relatively new phenomenon, but many companies are beginning to make the shift.

In this short series of posts, I’m going to offer a VMware Corporate IT perspective on the journey to IT as a Service, looking at how we ourselves made the change, sharing some of the many benefits that ITaaS is bringing us, and offering some insights on how – if you’re considering taking the plunge – you might successfully make the transition yourself.

A Long Time Coming

First up, it’s worth remembering that the concept of IT as a Service is not that new. As a community of IT professionals, we’ve been talking about it for at least a decade. But for a long time, the obvious and daunting challenge of managing very complex IT infrastructures gave us an excuse not to act.

Fast forward to today: We now have Software as a Service, Infrastructure as a Service, Platform as a Service and so on – which creates a whole new consumption model for IT. These innovations make it very hard for IT organizations to keep hiding behind complexity as an excuse not to act.

Meanwhile, though, the pace of technological change has all but outpaced internal IT’s ability to keep up with consumer-like demand, and that’s spurred the rise of successful innovation silos such as the Workdays, the Salesforce.coms, and the Amazons of the world, which have been happy to do the work instead.

You could say this is revenge of the business, with lines of business and developers now happy to bypass the IT infrastructure organization entirely, turning to external providers offering on-demand services and innovative business models.

In the face of this, most IT organizations struggle to keep up with demand for new services. Quite frankly, I think that even if they were funded to fully support the demand, most IT organizations still wouldn’t be able to keep up using traditional service delivery models. I think the inefficiencies and the lack of agility rooted in a human-heavy and “process-bound” way of doing things would just get in the way. Even simply maintaining the status quo these days is enough to overwhelm and exhaust personnel.

Not Whether, But How Fast

That’s certainly how we saw things here at VMware. It wasn’t so much a question of whether to go on the journey to IT as a Service, but a question of how fast we could get there. In 2010, 99% of our servers were virtualized, and 20% of business applications were delivered by SaaS vendors. We saw the power of the shift to more agile service delivery. We established an executive mandate to further transform our IT services as soon as possible, and we also started to take an outside-in look vs. an inside-out look.

Our plan included four basic principles:

  • To get IT up to business speed;
  • To become much more efficient and agile in our operations;
  • To delight our users every day;
  • And to turn IT into an innovation center, not a roadblock.

These four principles still encapsulate what we expect ITaaS to deliver.

We knew that traditional service delivery models imposed numerous friction points between the business and IT. We also knew we had to significantly increase our levels of automation and self-service. We knew that our consumers were looking for us to act like an internal SaaS provider, and we knew that the solution was to move to a zero-touch, customer-centric model. With that, we committed to further increasing our SaaS portfolio, and we started our 1st private cloud initiative.

In 2012, 30% of our applications were delivered via SaaS providers, and we launched our 2nd private cloud effort. “OneCloud” is our internal IaaS private cloud based on a software-defined data center architecture. Within a year of launch, we have 9 different large SDLC tenant groups internally and approximately 40,000 virtual machines in our private cloud.

Two Insights: Automation and Change Management

I’ll write another post about building a private cloud in the near future, but I want to share what I think is different today that allows IT orgs to finally – and successfully – make the change to this new way of doing business. Here are two quick insights:

First: The move to third party cloud services offers IT a route back to relevance and value.

That may sound strange coming from a vendor that sells virtualization solutions, but the workloads we shifted to third party providers are the easily-siloed and more mature business processes – sending HR processes to Workday, say, or ITSM to ServiceNow, or SFA out to Salesforce. They are important, but they are not the mission-critical processes that differentiate us in the marketplace.

We found that by dismantling the “soviet era” systems that were running those paper-based processes and moving them to the cloud, we freed up a significant amount of internal resources that had been tied up in work that really did not differentiate us in the marketplace.

What remained, and what we’ve been aggressively moving to a private cloud SDDC environment, are the systems that are often more complex than what we moved to SaaS providers – systems like our license management system myvmware.com, or our online training hands-on labs (hol.com), or provisioning dev and test environments for hundreds of developers who use these applications every day. But the agility and efficiency we gain by maintaining them in house, in a private cloud, directly impact our customers and support our revenue growth objectives.

If you look at what many IT organizations spend on the services they don’t silo off elsewhere, 60-80% of it goes to just keeping the lights on. And when you break it down even further, a lot of that goes to what I call “human middleware” – people doing standard repeatable work: install another database, install an operating system, maintain that database, maintain that operating system. We are actively and aggressively replacing our human middleware with end-to-end automation, with impressive results that are a win-win for our customers, our business, and IT. Many of the traditional roles that are shrinking are shifting to more meaningful roles, and we are finding our IT professionals are learning new and more relevant skills.

The Promise of Change

That leads to my second insight: The shift to ITaaS is daunting to IT employees, but it offers them a great opportunity.

It’s not hard to make the case that an IT as a Service strategy unlocks levels of agility and efficiency we’ve never seen before. But we also need to consider how it impacts the people we’re asking to make it work. Fears, uncertainties, and doubts about the change are understandable, and it’s important to recognize that both personal and cultural challenges exist.

Change in our industry is inevitable and constant.  Who is still doing the IT job they did 10 years ago? But leading a transformation to ITaaS means that you need to paint a picture of a future state for your employees that highlights roles and opportunities that are much more meaningful than they’re used to under the traditional model. Who wouldn’t want to be relieved of rote and repetitive jobs and asked instead to be part of a forward-thinking and innovative sea change that can make a positive difference? This is important to keep in mind as you look to align employee incentives with the changes that you want to make.

In Summary:

  • ITaaS isn’t a new concept, but with new technologies, its time has come.
  • Even with better funding, most IT orgs wouldn’t be able to keep up with the demand they face today using traditional process-bound methods.
  • Shifting some applications to SaaS providers frees up IT resources and costs to automate and innovate where it really counts.
  • Shifting core differentiating business processes to a private cloud significantly improves agility, and creates new opportunities for innovation that weren’t possible in a traditional data center environment.
  • Finally, own the problem. Actively lead the cultural shift required to transform the architecture and operating model that enables IT as a Service.

In part 2 of this series, I’ll share specific details about just one of the IT groups that has transformed what it does based on a shift to private cloud. In part 3, I’ll look at agility: how we measure it and how we keep continuously improving. In Part 4, I’ll explain what it took to stand up and run our own internal private cloud with ~40k VMs.

Follow @VMwareCloudOps & @PaulChapmanVM on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Industry Veterans Share Key Lessons on Delivering IT as a Service

Ian Clayton, an ITSM industry veteran, and Paul Chapman, VMware’s Vice President of Global Infrastructure & Cloud Operations, know a lot about IT service delivery. Join Ian and Paul next Tuesday, July 23rd at 9am PT, as they share real lessons learned about delivering IT as a Service.

You will get more out of this brief webcast than most of the sessions presented at expensive conferences!

The webinar will cover:

  • VMware’s own IT service delivery transformation based on cloud
  • Business justification of an ITaaS delivery model
  • Key success factors for driving technology and operational transformation

Outside-in thinking is needed to give IT a winning strategy. But inside-out leadership is required to make the changes that enable a successful execution. Don’t miss this opportunity to hear from IT experts as they share real advice in successfully delivering IT as a Service in the cloud era – register now!

We’ll also be live-tweeting during the event via @VMwareCloudOps – follow us for updates! Also join the conversation by using the #CloudOps and #SDDC hashtags.

Rethinking IT for the Cloud, Pt. 1 – Calculating Your Cloud Service Costs

By: Khalid Hakim

So you’re the CIO of an organization and you’ve been asked to run IT like a business. What do you do?

You can start by seeing IT as a technology shop, with “services” displayed on its shelves. Each service is price-tagged, with specs printed on the back of the tag. A service catalog is available for customers to pick up and request services from. Each service or set of services is managed by a “service manager/owner” role. Your IT shop would have an income statement (profit and loss, or P&L) and a balance sheet.

Think of it, in other words, as a business within a business: IT is just a smaller organization within the main business org. And where’s the value to you in that? Your boss is right: IT should be a business enabler, a revenue supporter, and a value creator. Seeing IT this way also helps you ditch your colleagues’ long-held impression that IT is nothing more than a revenue drain.

Next, you need to show exactly how your organization contributes to the success and profitability of the business. How can the CEO and CxOs further realize the value of the IT you’re supplying? How can you calculate the contribution of every dollar of investment in IT to their net income? These are just a few of the questions that you need to consider when positioning IT on a critical value path.

Cloud is Here to Help

As you look to transform IT from a passive order-taker to an IT service broker, or even to a strategic business partner, you’ll likely look to cloud computing for agility, reliability, and efficiency. Cloud can deliver all of these things, with stunning results. But this transformation cannot happen without a paradigm shift in how you operate and manage your technology.

Luckily, cloud computing embraces consumerization and commoditization and is a perfect fit for the IT shop/P&L model: Everything is expressed in terms of “services” and business value. If I could introduce a new English word, it would be “servicizing,” as in “servicizing your IT.” Part of this transformation means moving from C-bills to S-bills, that is, moving from component-level bills (e.g., an IT bill itemized by IT components) to service bills, which are clearer and more understandable. And to begin this process, you need to “servicize” your whole IT context.

There are multiple steps involved here, but for any of this to be worthwhile, what you do has to be justifiable within your new cost/benefit framework. So you need to start off with a true understanding of how much each and every service is costing your company.

In the remainder of this post, I’m going to suggest a few key points that will help you identify and calculate what each cloud service costs. Future blog posts in this series will address other important steps to IT transformation for the cloud, such as the importance of automating your IT cost transparency as well as a step-by-step guide to tagging costs as CAPEX and/or OPEX.

Calculating Cloud Service Costs

Step 0: Define your cloud service – I am calling this step zero because you first need to truly understand what makes up your cloud service before you can go any further. Service definition is a separate exercise and discipline whose foundations should be deeply rooted in your organization if you want to describe it as “service-oriented.” Defining a cloud service helps you see the boundaries of your service, as well as correctly understand and identify its components. And it solves one of your biggest service cost challenges, reducing the “unabsorbed costs” bucket by clearly identifying all cost components, including your service’s technology, processes and team.

Step 1: Identify direct and indirect fixed costs – With an accurate service definition, all components that contribute to your service delivery (technology, processes, and team) are now identified. The next step is to identify the direct costs that these cost drivers and elements contribute to your service. In addition, you’ll need to identify all indirect fixed cost drivers and apply the allocation percentage that was agreed upon during the establishment of your service’s cost model. Your support contract is a common example of an indirect fixed cost: its cost should be split over the number of products and calls, as detailed in your contract.

Step 2: Identify direct and indirect variable costs – Another challenge is dealing with your variable costs and how to allocate them to the services that depend on these costs. Much of this should have been defined in the service’s cost model, so you should apply those same policies on the identified variable-cost drivers and elements. Your monitoring tool is a great example of an indirect variable cost, as the costs need to be distributed over your fluctuating number of applications or services being monitored at any given time.

Step 3: Identify any unabsorbed costs – The “unabsorbed costs” bucket is a group of cost drivers and elements whose costs you cannot attribute to any particular service, meaning they must be attributed across all services. During the development of your service’s cost model, you need to decide how to deal with such costs. Typically, a certain uplift amount needs to be added or allocated to each service. A good example of this would be the cost of labor (i.e., service managers), which should be distributed across all services.

Step 4: CAPEX/OPEX tag and adjust – There is no major decision-making in this step, as most of these CAPEX and OPEX discussions should have taken place when you purchased your cloud service components. However, it is very important to tag each cost as CAPEX or OPEX (or both, in some cases), because that tag will eventually impact the way you distribute and allocate the operational or depreciated costs of each element.

Step 5: Finalize your service cost calculations – After identifying and defining all of your cost units (e.g., per user, or per GB of consumption) and metering options (e.g., hourly, weekly, monthly), finalize your cost-per-unit calculation for the service, taking into account all of the elements gathered in the previous steps.
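Purely as an illustration of how Steps 1 through 5 might come together, the sketch below rolls notional monthly figures up to a cost per unit; every number, allocation percentage, and depreciation period is invented for the example.

```python
"""Worked illustration of the service-cost steps above (all figures invented)."""

# Step 1: direct and indirect fixed costs (monthly).
direct_fixed = 12_000            # e.g., dedicated storage arrays for this service
support_contract = 6_000         # indirect fixed cost shared across services
support_allocation = 0.25        # share agreed in the service's cost model

# Step 2: direct and indirect variable costs (monthly).
direct_variable = 4_000          # e.g., metered burst capacity
monitoring_tool = 2_000          # indirect variable cost
monitored_services = 10          # spread over the services being monitored

# Step 3: unabsorbed-costs uplift (e.g., service-manager labor).
unabsorbed_uplift = 0.05

# Step 4: CAPEX enters as monthly depreciation rather than up-front spend.
capex_purchase = 72_000
depreciation_months = 36
monthly_depreciation = capex_purchase / depreciation_months

# Step 5: roll everything up and divide by the metered cost units (here, GB).
monthly_cost = (
    direct_fixed
    + support_contract * support_allocation
    + direct_variable
    + monitoring_tool / monitored_services
    + monthly_depreciation
)
monthly_cost *= 1 + unabsorbed_uplift
consumed_gb = 50_000
print(f"cost per GB per month: ${monthly_cost / consumed_gb:.3f}")
```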

In summary, when preparing your IT team for cloud computing, keep in mind the following:

  • Successfully implementing cloud computing in your company starts by changing the way you see IT (and making sure everyone on your team is aware and on-board as well).
  • It is essential to carefully and correctly define your cloud service and to keep in mind the cost model you established for your service as you do so.
  • Identifying the costs of your cloud service will let you illustrate the value of IT at your company and show how your cloud service positively impacts your business as a whole.
  • You can follow the 5-step process outlined above to ensure that you have fully identified your costs.

You may not be personally on the hook to figure all this out, but service owners/managers or someone in your IT department probably is.  So why not forward this post to folks you work with in IT, and suggest that they attend the IT Financial Management Association Conference in Savannah, Georgia next week. I’ll be hosting a workshop on Monday, July 8th at 8am on cloud IT service financial management, and on Wednesday, July 10th at 10am, I’ll be presenting an overview of cloud service financial management.

Stay tuned for the next post in this series, where I will discuss service definition in more detail. In the meantime, if you’re interested in reading more on the transformation of IT, check out the other posts on this blog.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.