Transforming Your IT Operations

The way customers consume technology has changed significantly. They are becoming more comfortable using platforms such as e-commerce marketplaces. In the US market alone, online sales are predicted to grow by almost 60% to over $400B by 2018.

IT organizations are now aggressively trying to transform their technology platforms from static legacy infrastructure to the dynamic, agile infrastructures provided by virtualization, cloud and the software-defined data center.

In this short video, operations architect David Crane shares why your focus needs to include more than just technology when moving to a new technical architecture and infrastructure for IT operations. Analysis, planning, and design of your future-state operating model are just as important if you want to realize the full benefits of transformation.

Green vs. Grey — Rethinking Your IT Operations

By Neil Mitchell

Can you really create a new greenfield IT organization with no legacy constraints?

In this short video, operations architect Neil Mitchell explains that while anything is theoretically possible, most IT execs need to face the reality of impact on legacy IT operations.

====
Neil Mitchell is an operations architect with the VMware Operations Transformation global practice and is based in the UK.

3 Key Trends for 2015: How to Keep Pace with the Rapidly Changing IT Landscape

By Craig Dobson

So much happened in 2014, and as the New Year begins, I’m looking forward to finding out what 2015 holds—both from a market and an industry perspective. One thing is for certain: the rapid changes we have seen in our industry will continue into the New Year. In fact, the pace of change is likely to accelerate.

I believe the following key trends will be shaping the IT landscape of 2015:

  • Increased application focus
  • Continued movement from CapEx to OpEx models (embracing “x-as-a-Service”)
  • Heightened focus on accurate measurement of the cost-of-IT

Let’s explore these trends in a little more detail.

Application Focus

Throughout 2014 I heard clients say: “it’s all about the application.” In the face of global competition, and with disruptive startups testing old-school business models, the lines of business are seeking innovation, market differentiation, and quick response to changing market dynamics. They are driving IT—and all too frequently looking outside to cloud-based solutions—to respond quickly to these dynamic changes, often at a lower entry cost.

In 2015, lines of business will prioritize and focus on the business applications that will support the goal of serving, winning, and retaining customers. Application portfolios will change to hybrid architectures that increasingly leverage x-as-a-service models. Supporting platform decisions (such as infrastructure and cloud) will be made based on application decisions. IT professionals will need to stay on top of evolving business applications in order to more effectively support the demands of the lines of business.

Moving from CapEx to OpEx

The appetite to consume anything-as-a-service from external providers grew throughout 2014 and is now significantly shifting the IT funding model from three- to five-year CapEx investments to OpEx-based consumption models. This shift will accelerate in 2015, often tied to shorter contract periods, with an increased focus on cost and an expectation of continued improvement in cost-to-serve.

Driving this change is a general acceptance by mainstream enterprise businesses and different levels of government (through policy changes) that cloud-based services make economic sense, combined with the fact that the business risk of consuming these services has decreased.

Accurate Measurement of the Cost-of-IT

With the shift from CapEx to OpEx models and the focus on the business value of the application lifecycle, the CIO will be under even more pressure to show value back to the lines of business. In 2015, as IT becomes a full broker of services or portfolio manager (for both internal and external services) delivering x-as-a-service capabilities, these dynamics will demand granular, real-time financial reporting at the service level for the consuming lines of business.

This increased financial awareness will enable IT to show value, to make apples-to-apples comparisons between internal IT and external services, and to compare suppliers.
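
To make that comparison concrete, here is a minimal showback sketch in Python; all figures are hypothetical assumptions, not benchmarks:

```python
# Hypothetical showback sketch: express internal IT cost as a per-unit rate
# so it can be compared apples-to-apples with an external provider's quote.
internal_monthly_cost = 180_000.00  # fully loaded monthly cost of the internal VM service (assumed)
internal_vm_count = 1_500           # VMs served by that spend (assumed)
external_rate = 95.00               # quoted external rate in $/VM/month (assumed)

internal_rate = internal_monthly_cost / internal_vm_count
print(f"Internal: ${internal_rate:.2f}/VM/month vs. external: ${external_rate:.2f}/VM/month")
```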

In addition to the cost transparency measures, I believe we will also see an aggressive focus on driving down operational costs to allow the savings to be targeted at next-generation business applications.

Ready for 2015

Let’s face it — change is a given, and 2015 will be no exception for IT. Forward-thinking IT leaders will get ready to deliver applications that meet the dynamic demands of the business; x-as-a-service offerings that meet or exceed end-user requirements; and financial reporting capabilities that not only show end users what they’re paying for but also enable IT to quantify its value.


Craig Dobson is Senior Director of VMware Technical Services for the Asia Pacific region and is based in Sydney.

How to Avoid 5 Common Mistakes When Implementing an SDDC Solution

By Jose Alamo

Implementing a software-defined data center (SDDC) is much more than installing a set of technologies — an SDDC solution requires clear changes to the organization’s vision, policies, processes, operations, and organizational readiness. Today’s CIO needs to spend a good amount of time understanding the business needs, the IT organization’s culture, and how to establish the vision and strategy that will guide the organization in making the adjustments required to meet the needs of the business.

The software-defined data center is an open architecture that impacts the way IT operates today. As such, the IT organization needs to create a plan that utilizes the investments in people, process, and technology already made to deliver both legacy and new applications while meeting vital IT responsibilities. Below is a list of five common mistakes I’ve come across working with organizations implementing SDDC solutions, with my recommendations on how to avoid their adverse impacts:

1. Failure to develop the vision and strategy—including the technology, process, and people aspects
Many times organizations implement solutions without setting the right expectation and a clear direction for the program. The CIO must use all the resources available within the IT organization to create a vision and strategy, and in some cases it is necessary to bring in external resources that have experience in the subject. The vision and strategy must align with the business needs, and it should identify the different areas that must be analyzed to ensure a successful adoption of an SDDC solution.

In my experience working with clients, it is imperative that a full assessment is conducted as part of the planning, and it must include the areas of people, process, and technology. A SWOT analysis should also be completed to fully understand the organization’s strengths, weaknesses, opportunities, and threats. Armed with this insight, the CIO and IT team will be able to express the direction that must be taken to be successful, including the changes required across people, process, and technology.

Failing to complete this step adds complexity and leaves those responsible for implementing the solution without clear direction.

2. Limited time spent reviewing and understanding the current policies
There are often many policies within the IT organization that can prevent moving forward with the implementation of SDDC solutions. In such cases, the organization needs an in-depth review of the current policies governing the business and IT day-to-day operations. The IT team also needs to spend significant time with the company’s security and compliance team to understand their concerns and determine what measures need to be taken to support the implementation of the solution. For example, the IT organization needs to look at its change policies; some older policies could prevent the deployment of the process automation that is key to the SDDC solution. When these issues are identified from the beginning, IT can start negotiating with the lines of business to either change the policies or create workarounds that will allow the solution to provide the expected value.

Performing these activities at the beginning of the project will allow IT leadership to make smart choices and avoid delays or workarounds when deploying future SDDC solutions.

3. Lack of maturity around the IT organization’s service management processes
The software-defined data center redefines IT infrastructure and enables the IT organization to combine technology and a new way of operating to become more service-oriented and more focused on business value. To support this transformation, mature service management processes need to be established.

After assessing current processes, the IT organization will be able to determine which processes require a higher level of maturity, which need to be adapted to the SDDC environment, and which are missing and need to be established to support the new environment.

Special attention will be required for the following processes:  financial management, demand management, service catalog management, service level management, capacity management, change management, configuration management, event management, request fulfillment, and continuous service improvement.

Ensure ownership is identified for each process, with KPIs and measurable metrics established—and keep the IT team involved as new processes are developed.

4. Managing the new solution as a retrofit within the current environment
Many IT organizations will embrace a new technology and/or solution only to attempt to retrofit it into their current operational model. This is typically a major mistake, especially if the organization is expecting better efficiency, more flexibility, lower cost to operate, transparency, and tighter compliance as potential benefits from an SDDC.

Organizations must assess their current requirements and determine whether they still apply to the new solution. Most processes, roles, audit controls, reports, and policies are in place to support the current/legacy environment; each must be assessed to determine its purpose and value to the business, and whether it is required for the new solution.

IT leadership should ask themselves: If the new solution is going to be retrofitted into the current operational model, then why do we need a new solution?  What business problems are we going to resolve if we don’t change the way we operate?

My recommendation to my clients is to start lean, minimize the red tape, reduce complex processes, automate as much as possible, clearly identify new roles, implement basic reporting, and establish strict change policies. The IT organization needs to commit to minimize the number of changes to the new solution to ensure only changes that are truly required get implemented.

5. No assessment of the IT organization’s capabilities and no plan to fill the skill set gaps
The most important resource to the IT organization is its people. IT management can implement the greatest technologies, but their organizations will not be successful if their people are not trained and empowered to operate, maintain, and enhance the new solution.

The IT organization needs to first assess current skill sets. Then work with internal resources and/or vendors to determine how the organization needs to evolve in order to achieve its desired state. Once that gap has been identified, the IT management team can develop an enablement plan to begin to bridge the gap. Enablement plans typically include formal “train the trainer” models to cascade knowledge within the organization, as well as shadowing vendors for organizational insight and guidance along with knowledge transfer sessions to develop self-sufficiency. In some cases it may be necessary to bring in external resources to augment the IT team’s expertise.

In conclusion, implementing a software-defined data center solution will require a new approach to implementing processes, technologies, skill sets, and even IT organizational structures. I hope these practical tips on how to avoid common mistakes will help guide your successful SDDC solution implementations.

====
Jose Alamo is a senior transformation consultant with VMware Accelerate Advisory Services and is based in Florida. Follow Jose on Twitter @alamo_jose  or connect on LinkedIn.

A New Angle on the Classic Challenge of Retained IT

By Pierre Moncassin

When discussing the organization models for managing cloud infrastructure with customers, I have come across situations where some if not all infrastructure services are outsourced to a third party. In these situations my customers often ask – does your (VMware) operating model still apply? Should I retain cloud-related skills in-house? If so, which ones?

The short answer is: Yes. The advice I give my customers is that their IT organization should establish a core organization modeled on the “tenant operations” team as defined in Organizing for the Cloud, a VMware white paper by my colleague Kevin Lees.

Let’s assume a relatively simple scenario where a single outsourcer provides “standard” infrastructure services — such as computing, storage, and backups. In this scenario, the outsourcer has agreed to transform at least some of its services towards the software-defined data center (SDDC), which is by no means an easy step (I will return to that point later).

For now let’s also assume a cooperative situation where customer and outsourcer are collaboratively working towards a cloud model. The question is — what skills and functions should the customer retain in-house? Which skills can be handed over to the outsourcer?

The question is a classic one. In traditional infrastructure outsourcing, we would talk about a “retained IT” organization.  For the SDDC environment, here are some skill groups that I believe have to be preserved within the core, in-house team:

  • Service Design and Self-service Provisioning is clearly a skillset to keep in-house. The in-house team must be able to work with the business to define services end-to-end, but the team should also be able to grasp accurately the possibilities that automation offers with software such as VMware vCloud Automation Center.  Though I am not suggesting that the core team needs to be expert in all aspects of workflows, APIs or scripting, they do need a solid grasp of the possibilities of automation.
  • Process Automation and Optimization.  A solid working knowledge of automation software is useful but not enough. The in-house team must decide which processes to automate and how; these are business-level decisions. Which processes are worth automating? What is the benefit of automation versus its cost? (See the sketch after this list.)
  • Security and Compliance is often a top priority for cloud adopters. The cloud-based services need to align with enterprise policies and standards.  The retained IT function must be able to demonstrate compliance and where needed, enforce those standards in the cloud infrastructure.
  • Service Level Management and Trend Analysis. Whilst the retained IT organization does not need to be involved in the day-to-day monitoring and troubleshooting, they need to be able to monitor key service levels. Specifically, the business users will be highly sensitive to the performance of some business-critical applications. The retained IT organization will need to keep enough knowledge of these applications and of performance monitoring tools to ensure that application performance is measured adequately.
  • Application Life Cycle (DevOps). We have assumed in our scenario an infrastructure-only outsourcing — the skills for application development remaining in-house.  In the SDDC environment, the tenant operations team will work closely with the application development teams. Amongst other skills, the retained IT will need detailed knowledge not only of application provisioning, but also the architectures, configuration dependencies, and patching policies required to maintain those applications.
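
One way to frame that cost/benefit question is the sketch below; every figure is a hypothetical assumption, and the model deliberately ignores softer benefits such as consistency and compliance:

```python
# Hypothetical cost/benefit model: a process is worth automating when the
# recurring manual cost over the planning horizon exceeds the one-time build
# cost plus ongoing maintenance of the automation.
def automation_net_benefit(runs_per_month: float, manual_hours_per_run: float,
                           hourly_rate: float, build_hours: float,
                           maint_hours_per_month: float, horizon_months: int) -> float:
    manual_cost = runs_per_month * manual_hours_per_run * hourly_rate * horizon_months
    automation_cost = (build_hours + maint_hours_per_month * horizon_months) * hourly_rate
    return manual_cost - automation_cost

# Example: 40 runs/month at 0.5h each, $80/h, 80h to build, 4h/month upkeep, 24-month horizon
print(automation_net_benefit(40, 0.5, 80.0, 80.0, 4.0, 24))  # positive result favors automating
```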

I have reviewed the skill groups that are needed as more automation is used; reliance on skills related to routine tasks and troubleshooting will decrease. Skills that can typically be outsourced include:

  • Routine scripting and monitoring
  • System (middleware) configuration
  • Routine network administration

The diagram below is a (very simplified) summary of the evolution from traditional retained IT to tenant operations for SDDC environments.

It is also worth noting that the transformation from traditional infrastructure outsourcing to SDDC is far from an obvious step from the point of view of an outsourcer. Why should the outsourcer invest time and cost to streamline services if the end customer has already contracted to pay for the full cost of service? Gaining buy-in from the outsourcer to transform its model can be a significant challenge. Therefore it is prudent to gain acceptance either:
–  early in the contract negotiations, so that the provider can build in a cloud delivery model in its service offering,
– or towards the end of a contract when the outsourcer is often highly motivated to obtain a renewal.

Finally, outsourcers may initiate their own technology refresh programs, which can create a win-win situation when both sides are prepared to invest in modernization towards SDDC.

3 Key Take-Aways

  1. Organizations that undertake their journey to SDDC with an outsourcer are advised to establish a core SDDC  organization including most tenant operations skills; a key focus is to leverage automation (whilst routine, repetitive tasks can be outsourced).
  2. The exact profile of the tenant operations (retained IT) will depend on the scope of the outsourcing contract.
  3. Early contract negotiations, renewals, or technology refresh can create opportunities to encourage an outsourcer to move towards the SDDC model.

———
Pierre Moncassin is an operations architect with VMware’s Global Operations Transformation Practice and is based in the UK. Follow @VMwareCloudOps on Twitter for future updates.

VMware vCenter Operations Manager Users: Raise Your Hands!

By Choong Keng Leong

I innocently asked attendees at a workshop I was delivering for one of my clients, “Who uses VMware vCenter Operations Management Suite in your company?” I got two simple answers: “the cloud administrator” or “the VM administrator.” That response prompted me to write this blog, and I hope it will change your thinking if you would have answered the same way.

The vCenter Operations Management Suite consists of four components:

  • vCenter Operations Manager: Allows you to monitor and manage the performance, capacity, and health of your SDDC infrastructure, operating systems, and applications
  • vCenter Configuration Manager: Enables you to automate configuration management across virtual and physical servers, and to continuously assess them for compliance with IT policies and regulatory and security requirements
  • vCenter Hyperic: Helps to monitor operating systems, databases, and applications
  • vCenter Infrastructure Navigator: Automatically discovers and visualizes application components and infrastructure dependencies

If I were to map the vCenter Operations Management Suite to the IT processes it can support, it would look like the matrix shown in Table 1:

Table 1: A Possible vCenter Operations Management Suite to Process Mapping

What Table 1 also implies is that multiple roles will use and access vCenter Operations Manager, or be recipients of its outputs (i.e., reports). For example, the IT Director can access the vCenter Operations Manager Dashboard to view the overall health of the infrastructure. The Application Support team accesses it via a Custom Dashboard to understand application status and performance. The IT Compliance Manager reviews the compliance status of IT systems on the vCenter Operations Manager Dashboard and gets more details from vCenter Configuration Manager to initiate remediation.

Table 2 below shows a possible list of roles accessing the vCenter Operations Management Suite.

Table 2: Possible List of Roles Using vCenter Operations Management Suite
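
The tables themselves are not reproduced in this post, but as a rough stand-in, here is a hypothetical sketch limited to the roles named in the prose:

```python
# Hypothetical role-to-component mapping, limited to the roles named in the
# prose above (the actual Tables 1 and 2 cover more roles and processes).
role_component_map = {
    "IT Director":            ["vCenter Operations Manager Dashboard"],
    "Application Support":    ["vCenter Operations Manager Custom Dashboard"],
    "IT Compliance Manager":  ["vCenter Operations Manager Dashboard",
                               "vCenter Configuration Manager"],
    "Cloud/VM Administrator": ["vCenter Operations Manager",
                               "vCenter Hyperic",
                               "vCenter Infrastructure Navigator"],
}

for role, components in role_component_map.items():
    print(f"{role}: {', '.join(components)}")
```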

Tables 1 and 2 illustrate clearly that vCenter Operations Management Suite is not just another lightweight app for the cloud or VM administrator — it supports multiple IT operational processes and roles.

Taking this a step further, you need to embed the vCenter Operations Management Suite into operational procedures to take maximum advantage of the tools’ full potential and integrated approach to performance, capacity, and configuration management. To draw an analogy: if you deploy a new SAP system without defining the triggers or use cases for accessing it, the procedural steps for which modules to use and how to navigate the system, what to input, and how to query and report, it is unlikely the rollout will succeed.

Although the vCenter Operations Management Suite is not as complex, the concept is the same. You need to define procedures with tight linkage to the tools to ensure they are used consistently and in the way they were designed and configured to be used.

I hope that my blog motivates you to start thinking about transforming your IT operations to make full use of the capabilities of your VMware technology investment.

========
Choong Keng Leong is an operations architect with VMware Professional Services and is based in Singapore. You can connect with him on LinkedIn.

New Technical Roles Emerge for the Cloud Era: The Rise of the Cross-Domain Expert

By Pierre Moncassin

Several times over the last year, I have heard this observation: “It is all well and good to introduce new cloud management tools — but we need to change the IT roles to take advantage of these tools. This is our challenge.” As more and more of the clients I work with prepare their transition to a private cloud model, they increasingly acknowledge that traditional IT specialist roles need to evolve.

We do not want to lose the traditional skills — from networking to storage to operating systems — but we need to use them in a different way. Let me explain why this evolution is necessary and how it can be facilitated.

Emergence of Multi-Disciplinary Roles
In the traditional, pre-cloud IT world, specialists tended to carve a niche in their specific silos: they were operating systems specialists, network administrators, monitoring analysts, and so on. There was often little incentive to be concerned about competencies too far beyond one’s silo. After all, it was in-depth, vertical expertise that led to professional recognition — even more so when fast troubleshooting was involved (popularly known as “firefighting”). With a brilliant display of troubleshooting, the expert could become the hero of the day.

In the same silo model, business-level issues tended to be handled far away from the technologists. The technology specialists were rarely involved in such questions as billing for IT usage or defining service levels — an operations manager or service manager would worry about those things.

Whilst this silo model had its drawbacks, it still worked well enough in traditional, pre-cloud IT organizations — where IT services tended to be stable and changes were infrequent. But it does not work in a cloud environment, because the cloud approach requires end-to-end services — defined and delivered to the business.

Cloud consumers do not simply request network or storage services; they expect an end-to-end service across all the traditional silos. If an application does not respond, end users do not care whether the cause lies within networks or middleware: they expect a resolution of their service issue within target service levels.

Staffing the Cloud Center of Excellence
To design and manage such cloud-based services, the cloud center of excellence (COE) requires broader roles than the traditional silos. We need architects and analysts who can comprehend all aspects of a service end-to-end. They will have expertise in each traditional silo, but just as importantly, the ability to architect and manage services that span across each of those silos. I call these roles “cross-domain experts,” because they possess both the vertical (traditional silo) and horizontal (cross-silo) expertise, including a solid understanding of the business aspects of services.

Cross-domain competencies are essential to bring a cross-disciplinary perspective to cloud services. These experts bring a broad spectrum of skills and understand the ins and outs of cloud services across network, server, and storage — as well as a solid grasp of multiple automation tools. Beyond the technical aspects, they are also able to focus on the business impact of the services.

Cross-domain experts also need to cross the bridge between the traditionally separate silos of  “design” and “build.” Whilst in the traditional IT model the design/development activities could be largely separated from the build requirements, a service-for-the-cloud model needs to be designed with build considerations up front.

Every team member in the COE needs to possess an interdisciplinary quality. If we look more specifically at the organization model defined in the white paper Organizing for the Cloud, after the leaders, these hybrid roles are foremost to be found in the following categories:

  • In the tenant operations team, the key hybrid roles are service architect and service analyst.
  • In the infrastructure operations team, the architect is a key hybrid role.

Takeaways

  • To build a successful cloud COE, develop multi-disciplinary roles with broad skills across traditional silos (such as networks, servers, and middleware). Break down the traditional barriers between design and build.
  • Foster both formal training and practical experience across domains.
  • Organize training in both automation and management tools.

——-
Pierre Moncassin is an operations architect with VMware Operations Transformation Services and is based in France. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

The Top 10 CloudOps Blogs of 2013

What a year it’s been for the CloudOps team! Since launching the CloudOps blog earlier this year, we’ve published 63 items and have seen a tremendous response from the larger IT and cloud operations community.

Looking back on 2013, we wanted to highlight some of the top performing content and topics from the CloudOps blog this past year:

1. “Workload Assessment for Cloud Migration Part 1: Identifying and Analyzing Your Workloads” by Andy Troup
2. “Automation – The Scripting, Orchestration, and Technology Love Triangle” by Andy Troup
3. “IT Automation Roles Depend on Service Delivery Strategy” by Kurt Milne
4. “Workload Assessment for Cloud Migration, Part 2: Service Portfolio Mapping” by Andy Troup
5. “Tips for Using KPIs to Filter Noise with vCenter Operations Manager” by Michael Steinberg and Pierre Moncassin
6. “Automated Deployment and Testing Big ‘Hairball’ Application Stacks” by Venkat Gopalakrishnan
7. “Rethinking IT for the Cloud, Pt. 1 – Calculating Your Cloud Service Costs” by Khalid Hakim
8. “The Illusion of Unlimited Capacity” by Andy Troup
9. “Transforming IT Services is More Effective with Org Changes” by Kevin Lees
10. “A VMware Perspective on IT as a Service, Part 1: The Journey” by Paul Chapman

As we look forward to 2014, we want to thank you, our readers, for taking the time to follow, share, comment, and react to all of our content. We’ve enjoyed reading your feedback and helping build the conversation around how today’s IT admins can take full advantage of cloud technologies.

From IT automation to patch management to IT-as-a-Service and beyond, we’re looking forward to bringing you even more insights from our VMware CloudOps pros in the New Year. Happy Holidays to all – we’ll see you in 2014!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Understanding Process Automation: Lean Manufacturing Lessons Applied to IT

by: Mike Szafranski

With task automation, it is pretty simple to calculate that it is worth taking 2 hours to automate a 10-minute task if you perform that task more than 12 times. Even considering the fixed and variable costs of the automation solution, the math is pretty straightforward.
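
As a quick sanity check, here is that break-even arithmetic as a minimal sketch, using the 2-hour/10-minute figures from the example above:

```python
# Break-even point for task automation: one-time automation effort divided by
# the manual effort it eliminates per run.
automation_effort_min = 2 * 60  # 2 hours to build the automation
task_duration_min = 10          # manual effort per execution

break_even_runs = automation_effort_min / task_duration_min
print(f"Automation pays for itself after {break_even_runs:.0f} runs")  # -> 12
```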

But the justification for automating more complex processes composed of dozens of ‘10 minute tasks’ completed by different actors – including the inevitable scheduling and wait time between each task – is a bit more complex. Nonetheless, an approach exists.

You can find it laid out in Kim, Behr, and Spafford’s modern classic of business fiction, The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win [IT Revolution Press, 2013], in which the authors show how the principles of lean manufacturing are directly applicable to IT process automation.

So what lessons do we learn when building a case for process automation by applying lean manufacturing principles to IT Ops? Let’s take a look.

Simple Steps Build the Business Case

First, you need to break the process you’re interested in into its constituent parts.

Step 1 – Document Stages in the Process and Elapsed Time. Through interviews, identify the major process stages and then document the clock time elapsed for each. Note: use hard data for elapsed time if possible. People involved in the process rarely have an accurate perception of how long things really take. Look at process artifacts such as emails, time stamps on saved documents, configuration files, and provisioning or testing tool log files to measure real elapsed time.

Step 2 – Document Tasks and Actors. Summarize what gets accomplished at each stage and, most importantly, detail all the tasks and record which teams perform them. If a task involves multiple actors working independently with a handoff, that task should be broken down into sub-tasks.

Step 3 – Document FTE Time. Record the work effort required for each task. We’ll call that the Full Time Equivalent (FTE). This is the time it takes to do the actual task work, assuming no interruptions, irregularities, or rework.

Step 4 – Document Wait Time. Understanding wait time is critical to building a case for process automation. If actors are busy, or if there are handoffs between actors, then elapsed time is often multiple times longer than FTE time. This is because at each handoff, the task must sit in queue until a resource is ready to process the task.

After taking these steps, you can summarize your findings in a chart similar to the one below.
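
The original chart is not reproduced here; the following hypothetical sketch shows its shape, with stage names, actors, and hours chosen purely for illustration (they echo the 5.5-hour, roughly 15% hands-on figures discussed below):

```python
# Hypothetical per-stage summary: comparing hands-on (FTE) time with elapsed
# clock time exposes how much of the process is spent waiting.
stages = [
    # (stage,               actors,                  fte_hours, elapsed_hours)
    ("Request environment", ["requester"],            0.5,       4.0),
    ("Provision VMs",       ["VM team"],              1.0,       8.0),
    ("Validate firewall",   ["security", "network"],  2.0,      16.0),
    ("Deploy and test app", ["dev", "QA"],            2.0,       8.0),
]

total_fte = sum(fte for _, _, fte, _ in stages)
total_elapsed = sum(elapsed for _, _, _, elapsed in stages)

for stage, actors, fte, elapsed in stages:
    print(f"{stage:20s} FTE {fte:4.1f}h  elapsed {elapsed:5.1f}h  actors: {', '.join(actors)}")
print(f"Hands-on work is {total_fte / total_elapsed:.0%} of total clock time")
```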

In Lean Manufacturing, the concept of wait time or queue time has a mathematical formula [see chapter 23 of The Phoenix Project]. The definition is:

Wait Time = % Busy / % Idle

The formula, of course, offers hard proof of what you already knew – that the busier you are, the longer it takes to get new work done. With multiple actors on a task, each can contribute to wait time, with the amount they contribute depending on how busy they are.
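
As a minimal sketch of that ratio (the team names match the example below; the utilization percentages are assumptions):

```python
# Wait-time ratio from The Phoenix Project: percent busy divided by percent
# idle. Queues grow non-linearly as utilization approaches 100%.
def wait_factor(utilization: float) -> float:
    """Relative wait time for a resource at the given utilization (0-1)."""
    return utilization / (1.0 - utilization)

# Five teams involved in the "Validate Firewall" step (utilizations assumed)
for team, busy in [("security", 0.90), ("network", 0.85), ("dev", 0.80),
                   ("QA", 0.75), ("VM", 0.70)]:
    print(f"{team:8s} {busy:.0%} busy -> wait factor {wait_factor(busy):4.1f}")
```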

In the example below, there are five separate teams (security, network, dev, QA and VM) involved in the Validate Firewall step in the flow. Each team is also busy with other tasks. 

Figure 2. In a manually constructed environment, the network settings, firewall rules, and application ports need to be validated. More often than not, they need to be adjusted due to port conflicts or firewall rules. Wait times correlate strongly with % utilization.

As you can see, the time spent by FTEs is 5.5 hours, which is only around 15% of the clock time. Clearly, with complex tasks, FTE is only a part of the story.

Step 5 – Account for Unplanned Work. Unplanned work occurs when errors are found, requiring a task from an earlier step in the process to be reworked or fixed.

In complex automation, unplanned work is another reality that complicates the process and increases FTE time. It also dramatically impacts clock time – in two ways. First, there’s the direct impact of additional time spent waiting for the handoff back upstream in the process. Second, and even more dramatic, is the opportunity cost. Planned work tasks need to stop while the process actor sets things aside and addresses the unplanned work. Unplanned work can thus have a multiplier effect, causing cascading delays up and down the process flow.
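
One hypothetical way to model the direct effort impact, ignoring the larger opportunity cost described above, is to treat rework as a chance that a task must be fully redone:

```python
# Hypothetical rework model: if each pass through a task has some probability
# of bouncing back as unplanned work, expected effort follows a geometric
# series: base_hours / (1 - rework_probability).
def expected_effort(base_hours: float, rework_probability: float) -> float:
    """Expected total hours when every pass may trigger a full redo."""
    return base_hours / (1.0 - rework_probability)

print(expected_effort(5.5, 0.30))  # a 30% rework rate turns 5.5h into ~7.9h
```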

One aim of automation, of course, is to reduce unplanned work – and that reduction can also be calculated, further strengthening the business case for process automation. Indeed, studies have shown that unplanned work currently consumes 17% of a typical IT budget.

Process Automation Can Offer More Than Cost Reduction

But there’s potentially even more to the story than a complete picture of IT work and detailed accounting of reduced work effort and timesavings. The full impact of process automation can include:

  • Improved throughput
  • Enabling rapid prototyping
  • Higher quality
  • Improved ability to respond to business needs

The cumulative impact of these can be substantial. Indeed, it can easily exceed the total impact of direct cost reductions.

Step 6 – Estimate total benefit to business functions. If calculating the value of reducing FTE time, wait times, and unplanned work is relatively straightforward, figuring the full business impact of reducing overall calendar time for a critical process (from 4 weeks to 36 hours, say) requires more than a direct cost reduction calculation. It’s worth doing, though, because the value derived from better quality, shorter development times, etc., can substantially exceed the value of FTE hours saved through automation (see Figure 3).

Figure 3. The secondary impacts of automating processes and increasing agility and consistency can be much larger than the value of the FTE hours saved.

You do it by asking IT customers to detail the benefits they see when processes are improved. There are many IT KPIs that can help here, such as the number of help desk tickets received in a specific period, or the number and length of Severity 1 IT issues.

We used this method at VMware when we automated dev/test provisioning and improved the efficiency of 600 developers by 20%. We achieved a direct cost reduction related to time and effort saved. But we found an even bigger impact, even if it was harder to quantify, in improved throughput, in always being able to say, “Yes” to business requests, and in enabling rapid prototyping.

Lessons Learned

With these steps, you can capture major process stages, tasks, actors, calendar time, work effort, and points of unplanned work, quantifying the business value of automating a process end-to-end – and making your case for end-to-end process automation all the stronger.

Key takeaways:

  • It’s possible to make a business case for automating end-to-end IT processes;
  • You can do this by applying concepts from lean manufacturing;
  • The concepts of wait time and unplanned work are central;
  • Efficiency driven cost reduction is only part of the equation, however;
  • To quantify the full value of agility, work with IT customers to gauge improvements in KPIs that reflect improved business outcomes.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Transforming IT Services is More Effective with Org Changes

By: Kevin Lees

Last time, I wrote about the challenge of transforming a traditional IT Ops culture and the value of knowing what you’re up against.

Now I want to suggest some specific organizational changes that – given those cultural barriers – will help you successfully undertake your transformation.

At the heart of the model I’m suggesting is the notion of a Cloud Infrastructure Operation Center of Excellence. What’s key is that it can be adopted even when your org is still grouped into traditional functional silos. 

Aspiration Drives Excellence

A Cloud Infrastructure Operation Center of Excellence is a virtual team made up of the people occupying your IT org’s core cloud-focused roles: the cloud architect, cloud analyst, cloud developers, and cloud administrators. They understand what it means to configure a cloud environment, and how to operate and proactively monitor one. They’re able to identify potential issues and fix them before they impact the service.

Starting out, each of these people can still be based in the existing silos that have grown up within the organization. Initially, you are just identifying specific champions to become virtual members of the Center of Excellence. But they are a team, interacting and meeting on a regular basis, so that from the very beginning they know what’s coming down the pipe in terms of increased capacity or capability of the cloud infrastructure itself, as opposed to demands for individual projects.

Just putting them together isn’t enough, though. We’ve found that it’s essential to make membership in the cloud team an aspirational goal for people within the IT organization. It needs to be a group that people want to be good enough to join and for which they are willing to improve their skills. Working with the cloud team needs to be the newest, greatest thing.

Then, as cloud becomes more prominent and the de facto way things are done, the Cloud Center of Excellence can expand and start absorbing pieces of the other functional teams. Eventually, you’ll have broken down the silos, the Cloud Center of Excellence will be the norm for IT, and everybody will be working together as an integrated unit.

Four Steps to Success

Here are four steps that can help ensure that your Cloud Infrastructure Operation Center of Excellence rollout is a success:

Step 1 – Get executive sponsorship

You need an enthusiastic, proactive executive sponsor for this kind of change.  Indeed, that’s your number one get – there has to be an executive involved who completely embraces this idea and the change it requires, and who’s committed to proactively supporting you.

Step 2 – Identify your team  

Next you need to identify the right individuals within the organization to join your Center of Excellence. IT organizations that go to cloud invariably already run a virtualized environment, which means they already employ people who are focused on virtualization. That’s a great starting point for identifying individuals who are best qualified to form the nucleus of this Center. So ask: Who from your existing virtualization team are the best candidates to start picking up responsibility for the cloud software that gets layered on top of the virtualized base?

Step 3 – Identify the key functional teams that your cloud team should interact with.

This is typically pretty easy because your cloud team has been interacting with these functional teams in the context of virtualization. But you need to formalize the connection and identify a champion within each of these functional teams to become a virtual member of the Center of Excellence. Very importantly, to make that work, the membership has to be part of that person’s job description. That’s a key piece that’s often missed: it can’t just be on top of their day job, or it will never happen. They have to be directly incentivized to make this successful.

Step 4 – Sell the idea

Your next step is basically marketing. The Center of Excellence and those functional team champions must now turn externally within IT and start educating everybody else – being very transparent about what they’re doing, how it has impacted them, how it will impact others within IT and how it can be a positive change for all. You can do brown bag lunches, or webinars that can be recorded and then downloaded and watched, but you need some kind of communication and marketing effort to start educating the others within IT on the new way of doing things, how it’s been successful, and why it’s good for IT in general to start shifting their mindset to this service orientation.

Don’t Forget Tenant Operations 

There’s one last action you need to put in place to really complete your service orientation: create a team that is exclusively focused outwards toward your IT end customers. It’s what we call Cloud Tenant Operations.

Tenant Ops, also called “Service Ops,” is one of three Ops tiers that enable effective operations in the cloud era.

One of the most important roles on this team is the customer relationship (or sometimes ‘collaboration’) manager, who is directly responsible for working with the lines of business: understanding their goals and needs, staying in regular contact with them (almost like a salesperson), and supporting each line of business in onboarding to, and using, the cloud environment.

They can also provide demand information back to the Center of Excellence to help with forward capacity planning, helping the cloud team stay ahead of the demand curve by making sure they have the infrastructure in place when the lines of business need it.

Tenant Operations is really the counterpart to the Cloud Infrastructure Operation Center of Excellence from a service perspective. It needs to comprise someone who owns the services offered to end customers over their life cycle, a service architect, and service developers who understand the technical implications of the requirements. These requirements come from multiple sources, so the team needs to identify the common virtual applications that can be offered out and consumed by multiple organizations (and teams within organizations), as opposed to doing custom, one-off virtual application development.

In a sense, Tenant Operations functions as the DevOps team from a cloud service perspective, really instantiating the concept of a service mindset and becoming the face of the cloud environment to its external end users.

These Changes are Doable

The bottom line here: transforming IT Ops is doable. I have worked with many IT organizations that are successfully making these changes. You can do it too.

Additional Resources

For a comprehensive look at how to best make the transition to a service-oriented cloud infrastructure, check out Kevin’s white paper, Organizing for the Cloud. 

Also look for VMware Cloud Ops Journey study findings later this month, which highlights common operations capability changes, and the drivers for those changes. For future updates, follow us on Twitter at @VMwareCloudOps, and join the conversation by using the #CloudOps and #SDDC hashtags.