Home > Blogs > VMware CloudOps

How to Take Charge of Incident Ticket Ping Pong

By Pierre Moncassin

When incident tickets are repeatedly passed from one support team to another, I like to describe it as a “ping pong” situation. Most often this is not a lack of accountability or skill within individual teams. Each team genuinely sees the incident as outside its technical silo, so each feels perfectly justified in assigning the ticket to another team, or even assigning it back to the team it came from.

And the ping pong game continues.

Unfortunately for the end user, the incident is not resolved whilst the reassignments continue. The situation can easily escalate into SLA breaches, financial penalties, and certainly disgruntled end users.

How can you prevent such situations? IT service management (ITSM) has been around for a long while, and there are known mitigations for these situations. Good ITSM practice dictates built-in mechanisms to prevent incidents being passed back and forth. For example:

  • Define end-to-end SLAs for incident resolution (not just KPIs for each resolution team), and make each team aware of these SLAs.
  • Configure the service desk tool to escalate automatically (and issue alerts) after a number of reassignments, so that management becomes quickly aware of the situation.
  • Include cross-functional resolution teams as part of the resolution process (as is often done for major incident situations).

In my opinion these approaches have a drawback: they take time and effort to put in place, and incidents may still fall through the cracks. But with a cloud management platform like VMware vRealize Suite, you can take prevention to another level.

A core reason for ping pong situations is often the teams’ inability to pinpoint the root cause of the incident. VMware vRealize Operations Manager (formerly known as vCenter Operations Manager) increases visibility into the root cause through its root cause analysis capabilities. Going one step further, its analytics give advance warning of impending incidents. In the most efficient scenario, support teams are warned of an impending incident and its cause well before the incident is raised. Most of the time, the ping pong game should never start.

Takeaways:

  • Build a solid foundation with the classic ITSM approaches based on SLAs and assignment rules.
  • Leverage proactive resolution, and take advantage of the enhanced, automated root cause analysis that vRealize Operations Manager offers, to reduce time wasted on incident resolution.


Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in Taipei. Follow @VMwareCloudOps on Twitter for future updates.

 

4 Ways to Maximize the Value of VMware vRealize Operations Manager

By Rich Benoit

When installing an enterprise IT solution like VMware vRealize Operations Manager (formerly vCenter Operations Manager), supporting the technology implementation with people and process changes is paramount to your organization’s success.

We all have to think about impacts beyond the technology any time we make a change to our systems, but enterprise products require more planning than most. Take, for example, the difference between installing VMware vSphere and installing an enterprise product. The users affected by vSphere generally sit in one organization, the toolset is fairly simple, little to no training is required, and time from installation to extracting value is a matter of days. Extend this thinking to enterprise products and you have many more users and groups affected, a much more complex toolset, training required for most users, and weeks or months from deployment to extracting real value from the product. Breaking it down like this, it’s easy to see the need to address supporting teams and processes to maximize value.

Here’s a recent example from a technology client I worked with that is very typical of customers I talk to. Management felt they were getting very little value from vRealize Operations Manager. Here’s what I learned:

  • Application dashboards in vRealize Operations Manager were not being used (despite extensive custom development).
  • The only team using the tool was the virtual infrastructure team (very typical).
  • They had not defined roles or processes to enable the technology to be successful outside of the virtual infrastructure team.
  • There was no training or documentation for ongoing operations.
  • The customer was not enabled to maintain or expand the tool or its content.

My recommendations were as follows, and this goes for anyone implementing vRealize Operations Manager:

  1. Establish ongoing training and documentation for all users.
  2. Establish an analyst role to define, measure, and report on processes and effectiveness related to vRealize Operations Manager, and to build relationships with potential users and process areas that could benefit from its content.
  3. Establish a developer role to create and modify content based on the analyst’s collected requirements and fully leverage the extensive functionality vRealize Operations Manager provides.
  4. Establish an architecture board to coordinate an overall enterprise management approach, including vRealize Operations Manager.

The key takeaway here: IT transformation isn’t a plug-and-play proposition, and technology alone isn’t enough to make it happen. This applies especially to a potentially enterprise-level tool like vRealize Operations Manager. In order to maximize value and avoid it becoming just another silo-based tool, think about the human and process factors. This way you’ll be well on the way towards true transformational success for your enterprise.

----
Rich Benoit is an Operations Architect with the VMware Operations Transformation global practice.

Building Service-based Cost Models to Accelerate Your IT Transformation

By Khalid Hakim

“Why is this so expensive?”

As IT moves towards a service-based model, this is the refrain that IT financial managers often hear. It’s a difficult question to answer if you don’t have the data and structure that you need to clearly and accurately defend the numbers. Fighting this perception, and building trust with the line of business, requires a change in how IT approaches cost management that will match the new IT-as-a-service format.

The first and most important step in building service-based cost models is defining what exactly a service is, and what it is not. For example, the onboarding process: is this a service, a process, or an application? Drawing the lines of what service means within your organization, and making it consistent and scalable, will allow you to calculate unit costs. Businesses are already doing cost management by department, by product, by technology, but what about the base costs, such as labor, facilities, or technology within a software-defined data center? Your final service cost should include all these components in a transparent way, so that other parts of the business can understand what exactly they are getting for their money.

Building these base costs into your service cost requires an in-depth look at how service-to-service allocation will work. For example, how do you allocate the cost of the network, which is delivered to desktops, client environments, wireless, VPN, and data centers? Before you bring in a tool to automate costing out your services, map out how each service affects another, and define units and cost points for them. While it’s often tempting to jump straight into service pricing and consider yourself done once it’s complete, it’s important to start with a well-defined service catalog, including costs for each service, and then to continue to manage and optimize once the pricing has been implemented. Service costing helps you classify your costs: to understand what is fixed, what is variable, direct, indirect, and so forth.
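The unit-based, service-to-service allocation described above can be illustrated with a short sketch. The service names, the unit of consumption (ports), and all of the figures here are invented for illustration only:

```python
# Hypothetical sketch: allocate a shared (indirect) network cost across
# consuming services in proportion to a defined unit of consumption.
shared_network_cost = 120_000.0  # assumed annual cost of the network service

# Units (ports) consumed by each downstream service (assumed figures)
consumption = {"desktops": 400, "vpn": 100, "data_center": 500}

total_units = sum(consumption.values())          # 1000 ports
unit_cost = shared_network_cost / total_units    # cost per port

allocation = {svc: units * unit_cost for svc, units in consumption.items()}
# desktops: 48,000; vpn: 12,000; data_center: 60,000
```

Whatever the unit chosen (ports, gigabytes, tickets), the key is that the allocations sum back to the full shared cost, so nothing is double-counted or left unallocated.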

So we’ve allocated the shared cost (indirect cost in accounting language) of services across the catalog. Now it’s time to bring in the service managers—the people who really understand what is being delivered. Just as a manufacturing company would expect a product manager to understand their product end to end, service managers should understand their entire service holistically. Once you’ve built a costing process, the service manager should be able to apply that process to their service.

In the past, service managers have really only been required to understand the technology involved. Bringing them into this process may require them to understand new elements of their service, such as how to sell the service, what it costs, and how to market it. It helps to map out the service in a visual way, which helps the service managers understand their own service better, and also identifies the points at which new costs should be built into the pricing model. Once you understand the service itself, then decide how you want to package it, the SLAs around it, and what the cost of a single unit will be. When relevant, create pre-defined packages that customers will be able to choose from.

Once the costing has been implemented, you can circle back and use the data you’re gathering to help optimize the costs. This is where automation can offer a lot of value. VMware vRealize Business (formerly IT Business Management Suite) helps you align IT spending with business priorities by getting full transparency of infrastructure and application cost and service quality. At a high level, it helps you build “what if” cost models, which automatically identify potential areas for cost reduction through virtualization or consolidation. The dashboard view offers the transparency needed to quickly understand cost by service and to be able to justify your costs across the business.

Service-based cost models are a major component of full IT transformation, which requires more than just new technology. You need an integrated approach that includes modernization of people, process, and technology. In this short video below, I share some basic steps that you need to jumpstart your business acumen and deliver your IT services like a business.

For more in-depth guidance, you can also access my white paper: Real IT Transformation Requires a Real IT Service Costing Process, as a resource on your journey to IT as a service.

====
Khalid Hakim is an operations architect with the VMware Operations Transformation global practice and is based in Dallas. You can follow him on Twitter @KhalidHakim47.

 

ISO Standards and the VMware Private Cloud Operating Model

By Craig Savage

I am often asked, when talking with my clients about the organization and process changes that come with the evolution to cloud operations, how VMware’s private cloud operating model affects companies that are, or are planning to be, certified to the ISO/IEC 20000 (Information Technology – Service Management) or ISO/IEC 27000 (Information Technology – Security Techniques – Information Security Management Systems) family of standards. For brevity, I will refer to them as ISO20000 and ISO27000.

In this short article I will demonstrate how working with VMware to evolve your organization to this model can actually ease the compliance burden or make certification simpler. For simplicity I will use “compliance” to cover both security and regulatory compliance, as the concepts in the VMware private cloud operating model apply to both.

The ISO/IEC 27000 series of standards provides what are considered to be best practice recommendations on information security management, risks, and controls within the context of an overall information security management system (ISMS). This ISMS can either be an extension of an Information Management System from another standard previously certified, or adapted to cater for further standards if ISO27000 is the first certification obtained. It is broad in scope, covering more than just privacy, confidentiality, and technical security issues, and is designed to promote pragmatic security throughout an organisation.

ISO/IEC 20000 is the international standard for IT Service Management. It was developed from BS 15000, a British Standard that reflected the best practice guidance contained within the ITIL (Information Technology Infrastructure Library) framework. Like ISO27000, it requires an Information Management System, in this instance called a Service Management System.

Basically the ISO certification process requires that you have documented all your processes and roles, that you continuously monitor and improve them, and that you have a repository where you can store all of the evidence that you are doing this, which I will refer to as an Information Management System (IMS).

I believe that it’s important to differentiate between a process model and an organization structure. This may sound obvious, however it is worth being clear that the only correlation required between the process model and the organization structure is that there are defined owners for each process, and these people are in a position of suitable authority to carry out the processes they are responsible for and to optimize them for their organisation.

Figure 1 below illustrates VMware’s private cloud operations framework, which comprises the process areas and functional activity groupings recommended to build a mature, efficient, and agile cloud operations environment for our customers. The red highlighted box has been added to show that in an ISO-certified environment, you would need an additional cross-functional, central IMS in which to capture and evidence the required information for your continued certification audits.

 

Figure 1: VMware Private Cloud Operations Framework


While VMware’s private cloud model does not deliver an IMS in its entirety, the document packs that accompany the services our Operations Transformation Services team delivers can form the basis of a basic IMS, or can be overlaid on your existing IMS data by your IMS administrator. For example, the operating model has clear descriptions of the functional areas (tenant operations and cloud infrastructure operations) and has role packages for all of the core roles in our structure, including skills matrices and training plans. These are key requirements for both of the ISO standards, so they either overlay nicely onto your existing role packages if you are certified, or form an excellent baseline to start from.

The VMware private cloud operating model also defines the interactions and relative responsibilities between the roles in a RACI (Responsible, Accountable, Consulted, Informed) style chart, a further boon when it comes to your internal and external audits. The role packages and RACI chart do not list the specific responsibilities and activities that each standard looks for; these can be aligned specifically for your organisation.

Other key areas of compatibility to call out include Continuous Improvement and Security Management. Continuous Improvement – as well as being a core concept within both ISO standards – is central to the methodology used by VMware for operations transformation, as per our continuous improvement cycle diagram below. Security Management is of course a substantial topic in its own right, but with the VMware cloud operating model adopting a “security built in” approach and the focus on service management—as a cloud that is not secure or well managed is useful to no one—the natural relationship is self-evident.

Figure 2: Continuous Improvement Cycle


In summary, this alignment of core concepts between the VMware private cloud operating model and the requirements of the ISO standards makes them naturally compatible and complementary. This article only provides an introduction to these topics. Working with VMware to evolve your IT organisation to the future-facing private cloud model can benefit any standards-based compliance regime you have in place or plan to implement.

====
Craig Savage is a VMware operations transformation architect and is based in the UK. You can follow @craig_savage on Twitter.

Is Your IT Organization Ready to Deliver?

By Kevin Lees

I recently updated the white paper I wrote a couple of years ago — Organizing for the Cloud — which has been quite popular with our customers. The good news:

  • It’s shorter – condensed to really focus in on the areas that our customers have told us are the greatest help
  • The core concepts and models remain intact and have survived the test of time, and our customers continue to benefit from our best practice recommendations

From my perspective, there is no bad news; at least none that I could come up with. IT leaders continue to validate with me that a new organizational approach as well as their people—and their roles and responsibilities—are more important than ever.

While I wrote that the core concepts and models have survived the test of time, that’s not to say this is just a condensed version. I’ve updated a few sections based on new technical capabilities enabled by the SDDC and my experience working directly with customers, including:

  • The organizational impacts of the software-defined data center (SDDC) as the cloud infrastructure – including a couple of new roles
  • How to get started
  • An expanded section on key cross-team collaboration

Just to name a few.

Organizational change continues to be top of mind as IT executives implement SDDC as the infrastructure of choice for cloud and double down on cloud as the future of IT. Whatever the intended topic of conversation, 9 out of 10 discussions I have with customers about the operational implications of SDDC and cloud quickly turn to organizational implications.

Organizational change is a critical step to success in the new era of cloud. I hope you find this revision as useful as our customers found the original.

=====
Kevin Lees is principal architect for VMware’s global Operations Transformation Practice and is based in Colorado.

How to Measure the Impact of Your IT Transformation

By Matt Denton

Generally when a company makes a decision to move in a new direction, a lot of analysis and rigor take place to ensure the decision is the right one. Business cases are created and vetted until everyone is in agreement and the project is approved. This is all great and necessary to kick off a new initiative. However, once the project is in motion, how often do we measure the results against the original business case to see if we are delivering on what the company expected?

Think about it. A project gets kicked off and everyone is heads down implementing the new changes and making sure they meet their deadlines. Going back to review a business case is usually not a priority and, quite frankly, who has the time? But at some point, senior leadership will ask for an analysis, and one will be created to meet that one-time request. Then, it is back to business as usual—until the next request comes along.

What if you could measure the impact IT transformation has on the business proactively and in real time? Projects become more meaningful. Employees can see how their work is impacting the business. Transformation begins to make sense and can be justified. This can be done if you take the time to generate key performance indicators and metrics ahead of time.

Start by asking the team these questions at the beginning of a project:

  1. Why are we doing this?
  2. What are we trying to improve?
  3. How will we measure it?
  4. What is our current state benchmark?
  5. What is our target?
  6. How will we impact the business if we reach our target state?
  7. Do we have data to measure progress?
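Questions 4 through 7 above lend themselves to a simple progress calculation: how much of the gap between your current-state benchmark and your target has been closed so far. A minimal sketch, with assumed benchmark, target, and current values:

```python
# Hypothetical sketch: measure progress of a metric from its current-state
# benchmark toward its target, as a fraction of the planned improvement.
def progress(benchmark, target, current):
    """Fraction of the benchmark-to-target gap closed so far."""
    planned_change = target - benchmark
    if planned_change == 0:
        return 1.0  # nothing to improve
    return (current - benchmark) / planned_change

# Example: mean time to resolve incidents, in hours (assumed figures)
print(progress(benchmark=8.0, target=2.0, current=5.0))  # 0.5: halfway there
```

Because the formula works off the planned change, it handles metrics you want to drive down (resolution time) and metrics you want to drive up (availability) the same way.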

What Metrics Matter Most?
Usually I see companies measure progress based on financial metrics. For example, did we save the company money? However, there are hundreds of metrics that relate to agility, cost, and quality. The key is to pick those that are most impactful to the processes you expect to improve as part of the transformation. These may not all be financially driven, but will still have a measurable impact on the business.

Below are some other areas where you can measure business impact:

  • IT financial management
  • Service level management
  • Demand management
  • Service desk management
  • Incident management
  • Problem management
  • Change management
  • Configuration management
  • Availability management
  • Continuity management
  • Release management
  • Capacity management
  • Security management

Some of the metrics that fall into these categories are what I refer to as the “hard to quantify” or “soft” benefits. These are generally thrown out or overlooked during the business case development. I believe that once you can quantify these, you can translate them into real benefits and measure their impact on the business.

Provided the data exists, I’ve been able to help many clients track the metrics they decide to measure and demonstrate the impact IT transformation has on their company. By quantifying these metrics and showing the impact your improvements are making on the business, you will know at any given time whether the changes you are undertaking are making a difference or falling short of your expectations. You will also be able to identify whether additional changes are required to meet the project’s objective.

Too often, I see clients lose focus on the reason they started a project. This is easy to do on long projects: people change roles, leadership changes, or other projects take priority. Putting metrics in place and understanding their impact on the business will help you maintain that focus. The quantitative data gathered during and after implementation are important to measure the impact IT transformation has made on your business. The data you collect and analyze will begin to tell a story and allow you to make precise decisions about where additional improvements are needed to make the biggest impact.

======
Matt Denton is a VMware transformation architect and is based in Wisconsin.

Incorporating DevOps, Agile, and Cloud Capabilities into Service Design

By Reg Lo

Shadow IT is becoming more prevalent because the business demands faster time-to-market and the latest innovations—and business stakeholders believe their internal IT department is behind the times or hard to deal with (or both!). The new IT requires new ways of designing and quickly deploying services that will meet and exceed customer requirements.

Check out my recent webcast to learn how to design IT services that:
• Take advantage of a DevOps approach
• Embody an Agile methodology to improve time-to-market
• Leverage cloud capabilities, such as elastic capacity and resiliency
• Remove the desire of the business to use shadow IT

BrightTalk webinar

===
Reg Lo is the Director of the Service Management practice for VMware Accelerate Advisory Services and is based in California.

 

 

3 Key Trends for 2015: How to Keep Pace with the Rapidly Changing IT Landscape

By Craig Dobson

So much happened in 2014, and as the New Year begins, I’m looking forward to finding out what 2015 holds—both from a market and an industry perspective. One thing is for certain: the rapid changes we have seen in our industry will continue into the New Year. In fact, the pace of change is likely to accelerate.

I believe the following key trends will be shaping the IT landscape of 2015:

  • Increased application focus
  • Continued movement from CapEx to OpEx models (embracing “x-as-a-Service”)
  • Heightened focus on accurate measurement of the cost-of-IT

Let’s explore these trends in a little more detail.

Application Focus

Throughout 2014 I heard clients say: “it’s all about the application.” In the face of global competition, and with the rise of disruptive startups testing old school business models, the lines of business are seeking innovation, market differentiation, and quick response to changing market dynamics. They are driving IT—and all too frequently looking outside, to cloud-based solutions—to enable quick response to these dynamic changes, often at a lower entry cost.

In 2015, lines of business will prioritize and focus on the business applications that will support the goal of serving, winning, and retaining customers. Application portfolios will change to hybrid architectures that increasingly leverage x-as-a-service models. Supporting platform decisions (such as infrastructure and cloud) will be made based on application decisions. IT professionals will need to stay on top of evolving business applications in order to more effectively support the demands of the lines of business.

Moving from CapEx to OpEx

The appetite to consume anything-as-a-service from external providers has grown throughout 2014, and is now significantly shifting the IT funding model from three- to five-year CapEx investments to OpEx-based consumption models. This shift will accelerate in 2015, and will often be tied to shorter contract periods, with an increased focus on cost and an expectation of a continued improvement on cost-to-serve.

What is driving this change is a general acceptance by mainstream enterprise businesses and different levels of government (through policy changes) that cloud-based services make economic sense, combined with the fact that the business risk of consuming these services has decreased.

Accurate Measurement of the Cost-of-IT

With the shift from CapEx to OpEx models and the focus on the business value of the application lifecycle, the CIO will be under even more pressure to show value back to the lines of business. In 2015, as IT moves to become a full broker of services, or a portfolio manager of both internal and external services, delivering x-as-a-service capabilities, these new dynamics will demand more granular, real-time financial reporting at the service level for the consuming lines of business.

This increased financial awareness will give IT the ability to show value and to make apples-to-apples comparisons between internal IT and external services, as well as between suppliers.

In addition to the cost transparency measures, I believe we will also see an aggressive focus on driving down operational costs to allow the savings to be targeted at next-generation business applications.

Ready for 2015

Let’s face it — change is a given, and 2015 will be no exception for IT. Forward-thinking IT leaders will get ready to deliver applications that meet the dynamic demands of the business; x-as-a-service offerings that meet or exceed end-user requirements; and financial reporting capabilities that not only show end users what they’re paying for but also enable IT to quantify its value.


Craig Dobson is Senior Director of VMware Technical Services for the Asia Pacific region and is based in Sydney.

What Metrics Should Be Measured for Change Management?

By Kai Holthaus

That, of course, depends (favorite answer of consultants everywhere…). As an IT executive, start by asking yourself what you want to achieve. Once you select a critical success factor (CSF), key performance indicator (KPI), or associated metrics and start to report on those metrics, you will see two types of behavior within your IT organization.

Most of your employees will want to be good team players, and they will work to meet the desirable metrics (and avoid undesirable ones). For example, if you start reporting on the number of changes implemented without proper authorization (which could be discovered through configuration audits), and you start disciplining staff for implementing changes without authorization, you will see the number of unauthorized changes go down (in most cases). However, you will also find that some staff will try to game the system by implementing changes without proper authorization and then making it look in your tracking system as if they had authorization.

Also keep in mind that metrics can have unintended consequences. Sticking with the example of tracking (and trying to reduce) the number of unauthorized changes, you may be surprised to see the backlog of changes waiting to be approved grow, because your approval process was not yet ready to handle all the change going on in your environment. So it’s good practice to be prepared to adjust your metrics accordingly. This also applies to metrics that have been in place for a while. If you have driven the number of unauthorized changes to zero, and have held it there for the last 12 months, you may want to consider shifting your focus to other issues (but don’t lose sight completely…unauthorized changes can quickly creep back in).

Finally, make sure that you can actually measure the things you need to measure to report on CSFs and KPIs. Setting a goal of no unauthorized changes is laudable but will remain a goal until you have found a way to detect unauthorized changes.

To conclude, here are some examples of KPIs to consider for your change management process:
(Figure: example KPIs for the change management process)
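As a minimal illustration of how such KPIs can be computed, here is a sketch over a hypothetical set of change records; the record fields and figures are assumptions, not any particular tool's data model:

```python
# Hypothetical sketch: compute two common change management KPIs from a
# list of change records (the record fields here are assumptions).
changes = [
    {"id": "CHG-1", "authorized": True,  "successful": True},
    {"id": "CHG-2", "authorized": False, "successful": True},
    {"id": "CHG-3", "authorized": True,  "successful": False},
    {"id": "CHG-4", "authorized": True,  "successful": True},
]

total = len(changes)
unauthorized_rate = sum(1 for c in changes if not c["authorized"]) / total
success_rate = sum(1 for c in changes if c["successful"]) / total

print(f"Unauthorized change rate: {unauthorized_rate:.0%}")
print(f"Change success rate: {success_rate:.0%}")
```

The same pattern extends to other KPIs, such as emergency change rate or average approval lead time, as long as the underlying fields are actually captured, which is exactly the measurability point made above.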

=========

Kai Holthaus is a transformation consultant with VMware Accelerate Advisory Services and is based in Oregon.

Making IT Go Faster – Forrester Research Sheds Light on How

By Kurt Milne

Today’s IT managers face increasing pressure to be more responsive and move faster. However, most IT organizations have been built to promote control and safety. People, process, and tools have traditionally been deployed to strictly limit change in order to optimize service quality and efficiency. In fact, many of the most successful IT organizations have built their reputation by deploying elements of ITIL or other control frameworks to ensure critical system uptime.

The latest Forrester research lays out a path forward for IT organizations that want to increase agility without losing control.

It is easy to say, “Let’s use the cloud to move faster and be more responsive to the business.” But how do those with an investment in ITIL, or who have thoughtfully developed process control methodologies, adapt to new demands for speed, demands forcing IT to do things it may not be comfortable with? A new Forrester study based on interviews with 265 IT professionals in North America and Europe sheds some light on the best path forward.

Forrester found that:

  • IT organizations are quickly moving to on-demand, dynamic IT infrastructure
  • Users demand faster provisioning and want IT to be easy to consume
  • Those companies that have already deployed more dynamic change models are moving away from a centralized CMDB strategy
  • Developers are the primary consumers of ready-to-use application middleware stacks
  • IT can support rapid change without sacrificing configuration, compliance, and governance controls

If you have invested in IT process maturity and are looking to improve IT agility and deploy more automation without sacrificing control, read the full Forrester report.
----
Follow @kurtmilne on Twitter.