Home > Blogs > VMware CloudOps

Service Catalog Is The New Face of IT

By Choong Keng Leong

Many organizations on their journey to delivering IT as a service have chosen to adopt and implement VMware vCloud® Automation Center™ to automate the delivery and management of IT infrastructure and services through a unified service catalog and self-service portal. As this transformation requires a new IT operating model and a change in mindset, a common challenge that IT organizations encounter is:

  • How do I define and package IT services to offer and publish on the service catalog?

This is analogous to a mobile operator putting together a new mobile voice and data plan that the market wants and pricing it attractively.

Here’s a possible approach to designing a service catalog for vCloud Automation Center implementation.

Service Model
The service catalog is the new face of IT. It is a communication platform and central source of information about the services IT offers to the business. It also empowers users through an intuitive self-service portal that allows them to choose, request, track, and manage their consumption of and subscription to IT services.

The first step to developing the service catalog and identifying the services within it is to understand the business requirements and how those demands will be fulfilled — that is, to develop a service model. For example, you could start with a business function — Sales — and then pick a business process — client relationship management (CRM). CRM can be further broken down into three domains: operational CRM, collaborative CRM, and analytical CRM. Each of the CRM systems can be instantiated in different environments (production, test, and development). Each instance is technically implemented and delivered via a three-tier system architecture. The result, shown below in Figure 1, is a service model for CRM.


Figure 1. Service Model for CRM

Repeat the above steps for the other business functions. At the end of the exercise, you have defined service categories, catalog items, and service blueprints for implementation of a service catalog and self-service portal in vCloud Automation Center.
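The hierarchy behind Figure 1 can be sketched as a simple nested structure. This is a hypothetical illustration of the modeling exercise, not vCloud Automation Center's actual data model; all names are taken from the CRM example above.

```python
# Hypothetical sketch of the CRM service model described above:
# business function -> business process -> domain -> environment.
service_model = {
    "Sales": {                                   # business function
        "CRM": {                                 # business process
            "Operational CRM": ["Production", "Test", "Development"],
            "Collaborative CRM": ["Production", "Test", "Development"],
            "Analytical CRM": ["Production", "Test", "Development"],
        }
    }
}

def catalog_items(model):
    """Flatten the service model into catalog items, one per domain/environment."""
    items = []
    for function, processes in model.items():
        for process, domains in processes.items():
            for domain, environments in domains.items():
                for env in environments:
                    items.append(f"{function} / {process} / {domain} / {env}")
    return items

for item in catalog_items(service_model):
    print(item)
```

Walking the model this way yields the candidate catalog items (nine in the CRM example) that would then be grouped under service categories in the portal.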

Service Catalog
Using the above business-centric approach allows you to define a customer-friendly service catalog of business services. The service categories and catalog items are in business-familiar terms, and only relevant information is presented to the business user so as not to overwhelm him or her with the complexities of the underlying technologies.

The business services are provisioned using service blueprints, which are templates containing the complete service specifications, technical service levels (e.g., RTO, RPO, and IOPS), and infrastructure (e.g., ESXi cluster, block or file storage, and network). The service blueprints allow IT to automate provisioning through vCloud Automation Center. To maximize business benefits and optimize infrastructure resources, it is also important to establish a technical service catalog of technical capabilities and to pool infrastructure resources with similar capabilities. Then vCloud Automation Center can provision a service via its service blueprint to the most cost-effective resource pool while providing optimal performance.
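The placement decision described above can be sketched as a simple selection: given a blueprint's technical service levels, pick the cheapest resource pool whose capabilities satisfy them. The pools, numbers, and field names below are invented for illustration; in practice vCloud Automation Center handles this through its own blueprint and reservation mechanisms.

```python
# Hypothetical sketch: choose the cheapest resource pool that meets a
# blueprint's technical service levels. All names and numbers are illustrative.
pools = [
    {"name": "gold",   "iops": 20000, "rpo_minutes": 15,  "cost_per_hour": 2.40},
    {"name": "silver", "iops": 8000,  "rpo_minutes": 60,  "cost_per_hour": 1.10},
    {"name": "bronze", "iops": 2000,  "rpo_minutes": 240, "cost_per_hour": 0.45},
]

def place(blueprint, pools):
    """Return the cheapest pool whose capabilities satisfy the blueprint."""
    candidates = [
        p for p in pools
        if p["iops"] >= blueprint["min_iops"]
        and p["rpo_minutes"] <= blueprint["max_rpo_minutes"]
    ]
    if not candidates:
        raise ValueError("no resource pool satisfies the service levels")
    return min(candidates, key=lambda p: p["cost_per_hour"])

crm_test = {"min_iops": 5000, "max_rpo_minutes": 120}
print(place(crm_test, pools)["name"])  # silver: meets both constraints at lower cost
```

Here the test-environment blueprint lands on the "silver" pool: "bronze" fails the IOPS requirement, and "gold" also qualifies but costs more.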

In summary, using a business-centric approach to designing your service catalog elevates IT to speaking in business terms and provides a whole new IT experience to your users.

——-
Choong Keng Leong is an operations architect with VMware Professional Services and is based in Singapore. You can connect with him on LinkedIn.

The Operations Transformation Track at VMworld: What’s It All About?

As businesses move toward the software-defined data center model, it's not just the technology that changes. IT organizations must evolve – sometimes radically – to ensure a more service-oriented and business-value-focused way of operating. Sure, that's a logical argument, but how do you make sure those operational changes happen? How do you hire or retrain employees with the right skills to manage these new technologies? Which processes do you need to overhaul? Which processes are no longer necessary? Is your org chart even relevant anymore?

Come hear real-world experiences from companies like Boeing and McAfee, which have successfully shifted their operating models. Boeing’s Enes Yildirim explains how the company put itself on the path to multi-million dollar cost savings. McAfee’s Meerah Rajavel details how the security software firm turned the vision of IT transformation into business achievement.

Check out the SDDC > Operations Transformation track in the Schedule Builder for these and more sessions that will share best practices and key considerations to accelerate your journey to the software-defined data center.

7 Key Steps to Migrate Your Provisioning Processes to the Cloud

By David Crane

In an earlier blog, my colleague Andy Troup shared an experience where his customer wanted to embark on a process automation project, which could have had disastrous (and consequently frustrating and costly) results, as the process itself was inherently unsuitable for automation.

Automating processes is one of the first projects that organizations embark on once a cloud infrastructure is in place, but why? The answer lies in legacy IT organization structures that have typically operated in silos.

Many of the IT organizations that I work with have a number of groups such as service development, sales engineering, infrastructure engineering, and IT security that face similar challenges that can include (among many others):

  • Applications provisioned across multiple environments such as development, QA, UAT, sales demonstrations, and production
  • Managing deployments of application workloads in a safe and consistent manner
  • Balancing the speed and agility of deploying the services required to deliver and improve business results while meeting compliance, security, and resource constraints

With the agility that cloud computing offers, organizations look to the benefits that automating provisioning processes may bring to overcome the above challenges, such as:

  • Reduced cycle time and operating costs
  • Improved security, compliance, and risk management
  • Reduced capital and operating expenditures
  • Reduced need for management/human intervention
  • Improved deployment/provisioning time
  • Consistent quality in delivery of services

The IT organizations I work with are often sold these benefits without consideration of the operational transformation required to achieve them. Consequently, when the IT team kicks off a project to automate business processes, especially service provisioning, their focus is on the potential benefits that may be achieved. The result is that automation is treated as a panacea, rather than as something that should underpin the IT organization's overall operational transformation.

As IT leaders, when considering migrating your provisioning processes to your cloud environment, you need to realize that automation alone will not necessarily provide the cure to problems that exist within a process.

You should not consider the benefits of automation in isolation. For example, too much focus on cost reduction can frequently lead to compromises in other areas, leading to objections and resistance from business stakeholders. You should also consider benefits with intangible or indirect metrics, such as improved staff satisfaction. Automation frees your technical staff from repetitive (and uninteresting) activities, which results in both improved staff retention and an indirect cost benefit.

As you select processes to migrate from a physical (or virtual) environment to the cloud, the subsequent automation of those processes should not be an arbitrary decision. Frequently my clients choose processes as candidates for automation for reasons based on personal preferences, internal political pressures, or because some process owners shout louder than others!

Instead, the desired business benefits the organization wishes to achieve should be considered in conjunction with the characteristics, attributes, and measurable metrics of each process, and a formal assessment made of its suitability for automation.

Your automation project should also be implemented in conjunction with an organization structure assessment, which may require transformation and the introduction of new roles and responsibilities required to support the delivery of automated and self-optimizing processes.

Important Steps to Your Successful Process Assessment
Based on my experience assisting customers in this exercise, I recommend taking these steps before you embark on a process assessment:

  1. Understand automation and what it actually means and requires. Many organizations embark on automation without actually understanding what this means and the context of automated processes and their capabilities. Subsequent delivery then either leads to disappointment as automation does not meet expectations, or the process is not truly automated but instead has some automated features that do not deliver all the expected benefits.
  2. Identify and document the expected business benefits to be achieved through introduction of process automation. This is an important task. Without understanding the benefits automation is expected to achieve, you cannot identify which processes are the correct choices to help you do just that.
  3. Understand cloud infrastructure system management capabilities required to support process automation (e.g., ability to detect environmental changes, process throughput monitoring capability) and implement if required.
  4. Identify ALL processes required to support automated provisioning (e.g., instantiation, governance, approval) to create a process portfolio.
  5. Identify the common process automation characteristics that exist across the process portfolio (e.g., self-configuration, self-healing, self-audit and metric analysis). Note that process characteristics are unique, high-level identifiers of automation across the portfolio.
  6. Identify the common attributes that the process characteristics share. These are more granular than process characteristics and thus may be common to more than one characteristic in the same process.
  7. Identify the metrics available for each process in the portfolio, and apply a maturity assessment based on their ability to be measured and utilized. Metric maturity is an essential part of the assessment process as it determines not just the suitability of the process for automation, but also its capability to perform self optimization.
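Steps 4 through 7 above can be captured in a simple portfolio record per process. The sketch below is a hypothetical illustration of that bookkeeping, not a formal VMware schema; all process names, characteristics, and maturity scores are invented.

```python
# Hypothetical sketch of a process portfolio record capturing steps 4-7 above.
# Field names, processes, and scores are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ProcessRecord:
    name: str
    characteristics: set = field(default_factory=set)    # step 5: e.g. "self-healing"
    attributes: set = field(default_factory=set)         # step 6: more granular, shared
    metric_maturity: dict = field(default_factory=dict)  # step 7: metric -> 0..5 score

portfolio = [
    ProcessRecord(
        name="VM provisioning approval",
        characteristics={"self-configuration", "self-audit"},
        attributes={"policy-driven", "event-triggered"},
        metric_maturity={"cycle time": 4, "approval latency": 2},
    ),
    ProcessRecord(
        name="Capacity reclamation",
        characteristics={"self-healing"},
        attributes={"event-triggered"},
        metric_maturity={"reclaimed GB": 1},
    ),
]

# Step 6: attributes common across the whole portfolio
common_attrs = set.intersection(*(p.attributes for p in portfolio))
print(common_attrs)
```

Intersecting attributes across records surfaces the shared building blocks (here, "event-triggered"), while the per-metric maturity scores feed the weighting and scoring exercise that follows.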

Process Assessment Weighting and Scoring
When undertaking a process assessment program, an organization needs to understand what is important and prioritize accordingly. For example, if we consider the business benefits of automation, a managed service provider would probably prioritize business benefits differently to a motor trade retail customer.

Once you’ve prioritized your processes, they can be assessed more accurately and weighted based on each identified business benefit. Prioritization and weighting are essential, and you need to carefully consider the outcomes of this exercise in order for your process assessment to accurately reflect whether processes are suitable for automation.

And remember, as previously mentioned, avoid considering each assessment criterion in isolation. Each process characteristic and associated attribute can have a direct impact on the desired business benefit; however, if its metric maturity is insufficient to support it, the business benefit will not be fully achieved.

For example, let’s say that you have identified that a business process you wish to automate has a self-healing characteristic. One of the attributes the characteristic possesses is the ability to perform dynamic adjustment based on real-time process metrics. The characteristic and attribute would lead you to expect to realize benefits such as reduced cycle time, reduced OpEx, consistent quality of service, and improved staff retention.

However, although you’ve identified the metrics required to meet the characteristic and attribute needs, they are neither measured nor acted upon. Consequently, because the metric maturity level is low, the expected business benefit realization capability is also lowered.
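The weighting-and-scoring idea can be sketched numerically: each anticipated business benefit carries a weight reflecting what matters to this business, and its realization is scaled by the maturity of the metrics supporting it. The weights and maturity values below are invented for illustration, not drawn from any formal assessment model.

```python
# Hypothetical scoring sketch: benefit weights reflect business priorities,
# and realization is scaled by metric maturity. Numbers are illustrative only.
benefit_weights = {
    "reduced cycle time": 0.35,
    "reduced OpEx": 0.30,
    "consistent quality": 0.25,
    "improved staff retention": 0.10,
}

def automation_score(metric_maturity, weights):
    """Weighted benefit score, with each benefit scaled by maturity (0.0-1.0)."""
    return sum(w * metric_maturity.get(benefit, 0.0)
               for benefit, w in weights.items())

low_maturity  = {b: 0.2 for b in benefit_weights}   # metrics exist but go unmeasured
high_maturity = {b: 0.9 for b in benefit_weights}   # metrics measured and acted upon

print(round(automation_score(low_maturity, benefit_weights), 2))
print(round(automation_score(high_maturity, benefit_weights), 2))
```

The same characteristics and attributes thus produce very different scores depending on metric maturity, which is exactly the contrast between the two assessments discussed here.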

Figure 1 below shows a small sample of the assessment of a process in relation to a single process characteristic, its common attributes, and the anticipated business benefits with their weighting, along with the impact that poor metric maturity has on the capability to deliver the anticipated business benefit.

Figure 1. Assessment displaying impact of low metric maturity

Contrast this with Figure 2 below, which assesses a process with exactly the same characteristics, attributes, and business benefits, but with supporting management capabilities and consequently much improved metric maturity:

Figure 2. Assessment displaying impact of high metric maturity

Based on this small data sample, the process in Figure 2 is the more likely candidate for process automation. The assessment process also identifies, and allows the IT organization to focus on, the areas of remediation needed to optimize processes so that they become suitable automation candidates.

The result is that the IT organization can more effectively realize the business benefits promised by automation, and can also set realistic expectations with the business, which brings benefits of its own.

In summary, automation is not the “silver bullet” for broken or inefficient processes. IT leaders need to consider expected business benefits in conjunction with process characteristics, attributes, and metrics and in the context of what is important to the business. By assessing the suitability of a process for automation, you can save the cost of a failed project and disappointed stakeholders. Finally, you should not undertake any provisioning process project in isolation to other operations transformation projects, such as organization structure and implementation of cloud service management capabilities.

I will discuss the steps to success mentioned above in more detail in my next blog.

===

David Crane is an operations architect with the VMware Operations Transformation global practice and is based in the U.K.


3 Steps to Get Started with Cloud Event, Incident, and Problem Management

By Rich Benoit

We are now well entrenched in the Age of Software. Regardless of the industry, there is someone right now trying to develop software that will turn that industry on its head. Previously, companies worked with one app that carried its infrastructure along with it — all one technology, and one vendor's solution. Now there are tiers all over the place, and the final solution uses multiple components and technologies, as well as virtualization. Today's app is a shape shifter, one that changes based on the needs of the business. When application topology changes like this over time, it creates a major challenge for event, incident, and problem management.

Addressing that challenge involves three major steps that will affect the people, processes, and technologies involved in managing your app.

1. Visualize with unified view
The standard approach to monitoring is often component- or silo-focused. This worked well when apps were vertical, with an entire application on one server; but with a new, more horizontal app that spans multiple devices and technologies – physical, virtual, web – you need a unified view that shows all tiers and technologies of an application. That view has to aggregate a wide range of data sources in a meaningful way, and then identify new metrics and metric sources. The rule of thumb should be that each app gets its own set of dashboards: “big screen” dashboards for the operations center that show actionable information for event and incident management; detailed interactive dashboards that allow the application support team to drill down into their app; and management-level dashboards that show a summary business view of application health and KPIs.

By leveraging these dashboards, event and incident management teams can pull them up in real time to diagnose any issues that arise (see example below). Visualization is key in this approach, because it allows you to coordinate the data in a way that will actually allow for identification of events, incidents, and problems.

VMware® vCenter™ Operations Manager™ “big screen” dashboard

2. Aggregate
When you’re coordinating a number of distributed apps, establishing timelines and impact becomes a much more complicated process. Here’s where your unified view can start to help identify problems before they occur. Track any anomalies that occur, and then map them back to any changes that have happened. When I’m working with clients, I demonstrate the ability of VMware® vCenter™ Operations Manager™ to establish dynamic thresholds. The dynamic thresholds track what constitutes common fluctuations, and leverage those analytics to establish baselines around what constitutes “normal.” By looking at the overall data in the big picture, the software can avoid false triggering on normal events.
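The dynamic-threshold idea can be illustrated with a deliberately simplified sketch: learn what "normal" looks like from recent history, and flag only departures from it. vCenter Operations Manager's actual analytics are far more sophisticated; the metric name and values here are invented.

```python
# Simplified sketch of dynamic-threshold logic: baseline from recent history,
# flag only genuine departures. Illustrative only.
from statistics import mean, stdev

def is_anomaly(history, value, k=3.0):
    """Flag a value falling outside k standard deviations of the baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(value - baseline) > k * spread

# Hourly request latencies (ms) showing normal daily fluctuation
history = [102, 98, 105, 110, 95, 101, 99, 104, 97, 103]
print(is_anomaly(history, 108))  # within normal fluctuation -> False
print(is_anomaly(history, 160))  # genuine deviation -> True
```

A static threshold at, say, 109 ms would have fired on the normal reading of 110 in the history above; a baseline learned from the data does not.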

3. Leverage problem management
Ideally, you will be catching events and incidents before they result in downtime. However, that requires constantly looking for new metrics and metric sources to create a wider view of the app. Problem management teams should be trained to identify opportunities for new metrics and new metric sources. From there, the development team should take those new metrics and incorporate them into the unified view. When an issue occurs and you look for the root cause, also stop to see if any specific metrics changed directly before the problem occurred. Tracking those metrics could alert you to a possible outage before it occurs the next time. Problem management then becomes a feedback loop in which you identify the root cause, look at the surrounding metrics, and then update the workflows to identify precursors to problems.

This doesn’t require you to drastically change how you are managing problems. Instead, it just involves adding an extra analytics step that will help with prevention. The metrics you’re tracking through the dashboard will generally fall into three basic buckets:

  • Leading indicators for critical infrastructure
  • Leading indicators for critical applications, and
  • Metrics that reflect end-user experiences

Once you have established the value of finding and visualizing those metrics, the task of problem management becomes proactive, rather than reactive, and the added level of complexity becomes far more manageable.

—————-
Richard Benoit is an Operations Architect with the VMware Operations Transformation global practice and is based in Michigan.

Does IT Financial and Business Management Matter When Implementing Your Cloud?

By Khalid Hakim

This is one of the common questions that I keep getting from my clients when building IT operations transformation roadmaps. In fact, one of the key considerations of a transformation roadmap is the business management side of your cloud, which often gets deprioritized by key stakeholders.

Let’s think it through: Can you tell me on the spot what your total cloud spend is, and what that spend comprises? Here’s another one: What’s the cost for you to deliver a unit of infrastructure as a service (IaaS)? And what about your consumers: Who consumes what service, and at what cost?

Can you identify the services used and the cost allocation for each service? How is your cost efficiency compared to that of other public cloud infrastructures? How can you use that type of information to optimize the cost of your existing and future operations? How can you create a showback report to each of your stakeholders?
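A showback report of the kind asked about above can be sketched as a simple usage-times-rate allocation. The unit rates, metrics, and consumer names below are hypothetical; the VMware IT Business Management Suite mentioned later provides this kind of capability in practice.

```python
# Minimal showback sketch: allocate cloud cost to consumers by metered usage.
# Unit rates, metrics, and consumers are hypothetical illustrations.
unit_rates = {"vm_hours": 0.12, "storage_gb": 0.05, "network_gb": 0.02}

usage = {
    "Sales":       {"vm_hours": 4000, "storage_gb": 1200, "network_gb": 800},
    "Engineering": {"vm_hours": 9000, "storage_gb": 5000, "network_gb": 300},
}

def showback(usage, rates):
    """Cost per consumer: sum of metered units times the unit rate."""
    return {
        consumer: round(sum(units * rates[metric]
                            for metric, units in metered.items()), 2)
        for consumer, metered in usage.items()
    }

for consumer, cost in showback(usage, unit_rates).items():
    print(f"{consumer}: ${cost}")
```

Even a toy allocation like this answers the "who consumes what service, and at what cost" question, and the per-unit rates make cost comparisons against public cloud offerings straightforward.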

Let’s say that you’re the VP of Cloud Infrastructure — think through how you would justify your data center investments. Have you proactively analyzed demand vs. capacity along with the operations cost of your cloud services? How can you scale dynamically to fulfill your consumer needs? Have you thought about your goal to optimize the cost of delivering cloud services? Don’t you need closer monitoring of the quality of your delivery, such as continuous analysis and improvement?

(I can hear you thinking, enough with the questions already…)

If you are the IT financial manager, imagine how you can reduce your long-term commitments by moving from CAPEX to OPEX if appropriate. With financial and business management capabilities, your planning and IT budgeting would be based on actual cloud service demand and consumption practices. You would also be able to leverage benchmarking and “what if” scenarios for your cloud service costing optimization opportunities.

What keeps CIOs awake at night during a cloud implementation is the challenge of how they will demonstrate and deliver value for the cloud investment, as well as contribute as an innovator to the business by dynamically supporting growth and transformation as a result of their cloud cost optimization.

In fact, your ability to respond at the speed of business through fact-based decision making and responsiveness to changing needs in a dynamic environment is key to your success. The transparency of your cloud delivery value in context of demand, supply, cost, and quality will help improve your alignment with business goals to cloud services delivered.

In my next blog, I’ll cover some of the built-in functionality and business disciplines of the VMware IT Business Management Suite that can help you succeed in your cloud delivery and accelerate time to value.

——–
Khalid Hakim is an operations architect with the VMware Operations Transformation global practice. You can follow him on Twitter @KhalidHakim47.

Check out the VMworld 2014 Operations Transformation track for opportunities to hear from experienced VMware experts, practitioners, and the real-world experiences of customers transforming their IT infrastructure and operational processes.


The People and Process Behind the Service Portal

By David Crane

As more IT organizations move away from the traditional, siloed model of IT and toward becoming a service provider, new questions arise. Running a smooth, cost-effective, efficient service portal can ease a number of the issues that IT faces, but only if done correctly.

The portal serves as the interface that helps consumers navigate through available service options and select them as needed. Behind the scenes, IT serves as a contractor, comparing service requirements against different capabilities that may be internal, on premises, or from other providers. The user doesn’t care, as long as they are getting what they need.

So you have a portal, and you have a cloud. Now what?

Consistently Capture Service Requirements
With the right foundation, managing the service portal can be a smooth process. The first step is to understand the unique requirements that your users have, and deliver the resources that are going to meet their needs. The best way to understand that is to step outside the traditional organizational silos and engage directly with the lines of business.

Once you understand the various service needs, create service charts, such as the example below:

These will serve to identify all the different components required in each service. Most of these components will be common across different services, and can then be built out separately. Take a “cookie cutter” approach to these components, so that when mixed and matched they will create the services needed. Part of correctly understanding these components will involve a deeper understanding of the service definition process. What tasks will need to happen across all operational levels? Who will be responsible for those tasks?
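The "cookie cutter" idea above can be sketched as components defined once and mixed and matched into service definitions. The component names, owners, and tasks below are invented for illustration.

```python
# Hypothetical sketch: components defined once, composed into services.
# All names, owners, and tasks are illustrative.
components = {
    "web_tier": {"owner": "infrastructure engineering", "tasks": ["deploy", "load-balance"]},
    "app_tier": {"owner": "service development",        "tasks": ["deploy", "configure"]},
    "db_tier":  {"owner": "infrastructure engineering", "tasks": ["deploy", "backup"]},
    "security": {"owner": "IT security",                "tasks": ["harden", "audit"]},
}

services = {
    "CRM production": ["web_tier", "app_tier", "db_tier", "security"],
    "Sales demo":     ["web_tier", "app_tier"],
}

def service_chart(service, components, services):
    """Expand a service into its components, owners, and operational tasks."""
    return {name: components[name] for name in services[service]}

for name, comp in service_chart("Sales demo", components, services).items():
    print(name, "->", comp["owner"], comp["tasks"])
```

Expanding a service this way answers both questions above at once: which tasks must happen across operational levels, and which team owns each of them.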

Right People, Right Services
Oftentimes, IT organizations feel anxiety about the level of automation that stands behind a portal. It’s challenging to think of users that have previously been carefully walked through specialized processes suddenly having the ability to requisition services through an automated process. Creating clearly defined roles and restricting access to the catalog based on those roles can alleviate these fears.

Once you have the roles defined, deploy provisioning groups to different IT resource consumers. Allow these provisioning groups to handle the issue of deployment capabilities and instead focus on using policies to govern how those deployments will take place. Use the defined roles for the portal to determine which users can perform which actions within their environment. The policies will dictate which components will be required in each context.
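The role-based restriction described above amounts to a simple entitlement check: a catalog action is allowed only if the requesting user's role includes it. The roles and actions below are invented for illustration; vCloud Automation Center implements this through its own entitlement model.

```python
# Sketch of role-based catalog entitlement. Roles and actions are invented.
role_entitlements = {
    "developer":   {"request_dev_vm", "destroy_own_vm"},
    "qa_engineer": {"request_dev_vm", "request_qa_env"},
    "ops_admin":   {"request_dev_vm", "request_qa_env", "request_prod_change"},
}

def can_perform(role, action):
    """True if the role's entitlements include the requested catalog action."""
    return action in role_entitlements.get(role, set())

print(can_perform("developer", "request_prod_change"))  # False
print(can_perform("ops_admin", "request_prod_change"))  # True
```

Keeping the check this explicit is what calms the anxiety about self-service: automation executes the request, but the role decides whether the request is ever offered.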

When Is a Service Ready to Go in the Catalog?
Some IT organizations, once their portal is set up, try to lump the service portfolio process in with the service catalog management responsibilities. This can lead to frustration and inefficiency down the line, and can undermine the cost savings and automation value that the cloud provides. Instead, use your senior technical resources to create the service definitions and components. This will be the best use of their skills, and is also the work that they are going to find challenging and interesting.

Once that is done, more junior resources can combine and deploy those components into the catalog. It becomes a simple process of handing the service configuration document to the person responsible for deployment.

Integrated Transition to Catalog
The transition process — getting services out of the portfolio and into the catalog — can be difficult and technical. Avoid a lot of the messiness by getting operational input early in the process so that all the requirements are understood up front. Here, again, is where it’s important to keep your senior resources working on high-level issues: getting components aligned to the corporate enterprise structure, security, and any other issues that require IT’s attention. If the components are aligned to the business needs, the services that are composed of those components will also align by default.

Once the business and IT agree that there is a need for a service, the service owner and service architect should ensure that the required components exist. For any component, security, access policies, and provisioning processes should already be determined — no need for testing, change process, or QA. From there the service architect can take the components and create the service configuration. Keep this streamlined and simple.

New Roles
Making all this work smoothly requires some new roles within the organization. A customer relationship manager (CRM) will act as the interface between the tech teams and the consumer. The CRM captures the requirements, keeps the consumer happy, and keeps IT aligned and communicating with the business. Unlike a managed service provider, the CRM should operate within the cloud tenants’ team to ensure an understanding of internal IT. The service owner, discussed above, is responsible for taking the requirements gathered and doing something with them, including negotiating contracts with the cloud providers.

The service portfolio manager will know the portfolio inside and out, and create a standardized environment. The service architects will combine components and author a configuration document whenever a new service is required. The service QA will test the created services. The service admin will be responsible for taking the configuration requirements and deploying into the catalog.

The service portal should serve as a powerful tool that connects consumers, both internal and external, with the services they need. By building strong component foundations, creating well-defined roles, and assigning resources where they will be most effective, IT organizations can ensure that their portal process runs smoothly and efficiently.


David Crane is an operations architect with the VMware Operations Transformation global practice and is based in the U.K.

A New Kind of Hero

By Aernoud van de Graaff

In every IT organization there is that special person. Let’s call him Phil. Phil is the kind of person who…
- knows everything
- can fix anything
- knows everybody
- and gets things done in hours that would take others days or weeks to accomplish.

Phil is a hero. If the system goes down, call him, and he will fix it — even if it takes him all night. You can call Phil any time of the day, and he will rise up to the challenge and make sure that the system is up and running in no time. And you love him for it.

One day before a project was to go live, the project leader found out that implementing the new application required a firewall rule change, which normally takes two weeks. He called Phil, and after a friendly cup of coffee with the network guys, the firewall change was made in just two days (instead of two weeks).

But during the installation of the application, the test database was accidentally connected to the application instead of the production database. Chaos ensued. The lines of business (LOBs) were furious because customers were getting the wrong orders and threatening to end their contracts. Phil fixed it within a day — and though a lot of money was lost due to wrong orders, not one customer ended its contract. It could have been much worse, had it not been for Phil.

Because of his history of “saved the day” work, Phil gets recognized in the organization — not just within IT, but also with the LOBs. Everyone knows and loves Phil, and he gets high scores on his reviews. After a particularly nasty problem took him an entire weekend to fix, Phil got an extra bonus and a nice weekend with his family in a 5-star hotel. He earned it.

As we said before, Phil is our hero.

And then there is John. John has a vision. He is convinced that things need to change. The current environment is far too complex and needs to be simplified and standardized. Also, most things are done manually, taking way too much time and causing a lot of problems — because people make mistakes.

John envisions a world where IT users have access to a portal that will automatically provision what they need, without any human intervention. Applications, workplaces, middleware, servers, storage, network, and security — all at the push of just a few buttons. The user simply classifies the business requirements, and the policies will ensure the application is provisioned to meet the requirements. John did his research and found that this way, implementation errors can be nearly eliminated and time-to-market greatly improved.

No longer would IT need loads of people monitoring and managing the operations and health of the infrastructure. No more firefighting 24X7. Most things would be handled by the tooling. If manual intervention were required, the tooling would give context and advise on where to look and what could be done. This may be the opportunity to put Phil to good use. Instead of spending time fixing things, Phil can start creating solutions that would add real value to the business.

Phil is the only one with the technical skills and the influence with the CIO to bring this vision to reality, so John sets up a meeting with him. John explains to Phil all the advantages that standardization, virtualization, and automation will bring to the organization. No more weekends spent fixing problems, no more last-minute interventions. Everything will run smoothly. The business will profit because the quality of service will go up, costs will go down, and IT can change at the speed of the business — speeding up innovation and time-to-market. John asks Phil to help design and build the solution and to support him in presenting it to the CIO.

But Phil is not as enthusiastic as John had expected. In Phil’s opinion, no tool can replace the knowledge he has of the IT environment. And he is already far too busy with today’s work to spend time on John’s project. Automation may be useful, but in Phil’s view virtualization and automation will only make things worse for him. He would lose insight into where the applications are running, and he fears he could no longer fix things because the environment would change constantly based on rules a tool decides are useful. Phil states that IT should stay the way it is, where he knows all the little nooks and crannies and how to fix them. He will not support John’s initiative and will advise the CIO to maintain the status quo.

John is surprised. He does not understand why Phil would not want to improve the current situation. Having IT run smoothly may not provide as many opportunities to save the day, but it will ensure that IT truly enables the business.

It is time for a new kind of hero.

—–
Aernoud van de Graaff is a business solutions architect with VMware Accelerate Advisory Services and is based in the Netherlands. You can follow him on Twitter @aernoudvdgraaff 

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

VMware #1 in IDC Worldwide Datacenter Automation Software Vendor Shares

Today’s VMware Company Blog announces that market research firm IDC has named VMware the leading datacenter automation software vendor based on 2013 software revenues.(1)

IDC’s report, “Worldwide Datacenter Automation Software 2013 Vendor Shares,” determined that VMware’s lead in 2013 jumped 65.6 percent over 2012 results and its market share now stands at 24.1 percent, more than 10 percentage points above the second-place vendor. Overall, the worldwide market for datacenter automation grew by 22.1 percent to $1.8 billion in 2013. Download the full IDC report here.

(1)   IDC, “Worldwide Datacenter Automation Software 2013 Vendor Shares,” by Mary Johnston Turner, May 2014

How IT Can Transform “Trust Debt” into True Business Alignment

By Kevin Lees

In my previous post, 5 Ways Cloud Automation Drives Greater Cost and Operational Transparency, I wrote about how automation can help alleviate tension between IT and the lines of business. Let’s continue to explore that theme as we look into ways to bring about tighter alignment between business objectives and IT capabilities.

“Instead of cost centers that provide capabilities, IT organizations must become internal service providers supplying business-enabling solutions that drive innovation and deliver value…true business partners rather than increasingly irrelevant, cost-centric technology suppliers.” This quote comes from the white paper How IT Organizations Can Achieve Relevance in the Age of Cloud[1], which provides insight into the ways IT needs to change to become a true partner with the entire business to help meet overall objectives.

So to get to that place of true partnership and business agility, all IT has to do is become an internal service provider and deliver business-enabling solutions, and then the business will regard IT as a true partner, right? If only it were that simple.

While there’s nothing wrong with the goal of adding business value and increasing responsiveness to business requirements, there is another problem that has been largely overlooked: The “trust debt” that has built up between IT and its business customers.

As a result of the way it has operated in the past, IT must overcome trust debt to gain true business alignment. Business alignment must be achieved before the optimal “business-enabling solutions” can be designed, developed, and deployed to meet business users’ needs.

As is the case with financial debts, IT must make payments on this accumulated trust debt, with interest. The interest comes in when IT must go above and beyond end-user expectations to prove its willingness and ability to ensure that technology helps, rather than hinders, the business. The payments themselves can take many forms, including implementing new technology that delivers new capabilities, demonstrating a service-oriented mindset, or even taking the extra step of becoming truly transparent.

Overcoming Trust Debt: Starting Point and First Steps
Making any change starts with a bit of exploration and personal reflection. Ultimately technology and IT’s role as a whole is about meeting the needs of the business at the speed business requires. This, of course, demands greater agility — enabled by the ability to offer cloud computing capabilities on top of a software-defined data center.

To overcome trust debt, IT must first get out of its comfort zone, which is firmly rooted in enabling technology. IT leaders may first need to ask: What is IT “enabling” with technology?

Start with the Stack
Let’s face it: Agility demands a dynamic technology stack. Being dynamic at the level the business requires today can only be achieved in software; hardware is too static and difficult to change. A software-defined data center uses a fully virtualized stack that can change quickly and dynamically to meet the needs of the business.

Automation, coupled with the key cloud capability of self-service, on-demand provisioning, provides agility. More than anything else, automated self-service, on-demand provisioning can be the compelling reason businesses are drawn to the cloud. Imagine what would happen if business constituents could select the service offering to deploy, along with the level of service they desire, and a short time later have a virtual server available running that service (a marketing demo, for example). That’s a huge win and a step closer to eradicating trust debt.

This level of service alone could become IT’s calling card. The marketing demo example mentioned above is not hypothetical—I saw this recently at a large financial institution. A marketing team needed to stand up a demo that customers could access externally so they could beat the competition to market. Traditional IT said that demo could be available in about six weeks. But the marketing person driving the initiative had heard about this thing called a cloud that had been set up in a separate IT initiative. She contacted the responsible IT team who gave her access. Within 24 hours she had her demo up with customers actively using it.
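To make the contrast concrete, here is a back-of-the-envelope sketch. Every duration below is an assumption for illustration only, loosely calibrated to the six-weeks-versus-24-hours story above:

```python
# Rough per-step durations in hours; all values are illustrative assumptions.
DURATION_H = {
    "request": 0.1, "policy-check": 0.1, "provision": 0.5, "notify": 0.1,
    "ticket": 24, "approval": 96, "procurement": 336, "build": 480,
    "handover": 72,
}

AUTOMATED_STEPS = ["request", "policy-check", "provision", "notify"]
MANUAL_STEPS = ["request", "ticket", "approval", "procurement",
                "build", "handover"]

def elapsed(steps):
    """Total elapsed hours for a sequence of steps."""
    return sum(DURATION_H[s] for s in steps)

print(f"automated path: {elapsed(AUTOMATED_STEPS):.1f} hours")   # 0.8 hours
print(f"manual path:    {elapsed(MANUAL_STEPS) / 24:.0f} days")  # 42 days, about six weeks
```

The point is not the exact numbers but where the time goes: in the manual path, nearly all of it is spent waiting on tickets, approvals, and procurement rather than on the provisioning work itself.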

This one occurrence launched the company’s cloud initiative. Word spread like wildfire throughout the organization and demand ramped so quickly that IT had to gate it to bring on more infrastructure. (If only they’d had a hybrid cloud!)

Simply put, agility sold the cloud. And what better way to regain trust and create new opportunities to drive business alignment?

——
Kevin Lees is Global Principal Architect, Operations Transformation Practice. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.


 [1] CIO white paper: “How IT Organizations Can Achieve Relevance in the Age of Cloud,” 2013

Guidance for Major Incident Management Decisions

By Brian Florence

If you’re an IT director or CIO of a corporation with large, business-critical environments, you’re very aware that if those environments are unavailable for any length of time, your company loses a lot of money (potentially millions of dollars) for every minute of that downtime.

Most of my IT clients manage multiple environments, many of which fall into the business-critical category. One proactive step is to define “key” or “critical” environments, each of which can be assigned to a specific individual accountable for restoring service in that environment.

The Information Technology Infrastructure Library (ITIL) defines a typical incident management process as one designed to restore services as quickly as possible, while a “major incident” management process focuses specifically on business-critical service restoration. When incidents cause major business impact beyond what typical major incident management functions can handle, it’s important to pinpoint accountability (special attention, even beyond the regular major incident process) for those business-critical environments where your company would experience a significant loss of capital or critical functionality.

The First Responder Role

Under multiple business-critical environment scenarios, each major environment is assigned a first responder who assumes the major incident lead role, providing accountability and leadership. These accountabilities typically go over and above the normal incident management processes for which an incident manager and/or major incident manager may be responsible. The first responder’s accountabilities are to:

  • Restore service for incidents that fall into the agreed-upon top-priority assignment (P0/P1 or S0/S1, depending on whether priority or severity is the chosen terminology), and handle all technical support team escalations and communications to management regarding incident status and, once resolved, follow-up.
  • Create documentation to guide the service restoration process (often referred to as a playbook, or by another name unique to each major environment), specifying contacts for technical teams, major incident management procedures for that specific environment, the critical infrastructure components that make up the environment, and any other environment-specific details needed for prompt service restoration and understanding of the environment.
  • Develop the post-incident review process and communications, including the follow-up problem management process (in coordination with any existing problem management team), to ensure its successful completion and documentation.
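The triage rule implied by the first accountability above can be sketched as follows. The environment names, responder names, and priority scheme are hypothetical, and no specific ITSM tool is assumed:

```python
from dataclasses import dataclass

# One accountable first responder per designated business-critical environment.
FIRST_RESPONDERS = {
    "crm-prod": "alice",
    "payments-prod": "bob",
}

@dataclass
class Incident:
    environment: str
    priority: str  # "P0"/"P1" are the agreed-upon top priorities

def assign_lead(incident: Incident) -> str:
    """Route top-priority incidents in business-critical environments to
    their first responder; everything else follows the normal process."""
    if incident.priority in ("P0", "P1") and incident.environment in FIRST_RESPONDERS:
        return FIRST_RESPONDERS[incident.environment]
    return "standard-incident-queue"

print(assign_lead(Incident("crm-prod", "P1")))  # alice
print(assign_lead(Incident("crm-prod", "P3")))  # standard-incident-queue
```

The key design point is that the mapping is agreed on in advance, so when a P0/P1 hits, no time is lost deciding who is accountable.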

I also recommend that this primary process management role of accountability be assigned to someone familiar with all of the components and processes of the specific environment they are responsible for, so the management process can run as smoothly as possible for business-critical incidents.

Reducing the Business Impact of Major Incidents

With a first responder in place, the procedure for resolving major incidents is more prescribed. With each major incident, your company learns what is causing incidents and, most importantly, has a documented process in place for resolution. Ultimately, incidents are resolved faster and more efficiently, your company avoids costly losses of critical functionality or capital due to downtime, and it is better positioned to avoid similar incidents in the future.

The business increasingly looks to IT to drive innovation. By keeping business-critical environments available, you can deliver on business goals that contribute to the bottom line.

—–
Brian Florence is a transformation consultant with VMware Accelerate Advisory Services and is based in Michigan.