
Is Your IT Organization Ready to Deliver?

By Kevin Lees

I recently updated the white paper I wrote a couple of years ago — Organizing for the Cloud — which has been quite popular with our customers. The good news:

  • It’s shorter – condensed to focus on the areas our customers have told us are the most helpful
  • The core concepts and models remain intact and have survived the test of time, and our customers continue to benefit from our best practice recommendations

From my perspective, there is no bad news; at least none I could come up with. IT leaders continue to confirm to me that a new organizational approach, as well as their people—and their roles and responsibilities—are more important than ever.

While I wrote that the core concepts and models have survived the test of time, that’s not to say this is just a condensed version. I’ve updated a few sections based on new technical capabilities enabled by the SDDC and my experience working directly with customers, including:

  • The organizational impacts of the software-defined data center (SDDC) as the cloud infrastructure – including a couple of new roles
  • How to get started
  • An expanded section on key cross-team collaboration

Just to name a few.

Organizational change continues to be top of mind as IT executives implement SDDC as the infrastructure of choice for cloud and double down on cloud as the future of IT. Whatever the intended topic of conversation, nine out of ten discussions I have with customers about the operational implications of SDDC and cloud quickly turn to organizational implications.

Organizational change is a critical step to success in the new era of cloud. I hope you find this revision as useful as our customers found the original.

=====
Kevin Lees is principal architect for VMware’s global Operations Transformation Practice and is based in Colorado.

How to Measure the Impact of Your IT Transformation

By Matt Denton

Generally when a company makes a decision to move in a new direction, a lot of analysis and rigor take place to ensure the decision is the right one. Business cases are created and vetted until everyone is in agreement and the project is approved. This is all great and necessary to kick off a new initiative. However, once the project is in motion, how often do we measure the results against the original business case to see if we are delivering on what the company expected?

Think about it. A project gets kicked off and everyone is heads down implementing the new changes and making sure they meet their deadlines. Going back to review a business case is usually not a priority and, quite frankly, who has the time? But at some point, senior leadership will ask for an analysis, and one will be created to meet that one-time request. Then, it is back to business as usual—until the next request comes along.

What if you could measure the impact IT transformation has on the business proactively and in real time? Projects become more meaningful. Employees can see how their work is impacting the business. Transformation begins to make sense and can be justified. This is possible if you define key performance indicators (KPIs) and metrics ahead of time, as the sketch after the questions below illustrates.

Start by asking the team these questions at the beginning of a project:

  1. Why are we doing this?
  2. What are we trying to improve?
  3. How will we measure it?
  4. What is our current state benchmark?
  5. What is our target?
  6. How will we impact the business if we reach our target state?
  7. Do we have data to measure progress?
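As a minimal sketch of how the answers to questions 3 through 7 can become something trackable, consider the following Python snippet. The Kpi class, its field names, and the sample figures are illustrative assumptions, not a prescribed tool.

from dataclasses import dataclass

@dataclass
class Kpi:
    """One metric a transformation project commits to improving."""
    name: str
    benchmark: float  # current-state value (question 4)
    target: float     # target value (question 5)
    current: float    # latest measurement (question 7)

    def progress(self) -> float:
        """Fraction of the benchmark-to-target gap closed so far."""
        gap = self.target - self.benchmark
        return 1.0 if gap == 0 else (self.current - self.benchmark) / gap

# Illustrative figures: cut average provisioning time from 8 weeks to 1 week.
kpi = Kpi("Average provisioning time (days)", benchmark=56, target=7, current=21)
print(f"{kpi.name}: {kpi.progress():.0%} of the way to target")

Reviewing a handful of such objects on a regular cadence is one lightweight way to keep the original business case in view long after kickoff.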

What Metrics Matter Most?
Usually I see companies measure progress based on financial metrics. For example, did we save the company money? However, there are hundreds of metrics that relate to agility, cost, and quality. The key is to pick those that are most impactful to the processes you expect to improve as part of the transformation. These may not all be financially driven, but will still have a measurable impact on the business.

Below are some other areas where you can measure business impact:

  • IT financial management
  • Service level management
  • Demand management
  • Service desk management
  • Incident management
  • Problem management
  • Change management
  • Configuration management
  • Availability management
  • Continuity management
  • Release management
  • Capacity management
  • Security management

Some of the metrics that fall into these categories are what I refer to as the “hard to quantify” or “soft” benefits. These are generally thrown out or overlooked during the business case development. I believe that once you can quantify these, you can translate them into real benefits and measure their impact on the business.

Provided the data exists, I’ve been able to help many clients track the metrics they decide to measure and demonstrate the impact IT transformation has on their company. By quantifying these metrics and showing the impact your improvements are making on the business, you will know at any given time whether the changes you are undertaking are making a difference or falling short of your expectations. You will also be able to identify whether additional changes are required to meet the project’s objective.

Too often, I see clients lose focus on the reason they started a project. This is easy to do on long projects. People change roles, leadership changes, or other projects take priority. Putting metrics in place and understanding their impact on the business will help you maintain that focus. The qualitative data gathered during and after implementation is important for measuring the impact IT transformation has made on your business. The data you collect and analyze will begin to tell a story and allow you to make precise decisions on where additional improvements are needed to make the biggest impact.

======
Matt Denton is a VMware transformation architect and is based in Wisconsin.

Incorporating DevOps, Agile, and Cloud Capabilities into Service Design

By Reg Lo

Shadow IT is becoming more prevalent because the business demands faster time-to-market and the latest innovations—and business stakeholders believe their internal IT department is behind the times or hard to deal with (or both!). The new IT requires new ways of designing and quickly deploying services that will meet and exceed customer requirements.

Check out my recent webcast to learn how to design IT services that:
  • Take advantage of a DevOps approach
  • Embody an Agile methodology to improve time-to-market
  • Leverage cloud capabilities, such as elastic capacity and resiliency
  • Remove the desire of the business to use shadow IT


===
Reg Lo is the Director of the Service Management practice for VMware Accelerate Advisory Services and is based in California.

3 Key Trends for 2015: How to Keep Pace with the Rapidly Changing IT Landscape

By Craig Dobson

So much happened in 2014, and as the New Year begins, I’m looking forward to finding out what 2015 holds—both from a market and an industry perspective. One thing is for certain: the rapid changes we have seen in our industry will continue into the New Year. In fact, the pace of change is likely to accelerate.

I believe the following key trends will be shaping the IT landscape of 2015:

  • Increased application focus
  • Continued movement from CapEx to OpEx models (embracing “x-as-a-Service”)
  • Heightened focus on accurate measurement of the cost-of-IT

Let’s explore these trends in a little more detail.

Application Focus

Throughout 2014 I heard clients say: “It’s all about the application.” In the face of global competition and with the rise of disruptive startups testing the old school business models, the lines of business are seeking innovation, market differentiation, and quick response to changing market dynamics. They are driving IT—and all too frequently looking outside, to cloud-based solutions—to enable quick response to these dynamic changes, often at a lower entry cost.

In 2015, lines of business will prioritize and focus on the business applications that will support the goal of serving, winning, and retaining customers. Application portfolios will change to hybrid architectures that increasingly leverage x-as-a-service models. Supporting platform decisions (such as infrastructure and cloud) will be made based on application decisions. IT professionals will need to stay on top of evolving business applications in order to more effectively support the demands of the lines of business.

Moving from CapEx to OpEx

The appetite to consume anything-as-a-service from external providers grew throughout 2014, and is now significantly shifting the IT funding model from three- to five-year CapEx investments to OpEx-based consumption models. This shift will accelerate in 2015, and will often be tied to shorter contract periods, with an increased focus on cost and an expectation of continued improvement in cost-to-serve.

What is driving this change is a general acceptance by mainstream enterprise businesses and different levels of government (through policy changes) that cloud-based services make economic sense, combined with the fact that the business risk of consuming these services has decreased.

Accurate Measurement of the Cost-of-IT

With the shift from CapEx to OpEx models and the focus on the business value of the application lifecycle, the CIO will be under even more pressure to show value back to the lines of business. In 2015, as IT moves to become a full broker of services or portfolio manager (for both internal and external services) delivering x-as-a-service capabilities, these new dynamics will demand granular, real-time financial reporting at the service level for the consuming lines of business.

This increased financial awareness will enable IT to show value and to make apples-to-apples comparisons between internal IT and external services, as well as between suppliers.

In addition to the cost transparency measures, I believe we will also see an aggressive focus on driving down operational costs to allow the savings to be targeted at next-generation business applications.

Ready for 2015

Let’s face it — change is a given, and 2015 will be no exception for IT. Forward-thinking IT leaders will get ready to deliver applications that meet the dynamic demands of the business; x-as-a-service offerings that meet or exceed end-user requirements; and financial reporting capabilities that not only show end users what they’re paying for but also enable IT to quantify its value.


Craig Dobson is Senior Director of VMware Technical Services for the Asia Pacific region and is based in Sydney.

What Metrics Should Be Measured for Change Management?

By Kai Holthaus

That, of course, depends (favorite answer of consultants everywhere…). As an IT executive, start by asking yourself what you want to achieve. Once you select critical success factors (CSFs), key performance indicators (KPIs), or associated metrics and start to report on them, you will see two types of behavior within your IT organization.

Most of your employees will want to be good team players, and they will work to meet the desirable metrics (and avoid undesirable ones). For example, if you start reporting on the number of changes implemented without proper authorization (which could be discovered through configuration audits), and you start disciplining staff for implementing changes without authorization, you will see the number of unauthorized changes go down (in most cases). However, you will also find that some staff will try to game the system by implementing changes without proper authorization, then making it look in your tracking system as if they had the authorization.

Also keep in mind that metrics can have unintended consequences. Sticking with the example of tracking (and trying to reduce) the number of unauthorized changes, you may be surprised to see the backlog of changes waiting to be approved grow, because your approval process was not yet ready to handle all the change going on in your environment. So, it’s a good practice to be prepared to adjust your metrics accordingly. This also applies to metrics that have been in place for a while. If you have driven the number of unauthorized changes to zero, and have held it there for the last 12 months, you may want to consider shifting your focus to other issues (but don’t lose sight completely…unauthorized changes can quickly creep back in).

Finally, make sure that you can actually measure the things you need to measure to report on CSFs and KPIs. Setting a goal of no unauthorized changes is laudable but will remain a goal until you have found a way to detect unauthorized changes.
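As a minimal sketch of making that measurable, the snippet below computes an unauthorized-change rate from exported change records. The record layout and figures are hypothetical; in practice the authorized flag would come from reconciling changes against approvals or configuration audit findings.

# Hypothetical records exported from a change-tracking system; the
# "authorized" flag would be set by reconciling changes against approvals.
changes = [
    {"id": "CHG-1001", "authorized": True},
    {"id": "CHG-1002", "authorized": False},  # caught by a configuration audit
    {"id": "CHG-1003", "authorized": True},
    {"id": "CHG-1004", "authorized": False},
]

unauthorized = [c for c in changes if not c["authorized"]]
rate = len(unauthorized) / len(changes)
print(f"Unauthorized changes: {len(unauthorized)} of {len(changes)} ({rate:.0%})")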

To conclude, here are some examples of KPIs to consider for your change management process:
[Figure: example KPIs for the change management process]

=========

Kai Holthaus is a transformation consultant with VMware Accelerate Advisory Services and is based in Oregon.

Making IT Go Faster – Forrester Research Sheds Light on How

By Kurt Milne

Today’s IT managers face increasing pressure to be more responsive and move faster. However, most IT organizations are built to promote control and safety. People, process, and tools have traditionally been deployed to strictly limit change in order to optimize service quality and efficiency. In fact, many of the most successful IT organizations have built their reputation by deploying elements of ITIL or other control frameworks to ensure critical system uptime.

The latest Forrester research lays out a path forward for IT organizations that want to increase agility without losing control.

It is easy to say, “Let’s use the cloud to move faster and be more responsive to the business.” But how do those who have invested in ITIL, or who have thoughtfully developed process control methodologies, adapt to new demands for speed—demands that force IT to do things it may not be comfortable with? A new Forrester study based on interviews with 265 IT professionals in North America and Europe sheds some light on the best path forward.

Forrester found that:

  • IT organizations are quickly moving to on-demand, dynamic IT infrastructure
  • Users demand faster provisioning and want IT to be easy to consume
  • Those companies that have already deployed more dynamic change models are moving away from a centralized CMDB strategy
  • Developers are the primary consumers of ready-to-use application middleware stacks
  • IT can support rapid change without sacrificing configuration, compliance, and governance controls

If you have investment in IT process maturity and are looking to improve IT agility and deploy more automation without sacrificing control, then read the full Forrester report.
----
Follow @kurtmilne on Twitter.

Transforming Operations and Perception of the IT Organization

By David Crane

A recent engagement with a long-established telecommunications firm presented a huge challenge—the solution for which is a great example of how operations transformation can drive technical transformation. The firm’s customer base spans various global regions, each of which presented a different customer experience. The IT organization functioned in extremely siloed environments, having grown organically over 25 years to support an aging, fragmented infrastructure.

A frustrated but motivated CIO laid down the following requirements for the VMware consulting services team, to be met over an aggressive six-month timeline:

  • Reduce operational costs
  • Improve agility
  • Provide more service offerings
  • Help IT become a service broker and eliminate shadow IT
  • Build a flexible architecture to meet the needs of the business
  • Reduce total number of physical data centers
  • Strengthen control and compliance across IT infrastructure environments

The internal IT team lacked the expertise and resources required to implement a software-defined data center (SDDC) solution. Their service request process was time-consuming, manual, and inconsistent. Add to that an average provisioning time for a full end-to-end server of eight weeks, and it’s no surprise that internal customers were seeking out external solution providers for their IT needs.

The VMware team set out to remedy all of this with the following solution:

  • Implement a production SDDC platform
  • Make self-service automated provisioning the first available service
  • Assess the customers’ operating processes
  • Introduce an optimized organizational structure
  • Integrate operations transformation and technical implementation
  • Take a phased approach to the project with clearly defined milestones to deliver immediate results
  • Ensure the VMware team worked closely with internal groups

Transforming the Operating Model
Breaking down the siloed IT organization and introducing horizontal, cross-departmental communication was the first step in helping the customer become service-focused.

The firm did have the business analyst concept, but the analysts sat outside the IT organization. They didn’t understand IT and weren’t incentivized to do so. As a result, rogue users were going out and doing things themselves, leading to compliance and governance issues.

We introduced the concept of infrastructure operations and tenant operations. These were cross-functional teams that talked to each other—a virtual center of excellence within the IT organization. As part of this organizational change, we brought in new roles, the two most important being the customer relationship manager and the service owner. We brought customer relationship management back into IT, so the person in the role started to understand IT and what it could deliver (and how) against customer requirements.

One revelation was that customers did not really have an interest in availability. This was not because they didn’t care, but because IT has become robust enough over the years that availability is simply expected. What customers really cared about was the speed and standardization of the service provisioning lifecycle, because that was what allowed them to respond quickly to market demands and support the business objective of being first to market with new products.

This led to a technical requirement: the IT organization’s customers asked to see this information in a dashboard format, so that they could proactively monitor the provisioning process.

Transforming Infrastructure Operations
The service owners played a key role in pointing out that VMware vRealize Operations only looks at infrastructure—which created demand to change things within VMware vRealize Automation.

However, the dashboards needed to be delivered through vRealize Operations. To meet the technical requirement, we focused on the self-service provisioning portal and allowed consumers to monitor the status of their ordered services via that portal. To do that, we needed a dashboard in vRealize Operations to monitor the KPIs involved in service provisioning. In order to build the dashboard to monitor provisioning time, we had to create a custom solution using vRealize Automation. The technical solution was necessary to enable the operating framework architecture and the organizational model that supports it.

Dashboard Solution
We ended up with a provisioned resources dashboard, shown in Figure 1 below, that lists each virtual machine (VM) and the number of minutes it took to provision. Less than 30 minutes shows green, less than two hours shows yellow, and over two hours shows red. The dashboard also shows the average, minimum, and maximum times to provision.

Figure 1: Provisioned resources dashboard (“Time to Provision”)
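The color logic of such a dashboard is straightforward to express in code. Here is a minimal sketch using the thresholds above; the VM names and timings are invented for illustration, and a real implementation would pull this data from vRealize rather than a hard-coded dictionary.

from statistics import mean

# Invented sample data: minutes to provision each recently ordered VM.
provision_minutes = {"vm-web-01": 22, "vm-db-02": 95, "vm-app-03": 160}

def band(minutes: float) -> str:
    """Map a provisioning time to the dashboard's color bands."""
    if minutes < 30:
        return "green"
    if minutes < 120:
        return "yellow"
    return "red"

for vm, minutes in provision_minutes.items():
    print(f"{vm}: {minutes} min -> {band(minutes)}")

times = provision_minutes.values()
print(f"avg={mean(times):.0f} min, min={min(times)} min, max={max(times)} min")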

The dashboard also enabled the customer to use data to feed back into the service life cycle process. For example, they started to understand service demand. Service owners—who were expected to forecast demand for services—could now do so with more accuracy. Now that the team was forecasting capacity demand more accurately, they were able to increase credibility by sharing this information with the infrastructure team. And ultimately they saved money by having a better handle on demand.

The dashboard also allowed IT to develop proactive operational processes. On several occasions the service owners started to see a degradation in performance of the provisioning process, while the infrastructure monitoring dashboards were still showing a healthy ecosystem.

Further analysis showed that changes to the underlying infrastructure—while staying within tolerance and SLA for the IT infrastructure teams—were having a cumulative impact further down the chain on the service provisioning process.

The provisioning dashboard and further integration with the customers’ service desk platform and event, incident, and problem management processes allowed the IT infrastructure teams to tune the change management process so that service provisioning would not be affected.

In the end, IT became service-oriented because of the dashboard. Because internal customers could use that tool to see the accuracy with which the IT team was meeting its 30-minutes-or-less goal, it had a huge impact on the way IT was perceived within the business. IT’s credibility skyrocketed, and suddenly it became easier to drive initiatives like the “cloud first” policy within the organization.

======
David Crane is an operations architect with the VMware Operations Transformation global practice and is based in the U.K.

How to Avoid 5 Common Mistakes When Implementing an SDDC Solution

By Jose Alamo

Implementing a software-defined data center (SDDC) is much more than installing a set of technologies—an SDDC solution requires clear changes to the organization’s vision, policies, processes, operations, and readiness. Today’s CIO needs to spend a good amount of time understanding the business needs, the IT organization’s culture, and how to establish the vision and strategy that will guide the organization to make the adjustments required to meet the needs of the business.

The software-defined data center is an open architecture that impacts the way IT operates today. As such, the IT organization needs to create a plan that utilizes the investments in people, process, and technology already made to deliver both legacy and new applications while meeting vital IT responsibilities. Below is a list of five common mistakes that I’ve come across working with organizations that are implementing SDDC solutions, along with my recommendations on how to avoid their adverse impacts:

1. Failure to develop the vision and strategy—including the technology, process, and people aspects
Many times organizations implement solutions without setting the right expectations and a clear direction for the program. The CIO must use all the resources available within the IT organization to create a vision and strategy, and in some cases it is necessary to bring in external resources with experience in the subject. The vision and strategy must align with the business needs, and they should identify the different areas that must be analyzed to ensure successful adoption of an SDDC solution.

In my experience working with clients, it is imperative that a full assessment covering people, process, and technology be conducted as part of planning. A SWOT analysis should also be completed to fully understand the organization’s strengths, weaknesses, opportunities, and threats. Armed with this insight, the CIO and IT team will be able to express the direction that must be taken to be successful, including the changes required across people, process, and technology.

Failing to complete this step will add complexity and leave those responsible for implementing the solution without clear direction.

2. Limited time spent reviewing and understanding the current policies
There are often many policies within the IT organization that can prevent moving forward with the implementation of SDDC solutions. In such cases, the organization needs to conduct an in-depth review of the current policies governing the business and IT day-to-day operations. The IT team also needs to devote significant time to the company’s security and compliance team to understand their concerns and what adjustments are needed to support the implementation of the solutions. For example, the IT organization needs to look at its change policies; some older policies could prevent the deployment of the process automation that is key to the SDDC solution. When these issues are identified from the beginning, IT can start negotiating with the lines of business to either change the policies or create workarounds that will allow the solution to provide the expected value.

Performing these activities at the beginning of the project will allow IT leadership to make smart choices and avoid delays or workarounds when deploying future SDDC solutions.

3. Lack of maturity around the IT organization’s service management processes
The software-defined data center redefines IT infrastructure and enables the IT organization to combine technology and a new way of operating to become more service-oriented and more focused on business value. To support this transformation, mature service management processes need to be established.

After the assessment of current processes, the IT organization will be able to determine which processes require a higher level of maturity, which need to be adapted to the SDDC environment, and which are missing and need to be established to support the new environment.

Special attention will be required for the following processes: financial management, demand management, service catalog management, service level management, capacity management, change management, configuration management, event management, request fulfillment, and continuous service improvement.

Ensure ownership is identified for each process, with KPIs and measurable metrics established—and keep the IT team involved as new processes are developed.

4. Managing the new solution as a retrofit within the current environment
Many IT organizations will embrace a new technology or solution only to attempt to retrofit it into their current operational model. This is typically a major mistake, especially if the organization is expecting better efficiency, more flexibility, lower operating cost, transparency, and tighter compliance as benefits of an SDDC.

Organizations must assess their current requirements and determine whether they still apply to the new solution. Most processes, roles, audit controls, reports, and policies are in place to support the current/legacy environment, and each must be assessed to determine its purpose and value to the business, and whether it is required for the new solution.

IT leadership should ask themselves: If the new solution is going to be retrofitted into the current operational model, then why do we need a new solution? What business problems are we going to resolve if we don’t change the way we operate?

My recommendation to my clients is to start lean, minimize the red tape, reduce complex processes, automate as much as possible, clearly identify new roles, implement basic reporting, and establish strict change policies. The IT organization needs to commit to minimize the number of changes to the new solution to ensure only changes that are truly required get implemented.

5. No assessment of the IT organization’s capabilities and no plan to fill the skill set gaps
The most important resource to the IT organization is its people. IT management can implement the greatest technologies, but their organizations will not be successful if their people are not trained and empowered to operate, maintain, and enhance the new solution.

The IT organization needs to first assess current skill sets, then work with internal resources and/or vendors to determine how the organization needs to evolve in order to achieve its desired state. Once that gap has been identified, the IT management team can develop an enablement plan to begin to bridge it. Enablement plans typically include formal “train the trainer” models to cascade knowledge within the organization, as well as shadowing vendors for organizational insight and guidance, along with knowledge transfer sessions to develop self-sufficiency. In some cases it may be necessary to bring in external resources to augment the IT team’s expertise.

In conclusion, implementing a software-defined data center solution will require a new approach to implementing processes, technologies, skill sets, and even IT organizational structures. I hope these practical tips on how to avoid common mistakes will help guide your successful SDDC solution implementations.

====
Jose Alamo is a senior transformation consultant with VMware Accelerate Advisory Services and is based in Florida. Follow Jose on Twitter @alamo_jose or connect on LinkedIn.

It’s Time for IT to Come Out of the Shadows

Chances are shadow IT is happening right now at your company. No longer content to wait for their companies’ IT help, today’s employees are taking matters into their own hands, finding and using their own technology to solve work challenges as they arise—a trend that likely isn’t fading into the shadows anytime soon.


10 Factors to Consider When Estimating IT Staff Ratios Needed to Operate a Cloud Platform

By Pierre Moncassin

In this post, I want to share with you some “rule of thumb” estimates on how many full-time equivalent (FTE) positions an IT organization may need to operate a cloud platform. Note: this is not an exact science, so I wanted to give you the practitioner’s approach. What are the general guidelines? What do I need to take into account?

As a starting point, readers can find more detail about the different roles on the cloud management team in the VMware white paper “Organizing for the Cloud.” Here I use the generic terms “administrator” and “operator” to broadly describe the technicians/analysts/operators who manage and configure the tools on a daily basis. Here’s my list of factors to consider when estimating IT staff ratios:

  1. Number of lines of business. It stands to reason that the higher the number of distinct business units (lines of business) using the cloud, the higher the number and complexity of workflows to support, the more user profiles to manage, reports to produce, and so forth.
  2. Number of data centers. If the toolsets must manage multiple data centers, there will be added complexity in order to manage multiple environments, which often are in different locations.
  3. Level of staff skill/experience. The more experienced the operators, the larger and more complex the infrastructure they can manage. In other words, IT should require fewer FTEs to manage the same level of complexity in a cloud infrastructure. (This is a topic that deserves a separate article: “How the IT Organization Learns to Use Cloud Management Tools — and Over Time.”)
  4. Number of services. By this I mean cloud-type services, as in IT-as-a-service or applications. As a starter, determine how many services will be offered in the cloud service catalog.
  5. Workflow complexity. Factor in the internal complexity of the automated workflows. For example, on a scale of 1 to 5 (5 being most complex), a workflow with multiple approval points might score a 5, whereas a basic workflow scores a 1.
  6. Internal process complexity. Within IT, an organization with a higher number of mandatory internal process steps (which might all be in place for good reason) will likely need more staff (or it will take its staff longer) to carry out the same tasks as an organization with fewer steps. A higher degree of complexity often develops in highly regulated environments, be it defense or civil administrations, or where an outsourcing provider requires rigid contractual relationships with inflexible approvals. Process and workflow complexity are related but separate considerations (not all processes are automated into workflows).
  7. Number of third-party integrations. The more integrations that need to be built into the automation workflows, the higher the workload for the operators.
  8. Rate of change. Change may be due to business change (mergers, acquisitions, new products, new applications), but also technological change (such as internal transformation programs). These may impact FTE requirements.
  9. Number of virtual machines under management. It may help to group into broad ranges: less than 100, 100 to 1,000, 1,000 to 10,000, and above 10,000. That range will impact FTE requirements.
  10. Number of user dashboards/reports to maintain. This can range from a couple of basic reports to dozens of dashboards and complex reports. If the reporting is not sufficiently automated, the “unfortunate” administrators may need to spend a substantial part of their time producing custom reports for various user groups.

For those readers keen on modeling, each factor I’ve provided can quite easily be prorated on a 1-to-5 scale and turned into a formula, as sketched below. Others may be satisfied with applying them as simple rules of thumb.
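To illustrate, here is one possible shape for such a formula in Python. The scores, weights, and baseline figures below are invented for the example, not calibrated guidance; any serious model would tune them against real staffing data.

# Hypothetical 1-to-5 scores for the ten factors above (5 = most demanding).
scores = {
    "lines_of_business": 3, "data_centers": 2, "staff_experience": 4,
    "services": 3, "workflow_complexity": 5, "process_complexity": 4,
    "integrations": 2, "rate_of_change": 3, "vms": 4, "reports": 2,
}

# Invented weights; staff experience is negative because it reduces workload.
weights = {k: 1.0 for k in scores}
weights["staff_experience"] = -0.8
weights["vms"] = 1.5

BASELINE_FTE = 2.0    # assumed minimum team to run the platform
FTE_PER_POINT = 0.25  # assumed staffing added per weighted point

estimate = BASELINE_FTE + FTE_PER_POINT * sum(
    weights[k] * scores[k] for k in scores)
print(f"Rule-of-thumb estimate: {estimate:.1f} FTEs")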

My approach can be extended to VMware vRealize Automation and vRealize Operations management products, as well as other management tools. Stay tuned for a future article, as I am also working to break down the roles far more precisely than “administrators.”

Meanwhile, consider the above factors I’ve outlined as basic guidelines. And a call to action for practitioners: Compare my guidelines to your metrics, and send me your feedback!

----
Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in the UK.