
Tag Archives: Ops transformation

4 Ways to Maximize the Value of VMware vRealize Operations Manager

By Rich Benoit

When installing an enterprise IT solution like VMware vRealize Operations Manager (formerly vCenter Operations Manager), supporting the technology implementation with people and process changes is paramount to your organization’s success.

We all have to think about impacts beyond the technology any time we make a change to our systems, but enterprise products require more planning than most. Take, for example, the difference between installing VMware vSphere compared to an enterprise product. The users affected by vSphere generally sit in one organization, the toolset is fairly simple, little to no training is required, and time from installation to extracting value is a matter of days. Extend this thinking to enterprise products and you have many more users and groups affected, a much more complex toolset, training required for most users, and weeks or months from deployment to extracting real value from the product. Breaking it down like this, it’s easy to see the need to address supporting teams and processes to maximize value.

Here’s a recent example from a technology client I worked with that is very typical of customers I talk to. Management felt they were getting very little value from vRealize Operations Manager. Here’s what I learned:

  • Application dashboards in vRealize Operations Manager were not being used (despite extensive custom development).
  • The only team using the tool was virtual infrastructure (very typical).
  • They had not defined roles or processes to enable the technology to be successful outside of the virtual infrastructure team.
  • There was no training or documentation for ongoing operations.
  • The customer was not enabled to maintain or expand the tool or its content.

My recommendations were as follows, and this goes for anyone implementing vRealize Operations Manager:

  1. Establish ongoing training and documentation for all users.
  2. Establish an analyst role to define, measure, and report on processes and effectiveness related to vRealize Operations Manager, and to build relationships with potential users and process areas that could benefit from vRealize Operations Manager content.
  3. Establish a developer role to create and modify content based on the analyst’s collected requirements and fully leverage the extensive functionality vRealize Operations Manager provides.
  4. Establish an architecture board to coordinate an overall enterprise management approach, including vRealize Operations Manager.

The key takeaway here: IT transformation isn’t a plug-and-play proposition, and technology alone isn’t enough to make it happen. This applies especially to a potentially enterprise-level tool like vRealize Operations Manager. To maximize value and keep it from becoming just another silo-based tool, think about the human and process factors as well. That way you’ll be well on your way toward true transformational success for your enterprise.

Rich Benoit is an Operations Architect with the VMware Operations Transformation global practice.

How to Avoid 5 Common Mistakes When Implementing an SDDC Solution

By Jose Alamo

Implementing a software-defined data center (SDDC) is much more than installing a set of technologies: an SDDC solution requires clear changes to the organization’s vision, policies, processes, operations, and organizational readiness. Today’s CIO needs to spend a good amount of time understanding the business needs, the IT organization’s culture, and how to establish the vision and strategy that will guide the organization to make the adjustments required to meet the needs of the business.

The software-defined data center is an open architecture that impacts the way IT operates today. As such, the IT organization needs to create a plan that leverages the investments already made in people, process, and technology to deliver both legacy and new applications while meeting vital IT responsibilities. Below is a list of five common mistakes I’ve come across working with organizations implementing SDDC solutions, along with my recommendations on how to avoid their adverse impacts:

1. Failure to develop the vision and strategy—including the technology, process, and people aspects
Many times organizations implement solutions without setting the right expectation and a clear direction for the program. The CIO must use all the resources available within the IT organization to create a vision and strategy, and in some cases it is necessary to bring in external resources that have experience in the subject. The vision and strategy must align with the business needs, and it should identify the different areas that must be analyzed to ensure a successful adoption of an SDDC solution.

In my experience working with clients, it is imperative that a full assessment is conducted as part of planning, covering people, process, and technology. A SWOT analysis should also be completed to fully understand the organization’s strengths, weaknesses, opportunities, and threats. Armed with this insight, the CIO and IT team will be able to articulate the direction that must be taken to be successful, including the changes required across people, process, and technology.

Failing to complete this step adds complexity and leaves those responsible for implementing the solution without clear direction.

2. Limited time spent reviewing and understanding the current policies
There are often many policies within the IT organization that can prevent moving forward with the implementation of SDDC solutions. In such cases, the organization needs to conduct an in-depth review of the current policies governing the business and IT day-to-day operations. The IT team also needs to spend significant time with the company’s security and compliance teams to understand their concerns and what adjustments are needed to support the implementation of the solution. For example, the IT organization needs to look at its change policies; some older policies could prevent the deployment of the process automation that is key to the SDDC solution. When these issues are identified from the beginning, IT can start negotiating with the lines of business to either change the policies or create workarounds that will allow the solution to provide the expected value.

Performing these activities at the beginning of the project will allow IT leadership to make smart choices and avoid delays or workarounds when deploying future SDDC solutions.

3. Lack of maturity around the IT organization’s service management processes
The software-defined data center redefines IT infrastructure and enables the IT organization to combine technology and a new way of operating to become more service-oriented and more focused on business value. To support this transformation, mature service management processes need to be established.

After the assessment of current processes, the IT organization will be able to determine which processes will require a higher level of maturity, which will need to be adapted to the SDDC environment, and which are missing and will need to be established in order to support the new environment.

Special attention will be required for the following processes: financial management, demand management, service catalog management, service level management, capacity management, change management, configuration management, event management, request fulfillment, and continuous service improvement.

Ensure ownership is identified for each process, with KPIs and measurable metrics established, and keep the IT team involved as new processes are developed.

4. Managing the new solution as a retrofit within the current environment
Many IT organizations will embrace a new technology and/or solution only to attempt to retrofit it into their current operational model. This is typically a major mistake, especially if the organization expects better efficiency, more flexibility, lower operating costs, transparency, and tighter compliance as benefits from an SDDC.

Organizations must assess their current requirements and determine whether they still apply to the new solution. Most processes, roles, audit controls, reports, and policies are in place to support the current/legacy environment, and each must be assessed to determine its purpose and value to the business, and whether it is required for the new solution.

IT leadership should ask themselves: If the new solution is going to be retrofitted into the current operational model, then why do we need a new solution? What business problems are we going to resolve if we don’t change the way we operate?

My recommendation to my clients is to start lean: minimize red tape, reduce complex processes, automate as much as possible, clearly identify new roles, implement basic reporting, and establish strict change policies. The IT organization needs to commit to minimizing the number of changes to the new solution so that only changes that are truly required get implemented.

5. No assessment of the IT organization’s capabilities and no plan to fill the skill set gaps
The most important resource to the IT organization is its people. IT management can implement the greatest technologies, but their organizations will not be successful if their people are not trained and empowered to operate, maintain, and enhance the new solution.

The IT organization needs to first assess current skill sets. Then work with internal resources and/or vendors to determine how the organization needs to evolve in order to achieve its desired state. Once that gap has been identified, the IT management team can develop an enablement plan to begin to bridge the gap. Enablement plans typically include formal “train the trainer” models to cascade knowledge within the organization, as well as shadowing vendors for organizational insight and guidance along with knowledge transfer sessions to develop self-sufficiency. In some cases it may be necessary to bring in external resources to augment the IT team’s expertise.

In conclusion, implementing a software-defined data center solution will require a new approach to implementing processes, technologies, skill sets, and even IT organizational structures. I hope these practical tips on how to avoid common mistakes will help guide your successful SDDC solution implementations.

Jose Alamo is a senior transformation consultant with VMware Accelerate Advisory Services and is based in Florida. Follow Jose on Twitter @alamo_jose or connect on LinkedIn.

Leveraging Proactive Analytics to Optimize IT Response

By Rich Benoit

While ushering in the cloud era means a lot of different things to a lot of different people, one thing is for sure: operations can’t stay the same. To leverage the value and power of the cloud, IT organizations need to:

  1. Solve the challenge of too many alerts with dynamic thresholds
  2. Collect the right information
  3. Understand how to best use the new alerts
  4. Improve the use of dynamic thresholds
  5. Ensure the team has the right roles to support the changing environment

These steps can often be addressed by using the functionality within VMware vRealize Operations Manager, as described below.

1) Solve the challenge of too many alerts with dynamic thresholds
In the past, when we tried to alert on the value of a particular metric, we found that it tended to generate too many false positives. Since false positives tend to cause alerts to be ignored, we would raise the hard threshold for the alert until the false positives stopped. The problem was that users were then calling in before the alert actually triggered, defeating the purpose of the alert in the first place. As a result, we tended to monitor very few metrics because it was so difficult to find a satisfactory threshold.

However, we can now leverage dynamic thresholds generated by analytics. These dynamic thresholds identify the normal range for a wide range of metrics, based on competing algorithms that each try to best model the behavior of a metric over time. Some algorithms are time-based, such as day-of-the-week patterns, while others are based on mathematical formulas. The result is a range of expected behavior for each metric for a particular time period.

One of the great use cases for dynamic thresholds is that they identify the signature of applications. For example, they can show that the application always runs slow on Monday mornings or during month-end processing. Each metric outside of the normal signature constitutes an anomaly. If enough anomalies occur, an early warning smart alert can be generated within vRealize Operations Manager that indicates that something has changed significantly within the application and someone should investigate to see if there’s a problem.
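To make the mechanics concrete, here is a minimal Python sketch of one way a normal range could be learned for a single metric and used to flag anomalies. It is an illustration under simple assumptions (an hour-of-week baseline using mean and standard deviation), not the competing-algorithm analytics that vRealize Operations Manager actually runs internally.

    from collections import defaultdict
    from statistics import mean, stdev

    def learn_ranges(samples, k=3.0):
        """Learn a per-hour-of-week normal range from (timestamp, value) samples.

        Returns {hour_of_week: (low, high)}. Illustrative baseline only, not the
        algorithms vRealize Operations Manager uses internally.
        """
        buckets = defaultdict(list)
        for ts, value in samples:
            buckets[ts.weekday() * 24 + ts.hour].append(value)

        ranges = {}
        for hour, values in buckets.items():
            if len(values) < 2:
                continue  # not enough history to model this time slot yet
            mu, sigma = mean(values), stdev(values)
            ranges[hour] = (mu - k * sigma, mu + k * sigma)
        return ranges

    def is_anomaly(ranges, ts, value):
        """True if a sample falls outside the learned range for its time slot."""
        slot = ranges.get(ts.weekday() * 24 + ts.hour)
        if slot is None:
            return False  # no baseline yet, so do not alert
        low, high = slot
        return value < low or value > high

In this spirit, an early warning alert is driven by many such anomalies appearing at once across an application’s metrics, rather than by any single excursion.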

2) Collect the right information
As we move from more traditional, client-server era environments to cloud era environments, many teams still use monitoring that has been optimized for the previous era (and tends to be siloed and component-based, too).

It’s not enough to just look at what’s happening with a particular domain or what’s going on with up-down indicators. In the cloud era, you need to look at performance that’s more aligned with the business and the user experience, and move away from a view focused on a particular functional silo or resource.

By putting those metrics into a form that an end user can relate to, you can give your audience better visibility and improve their experience. For example, if you were to measure the response time of a particular transaction, when a user calls in and says, “It’s slow today,” you can check the dynamic thresholds generated by the analytics that show the normal behavior for that transaction and time period. If indeed the response times are within the normal range, you can show the user that although the system may seem slow, it’s the expected behavior. If on the other hand the response times are higher than normal, a ticket could be generated for the appropriate support team to investigate. Ideally, the system would have already generated an alert that was being researched if a KPI Smart Alert had been set up within vRealize Operations Manager for that transaction response time.
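Under the same hypothetical baseline as the earlier sketch, that help-desk check might look like the following. The create_ticket function is a stand-in for whatever ticketing integration an organization actually uses, not a vRealize Operations Manager API.

    def create_ticket(summary, detail):
        """Hypothetical stub; a real deployment would call its ITSM tool here."""
        print(f"TICKET: {summary} | {detail}")

    def triage_slow_report(ranges, ts, observed_ms, transaction):
        """Decide how to respond to a user's 'it's slow today' report.

        `ranges` is the per-time-slot baseline from the earlier sketch.
        """
        slot = ranges.get(ts.weekday() * 24 + ts.hour)
        if slot is None:
            return "No baseline for this time slot yet; investigate manually."

        low, high = slot
        if observed_ms <= high:
            # Slow, perhaps, but within the expected range for this time period.
            return (f"{transaction}: {observed_ms} ms is within the normal range "
                    f"(expected up to {high:.0f} ms).")

        # Above the learned range: route to the appropriate support team.
        create_ticket(
            summary=f"{transaction} response time above normal range",
            detail=f"Observed {observed_ms} ms at {ts}; expected up to {high:.0f} ms.",
        )
        return "Ticket opened for the appropriate support team."

The point of the pattern is that the same learned range serves both the proactive alert and the reactive conversation with the user.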

3) Understand how to best use the new alerts

You may be wondering: Now that I have these great new alerts enabled by dynamic thresholds, how can I best leverage them? Although they are far more actionable than the previous metric-based alerts, the new alerts may still need some form of human interaction to make sure that the proper action is taken. For example, it is often suggested that when a particular cluster in a virtualized environment starts having performance issues, an alert should be generated that automatically bursts its capacity. The problem with this approach is that although performance issues can indicate a capacity issue, they can also indicate a break in the environment.

The idea is to give the user as much info as they need when an alert is generated to make a quick, well-informed decision and then have automations available to quickly and accurately carry out their decision. Over time, automations can include more and more intelligence, but it’s still hard to replace the human touch when it comes to decision making.
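As one illustration of that human-in-the-loop pattern, the sketch below (all names and actions hypothetical, not vRealize Operations Manager objects) bundles an alert with its supporting evidence and a set of candidate automations, and nothing runs until an operator chooses.

    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class Alert:
        """An alert bundled with enough context for a quick, informed decision."""
        summary: str
        evidence: Dict[str, str]      # key facts shown to the operator
        actions: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def add_capacity():
        print("Requesting additional cluster capacity...")           # placeholder automation

    def open_incident():
        print("Opening an incident for the infrastructure team...")  # placeholder automation

    alert = Alert(
        summary="Cluster PROD-01 performance degraded",
        evidence={
            "anomaly count": "37 metrics outside their normal ranges",
            "capacity trend": "stable for the last 30 days",
            "recent changes": "host firmware updated 2 hours ago",
        },
        actions={"burst capacity": add_capacity, "investigate as a fault": open_incident},
    )

    # The operator reviews the evidence and picks an action; the automation then
    # carries out that decision quickly and consistently.
    chosen = "investigate as a fault"   # e.g. read from a console or chat-ops prompt
    alert.actions[chosen]()

The evidence is the part worth investing in: the richer the context presented at alert time, the faster and safer the human decision.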

4) Improve the use of dynamic thresholds
A lot of monitoring tools are used after an issue materializes. But implementing proactive processes gives you the opportunity to identify or fix an issue before it impacts users. It’s essential that the link to problem management be very strong so processes can be tightly integrated, as shown in figure 1.

Figure 1: Event incident problem cycle

During the Problem Management Root Cause Analysis process, behaviors or metrics are often identified that are leading indicators of imminent impacts to the user experience. As mentioned earlier, vRealize Operations Manager, as the analytics engine, can create both KPI and Early Warning smart alerts at the infrastructure, application, and end-user levels to alert on these behaviors or metrics. By instrumenting these key metrics within the tool, you can create actionable alerts in the environment.

5) Ensure the team has the right roles to support the changing environment
With the newfound capabilities enabled by an analytics engine like vRealize Operations Manager, roles and their structure become more critical. As shown in figure 2 below, the analyst role should identify and document opportunities for improvement, as well as report on the KPIs that indicate the effectiveness of the alerts already in place. In addition, developers are needed to build the new alerts and other content within vRealize Operations Manager.

Figure 2: New roles to support the changing environment

In a small organization, one person may be performing all of these functions, while in a larger organization, an entire team may perform a single role. This structure can be flexible depending on the size of the organization, but these roles are all critical to leveraging the capabilities of vRealize Operations Manager.

By implementing the right metrics, right KPIs, right level of automation, and putting the right team in place, you’ll be primed for success in the cloud era.

Richard Benoit is an Operations Architect with the VMware Operations Transformation global practice.

CloudOps at VMworld – Operations Transformation Track

VMworld, taking place August 25th through August 29th in San Francisco, is the virtualization and cloud computing event of the year.

The Operations Transformation track offers 21 sessions designed to share real-world lessons learned about the changing IT Operations landscape in the Cloud era. Self-service provisioning, automation, tenant operations, hybrid cloud and SDDC architectures are all optimized when operations change.

You can find out how to get the most out of the latest VMware technology by attending sessions focused on these operations transformation topics. Some of the sessions include:

OPT5414 – Automating, Optimizing and Measuring Service Provisioning in a Hybrid Cloud

David Crane, Cloud Operations Consulting Architect, discusses service provisioning and how automated provisioning can help reduce costs, improve flexibility and agility, speed time to market, and improve the ROI of cloud deployments.

For more on this topic, check out our Friday Reading List on Orchestration and Automation.

OPT5705 – Balancing Agility with Service Standardization: Easy to Say But Hard To Do

A panel of seasoned IT experts, including VMware’s VP of IT Operations, discusses what does and doesn’t work with service standardization, where services can be tailored to meet unique needs, and best practices for driving a common service definition process across a set of constituents.

For more on standardization, check out our Friday Reading List on Standardization in the Cloud Era.

OPT5051 – Key Lessons Learned from Deploying a Private Cloud Service Catalog

John Dixon of GreenPages Technology Solutions discusses lessons learned from a recent project deploying a private cloud service catalog for a financial services firm.

John Dixon was a co-host in our last #CloudOpsChat on Reaching Common Ground When Defining Services. Check out some of his insights in the recap blog.

OPT5569 – Leveraging Hybrid Cloud to Transform Enterprise IT from a Cost Center to a Revenue Driver

What if you could transform a “cost center” into a consultative center of customer innovation? Learn how you can leverage hybrid cloud to turn your “cost center” into a revenue driver with Jeffrey Ton, SVP of Corporate Connectivity & CIO, Goodwill Industries of Central Indiana, and John Qualls, Senior Vice President of Business Development, Bluelock.

For more on this topic, read our webinar recap blog on 5 key steps to effective IT operations in a Hybrid world.

OPT4732 – Leveraging IT Financial Transparency to Drive Transformation

Achieving financial transparency is fundamental to IT transformation. This session shows you how to leverage IT financial transparency to drive the transformation your business needs.

Read Khalid Hakim’s recent blog on Calculating Your Cloud Service Costs for more on this subject.

OPT4689 – Operations Transformation – Expanding the Value of Cloud Computing

A forcing function for change, cloud computing helps IT organizations move away from focusing only on siloed technology challenges. Phil Richards and Ed Hoppitt explain how to expand the value of cloud computing.

Ed Hoppitt is also a writer for the VMware CloudOps blog. Check out his work here.

OPT5215 – Organizing for Cloud Operations – Challenges and Lessons Learned

Addressing the organizational changes that must take place for IT to successfully operate a cloud environment and provide hybrid-cloud services, as well as lessons learned from customers who have experienced this change.

Want to learn more? Check out Kevin Lees’ 3-part series on this topic – Part 1, Part 2, Part 3

OPT5489 – Pivot From Public Cloud to Private Cloud with vCloud and Puppet

Edward Newman and Mike Norris from EMC explain how EMC built a private cloud, pulled workloads back in from public cloud, and saved significant money, offering a real-world case where private cloud proved cheaper than public cloud.

OPT4963 – SDDC IT Operations Transformation: Multi-customer Lessons Learned

Technical account managers Valentin Hamburger and Bjoern Brundert of VMware share lessons learned from working with multiple customers on how to overcome legacy, siloed IT processes and holistically enable your infrastructure to leverage an automated, policy-driven datacenter.

OPT5697 – Symantec’s Real-World Experience with a VMware Software-Defined Data Center

Learn about the real-world experience of Symantec’s IT organization, which has deployed one of the world’s largest private clouds in a VMware-based software-defined data center.

OPT5474 – The Transformative Power and Business Case for Cloud Automation

Understand the terminology and the key success factors behind the concepts from two industry-leading automation experts. Cut through the clutter and attend this session to learn from use cases that highlight the value of different types of automation, as well as proven approaches to building a business case for each.

Read this blog post by Kurt Milne for more information on task automation economics!

OPT5593 – Transforming IT to Community Cloud: A Canadian Federal Government Success Story

The story of Shared Services Canada, which scaled its private cloud to meet the needs of a community of 43 departments on a private vCloud deployment.

OPT5315 – Transform IT Into a Service Broker – Key Success Factors

The concept of an IT service broker is compelling. This session will explain key success factors in transforming IT into a service broker.

OPT5656 – VMware Customer Journey – Where are we with ITaaS and Ops Transformation in the Cloud Era

Kurt Milne, Director of CloudOps at VMware, and Mike Hulme, Director of Enterprise Marketing at VMware, discuss where we are with ITaaS and Ops Transformation in the cloud era. Understand what your peers are doing that could benefit you, and learn what drives value across SMB, Commercial, and Enterprise accounts on multiple continents.

Read more about how CloudOps represents a new way of managing IT in the Cloud Era.

OPT5194 – VMware Private Cloud – Operations Transformation

Venkat Gopalakrishnan, Director of IT at VMware, offers operations transformation lessons learned from VMware’s own vCloud deployment. Ask the expert: he has both VMware product and operations expertise.

We hope this guide will help you put together an unforgettable VMworld schedule!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.