
Tag Archives: vmware cloud ops

SDDC: Changing Organizational Cultures

By Tim Jones

I like to think of SDDC as “service-driven data center” in addition to “software-defined data center.” The vision for SDDC expands beyond technical implementation, encompassing the transformation from IT shop to service provider and from cost center to business enabler. The idea of “service-driven” opens the conversation to include the business logic that drives how the entire service is offered. Organizations have to consider the business processes that form the basis of what to automate. They must define the roles required to support both the infrastructure and the automation. There are financial models and financial maturity necessary to drive behavior on both the customer and the service provider side. And finally, the service definitions should be derived from use cases that enable customers to use the technology and define what the infrastructure should support.

When you think through all of the above, you’re really redefining how you do business, which requires a certain amount of cultural change across the entire organization. If you don’t change the thinking about how and why you offer the technology, then you will introduce new problems alongside the ones you were trying to alleviate. (Of course, the same problems will now happen faster and be delivered automatically.)

I compare the move to SDDC with the shift that occurred when VMware first introduced x86 virtualization. The shift to more efficient use of resources that were previously wasted on physical servers, by deploying multiple virtual machines, gathered momentum very quickly. But in my experience, the companies that truly benefited were those that implemented new processes for server requisitioning. They worked with their customers to help them understand that they no longer needed to buy today what they might need in three years, because resources could be easily added in a virtual environment.

The successful IT shops actively managed their environments to ensure that resources weren’t wasted on unnecessary servers. They also anticipated future customer needs and planned ahead. These same shops understood the need to train support staff to manage the virtualized environment efficiently, with quick response times and personal service that matched the technology advances. They instituted a “virtualization first” mentality to drive more cost savings and extend the benefits of virtualization to the broadest possible audience. And they evangelized. They believed in the benefits virtualization offered and helped change the culture of their IT shops and the business they supported from the bottom up.

The IT shops that didn’t achieve these things ended up with VM sprawl and over-sized virtual machines designed as if they were physical servers. The environment became as expensive or more expensive than the physical-server-only environment it replaced.

The same types of things will happen with this next shift from virtualized servers to virtualized, automated infrastructure. The ability for users to deploy virtual machines without IT intervention requires strict controls around chargeback and lifecycle management. Security vulnerabilities are introduced because systems aren’t added to monitoring or virus scanning applications. Time and effort—which equate to cost—are wasted because IT continues to design services without engaging the business. Instead of shadow IT, you end up with shadow applications or platforms that self-service users create because what they need isn’t offered.

The primary way to avoid these mistakes is to remake the culture of IT—and by extension the business—to support the broader vision of offering ITaaS and not just IaaS.

Tim Jones is business transformation architect with VMware Accelerate Advisory Services and is based in California. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

An ITBM Service Costing Process is Key to IT Transformation

By Khalid Hakim

As more businesses recognize the integral role IT plays in the overall success of the enterprise, executive and business stakeholders have higher expectations of IT’s performance and its ability to prove its value. Providing cost transparency back to the business is key to meeting those expectations.

That is why today’s IT organization needs an in-depth understanding of the costs of delivering IT services, so that each service manager or owner can defend his or her numbers from a service angle (not from an expense code or a department or project budget) and thereby improve the overall perception of IT service value.

This highlights the need for a new management discipline that provides a framework to deliver IT as a service and manage the business of IT: IT Business Management (ITBM). Yet many IT leaders do not have the support, knowledge, or bandwidth needed to implement an effective ITBM practice, with its core focus on minimizing IT costs while maximizing business value.

When I’m working with customers, I use VMware’s ITBM Service Costing Process (SCP) to facilitate a modular service-based costing approach that offers ease in manageability and operability. In my next post I’ll dig into the details of how the SCP solution is used as well as the benefits and business value it addresses. But first, I want to clarify the far-reaching repercussions of failing to implement these processes.

Common challenges facing IT
The biggest problem for today’s IT organizations is not insufficient funds or a shortage of financial management skills; it is that IT planning, budgeting, costing, allocating, and pricing are all based on by-department cost management.

Traditional IT costing methods don’t produce service-based cost structures or bills. They focus instead on technology component purchases, projects implemented, cost-code totals, department costs, and allocations of these non-value-add cost elements to customers.
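To make the contrast concrete, here is a minimal sketch of service-based costing. All service names, cost figures, and allocation shares below are invented for illustration; a real ITBM cost model would draw them from the service catalog and financial systems:

```python
# Hypothetical cost elements, each tagged with the services that consume it
# and the share of its cost attributed to each service.
cost_elements = [
    {"name": "server hardware", "cost": 120000, "allocation": {"Email": 0.5, "CRM": 0.5}},
    {"name": "storage array",   "cost": 80000,  "allocation": {"Email": 0.25, "CRM": 0.75}},
    {"name": "admin labor",     "cost": 200000, "allocation": {"Email": 0.4, "CRM": 0.6}},
]

def service_costs(elements):
    """Roll component costs up into an end-to-end cost per service,
    rather than reporting only department or cost-code totals."""
    totals = {}
    for element in elements:
        for service, share in element["allocation"].items():
            totals[service] = totals.get(service, 0.0) + element["cost"] * share
    return totals

print(service_costs(cost_elements))
# -> {'Email': 160000.0, 'CRM': 240000.0}
```

The point of the structure is that every dollar is traceable to a service, so a service owner can defend the total for “Email” end to end instead of pointing at a cost code.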

These situations create a host of business issues for IT:

  • Failure to understand the costs of IT deliverables — Not all service managers can understand their end-to-end service costs and defend their expenses, due to the lack of true service views, including service catalogs and definitions as well as service-based cost models.
  • Arbitrary cost cutting and budget shrinking decisions — Management often looks at expense lists from cost-codes or a totals view, not from a service-based view that enables top management to see a holistic path to savings.
  • Random cost allocation — IT’s cost allocation is typically based on policies and guidelines set by the finance management department that are usually technically driven and don’t reflect the full value of IT.
  • Overstated or understated service costs — IT service cost calculations may include superfluous cost elements or exclude key ones. Both stem from the lack of a well-defined, IT-wide service-based costing standard, which results in services that can’t be compared “apples-to-apples” with outside service providers.
  • The “IT is always expensive” perception — Service managers and owners can’t confidently defend their numbers, which feeds a common perception that IT is expensive.
  • Lack of trust and value realization — Without value-centric conversations and full service-based cost transparency, discussions tend to focus on numbers instead of the true value delivered to the business. As long as services are not managed as a business, customers will continue to question what their money is buying.
  • Data does not support meaningful decisions — One of the biggest challenges IT faces without an ITBM SCP is unreliable and inaccurate financial data about IT assets.
  • Poor budget processes or lack of budget clarity — The traditional IT budgeting process follows a narrow approach that limits IT’s view of its capabilities and creates uncertainty and inefficiency in day-to-day operations. Running IT like a business requires budgets based on service demands, rather than expense codes.
  • Limited financial and business management background — Financial management is not emphasized across the IT organization; it is instead seen as a specialized role for ITFM managers only. Service managers, and IT generally, lack the basic financial management background that could give them important insights.

But there is good news for the IT organization. Check back, and I’ll share more details about the ITBM SCP solution and the four key areas in which it addresses these challenges.

Khalid Hakim is an operations architect with the VMware Operations Transformation global practice. You can follow him on Twitter @KhalidHakim47.

Automation – The Scripting, Orchestration, and Technology Love Triangle

By Andy Troup

In speaking with some of my customers, one message comes across resoundingly: “WE WANT TO AUTOMATE.” This is the sweet spot for cloud solutions, which have built-in automation to deliver the defining benefits of cloud computing, such as on-demand self-service, resource pooling, and rapid elasticity (as defined by NIST).

However, upon scratching the surface and digging a little deeper, I’ve also found that when I’m told “yes, we’ve got automation,” it typically means a lot of effort has gone into a whole heap of scripts written to solve very specific problems. This, I would argue, is not the best way to achieve automation.

I was in conversation with a customer a few weeks ago who wanted to automate a particular part of their provisioning process, and my recommendation was “DON’T DO IT.” Why? The process was broken and inefficient, relied on spreadsheets and scripts, and required constant rework to end up with a satisfactorily provisioned system. Provisioning took weeks and weeks. There was no point in automating this broken process; it had to be fixed or changed first. I won’t go into any more detail about this particular problem, but my point is that sometimes you have to take a step back and see whether there are other ways of solving it.

In summary – there’s no point in automating a broken process.

So, why do we want to automate our IT systems and the provisioning of them anyway? Primarily because we want two things:

  1. To take the boring, repeatable activities that many IT administrators perform and have a system do them instead. This frees the administrator’s time for more interesting and difficult work.
  2. To remove the potential for errors. Any manual activity involving people is liable to be inconsistent and error-prone (I say liable, but really we all know it will be). Cloud solutions are based on the premise that everything is standardized, so we need to remove any activity that introduces unreliability.
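The second point is worth a small illustration. One way to remove hand-crafted inconsistency is to force every request through a standard catalog; the size names and figures below are invented for the sketch:

```python
# Hypothetical standard catalog: every deployment must map to one of these
# approved configurations, never a hand-built one.
STANDARD_SIZES = {
    "small":  {"vcpu": 2, "ram_gb": 4},
    "medium": {"vcpu": 4, "ram_gb": 16},
    "large":  {"vcpu": 8, "ram_gb": 32},
}

def pick_standard_size(vcpu_needed, ram_gb_needed):
    """Return the smallest catalog size that satisfies the request,
    so no two administrators can provision the same workload differently."""
    for name in ("small", "medium", "large"):
        spec = STANDARD_SIZES[name]
        if spec["vcpu"] >= vcpu_needed and spec["ram_gb"] >= ram_gb_needed:
            return name
    raise ValueError("request exceeds the standard catalog; route to review")

print(pick_standard_size(3, 8))
# -> medium
```

The same request always yields the same answer, and anything outside the catalog is flagged for a human rather than quietly improvised.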

OK, so we’ve now established that automation is a good thing. All we need to do now is work out HOW we’re going to automate, and this may introduce some difficult decisions.

So what are the automation options? Well, in my mind automation comes in three different flavours which should be used together to solve the automation challenge. Here they are with some definitions I found:

  1. Script – programs written for a special runtime environment that can interpret and automate the execution of tasks which could alternatively be executed one-by-one by a human operator. (http://en.wikipedia.org/wiki/Script_(computing))
  2. Orchestration – describes the automated arrangement, coordination, and management of complex computer systems, middleware, and services. (http://en.wikipedia.org/wiki/Orchestration_(computing))
  3. Policy – Policy-based management is an administrative approach that is used to simplify the management of a given endeavor by establishing policies to deal with situations that are likely to occur. (http://whatis.techtarget.com/definition/policy-based-management)
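A rough sketch of the policy flavour may help. The conditions, thresholds, and action names below are all invented: the idea is that instead of writing a one-off script per incident, you declare rules once and a small engine matches situations to actions:

```python
# Hypothetical policies: a condition paired with a prescribed action,
# declared once and applied to every situation that matches,
# rather than a bespoke script per incident.
policies = [
    {"name": "cpu-pressure",  "when": lambda vm: vm["cpu_pct"] > 90,     "action": "add-vcpu"},
    {"name": "disk-pressure", "when": lambda vm: vm["disk_free_gb"] < 5, "action": "expand-disk"},
]

def evaluate(vm, rules):
    """Return the actions every matching policy prescribes for this VM."""
    return [rule["action"] for rule in rules if rule["when"](vm)]

print(evaluate({"cpu_pct": 95, "disk_free_gb": 3}, policies))
# -> ['add-vcpu', 'expand-disk']
```

The rules capture intent (“keep CPU below 90 percent”) rather than procedure, which is why they are cheap to maintain compared with a script that hard-codes every step.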

In terms of their use, picture the three as a pyramid: policy forms the broad base, orchestration sits in the middle, and scripting is the narrow tip. We should be aiming for as much policy implementation as possible, with as little scripting as we can achieve.

If you have a process you’d like to automate, work up the pyramid from the bottom to find the solution.

So the first question you should ask yourself is “can I create a policy, or several policies, to solve the problem?” This depends on the technology available to enforce the policy, but it should be the first port of call. It may even be worth investing in technology to make the policy implementation possible. The overhead of creating and maintaining policies is small, and they will provide a robust solution to your problem with reliability and consistency.

If it isn’t possible to create a policy to solve the challenge, next consider orchestrating a solution. This will provide a reusable, standardized capability that has an element of management/maintenance overhead and will be reliable.

Finally, if neither policy nor orchestration will work for you, then use scripting as a last resort. Why a last resort? Scripting is a tactical, bespoke solution to a specific requirement; it will need management and maintenance for its entire life, which in turn incurs cost, and it will be less reliable.

So in summary, when you are considering automating a process:

  • Step back from the automation challenge and consider the options. They may not be what you expected.
  • Work up the “Love Triangle” from the bottom.
  • If you can’t implement a policy, consider orchestration and use scripting as a last resort.

For more great insight on automation, see our previous posts highlighting automation economics and IT automation roles.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.