
Tag Archives: Accelerate Advisory Services

CIO Imperative: Master Customer Experience to Remain Relevant

Begin a New Life as an Innovation Services Team and Deliver the Experience Your Customers Feel Entitled To

By Heman Smith

What does customer experience mean for those whom IT serves?

Today’s customers are used to immediate access to an app, typically via a mobile device, the immediate ability to execute a task, and immediate results. This delivers satisfaction and a positive customer experience. Every industry is experiencing this, with nearly any transaction type you can imagine: banking, healthcare, retail, hospitality, travel and more. Surprisingly, this expectation of immediacy is also spreading rapidly to sectors commonly considered slow to change and respond to change: the public sector, utilities, the military, etc.

Perception is reality, because people make decisions based on what they perceive to be true. Customers (internal and external) will now often choose the path of easiest results and lowest cost, with less loyalty and commitment than ever before.

What must be done for IT to regain its “preferred provider” status?

Whether we like it or not, IT is not always seen as the business’ preferred provider.  In-house IT is no longer seen as a “must-have”.  Alternatives not only exist, but are expanding and becoming equivalently mature and capable (SaaS, cloud-native apps in the public cloud, etc.). What must IT do now to develop and provide new value to replace its old role and charter?

Optimize Core Services

Immediately and aggressively optimize the core services IT offers that support easy application development, deployment, access and consumption:

  • IaaS, PaaS, Environment as a service, etc.
  • Open and flexible application access
  • Support any app/any device/anytime/anywhere (i.e., EUC via solutions such as VMware’s Workspace ONE)
  • Application-focused security based on modern, multi/hybrid cloud-data center network models (VMware AirWatch, NSX, etc.)
  • IT-as-a-Business practices: show-back, charge-back, etc. (a simple showback sketch follows this list)
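To make the IT-as-a-Business point concrete, here is a minimal showback sketch in Python. The unit rates and consumption figures are illustrative assumptions, not actual VMware tooling; in practice these numbers would come from your cloud management platform's metering data.

```python
# Minimal showback sketch: illustrative unit rates and consumption only.
UNIT_RATES = {            # assumed monthly cost per unit consumed
    "vcpu": 25.00,        # per vCPU
    "ram_gb": 10.00,      # per GB of RAM
    "storage_gb": 0.10,   # per GB of storage
}

consumption_by_team = {   # hypothetical metered usage for one month
    "marketing": {"vcpu": 40, "ram_gb": 160, "storage_gb": 2000},
    "finance":   {"vcpu": 16, "ram_gb": 64,  "storage_gb": 5000},
}

def showback_report(consumption):
    """Return the monthly cost attributable to each consumer."""
    report = {}
    for team, usage in consumption.items():
        report[team] = sum(UNIT_RATES[res] * qty for res, qty in usage.items())
    return report

if __name__ == "__main__":
    for team, cost in showback_report(consumption_by_team).items():
        print(f"{team:10s} ${cost:,.2f}")
```

The same data can drive charge-back simply by invoicing the allocated amounts rather than just reporting them.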

Embrace the Innovation Services Brand and Mindset

Move away from the legacy name and identity of IT (Information Technology), and adopt a new stance or brand as “Innovation Services,” leading the charge to provide capabilities-as-services needed by the business, using the best resource model as appropriate (developed or brokered). Much of this change is leadership- and culture-driven, with process re-design and technology choices supporting the decisions made.

This approach requires teams to counsel together to create an ideal process for delivering better outcomes, both (1) internal to the teams themselves, making their lives easier, and (2) external to the end customer, making their lives easier. This delivers a better customer experience to each party!

Because of this shift in stance, the choice of technologies made by the team(s) is determined by the needed outcome, and how well a technology can rapidly, easily and cost effectively enable that outcome.

Will that cause a lot of technology loyalty shift? Yes.

Must vendors respond by being on-point to support that speed and adaptability in order for their IT customers to deliver better experience and outcomes? Absolutely!

The applications people use, coupled with ubiquitous mobility, are driving the pace of business and IT. DevOps is a response to that opportunity and pressure.

Develop Your DevOps Model… Now

IT must leap into supporting and accelerating the successful adoption of an appropriate-fit DevOps model in order to be of real value to the business. If Infrastructure Services teams don’t clearly understand this mandate, and rapidly take the stance of championing DevOps, then the application development side of the house will find other resources. This change is not optional; it is already underway, and will occur rapidly in the near future whether or not traditional IT teams want it.

If IT doesn’t rapidly respond to this need and change, its chance to be the business’ preferred provider will disappear, because some new, successful, outsourced or internally stood-up alternative will be entrenched, and change will be seen as too difficult or unnecessary.

What does this mean for me as an IT leader, and what can I do today?

Delivering exceptional customer experience must be the new mantra and reality for any effective IT leader, and thus for their IT organization. Becoming an “Innovation Services” team, instead of an old-fashioned technology maintenance team, is the key.

Focus on reducing friction in how any “consumer” (internal or external) accesses and consumes the new services (EUC, IaaS, PaaS, DevOps, etc.). The very mindset of IT staff must shift away from the habitual “keep it up and running,” operations-first mentality and adopt a new framework.

Innovation Services now focuses on:

  • How can we make “this” (whatever service “this” may refer to) easier to do, access, support, etc.?
  • How can we make consumption more appealing, more cost effective, more transparent?
  • How can we make us and our services as invisible as possible?
  • And, as I often hear during consulting conversations with frustrated IT leaders: “How can we function more like, so we can compete with, Amazon?”

Here is where to start:

  1. Mindset is a critical first step: Words have power. So take a stand, make a commitment, and step up to a different future. Craft a vision of opportunity, and invite each member of IT to step into becoming part of the new Innovation Services organization.
  2. Thinking through, and adopting a proven model for change as an Innovation Services provider is the second step. VMware has leading practices and services that assist with this.
  3. Re-organize based on service delivery function, not technology silos.
  4. Stick with it through the challenges of change. Partner with those who know and can coach you to success.

=======

Heman Smith is a Strategist with VMware Accelerate Advisory Services and is based in Utah.

If They Come – Are You Ready?

Part 1:  Optimizing demand management to deliver “Just in Time” cloud service provisioning.

By Bill Irvine

A common phrase overheard during the creation of new cloud-based innovation environments for modernized applications is “build it and they will come.”

The surprise for many IT organizations is that they do come, and the challenge becomes dealing with that success and the ongoing management of their new environments. Many organizations do not operationalize their capabilities or establish the governance processes they need to be successful as a cloud service provider by the time the technology goes live.

Common Complaints about Cloud Services

In my work with customers designing solutions to address the needs of their business via cloud-based innovation, I hear a consistent list of concerns.

“We are always blindsided by requests that we don’t have the capacity to fulfill – it came out of nowhere”

“We never get enough specific information from the business on what they want until it’s too late”

“They always want to over-provision the environments – we’re always in negotiation mode on resource requirements”

“There’s never enough capacity to meet the business and operational demand”

“Nobody ever gives back resources – even when we know they’re not being used”

“There’s never enough budget to buy capacity when we need it”

“We are always getting escalations about the speed of provisioning and spend most of our time reacting to delayed and unfulfilled requests”

“We have to wait for approvals for every piece of the PaaS puzzle”

“Everything we do is custom, which makes it impossible to automate”

These and a host of similar issues relate to a common theme: the need for effective demand, capacity and request management to ensure a standardized, streamlined, consistent and automated approach to service provisioning.

In this blog series, we will cover each of the key processes in the lifecycle and their importance in creating an IT service brokerage model that can consistently support dynamic business demand. The expectations on IT have never been greater.

Communication is Key to Demand Management

The primary goal of demand management is to understand the pipeline of service requirements from the business and to interpret these needs into a predictable forecast of consumption. This forecast becomes a vital part of ensuring that IT always has the capacity to fulfill the evolving requirements and requests.

Sounds easy enough, but the challenge in predicting demand for most organizations is the difference in language between the Line of Business (LOB), the development teams supporting them and the infrastructure providers charged with hosting the services.

IT capacity planners want technical specifications and details of the individual resource components (CPU, memory, storage, etc.) required in order to ensure the appropriate configuration of resources in the correct “landing zones.”

The business representatives, however, typically present their needs in terms of market growth, marketing initiatives that may drive increases in transactions, or potential decreases in business volume based on the seasonality of the service supported. These needs are rarely static by nature and evolve over time from conception to reality.

The conversion of business and service needs into technical resource requirements is often more art than science and relies on effective communication and collaboration between a broad group of stakeholders to continuously interpret, structure and mature demand data into knowledge that can be acted upon.

IT needs to interact proactively with the stakeholders to identify demand as early as possible at its source. This source data should be documented in a system of record so that it can be tracked, aligned by service and updated as more detailed information becomes available.  Demand data is progressed through a maturity funnel where requirements are codified, refined, validated, prioritized and compared against historical patterns & trends. This enables the initial business data to be transformed into technical resource requirements and actionable plans.
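As a concrete illustration of that system of record and maturity funnel, here is a minimal sketch in Python. The field names, funnel stages and estimates are illustrative assumptions; a real implementation would live in your demand-management or ITSM tooling.

```python
from dataclasses import dataclass, field

# Assumed funnel stages, loosely following the maturity steps described above.
STAGES = ["captured", "codified", "refined", "validated", "prioritized", "actionable"]

@dataclass
class DemandRecord:
    """One business demand signal, tracked by service as it matures."""
    service: str                 # organizing principle: group demand by service
    source: str                  # e.g. "LOB-Retail" or "service telemetry"
    description: str
    est_vcpu: int = 0            # technical estimate, refined over time
    est_ram_gb: int = 0
    stage: str = "captured"
    assumptions: list = field(default_factory=list)  # placeholders are fine early on

    def advance(self, note: str = ""):
        """Move the record one step down the maturity funnel."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        if note:
            self.assumptions.append(note)

def forecast_by_service(records):
    """Roll validated-or-better demand up into a capacity forecast per service."""
    usable = [r for r in records if STAGES.index(r.stage) >= STAGES.index("validated")]
    forecast = {}
    for r in usable:
        vcpu, ram = forecast.get(r.service, (0, 0))
        forecast[r.service] = (vcpu + r.est_vcpu, ram + r.est_ram_gb)
    return forecast
```

The point of the sketch is simply that each requirement is tracked against a service, carries its assumptions with it, and only feeds the capacity forecast once it has matured far enough to be trusted.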

Demand Maturity Concepts

In order to create a comprehensive and contextual picture of current and future business and service demand, requirements should be subjected to a series of analytical steps to refine the demand.

Fig 1. Demand Maturity Funnel

  • Capture data from all available sources. Different sources will have differing levels of specificity from business concept to actual service performance data.
  • Understand the sources (e.g. LOB, Service data, etc.) to enable comparisons and correlation with past requirements. Grouping requirements by service should become the overall organizing principle to help make sense of the overall demand.
  • Develop, configure and size a logical grouping of resources into a service offering (e.g. Infrastructure or Platform as a Service) to simplify the calculation of future needs and enable IT to better standardize and automate the provisioning processes. Pre-defined service offerings also provide the opportunity to steer the customer towards preferred solutions that are more efficient and cost effective.
  • Identify patterns of business activity (PBA) for each service and develop educated assumptions as to future needs through the analysis of past requirements, requests and configurations. It’s OK to make assumptions; the business is often guessing at the early stages. Even placeholder information can be valuable, especially early in the funnel. Assumptions can be validated and adjusted over the life of the requirement.
  • Develop LOB user profiles and analyze their service usage patterns to further refine the understanding of the needs and requests.
  • Understand existing patterns of business activity, prior demand and the technical profile of related platforms consumed by the specific business unit, the applications supporting the service and the volume of transactions to form an evolutionary pipeline or funnel.

Demand requirements managed through these activities will provide IT the confidence to commit to more aggressive service levels and guarantees regarding capacity and associated cloud resource provisioning.

Key Demand Management Roles

As mentioned, there are many parties and stakeholders involved in managing demand effectively. The most obvious, and often overlooked, stakeholders are the lines of business themselves. IT’s continuous interaction with the business is key to improving its understanding of customer needs and to breaking the cycle of being reactive and unresponsive.

Two of the key roles to ensure this ongoing relationship and demand based dialog are the Business Relationship Manager (BRM) and the Service Owner. These roles are critical to understanding the patterns of business and service activity and ensuring appropriate capacity and capability on a service-by-service basis.

The BRM has a primary responsibility to represent all elements of IT and the associated service provision and performance to the business function. They are responsible for orchestrating the capture of demand from the business and assisting in the conversion of these needs into the technical capacity that meets expectations. BRM activities in support of demand prediction include:

  • Identification of customer needs
  • Capture of planned projects and initiatives
  • Communicating changes in service profiles or volumes
  • “Selling” the improvements in service capabilities and helping to influence customer behavior and optimize the business usage of the services provided.

The Service Owner ensures that there is an understanding and awareness of the service as a whole, who utilizes the service, how it supports the business functions, the service capabilities and the current service performance. The Service Owner will be responsible for:

  • Quantification and codification of the overall service needs, resource proportions, configuration and operational dynamics to optimize the performance of the production service
  • Key input into decisions regarding resource capacity and configuration changes required
  • Creation of the environment profiles and service offerings used in the downstream environments (e.g. Dev, Test, QA) as required by the development and operational functions

Demand Management Benefits

Implemented successfully, demand management will enable improvements across all aspects of service provisioning but especially in the areas of capacity and request fulfillment.

Some of the key benefits include:

  • Increased customer satisfaction with services and requests being provisioned without the delays inherent in a reactive environment
  • Improved and faster understanding of service and business requirements with demand being objectively quantified
  • Capacity based risk is identified and addressed throughout the course of the above activities
  • Accurate demand and capacity trending will reduce “over-provisioning” and provide more accurate budgetary planning data to optimize resource / infrastructure costs
  • Basis for JIT (Just In Time) purchasing and release of capacity using confidence-driven forecasts
  • Improved alignment with business goals giving an accurate “picture” of demand activities required to enable business goal attainment
  • Increased confidence in allocation of IT resources and their readiness for service provision

Next Steps

So how do you get started with improving demand management? Some proven initial steps developed with our customers include:

  • Start talking to the business customers and associated development teams to open the dialog and establish the process
  • Enhance standard requirements capture with each line of business defining their requirements by service
  • Capture future needs and updates earlier in the lifecycle to feed the forecasting process
  • Update the guidelines for all requirements capture to be consistent regardless of type (e.g. innovation, run, grow etc.) in a common format for input into demand planning
  • Establish improved methods for collecting trend based run and growth requirements by service.
  • Develop Patterns of Business Activity for each service and monitor key performance and consumption metrics to model current and future operational needs.
  • Analyze and redefine the real-time metrics you collect to better track and report against ongoing capacity use, headroom requirements and growth

In my next post in the series, I will discuss the capacity management stage of the lifecycle, focusing on the conversion of demand into capacity requirements and optimization of the overall capacity plan.

=======

Bill Irvine is a Principal Strategist with VMware Accelerate Advisory Services and is based in Colorado.

IT’s Payback Time – Part 2: Avoiding the risks that prevent ROI realization on IT innovation

By Les Viszlai

In part 1 of this two-part series, we discussed how economists use many formal models to calculate ROI (return on investment) and TCO (total cost of ownership), as well as some of the methods for determining IT business value and payback periods. In this post we’ll dig into the areas that can delay or prevent you and your organization from realizing the projected benefits from this ROI activity.
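As a quick refresher on the arithmetic behind those terms from part 1, here is a minimal sketch in Python of a simple ROI and payback-period calculation. The cash-flow figures are invented for illustration; real models (NPV, TCO and so on) are considerably richer.

```python
def simple_roi(total_benefit, total_cost):
    """ROI expressed as a percentage of the investment."""
    return (total_benefit - total_cost) / total_cost * 100

def payback_period(initial_investment, annual_net_benefits):
    """Years until cumulative net benefits cover the initial investment."""
    cumulative = 0.0
    for year, benefit in enumerate(annual_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_investment:
            return year
    return None  # not paid back within the forecast horizon

# Illustrative numbers only: a 500k initiative returning 200k/year in savings.
print(simple_roi(total_benefit=800_000, total_cost=500_000))              # 60.0 (%)
print(payback_period(500_000, [200_000, 200_000, 200_000, 200_000]))      # 3 (years)
```

The risks described below matter precisely because they push the benefit figures down or the cost and timeline figures up, stretching that payback period beyond what the business will tolerate.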

It’s critical that your ROI initiative has a communication plan that clearly communicates status, timing and risks on a regular basis. It’s not unusual for IT organizations to spend a lot of upfront effort getting business approval to proceed with an initiative, then disappear, and later backpedal on why the ROI initiative failed.

Potential Risks to ROI

There are a number of risk areas that can potentially impact the realization and value of an ROI initiative.

Financial

Changes in the business may cancel or delay the ROI initiative. The initial ROI may be based on spending money that has a longer payback period than the business is now willing to take on in the current budget reporting cycle.

Human Resources

This risk area covers the people component of ROI initiatives. A lack of training, or not having the right people to execute and manage the project, results in project timelines that are delayed. Additional unplanned staff costs can be incurred in order to rework or complete the initiative. Consider adding the cost of using professional services firms that have the expertise to accelerate the project as part of the initial ROI calculations to avoid these often costly surprises.

Legal/Governance

Requirements change due to unforeseen circumstances or new industry-related compliance requirements that present themselves after project kick-off, and additional resources (technology/people/funding) are required to complete the ROI initiative. This additional resource requirement may wipe out the original ROI benefits due to unplanned delays or costs.

Management

Priorities can simply change, and management’s commitment to support and funding can be delayed or cancelled. Having a solid communication plan in place keeps the initiative on management’s radar and reduces the chances that their interest will wane.

Market

Market changes and competitive pressures or new customer demand may cause management to delay or cancel the project.  Resources (people/funding) can get diverted from IT to other areas of the business.

Organization

Political infighting or parent company relationships may limit ROI benefits. There can be a dependency on the business unit to use technology or services that benefit the parent company, increasing costs at the subsidiary level and reducing the ROI benefit. For example, if the parent company institutes an accounting package that enables simplified reporting across all of its subsidiaries, the subsidiary’s costs to implement and maintain this system increase, which erodes any resource savings.

Dependencies

Reliance by the current ROI initiative on a different project or initiative is a common risk. Key resources (people/time/money) can be tied up, which can impact the projected ROI of the current initiative.

Technology

Implementation-related ROI activities can be affected by chosen technology that is not compatible with an existing system (not uncommon). Or the new technology could have limited scalability and be unable to handle the current or projected system demand. A simple example is the case of existing switches that can’t handle the new call center phone system volume, or a new cloud services provider that can’t handle the volume the business is generating.

Users

ROI benefits may be based on when and how users will utilize the new capabilities. Anything that prevents them from doing so is a risk. The ROI initiative should have a strong end user communication component that describes why, how and when the transition will happen, and don’t forget end user training if it’s needed.

Vendors

When you engage vendors to provide critical services or technology, sometimes they don’t execute as promised or they go out of business before the initiative is completed.

Keep Your Guard Up

Be aware of the potential risks that may impact your ROI initiative during the initial analysis phase and factor that contingency into your planning. A strong predefined communication plan will go a long way toward preventing and/or minimizing the impact of many of the potential risk areas described in this blog. I personally like the traditional high-level red/yellow/green dashboards that give a snapshot of risk over time, but use whatever works best for your organization to keep these risks top of mind.

=======

Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

Successful Transformations Require Clarity in Strategy and Execution

By Heman Smith

The recent “The State of IT Transformation” report by VMware and EMC is an up-to-the-minute overview of how companies across multiple industries are faring in their efforts to transform their IT organizations.

The report offers valuable insights into the pace and success of IT transformation over the last few years and outlines where companies feel they have the most to do. But two specific data points in the report – highlighting gaps between companies’ ambitions and their actual achievements – struck me in particular. Here they are:

  • 90% of companies surveyed felt it important to have a documented IT transformation strategy and road map, with executive and line of business support. Yet over 55% have nothing documented.
  • 95% of the same organizations thought it critical that an IT organization has no silos and works together to deliver business-focused services at the lowest cost. And yet less than 4% of organizations report that they currently operate like this.

Both of these are very revealing, I think, and worth digging into a little deeper.

Taking the second point first, my immediate reaction here is: Could IT actually operate with no silos? Is that ever achievable?

To answer, you have to define what “silo” means. A silo can be a technology assignment (storage, networking, compute, etc.), and that’s usually what’s meant within IT by the word. Sometimes, though, it’s a team assignment, whether by expertise or by a focus on delivering a particular service, which is in its own way a type of silo.

So when companies say they wish they could operate with no silos and be able to work together, I wonder if that’s really an expression of frustration with poor collaboration and poor execution? My guess is that what they’re really saying is: “we don’t know how to get our teams, our people, to collaborate effectively and execute well.”

If I’m right, what can they do about it? How can companies improve IT team collaboration, coordination, and execution?

Being clear about clarity

The answer takes us back to the first data point, that 90% of companies feel it’s important to have a documented IT transformation strategy and road map with executive and line of business support, yet over 55% have nothing documented. A majority of companies, in other words, lack strategic clarity.

Without strategic clarity, it’s very difficult for teams to operate and execute toward an outcome that is intentional and desired. Instead they focus on the daily whirlwind that surrounds them, doing whatever the squeakiest wheels dictate. I’m reminded of what Ann Latham, president of Uncommon Clarity, has said: “Over 90% of all conflict comes from a lack of clarity.”

Clarity, in my experience, has three different layers.

Clarity of intent.

This is what you want to accomplish (the vision); why you want to do it (the purpose); and when you want it done (the end point). You can also frame this as, “We want to go from X (capability) to Y (capability) by Z (date).”

Clarity of delivery.

As you move towards realizing your vision, you learn a lot more about your situation, which brings additional clarity.

Clarity of retrospect.

We joke about 20/20 hindsight, but it’s valuable because it lets us compare our original intentions with outcomes and learn from what happened in between.

Strategic clarity is really about that first layer. If companies are not clear upfront about what they want, it’s almost impossible for their teams and employees to understand what’s wanted from them and how they can do it – or to track their progress or review it once a project is complete. Announce a change without making it clear how team members can help make it a reality and you invite fear and inertia. While waiting for clarity, people disengage and everything slows down.

I’ve seen, for example, companies say they’re going to “implement a private cloud.” That’s an aspirational statement of desire, but not one of clear intent. A clear statement of intent would be: “We’re going to use private cloud technologies to shift our current virtual environment deployment pace of 4+ weeks into production to less than 24 hours by the end of June 2016.” Frame it like that, and any person on the team can figure out how they can or cannot contribute toward that exact, clear goal. More importantly, the odds of them collectively achieving the outcome described by the goal are massively increased.

I suspect that the overwhelming majority of companies reporting that they’d like a strategic IT transformation document and road map but don’t yet have one, have for the most part failed to decide what exact capabilities they want, and by when.

This isn’t new. For the last 30 plus years, IT has traditionally focused on technologies themselves rather than the outcomes that those technologies can enable. Too many IT cultures do technology first and then “operationalize it.” But that’s fundamentally flawed and backwards, especially in today’s services-led environment.

Operating models and execution

Delivering on your strategic intent requires more than clarity in how you describe it, of course. Your operating model must also be as simple and as focused as possible on delivering the specific outcomes and capabilities outlined in your plan. Otherwise, you are placing people inside the model without knowing how they can deliver the outcomes it expects, because they don’t know what they’re trying to do.

Implementing an effective operating model means articulating the results you are looking for (drawn from your strategy), then designing a model that lets employees do that as directly and rapidly as possible. That’s true no matter what you’re building – an in-house private cloud, something from outside, or a hybrid. Everyone needs to know how they can make decisions – and make them quickly – in order to deliver the results that are needed.

That brings me to my last observation. When companies have no documented strategy or road map (and remember, that’s 55% of companies surveyed in the VMware/EMC report), they are setting themselves up for what I call “execution friction.” With no clear strategy, companies focus on technology first and “operationalize” later. They end up in the weeds of less-than-successful technology projects, and spend energy and resources on upgrading capacity, improving details and basic IT pools, while failing to craft a technology model that supports delivering the capabilities written into their strategic model. It’s effort that uses up power while slowing you down instead of pushing you forward: execution friction. Again, it’s viewing IT as a purchase, when today more than ever it should be viewed as a strategic lever to accelerate a company’s ability to deliver.

In his book on strategic execution, Ram Charan says that to understand execution, you need to keep three key points in mind:

  • Execution is a discipline, and integral to strategy
  • Execution is the major job of the business (and IT) leader
  • Execution must be a core element of an organization’s culture

Charan’s observations underline what jumps out at me in the data reported by the VMware/EMC study: that you can’t execute effectively without strategic clarity.

Wise IT leaders, then, will make and take the time necessary to get strategically clear on their intended capability outcomes as soon as possible, then document that strategy, share it, and work from it with their teams in order to achieve excellent execution. If more companies do that, we’ll see silos disappearing in a meaningful way, too, because more will be executing on their strategy with success.

=======

Heman Smith is a Strategist with VMware Accelerate Advisory Services and is based in Utah.  

Moving Beyond Infrastructure as a Service to Platform as a Service

By Brian Martinez

My VMware colleague Josh Miller recently explored how companies are extending a DevOps model into their infrastructure organizations and what can be done to speed that essential transition.

I want to talk about the step after that. Where do you go after achieving infrastructure-as-a-service?

Here’s how I think of it. Infrastructure as a service (IaaS) focuses on deploying infrastructure as quickly as possible and wrapping a service-oriented approach around it. That’s essential. But infrastructure in itself doesn’t add direct value to a business. Applications do that. In more and more industries the first company to release that new killer app is the one that wins or at least draws the most value.

So, while it’s essential that you deliver infrastructure quickly, its worth lies in helping deploy applications faster, build services around those applications, and speed time to market.

So you have IaaS; what’s next? Enter the concept of platform as a service (PaaS). PaaS can be realized in a variety of ways. It might be through second-generation platforms such as database-as-a-service or middleware-as-a-service. Or it could be via third-generation platforms based on unstructured PaaS like containers (think Docker) or structured PaaS (think Pivotal Cloud Foundry).

The flexibility you have in terms of options here is significant, and your strategy should be based on the needs of your developers. Many times we see strategies built around a tool name instead of the outcomes needed from that tool. Listening to the developers’ needs should help determine what the requirements are. Then build backwards from there. Often you won’t end up with the same tooling you thought you would.

All the approaches to PaaS, though, share a key feature: they are driven by both a holistic and a life-cycle view of IT. In other words, it’s dangerous to view any IT function today as either separate from any other, or as a one-time deal. Instead, we need to be thinking of everything as connected and at the same time being constantly iterated and improved.

Work from that perspective and it’s easier to navigate the often daunting array of options you have when it comes to PaaS.

Certainly, as you move along this path, it’s very possible to end up with multiple, small cloud-native apps deployed on multiple platforms spread across multiple different data sets – so be aware of the lifecycle.

One other note: there are so many different tools coming to market so quickly in this space that what you pick now may not be what you use in a couple of years. A lot of our customers are nervous about that. So it’s worth remembering that these tools are designed so that you can move your code, and the work that you’re doing with the code, to whatever platform is best suited to deliver it to your customer.

The bottom line: Encourage your customers to try things out so they can create DevOps learning experiences.  Be responsive in enabling developers to access new tools, while setting the right boundaries on how they can use those tools (think service definition) and where they bring them to bear.  Approaching PaaS with a unified culture of continuous iteration and improvement will enable your developers with the tools they need to move fast, without losing the control and stability essential to IT operations.

=======

Brian Martinez is a Strategist with VMware Advisory Services and is based in New York.

Transforming IT into a Cloud Service Provider

By Reg Lo

Until recently, IT departments thought that all they needed to do was to provide a self-service portal to app dev to provision VMs with Linux or Windows, and they would have a private cloud that was comparable to the public cloud.

Today, in order for IT to become a cloud service provider, IT must not only embrace the public cloud in a service broker model, it must also provide a broader range of cloud services. This 5-minute webinar describes the future IT operating model as IT departments transform into cloud service providers.

Many IT organizations started their cloud journey by creating a new, separate cloud team to implement a greenfield private cloud. Automation and proactive monitoring using a cloud management platform were key to the success of their private cloud. By utilizing VMware’s vRealize Cloud Management Platform, IT could easily expand into the hybrid cloud, provisioning workloads to vCloud Air or other public clouds from a single interface. Effectively, this creates “one cloud” for the business to consume and “one cloud” for IT to manage.

However, the folks managing the brownfield weren’t standing still. They too wanted to improve the service they were providing the business, and they too wanted to become more efficient. So they also invested in automation. Without a coherent strategy, both brownfield and greenfield took their own separate forks down the automation path, confusing the business as to which services they should be consuming. We started this journey by creating a separate cloud team. However, it may be time to re-think the boundaries of the private cloud and bring greenfield and brownfield together to provide consistency in the way we approach automation.

In order to be immediately productive, the app dev teams are looking for more than infrastructure-as-a-service. They want platform-as-a-service. These might be second-generation platforms such as database-as-a-service (Oracle, MSSQL, MySQL, etc.) or middleware-as-a-service (such as webMethods). Or they need third-generation platforms based on unstructured PaaS like containers or structured PaaS like Cloud Foundry. The terms first, second and third generation map to the mainframe (1st generation), distributed computing (2nd generation), and cloud-native applications (3rd generation).

Multiple cloud services can be bundled together to create environment-as-a-service, for example, LAMP stacks: Linux, Apache, MySQL and PHP (or Python). These multi-VM application blueprints let entire environments be provisioned at the click of a button.

A lot of emphasis has been placed on accessing these cloud services through a self-service portal. However, DevOps best practice is moving towards infrastructure as code. In order to support developer-defined infrastructure, IT organizations must also provide an API to their cloud. Infrastructure-as-code lets you version the infrastructure scripts together with the application source code, ultimately enabling the same deployment process in every environment (dev, test, stage and prod) and improving deployment success rates.
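To make the infrastructure-as-code idea concrete, here is a minimal, hypothetical sketch in Python: an environment definition that is committed to version control alongside the application source and replayed identically in dev, test, stage and prod. The blueprint format and the provision() call are assumptions for illustration, not a real vRealize or public cloud API.

```python
# environment.py -- lives in the same repo as the application source code,
# so infrastructure changes are versioned and reviewed like any other change.

BLUEPRINT = {
    "name": "web-app",
    "machines": [
        {"role": "web", "count": 2, "cpu": 2, "ram_gb": 4,  "image": "ubuntu-lts"},
        {"role": "db",  "count": 1, "cpu": 4, "ram_gb": 16, "image": "ubuntu-lts"},
    ],
    "network": {"segment": "app-tier", "allow_inbound": [443]},
}

def provision(blueprint, environment):
    """Hypothetical provisioning step: in a real pipeline this would call your
    cloud platform's API (or a tool such as Terraform) with the same blueprint
    for every environment, so dev, test, stage and prod stay consistent."""
    for machine in blueprint["machines"]:
        for i in range(machine["count"]):
            print(f"[{environment}] creating {machine['role']}-{i} "
                  f"({machine['cpu']} vCPU / {machine['ram_gb']} GB)")

if __name__ == "__main__":
    for env in ("dev", "test", "stage", "prod"):
        provision(BLUEPRINT, env)
```

Because the blueprint is just text in the repository, a change to the infrastructure goes through the same review and release process as a change to the application code.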

Many companies are piloting DevOps with one or two application pipelines.  However, in order to scale, DevOps best practices must be shared across multiple app dev teams.  App dev teams are typically not familiar with architecting infrastructure or the tools that automate infrastructure provisioning.  Hence, a DevOps enablement team is useful for educating the app dev teams on DevOps best practices and providing the DevOps automation expertise.  This team can also provide feedback to the cloud team on where to expand cloud services.

This IT operating model addresses Gartner’s bimodal IT approach.  Mode 1 is traditional, sequential and used for systems of record.  Mode 2 is agile, non-linear, and used for systems of engagement.  Mode 1 is characterized by long cycle times measured in months whereas mode 2 has shorter cycle times measured in days and weeks.

It is important to note that the business needs both modes to exist.  It’s not one or the other.  Just like how the business needs both interfaces to the cloud: self-service portal and API.

What does this mean to you? IT leaders must be able to articulate a clear picture of a future state that encompasses both mode 1 and mode 2 and that leverages both a self-service portal and an API to the organization’s cloud services. IT leaders need a roadmap to transform their organizations into cloud service providers that traverse the hybrid cloud. The biggest challenge in the transformation is changing people (the way they think, the culture) and processes (the way they work). VMware can not only help you with the technology; VMware Accelerate™ Advisory Services can help you address the people and process transformation.

=======

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Is Your IT Financial Model Fit for ITaaS and the Cloud?

By Sean Harris

IT as a Service (ITaaS) and cloud computing (public, private and hybrid) are radically different from traditional IT delivery models, and they require new operating models, processes, procedures and organisations to unlock their true value. While the technology enables this change, it does not deliver it.

Measuring the Business Value of IT as a Service

A question I hear often from customers is, “How do I measure and demonstrate the value of ITaaS and cloud computing?” For many organisations, the model for measuring value (the return) and cost (the investment), as well as the metrics that have context in an ITaaS delivery model, is unclear. For example, most (though surprisingly not all) customers I deal with can articulate the price of a server, but that metric has no context in an ITaaS delivery model.

I have talked before on this very blog about the importance, in this new digital era, of IT being able to link the investments in IT and the costs of running IT to gains in business efficiency and true business value. This links your business services, the margins and revenues they generate and the benefits they deliver to customers and the business as a whole, back to IT costs and investments. This is one step. The other side of the equation is how to represent, measure and track the cost of delivering the IT services that underpin the business services, then present those costs in a form that has context in terms of the consumption of the business services that are delivered.

Have you mapped your business services to IT services in terms of dependency and consumption?

Have you mapped IT spend to IT services and IT service consumption?

What about your organisation and procedures? How do you account for IT internally?

The Project-Based Approach

Most of the organisations I speak to have a project based approach to IT spend allocations. There are variations in the model from one organisation to the next, but the basic model is the same. In this approach:

  • Funds for new developments are assigned to projects based on a business plan or other form of justification.
  • The project is responsible for funding the work to design and develop (within the organisation's governance structure) the business and IT services needed to support the new deployment.
  • The project is also typically responsible for funding the acquisition of the assets needed to run these services (although the actual purchase may be made elsewhere) – these typically include infrastructure, software licenses, etc.
  • In most cases the project will also fund the first year (or part year) of the operational costs. At this point, responsibility for the operation is passed to a service delivery or operations team, which is responsible for funding the ongoing operational aspects. This may or may not include a commitment to or ownership of tech refresh, upgrades and updates.

What is included can vary drastically. Rarely is there any ongoing monitoring of how costs map to revenues and margins. When it comes to tech refresh, in many cases it is treated as a change to the running infrastructure and so needs an assigned project to fund that refresh. This leads to tech refresh competing with innovation for a single source of funds.

The Problem with Project-Based Accounting

Just for a second, imagine a car company offering a deal where you (the consumer) pay the cost of the car, the first year's service, tax, insurance and fuel, and then after that you pay NOTHING (no fuel, no insurance, no tax, no service). Would that not lead you to believe that after year one the car is free?

While the business as a whole sees the total cost of IT, no line of business or business service has visibility of the impact it is having on the operational cost of IT. It is also extremely hard, if not impossible, to track whether a business service is still operating profitably, as any results are inaccurate and the process of calculation is fragmented.

Surely this needs to change significantly if any IT organisation is to seriously consider moving to an ITaaS (or cloud) delivery model? Is it actually possible to deliver the benefits associated with ITaaS delivery without this change in organisation and procedure?
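One answer is a service-based costing model. Here is a minimal sketch in Python of the kind of mapping described above: IT cost items allocated to IT services, and IT services allocated to the business services that consume them. The services, cost figures and consumption shares are illustrative assumptions only.

```python
# Illustrative annual run costs for a few IT services (assumed figures).
it_service_costs = {
    "compute": 120_000,
    "storage":  60_000,
    "backup":   30_000,
}

# How each business service consumes the IT services
# (shares for each IT service sum to 1.0 across business services).
consumption_map = {
    "online-ordering": {"compute": 0.6, "storage": 0.5, "backup": 0.4},
    "payroll":         {"compute": 0.4, "storage": 0.5, "backup": 0.6},
}

def cost_per_business_service(it_costs, consumption):
    """Allocate IT service run costs to business services by consumption share."""
    allocation = {}
    for biz_service, shares in consumption.items():
        allocation[biz_service] = sum(
            it_costs[it_service] * share for it_service, share in shares.items()
        )
    return allocation

print(cost_per_business_service(it_service_costs, consumption_map))
# {'online-ordering': 114000.0, 'payroll': 96000.0}
```

Once costs are expressed this way, each line of business can see the operational IT cost its services actually drive, and profitability can be tracked on an ongoing basis rather than only at project approval time.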

Applying a service-based costing approach can seem intimidating at first, but it is essential to achieving value from your ITaaS transformation and gets much easier with expert help.  If you are approaching this transformation, contact our Accelerate Advisory Services team at VMware who, along with the Operations Transformation Services team, provides advice and guidance to customers around constructing an operating model, organisation, process, governance and financial management approach that supports an ITaaS delivery model for IT.

=======

Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

Evolving Cyber Security – Lessons from the Thalys Train Attack in France

By Gene Likins

Earlier this year, I was privileged to facilitate a round table for forty-seven IT executives representing sixteen companies in the financial services industry. As expected for a gathering of FSI IT executives, one of the primary topics on the docket was security.

The discussion started with a candid listing of threats, gaps, hackers and the challenges these pose for all in the room. The list was quite daunting. The conversation turned to the attempted terrorist attack on the Thalys high-speed international train traveling from Amsterdam to Paris. A heavily armed gunman had boarded the train with an arsenal of weapons and was preparing to fire on passengers. Luckily, several passengers managed to subdue the gunman and prevent any deaths.

Immediately following the incident, the public began to question the security measures surrounding the train and the transit system in general. Many recommended instituting airport-style security measures, including presentation of identity papers, metal detectors, bag searches and controlled entry points.

Given the enormous cost and the already strained police resources running at capacity, some are now calling for a different perspective on security. As former interior minister of France Claude Guéant said,

“I do not doubt the vigilance of the security forces, but what we need now is for the whole nation to be in a state of vigilance.”

As IT professionals, this should sound familiar.   So what can we glean from this incident and apply it to cyber security?

  1. Share the burden of vigilance with customers.
    72% of online customers welcome advice on how to better protect their online accounts (Source: Telesign).  One way to share the burden with customers is to recommend or require the use of security features such as Two Factor Authentication (2FA).  Sending texts of recent credit card transactions is an example of a “passive” way of putting the burden on the customer.  The customer is asked to determine if the charge is real and notify the card issuer if it’s not.  Companies should begin testing the waters of just how much customers are willing to do to protect their data.  They may be surprised.
  2. Avoid accidentally letting the bad guys in. 
    One of the common ways that online security is breached is by employees unknowingly opening emails which contain information such as “know what your peers make” or “learn about the new stock that’s about to double in price”. IT groups should continually inform their internal constituents on the nature of threats so we can all stay vigilant and look out for “suspicious characters”.
  3. Contain the inevitable breaches.
    It’s not a matter of “if”; it’s a matter of “when.” Network virtualization capabilities, such as micro-segmentation, bring security inside the data center with automated, fine-grained policies tied to individual workloads. Micro-segmentation effectively eliminates the lateral movement of threats inside the data center and greatly reduces the total attack surface. This also buys security teams time to detect and respond to malicious activities before they get out of hand. (A minimal sketch of this default-deny idea follows this list.)
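To illustrate the idea (not the NSX API itself), here is a minimal sketch in Python of micro-segmentation expressed as per-workload policy: only explicitly whitelisted flows between workloads are allowed, so a compromised web server cannot move laterally to arbitrary systems. The workload names and rules are assumptions for illustration.

```python
# Hypothetical per-workload allow-list: everything not listed is denied.
ALLOWED_FLOWS = {
    # (source workload, destination workload, destination port)
    ("web-01", "app-01", 8443),
    ("app-01", "db-01",  5432),
}

def is_flow_allowed(src, dst, port):
    """Default-deny: lateral movement between workloads requires an explicit rule."""
    return (src, dst, port) in ALLOWED_FLOWS

# A compromised web server trying to reach the database directly is blocked.
print(is_flow_allowed("web-01", "app-01", 8443))  # True  (permitted tier-to-tier flow)
print(is_flow_allowed("web-01", "db-01", 5432))   # False (lateral movement denied)
```

The policy follows the workload rather than the network topology, which is what makes the approach practical at data center scale.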

Building a comprehensive security strategy should be on the agenda of all CIOs in 2016. Cyber criminals are constantly creating new methods of threatening security, and technology is changing daily to counteract them.

VMware NSX, VMware’s network virtualization platform, enables IT to virtualize not just individual servers or applications but the entire network, including all of the associated security and other settings and rules.  This technology enables micro-segmentation and can move your security capabilities forward by leaps and bounds, but it’s only part of a holistic strategy for preventing security breaches.

Remaining ahead of the threats requires a constant evolution of people, processes and governance, along with technology, to continuously identify and address security concerns for your organization and your customers. For help building your security strategy, contact the experts at VMware Accelerate Advisory Services.

=======

Gene Likins is the Americas Director of Accelerate Transformation Services for VMware and is based in Atlanta, GA.

The rules to success are changing – but are you?

By Ed Hoppitt

We live in a world where the fastest-growing transportation company owns no cars (Uber), the hottest accommodation provider owns no accommodation (Airbnb) and the world’s leading internet television network creates very little of its own content (Netflix). Take a moment to let that sink in. Each of these companies is testament to the brave new world of IT that is continuing to shape and evolve the business landscape that surrounds each of us. And the reality is that the world’s leading hypergrowth companies no longer need to own a huge inventory. They instead depend on a global platform that easily facilitates commerce for both consumers and businesses on a massive, global scale.

In order to stay relevant today, your business must be in a position to adapt, in keeping with the evolving expectations of end users. If success used to be governed by those who were best able to feed, water and maintain existing infrastructure, it is today championed by those who are least afraid of opening up new opportunities through innovation. Applications, platforms and software are all changing the business rules of success, so instigating change to adapt is no longer just part of a business plan; it’s an essential survival tool.

With this in mind, here are three essential pointers to help ensure your business is able to adapt, on demand:

1. Embrace openness

All around us, agile start-ups and individuals are leveraging the unique confluence of open platforms, crowd-funding and big data analytics that exists around us. The pace of technology change means that no individual company need be responsible for doing everything itself, which is why, more than ever, there’s a real business need for open source. Open source helps to create a broad ecosystem of technology partners, all helping make it possible to work more closely with developers to drive common standards, security and interoperability within the cloud-native application market.

2. Develop scale at speed

Adrian Cockcroft, former cloud architect at Netflix, a poster child of the software-defined business, once famously said that “scale breaks hardware, speed breaks software and speed at scale breaks everything.” What Adrian realised was that to develop speed at scale, traditional approaches simply do not work, and new methodologies are required, allowing applications to be more portable and broken down into smaller units. New approaches to security services also allow microservice architectures to be utilised.

3. Create one unified platform

Open market data architectures are increasingly being used to give developers the freedom to innovate and experiment. While this is precisely what’s required to keep pace in a world of constant change, it also means that your IT infrastructure stands at risk of growing increasingly muddled as developers become more empowered to code in their own way. This is where a single unified platform holds the key, as this is what is ultimately required to best manage the infrastructure, ensuring compliance, control, security and governance, all the while giving developers the freedom to innovate.

Ask yourself a simple question: can I handle the exponential rate of change that is happening all around me? If the answer is not a resolute yes, it is time you invested some thought into how you can. Uber, Airbnb and Netflix are proof that the classic barriers to entry that once inhibited small players from gaining traction in the marketplace are breaking down. Nobody said that surviving in such a disruptive landscape would be easy, but with thought and planning, it needn’t be too difficult either.

If you want to find out more about this and how to transform your business in the software-defined era, take a look at what our EMEA CTO Joe Baguley has to say in this blog post.

=======

Ed Hoppitt is a CTO Ambassador & Business Solution Architect, vExpert, for VMware EMEA and is based in the U.K.

Introducing Kanban into IT Operations

By Les Viszlai

Development teams have been using Agile software methodologies since the late ’80s and ’90s, and incremental software development methods can be traced back to the late ’50s.

A question that I am asked a lot is, “Why not run Scrum in IT operations?” In my experience, operations teams are trying to solve a different problem. The nature of demand is different for software development versus the operations side of the IT house.

Basically, Software Development Teams can:

  • Focus their time
  • Share work easily
  • Have work flows that are continuous in nature
  • Generally answer to themselves

While Operations Teams are:

  • Constantly interrupted (virus outbreaks, systems break)
  • Dealing with specialized issues (one off problems)
  • Handling work demands that are not constant (SOX/PCI, patching)
  • Highly interdependent with other groups

In addition, operational problems cross skill boundaries.

What is Kanban?

Kanban is less restrictive than Scrum and has two main rules.

  1. Limit work in progress (WIP)
  2. Visualize the workflow (Value Stream Mapping)

With only two rules, Kanban is an open and flexible methodology that can be easily adapted to any environment. As a result, IT operations projects, routine operations/production-support work and operational process activities are ideally suited to a Kanban approach.

Kanban (literally signboard or billboard in Japanese) is a scheduling system for lean and just-in-time (JIT) production. Kanban was originally developed for production manufacturing by Taiichi Ohno, an industrial engineer at Toyota. One of the main benefits of Kanban for IT operations is that it establishes an upper limit to the work in progress at any given process point in a system. Understanding the upper limits of workloads helps avoid overloading certain skill sets or subsets of an IT operations team. As a result, Kanban takes into account the different capabilities of IT operations teams.

Key Terms:

Bottlenecks

Let’s look at the simple example below: IT operations is broken up into various teams that each have specific skill sets and capabilities (not unlike a number of IT shops today). Each IT ops team is capable of performing a certain amount of work in a given timeframe (units/hour). Ops Team 4, in our example below, is the department bottleneck, and we can use the Kanban methodology to solve this workflow problem, improve overall efficiencies and complete end-user requests sooner.

Kanban Bottlenecks
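A tiny Python sketch makes the bottleneck arithmetic explicit: end-to-end throughput is capped by the slowest step in the chain, which is why pushing more work onto the other teams only grows queues. The capacity figures are invented for the example.

```python
# Hypothetical work capacities (units/hour) for each team in the chain.
team_capacity = {
    "ops_team_1": 10,
    "ops_team_2": 8,
    "ops_team_3": 9,
    "ops_team_4": 3,   # the bottleneck in this example
}

bottleneck = min(team_capacity, key=team_capacity.get)
throughput = team_capacity[bottleneck]

print(f"End-to-end throughput: {throughput} units/hour, limited by {bottleneck}")
# Any work released into the system faster than 3 units/hour simply queues up
# in front of ops_team_4 -- which is exactly what WIP limits are meant to expose.
```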

As we said earlier, the advantage of adopting a Kanban methodology is that it is less structured than Scrum and is easier for operations teams to adopt. Kanban principles can be applied to any process your IT operations team is already running. The key focus is to keep tasks moving along the value stream.

Flow

Flow, a key term used in Kanban, is the progressive achievement of tasks along the value stream with no stoppages, scrap, or backflows.

  • It’s continuous… any stop or reverse is considered waste.
  • It reduces cycle time – higher quality, better delivery, lower cost

Kanban Flow

Break Out the Whiteboard

Kanban uses a board (electronic or traditional whiteboard) to organize work being done by IT operations.

A key component of this approach is breaking down the work (tasks) in our process flow into Work Item types. These Work Items can be software-related, like new features, modifications or fixes to critical bugs (introduced into production). Work Items can also be IT-services-related, like employee on-boarding, equipment upgrades/replacements, etc.

Kanban Board

The Kanban approach is intended to optimize existing processes already in place. The basic Kanban board moves from left to right. In our example, “New Work” items are tracked as “Stories” and placed in the “Ready” column. Resources on the team (that have the responsibility or skill set) move the work item into the first stage (column) and begin work. Once completed, the work item is moved into the next column, labeled “Done.” In the example above, a different resource was in place as an approver before the work item could move to the next category, repeating for each subsequent column until the Work Item is in production or handed off to an end user. The Kanban board also has a fast lane along the bottom. We call this the “silver bullet lane” and use it for Work Items of the highest priority.
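Here is a minimal sketch in Python of the board mechanics just described: columns with WIP limits, work items pulled from Ready, and a pull that is refused when the target column is already at its limit. The column names, limits and work items are assumptions for illustration; a real board would live on a whiteboard or in a tracking tool.

```python
# Assumed columns and WIP limits for illustration (None = no limit).
WIP_LIMITS = {"Ready": None, "In Progress": 3, "Review": 2, "Done": None}

board = {col: [] for col in WIP_LIMITS}
board["Ready"] = ["onboard new hire", "patch web servers", "replace laptop", "upgrade switch"]

def pull(item, from_col, to_col):
    """Move a work item to the next column only if that column has WIP headroom."""
    limit = WIP_LIMITS[to_col]
    if limit is not None and len(board[to_col]) >= limit:
        print(f"Refused: '{to_col}' is at its WIP limit of {limit}")
        return False
    board[from_col].remove(item)
    board[to_col].append(item)
    return True

pull("onboard new hire", "Ready", "In Progress")
pull("patch web servers", "Ready", "In Progress")
pull("replace laptop", "Ready", "In Progress")
pull("upgrade switch", "Ready", "In Progress")   # refused: limit of 3 already reached
```

The refusal is the point: work waits in Ready until capacity frees up, rather than silently piling onto an already overloaded team.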

How to Succeed with Kanban

In my previous experience as a CIO, the biggest challenge in adopting Kanban in IT operations was cultural. A key factor in success is the 15-minute daily meeting commitment by all teams involved. In addition, pet projects and low-priority items quickly surface, and some operations team members are resistant to the sudden spotlight. (The Kanban board is visible to everyone.)

Agreement on goals is critical for a successful rollout of Kanban for operations. I initially established the following goals:

  • Business goals
    • Improve lead time predictability
    • Optimize existing processes
      • Improve time to market
      • Control costs
  • Management goals
    • Provide transparency
    • Enable emergence of high maturity
    • Deliver higher quality
    • Simplify prioritization
  • Organizational goals
    • Improve employee satisfaction (remember ops team 4)
    • Provide slack to enable improvement

In addition, we established SLAs in order to set expectations on delivery times and defined different levels of work priority for the various teams. This helped ensure that the team was working on the appropriate tasks.

In this example, we defined the priority of work efforts under five defined areas: Silver Bullet, Expedite, Fixed Date, Standard and Intangible.

Production issues have the highest priority and are tagged under the Silver Bullet work stream. High-priority or high-business-benefit activities fell under Expedite. Fixed Date described activities that had an external dependency, such as telco install dates. And repeatable activities like VM builds or laptop set-ups would be defined as Standard. Any other request that had too many variables and undefined activities was tagged as Intangible (a lot of projects fell into this category).

I personally believe that you can’t fix what you can’t measure, but the key to adopting any new measurement process is to start simple. We initially focused on four areas of measurement:

  1. Cycle Time: This measurement is used to track the total days/hours a work item took to move through the board, measured from the moment a Work Item moved out of the Ready column.
  2. Due Date Performance: Simply measures the number of Work Items completed on or before the due date out of the total work items completed.
  3. Blocked Time: This measurement was used to capture the amount of days/hours that work items were stalled in any column.
  4. Queue Time: This measurement was used to track how long work items sat in the Ready column.

These measurements let us know how the Operations team performed in four areas (a small calculation sketch follows the list):

  • How long items sit before they are started by Operations.
  • Which area/resource within IT is causing a blockage for things getting done.
  • How good the team is at hitting due dates.
  • The overall time it takes things to move through the system under each work stream.
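As a minimal sketch, here is how those four measurements could be computed from the timestamps a board records. The field names and sample data are assumptions, not any particular tool's schema.

```python
from datetime import datetime as dt

# Hypothetical work item history, with the timestamps a board tool would record.
items = [
    {"created": dt(2016, 3, 1), "started": dt(2016, 3, 3), "done": dt(2016, 3, 8),
     "due": dt(2016, 3, 10), "blocked_hours": 6},
    {"created": dt(2016, 3, 2), "started": dt(2016, 3, 7), "done": dt(2016, 3, 18),
     "due": dt(2016, 3, 15), "blocked_hours": 24},
]

cycle_times  = [(i["done"] - i["started"]).days for i in items]     # Cycle Time
queue_times  = [(i["started"] - i["created"]).days for i in items]  # Queue Time
blocked_time = sum(i["blocked_hours"] for i in items)               # Blocked Time
on_time      = sum(1 for i in items if i["done"] <= i["due"])       # Due Date Performance
due_date_performance = on_time / len(items)

print(f"Avg cycle time: {sum(cycle_times)/len(cycle_times):.1f} days")
print(f"Avg queue time: {sum(queue_times)/len(queue_times):.1f} days")
print(f"Total blocked time: {blocked_time} hours")
print(f"Due date performance: {due_date_performance:.0%}")
```

Starting with simple roll-ups like these keeps the measurement effort small while still showing where items wait, where they stall, and how reliably the team hits its dates.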

Can we use Kanban with DevOps?

The focus on Work In Progress (WIP) and Value Stream Mapping makes Kanban a great option to extend into DevOps. Deploying Work Items becomes just another step in the Kanban process, and with its emphasis on optimizing the whole delivery rather than just the development process, Kanban and DevOps seem like a natural match.

As we saw, workflow is different in Kanban than in Scrum. In a Scrum model, new features and changes are defined for the next sprint. The sprint is then locked down and the work is done over the sprint duration (usually 2 weeks). Locking down the requirements in the next sprint ensures that the team has the necessary time to work without being interrupted with other “urgent” requirements. And the feedback sessions at the end of the sprints ensure stakeholders approve the delivered work and continue to steer the project as the business changes.

With Kanban, there are no time constraints and the focus is on making sure the work keeps flowing, with no known defects, to the next step. In addition, limits are placed on WIP, as we demonstrated earlier. This caps the number of features or issues that can be worked on at a given time, which should allow teams to focus and deliver with higher quality. In addition, the added benefit of workflow visibility drives urgency, keeps things moving along and highlights areas for improvement. Remember, Kanban has its origins in manufacturing, and its key focus is on the productivity and efficiency of the existing system. With this in mind, Kanban, by design, can be extended to incorporate basic aspects of software development and deployment.

In the end, organizations that are adopting DevOps models are looking to increase efficiencies, deploy code faster and respond quicker to business demands. Both the Kanban and Scrum methodologies address different areas of DevOps to greater and lesser degrees.

The advantage of the Kanban system for IT operations is its ability to create accountability in a very visible system. The visibility of activities, via the Kanban board and its represented Work Items, aids in improving production flow and responsiveness to customer demand. It also helps shift the team’s focus to quality improvement and teamwork through empowerment and self-monitoring activities.

=======

Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.