
Evolving Cyber Security – Lessons from the Thalys Train Attack in France

By Gene Likins

Earlier this year, I was privileged to facilitate a round table for forty-seven IT executives representing sixteen companies in the financial services industry.  As expected for a gathering of FSI IT executives, one of the primary topics on the docket was security.

The discussion started with a candid listing of threats, gaps, hackers and the challenges these pose for all in the room.  The list was quite daunting.  The conversation turned to the attempted terrorist attack on the Thalys high-speed international train traveling from Amsterdam to Paris.  A heavily armed gunman had boarded the train with an arsenal of weapons and was preparing to fire on passengers.  Luckily, several passengers managed to subdue the gunman and prevent any deaths.

Immediately following the incident, the public began to question the security measures surrounding the train and the transit system in general.  Many recommended instituting airport-style security measures, including presentation of identity papers, metal detectors, bag searches and controlled entry points.

Given the enormous cost and the already strained police resources running at capacity, some are now calling for a different perspective on security.  As former interior minister of France Claude Gueant said,

“I do not doubt the vigilance of the security forces, but what we need now is for the whole nation to be in a state of vigilance.”

As IT professionals, this should sound familiar.   So what can we glean from this incident and apply it to cyber security?

  1. Share the burden of vigilance with customers.
    72% of online customers welcome advice on how to better protect their online accounts (Source: TeleSign).  One way to share the burden with customers is to recommend or require the use of security features such as two-factor authentication (2FA); a minimal sketch of how a 2FA code check works appears after this list.  Sending texts of recent credit card transactions is a more “passive” way of putting the burden on the customer: the customer is asked to determine if the charge is real and notify the card issuer if it is not.  Companies should begin testing the waters of just how much customers are willing to do to protect their data.  They may be surprised.
  2. Avoid accidentally letting the bad guys in. 
    One of the common ways that online security is breached is by employees unknowingly opening phishing emails with lures such as “know what your peers make” or “learn about the new stock that’s about to double in price”. IT groups should continually inform their internal constituents about the nature of these threats so we can all stay vigilant and look out for “suspicious characters”.
  3. Contain the inevitable breaches.
    It’s not a matter of “if”, it’s a matter of “when”. Network virtualization capabilities, such as micro-segmentation, bring security inside the data center with automated, fine-grained policies tied to individual workloads.  Micro-segmentation effectively eliminates the lateral movement of threats inside the data center and greatly reduces the total attack surface.  This also buys security teams time to detect and respond to malicious activities before they get out of hand.
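To make the 2FA suggestion in point 1 a little more concrete, here is a minimal sketch of how a time-based one-time password (the kind of code most authenticator apps generate) can be verified, using only the Python standard library. The secret, digit count and time step below are illustrative defaults, not a recommendation for any particular product or provider.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, step=30, at=None):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32, submitted, step=30, window=1):
    """Accept the current code plus/minus `window` steps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, step=step, at=now + drift * step), submitted)
        for drift in range(-window, window + 1)
    )

if __name__ == "__main__":
    secret = base64.b32encode(b"shared-secret-12345").decode()    # illustrative secret only
    code = totp(secret)
    print(code, verify(secret, code))
```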

Building a comprehensive security strategy should be on the agenda of all CIOs in 2016.  Cyber criminals are constantly creating new methods of threatening security, and technology is changing daily to counteract them.

VMware NSX, VMware’s network virtualization platform, enables IT to virtualize not just individual servers or applications but the entire network, including all of the associated security and other settings and rules.  This technology enables micro-segmentation and can move your security capabilities forward by leaps and bounds, but it’s only part of a holistic strategy for preventing security breaches.

Remaining ahead of the threats requires a constant evolution of people, processes and governance, along with technology, to continuously identify and address security concerns for your organization and your customers.  For help building your security strategy, contact the experts at VMware Accelerate Advisory Services.


Gene Likins is the Americas Director of Accelerate Transformation Services for VMware and is based in Atlanta, GA.

The rules to success are changing – but are you?

By Ed Hoppitt

We live in a world where the fastest-growing transportation company owns no cars (Uber), the hottest accommodation provider owns no accommodation (Airbnb) and the world’s leading internet television network creates very little of its own content (Netflix). Take a moment to let that sink in. Each of these companies is a testament to the brave new world of IT that is continuing to shape and evolve the business landscape that surrounds each of us. And the reality is that the world’s leading hypergrowth companies no longer need to own a huge inventory. They instead depend on a global platform that easily facilitates commerce for both consumers and businesses on a massive, global scale.

In order to stay relevant today, your business must be in a position to adapt, in keeping with the evolving expectations of end users. If success used to be governed by those who were best able to feed, water and maintain existing infrastructure, it is today championed by those who are least afraid of opening up new opportunities through innovation. Applications, platforms and software are all changing the business rules of success, so instigating change to adapt is no longer just part of a business plan; it’s an essential survival tool.

With this in mind, here are three essential pointers to help ensure your business is able to adapt, on demand:

1. Embrace openness

All around us, agile start-ups and individuals are leveraging the unique confluence of open platforms, crowd-funding and big data analytics. The pace of technology change means that no individual company needs to do everything itself, which is why, more than ever, there’s a real business need for open source. Open source helps to create a broad ecosystem of technology partners, making it possible to work more closely with developers to drive common standards, security and interoperability within the cloud native application market.

2. Develop scale at speed

Adrian Cockcroft, former cloud architect at Netflix (a poster child of the software-defined business), once famously said: “scale breaks hardware, speed breaks software and speed at scale breaks everything.” What Adrian realised was that to develop speed at scale, traditional approaches simply do not work, and new methodologies are required, allowing applications to be more portable and broken down into smaller units. New approaches to security services also allow microservice architectures to be utilised.

3. Create one unified platform

Open market data architectures are increasingly being used to give developers the freedom to innovate and experiment. While this is precisely what’s required to keep pace in a world of constant change, it also means that your IT infrastructure risks growing increasingly muddled as developers become more empowered to code in their own way. This is where a single unified platform holds the key, as it is ultimately what is required to best manage the infrastructure, ensuring compliance, control, security and governance, all the while giving developers the freedom to innovate.

Ask yourself a simple question: can I handle the exponential rate of change that is happening all around me? If the answer is not a resolute yes, it is time to invest some thought into how you can. Uber, Airbnb and Netflix are proof that the classic barriers to entry that once inhibited small players from gaining traction in the marketplace are breaking down. Nobody said that surviving in such a disruptive landscape would be easy, but with thought and planning, it needn’t be too difficult either.

If you want to find out more about this and how to transform your business in the software-defined era, take a look at what our EMEA CTO Joe Baguley has to say in this blog post.


Ed Hoppitt is a CTO Ambassador & Business Solution Architect, vExpert, for VMware EMEA and is based in the U.K.

Introducing Kanban into IT Operations

By Les Viszlai

Development teams have been using Agile software methodologies since the late 80’s and 90’s, and incremental software development methods can be traced back to the late 50’s.

A question that I am asked a lot is, “Why not run Scrum in IT Operations?”  In my experience, operations teams are trying to solve a different problem.  The nature of demand is different for software development vs the operations side of the IT house.

Basically, Software Development Teams can:

  • Focus their time
  • Share work easily
  • Have work flows that are continuous in nature
  • Generally answer to themselves

While Operations Teams are:

  • Constantly interrupted (virus outbreaks, systems break)
  • Dealing with specialized issues (one off problems)
  • Handling work demands that are not constant (SOX/PCI, patching)
  • Highly interdependent with other groups

In addition, operational problems cross skill boundaries.

What is Kanban?

Kanban is less restrictive than Scrum and has two main rules.

  1. Limit work in progress (WIP)
  2. Visualize the workflow (Value Stream Mapping)

With only two rules, Kanban is an open and flexible methodology that can be easily adapted to any environment.  As a result, IT operations projects, routine operations/ production-support work and operational processes activities are ideally suited to using a Kanban approach.

Kanban (literally signboard or billboard in Japanese) is a scheduling system for lean and just-in-time (JIT) production. Kanban was originally developed for production manufacturing by Taiichi Ohno, an industrial engineer at Toyota.  One of the main benefits of Kanban for IT Operations is that it establishes an upper limit to the work in progress at any given process point in a system.   Understanding the upper limits of workloads helps avoid overloading certain skill sets or subsets of an IT operations team.  As a result, Kanban takes into account the different capabilities of IT operations teams.
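To illustrate the idea of an upper limit at each process point, here is a small sketch of a WIP-limit check; the column names, limits and work items are invented for the example and are not tied to any particular tool.

```python
# Illustrative WIP-limit check; column names, limits and items are made up for the example.
WIP_LIMITS = {"Ready": 10, "In Progress": 4, "Review": 3, "Done": None}   # None = no limit

board = {
    "Ready": ["patch ESXi hosts", "on-board new hire", "replace backup tapes"],
    "In Progress": [],
    "Review": [],
    "Done": [],
}

def pull(item, src, dst):
    """Move an item from `src` to `dst` only if `dst` is below its WIP limit."""
    limit = WIP_LIMITS[dst]
    if limit is not None and len(board[dst]) >= limit:
        print(f"WIP limit reached for {dst!r}: finish something before pulling {item!r}")
        return False
    board[src].remove(item)
    board[dst].append(item)
    return True

pull("patch ESXi hosts", "Ready", "In Progress")
```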

Let’s look at the simple example below: IT operations is broken up into various teams that each have specific skill sets and capabilities (not unlike a number of IT shops today). Each IT ops team is capable of performing a certain amount of work in a given timeframe (u/hr). Ops Team 4, in our example below, is the department bottleneck, and we can use the Kanban methodology to solve this workflow problem, improve overall efficiency and complete end-user requests sooner.

Kanban Bottlenecks

As we said earlier, the advantage of adopting a Kanban methodology is that it is less structured than Scrum and is easier for operations teams to adopt. Kanban principles can be applied to any process your IT operations team is already running. The key focus is to keep tasks moving along the value stream.


Flow, a key term used in Kanban, is the progressive achievement of tasks along the value stream with no stoppages, scrap, or backflows.

  • It’s continuous… any stop or reverse is considered waste.
  • It reduces cycle time – higher quality, better delivery, lower cost

Kanban Flow

Break Out the Whiteboard

Kanban uses a board (electronic or traditional whiteboard) to organize work being done by IT operations.

A key component of this approach is breaking down Work (tasks) in our process flow into Work Item types.  These Work Items can be software related, like new features, modifications or fixes to critical bugs (introduced into production).  Work Items can also be IT-services related, like employee on-boarding and equipment upgrades/replacements.

Kanban Board

The Kanban approach is intended to optimize existing processes already in place.  The basic Kanban board moves from left to right. In our example, “New Work” items are tracked as “Stories” and placed in the “Ready” column.  Resources on the team (that have the responsibility or skill set) move the work item into the first stage (column) and begin work.  Once completed, the work item is moved into the next column, labeled “Done”.  In the example above, a different resource was in place as an approver before the work item could move to the next category, and this repeats for each subsequent column until the Work Item is in production or handed off to an end-user.  The Kanban board also has a fast lane along the bottom. We call this the “silver bullet lane” and use it for Work Items of the highest priority.
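For readers who like to see structure expressed in code, here is a rough sketch of how such a board might be modeled, including work item types and the silver bullet fast lane. The column names and classes are illustrative assumptions, not a prescribed model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

COLUMNS = ["Ready", "In Progress", "Done", "Approved", "Production"]   # illustrative flow

@dataclass
class WorkItem:
    title: str
    work_type: str                 # e.g. "feature", "critical bug", "on-boarding", "equipment upgrade"
    column: str = "Ready"
    silver_bullet: bool = False    # fast lane along the bottom of the board, highest priority

@dataclass
class Board:
    items: List[WorkItem] = field(default_factory=list)

    def next_item(self) -> Optional[WorkItem]:
        """Pull from the silver-bullet lane first, then the oldest item still in Ready."""
        fast_lane = [i for i in self.items if i.silver_bullet and i.column != "Production"]
        if fast_lane:
            return fast_lane[0]
        ready = [i for i in self.items if i.column == "Ready"]
        return ready[0] if ready else None

    def advance(self, item: WorkItem) -> None:
        """Move a work item one column to the right, stopping at Production."""
        item.column = COLUMNS[min(COLUMNS.index(item.column) + 1, len(COLUMNS) - 1)]

board = Board([
    WorkItem("restore failed database", "incident", silver_bullet=True),
    WorkItem("employee on-boarding", "IT service request"),
])
board.advance(board.next_item())   # the incident jumps the queue via the silver bullet lane
```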

How to Succeed with Kanban

In my previous experience as a CIO, the biggest challenge in adopting Kanban in IT operations was cultural.  A key factor in success is the 15-minute daily meeting commitment by all teams involved.  In addition, pet projects and low-priority items quickly surface, and some operations team members are resistant to the sudden spotlight.  (The Kanban board is visible to everyone.)

Agreement on goals is critical for a successful rollout of Kanban for operations.   I initially established the following goals:

  • Business goals
    • Improve lead time predictability
    • Optimize existing processes
    • Improve time to market
    • Control costs
  • Management goals
    • Provide transparency
    • Enable emergence of high maturity
    • Deliver higher quality
    • Simplify prioritization
  • Organizational goals
    • Improve employee satisfaction (remember ops team 4)
    • Provide slack to enable improvement

In addition, we established SLAs in order to set expectations on delivery times and defined different levels of work priority for the various teams.  This helped ensure that the team was working on the appropriate tasks.

In this example, we grouped work efforts into 5 priority classes: Silver Bullet, Expedite, Fixed Date, Standard and Intangible.

Production issues have the highest priority and are tagged under the Silver Bullet work stream.  High-priority or business-benefit activities fell under Expedite.  Fixed Date described activities that had an external dependency, such as telco install dates.  Repeatable activities like VM builds or laptop set-ups were defined as Standard.  Any other request that had too many variables and undefined activities was tagged as Intangible (a lot of projects fell into this category).

I personally believe that you can’t fix what you can’t measure, but the key to adopting any new measurement process is to start simple.  We initially focused on 4 areas of measurement, sketched in the short example that follows this list:

  1. Cycle Time: Tracks the total days/hours a work item took to move through the board, measured from the moment the item left the Ready column.
  2. Due Date Performance: Measures the number of work items completed on or before their due date out of the total work items completed.
  3. Blocked Time: Captures the number of days/hours that work items are stalled in any column.
  4. Queue Time: Tracks how long work items sat in the Ready column.
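Here is that short example: a back-of-the-envelope calculation of the four measurements from one work item's column history. The timestamps and field names are invented purely for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical history for one work item: when it entered/left columns and time spent blocked.
item = {
    "entered_ready": datetime(2016, 1, 4, 9, 0),
    "started":       datetime(2016, 1, 6, 9, 0),       # pulled out of the Ready column
    "completed":     datetime(2016, 1, 11, 17, 0),
    "due_date":      datetime(2016, 1, 12),
    "blocked_time":  timedelta(hours=6),                # summed while stalled in any column
}

queue_time = item["started"] - item["entered_ready"]    # 4. how long it sat before work began
cycle_time = item["completed"] - item["started"]        # 1. time to move through the board
blocked    = item["blocked_time"]                       # 3. time stalled in any column
on_time    = item["completed"] <= item["due_date"]      # feeds 2. due date performance

def due_date_performance(items):
    """Share of completed work items that finished on or before their due date."""
    done = [i for i in items if i.get("completed")]
    return sum(i["completed"] <= i["due_date"] for i in done) / len(done) if done else 0.0

print(queue_time, cycle_time, blocked, on_time, due_date_performance([item]))
```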

These measurements let us know how the Operations team performed in 4 areas:

  • How long items sit before Operations starts them.
  • Which area or resource within IT is causing blockages.
  • How well the team hits due dates.
  • The overall time it takes work to move through the system under each work stream.

Can we use Kanban with DevOps?

The focus on Work In Progress (WIP) and Value Stream Mapping makes Kanban a great option to extend into DevOps. Deploying Work Items becomes just another step in the Kanban process, and with its emphasis on optimizing the whole delivery rather than just the development process, Kanban and DevOps seem like a natural match.

As we saw, workflow is different in Kanban than in Scrum. In a Scrum model, new features and changes are defined for the next sprint. The sprint is then locked down and the work is done over the sprint duration (usually 2 weeks). Locking down the requirements for the next sprint ensures that the team has the necessary time to work without being interrupted with other “urgent” requirements.  And the feedback sessions at the end of the sprints ensure stakeholders approve the delivered work and continue to steer the project as the business changes.

With Kanban, there are no time constraints and the focus is on making sure the work keeps flowing, with no known defects, to the next step.  In addition, limits are placed on WIP as we demonstrated earlier.  This caps the number of features or issues that can be worked on at any given time, which allows teams to focus and deliver with higher quality.  The added benefit of workflow visibility drives urgency, keeps things moving along and highlights areas of improvement.   Remember, Kanban has its origins in manufacturing, and its key focus is on the productivity and efficiency of the existing system. With this in mind, Kanban, by design, can be extended to incorporate basic aspects of software development and deployment.

In the end, organizations that are adopting DevOps models are looking to increase efficiencies, deploy code faster and respond quicker to business demands. Both the Kanban and Scrum methodologies address different areas of DevOps to greater and lesser degrees.

The advantage of the Kanban system for IT operations is its ability to create accountability in a very visible system. The visibility of activities, via the Kanban board and its represented Work Items, aids in improving production flow and responsiveness to customer demand.  It also helps shift the team’s focus to quality improvement and teamwork through empowerment and self-monitoring activities.


Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

Increase the Speed of IT with DevOps and PaaS

By Reg Lo

How do you increase the speed of IT?

In this 5-minute video whiteboard session I describe two key strategies for making IT more agile and improving time to market.  For your convenience, there is also a transcript of the video below.

Two key strategies for increasing the speed of IT are:

  1. Deliver more applications using DevOps. Traditional waterfall methods are too slow.  Agile methodologies are an improvement but without accelerating both the infrastructure provisioning and application development, IT is still not responsive enough for the business.  Today, many organizations are experimenting with DevOps but to really move the needle, organizations must adopt DevOps at scale.
  2. Deliver new Platform-as-a-Service faster. Infrastructure-as-a-Service is the bare minimum for IT departments to remain relevant to the business.  If IT cannot provide self-service on-demand IaaS, the business will go directly to the public cloud.  To add more value to the IaaS baseline and accelerate application delivery, IT must deliver application platforms in a cloud model, i.e. self-service, on-demand, with elastic capacity.

Let’s start with the second key strategy: delivering new PaaS services faster.  PaaS services include second-generation platforms (database-as-a-service, application server-as-a-service, web server-as-a-service) as well as third-generation platforms for cloud native applications such as Hadoop-as-a-service, Docker-as-a-service or Cloud Foundry-as-a-service.

In order to launch these new PaaS services faster, IT must have a well-defined service lifecycle that it can use to quickly and repeatably create these new services.  What are the activities and what artifacts must be created in order to analyze, design, implement, operate and improve a service?

Once you have defined the service lifecycle, you can launch parallel teams to create the new service: platform-as-a-service, database-as-a-service, or X-as-a-service where X can be anything.  Each service can be requested via the self-service catalog, delivered on demand, and treated like “code” so it can be versioned with the application build.

Each service needs a single point of accountability – the Service Owner.  The service owner is responsible for the full lifecycle of the service.  They are part of the Cloud Services Team, also called the Cloud Tenant Operations Team.  The Cloud Services Team also manages the service catalog, provides the capability to automate provisioning, and manages the operational health of the services.

The Cloud Services Team is underpinned by the Cloud Infrastructure Team. This team combines cross-functional expertise from compute, storage and network to create the profiles or resource pools that the cloud services are built on.  The Cloud Infrastructure Team is also responsible for capacity management and security management.  The team not only manages the internal private cloud, but also the enterprise’s consumption of the public cloud, transforming IT into a service broker.

Now that we’ve described the new cloud operating model, let’s return to the first key strategy for increasing the speed of IT: deliver more applications using DevOps.  Many organizations have tasked one or two application teams to pilot DevOps practices such as continuous integration and continuous deployment.  This is a good starting point; however, in order to expand DevOps at scale so IT can provide a measurable time-to-market impact for the business, we need to make the adoption easier and more systematic.

The DevOps enablement team is a shared services team that provides consulting services to the other app dev teams; contains the expertise in automation so that other app dev teams do not need to become experts in Puppet, Chef, or VMware CodeStream; and drives a consistent approach across all app dev teams to avoid a fragmented approach to DevOps.

Remember how we talked about expanding PaaS?  With self-service, on-demand PaaS provisioning, app dev teams can build environment-as-a-service: an application blueprint that contains multiple VMs (the database server, application server, web server, etc.).  Environment-as-a-service lets app dev teams treat infrastructure like code, helping them adopt continuous deployment best practices by linking software versions to infrastructure versions.
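To make “treat infrastructure like code” a bit more tangible, here is a hedged sketch of what an application blueprint might look like when expressed as versioned data. The structure, field names and the provisioning stub are illustrative assumptions, not the schema or API of any particular VMware product.

```python
# Hypothetical environment-as-a-service blueprint, versioned alongside the application build.
blueprint = {
    "name": "order-service-env",
    "version": "1.4.2",          # kept in lock-step with the application build it supports
    "tiers": [
        {"role": "web", "image": "web-server-template", "count": 2, "cpu": 2, "ram_gb": 4},
        {"role": "app", "image": "app-server-template", "count": 2, "cpu": 4, "ram_gb": 8},
        {"role": "db",  "image": "db-server-template",  "count": 1, "cpu": 8, "ram_gb": 32},
    ],
    "network": {
        "segment": "order-service",
        "allow": [("web", "app", 8443), ("app", "db", 5432)],    # everything else denied
    },
}

def provision(bp):
    """Stand-in for a call to the cloud platform's provisioning API."""
    for tier in bp["tiers"]:
        for n in range(tier["count"]):
            print(f"provisioning {bp['name']}/{tier['role']}-{n} from {tier['image']}")

provision(blueprint)   # a CI/CD pipeline would do this per build, then run the deployment
```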

By delivering more applications using DevOps and by delivering new PaaS services faster, you can increase the speed of IT.

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Software Defined Networking for IT Leaders – 5 Steps to Getting Started

By Reg Lo

In Part 1 of “Software Defined Networking (SDN) for IT Leaders”, micro-segmentation was described as one of the most popular use cases for SDN.  With the increased focus on security, due to the growing number of brand-damaging cyber attacks, micro-segmentation provides a way to easily and cost-effectively firewall each application, preventing attackers from gaining easy access across your data center once they penetrate the perimeter defense.

This article describes how to get started with micro-segmentation. Micro-segmentation is a great place to start for SDN because you don’t need to make any changes to the existing physical network, i.e. it is a layer of protection that sits on top of the existing network.  You can also approach micro-segmentation incrementally, i.e. protect a few critical applications at a time and avoid boiling the ocean.  It’s a straightforward way to dip your toe into SDN.

5 Simple Steps to Get Started:

  1. Identify the top 10 critical apps. These applications may contain confidential information, may need to be regulatory compliant, or may be mission critical to the business.
  2. Identify the location of these apps in the data center. For example, what are the VM names, or do the app servers all connect to the same virtual switch?
  3. Create a security group for each app. You can also define generic groups like “all web servers” and set up firewall rules such as no communication between web servers.
  4. Using SDN, define a firewall rule for each security group that allows any-to-any traffic. The purpose of this rule is to trigger logging of all network traffic to observe the normal patterns of activity.  At this point, we are not restricting any network communications.
  5. Inspect the logs and define the security policy. The amount of time that needs to elapse before inspecting the logs is application dependent.  Some applications will expose all their various network connections within 24 hours.  Other applications, like financial apps, may only expose specific system integration during end-of-quarter processing.  Once you identify the normal network traffic patterns, you can update the any-to-any firewall rule to only allow legitimate connections.

Once you have completed these 5 steps, repeat them for the next 10 most critical apps, incrementally working your way through the data center.
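As an illustration of steps 4 and 5, the sketch below summarizes observed (hypothetical) flow-log entries into candidate allow rules for a security group. The log format and rule structure are invented for the example; they are not the NSX API, and any proposed rule set should be reviewed with the application owner before the any-to-any rule is tightened.

```python
from collections import Counter

# Hypothetical flow-log entries captured while the any-to-any rule was only logging traffic.
flow_log = [
    {"src": "web-01", "dst": "app-01", "port": 8443},
    {"src": "web-02", "dst": "app-01", "port": 8443},
    {"src": "app-01", "dst": "db-01",  "port": 1433},
    {"src": "web-01", "dst": "app-01", "port": 8443},
]

def candidate_rules(flows, min_seen=1):
    """Propose allow rules for the (src, dst, port) patterns observed in the logs."""
    counts = Counter((f["src"], f["dst"], f["port"]) for f in flows)
    return [
        {"action": "allow", "src": src, "dst": dst, "port": port, "seen": seen}
        for (src, dst, port), seen in counts.items()
        if seen >= min_seen
    ]

# Rare but legitimate flows (for example, quarter-end jobs) need a longer observation window,
# so treat this output as a starting point for review, not a final policy.
for rule in candidate_rules(flow_log):
    print(rule)
```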

In Part 3 of Software Defined Networking for IT Leaders, we will discuss the other popular starting point or use case: automating network provisioning to improve time-to-market and reduce costs.

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

From CIO to CEO: Shaping Your ITaaS Transformation Approach

By Jason A. Stevenson

In this CIO to CEO series we’ve discussed how to run, organize, and finance an ITaaS provider. In this blog, we will discuss how to approach the ITaaS transformation to gain the most value.

Barriers to Success

Transforming a traditional IT organization to be a private cloud provider and/or public cloud broker using ITaaS contradicts many basic human behaviors. To undergo a transformation we must convince ourselves to:

  • Put other people first; specifically customers paying for the service and users receiving the service.
  • Place ourselves in a service role from the very top of the IT organization to the very bottom and allow ourselves to be subservient to others.
  • Give up the notion of self-importance, recognize that each and every person plays an equal role in the chain when delivering an end-to-end service, and accept a fair amount of automation of what we do on a regular basis.
  • Redistribute control from individuals to processes that leverage group intelligence and center authority within service ownership and lifecycle.
  • Become truly accountable for our role in service delivery and support where all involved can clearly see what we have done or not done.
  • Approach problems and continual service improvement in a blameless environment that shines light on issues rather than covering or avoiding them.

In other words, we must be: humble, honest, relaxed, and trusting. Not the kind of words you often hear in a technology blog but nonetheless accurate. In essence, we must change. That is different from he must change, she must change, or they must change; and that is very different from it must change. In the IT industry, we tend to abstract change by focusing on “it” which is often hardware and software and then deflect change to “they” which is often users or another department.

Are You Ready for Change?

The first two questions an ITaaS CEO (CIO) needs to ask are:

  1. “Do I really want change?”
  2. “Are we really willing to change?”

Initially the answer seems obvious: “Of course I do, we have to.”  In that subtle nuance of “I” and then “we” lies our challenge. As we wade in, we realize the level of resistance that’s out there and the effort it will take to overcome it. We begin to realize the long-term commitment needed. For the faint of heart, change dies then and there. But for those who take on the challenge, the journey is just beginning.

Many IT organizations are focused on service/technology design and operation and therefore do not have the necessary level of in-house expertise to guide their own organization through a complete people, process, and technology transformation. To ensure the greatest return on their investment, many organizations look to a partner that is:

  • Specifically experienced in transformation to instill confidence within their organization.
  • External to their organization and somewhat removed from internal politics to increase effectiveness.

Assuming a partnership is right for your organization for these or other reasons, the question becomes “How do I pick the right partner?”

Personal Trainer vs. Plastic Surgeon

You’ve heard the saying “Be careful what you ask for; you might just get it.” This is very true of an ITaaS CEO looking for a partner to relieve some of the effort and commitment associated with change. Becoming a cloud provider/broker is hard work. The analogy of a personal trainer was specifically chosen because of its implication that the hard work cannot be delegated. Though counterintuitive, the more an ITaaS CEO or his/her team attempts to push the “hard work” to a partner, the more the partner becomes a plastic surgeon, “delivering” a pretty package with no guarantee that your organization won’t slip into old habits and lose all value gained.

A personal trainer doesn’t exercise or eat for their client, but coaches them along every step of the way. A personal trainer does not usually deliver exercise equipment, assemble the equipment, or even necessarily write a manual on how to use the equipment. What a personal trainer does is show up at regular intervals to work side by side with an individual, bringing the knowledge they have honed by going through this themselves and assisting others.

For an IT organization that is looking to become a cloud provider/broker, reduce cost, or just be more consistent or agile, a partner can help more by providing consultation than deliverables. This isn’t to say that a partner shouldn’t develop comprehensive plans or assets, but your organization will get more value out of developing plans and assets as a team, coached by your partner, to ensure a perfect fit. Though the partner may recommend technology and then implement it, this is just one piece of the puzzle. Establishing a trusted relationship between the partner and the IT organization takes regular workouts/interaction.

My Approach to “Personal Training”

We all wish we didn’t have day jobs or family responsibilities when we want to spend time at the gym, but reality mandates we spend short but regular amounts of time with our personal trainer. This is also true of the IT department and their partner. Though 100% of the organization must be involved in change at some point, these workshops equate to approximately 10% of the time of 10% of the organization. When I am engaged in large-scale transformation with my customers, I find the optimal cadence for hands-on consultation is every two or three weeks, with three days (usually in the middle of the week) that feature half-day workshops consisting of a 2-hour morning session and a 2-hour afternoon session. These workshops are both timed and structured using principles that build goodwill with all stakeholders involved, whether they are proponents or opponents of change.

A simplified version of my approach to overcoming resistance and obtaining commitment is to:

  1. Start by sharing a common vision, how the vision has value, and how we all contribute to producing that value.
  2. Raise awareness and understanding through orientation to allow as many people as possible to reach their own conclusion instead of trying to tell them what their conclusion should be.
  3. Establish trust with all involved by involving them in an assessment. Not an assessment of technology or the environment; rather an assessment that draws people out and engages them by asking what is working and shouldn’t be changed, what isn’t working and needs to be changed, and what challenges do we foresee in making a change.
  4. Openly and interactively draft a service model that includes features, benefits, and commitments, and a management model that includes roles, responsibilities, policies, and processes.
  5. Automate and enforce the service model and the management model through tool requirements.
  6. Review and revise after group workshops, with great finesse. Over time the group will progressively elaborate on these models and tools, going broader and deeper, just as a personal trainer would by adding more exercises, more weight, and more repetitions.

Avoiding Burnout

At the gym, we may overcome the initial challenge of understanding health and exercise only to discover how painful our aching muscles can be. This stresses the importance of rest and cross training. Much like a personal trainer, our IT partner must not push an organization to gorge on change or focus on one particular component of change for too long. This can sometimes seem sporadic to our team so the approach must be clearly communicated as part of the vision.

You may have noticed the use of the word “team” and wondered who that is. In this context, the team is the service’s stakeholders: everyone involved in the delivery of the service, and yes, that does include customer and user representatives.

An ITaaS CEO who involves himself or herself, and who chooses not only change and discipline but also a partner focused on promoting team change and self-discipline over short-cut deliverables, will find their organization in a much better position to transform into a cloud provider/broker using ITaaS.


Jason Stevenson is a Transformation Consultant with VMware.

ROI Analysis: 4 Steps to Set the Scope

Everything and the Kitchen Sink

By Lisa Smith

As I discussed in “Introduction to ROI”, determining what to include or exclude in your ROI analysis can be tricky.  What if you leave off a metric that ends up being materially significant and eats up all of your savings when you begin to implement?  Shouldn’t you just include ALL of your related costs, just to be sure?  Sometimes it feels much more comfortable spending many hours in “analysis paralysis”, just to be safe.

But I am jumping ahead of myself.  Let’s start with the mechanics of how to figure out the scope of your ROI Analysis.

Setting the Scope for ROI Analysis

The guiding principle is pretty simple:  Model the stuff your solution is going to impact.   

Step 1 – Create a Capabilities List

Create a list of all the capabilities your project is going to deliver in a single column (I like to use Excel, even for a text exercise like this).

Step 2 – Create a Benefits List

In a second column next to this list of capabilities, draft the benefits that are going to be delivered by these capabilities.  To do this you need to answer the question, “So what?” with regard to every capability you are delivering.  The lens for these benefits should be focused on what the company is going to achieve from the successful implementation of the project you are requesting funding for.  Don’t get caught up in how you are going to measure these benefits just yet.  Don’t judge what will be considered big savings or small savings.  Just free flow – write them down.

If this exercise is really hard then it might mean that you need to get a better handle on your desired future state.  If you have a great vision of where you are trying to get to with your project, a list of high level benefits should flow easily.

Step 3 – Refine Your Value Drivers

After you have your list of capabilities and benefits (I will refer to these items as “value drivers”), share that list with your peers who are familiar with your “as is” state and your “to be” state.   You might be given a few more value drivers to consider, or you might decide to drop a few value drivers.  If your project includes benefits or services consumed by other teams or end customers, then you need to spread your wings, leave your nest and start talking to other people.  The conversation would sound like, “If we could provide ABC Service, what would the benefits to your team be?”  This “voice of the customer” data provides great insight into how well adopted your project or service would be; evidence that more widespread gains would be achieved beyond just your team.

“If you build it, they will come” may work in the movies, but the success of any technology project hinges on internal adoption.  Many internal cloud projects failed to get the adoption levels they were expecting from other lines of business.  They built it, but no one is coming.  Quite often these missteps could have been avoided if they had a greater understanding of their consumers’ needs and desires, and used those to fuel the short list of the cloud services provided.

Step 4 – Create a Value Model

Let’s create a value model using an example from the capacity management benefit statement for VMware’s vRealize Operations (vROps) product:

“Capacity management helps identify idle or overprovisioned VMs to reclaim excess capacity and increase VM density without impacting performance.”

Reclaiming excess capacity is our “So what?”, or benefit statement.  We need to draft a model to calculate the financial impact of reclaiming excess capacity.  The model should include a before-and-after depiction of our three new capabilities:

  1. Removing idle workloads
  2. Reducing assets provisioned per workload
  3. Increasing VM density or consolidation ratio

The net effect of reclaiming excess capacity refers to expenses associated with server and storage infrastructure capacity.   If that were the scope of your analysis, the capital expense associated with the server and storage savings would be all you needed.  You might also consider expanding the scope to include the operating expenses tied to that hardware.  Those operating expenses would cover the power and cooling of the hardware, the data center space overhead, and the administrator labor needed to install, maintain, support and refresh that gear.  This technique is often referred to as creating a “total cost of ownership” view, considering the acquisition cost as well as the ongoing operating cost of an asset.
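To show the shape of such a value model, here is a simple, hedged sketch that turns the three capabilities into estimated capital and operating savings. Every input (VM counts, ratios, unit costs) is a placeholder assumption, not a benchmark.

```python
# Illustrative "reclaim excess capacity" model; every input below is a placeholder assumption.
total_vms       = 1000
idle_vm_pct     = 0.10     # capability 1: idle workloads that can simply be removed
rightsize_pct   = 0.20     # capability 2: capacity recovered by shrinking overprovisioned VMs
density_before  = 20       # VMs per host today
density_after   = 25       # capability 3: VMs per host after raising the consolidation ratio
capex_per_host  = 12_000   # acquisition cost per server
opex_per_host   = 4_000    # annual power, cooling, space and admin labor per server (TCO view)

capacity_needed = total_vms * (1 - idle_vm_pct) * (1 - rightsize_pct)   # VM-equivalents of demand
hosts_before    = total_vms / density_before
hosts_after     = capacity_needed / density_after
hosts_saved     = hosts_before - hosts_after

capex_savings = hosts_saved * capex_per_host    # one-time (or refresh-avoidance) savings
opex_savings  = hosts_saved * opex_per_host     # recurring, per year
print(f"hosts saved: {hosts_saved:.1f}, capex: ${capex_savings:,.0f}, opex/year: ${opex_savings:,.0f}")
```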

Try grouping the value drivers into themes such as reducing computer hardware infrastructure, reducing incident management processes, or reducing time to market for application development.  For a single theme it is best to create a single value model for all of those value drivers.  This is most helpful in providing a single depiction of the “as is” costs or process effort of today compared to a view of the future “to be” state – comparing apples to apples.

So where do we draw the line on scope?

If your value drivers are a pretty short list, feel free to model every aspect of those benefits.  If your value benefits list is lengthy and your “so what” list has over 10 items, please consider scoping to only key and material items.

I am going to apply a bit of circular logic here for a minute: if any one line item in your TCO analysis is less than 10% of your total savings, you might consider dropping it from your model.  The savings do not justify the effort to QA your math or to socialize the savings.  Secondly, if a savings item is considered “soft savings”, you might want to consider dropping it as well, especially if you have plenty of hard-dollar savings to justify the investment.  Don’t gild the lily.  If you are not familiar with the “hard and soft dollars” concept, I will be covering that in a future article.


Lisa Smith is a Business Value Consultant in the VMware Accelerate Advisory Services team and is based in New York, NY.

Don’t Let Stakeholder Management & Communications Be Your Transformation’s Goop Mélange

By John Worthington

What is “Goop Mélange”?

In an episode of TV’s “The Odd Couple”, Oscar took on making his own dinner.  He mixed in potato chips, sardines, pickles, and whipped cream.  It was then garnished with ketchup.  When Felix asked what he called this mélange, Oscar answered, “Goop.”

So what does this have to do with stakeholder management?

The importance of stakeholder management is referenced in almost all best practice guidance including ITIL, COBIT, PMBOK, TOGAF and many more. In addition, the number of channels available for engaging stakeholders is growing to include social media, smartphones and other enabling technologies.

Unchecked, your stakeholder management plan can quickly become a very confusing mix of uncoordinated communication. Mixing up a little bit of everything can wind up being the goop mélange of your transformation program.

One way to assure a desirable mix of communication channels is to establish a Service Management Office (SMO), which can begin to develop marketing and communication expertise within the IT organization based on a well defined stakeholder management strategy.

The stakeholder management plan can take a look at the organizational landscape based on the current and future needs of the transformation path, identify key stakeholders, and provide the analysis and guidance that others (such as project managers and architects) need to engage stakeholders effectively within the transformation context.

From a service management perspective, the stakeholder management plan and the SMO can set in motion the improvements needed to establish cross-functional communication. An example might be Service Owners driving dialog about end-to-end IT services across technical domains.

The stakeholder management plan, supported by a well-sponsored SMO, can also ensure that top-to-bottom communication channels are matured. This enables communication between Process Owners, Process Managers and Process Practitioners, to give another example.

Stakeholder Management

Sticking to these basics of stakeholder management and communications as you begin your transformation can make sure you stay focused on building a solid foundation for more sophisticated communication channels when the organization is ready, and help you avoid making a goop mélange out of your transformation communications.


John Worthington is a VMware transformation consultant and is based in New Jersey. Follow @jMarcusWorthy and @VMwareCloudOps on Twitter.

Commodity IT – It’s a good thing for everyone!

By Les Viszlai

Traditionally, commodity IT services and products are the most frequently outsourced in our industry. There is a belief among a majority of people in our industry that IT commoditization is a very bad thing.

In its simplest form, a company’s success is measured by its capability to produce a product or service efficiently and to meet the market demand for that product or service.  Running IT as a business implies that we need to do the same for our internal customers: provide products and services cost-effectively, with repeatability and predictability, at an agreed cost.  IT organizations become business enablers and trusted advisors by getting out of the business of building and running commodity IT products and services, and focusing instead on providing the business with resources that deliver new product capabilities and business differentiators in their market.

We need to understand what a commodity is in order to change how IT is run and provide that business focus.

A commodity starts as anything that has a perceived value to a consumer. A more common understanding of a commodity is a product that is generic in nature and has the same basic value as all similar items (a simplified example is a server).

For example, virtualized hardware, in the IT data center context, is a device or device component that is relatively inexpensive, widely available and more or less interchangeable with other hardware of its type.

By definition, a commodity product lacks a unique selling point. In an IT context, the term usually differentiates typical IT products from specialized or high-performance products. A commodity, when looked at this way, is a low-end but functional product without distinctive features.  Generally, as hardware moves along the technology cycle, it becomes a commodity as the technology matures in that marketplace. That implies that most hardware products that have been around for a long time, like network switches, are now available in commodity versions, although they aren’t generally marketed as such.  In the past, physical networking infrastructure was viewed as specialized, until virtualization of this layer of technology matured and became more widespread.

Example IT Transformation Journey - Commodity Services


Let’s look at a more traditional commodity example: the local gas station.  The gas at your pump is the same as the gas at any of the other pumps. It’s also the same as the gas at the station across the street or across town.  There may be a very slight difference in price, but the products essentially sell for the same basic price and are the same regardless of where they are purchased.  Can your consumers of IT products and services get this same consistency? Are we trying to refine our own gas before providing it to our consumers?

When we compare this scenario to a traditional IT organization, we need to ask ourselves which commodity areas we are focusing on reinventing and providing. From the IT standpoint, can we virtualize our hardware environment and reap benefits like faster time to market, cost effectiveness and improved quality?  Are we already doing this, and can we drive further up the technology curve to build and/or broker services and products such as IaaS, PaaS or SaaS that deliver business value faster?

Our common fear is that “commodity IT” implies low quality and low cost, or that our roles will be outsourced.  In reality, a commodity product or service is not sold for less than it costs to produce or run.  Likewise, a commodity IT product or service should offer affordability, repeatability and predictability in delivery, and as brokers rather than builders, IT organizations can provide this to the business.

In summary, studies have shown that IT organizations focus a lot of effort and resources on building and running products and services at the back end of the transformation curve.  This is the area with the least business value, yet it consumes a majority of IT time and resources.  IT organizations need to shift focus to the front of the technology cycle and help the business create and engineer new products and services that drive revenue.

IT best practices exist to help IT organizations move to the front of the technology cycle by reorganizing people, process and technology to use the building blocks of various commodity products and services to deliver affordable, repeatable and predictable services.  This frees IT organizations to focus on the front of the curve.  Now that’s a lot more valuable, and fun.

Commodity IT – It’s a great thing for everyone!


Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

From CIO to CEO: Financing your ITaaS Organization with Charge-back

By Jason Stevenson

In my latest CIO to CEO blogs, we discussed How to RUN and ORGANIZE an Information Technology as a Service (ITaaS) Provider. In this blog, we will discuss how to FINANCE an ITaaS Provider.

One of the loftiest goals of an ITaaS provider is the ability to charge back, or at least show back, the cost of services. It can also be daunting to do so in a manner that is fair to all customers.

Leonardo da Vinci said “Simplicity is the ultimate sophistication” and Albert Einstein said “If you can’t explain it simply, you don’t understand it well enough.” In this blog I’ll walk through a quickly executable IT cost model to calculate Total Cost of Ownership (TCO), using mostly tools and data we already have, to establish a price that is:

  • Simple
  • Accurate
  • Equitable
  • Scalable
  • Agile

STEP 1: Establish Labor Costs

Bob worked 40 hours this week: 16 hours in support of services, including day-to-day operations, online training every other day, and daily staff meetings, and 24 hours on a migration for Project 2. All employees had a holiday the first Monday of this particular month. As it happens, none of our employees took any leave for the rest of the month. Out-of-office time equals $0 cost and is excluded. The reason for this will become more evident when we discuss burdened rates.

IT Cost Model

Highlighted in blue, each project is associated with a service. It is important to note that Project 2 is associated with Service B. The cost of Project 2 is derived from our employees’ wages and a rate of 1.5. The wage is multiplied by this rate to equal a burdened rate. Burdened rates reflect wages plus benefits and fee, if any. Bob worked 24 hours on Project 2’s migration in Week 2 per our example and contributed to Project 2’s cost of $5,724.00 this month.

Highlighted in orange, Bob worked 16 hours in support of services in Week 2 per our example. Tickets (requests, incidents, problems, changes, etc.) are associated with each service. Out of the 112 total tickets this month, 30 were related to Service B, or approximately 27%. Using the number of tickets per service and the total labor for services gives us the cost of each service. In this case, $17,172.00 * (30/112) = $4,599.64 for Service B this month.

Highlighted in green, the total cost of Service B labor for this month is $10,323.64, found by adding $5,724.00 for project labor and $4,599.64 for service labor from our previous calculations. Our service organization must recover $36,252.00 this month for labor alone.
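The Step 1 arithmetic is easy to reproduce. The sketch below uses the figures given in the example (the 1.5 burden rate, Service B’s 30 of 112 tickets, the $17,172.00 service labor pool and $5,724.00 of project labor); the ticket counts for the other services and the hourly wage are assumed purely for illustration.

```python
# Reproduces the Step 1 arithmetic; ticket counts for Services A and C are assumed,
# since only Service B's 30 of 112 tickets is given in the example.
BURDEN_RATE = 1.5                                # wage x 1.5 = burdened rate (benefits plus fee)
burdened = lambda wage, hours: wage * BURDEN_RATE * hours
# e.g. burdened(159.00, 24) == 5_724.00 if Bob were the only contributor (wage is hypothetical)

total_service_labor = 17_172.00                  # burdened cost of all service hours this month
tickets = {"Service A": 40, "Service B": 30, "Service C": 42}
total_tickets = sum(tickets.values())            # 112 in the example

service_labor_b = total_service_labor * tickets["Service B"] / total_tickets    # ~$4,599.64
project_labor_b = 5_724.00                       # Project 2 labor, associated with Service B
print(f"Service B labor this month: ${service_labor_b + project_labor_b:,.2f}")  # ~$10,323.64
```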

IT Cost Model

STEP 2: Distribute Operational Expense

Services may require operational expenditures (OPEX) such as leases, utilities and maintenance, some of which may be indirect and need to be fairly distributed across services based on relative service size. By dividing the subscribers of a service by the total number of subscribers, we arrive at a fair percentage of monthly operational expenses for Service B of roughly 28%. Indirect operational expenses for Service B are calculated fairly as $18,000.00 * (250/900) = $5,000.00.

Some expenditures are so large that the expense cannot be recognized at one point in time and needs to be amortized (for example, building a new data center). Depending on your model, this expense may be captured as either a project expense or an operational expense. In keeping our model simple, amortization should be used sparingly. When absolutely necessary, it should be used consistently by establishing a concise policy like “All expenses over $500,000 will be amortized (spread) over 36 months.” In our example, no expenses exceeded half a million dollars, so amortization does not apply. However, if we had an Expense E of $1,750,000 for a new data center build-out, it would be recognized as $1,750,000 / 36 = $48,611.11 per month.
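The same proportional logic used for labor applies to indirect operating expense, and a single policy threshold handles amortization. Here is a short sketch using the Step 2 figures:

```python
def distribute_opex(indirect_opex, service_subscribers, total_subscribers):
    """Allocate indirect operating expense in proportion to a service's share of subscribers."""
    return indirect_opex * service_subscribers / total_subscribers

def monthly_expense(amount, threshold=500_000, months=36):
    """Recognize small expenses immediately; amortize anything over the policy threshold."""
    return amount / months if amount > threshold else amount

print(distribute_opex(18_000.00, 250, 900))   # Service B's share: 5000.0
print(monthly_expense(1_750_000))             # data center build-out: ~48611.11 per month
```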


STEP 3: Calculate Total Cost of Ownership

Projects may require capital expenditures (CAPEX) beyond labor, such as material procurements, contracted personnel and travel. Project costs are often direct and simply added to project labor for total project cost. We can distribute these costs as well if needed. The Total Cost of Ownership for Service B rises to $18,323.64 by adding capital expenses of $3,000.00 to our previous operational expenses and labor. Assuming our other services and projects require only labor, our service organization must recover $44,252.00 this month to remain solvent. Dividing Service B’s total cost by 250 subscribers gives us a cost of $73.29 per subscriber per month.


STEP 4: Establish Charge-back

Our previously mentioned 250 subscribers of Service B are spread over four customers (departments in this example): 100 + 100 + 25 + 25 = 250. We add a fee of 7%, which is the difference between our cost and our price, and recover from each customer their fair share of the services they consume. Department A has 100 subscribers of Service B, which equates to 100 * ($73.29 * 107%) = $7,842.03; 107% represents 100% of the cost plus the 7% fee. Our previous Total Cost of Ownership of $44,252.00 becomes a price of $47,349.64, allowing our service organization to be not just solvent but also profitable, enabling research, development and improvement of services.
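Putting Steps 3 and 4 together, the per-subscriber cost and the 7% fee turn into a charge-back line per department. The sketch below simply reuses the figures from the worked example.

```python
service_b_tco = 18_323.64                                  # labor + OPEX share + CAPEX from Step 3
subscribers   = {"Dept A": 100, "Dept B": 100, "Dept C": 25, "Dept D": 25}
fee           = 0.07                                       # the difference between cost and price

cost_per_sub  = round(service_b_tco / sum(subscribers.values()), 2)   # $73.29, as in the example
invoices = {dept: round(n * cost_per_sub * (1 + fee), 2) for dept, n in subscribers.items()}
print(invoices)          # Dept A comes out to $7,842.03, matching the worked example
```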

IT Cost Model

If the culture of your organization does not allow your IT department to charge customers for service, the same principles can still be applied. Remove the fee (7% in our example) and communicate the same information as a Service Consumption Report rather than an invoice to provide show-back to your customers. Though they may not be paying for services, you can still influence their behavior by giving them comparative analysis against their peers.

The following table summarizes charge-back using an IT Cost and Price Model.

IT Cost Model

In my next blog post we will explore tips for how to ensure, as the new “CEO” of your IT department, that your transition to an agile, innovative, and profitable service provider is a successful one.


Jason Stevenson is a Transformation Consultant with VMware.