
IT Innovation has a Major Impact on Attracting – and Retaining – Talented Staff

by Mark Sterner

When CIOs adopt leading technologies like self–service provisioning, software defined networks, cloud native applications, and mobile solutions, they’re typically motivated by the significant business efficiencies and agility that these new technologies can deliver.

Those are essential considerations, of course, but I’m going to explore another, often overlooked, reason to upgrade to IT’s cutting edge: the technology you deploy for internal use plays a major role in attracting – and retaining – the talented staff who will transform your business into a digital enterprise.

You’re only as good as your talent, after all, and anything that frustrates your employees – especially the best ones – or otherwise drives them to think about jumping ship is a problem you need to deal with.

Attracting Millennials

This problem is becoming more urgent as Millennials join the workforce. Young people entering the workforce today expect mobility, interoperability, ease of use, rapid technology upgrades, the consumerization of IT, and more, based on their experience with technology since grade school. With next year’s new hires, those expectations will only increase.

This has an even greater impact on companies that are making serious investments in customer-facing technology. I’ve heard young employees at a well-known IT enterprise, for example, say, “I can’t believe I work for a tech company and I can’t get everything on my phone and that the applications are so slow and so hard to maneuver.”

I’ll write more about how Millennials are changing IT in my next post, but here I’ll just add that young people who arrive at companies with outdated internal IT are going to be looking to leave as soon as possible, bringing all the associated costs and delays that come with having to replace people who were performing well.

Retaining Top Talent

Of course, attracting and retaining talent isn’t just about your newest hires. I’ve also seen highly experienced employees motivated to move because they’re asked to work with outdated systems, processes, and tools. These employees know how much better they could be performing with better technologies at their disposal and are simply frustrated at dealing with antiquated infrastructure, manual processes, paper-based systems, and having to constantly put out fires instead of focusing on innovation.

This was made even more apparent to me when I worked with a large pharma company that spun off one of their divisions with a new greenfield approach to internal IT (but no real difference in their customer-facing business). They advertised jobs in the spin-off internally, and a large number of their best people jumped at the chance, leaving the parent company badly lacking in experience.

Ambitious IT professionals can be even harder to keep.  Those individuals take it on themselves to keep learning and pick up the very latest skills. If their company isn’t supporting their personal development because it has no ambition to deploy those technologies, employees will take that as a signal that they should be working elsewhere.

There’s one further cost to holding back on new technologies that future-oriented employees – of whatever age – are keen to use. If you finally spend money on new technology after the best of them have left, you’ll be short of the skills to make full use of the capabilities you’ve invested in. And in the age of the fully digital enterprise, when IT is no longer simply a support function, you’ll fail to get maximum benefit from an essential competitive differentiator.

How Do You Stay Ahead? (Spoiler: It’s not all about technology!)

Clearly, this adds weight to any efforts you have underway to advance your internal systems. It bolsters the case for investing in flexible, virtualized work environments that are mobile-friendly and device agnostic. As you free employees to work from anywhere and on any device, and on modern systems that are fast, adaptable, and efficient, you will set yourself apart in the marketplace for talent. Existing employees will view your company more positively – meaning they’ll be far less likely to look elsewhere – and you’ll gain a reputation among talented, forward-looking people in your sector as the place to work.

But investing in internal IT for talent retention isn’t just about the technology. People and process are crucial considerations, too.

Your best staff will know about and want to use the latest solutions, but they can’t be expected to make maximum use of them without training and support. So when you do update your IT, you need to be sure that employees are supported in the transition and that your organization is prepared to shift its operating model to fully exploit the systems you are putting in place. And you need to be ready to get help to do that if needed.

In addition, empower your tech staff to help guide the technology roadmap you create. It helps build the sense of ownership that will keep them attached to the organization, but it’s also smart management. These people have experience, knowledge of the business, and proven ambition. You’re always going to build a better system if you include them in your planning than you would if you present them with a plan that’s already a done deal.

========

Mark Sterner brings over 14 years of experience in IT Service Management. He has worked in both the process development and ITIL implementation areas for large IT organizations. Mark is currently a Transformation Consultant at VMware, Inc.

Transforming IT into a Cloud Service Provider

By Reg Lo

Until recently, IT departments thought that all they needed to do was provide a self-service portal for app dev teams to provision VMs with Linux or Windows, and they would have a private cloud comparable to the public cloud.

Today, in order for IT to become a cloud service provider, IT must not only embrace the public cloud in a service broker model, but also provide a broader range of cloud services.  This 5-minute webinar describes the future IT operating model as IT departments transform into cloud service providers.

Many IT organizations started their cloud journey by creating a new, separate cloud team to implement a greenfield private cloud.  Automation and proactive monitoring using a cloud management platform were key to the success of their private cloud.  By utilizing VMware’s vRealize Cloud Management Platform, IT could easily expand into the hybrid cloud, provisioning workloads to vCloud Air or other public clouds from a single interface, effectively creating “one cloud” for the business to consume and “one cloud” for IT to manage.

However, the teams managing the brownfield weren’t standing still.  They too wanted to improve the service they were providing the business, and they too wanted to become more efficient, so they also invested in automation.  Without a coherent strategy, the brownfield and greenfield teams took their own separate forks down the automation path, confusing the business about which services it should be consuming.  We started this journey by creating a separate cloud team.  However, it may be time to re-think the boundaries of the private cloud and bring greenfield and brownfield together to provide consistency in the way we approach automation.

In order to be immediately productive, app dev teams are looking for more than infrastructure-as-a-service.  They want platform-as-a-service.  These might be second-generation platforms such as database-as-a-service (Oracle, MSSQL, MySQL, etc.) or middleware-as-a-service (such as webMethods), or they might need third-generation platforms based on unstructured PaaS like containers or structured PaaS like Cloud Foundry.  The terms first, second, and third generation map to the mainframe (1st generation), distributed computing (2nd generation), and cloud native applications (3rd generation).

Multiple cloud services can be bundled together to create environment-as-a-service.  For example, LAMP stacks – Linux, Apache, MySQL, and PHP (or Python).  These multi-VM application blueprints let entire environments be provisioned at the click of a button.

A lot of emphasis has been placed on accessing these cloud services through a self-service portal.  However, DevOps best practices are moving toward infrastructure as code.  In order to support developer-defined infrastructure, IT organizations must also provide an API to their cloud.  Infrastructure-as-code lets you version infrastructure scripts together with the application source code, ultimately enabling the same deployment process in every environment (dev, test, stage, and prod) – improving deployment success rates.
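As a minimal sketch of what that looks like in practice – hypothetical names and a stubbed provisioning call rather than any specific VMware or third-party API – the file below lives in the same repository as the application code, so the identical blueprint drives dev, test, stage, and prod with only sizing parameters changing:

```python
# infrastructure/blueprint.py -- versioned in the same repository as the application.
# Hypothetical sketch: provision() stands in for whatever cloud API or CLI the
# organization actually uses (vRealize, Terraform, etc.).

from dataclasses import dataclass

@dataclass
class VM:
    role: str        # e.g. "web", "app", "db"
    cpus: int
    memory_gb: int

# One blueprint, parameterized per environment, so every environment is built
# by the same code path rather than by hand.
ENVIRONMENTS = {
    "dev":   [VM("web", 1, 2), VM("db", 1, 4)],
    "test":  [VM("web", 2, 4), VM("db", 2, 8)],
    "stage": [VM("web", 2, 8), VM("db", 4, 16)],
    "prod":  [VM("web", 4, 8), VM("web", 4, 8), VM("db", 8, 32)],
}

def provision(env: str) -> None:
    """Stub for the real provisioning call; prints what would be created."""
    for vm in ENVIRONMENTS[env]:
        print(f"[{env}] creating {vm.role} VM: {vm.cpus} vCPU / {vm.memory_gb} GB RAM")

if __name__ == "__main__":
    provision("dev")   # the same script later runs unchanged against test, stage, and prod
```

Because the blueprint is committed alongside the application, a given application version can always be redeployed onto exactly the infrastructure it was tested against.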

Many companies are piloting DevOps with one or two application pipelines.  However, in order to scale, DevOps best practices must be shared across multiple app dev teams.  App dev teams are typically not familiar with architecting infrastructure or the tools that automate infrastructure provisioning.  Hence, a DevOps enablement team is useful for educating the app dev teams on DevOps best practices and providing the DevOps automation expertise.  This team can also provide feedback to the cloud team on where to expand cloud services.

This IT operating model addresses Gartner’s bimodal IT approach.  Mode 1 is traditional, sequential and used for systems of record.  Mode 2 is agile, non-linear, and used for systems of engagement.  Mode 1 is characterized by long cycle times measured in months whereas mode 2 has shorter cycle times measured in days and weeks.

It is important to note that the business needs both modes to exist.  It’s not one or the other – just as the business needs both interfaces to the cloud: a self-service portal and an API.

What does this mean to you?  IT leaders must be able to articulate a clear picture of a future state that encompasses both mode 1 and mode 2 and that leverages both a self-service portal and an API to the organization’s cloud services.  IT leaders need a roadmap to transform their organization into a cloud service provider that traverses the hybrid cloud.  The biggest challenge in the transformation is changing people (the way they think, the culture) and processes (the way they work).  VMware can not only help you with the technology; VMware Accelerate™ Advisory Services can help you address the people and process transformation.

 


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Technology is not a Magic Wand for DevOps

By Theresa Stone

All too often I walk into companies that want to implement DevOps as part of their software defined data center (SDDC) journey and hear conversations filled with frustration like:

We have implemented 8 new tools and our developers seem to be mostly happy with them; but we continue to have issues delivering anything on time!  Our operations staff are frustrated and internal customers won’t allow their applications in our virtualized environment.

OR

We bought all these new tools and implemented them; I even paid for my people to have formal training on them, but I don’t feel like we’re any better off than we were before!

Many organizations have bought into the falsehood that DevOps is just a technology play.   That could not be further from the truth, so don’t fall for that trap.   Successful DevOps organizations must focus on a lot more than just implementing technology.

IT leaders should invest in cultural changes, people, skills gaps, and collaboration issues above all else to achieve DevOps success. Organizations embarking on a DevOps initiative need to take a step back and evaluate whether they are on track for success by approaching DevOps holistically.   These initiatives require a transformation strategy built around clearly defined goals and the development of a well-defined roadmap that incorporates people, process, technology, and culture.

Core Pillars of DevOps Transformation

Here are some activities that are often incorporated in a transformation roadmap for DevOps, broken down across the core pillars required for success – note that one is not like the others:

PEOPLE Transformation

  • Governance frameworks are put in place to support and enable value realization from DevOps
  • Organization and operating models are modified to facilitate holistic changes to culture
  • People are invested in with necessary training and skills enhancements

PROCESS Transformation

  • Operations and development engineers participate together in the entire service life-cycle from design through to production support
  • An incident command system is in place where the development team is involved in incident resolution
  • Processes are re-engineered to be more efficient, lean and repeatable

TECHNOLOGY Transformation

  • DevOps technology improvements rely on build, test, and release automation, along with orchestration across technologies and integrated tool chains, using continuous delivery capabilities
  • Infrastructure is treated as code
  • The DevOps team delivers small chunks of value to the customer more often
  • Recovery oriented computing – fail forward

CULTURE Transformation

  • High trust, team culture demonstrating effective, seamless cross-functional collaboration, open communications, performance orientation and learning culture (generative organization)
  • Demonstrated Servant Leadership – enable and serve from the top down
  • Established collective ownership
  • Creativity is encouraged

(All of these culture items must be focused on and incorporated into the attributes and activities above.)

Why is Technology the Pillar Most Organizations Focus on First?

Even though new technology is important and usually required, most organizations focus only on tools and do not achieve desired outcomes.   Why does this happen over and over again?   I believe it is due to a couple of factors:

  1. IT leaders gravitate toward what comes easiest and what seems most important to them – i.e. implementing new technology
  2. Leaders in general have a hard time comprehending the importance of people, process and cultural changes and what that actually looks like; therefore, investments in seeking outside assistance from experts are not made where they may be needed the most

In today’s fast-paced, ever-changing landscape, filled with disruptive technology, successful companies must be strategic and operate efficiently to remain on top.  DevOps is not easy and it does not happen overnight; however, it can produce the desired results if you take a holistic approach.  There are many success stories of those that embraced the changes and transformation needed across people, process, technology and culture to be the new or rising leaders in their industry.    Are you next?

========

Theresa Stone is a Transformation Process Architect with VMware and is located in Virginia.

Is Your IT Financial Model Fit for ITaaS and the Cloud?

By Sean Harris

IT as a Service (ITaaS) and Cloud Computing (Public, Private & Hybrid) are radically different approaches to delivering IT from traditional delivery models, and they require new operating models, processes, procedures and organisations to unlock their true value. While the technology enables this change, it does not deliver it.

Measuring the Business Value of IT as a Service

A question I hear often from customers is, “How do I measure and demonstrate the value of ITaaS and Cloud Computing?”  For many organisations, it is unclear how to measure value (the return) and cost (the investment), or which metrics have context in an ITaaS delivery model.  For example, most customers I deal with (though surprisingly not all) can articulate the price of a server, but that metric has no context in an ITaaS delivery model.

I have talked before on this very blog about the importance, in this new digital era, of IT being able to link the investments in IT and the costs of running IT to gains in business efficiency and true business value. This links your business services – the margins and revenues they generate and the benefits they deliver to customers and the business as a whole – to IT costs and investments. This is one step. The other side of the equation is how to represent, measure and track the cost of delivering the IT services that underpin the business services, and then present those costs in a form that has context in terms of the consumption of the business services that are delivered.

Have you mapped your business services to IT services in terms of dependency and consumption?

Have you mapped IT spend to IT services and IT service consumption?

What about your organisation and procedures? How do you account for IT internally?

The Project-Based Approach

Most of the organisations I speak to have a project based approach to IT spend allocations. There are variations in the model from one organisation to the next, but the basic model is the same. In this approach:

  • Funds for new developments are assigned to projects based on a business plan or other form of justification.
  • The project is responsible for funding the work to design and develop (within the organisation’s governance structure) the business and IT services needed to support the new deployment.
  • The project is also typically responsible for funding the acquisition of the assets needed to run these services (although the actual purchase may be made elsewhere) – these typically include infrastructure, software licenses, etc.
  • In most cases the project will also fund the first year (or part year) of the operational costs. At this point responsibility for the operation is passed to a service delivery or operations team who are responsible for funding the on-going operational aspects. This may or may not include a commitment or ownership of tech refresh, upgrades and updates.

What is included can vary drastically. Rarely is there any ongoing monitoring of how costs map to revenues and margins. When it comes to tech refresh, in many cases it is treated as a change to the running infrastructure and so needs an assigned project to fund that refresh. This leads to tech refresh competing with innovation for a single source of funds.

The Problem with Project-Based Accounting

Just for a second, imagine a car company offering a deal where you (the consumer) pay the cost of the car, the first year’s service, tax, insurance and fuel, and then after that you pay NOTHING (no fuel, no insurance, no tax, no service). Would that not lead you to believe that after year one the car is free?

While the business as a whole sees the whole cost of IT, no line of business or business service has visibility of the impact it is having on the operational cost of IT. It is also extremely hard, if not impossible, to track whether a business service is still operating profitably, as any results are inaccurate and the process of calculating them is fragmented.
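A service-based costing model addresses this by allocating shared IT costs to business services in proportion to their consumption, so per-service margin becomes visible. Here is a minimal sketch of that allocation; the cost pools, consumption shares, and revenues are entirely illustrative:

```python
# Illustrative consumption-based cost allocation: shared IT cost pools are spread
# across business services in proportion to the capacity each service consumes,
# making per-service margin visible (all figures are made up for the example).

monthly_it_costs = {"compute": 40_000.0, "storage": 15_000.0, "operations_staff": 60_000.0}

# Fraction of each cost pool consumed by each business service.
consumption = {
    "online_banking":   {"compute": 0.50, "storage": 0.40, "operations_staff": 0.45},
    "loan_origination": {"compute": 0.30, "storage": 0.35, "operations_staff": 0.35},
    "reporting":        {"compute": 0.20, "storage": 0.25, "operations_staff": 0.20},
}

monthly_revenue = {"online_banking": 90_000.0, "loan_origination": 55_000.0, "reporting": 20_000.0}

for service, usage in consumption.items():
    cost = sum(monthly_it_costs[pool] * share for pool, share in usage.items())
    margin = monthly_revenue[service] - cost
    print(f"{service}: allocated IT cost ${cost:,.0f}, margin ${margin:,.0f}")
```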

Surely this needs to change significantly if any IT organisation is to seriously consider moving to an ITaaS (or cloud) delivery model? Is it actually possible to deliver the benefits associated with ITaaS delivery without this change in organisation and procedure?

Applying a service-based costing approach can seem intimidating at first, but it is essential to achieving value from your ITaaS transformation and gets much easier with expert help.  If you are approaching this transformation, contact our Accelerate Advisory Services team at VMware who, along with the Operations Transformation Services team, provides advice and guidance to customers around constructing an operating model, organisation, process, governance and financial management approach that supports an ITaaS delivery model for IT.

=======

Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

Evolving Cyber Security – Lessons from the Thalys Train Attack in France

By Gene Likins

Earlier this year, I was privileged to facilitate a round table for forty-seven IT executives representing sixteen companies in the financial services industry.  As expected for a gathering of FSI IT executives, one of the primary topics on the docket was security.

The discussion started with a candid listing of threats, gaps, hackers, and the challenges these pose for everyone in the room.  The list was quite daunting.  The conversation turned to the attempted terrorist attack on the Thalys high-speed international train traveling from Amsterdam to Paris.  A heavily armed gunman had boarded the train with an arsenal of weapons and was preparing to fire on passengers.  Luckily, several passengers managed to subdue the gunman and prevent any deaths.

Immediately following the incident, the public began to question the security measures surrounding the train and the transit system in general.  Many recommended instituting airport-style security measures, including presentation of identity papers, metal detectors, bag searches, and controlled entry points.

Given the enormous cost and the already strained police resources running at capacity, some are now calling for a different perspective on security.  As former interior minister of France Claude Gueant said,

“I do not doubt the vigilance of the security forces, but what we need now is for the whole nation to be in a state of vigilance.”

As IT professionals, this should sound familiar.   So what can we glean from this incident and apply it to cyber security?

  1. Share the burden of vigilance with customers.
    72% of online customers welcome advice on how to better protect their online accounts (Source: Telesign).  One way to share the burden with customers is to recommend or require the use of security features such as two-factor authentication (2FA); a minimal example follows this list.  Sending texts of recent credit card transactions is an example of a “passive” way of putting the burden on the customer.  The customer is asked to determine if the charge is real and to notify the card issuer if it’s not.  Companies should begin testing the waters of just how much customers are willing to do to protect their data.  They may be surprised.
  2. Avoid accidentally letting the bad guys in. 
    One of the common ways that online security is breached is by employees unknowingly opening emails with lures such as “know what your peers make” or “learn about the new stock that’s about to double in price.” IT groups should continually inform their internal constituents about the nature of threats so we can all stay vigilant and look out for “suspicious characters”.
  3. Contain the inevitable breaches.
    It’s not a matter of “if”; it’s a matter of “when.” Network virtualization capabilities, such as micro-segmentation, bring security inside the data center with automated, fine-grained policies tied to individual workloads.  Micro-segmentation effectively eliminates the lateral movement of threats inside the data center and greatly reduces the total attack surface.  This also buys security teams time to detect and respond to malicious activities before they get out of hand.
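To make the 2FA suggestion in point 1 concrete, here is a minimal time-based one-time password (TOTP) generator of the kind used by most authenticator apps, implemented against RFC 6238 with only the Python standard library. A real deployment would use a vetted library and server-side verification; this is just a sketch of the mechanism:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # 30-second time step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; in practice each customer enrolls their own shared secret.
    print("Current one-time code:", totp("JBSWY3DPEHPK3PXP"))
```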

Building a comprehensive security strategy should be on the agenda of all CIOs in 2016.  Cyber criminals are constantly creating new methods of threatening security, and technology is changing daily to counteract them.

VMware NSX, VMware’s network virtualization platform, enables IT to virtualize not just individual servers or applications but the entire network, including all of the associated security and other settings and rules.  This technology enables micro-segmentation and can move your security capabilities forward by leaps and bounds, but it’s only part of a holistic strategy for preventing security breaches.

Remaining ahead of the threats requires a constant evolution of people, processes and governance, along with technology, to continuously identify and address security concerns for your organization and your customers.  For help building your security strategy, contact the experts at VMware Accelerate Advisory Services.

========

Gene Likins is the Americas Director of Accelerate Transformation Services for VMware and is based in Atlanta, GA.

The rules to success are changing – but are you?

By Ed Hoppitt

We live in a world where the fastest-growing transportation company owns no cars (Uber), the hottest accommodation provider owns no accommodation (Airbnb), and the world’s leading internet television network creates very little of its own content (Netflix). Take a moment to let that sink in. Each of these companies is testament to the brave new world of IT that is continuing to shape and evolve the business landscape that surrounds each of us. And the reality is that the world’s leading hypergrowth companies no longer need to own a huge inventory. They instead depend on a global platform that easily facilitates commerce for both consumers and businesses on a massive, global scale.

In order to stay relevant today, your business must be in a position to adapt, in keeping with the evolving expectations of end users. If success used to be governed by those who were best able to feed, water and maintain existing infrastructure, it is today championed by those who are least afraid of opening up new opportunities through innovation. Applications, platforms and software are all changing the business rules of success, so instigating change to adapt is no longer just part of a business plan; it’s an essential survival tool.

With this in mind, here are three essential pointers to help ensure your business is able to adapt, on demand:

1. Embrace openness

All around us, agile start-ups and individuals are leveraging the unique confluence of open platforms, crowd-funding and big data analytics. The pace of technology change means that no individual company need be responsible for doing everything themselves, which is why, more than ever, there’s a real business need for open source. Open source helps to create a broad ecosystem of technology partners, all helping make it possible to work more closely with developers to drive common standards, security and interoperability within the cloud native application market.

2. Develop scale at speed

Adrian Cockcroft, the former cloud architect of Netflix, a poster child of the software-defined business, once famously said that “scale breaks hardware, speed breaks software and speed at scale breaks everything.” What Adrian realised was that to develop speed at scale, traditional approaches simply do not work, and new methodologies are required, allowing applications to be more portable and broken down into smaller units. New approaches to security services also allow microservice architectures to be utilised.

3. Create one unified platform

Open market data architectures are increasingly being used to give developers the freedom to innovate and experiment. While this is precisely what’s required to keep pace in a world of constant change, it also means that your IT infrastructure stands at risk of growing increasingly muddled as developers become more empowered to code in their own way. This is where a single unified platform holds the key, as this is what is ultimately required to best manage the infrastructure, ensuring compliance, control, security and governance, all the while giving developers the freedom to innovate.

Ask yourself a simple question: can I handle the exponential rate of change that is happening all around me? If the answer to that is not a resolute yes, it is time you invested some thought into how you can. Uber, Airbnb and Netflix are proof that the classic barriers to entry that once inhibited small players from gaining traction in the marketplace are breaking down. Nobody said that surviving in such a disruptive landscape would be easy, but with thought and planning, it needn’t be too difficult either.

If you want to find out more about this and how to transform your business in the software-defined era, take a look at what our EMEA CTO Joe Baguley has to say in this blog post.

=======

Ed Hoppitt is a CTO Ambassador & Business Solution Architect, vExpert, for VMware EMEA and is based in the U.K.

Introducing Kanban into IT Operations

By Les Viszlai

Development teams have been using Agile software methodologies since the late 1980s and 1990s, and incremental software development methods can be traced back to the late 1950s.

A question that I am asked a lot is, “Why not run Scrum in IT Operations?”  In my experience, operations teams are trying to solve a different problem.  The nature of demand is different for software development vs the operations side of the IT house.

Basically, Software Development Teams can:

  • Focus their time
  • Share work easily
  • Have work flows that are continuous in nature
  • Generally answer to themselves

While Operations Teams are:

  • Constantly interrupted (virus outbreaks, systems break)
  • Dealing with specialized issues (one off problems)
  • Handling work demands that are not constant (SOX/PCI, patching)
  • Highly interdependent with other groups

In addition, operational problems cross skill boundaries.

What is Kanban?

Kanban is less restrictive than Scrum and has two main rules.

  1. Limit work in progress (WIP)
  2. Visualize the workflow (Value Stream Mapping)

With only two rules, Kanban is an open and flexible methodology that can be easily adapted to any environment.  As a result, IT operations projects, routine operations/ production-support work and operational processes activities are ideally suited to using a Kanban approach.

Kanban (literally signboard or billboard in Japanese) is a scheduling system for lean and just-in-time (JIT) production. Kanban was originally developed for production manufacturing by Taiichi Ohno, an industrial engineer at Toyota.  One of the main benefits of Kanban for IT operations is that it establishes an upper limit to the work in progress at any given process point in a system.   Understanding the upper limits of workloads helps avoid overloading certain skill sets or subsets of an IT operations team.  As a result, Kanban takes into account the different capabilities of IT operations teams.

Key Terms:

Bottlenecks

Let’s look at our simple example below: IT operations is broken up into various teams that each have specific skill sets and capabilities (not unlike a number of IT shops today). Each IT ops team is capable of performing a certain amount of work in a given timeframe (units per hour). Ops Team 4, in our example below, is the department bottleneck, and we can use Kanban methodology to solve this workflow problem, improve overall efficiencies and complete end-user requests sooner.

[Figure: Kanban bottleneck example]
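Since the figure’s numbers aren’t reproduced in the text, the short sketch below uses hypothetical throughput figures to show why the slowest team caps end-to-end delivery, which is exactly where a WIP limit pays off:

```python
# Hypothetical throughput (work units completed per hour) for each ops team in a
# sequential workflow; end-to-end delivery is capped by the slowest team.
throughput_per_hour = {
    "Ops Team 1": 12,
    "Ops Team 2": 10,
    "Ops Team 3": 9,
    "Ops Team 4": 4,   # the bottleneck in this example
    "Ops Team 5": 8,
}

bottleneck = min(throughput_per_hour, key=throughput_per_hour.get)
print(f"End-to-end throughput is limited to {throughput_per_hour[bottleneck]} units/hr by {bottleneck}")
```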

As we said earlier, the advantage of adopting a Kanban methodology is that it is less structured than Scrum and is easier for operations teams to adopt. Kanban principles can be applied to any process your IT operations team is already running. The key focus is to keep tasks moving along the value stream.

Flow

Flow, a key term used in Kanban, is the progressive achievement of tasks along the value stream with no stoppages, scrap, or backflows.

  • It’s continuous… any stop or reverse is considered waste.
  • It reduces cycle time – higher quality, better delivery, lower cost

[Figure: Kanban flow]

Break Out the Whiteboard

Kanban uses a board (electronic or traditional whiteboard) to organize work being done by IT operations.

A key component of this approach is breaking down work (tasks) in our process flow into Work Item types.  These Work Items can be software related, like new features, modifications or fixes to critical bugs (introduced into production).  Work Items can also be IT services related, like employee on-boarding, equipment upgrades/replacements, etc.

[Figure: Kanban board example]

The Kanban approach is intended to optimize existing processes already in place.  The basic Kanban board moves from left to right. In our example, “New Work” items are tracked as “Stories” and placed in the “Ready” column.  Resources on the team (that have the responsibility or skill set) move the work item into the first stage (column) and begin work.  Once completed, the work item is moved into the next column, labeled “Done”.  In the example above, a different resource was in place as an approver before the work item could move to the next category, and this repeats for each subsequent column until the Work Item is in production or handed off to an end-user.  The Kanban board also has a fast lane along the bottom. We call this the “silver bullet lane” and use it for Work Items of the highest priority.
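As a minimal sketch of the mechanics – column names and WIP limits below are illustrative, not taken from any particular tool – the snippet models a board that refuses to move a work item into a column that has reached its work-in-progress limit:

```python
# Minimal Kanban board sketch: each column has a WIP limit, and moving a work item
# into a full column is refused; this is how the method limits work in progress.

class KanbanBoard:
    def __init__(self, wip_limits):
        # e.g. {"Ready": 10, "In Progress": 2, "Done": None}; None means no limit.
        self.columns = {name: [] for name in wip_limits}
        self.wip_limits = wip_limits

    def add(self, item, column="Ready"):
        self._check_capacity(column)
        self.columns[column].append(item)

    def move(self, item, src, dst):
        self._check_capacity(dst)
        self.columns[src].remove(item)
        self.columns[dst].append(item)

    def _check_capacity(self, column):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(f"WIP limit reached for '{column}' ({limit})")

board = KanbanBoard({"Ready": 10, "In Progress": 2, "Done": None})
board.add("Patch SOX servers")
board.add("On-board new employee")
board.add("Replace failed switch")
board.move("Patch SOX servers", "Ready", "In Progress")
board.move("On-board new employee", "Ready", "In Progress")
# The next move would raise: only two items may be in progress at once.
# board.move("Replace failed switch", "Ready", "In Progress")
```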

How to Succeed with Kanban

In my previous experience as a CIO, the biggest challenge in adopting Kanban in IT operations was cultural.  A key factor in success is the 15-minute daily meeting commitment by all teams involved.  In addition, pet projects and low-priority items quickly surface, and some operations team members are resistant to the sudden spotlight.  (The Kanban board is visible to everyone.)

Agreement on goals is critical for a successful rollout of Kanban for operations.   I initially established the following goals:

  • Business goals
    • Improve lead time predictability
    • Optimize existing processes
      • Improve time to market
      • Control costs
  • Management goals
    • Provide transparency
    • Enable emergence of high maturity
    • Deliver higher quality
    • Simplify prioritization
  • Organizational goals
    • Improve employee satisfaction (remember ops team 4)
    • Provide slack to enable improvement

In addition, we established SLAs in order to set expectations on delivery times and defined different levels of work priority for the various teams.  This helped ensure that the team was working on the appropriate tasks.

In this example, we defined the priority of work efforts under 5 areas: Silver Bullet, Expedite, Fixed Date, Standard and Intangible.

Production issues have the highest priority and are tagged under the Silver Bullet work stream.  High-priority or high-business-benefit activities fell under Expedite.  Fixed Date described activities that had an external dependency, such as telco install dates.  Repeatable activities like VM builds or laptop set-ups were defined as Standard.  Any other request that had too many variables and undefined activities was tagged as Intangible (a lot of projects fell into this category).

I personally believe that you can’t fix what you can’t measure, but the key to adopting any new measurement process is to start simple.  We initially focused on 4 areas of measurement (a short computation sketch follows the lists below):

  1. Cycle Time: The total days/hours a work item took to move through the board, measured from the moment a Work Item moved out of the Ready column.
  2. Due Date Performance: The number of Work Items completed on or before the due date out of the total Work Items completed.
  3. Blocked Time: The amount of days/hours that work items were stalled in any column.
  4. Queue Time: How long work items sat in the Ready column.

These measurements let us know how the Operations team performed in 4 areas:

  • How long items sit before they are started by Operations.
  • Which area/resource within IT is causing blockage for things being done.
  • How good the team is at hitting due dates, and
  • The overall time it takes things to move through the system under each Work stream.
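As a sketch of how these four measurements can be derived from simple timestamps recorded on each work item (hypothetical data, not an export from any particular tool):

```python
from datetime import datetime

# Hypothetical work-item records: when each item entered Ready, when work started,
# when it finished, total hours spent blocked, and the committed due date.
items = [
    {"id": "WI-1", "ready": datetime(2016, 1, 4), "started": datetime(2016, 1, 6),
     "done": datetime(2016, 1, 9), "blocked_hours": 4, "due": datetime(2016, 1, 10)},
    {"id": "WI-2", "ready": datetime(2016, 1, 5), "started": datetime(2016, 1, 11),
     "done": datetime(2016, 1, 15), "blocked_hours": 16, "due": datetime(2016, 1, 13)},
]

cycle_times = [(i["done"] - i["started"]).days for i in items]   # Cycle Time
queue_times = [(i["started"] - i["ready"]).days for i in items]  # Queue Time
blocked = [i["blocked_hours"] for i in items]                    # Blocked Time
on_time = sum(1 for i in items if i["done"] <= i["due"])         # Due Date Performance

print(f"Average cycle time:   {sum(cycle_times) / len(cycle_times):.1f} days")
print(f"Average queue time:   {sum(queue_times) / len(queue_times):.1f} days")
print(f"Average blocked time: {sum(blocked) / len(blocked):.1f} hours")
print(f"Due date performance: {on_time}/{len(items)} items on time")
```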

Can we use Kanban with DevOps?

The focus on Work In Progress (WIP) and Value Stream Mapping makes Kanban a great option to extend into DevOps. Deploying Work Items becomes just another step in the Kanban process, and with its emphasis on optimizing the whole delivery rather than just the development process, Kanban and DevOps seem like a natural match.

As we saw, workflow is different in Kanban than in Scrum. In a Scrum model, new features and changes are defined for the next sprint. The sprint is then locked down and the work is done over the sprint duration (usually 2 weeks). Locking down the requirements in the next sprint ensures that the team has the necessary time to work without being interrupted with other “urgent” requirements.  And the feedback sessions at the end of the sprints ensure stakeholders approve the delivered work and continue to steer the project as the business changes.

With Kanban, there are no time constraints and the focus is on making sure the work keeps flowing, with no known defects, to the next step.  In addition, limits are placed on WIP, as we demonstrated earlier.  This caps the number of features or issues that can be worked on at any given time, which should allow teams to focus and deliver with higher quality.  In addition, the added benefit of workflow visibility drives urgency, keeps things moving along and highlights areas for improvement.   Remember, Kanban has its origins in manufacturing, and its key focus is on the productivity and efficiency of the existing system. With this in mind, Kanban, by design, can be extended to incorporate basic aspects of software development and deployment.

In the end, organizations that are adopting DevOps models are looking to increase efficiencies, deploy code faster and respond quicker to business demands. Both the Kanban and Scrum methodologies address different areas of DevOps to greater and lesser degrees.

The advantage of the Kanban system for IT operations is its ability to create accountability in a very visible system. The visibility of activities, via the Kanban board and its represented Work Items, aids in improving production flow and responsiveness to customer demand.  It also helps shift the team’s focus to quality improvement and teamwork through empowerment and self-monitoring activities.

=========

Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

Increase the Speed of IT with DevOps and PaaS

By Reg Lo

How do you increase the speed of IT?

In this 5-minute video whiteboard session, I describe two key strategies for making IT more agile and improving time to market.  For your convenience, there is also a transcript of the video below.

Two key strategies for increasing the speed of IT are:

  1. Deliver more applications using DevOps. Traditional waterfall methods are too slow.  Agile methodologies are an improvement but without accelerating both the infrastructure provisioning and application development, IT is still not responsive enough for the business.  Today, many organizations are experimenting with DevOps but to really move the needle, organizations must adopt DevOps at scale.
  2. Deliver new Platform-as-a-Service faster. Infrastructure-as-a-Service is the bare minimum for IT departments to remain relevant to the business.  If IT cannot provide self-service on-demand IaaS, the business will go directly to the public cloud.  To add more value to the IaaS baseline and accelerate application delivery, IT must deliver application platforms in a cloud model, i.e. self-service, on-demand, with elastic capacity.

Let’s start with the second key strategy: delivering new PaaS services faster.  PaaS services include second-generation platforms (database-as-a-service, application server-as-a-service, web server-as-a-service) as well as third-generation platforms for cloud native applications such as Hadoop-as-a-service, Docker-as-a-service or Cloud Foundry-as-a-service.

In order to launch these new PaaS services faster, IT must have a well-defined service lifecycle that it can use to quickly and repeatably create these new services.  What are the activities and what artifacts must be created in order to analyze, design, implement, operate and improve a service?

Once you have defined the service lifecycle, you can launch parallel teams to create the new service: platform-as-a-service, database-as-a-service, or X-as-a-service where X can be anything.  Each service can be requested via the self-service catalog, delivered on demand, and treated like “code” so it can be versioned with the application build.

Each service needs a single point of accountability – the Service Owner.  The service owner is responsible for the full lifecycle of the service.  They are part of the Cloud Services team, also called the Cloud Tenant Operations team.  The Cloud Services Team also manages the service catalog, provides the capability to automate provisioning, and manages the operational health of the services.

The Cloud Services Team is underpinned by the Cloud Infrastructure Team. This team combines cross-functional expertise from compute, storage and network to create the profiles or resource pools that the cloud services are built on.  The Cloud Infrastructure Team is also responsible for capacity management and security management.  The team not only manages the internal private cloud, but also the enterprise’s consumption of the public cloud, transforming IT into a service broker.

Now that we’ve described the new cloud operating model, let’s return to the first key strategy for increasing the speed of IT: deliver more applications using DevOps.  Many organizations have tasked one or two application teams to pilot DevOps practices such as continuous integration and continuous deployment.  This is a good starting point; however, in order to expand DevOps at scale so IT can provide a measurable time-to-market impact for the business, we need to make the adoption easier and more systematic.

The DevOps enablement team is a shared services team that provides consulting services to the other app dev teams; contains the expertise in automation so that other app dev teams do not need to become the expert in Puppet, Chef, or VMware CodeStream; and, this team drives a consistent approach across all app dev teams to avoid a fragmented approach to DevOps.

Remember how we talked about expanding PaaS?  With self-service, on-demand PaaS provisioning, app dev teams can build environment-as-a-service: an application blueprint that contains multiple VMs (the database server, application server, web server, etc.).  Environment-as-a-service lets app dev teams treat infrastructure like code, helping them adopt continuous deployment best practices by linking software versions to infrastructure versions.
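One simple way to picture “linking software versions to infrastructure versions” is a small, version-controlled release manifest that pins the two together, so any environment built from it is reproducible. The names and structure below are purely illustrative:

```python
# Illustrative release manifest: the application build and the environment blueprint
# it was tested against are pinned together and versioned as a single unit.
RELEASE_MANIFEST = {
    "application": {"name": "order-service", "version": "2.4.1"},
    "environment_blueprint": {
        "version": "1.7.0",
        "vms": [
            {"role": "web", "image": "web-base-1.7"},
            {"role": "app", "image": "app-base-1.7"},
            {"role": "db",  "image": "db-base-1.7"},
        ],
    },
}

def deploy(manifest, target_env):
    """Stub: a real pipeline would call the cloud platform's provisioning API here."""
    app = manifest["application"]
    for vm in manifest["environment_blueprint"]["vms"]:
        print(f"[{target_env}] provisioning {vm['role']} VM from image {vm['image']}")
    print(f"[{target_env}] deploying {app['name']} {app['version']}")

deploy(RELEASE_MANIFEST, "test")   # the same manifest later drives stage and prod
```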

By delivering more applications using DevOps and by delivering new PaaS services faster, you can increase the speed of IT.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Software Defined Networking for IT Leaders – 5 Steps to Getting Started

By Reg Lo

In Part 1 of “Software Defined Networking (SDN) for IT Leaders”, micro-segmentation was described as one of the most popular use cases for SDN.  With the increased focus on security, due to the growing number of brand-damaging cyber attacks, micro-segmentation provides a way to easily and cost-effectively firewall each application, preventing attackers from gaining easy access across your data center once they penetrate the perimeter defense.

This article describes how to get started with micro-segmentation. Micro-segmentation is a great place to start for SDN because you don’t need to make any changes to the existing physical network, i.e. it is a layer of protection that sits on top of the existing network.  You can also approach micro-segmentation incrementally, i.e. protect a few critical applications at a time and avoid boiling the ocean.  It’s a straightforward way to dip your toe into SDN.

5 Simple Steps to Get Started:

  1. Identify the top 10 critical apps. These applications may contain confidential information, may need to be regulatory compliant, or they may be mission critical to the business.
  2. Identify the location of these apps in the data center. For example, what are the VM names, and are the app servers all connected to the same virtual switch?
  3. Create a security group for each app. You can also define generic groups like “all web servers” and set up firewall rules such as no communication between web servers.
  4. Using SDN, define a firewall rule for each security group that allows any-to-any traffic. The purpose of this rule is to trigger logging of all network traffic to observe the normal patterns of activity.  At this point, we are not restricting any network communications.
  5. Inspect the logs and define the security policy. The amount of time that needs to elapse before inspecting the logs is application-dependent.  Some applications will expose all their various network connections within 24 hours.  Other applications, like financial apps, may only expose specific system integrations during end-of-quarter processing.  Once you identify the normal network traffic patterns, you can update the any-to-any firewall rule to only allow legitimate connections (a small sketch of this step follows the list).
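To make steps 3 through 5 concrete, here is a small, platform-neutral sketch – not NSX API calls – of deriving a least-privilege policy for one application’s security group from the traffic observed while the any-to-any logging rule was active:

```python
# Platform-neutral sketch of steps 3-5: group an app's servers, capture flows under
# the permissive any-to-any logging rule, then derive a least-privilege policy from
# what was actually observed. All names and flows below are hypothetical.

security_group = {"app": "payroll", "members": {"payroll-web-01", "payroll-app-01", "payroll-db-01"}}

# Flows logged while the "allow any-to-any, log everything" rule was in place.
observed_flows = [
    {"src": "payroll-web-01", "dst": "payroll-app-01", "port": 8443},
    {"src": "payroll-app-01", "dst": "payroll-db-01",  "port": 1521},
    {"src": "payroll-app-01", "dst": "payroll-db-01",  "port": 1521},
    {"src": "payroll-web-01", "dst": "payroll-db-01",  "port": 1521},  # worth reviewing: web straight to db?
]

# Keep one allow rule per unique (source, destination, port); everything else is denied.
allow_rules = sorted({(f["src"], f["dst"], f["port"]) for f in observed_flows})

print(f"Proposed policy for security group '{security_group['app']}':")
for src, dst, port in allow_rules:
    print(f"  ALLOW {src} -> {dst} on tcp/{port}")
print("  DENY  all other traffic into or out of the group")
```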

Once you have completed these 5 steps, repeat them for the next 10 most critical apps, incrementally working your way through the data center.

In Part 3 of “Software Defined Networking for IT Leaders”, we will discuss the other popular starting point or use case: automating network provisioning to improve time-to-market and reduce costs.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

From CIO to CEO: Shaping Your ITaaS Transformation Approach

By Jason A. Stevenson

In this CIO to CEO series we’ve discussed how to run, organize, and finance an ITaaS provider. In this blog, we will discuss how to approach the ITaaS transformation to gain the most value.

Barriers to Success

Transforming a traditional IT organization to be a private cloud provider and/or public cloud broker using ITaaS contradicts many basic human behaviors. To undergo a transformation we must convince ourselves to:

  • Put other people first; specifically customers paying for the service and users receiving the service.
  • Place ourselves in a service role from the very top of the IT organization to the very bottom and allow ourselves to be subservient to others.
  • Give up the notion of self-importance and recognize each and every person plays an equal role in the chain when delivering an end-to-end service and accept a fair amount of automation of what we do on a regular basis.
  • Redistribute control from individuals to processes that leverage group intelligence and center authority within service ownership and lifecycle.
  • Become truly accountable for our role in service delivery and support where all involved can clearly see what we have done or not done.
  • Approach problems and continual service improvement in a blameless environment that shines light on issues rather than covering or avoiding them.

In other words, we must be: humble, honest, relaxed, and trusting. Not the kind of words you often hear in a technology blog but nonetheless accurate. In essence, we must change. That is different from he must change, she must change, or they must change; and that is very different from it must change. In the IT industry, we tend to abstract change by focusing on “it” which is often hardware and software and then deflect change to “they” which is often users or another department.

Are You Ready for Change?

The first two questions an ITaaS CEO (CIO) needs to ask are:

  1. “Do I really want change?”
  2. “Are we really willing to change?”

Initially the answer seems obvious: “Of course I do, we have to.” In that subtle nuance of “I” and then “we” lies our challenge. As we wade in, we realize the level of resistance that’s out there and the effort it will take to overcome it. We begin to realize the long-term commitment needed. For the faint of heart, change dies then and there. But for those who take on the challenge, the journey is just beginning.

Many IT organizations are focused on service/technology design and operation and therefore do not have the necessary level of in-house expertise to guide their own organization through a complete people, process, and technology transformation. To ensure the greatest return on their investment, many organizations look to a partner that is:

  • Specifically experienced in transformation to instill confidence within their organization.
  • External to their organization and somewhat removed from internal politics to increase effectiveness.

Assuming a partnership is right for your organization for these or other reasons, the question becomes, “How do I pick the right partner?”

Personal Trainer vs. Plastic Surgeon

You’ve heard the saying “Be careful what you ask for; you might just get it.” This is very true of an ITaaS CEO looking for a partner to relieve some of the effort and commitment associated with change. Becoming a cloud provider/broker is hard work. The analogy of a personal trainer was specifically chosen because of its implication that the hard work cannot be delegated. Though counterintuitive, the more an ITaaS CEO or his/her team attempts to push the “hard work” to a partner, the more the partner becomes a plastic surgeon, “delivering” a pretty package with no guarantee that your organization won’t slip into old habits and lose all the value gained.

A personal trainer doesn’t exercise or eat for their client, but coaches them along every step of the way. A personal trainer does not usually deliver exercise equipment, assemble the equipment, or even necessarily write a manual on how to use the equipment. What a personal trainer does is show up at regular intervals to work side by side with an individual, bringing the knowledge they have honed by going through this themselves and by assisting others.

For an IT organization that is looking to become a cloud provider/broker, reduce cost, or just be more consistent or agile, a partner can help more by providing consultation than deliverables. This isn’t to say that a partner shouldn’t develop comprehensive plans or assets, but your organization will get more value out of developing plans and assets as a team, coached by your partner, to ensure a perfect fit. Though the partner may recommend technology and then implement it, this is just one piece of the puzzle. Establishing a trusted relationship between the partner and the IT organization takes regular workouts/interaction.

My Approach to “Personal Training”

We all wish we didn’t have day jobs or family responsibilities when we want to spend time at the gym but reality mandates we spend short but regular amounts of time with our personal trainer. This is also true of the IT department and their partner. Though 100% of the organization must be involved in change at some point these workshops equate to approximately 10% of the time of 10% of the organization. When I am engaged in large-scale transformation with my customers, I find the optimal cadence for hands-on consultation is every two or three weeks, with three days (usually in the middle of the week) that feature half-day workshops consisting of a 2-hour morning session and a 2-hour afternoon session. These workshops are both timed and structured using principles that build goodwill with all stakeholders involved whether they are proponents or opponents to change.

A simplified version of my approach to overcoming resistance and obtaining commitment is to:

  1. Start by sharing a common vision, how the vision has value, and how we all contribute to producing that value.
  2. Raise awareness and understanding through orientation to allow as many people as possible to reach their own conclusion instead of trying to tell them what their conclusion should be.
  3. Establish trust with all involved by involving them in an assessment. Not an assessment of technology or the environment; rather an assessment that draws people out and engages them by asking what is working and shouldn’t be changed, what isn’t working and needs to be changed, and what challenges do we foresee in making a change.
  4. Openly and interactively draft a service model that includes features, benefits, and commitments and a management model that includes roles, responsibilities, policies, processes.
  5. Automate and enforce the service model and the management model through tool requirements.
  6. Review and revise after group workshops, applying great finesse. Over time the group will progressively elaborate on these models and tools, going broader and deeper, just as a personal trainer would by adding more exercises, more weight, and more repetitions.

Avoiding Burnout

At the gym, we may overcome the initial challenge of understanding health and exercise only to discover how painful our aching muscles can be. This stresses the importance of rest and cross training. Much like a personal trainer, our IT partner must not push an organization to gorge on change or focus on one particular component of change for too long. This can sometimes seem sporadic to our team so the approach must be clearly communicated as part of the vision.

You may have noticed the use of the word “team” and wondered who that is. We can consider the team in this context to be the service’s stakeholders – everyone involved in delivery of the service, and yes, that does include customer and user representatives.

An ITaaS CEO who involves himself or herself – and who chooses not only change and discipline, but also a partner focused on promoting team change and self-discipline over short-cut deliverables – will find their organization in a much better position to transform into a cloud provider/broker using ITaaS.

===============

Jason Stevenson is a Transformation Consultant with VMware.