
IT’s Case for Service Catalogs: Why do you need one?

by Les Viszlai

I’m surprised by how often I come across IT organizations that still aren’t fully persuaded of the value of service catalogs. There is a lot of conversation about their value to the business, but I’d like to jump in and outline the advantages of service catalogs from the IT organization’s perspective.

First and foremost, service catalogs defend IT spending and budgets.

Without a service catalog, IT owns the generic technology line items that keep other parts of the business running. It’s not unusual for the software, hardware, and external IT services used by finance, marketing, sales and everyone else to get lumped together and tracked within IT’s budget. This bundling makes IT’s spend look like a massive cost center compared to other departments, which in turn suggests that IT is overspending and ripe for across-the-board cuts.

Under this generic model, finance teams looking to cut business costs by a target amount across the board start with IT. To add insult to injury, I have found that IT departments are normally understaffed compared to other departments within the business, and often underpaid based on industry benchmarks.

Moving to a full service catalog model changes that game. A successful service catalog individually defines each of the offerings IT provides to the business and maps the software, hardware, and external IT service components needed to deliver each one. The catalog should also clearly communicate the costs and service-level agreements associated with each service, giving IT the ability to show back or charge back the other departments. This isn’t as monolithic a task as it may sound.
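To make this concrete, here is a minimal sketch of what a single catalog entry might capture, in plain Python; the service name, cost figures and SLA below are hypothetical, and a real catalog tool would model far more.

    from dataclasses import dataclass, field

    @dataclass
    class CatalogEntry:
        """One IT service offering, with the data needed for show-back/charge-back."""
        name: str                # the service as the business sees it
        components: list         # software, hardware and external services behind it
        monthly_cost_usd: float  # fully loaded cost to provide the service
        sla: str                 # the service-level commitment communicated to users
        consumers: list = field(default_factory=list)  # departments to charge back

    # Hypothetical entry: the legacy CRM kept alive for one acquired business unit
    legacy_crm = CatalogEntry(
        name="Legacy CRM (post-acquisition)",
        components=["CRM licenses", "2 app servers", "vendor support contract"],
        monthly_cost_usd=18_500.00,
        sla="99.5% availability, 8x5 support",
        consumers=["Acquired Business Unit"],
    )

    # Charge-back: the consuming departments split the monthly cost of the service
    per_department = legacy_crm.monthly_cost_usd / len(legacy_crm.consumers)
    print(f"{legacy_crm.name}: ${per_department:,.2f}/month per consuming department")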

How does a service catalog make life better for IT?

Let me give you an example of how this changes behavior. A company I was involved with in the past was running a number of different CRM products as a result of company acquisitions over time. We kicked off a project that justified consolidating the various CRM tools for both cost and efficiency (see my blog on ROI for tips on building such a business case).

Consolidation projects usually transition users off of the legacy CRM tool onto the new, centralized tool. Invariably, a core group of users (usually tied to the original acquisition) resists the migration, for some good and some bad reasons. The net result is that our IT team is stuck supporting two tools. The difference with a defined service catalog is that IT can clearly charge the holdouts for both CRM product services: the corporate-sanctioned CRM tool and the legacy CRM tool can both be tied to the holdout business unit. In addition, IT is now in a position to defend the costs of these and any other service catalog items tied to the end user or department that consumes them.

Now, let’s fast forward to budget time.  The CFO asks IT to reduce costs by 15%. Where before the dialog might be framed around reducing head count on the networking team, it can now be about which services in the service catalog the company wants to go without.

From a purely selfish point of view, that insulates IT from demands that are all but impossible to satisfy. It gives the organization the ability to clearly signal when proposed cuts will damage the business. The business benefits too, because it now has a clearer window into its IT spending and that spending’s impact on, and interconnection with, its business functions.

Benefits for both IT and the Business

When we look at the broader case for service catalogs from the business perspective, that interconnection goes deeper.

On-Demand Access and Reliability

Service catalogs, of course, allow us to move to a model where end users click on a service they need, and then just click once more whenever they need it again. Even at the end of the quarter, or when there’s some looming deadline, those end users have the extra resources they need without having to wait. So they gain reliability, especially during the periods when IT resources traditionally got bottlenecked.

Speed

This model also opens the window for IT to add additional automation to any service. Say your SLA for providing an FTP-based service for file uploads and downloads is three days. With automation in place, you may be able to speed that up to hours, if not minutes. What’s more, you can offer this service as soon as the initial automation is complete and then keep fine tuning it by adding new tools, capabilities, and resources as they’re developed.
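As a sketch of what such automation might look like, here is a hypothetical provisioning script that performs in seconds the steps an engineer would otherwise spread over days; the paths and naming are illustrative, and a real version would drive your FTP daemon, directory service and storage APIs.

    import secrets
    from pathlib import Path

    FTP_ROOT = Path("./ftp-sites")  # stand-in for the real FTP root on the service host

    def provision_ftp_site(customer: str) -> dict:
        """Create an isolated FTP area and a credential in seconds rather than days.

        A real version would also register the account with the FTP daemon and
        directory service and apply a storage quota; those steps are elided here.
        """
        site_dir = FTP_ROOT / customer
        site_dir.mkdir(parents=True, exist_ok=True)
        password = secrets.token_urlsafe(12)  # one-time credential for handoff
        return {"user": f"ftp_{customer}", "password": password, "path": str(site_dir)}

    print(provision_ftp_site("marketing-q4"))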

Efficiency

Standardizing clients, logging, auditing, and security, and adding system-relevant restrictions based on user profiles, restricts on-demand access so that service requests come only from people with a valid reason to make them. This saves IT money and time.

More Growth Potential

That automation and simplification story hints at the other major win for IT here. With the much more streamlined process enabled by its service catalog, IT has more resources available and can manage growth more efficiently than it could under the old model.

Improving Business and IT Alignment

At the core, what both IT and the business are gaining when they adopt a service catalog is better alignment to needs.

When you have a common language and supporting data to communicate with the business, the conversations around budgets fundamentally change. Whereas before IT might have just presented a shopping list for approval, it can now say to the business, for example: “We’re creating ten times more FTP sites than we anticipated and they have to stay around three times longer. We’re out of storage, though. So, we need storage for the FTP site service.”

Similarly, the business can explain to IT that it needs IT’s four-day process for creating an FTP site reduced to four hours. And because IT thinks in terms of services, it can easily budget out the resources it will take to make that happen.

Both conversations are now about the substance of the services that IT offers. Framed that way, they’re likely to be much more productive for each side – helping IT make good decisions about where to place its resources and allowing the business to understand how they can positively impact the services they are getting from IT.

Breaking Down Silos

When IT sets out to deliver a new service, like the simple FTP example above, it’s very likely to involve people from a number of areas – in this case, networking, storage, and servers. In the past, everything and everyone was siloed. The network guy would work on the job, then hand it over the wall to the server gal, who would hand it over the wall to the storage guy. With the new model, that behavior is broken down, because the network, storage, and server people are collaborating and sharing IP, all aligned to this one service rather than their own specific technology silos.

Furthermore, by moving to a services model, IT is now aligning those various siloed resources in a way that enables knowledge transfer and increases overall efficiency and speed of delivery, and very likely lowers costs too.

Overall, the services model offers a route for IT to give business what it needs, but on terms that don’t compromise performance. Supported by the productive dialogue with IT that the model enables, the business can stay agile and scale up to meet demand, all while getting the most bang from its IT buck. That leads to growth, which IT, under this model, will be ready to support. Which of course is a good thing for everyone.

=======

Les Viszlai is a strategist with VMware Advisory Services and is based in Atlanta.

Quantifying the Business Impact of IT Agility

By Sean Harris

Let’s examine a story I see often in my work with customers as part of the VMware Advisory Services team.  The names and details have been changed to protect the innocent.

Jessica, the new Head of IT Service Delivery at ABC Banking Corp, was frustrated with being told that the cost of IT delivery was too expensive. She wanted to show that she had massively reduced the cost of IT delivery with the new private cloud her team had built. Not only that, but the elastic capacity and agility of the service had generated so much value to the business, in terms of revenue and market share, that the business should be investing considerably more in this solution going forward than it had done to date.

She worked with VMware’s Advisory Services to build a model that showed the true value of the private cloud solution in terms of time to market, market share (as a result of being earlier to market) and revenues over a 3-year period.

They first built a model that looked at supply and demand and showed the impact of shortage of supply on the loss of customers to competing services (so reducing demand and market penetration).  Once they understood the organisation’s ability to service demand they were able to estimate the revenue impact from lost customers, using the metric for average revenue per customer.

The Assumptions

The model did not consider the application development time of the service; it was assumed that this had already been done. There is another value model that can be built to show the benefits of time to market through agile and cloud-native application development vs. traditional application development approaches, but that was out of scope for this exercise.

For a traditional service delivery model, it was assumed that capacity would be built linearly over time. You need a certain capacity before the capability can be made available, and/or marketing decides to delay the launch of a service to prevent the customer dissatisfaction that results when the service is not yet fully available.

For the cloud (public, private or hybrid) service it was assumed that the capability can be delivered from day one.  The agile elastic capacity of the private cloud infrastructure means the service receives the infrastructure capacity that it needs on demand.

The final key assumption was that they existed in a competitive marketplace, so there were other equivalent services available to consumers. This means that if demand outstrips supply and some consumers are unable to get the service when they want it, a percentage will go to a competing service and never return.
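Under those assumptions, the core of such a model fits in a few lines of Python. This is a minimal sketch with made-up inputs (monthly demand, ramp-up period, defection rate, revenue per customer); Jessica’s actual model used ABC Banking Corp’s own figures.

    # Minimal sketch of the supply-vs-demand model; every input below is illustrative.
    MONTHS = 36                  # 3-year horizon
    DEMAND_PER_MONTH = 1_000     # new customers wanting the service each month
    REVENUE_PER_CUSTOMER = 25.0  # average monthly revenue per customer
    DEFECT_RATE = 0.5            # share of unserved customers lost to competitors for good

    def three_year_revenue(ramp_up_months: int) -> float:
        """Revenue over MONTHS when capacity is built linearly over ramp_up_months.

        ramp_up_months == 0 models the cloud case: full capacity from day one.
        """
        customers = waiting = total = 0.0
        for month in range(MONTHS):
            frac = 1.0 if ramp_up_months == 0 else min(1.0, (month + 1) / ramp_up_months)
            capacity = DEMAND_PER_MONTH * frac    # sign-ups we can process this month
            wanting = DEMAND_PER_MONTH + waiting  # new demand plus last month's retries
            served = min(wanting, capacity)
            waiting = (wanting - served) * (1 - DEFECT_RATE)  # defectors never return
            customers += served
            total += customers * REVENUE_PER_CUSTOMER
        return total

    traditional = three_year_revenue(ramp_up_months=12)
    cloud = three_year_revenue(ramp_up_months=0)
    print(f"Revenue lost to the slow ramp-up: {1 - traditional / cloud:.0%}")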

The Results

Armed with this model, Jessica could show her leadership team that with a traditional service delivery approach they would have been unable to deliver the service from day one, resulting in demand outstripping supply. This would have resulted in a loss of 10 points of final market share (down from 40% to 30%) and a loss of around 25% of three-year service revenues.

By switching to a private cloud delivery model that allowed supply to match demand from day one, they would not lose revenue to their competitors. Not only that, but she proved that a private cloud significantly reduced the TCO (total cost of ownership) of infrastructure delivery at ABC Banking Corp, and that while some competing public cloud solutions were comparable in price, they were not fully compliant with the (sometimes unique) security and audit requirements of the business and external regulators.

The lines of business and marketing were now able to clearly see the value of the new private cloud infrastructure service to the business. They quickly approved additional investment in the current private cloud, added new private cloud services, and directed multiple new projects to target Jessica’s private cloud platform.

What can we learn from this story?

Providing on-demand infrastructure absolutely increases the agility of the business, and that agility has far-reaching benefits throughout the organization, particularly for the bottom line. A well-researched business case has proven to be the linchpin of success for many transformation initiatives, making it easy for the business to see the massive return on investment it will realize by shifting to a private cloud delivery model.

Like Jessica, many IT leaders have limited or no direct experience of creating business cases that go beyond IT costs and into revenue, market share or margin impacts to the business itself. If that’s the case for you, contact your VMware representative to take advantage of the deep expertise of VMware Advisory Services.

=======

Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

Business IT is Coming Out of the Shadows

By Sean Harris

For a number of years, the perception among businesses that internal IT is incapable of delivering on their needs (particularly new and emerging requirements) has led them to bypass internal IT and source solutions directly from external vendors. This is commonly referred to as Shadow IT.

According to most analysts (see references below), the shadow is now bigger than the original object that cast it, something that only happens in the final hours before sunset (an interesting analogy), and it has given rise to the joke that the title CIO stands for “Career Is Over”. Hand in hand with this goes the rise of the Chief Digital Officer (CDO), the Chief Technology Officer (CTO) and application development teams, all of whom are becoming more embedded in the lines of business.

How and why has this happened, and what can enterprise IT and the CIO do to reverse this trend? Or is it too late?

Why is Shadow IT So Prevalent?

Ten to fifteen years ago, for the vast majority of businesses, IT and technology ran the business. By this I mean they ran the business systems, such as HR, CRM, inventory management, financial systems and logistics. There were a few notable exceptions such as mobile operators, media companies, investment banking and the likes of Thomson Reuters, for whom IT and technology was/is the business. In those cases, the revenue generating services that they provided were dependent on IT and technology.

This is even more true today. There are few, if any, businesses that do not rely on technology and IT of some sort to deliver business services to their customers, partners and channels, or to enhance the customer experience. This is part of what is often referred to as the Digital Revolution. So why haven’t internal IT departments benefited from the increasing dependence of the business on IT and technology?

There are a number of reasons for this. In no particular order these include:

  • A focus on the stability and reliability of IT systems, and the processes and procedures that support them, at the expense of agility has led to a perception that in-house IT is unable to react at the speed of business. This is despite the fact that technology has moved considerably in the direction of delivering agility combined with reliability and availability.
  • Organisational silos in IT make the organisation rigid and unable to react to the changing needs of the business.
  • A one-size-fits-all approach to IT operations. The push for standardisation and shared services to improve IT operational efficiency has led to uniform approaches to IT operations, governance, security and application development/delivery.
  • A focus on IT operational efficiency rather than on end-to-end business benefits and linking IT investment to business gain (market share, margin and revenue). This is often at the expense of user experience and business outcomes.
  • A lack of clear understanding of the IT and technology needs of the business, and of the clear articulation of those needs to everyone in IT. Without this, it’s impossible to articulate to the business the value that internal IT delivers.

This has led to the lines of business looking elsewhere to fulfill their technology and IT needs. Most CIOs and IT departments that I speak to complain of ever-increasing pressure to reduce spending on IT and cut costs. However, many analysts point to an increasing spend on technology and IT (see references below). So where is the money going?

The answer is what we call Shadow IT, though we can hardly call it “shadow” any more. Most analysts point out that Shadow IT spend is now greater than the CIO’s IT spend. It is well and truly mainstream.

5 Steps To Reverse This Trend

Step 1

It may sound blindingly obvious, but the first step is to get a clear understanding of the needs and KPIs of the business and how IT maps into that. From this it is possible to start mapping IT spend into business benefits and making the case for IT investments.

Step 2

The next step is to understand that not all applications are equal. Simon Wardley does an excellent job of explaining what I mean in his blog.  Organisations need to take a good look at their application portfolio, what categories they drop into, what their natural lifecycle is and where they are in that lifecycle. This will help to build a multi-modal IT strategy based on the needs of the business and the applications that support the business services.

Step 3

Next, we need to switch to a user experience and business outcomes approach to defining and developing IT services, rather than a features-and-functions approach with a sole focus on IT operational efficiency.

Step 4

Next, recognise that in-house is not always the best answer. Sometimes the best solution is a third-party service, so you need to build an architecture to support a service broker function. In this way IT can ensure that the business gets the best solution for its needs while ensuring that corporate governance, security, audit and compliance requirements are all met, something that is often compromised by Shadow IT.

Step 5

The final step is to build an organisation and multiple sets of operational procedures and processes (reflecting multi-modal operational requirements) to support all of the above. A key part of this transformation is a clear focus on a service-driven organisation designed around supporting business services and the needs of the business.

To be clear, this is not a case of tweaking minor parts of the IT organisation of a typical enterprise. This is a major transformation, but it is your only hope of stopping the increasing marginalisation of internal IT and the role of the CIO.

If the IT organisation is able to make this transformation, it will see a massive increase in investment, redirecting business IT spend away from third-party vendors and back to internal IT. This in turn leads to a massive change in the perception of the IT organisation’s contribution to the business.

If you need help applying these principles to your organisation, VMware’s Advisory Services can help you build a strategy and roadmap to undertake the transformation needed to move to a business-focused IT delivery organisation, maximising the value (and perceived value) of internal IT within the business.

References:

=======

Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

IT’s Payback Time – Calculating the ROI on IT Innovation – Part 1

To justify investment in new IT projects, we need to show that it pays off – here’s how to do that.

by Les Viszlai

“Innovation in IT pays for itself.” That’s something pretty much everyone in IT believes. But it’s also something that a surprising number of companies I visit aren’t in a position to prove. Why? Because most companies don’t actually know what it costs to provide their IT services and can’t quite put a figure on the benefits IT innovation projects can bring. Missing these key data points can make it very difficult to quantify the return on investment (ROI) or payback of any IT project, making it harder for IT to compete internally with other departments for scarce business funding. Many times, approved IT budgets get frozen or delayed because the business does not understand the value of the projects in question, and opportunities are missed or delayed.

In Part 1 of this blog series, let’s begin at a basic level to get you familiar with the topic of calculating ROI. We’ll dig into what you can do to calculate whether an IT project will be self-funding.

Calculating Basic ROI

Economists have used many formal models to calculate ROI (return on investment) and TCO (total cost of ownership), as well as methods for determining IT business value and payback periods. For this conversation, let’s focus on basic ROI and ask the question: if we spend X dollars on a new IT project or service in order to get a new or existing capability, will we spend less money than we are paying now for the equivalent capability or service that it will replace? If an initiative does this, then we can easily make the case for moving forward with that innovation program.

To figure this out, we’ll look at these two areas:

  • Reduced or avoided capital and/or operational costs
  • Increased/Enhanced Revenue
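Expressed as a quick sketch, basic ROI is a one-line comparison of those two areas against what the project costs; the figures below are purely illustrative.

    def basic_roi(project_cost: float, cost_savings: float, added_revenue: float) -> float:
        """Basic ROI: net gain over the period, divided by what we spent."""
        gain = cost_savings + added_revenue
        return (gain - project_cost) / project_cost

    # Hypothetical: a $400k project that avoids $350k in costs and adds $150k in revenue
    print(f"ROI: {basic_roi(400_000, 350_000, 150_000):.0%}")  # 25% -> the project self-funds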

Hard Costs and Soft Costs

Hard cost is money we have to pay. Most hard cost savings or cost avoidance opportunities are fairly easy to quantify. These savings will include the cost of hardware and software you no longer need to pay for, and savings from staff reductions and licenses you will no longer need. However, don’t forget to factor in the added cost of the new hardware and software you are installing, any one-time professional services fees you will need to deploy everything, and any new staffing needs. All of this should be relatively easy to quantify from a hard cost standpoint.

Soft cost savings or cost avoidance is more complex, because the benefits accrued are harder to put actual numbers on and it’s harder to get internal agreement on how they are determined. In addition, most companies capture this information over a 3- to 5-year period, which may compete with short-term goals.

If you are already measuring soft costs today, then you’re ahead of the game.  However, you might be surprised by how often I see organizations failing to quantify them. The main reason, typically, is that nobody wants to do the work or no one understands the benefit. Quite often, I see companies look at an IT project purely from a hard cost savings perspective and say, “We can’t figure out how much time this will save, or how much happier this will make the client, so we’re not going to use these additional metrics as a measurement for this project.”

For those of you who want to start looking at this, I suggest reviewing the benefits below to see if they are addressed in the proposed project. These project benefits are easier to quantify and can easily add up to substantial savings over time. To calculate the savings for projects designed to improve existing capabilities, look at the current delivery time and associated costs, then subtract the new projected delivery time and costs from those numbers.

Will this IT project:

  • Provide faster delivery times?
    With simplified workflows and more repeatable processes handled by machine automation, we can look forward to faster delivery times. To calculate this, we multiply the current hourly FTE cost by the average delivery duration and by the number of requests per year, and compare that to the new times and costs.
  • Reduce the cost of training?
    With a simplified system, we can reduce training times for people new to the company, employ more junior staff, and divert more senior staff to innovation activities. These savings can be quite high in organizations that have seasonal hiring needs or high staff turnover.
  • Lower regulatory and compliance costs?
    Automation and simplification activities can have a significant impact on reducing the cost of compliance, especially in regulation-intensive sectors like healthcare or finance. These savings can be calculated by tracking the FTE time currently used to manually record and document audit-related activities and comparing that to the improvements driven by the project.
  • Reduce human and machine errors?
    With simpler, more repeatable processes being done more often by a machine, we can look forward to fewer failures. To calculate this, we multiply the current hourly loss by the average downtime duration and by the number of times this happens per year.
  • Drive faster resolution times?
    Using MTTR (how long it takes, on average, to restore a system), we multiply the number of incidents by the time it takes to resolve each one and by the cost of personnel on a yearly basis.

The above is the short list of soft cost savings you can use as a starting point.  They are easier to quantify and get agreement on, and collectively they can seriously add up.
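The delivery-time, error and MTTR items above all follow the same multiply-and-compare pattern, so a single helper covers them; the rates and volumes below are hypothetical, and you would substitute your own.

    def yearly_cost(hourly_rate: float, hours_each: float, times_per_year: int) -> float:
        """The multiply-and-compare pattern behind the delivery, error and MTTR items."""
        return hourly_rate * hours_each * times_per_year

    # Faster delivery: hourly FTE cost x average duration x requests/year, before vs. after
    delivery_before = yearly_cost(85.0, 24.0, 200)  # manual: ~3 working days per request
    delivery_after = yearly_cost(85.0, 0.5, 200)    # automated: ~30 minutes per request
    print(f"Delivery savings: ${delivery_before - delivery_after:,.0f}/year")

    # Fewer errors / faster resolution: hourly loss x downtime hours x incidents/year
    downtime_before = yearly_cost(10_000.0, 4.0, 12)
    downtime_after = yearly_cost(10_000.0, 1.0, 12)
    print(f"Downtime savings: ${downtime_before - downtime_after:,.0f}/year")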

Projecting Increases in Revenue

It should also be entirely possible to figure out what the IT project will do for your revenues. Just to be clear: we’re not talking about the results of funding an entirely new product. We’re talking about the revenue enhancements that come with the cost avoidance/reductions and efficiencies tied to existing product/service lines.

Let’s take this scenario for quantifying IT Project payback: A business owner is running a web store where it takes a customer 3 minutes to buy something, but 90% of customers abandon the sale after 38 seconds. Along comes the innovative IT team, offering a project that reduces the average time-to-purchase down to 30 seconds. It’s entirely feasible, then, to figure the increased revenues that ought to accrue, all other things being equal, from the technology change and the faster buy time.
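All other things being equal, the revenue math for that scenario is a short calculation. Every figure below is hypothetical, and the conversion lift in particular is an assumed input, since that is precisely the number IT and the business need to agree on.

    # Hypothetical web-store inputs for the time-to-purchase scenario above
    VISITORS_PER_MONTH = 100_000
    AVG_ORDER_VALUE = 60.0
    BASELINE_CONVERSION = 0.10  # 90% abandon the sale today
    IMPROVED_CONVERSION = 0.13  # assumed lift from the 30-second purchase flow

    baseline = VISITORS_PER_MONTH * BASELINE_CONVERSION * AVG_ORDER_VALUE
    improved = VISITORS_PER_MONTH * IMPROVED_CONVERSION * AVG_ORDER_VALUE
    print(f"Projected revenue increase: ${(improved - baseline) * 12:,.0f}/year")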

Again, the biggest thing that I see getting in the way of these kinds of calculations is that businesses first have to commit to doing them. I don’t think it really matters which method we use (ROI, EVA, TCO, they’re all fine). We just have to get agreement to pick one.

By doing the work upfront and having those numbers available for review, you put senior leadership in a better position to approve the IT project proposal.  It also leaves very little room for debate on the savings value of the project since we have established agreement within the organization on how the ROI is determined.

Key Take-Aways

  • Don’t forget to establish how you will calculate the expected ROI as you set an innovation strategy.
  • Don’t be hesitant to dive in.  Just pick an accounting method, get agreement on it within your organization, and then start doing the math.
  • This pays off!  In all likelihood it will help you prove that IT innovation does indeed pay for itself.
  • When IT innovation can pay for itself, this leads to more innovation, and that leads to increased customer satisfaction or added brand value, which of course will have a positive direct impact on your business.

Stay tuned. In my next blog we will dig into the obstacles to watch out for that impact our ability to achieve the projected savings.

=======

Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

Why Should CIOs Invest in Network Virtualization with NSX?

By Kai Holthaus

Data-center virtualization is nearly all-encompassing by now. Most corporations have achieved a compute virtualization rate of over 80%. Only very few workloads remain on physical hardware instead of being handled by a virtual machine, and usually that’s because of very specialized requirements of the applications themselves. Storage is following closely behind.

The main holdout to the software-defined data center (SDDC) is the network infrastructure. Most networks are still being managed on the physical hardware itself, instead of virtualizing the network layer as well, and moving the management of the network into software. With NSX, VMware has the premier network virtualization software, and NSX can help you reap the benefits of a virtualized network.

But why would a CIO invest in network virtualization? This blog post will explore the main use and business cases.

Use Case 1: Security

The importance of good security has only grown in recent years. Practically every week we hear of data breaches and hackers gaining access to sensitive data in some way, shape or form. The average cost of such a data breach in the US is over $6.5M [1].

Security is complicated and costly. In a hardware-managed network environment, security must be designed in from the ground up, and changes to the security setup become relatively big projects relatively quickly.

With NSX, you can implement micro-segmentation of the network. Network administrators can easily define and implement strong firewalls on each deployed virtual machine and on the hypervisors running those virtual machines. Changes in security requirements can be implemented quickly, because they only require reconfiguring the NSX setup rather than the physical hardware. And since deploying those additional firewalls is handled in software, configuring stronger firewall rules becomes easier, and network administrators gain the ability to control the network traffic flowing between different VMs in a more granular fashion.

For an easy to understand primer on micro-segmentation, check out my colleague’s blog on Understanding Software-Defined Networking for IT Leaders.
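Conceptually, a micro-segmentation policy is a set of per-workload allow rules evaluated at the hypervisor, with everything else denied. The sketch below models that idea in plain Python purely for illustration; it is not the NSX API, and the tier names and ports are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        """One distributed-firewall rule: which group may reach which, on which port."""
        src_group: str
        dst_group: str
        port: int

    # Illustrative three-tier policy: web may reach app, app may reach db, nothing else
    ALLOW_RULES = [
        Rule("web-tier", "app-tier", 8443),
        Rule("app-tier", "db-tier", 5432),
    ]

    def permitted(src: str, dst: str, port: int) -> bool:
        """Default deny: traffic passes only if an explicit allow rule matches."""
        return Rule(src, dst, port) in ALLOW_RULES

    print(permitted("web-tier", "app-tier", 8443))  # True
    print(permitted("web-tier", "db-tier", 5432))   # False: web cannot bypass the app tier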

Use Case 2: Agility

The network is typically the bottleneck to rapidly deploying new virtual machines or new environments for virtual machines. This happens because the network is hardware-managed, which limits the network team’s ability to quickly change the network topology to accommodate new subnets or VLANs. It also means that provisioning a new VM cannot always be fully automated, because a manual reconfiguration of the network may be required.

Moving management into software allows the full automation of the VM provisioning and configuration processes. Configuring new VMs now becomes a matter of minutes, if not seconds. Moving VMs between hosts can now easily be done, because NSX can automatically reconfigure the network so that the VM keeps its network configuration even when it moves somewhere else.

Having the ability to quickly set up and tear down entire networks, and to reconfigure the network on the fly, is an essential requirement for continuous integration and deployment. Techniques like these allow DevOps-centric organizations to rapidly implement new application functionality, up to a rate of several changes to production systems within a single minute.

Use Case 3: Availability / Disaster Recovery

Failing over to a Disaster Recovery (DR) site typically involves reconfiguring the network infrastructure to point at new servers. This is very time-consuming and error-prone. Moving management of the network into software now allows network teams to leave the physical network infrastructure alone when failing over to DR resources. The network traffic will simply be routed to a different VM when the original VM becomes unavailable. Integrating NSX into the DR plans, and into other data center management software, will therefore allow network teams to reduce RTO significantly.

These are only three use cases for why virtualizing the network using NSX is a winning business proposition. There are additional use cases, like enabling hybrid cloud environments, which further improve your return on investment for NSX.

Broad adoption of compute virtualization took about 10 years. With these use cases and benefits, it should not take 10 years to reach broad adoption of network virtualization.

=======

Kai Holthaus is a Sr. Transformation Consultant with VMware Operations Transformation Services and is based in Oregon.

[1] 2015 Cost of a Data Breach Study, Ponemon Institute

 

End User Computing Modernisation – Observations of Success

By Charles Barratt

As I come to the end of what has been a long customer engagement, I find myself reflecting on what went well, not so well and REALLY well. I engaged with a client who was struggling with desktop transformation, having been shackled to Windows XP for too long, with little direction to move in apart from the tried and tested approach of fat-client refresh and System Center Configuration Manager (SCCM) application delivery; hardly transformative or strategic. Compared to what they were doing in the datacenter, the desktop environment was light-years behind, yet they had the capability of a modern datacenter to deliver a transformative digital workspace.

All too often, I witness organisations treating their desktops as second-class citizens to the datacenter, when in reality the datacenter is the servant of the endpoint. Those organisations that truly transform their end user computing (EUC) environments do so with three key principles in mind:

Engagement

All too often, IT starts with technology rather than thinking about what impact modernisation will have on users, their productivity and the financial model associated with end user IT. Gone are the days when we simply issued users devices and mobile phones and never spoke to them again until they had an issue. Our end users are far more technically savvy and operate their own networks at home; they want to be engaged, they want a say in the appropriate application of technology, and they want workplace flexibility. Happy workers tend to stay where they are.

Users deserve to be engaged, and by engaging them early in EUC transformation you create advocates who are part of the process and want to see it succeed. Don’t underestimate this vital stage. Simply put: “Stop starting with technology.”

Integration

It is no longer appropriate to operate end user computing environments in isolation from the rest of the IT organisation. Virtualisation ended that isolation when we saw the desktop move into the datacenter. As organisations start to consume different application and security models, your EUC environment needs to be close to the action for performance and operational gains.

To fully harness this change, we see organisations starting to build out a centre of excellence whose members span the many moving parts of an EUC environment: endpoint, applications, security, networks, datacenter and operations. In doing so you can be confident that there will be no overspending on technology, there will be appropriate capacity to support your requirements, and the best experience will be delivered to your end users.

Simplicity

I recently saw the lightbulb moment in my client’s eyes when discussing the simplification of application delivery; we were introducing AppVolumes. Rather than dazzle them with science, we gave a simple demonstration and had a discussion around the time-tested install process of “Next, Next, Next, Finish” into an AppStack, and made them realize that the world has moved on.

As organisations look to re-architect critical applications, they need to think about simplifying application lifecycle management (ALM) for legacy applications, a key capability of AppVolumes. It brings the ability to shorten the ALM process significantly, from request fulfillment through patching and updates, driving consistency and stability whilst minimizing the cost associated with lifecycle and change processes.

As with all technologies, you need to make sure the investment reduces the problem and the financial gain supports the change. Its architecture and minimal impact on existing processes put AppVolumes in a very desirable position to solve application delivery challenges.

Opportunities to transform the end user computing environment don’t come along very often, but their impact is profound. There has never been a more exciting, yet complicated, time to be associated with this space.

To use the words of the late Steve Jobs, “You have to start with the customer experience and work back towards the technology.”

=======

Charles Barratt is an EUC Business Solutions Strategist for VMware’s Advisory Services team and based in the UK.

Technology is not a Magic Wand for DevOps

By Theresa Stone

All too often I walk into companies that want to implement DevOps as part of their software defined data center (SDDC) journey and hear conversations filled with frustration like:

We have implemented 8 new tools, and our developers seem to be mostly happy with them, but we continue to have issues delivering anything on time! Our operations staff are frustrated, and internal customers won’t allow their applications in our virtualized environment.

OR

We bought all these new tools and implemented them; I even paid for my people to have formal training on them, but I don’t feel like we’re any better off than we were before!

Many organizations have bought into the falsehood that DevOps is just a technology play. That could not be further from the truth, so don’t fall into that trap. Successful DevOps organizations focus on a lot more than just implementing technology.

IT leaders should invest in cultural change, people, skills gaps and collaboration issues above all else to achieve DevOps success. Organizations embarking on a DevOps initiative need to take a step back and evaluate whether they are on track for success by approaching DevOps holistically. These initiatives require a transformation strategy built around clearly defined goals and a well-defined roadmap that incorporates people, process, technology and culture.

Core Pillars of DevOps Transformation

Here are some activities that are often incorporated into a transformation roadmap for DevOps, broken down across the core pillars required for success – note that one is not like the others:

PEOPLE Transformation

  • Governance frameworks are put in place to support and enable value realization from DevOps
  • Organization and operating models are modified to facilitate holistic changes to culture
  • People are invested in with necessary training and skills enhancements

PROCESS Transformation

  • Operations and development engineers participate together in the entire service life-cycle from design through to production support
  • An incident command system is in place where the development team is involved in incident resolution
  • Processes are re-engineered to be more efficient, lean and repeatable

TECHNOLOGY Transformation

  • DevOps technology improvements rely on build, test and release automation, along with orchestration across technologies and integrated toolchains, using continuous delivery capabilities
  • Infrastructure is treated as code
  • The DevOps team delivers small chunks of value to the customer more often
  • Recovery oriented computing – fail forward

CULTURE Transformation

  • A high-trust team culture demonstrating effective, seamless cross-functional collaboration, open communications, performance orientation and a learning culture (a generative organization)
  • Demonstrated Servant Leadership – enable and serve from the top down
  • Established collective ownership
  • Creativity is encouraged

(All of these culture items must be focused on and incorporated into the attributes and activities above.)

Why is Technology the Pillar Most Organizations Focus on First?

Even though new technology is important and usually required, most organizations focus only on tools and do not achieve desired outcomes.   Why does this happen over and over again?   I believe it is due to a couple of factors:

  1. IT leaders gravitate toward what comes easiest and what seems most important to them – i.e. implementing new technology
  2. Leaders in general have a hard time comprehending the importance of people, process and cultural changes and what that actually looks like; therefore, investments in seeking outside assistance from experts are not made where they may be needed the most

In today’s fast-paced, ever-changing landscape, filled with disruptive technology, successful companies must be strategic and operate efficiently to remain on top. DevOps is not easy and it does not happen overnight; however, it can produce the desired results if you take a holistic approach. There are many success stories of companies that embraced the changes needed across people, process, technology and culture and became the new or rising leaders in their industry. Are you next?

========

Theresa Stone is a Transformation Process Architect with VMware and is located in Virginia.