
Join us at EMC World for an IT Transformation Quick Chat

Are you attending EMC World next week in Las Vegas? Join us on Monday, May 2 at 2:00 pm or on Tuesday, May 3 at 10:00 am for a Quick Chat in the Veronesse 2401B conference room.

The State of IT Transformation
with Bill Irvine, Principal Strategist at VMware

Gain strategic insights from our overview of “The State of IT Transformation” report, which was recently published by EMC and VMware after an analysis of data provided by more than 660 global firms.

Speaker
Bill Irvine is a Principal Strategist within the VMware Accelerate Advisory services team. As a pragmatic strategic consultant and an ITIL® certified Service Manager, Bill has worked with some of the top Fortune 1000 companies to identify and grow business value by developing practical and “right-sized” solution strategies with actionable roadmaps.

Successful Transformations Require Clarity in Strategy and Execution

by Heman Smith

The recent “The State of IT Transformation” report by VMware and EMC is an up-to-the-minute overview of how companies across multiple industries are faring in their efforts to transform their IT organizations.

The report offers valuable insights into the pace and success of IT transformation over the last few years and outlines where companies feel they have the most to do. But two specific data points in the report – highlighting gaps between companies’ ambitions and their actual achievements – struck me in particular. Here they are:

  • 90% of companies surveyed felt it important to have a documented IT transformation strategy and road map, with executive and line of business support. Yet over 55% have nothing documented.
  • 95% of the same organizations thought it critical that an IT organization have no silos and work together to deliver business-focused services at the lowest cost. And yet less than 4% of organizations report that they currently operate like this.

Both of these are very revealing, I think, and worth digging into a little deeper.

Taking the second point first, my immediate reaction here is: Could IT actually operate with no silos? Is that ever achievable?

To answer, you have to define what “silo” means. A silo can be a technology assignment (storage, networking, compute, etc.), and that’s usually what the word means within IT. Sometimes, though, it refers to a team assignment, whether organized by expertise or by a focus on delivering a particular service, which is in its own way a type of silo.

So when companies say they wish they could operate with no silos and be able to work together, I wonder if that’s really an expression of frustration with poor collaboration and poor execution? My guess is that what they’re really saying is: “we don’t know how to get our teams, our people, to collaborate effectively and execute well.”

If I’m right, what can they do about it? How can companies improve IT team collaboration, coordination, and execution?

Being clear about clarity

The answer takes us back to the first data point, that 90% of companies feel it’s important to have a documented IT transformation strategy and road map with executive and line of business support, yet over 55% have nothing documented. A majority of companies, in other words, lack strategic clarity.

Without strategic clarity, it’s very difficult for teams to operate and execute toward an outcome that is intentional and desired. Instead they focus on the daily whirlwind that surrounds them, doing whatever the squeakiest wheels dictate. I’m reminded of what Ann Latham, president of Uncommon Clarity, has said: “Over 90% of all conflict comes from a lack of clarity.”

Clarity, in my experience, has three different layers.

Clarity of intent.

This is what you want to accomplish (the vision); why you want to do it (the purpose); and when you want it done (the end point). You can also frame this as, “We want to go from X (capability) to Y (capability) by Z (date).”

Clarity of delivery.

As you move towards realizing your vision, you learn a lot more about your situation, which brings additional clarity.

Clarity of retrospect.

We joke about 20/20 hindsight, but it’s valuable because it lets us compare our original intentions with outcomes and learn from what happened in between.

Strategic clarity is really about that first layer. If companies are not clear upfront about what they want, it’s almost impossible for their teams and employees to understand what’s wanted from them and how they can do it – or to track their progress or review it once a project is complete. Announce a change without making it clear how team members can help make it a reality and you invite fear and inertia. While waiting for clarity, people disengage and everything slows down.

I’ve seen, for example, companies say they’re going to “implement a private cloud.” That’s an aspirational statement of desire, but not one of clear intent. A clear statement of intent would be: “We’re going to use private cloud technologies to shift our current virtual environment deployment pace of 4+ weeks into production to less than 24 hours by the end of June 2016.” Frame it like that, and anyone on the team can figure out how they can or cannot contribute toward that exact, clear goal. More importantly, the odds of them collectively achieving the outcome described by the goal are massively increased.

I suspect that the overwhelming majority of companies reporting that they’d like a strategic IT transformation document and road map but don’t yet have one, have for the most part failed to decide what exact capabilities they want, and by when.

This isn’t new. For the last 30-plus years, IT has traditionally focused on technologies themselves rather than the outcomes those technologies can enable. Too many IT cultures do technology first and then “operationalize it.” But that’s fundamentally flawed and backwards, especially in today’s services-led environment.

Operating models and execution

Delivering on your strategic intent requires more than clarity in how you describe it, of course. Your operating model must also be as simple as possible and focused on delivering the specific outcomes and capabilities outlined in your plan. Otherwise, you are placing people inside a model without their knowing how to deliver the outcomes it expects, because they don’t know what they’re trying to do.

Implementing an effective operating model means articulating the results you are looking for (drawn from your strategy), then designing a model that lets employees do that as directly and rapidly as possible. That’s true no matter what you’re building – an in-house private cloud, something from outside, or a hybrid. Everyone needs to know how they can make decisions – and make them quickly – in order to deliver the results that are needed.

That brings me to my last observation. When companies have no documented strategy or road map (and remember, that’s 55% of companies surveyed in the VMware/EMC report), they are setting themselves up for what I call “execution friction.” With no clear strategy, companies focus on technology first and “operationalize” later. They end up in the weeds of less-than-successful technology projects, spending energy and resources on upgrading capacity, refining details, and maintaining basic IT resource pools, while failing to craft a technology model that supports delivering the capabilities written into their strategy. It’s effort that burns energy while slowing you down instead of pushing you forward: execution friction. Again, it’s viewing IT as a purchase, when today more than ever it should be viewed as a strategic lever to accelerate a company’s ability to deliver.

In his book on strategic execution, Ram Charan says that to understand execution, you need to keep three key points in mind:

  • Execution is a discipline, and integral to strategy
  • Execution is the major job of the business (and IT) leader
  • Execution must be a core element of an organization’s culture

Charan’s observations underline what jumps out at me in the data reported by the VMware/EMC study: that you can’t execute effectively without strategic clarity.

Wise IT leaders, then, will make and take the time necessary to get strategically clear on their intended capability outcomes as soon as possible, then document that strategy, share it, and work from it with their teams in order to achieve excellent execution. If more companies do that, we’ll see silos disappearing in a meaningful way, too, because more will be executing on their strategy with success.

=======

Heman Smith is a Strategist with VMware Accelerate Advisory Services and is based in Utah.  

Moving Beyond Infrastructure as a Service to Platform as a Service

By Brian Martinez

My VMware colleague Josh Miller recently explored how companies are extending a DevOps model into their infrastructure organizations and what can be done to speed that essential transition.

I want to talk about the step after that. Where do you go after achieving infrastructure-as-a-service?

Here’s how I think of it. Infrastructure as a service (IaaS) focuses on deploying infrastructure as quickly as possible and wrapping a service-oriented approach around it. That’s essential. But infrastructure in itself doesn’t add direct value to a business. Applications do that. In more and more industries, the first company to release the new killer app is the one that wins, or at least captures the most value.

So, while it’s essential that you deliver infrastructure quickly, its worth lies in helping deploy applications faster, build services around those applications, and speed time to market.

So you have IaaS; what’s next? Enter the concept of the platform as a service (PaaS). PaaS can be realized in a variety of ways. It might be through second-generation platforms such as database-as-a-service or middleware-as-a-service. Or it could be via third-generation platforms based on unstructured PaaS like containers (think Docker) or structured PaaS (think Pivotal Cloud Foundry).

The flexibility you have in terms of options here is significant, and your strategy should be based on the needs of your developers. Many times we see strategies built around a tool name instead of the outcomes needed from that tool. Listening to developers’ needs should help determine what the requirements are. Then build backwards from there. Often you won’t end up with the same tooling you thought you would.

All the approaches to PaaS, though, share a key feature: they are driven by both a holistic and a life-cycle view of IT. In other words, it’s dangerous to view any IT function today as either separate from any other, or as a one-time deal. Instead, we need to be thinking of everything as connected and at the same time being constantly iterated and improved.

Work from that perspective and it’s easier to navigate the often daunting array of options you have when it comes to PaaS.

Certainly, as you move along this path, it’s very possible to end up with multiple small cloud-native apps deployed on multiple platforms spread across multiple data sets – so be aware of the lifecycle.

One other note: there are so many different tools coming to market so quickly in this space that what you pick now may not be what you use in a couple of years. A lot of our customers are nervous about that. So it’s worth remembering that these tools are designed so that you can move your code, and the work that you’re doing with the code, to whatever platform is best suited to deliver it to your customer.

The bottom line: Encourage your customers to try things out so they can create DevOps learning experiences. Be responsive in enabling developers to access new tools, while setting the right boundaries on how they can use those tools (think service definition) and where they bring them to bear. Approaching PaaS with a unified culture of continuous iteration and improvement will give your developers the tools they need to move fast, without losing the control and stability essential to IT operations.

=======

Brian Martinez is a Strategist with VMware Advisory Services and is based in New York.

IT’s Payback Time – Calculating the ROI on IT Innovation – Part 1

To justify investment in new IT projects, we need to show that it pays off – here’s how to do that.

by Les Viszlai

“Innovation in IT pays for itself.” That’s something pretty much everyone in IT believes. But it’s also something that a surprising number of companies I visit aren’t in a position to prove. Why? Because most companies don’t actually know what it costs to provide their IT services and can’t quite put a figure on the benefits IT innovation projects can bring. Missing these key data points can make it very difficult to quantify the Return on Investment (ROI) or payback on any IT project, making it harder for IT to compete internally with other departments for scarce business funding. Many times, approved IT budgets get frozen or delayed because the business does not understand the value of the projects in question, and opportunities are missed.

In Part 1 of this blog series, let’s begin at a basic level in order to get you familiar with the topic of calculating ROI.  We’ll dig in to what you can do to calculate whether an IT project will be self-funding.

Calculating Basic ROI

Economists have developed many formal models for calculating ROI (Return on Investment) and TCO (Total Cost of Ownership), as well as methods for determining IT business value and payback periods. For this conversation, let’s focus on basic ROI and ask the question: if we spend X dollars on a new IT project or service in order to get a new or existing capability, will we spend less money than we are paying now for the equivalent capability or service it will replace? If an initiative does this, then we can easily make the case for moving forward with that innovation program.

To figure this out, we’ll look at these two areas (a short worked sketch follows the list):

  • Reduced or avoided capital and/or operational costs
  • Increased/Enhanced Revenue
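Before digging into each, here is a minimal sketch in Python of the basic ROI question above. All figures are invented placeholders for illustration; substitute your own current-state costs, projected run costs, and one-time project spend.

```python
def basic_roi(current_annual_cost, new_annual_cost, one_time_investment, years=3):
    """Simple payback check: does the new capability cost less, over a
    planning horizon, than the capability it replaces?

    current_annual_cost -- yearly cost of the equivalent capability today
    new_annual_cost     -- projected yearly run cost after the project
    one_time_investment -- hardware, software, and services to deploy it
    """
    net_savings = (current_annual_cost - new_annual_cost) * years - one_time_investment
    roi_percent = 100.0 * net_savings / one_time_investment
    return net_savings, roi_percent

# Invented example: a $500k project that cuts a $1.2M/year capability
# to $900k/year is self-funding within a 3-year horizon.
savings, roi = basic_roi(1_200_000, 900_000, 500_000)
print(f"3-year net savings: ${savings:,} (ROI: {roi:.0f}%)")
```

If the net savings come out positive, the project is self-funding over the chosen horizon, which is exactly the case we want to be able to present to the business.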

Hard Costs and Soft Costs

Hard cost is money we have to pay. Most hard cost savings or cost avoidance opportunities are fairly easy to quantify. These savings include the cost of hardware and software you no longer need to pay for, plus savings from staff reductions and licenses you will no longer need. However, don’t forget to factor in the added cost of the new hardware and software you are installing, any one-time professional services fees needed to deploy everything, and any new staffing needs. All of this should be relatively easy to quantify from a hard cost standpoint.

Soft cost savings or cost avoidance is more complex, because the benefits accrued are harder to put actual numbers on and it’s harder to get internal agreement on how they’re determined. In addition, most companies capture this information over a 3- to 5-year period, which may compete with short-term goals.

If you are already measuring soft costs today, then you’re ahead of the game.  However, you might be surprised by how often I see organizations failing to quantify them. The main reason, typically, is that nobody wants to do the work or no one understands the benefit. Quite often, I see companies look at an IT project purely from a hard cost savings perspective and say, “We can’t figure out how much time this will save, or how much happier this will make the client, so we’re not going to use these additional metrics as a measurement for this project.”

For those of you who want to start looking at this, I suggest reviewing the benefits below to see if they are addressed in the proposed project. These project benefits are easier to quantify and can easily add up to substantial savings over time. To calculate the savings for projects designed to improve existing capabilities, compare the current delivery time and associated costs against the new projected delivery time and costs, and take the difference (the sketch after the list turns these formulas into code).

Will this IT project:

  • Provide faster delivery times?
    With simplified workflows and more repeatable processes handled by machine automation, we can look forward to faster delivery times. To calculate this, multiply the current hourly FTE cost by the average delivery duration and by the number of requests per year, then compare that to the new times and costs.
  • Reduce the cost of training?
    A simplified system can reduce training times for people new to the company, letting us employ more junior staff and divert senior staff to innovation activities. These savings can be quite high in organizations with seasonal hiring needs or high staff turnover.
  • Lower regulatory and compliance costs?
    Automation and simplification can have a significant impact on reducing the cost of compliance, especially in regulation-intensive sectors like healthcare or finance. These savings can be calculated by tracking the current FTE time used to manually record and document audit-related activities and comparing that to the improvements driven by the project.
  • Reduce human and machine errors?
    With simpler, more repeatable processes being done more often by a machine, we can look forward to fewer failures. To calculate this, multiply the current hourly loss by the average downtime duration and by the number of times this happens per year.
  • Drive faster resolution times?
    Using MTTR (how long it takes, on average, to restore a system), multiply the number of incidents by the time it takes to resolve each one and by the cost of personnel on a yearly basis.
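As promised, here is a hedged sketch of those formulas in Python. The input numbers are hypothetical placeholders; the point is that each soft cost reduces to a handful of figures most organizations already track.

```python
def faster_delivery_savings(hourly_fte_cost, old_hours, new_hours, requests_per_year):
    """Faster delivery: (old effort - new effort) x FTE cost x yearly volume."""
    return (old_hours - new_hours) * hourly_fte_cost * requests_per_year

def fewer_errors_savings(hourly_loss, avg_downtime_hours, old_incidents, new_incidents):
    """Fewer failures: avoided incidents x average downtime x hourly loss."""
    return (old_incidents - new_incidents) * avg_downtime_hours * hourly_loss

def resolution_cost(incidents_per_year, mttr_hours, hourly_personnel_cost):
    """Yearly resolution cost: incidents x MTTR x personnel cost."""
    return incidents_per_year * mttr_hours * hourly_personnel_cost

# Hypothetical inputs for illustration only.
print(faster_delivery_savings(75, 20, 2, 80))                   # 108000 per year
print(fewer_errors_savings(5_000, 4, 12, 3))                    # 180000 per year
print(resolution_cost(12, 4, 90) - resolution_cost(12, 1, 90))  # 3240 saved per year
```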

The above is the short list of soft cost savings you can use as a starting point.  They are easier to quantify and get agreement on, and collectively they can seriously add up.

Projecting Increases in Revenue

It should also be entirely possible to figure out what the IT project change will do for your revenues. Just to be clear: we’re not talking about the results of funding an entirely new product. We’re talking about the revenue enhancements that come with the cost avoidance/reductions and efficiencies tied to existing product/service lines.

Let’s take this scenario for quantifying IT project payback: a business owner is running a web store where it takes a customer 3 minutes to buy something, but 90% of customers abandon the sale after 38 seconds. Along comes the innovative IT team, offering a project that reduces the average time-to-purchase to 30 seconds. It’s entirely feasible, then, to estimate the increased revenues that ought to accrue, all other things being equal, from the technology change and the faster buy time.
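A hedged, back-of-the-envelope version of that scenario makes the point. The traffic volume, order value, and projected conversion lift below are all invented; only the 10% completion rate comes from the scenario itself.

```python
def annual_revenue(sessions_per_year, completion_rate, avg_order_value):
    """Revenue from a web store at a given checkout completion rate."""
    return sessions_per_year * completion_rate * avg_order_value

SESSIONS = 1_000_000   # assumed yearly store sessions
ORDER_VALUE = 60       # assumed average order, in dollars

before = annual_revenue(SESSIONS, 0.10, ORDER_VALUE)  # 90% abandon today
# Assume the 30-second checkout converts a third of today's abandoners.
after = annual_revenue(SESSIONS, 0.10 + 0.90 / 3, ORDER_VALUE)

print(f"Projected revenue uplift: ${after - before:,.0f} per year")
```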

Again, the biggest thing I see getting in the way of these kinds of calculations is that businesses first have to commit to doing them. I don’t think it really matters which method we use (ROI, EVA, TCO; they’re all fine). We just have to get agreement to pick one.

By doing the work upfront and having those numbers available for review, you put senior leadership in a better position to approve the IT project proposal.  It also leaves very little room for debate on the savings value of the project since we have established agreement within the organization on how the ROI is determined.

Key Take-Aways

  • Don’t forget to establish how you will calculate the expected ROI as you set an innovation strategy.
  • Don’t be hesitant to dive in.  Just pick an accounting method, get agreement on it within your organization, and then start doing the math.
  • This pays off!  In all likelihood it will help you prove that IT innovation does indeed pay for itself.
  • When IT innovation can pay for itself, this leads to more innovation, and that leads to increased customer satisfaction or added brand value, which of course will have a positive direct impact on your business.

Stay tuned. In my next blog we will dig into the obstacles to watch out for that impact our ability to achieve the projected savings.

=======

Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

How DevOps is Changing Infrastructure and Providing Business Value

By Josh Miller

Infrastructure organizations are feeling more pressured than ever to innovate. They are being pushed by business unit leads and application teams to deliver on their part of software toolchain stacks at a faster pace. They are increasingly expected to be flexible and agile in how they operate and manage the platforms they engineer.

Despite this, many infrastructure groups still focus primarily on the delivery of physical hardware platforms rather than viewing their roles from a more holistic, ready-to-consume service perspective. In my opinion, that unwillingness to grow beyond engineering physical infrastructure, no longer a key differentiator within IT systems, is the single most limiting hurdle that infrastructure practices face today.

In this blog post I want to delve a little further into what I’m seeing in the field when it comes to changing infrastructure consumption models. I then suggest what I believe needs to happen for more companies to realize the tremendous advantages that a DevOps approach to infrastructure can bring.

Infrastructure Evolution

When I’m out performing assessments, I’m seeing companies at three stages:

  • While many customers are pushing at the boundaries of compute virtualization and often do have highly virtualized compute environments, the majority of VMware customers are still not taking advantage of the benefits that storage and network virtualization technologies offer in terms of abstracting, pooling, and creating the potential for automation of provisioning and management. In contrast, the most progressive infrastructure leaders respond to the needs of IT stakeholders by virtualizing the entirety of their physical infrastructure (compute, storage, and network). Doing so  adds a layer of software-defined abstraction across the board rather than in the singular silo of compute. Completing the final steps of the virtualization journey that began over a decade ago, then, is really the first step to becoming a DevOps-driven infrastructure practice.
  • With the foundation of virtualized compute, storage, and network platforms in place, the next step is to develop a service orientation. Infrastructure teams that are at this point package infrastructure capabilities into fully-defined services, enabling more advanced consumption models such as self-service consumption of infrastructure services (IaaS, PaaS, etc.). The services are exposed via portal-based user interfaces or via standardized APIs.
  • The final and perhaps the most important change that infrastructure leaders drive is bridging the gap between applications and operations teams that developed over the past few decades. They are creating cross-functional teams that include all of the skills required to deliver an end-to-end infrastructure service to market in a standardized, iterative fashion.

By initiating and driving these three key changes, infrastructure leads are opening the door for their practitioners to apply best practice DevOps principles. Examples include continuous integration and deployment and automated delivery of infrastructure services and capabilities.

Key Benefits of DevOps Approach

Consider an example of the very real benefits that the approach can bring: one of our clients adopted a DevOps-oriented, agile approach to development and reduced the delivery cycle for infrastructure services from months to weeks almost immediately upon completing the transition. This resulted in deploying more functionality to the newly developed cloud infrastructure platform during each four-week delivery cycle than they had delivered in the previous year’s worth of development. Application developers immediately recognized the effects of this change and the organization’s CTO significantly increased the team’s budget for the next financial year. The intent of that budget was to accelerate the deployment and adoption of private and public cloud services across IT.

Stories like this suggest where infrastructure organizations should increase focus in the future: moving towards fully embracing DevOps, not so much as a sequence of particular steps to take in a specific order, but as a guide for the organization’s culture.

DevOps is not, after all, a prescriptive framework. It’s much more a way of doing things – “a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes,” to quote Wikipedia’s pretty good definition.

Easing the Shift for Infrastructure Teams

What I’m also seeing is that DevOps isn’t an easy shift for infrastructure teams. Fear of change and a lack of exposure to DevOps concepts and practices are very hard to overcome. The territorial boundaries developed by operations over years, operating in silos, become comfort zones that are not easily penetrated. Operations employees, like anyone else, are susceptible to a general feeling of hopelessness, thanks to the fact that they are usually buried in existing work (break/fix, project enablement, etc.) and have no time to spare for true innovation.

Infrastructure teams, therefore, need assistance. However, the help they need is not usually what they think they need (for example, just another tool, application, or quick fix). What is needed are targeted initiatives that jumpstart more holistic change across all the fronts of people, process, and technology. Further, they need ongoing mentoring and coaching to usher change from the initial stages of incubation to full adoption across the entirety of their organizations.

The payoff is tremendous. Successful DevOps transformations empower infrastructure organizations to deliver each release more robustly and better aligned to customers’ needs. When those needs change, they’re not stuck in a long delivery cycle, but can instead reprioritize and deliver something of immediate value in the next cycle. By increasing speed and frequency of releases, they offer better value per release, and better time to market – directly impacting business results. That, ultimately, is the only purpose that IT should be focused on, because without measurable business results, there may well be no business for IT to support.

=======

Josh Miller is a Business Solution Strategist within VMware’s Accelerate Advisory Services practice and is based in Oklahoma City, OK. You can connect with him on LinkedIn.

Why Should CIOs Invest in Network Virtualization with NSX?

By Kai Holthaus

Data-center virtualization is nearly all-encompassing by now. Most corporations have achieved a compute virtualization rate of over 80%. Only very few workloads remain on physical hardware instead of being handled by a virtual machine, and usually that’s because of very specialized requirements of the applications themselves. Storage is following closely behind.

The main holdout to the software-defined data center (SDDC) is the network infrastructure. Most networks are still being managed on the physical hardware itself, instead of virtualizing the network layer as well and moving the management of the network into software. With NSX, VMware has the premier network virtualization software, and NSX can help you reap the benefits of a virtualized network.

But why would a CIO invest in network virtualization? This blog post will explore the main use and business cases.

Use Case 1: Security

The importance of good security has only grown in recent years. Practically every week we hear of data breaches and hackers gaining access to sensitive data in some way, shape or form. The average cost of such a data breach in the US is over $6.5M [1].

Security is complicated and costly. In a hardware-managed network environment, security must be designed in from the ground up, and changes to the security setup quickly become relatively big projects.

With NSX, you can implement micro-segmentation of the network. Network administrators can easily define and implement strong firewalls on each deployed virtual machine and on the hypervisors running those virtual machines. Changes in the requirements for the security can be implemented quickly, because it only requires the reconfiguration of the NSX setup, instead of having to reconfigure the physical hardware. Since deploying those additional firewalls is handled in software, the task to configure stronger firewall rules becomes easier, and network administrators gain the ability to control the network traffic flowing between different VMs in a more granular fashion.
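To make that concrete, here is a hypothetical sketch of what "security as a reconfiguration rather than a project" looks like. The endpoint path and payload shape below are invented placeholders, not the documented NSX API (consult the NSX API guide for the real calls); the point is that a firewall change between two VMs becomes a single authenticated API request.

```python
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # placeholder address
AUTH = ("admin", "password")                     # use real credential handling

def deny_traffic(source_vm, dest_vm, service="ANY"):
    """Add a deny rule between two VMs (illustrative payload, not the real API)."""
    rule = {
        "name": f"deny-{source_vm}-to-{dest_vm}",
        "source": source_vm,
        "destination": dest_vm,
        "service": service,
        "action": "DENY",
    }
    # Hypothetical endpoint path, for illustration only.
    resp = requests.post(f"{NSX_MANAGER}/api/firewall/rules", json=rule, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Micro-segmentation in practice: block lateral movement between tiers
# that have no business talking to each other.
deny_traffic("web-vm-01", "db-vm-07", service="SSH")
```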

For an easy to understand primer on micro-segmentation, check out my colleague’s blog on Understanding Software-Defined Networking for IT Leaders.

Use Case 2: Agility

The network is typically the bottleneck to rapidly deploying new virtual machines or new environments for virtual machines. This happens because the network is hardware-managed, which limits the ability of the network team to quickly change the network topology to accommodate new subnets or VLANs. It also means that provisioning a new VM cannot always be fully automated, because a manual reconfiguration of the network may be required.

Moving management into software allows the full automation of VM provisioning and configuration processes. Configuring new VMs now becomes a matter of minutes, if not seconds. Moving VMs between hosts can now easily be done, because NSX can automatically reconfigure the network so that the VM keeps its network configuration, even when moving somewhere else.

Having the ability to quickly set up and tear down entire networks, and to reconfigure the network on the fly, is an essential requirement for continuous deployment and integration. Techniques like this allow DevOps-centric organizations to implement new application functionality at a rate of up to several changes to production systems per minute.

Use Case 3: Availability / Disaster Recovery

Failing over to a Disaster Recovery (DR) site typically involves reconfiguring the network infrastructure to point at new servers. This is very time-consuming and error-prone. Moving management of the network into software now allows network teams to leave the physical network infrastructure alone when failing over to DR resources. The network traffic will simply be routed to a different VM when the original VM becomes unavailable. Integrating NSX into the DR plans, and into other data center management software, will therefore allow network teams to reduce RTO significantly.

These are only three use cases for why virtualizing the network using NSX is a winning business proposition. There are additional use cases, like enabling hybrid cloud environments, which further improve your return on investment for NSX.

Broad adoption of compute virtualization took about 10 years. With these use cases and benefits, it should not take 10 years to reach broad adoption of network virtualization.

=======

Kai Holthaus is a Sr. Transformation Consultant with VMware Operations Transformation Services and is based in Oregon.

[1] 2015 Cost of a Data Breach Study, Ponemon Institute

 

The new culture of IT echoes the industry’s earliest days.

In many ways, it’s back to the future – but we also need some things to change.

by Reg Lo

To get a sense of what’s happening in IT today, it can help to have a long-term perspective. Think back to the earliest days of computing, for example, and you can see that we’ve almost come full circle – a reality that underscores the major cultural shift that the business is undergoing right now.

When enterprise computers were first commercially available, companies used to buy their hardware from someone else but write their own software, simply because there wasn’t very much packaged software out there to buy.

Then by the ’90s or so, it became the norm to purchase configurable software for the business to use. That worked well for a while, as companies in many different industries deployed similar software, e.g. ERP, CRM, etc.

Today we expect software to do a lot more. Moreover, we expect software to differentiate a business from its competitors – and that’s returning IT to its roots as a software developer. After all, the ability to create digital enterprise innovation requires software development skills. And so we’ve traced a full arc from a software development perspective.

The Expanding Reach of IT

Now add another historic change that we’re seeing: IT departments used to just provide services for their business, their internal customer, but the advent of the fully digital enterprise is expanding who gets touched by IT. IT departments now need to reach all the way to the customer of the business, the consumer. When we talk about omnichannel marketing, for example, we’re expecting IT to help maintain connections with consumers over web, phone, chat, social media, and more. The same goes for the Internet of Things, where it’s not so much the consumer as a remote device or sensor out in the field somewhere that IT needs to be worried about.

Both broad trends have changed the scope of IT and both are making IT much more visible. More importantly, they mean that IT is now driving revenue directly. If it’s successful, IT makes the business highly successful. But if IT fails, it will directly impede the business revenue flow.

Becoming Agile Innovators

That brings me to my last point. Here’s what hasn’t changed from the past: for the last 30 years or so, the mantra in IT cultures has been “Bigger is Better.” Software Development and Release processes got increasingly bureaucratic and terribly slow (think of those epic waits for the next ERP release). The standard mind-set was to package multiple changes into a single release that they’d roll out every six months or so, if they were lucky.

But that culture is also something that we need to be moving away from, precisely because the relationship between IT and the business it serves has changed. Businesses used to perceive IT as just a cost center that should be squeezed for more and more savings. But when IT touches the end-customer experience directly, business needs IT to be both cheaper and faster – to support and enable the kinds of innovation that will keep the business one step ahead.

We now have the technologies (cloud computing, cloud-native applications) and methodologies (agile development, DevOps) to make smaller, much more frequent, incremental releases that are simpler, less likely to be faulty, and easy to roll back if anything goes wrong.

What we’re still lacking – which I still see when I’m out in the field – is the widespread cultural change required for it to happen. Most importantly, that means adopting what I would call a DevOps mindset across the entire IT organization. At its essence, this mindset views the entire work of IT through a software lens. It makes everything, including infrastructure, code.

For IT long-timers, in many ways that’s simply returning software to the centrality it once enjoyed. But if it takes us back to the early days of computing, it also points us to what we must change if we’re to succeed in a future that’s entirely new.

=======

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

End User Computing Modernisation – Observations of Success

By Charles Barratt

As I come to the end of what has been a long customer engagement, I find myself reflecting on what went well, not so well and REALLY well. I engaged with a client who was struggling with desktop transformation, having been shackled to Windows XP for too long, with little direction to move in apart from the tried and tested approach of fat client refresh and System Center Configuration Manager (SCCM) application delivery; hardly transformative or strategic. Compared to what they were doing in the datacenter, the desktop environment was light-years behind, yet they had the capability of a modern datacenter to deliver a transformative digital workspace.

All too often, I witness organisations treating the desktop as a second-class citizen to the datacenter, when in reality the datacenter is the servant to the endpoint. Those organisations that truly transform their end user computing (EUC) environments do so with three key principles in mind:

Engagement

All too often, IT starts with technology rather than thinking about what impact modernisation will have on users, their productivity and the financial model associated with end user IT. Gone are the days when we simply issued users with devices and mobile phones and never spoke to them again until they had an issue. Our end users are far more technically savvy and operate their own networks at home; they want to be engaged, they want a say on the appropriate application of technology and they want workplace flexibility. Happy workers tend to stay where they are.

Users deserve to be engaged, and by engaging them early in EUC transformation you create advocates who are part of the process and want to see it succeed. Don’t underestimate this vital stage. Simply put: “Stop starting with technology.”

Integration

It is no longer appropriate to operate end user computing environments in isolation from the rest of the IT organisation. Virtualisation ended that isolation when we saw the desktop move into the datacenter. As organisations start to consume different application and security models, your EUC environment needs to be close to the action for performance and operational gains.

To fully harness this change, we see organisations starting to build out a centre of excellence whose members span the many moving parts of an EUC environment: endpoint, applications, security, networks, datacenter and operations. In doing so you can be confident that there will be no overspending on technology, there will be appropriate capacity to support your requirements and the best experience will be delivered to your end users.

Simplicity

I recently saw the lightbulb moment in my client’s eyes when discussing the simplification of application delivery; we were introducing AppVolumes. Rather than dazzle them with science, we gave a simple demonstration, had a discussion around the time-tested install process of “Next, Next, Next, Finish” into an AppStack, and made them realize that the world has moved on.

As organisations look to re-architect critical applications, they need to think about simplifying application lifecycle management (ALM) for legacy applications, a key capability of AppVolumes. It brings the ability to shorten the ALM process significantly, from request fulfillment through patching and updates, driving consistency and stability whilst minimizing the cost associated with lifecycle and change processes.

As with all technologies, you need to make sure the investment reduces the problem and the financial gain supports the change. The architecture and minimal impact on existing processes places AppVolumes in a very desirable place to solve application delivery challenges.

Opportunities to transform the end user computing environment don’t come along very often, but their impact is profound. There has never been a more exciting yet complicated time to be involved in this space.

To use the words of the late Steve Jobs, “You have to start with the customer experience and work back towards the technology.”

=======

Charles Barratt is an EUC Business Solutions Strategist for VMware’s Advisory Services team and based in the UK.

IT Innovation has a Major Impact on Attracting – and Retaining – Talented Staff

by Mark Sterner

When CIOs adopt leading technologies like self-service provisioning, software-defined networks, cloud-native applications, and mobile solutions, they’re typically motivated by the significant business efficiencies and agility that these new technologies can deliver.

Those are essential considerations, of course, but I’m going to explore another, often overlooked, reason to upgrade to IT’s cutting edge: the technology you deploy for internal use plays a major role in attracting – and retaining – the talented staff who will transform your business into a digital enterprise.

You’re only as good as your talent, after all, and anything that frustrates your employees – especially the best ones – or otherwise drives them to think about jumping ship is a problem you need to deal with.

Attracting Millennials

This problem is becoming more urgent as Millennials join the workforce. Young people arriving today have expectations of mobility, interoperability, ease of use, rapid technology upgrades, the consumerization of IT and more, based on their experience with technology since grade school. With next year’s new hires, those expectations will only increase.

This has an even greater impact on companies that are making serious investments in customer-facing technology. I’ve heard young employees at a well-known IT enterprise, for example, say, “I can’t believe I work for a tech company and I can’t get everything on my phone, and that the applications are so slow and so hard to maneuver.”

I’ll write more about how Millennials are changing IT in my next post, but here I’ll just add that young people who arrive at companies with outdated internal IT are going to be looking to leave as soon as possible, bringing all the associated costs and delays that come with having to replace people who were performing well.

Retaining Top Talent

Of course, attracting and retaining talent isn’t just about your newest hires. I’ve also seen highly experienced employees motivated to move because they’re asked to work with outdated systems, processes, and tools. These employees know how much better they could be performing with better technologies at their disposal and are simply frustrated at dealing with antiquated infrastructure, manual processes, paper-based systems, and having to constantly put out fires instead of focusing on innovation.

This was made even more apparent to me when I worked with a large pharma company that spun off one of their divisions with a new greenfield approach to internal IT (but no real difference in their customer-facing business). They advertised jobs in the spin-off internally, and a large number of their best people jumped at the chance, leaving the parent company badly lacking in experience.

Ambitious IT professionals can be even harder to keep.  Those individuals take it on themselves to keep learning and pick up the very latest skills. If their company isn’t supporting their personal development because it has no ambition to deploy those technologies, employees will take that as a signal that they should be working elsewhere.

There’s one further cost to holding back on new technologies that future-oriented employees – of whatever age – are keen to use. If you finally spend money on new technology after the best of them have left, you’ll be short of the skills needed to make full use of the capabilities you’ve invested in. And in the age of the fully digital enterprise, when IT is no longer simply a support function, you’ll be failing to get maximum benefit from an essential competitive differentiator.

How Do You Stay Ahead? (Spoiler: It’s not all about technology!)

Clearly, this adds weight to any efforts you have underway to advance your internal systems. It bolsters the case for investing in flexible, virtualized work environments that are mobile-friendly and device-agnostic. As you free employees to work from anywhere, on any device, and on modern systems that are fast, adaptable, and efficient, you will set yourself apart in the marketplace for talent. Existing employees will view your company more positively – meaning they’ll be far less likely to look elsewhere and you’ll gain a reputation among talented, forward-looking people in your sector as the place to work.

But investing in internal IT for talent retention isn’t just about the technology. People and process are crucial considerations, too.

Your best staff will know about and want to use the latest solutions, but they can’t be expected to make maximum use of them without training and support. So when you do update your IT, you need to be sure that employees are supported in the transition and that your organization is prepared to shift its operating model to fully exploit the systems you are putting in place. And you need to be ready to get help to do that if needed.

In addition, empower your tech staff to help guide the technology roadmap you create. It helps build the sense of ownership that will keep them attached to the organization, but it’s also smart management. These people have experience, knowledge of the business, and proven ambition. You’re always going to build a better system if you include them in your planning than you would if you present them with a plan that’s already a done deal.

=======

Mark Sterner brings over 14 years of experience in IT Service Management. He has worked in both the process development and ITIL implementation areas for large IT organizations. Mark is currently a Transformation Consultant at VMware, Inc.

Transforming IT into a Cloud Service Provider

By Reg Lo

Until recently, IT departments thought that all they needed to do was to provide a self-service portal to app dev to provision VMs with Linux or Windows, and they would have a private cloud that was comparable to the public cloud.

Today, in order for IT to become a cloud service provider, IT must not only embrace the public cloud in a service broker model; IT needs to provide a broader range of cloud services. This 5-minute webinar describes the future IT operating model as IT departments transform into cloud service providers.

Many IT organizations started their cloud journey by creating a new, separate cloud team to implement a greenfield private cloud. Automation and proactive monitoring using a Cloud Management Platform were key to the success of their private cloud. By utilizing VMware’s vRealize Cloud Management Platform, IT could easily expand into the hybrid cloud, provisioning workloads to vCloud Air or other public clouds from a single interface – effectively creating “one cloud” for the business to consume and “one cloud” for IT to manage.

However, the folks managing the brownfield weren’t standing still. They too wanted to improve the service they were providing the business, and they too wanted to become more efficient. So they also invested in automation. Without a coherent strategy, both brownfield and greenfield took their own separate forks down the automation path, confusing the business about which services they should be consuming. We started this journey by creating a separate cloud team; however, it may be time to re-think the boundaries of the private cloud and bring greenfield and brownfield together to provide consistency in the way we approach automation.

In order to be immediately productive, app dev teams are looking for more than infrastructure-as-a-service. They want platform-as-a-service. These might be second-generation platforms such as database-as-a-service (Oracle, MSSQL, MySQL, etc.) or middleware-as-a-service (such as webMethods). Or they need third-generation platforms based on unstructured PaaS like containers or structured PaaS like Cloud Foundry. The terms first, second and third generation map to the mainframe (1st generation), distributed computing (2nd generation), and cloud-native applications (3rd generation).

Multiple cloud services can be bundled together to create environment-as-a-service. For example, LAMP stacks: Linux, Apache, MySQL and PHP (or Python). These multi-VM application blueprints let entire environments be provisioned at the click of a button.

A lot of emphasis has been placed on accessing these cloud services through a self-service portal. However, DevOps best practice is moving towards infrastructure as code. In order to support developer-defined infrastructure, IT organizations must also provide an API to their cloud. Infrastructure-as-code lets you version the infrastructure scripts together with the application source code, ultimately enabling the same deployment process in every environment (dev, test, stage and prod) – improving deployment success rates.
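As a minimal sketch of the infrastructure-as-code idea (the provisioning client and its methods here are hypothetical stand-ins for whatever API your cloud platform exposes), an environment definition like this lives in the application repository, and the same reviewed code builds every environment:

```python
from dataclasses import dataclass

@dataclass
class Environment:
    """Desired state for one environment, versioned with the app source."""
    name: str
    web_nodes: int
    db_tier: str

ENVIRONMENTS = {
    "dev":   Environment("dev",   web_nodes=1, db_tier="small"),
    "stage": Environment("stage", web_nodes=2, db_tier="medium"),
    "prod":  Environment("prod",  web_nodes=4, db_tier="large"),
}

def provision(env, cloud_api):
    """Apply the desired state through the cloud's API (hypothetical client)."""
    for i in range(env.web_nodes):
        cloud_api.ensure_vm(name=f"{env.name}-web-{i}", template="web-server")
    cloud_api.ensure_database(name=f"{env.name}-db", tier=env.db_tier)

# The same provision() runs at every pipeline stage, so dev, test, stage,
# and prod are built by identical code rather than by hand:
#   provision(ENVIRONMENTS["dev"], api)
#   provision(ENVIRONMENTS["prod"], api)
```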

Many companies are piloting DevOps with one or two application pipelines.  However, in order to scale, DevOps best practices must be shared across multiple app dev teams.  App dev teams are typically not familiar with architecting infrastructure or the tools that automate infrastructure provisioning.  Hence, a DevOps enablement team is useful for educating the app dev teams on DevOps best practices and providing the DevOps automation expertise.  This team can also provide feedback to the cloud team on where to expand cloud services.

This IT operating model addresses Gartner’s bimodal IT approach.  Mode 1 is traditional, sequential and used for systems of record.  Mode 2 is agile, non-linear, and used for systems of engagement.  Mode 1 is characterized by long cycle times measured in months whereas mode 2 has shorter cycle times measured in days and weeks.

It is important to note that the business needs both modes to exist. It’s not one or the other, just as the business needs both interfaces to the cloud: self-service portal and API.

What does this mean for you? IT leaders must be able to articulate a clear picture of a future state that encompasses both mode 1 and mode 2, and that leverages both a self-service portal and an API to the organization’s cloud services. IT leaders need a roadmap to transform their organizations into cloud service providers that traverse the hybrid cloud. The biggest challenge in the transformation is changing people (the way they think, the culture) and processes (the way they work). VMware can not only help you with the technology; VMware’s Accelerate™ Advisory Services can help you address the people and process transformation.

 


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.