
eBook – Agents of Change: CIO Priorities for 2016

Today’s most successful enterprises are transforming themselves, upending business models, disrupting markets. What’s more, they’re turning on a dime – and the pace at which they’re doing so is only increasing. For these winners, that agility translates into increased customer satisfaction, better margins, and higher sales. For their IT functions – responsible for so much of this new flexibility and speed – transformation drives a new relationship with the business. IT is now a fundamental and ongoing contributor to accelerating business value.

As CIOs look to transform their own IT organizations in the year ahead, their greatest challenge lies in delivering that change in an environment that is itself fast moving.

In 2016 and beyond, IT can only expect increased pressure to deploy continuous innovation to capture both business value and further efficiencies.

Our experts see this daily as they work with customers around the world, gaining insight into the challenges that companies face and the strategies that are working on the transformation front-lines.

This eBook explores three main trends that we believe CIOs need to be aware of as they consider embarking upon, or continuing, transformations of their own:

  • Companies are looking to scale DevOps beyond individual application pipelines and pilots.
  • IT needs to be able to work at multiple speeds. It’s all about being multi-modal.
  • Security offers a challenge, but a major opportunity, too.

Download the free eBook, written by our Advisory Services and Operations Transformation Services experts, to learn whether these innovations in the way we manage, deliver and secure IT should be a part of your strategy.

A Foundation for DevOps: Establishing Continuous Delivery

By Peg Eaton

As Practice Director for VMware’s DevOps & Cloud Native Apps Professional Services team, I lead a specialized team of developers with decades of experience helping customers reach their DevOps goals.

In our experience, many organizations have accepted that they need to apply DevOps best practices to accelerate application delivery, and they are in the early stages of developing their strategy for moving forward.

They’ve heard terms like continuous integration and continuous delivery, but they have difficulty mapping out their optimal DevOps tool chain, faced with a large and growing number of technology options, a lack of standardization, and, very often, widely differing opinions within the organization on what’s right for them.

In this short video, I walk through our own best-practice example of a Continuous Delivery Pipeline, discuss the software stacks that comprise the tool chain, and pass on DevOps best practice advice along the way.

The Continuous Delivery Pipeline

The Continuous Delivery Pipeline contains inter-related software stacks that support the pipeline stages and activities, enabling repeatable and reliable software delivery by application teams.

You will notice, as we go through the stacks, that you have options. You might choose VMware solutions for some of these areas, or you might have made other investments. That’s OK. We believe in our products, but we’re committed to helping you create a seamless integration between all these moving pieces, regardless of which software solutions you choose.

Let’s start with the Planning Stack. This software stack supports agile development throughout the delivery pipeline — planning and tracking software releases, creating user stories, sprint planning, backlog management, and issue tracking. Typical tools used in planning are JIRA, Redmine, Trello, GitHub Pages, and MS TFS.

Next in the toolchain is the Coding Stack. The software in this stack is used for the coding effort against the user stories: integrated development environments, editors, debugging tools, and unit-test tools. Geany, Atom, Eclipse, MS TFS, and vRealize Orchestrator are commonly used tools. Other software in this stack supports application development itself: pre-configured developer workstations and the environments for application unit tests (essentially a set of VM blueprints).

The Commit Stack supports version control and many best practices including: daily check-ins, committing assets early and often, automated style-checking, and code reviews. Typical tools include Git, GitHub, MS TFS, and Gerrit.

Developers need fast, useful feedback once code is committed; continuous integration tools support automated software builds and automated smoke tests. The Continuous Integration Stack uses tools like Jenkins, Gerrit Triggers, and vRealize Automation.

The Testing Stack is used in managing testing throughout the software development lifecycle. The key to high productivity and quality code is developer-driven testing in production-like environments for each phase of the lifecycle — it is more effective to expose and remediate issues earlier in the lifecycle. For this stack, you can use tools such as Selenium, REST-assured, soapUI, SonarQube, and vRealize Automation.

The Artifact Management Stack supports the management of application artifacts, including binaries, and provides package version control and dependency management of the artifacts. Tools for artifact management include JFrog Artifactory and vRealize CodeStream.

The next stack focuses on Continuous Deployment and provides support for consistent deployments to every environment – UAT, Staging, and Production. You can use tools like Jenkins, Ansible, vRealize Automation, vRealize Orchestrator, and vRealize CodeStream.

Next in the tool chain is the Configuration Management Stack, which supports application and environment configuration and is often tightly integrated with the deployment stack. Typical tools in this stack are Ansible, Chef, Puppet, and Chocolatey.

The Control Stack comes next, and is used for application and infrastructure behavior monitoring — alerting, dashboards, logging, capacity management throughout the release process and into production. You can use the vRealize Suite at this stage, including vRealize Operations and vRealize Log Insight, as well as Nagios.

Last but not least is the Feedback Stack, which provides automated feedback to the right people at the right time during all phases — alerts, auditing, test results, build results, deployment — touching all areas of the pipeline. GitHub Issues and Slack can be effective tools in this stage.
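
To make the flow concrete, here is a minimal sketch, in Python, of how a pipeline definition can chain these stacks together with fail-fast feedback. It is purely illustrative: the stage names and commands are placeholders of my own, not the actual integrations we deliver.

    import subprocess

    # Each stage below maps to one of the stacks described above; the commands
    # are placeholders for whatever your build tooling actually invokes.
    STAGES = [
        ("commit",  ["git", "rev-parse", "HEAD"]),    # Commit Stack: identify the change
        ("build",   ["make", "build"]),               # Continuous Integration Stack: automated build
        ("smoke",   ["make", "smoke-test"]),          # Continuous Integration Stack: smoke tests
        ("test",    ["make", "integration-test"]),    # Testing Stack
        ("package", ["make", "publish-artifact"]),    # Artifact Management Stack
        ("deploy",  ["make", "deploy-staging"]),      # Continuous Deployment Stack
    ]

    def run_pipeline():
        for name, cmd in STAGES:
            result = subprocess.run(cmd, capture_output=True, text=True)
            print(f"[{name}] exit={result.returncode}")   # Feedback Stack: every stage reports back
            if result.returncode != 0:
                print(result.stderr)    # fast, useful feedback to the developer
                return False            # fail fast: never promote a broken build
        return True

    if __name__ == "__main__":
        run_pipeline()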

There are a LOT of moving pieces – it is COMPLEX – and I am sure you are concerned about how in the world to make them all play nicely together! Our capability is in getting all of these moving pieces to work together for a continuous, well-monitored flow.

Shift Left

So how does this tool chain help your Dev and Ops teams accelerate application delivery?
Well, application development teams today have moved into the traditional infrastructure and operations space in order to accelerate application delivery.

Application teams require fast feedback and optimal execution flow during the development life-cycle, and for this they need consistent, production-like environments everywhere and on demand; continuous integration and deployment; and automated testing throughout the application release lifecycle.

As a result, the dev teams are deploying and managing tools, and automating tasks throughout the life-cycle to support these requirements.

Infrastructure & Operations teams that are concerned with governance and stability can participate in this “shift-left” by working in a high-trust culture where Dev & Ops collaborate to build the optimal execution flow.

Treating infrastructure like code benefits both Dev and Operations, enabling you to version infrastructure definitions with code and use consistent, production-like environments everywhere.
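
As a minimal sketch of what that looks like in practice – assuming a made-up blueprint format, not a specific product schema – an environment definition can live in the same Git repository as the application code and be reviewed, diffed, and versioned like any other change:

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class VMBlueprint:
        """An environment definition kept in version control (illustrative fields)."""
        name: str
        cpus: int
        memory_gb: int
        image: str      # the template every environment is built from
        network: str    # logical network segment

    # One definition, reused everywhere: dev, test, staging, and production
    # are all built from the same reviewed, versioned source.
    WEB_TIER = VMBlueprint(name="web", cpus=2, memory_gb=8,
                           image="ubuntu-hardened", network="app-tier")

    if __name__ == "__main__":
        # Render to JSON for whatever provisioning tool consumes it; the Git
        # diff of this file becomes the change record for the environment.
        print(json.dumps(asdict(WEB_TIER), indent=2))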

A shift-left takes the burden of tool-chain management off the development teams and provides the infrastructure & operations team the visibility, governance, and control to support the business.

VMware DevOps Foundation Solution

The VMware DevOps Foundation Solution (currently available in North America only) enables our customers to implement and operate a Continuous Delivery tool chain with prescriptive stacks of best of breed tools based on common industry patterns and your own environment’s requirements, whether your apps are developed using C#, Java, Python, Go or other programming languages.

The fact that our stacks are pre-integrated means they are ready to use much more quickly, providing a faster time to value in delivering your applications.

Worried that you’re already using a tool that you didn’t see in this presentation? Don’t worry: these stacks are flexible, and we have worked with almost every tool on the landscape.

Creating the DevOps tool chain that will work for you can be complicated, but we’re here to help.

To get started, contact your VMware rep to set up a 1-hour meeting with my team to discuss your goals for Continuous Delivery. We will then schedule a half-day workshop on continuous delivery and DevOps best practices with your development and operations leaders to kick the project off on the right foot.

We look forward to working with you to make your dreams for accelerated application delivery a reality!

=======

Peg Eaton is a practice director for VMware’s DevOps and Cloud Native Apps Services Organization and is based in Massachusetts. 

Join us at EMC World for an IT Transformation Quick Chat

Are you attending EMC World next week in Las Vegas? Join us on Monday, May 2 at 2:00 pm or on Tuesday, May 3 at 10:00 am for a Quick Chat in the Veronesse 2401B conference room.

The State of IT Transformation
with Bill Irvine, Principal Strategist at VMware

Gain strategic insights from our overview of “The State of IT Transformation” report, which was recently published by EMC and VMware after an analysis of data provided by more than 660 global firms.

Speaker
Bill Irvine is a Principal Strategist within the VMware Accelerate Advisory services team. As a pragmatic strategic consultant and an ITIL® certified Service Manager, Bill has worked with some of the top Fortune 1000 companies to identify and grow business value by developing practical and “right-sized” solution strategies with actionable roadmaps.

Successful Transformations Require Clarity in Strategy and Execution

by Heman Smith

The recent “The State of IT Transformation” report by VMware and EMC is an up-to-the-minute overview of how companies across multiple industries are faring in their efforts to transform their IT organizations.

The report offers valuable insights into the pace and success of IT transformation over the last few years and outlines where companies feel they have the most to do. But two specific data points in the report – highlighting gaps between companies’ ambitions and their actual achievements – struck me in particular. Here they are:

  • 90% of companies surveyed felt it important to have a documented IT transformation strategy and road map, with executive and line of business support. Yet over 55% have nothing documented.
  • 95% of the same organizations thought it critical that an IT organization has no silos and works together to deliver business-focused services at the lowest cost. And yet less than 4% of organizations report that they currently operate like this.

Both of these are very revealing, I think, and worth digging into a little deeper.

Taking the second point first, my immediate reaction here is: Could IT actually operate with no silos? Is that ever achievable?

To answer, you have to define what “silo” means. A silo can be a technology assignment (storage, networking, compute, etc.), and that’s usually what’s meant within IT by the word. Sometimes, though, it’s a team assignment, organized by expertise or by focus on delivering a particular service, which is in its own way a type of silo.

So when companies say they wish they could operate with no silos and be able to work together, I wonder if that’s really an expression of frustration with poor collaboration and poor execution. My guess is that what they’re really saying is: “we don’t know how to get our teams, our people, to collaborate effectively and execute well.”

If I’m right, what can they do about it? How can companies improve IT team collaboration, coordination, and execution?

Being clear about clarity

The answer takes us back to the first data point, that 90% of companies feel it’s important to have a documented IT transformation strategy and road map with executive and line of business support, yet over 55% have nothing documented. A majority of companies, in other words, lack strategic clarity.

Without strategic clarity, it’s very difficult for teams to operate and execute toward an outcome that is intentional and desired. Instead they focus on the daily whirlwind that surrounds them, doing whatever the squeakiest wheels dictate. I’m reminded of what Ann Latham, president of Uncommon Clarity, has said: “Over 90% of all conflict comes from a lack of clarity.”

Clarity, in my experience, has three different layers.

Clarity of intent.

This is what you want to accomplish (the vision); why you want to do it (the purpose); and when you want it done (the end point). You can also frame this as, “We want to go from X (capability) to Y (capability) by Z (date).”

Clarity of delivery.

As you move towards realizing your vision, you learn a lot more about your situation, which brings additional clarity.

Clarity of retrospect.

We joke about 20/20 hindsight, but it’s valuable because it lets us compare our original intentions with outcomes and learn from what happened in between.

Strategic clarity is really about that first layer. If companies are not clear upfront about what they want, it’s almost impossible for their teams and employees to understand what’s wanted from them and how they can do it – or to track their progress or review it once a project is complete. Announce a change without making it clear how team members can help make it a reality and you invite fear and inertia. While waiting for clarity, people disengage and everything slows down.

I’ve seen, for example, companies say they’re going to “implement a private cloud.” That’s an aspirational statement of desire, but not one of clear intent. A clear statement of intent would be: “We’re going to use private cloud technologies to cut our current virtual environment deployment pace into production from 4+ weeks to less than 24 hours by the end of June 2016.” Frame it like that, and any person on the team can figure out how they can or cannot contribute toward that exact, clear goal. More importantly, the odds of them collectively achieving the outcome described by the goal are massively increased.

I suspect that the overwhelming majority of companies reporting that they’d like a strategic IT transformation document and road map but don’t yet have one, have for the most part failed to decide what exact capabilities they want, and by when.

This isn’t new. For the last 30 plus years, IT has traditionally focused on technologies themselves rather than the outcomes that those technologies can enable. Too many IT cultures do technology first and then “operationalize it.” But that’s fundamentally flawed and backwards, especially in today’s services-led environment.

Operating models and execution

Delivering on your strategic intent requires more than clarity in how you describe it, of course. Your operating model must also be as simple and as focused as possible on delivering the specific outcomes and capabilities outlined in your plan. Otherwise, you are placing people inside the model without knowing how they can deliver the outcomes it expects, because they don’t know what they’re trying to do.

Implementing an effective operating model means articulating the results you are looking for (drawn from your strategy), then designing a model that lets employees do that as directly and rapidly as possible. That’s true no matter what you’re building – an in-house private cloud, something from outside, or a hybrid. Everyone needs to know how they can make decisions – and make them quickly – in order to deliver the results that are needed.

That brings me to my last observation. When companies have no documented strategy or road map (and remember, that’s over 55% of companies surveyed in the VMware/EMC report), they are setting themselves up for what I call “execution friction.” With no clear strategy, companies focus on technology first and “operationalize” later. They end up in the weeds of less-than-successful technology projects, spending energy and resources on upgrading capacity, improving details, and maintaining basic IT pools, while failing to craft a technology model that supports delivering the capabilities written into their strategic model. It’s effort that uses up power while slowing you down instead of pushing you forward: execution friction. Again, it’s viewing IT as a purchase, when today more than ever it should be viewed as a strategic lever to accelerate a company’s ability to deliver.

In his book on strategic execution, Ram Charan says that to understand execution, you need to keep three key points in mind:

  • Execution is a discipline, and integral to strategy
  • Execution is the major job of the business (and IT) leader
  • Execution must be a core element of an organization’s culture

Charan’s observations underline what jumps out at me in the data reported by the VMware/EMC study: that you can’t execute effectively without strategic clarity.

Wise IT leaders, then, will make and take the time necessary to get strategically clear on their intended capability outcomes as soon as possible, then document that strategy, share it, and work from it with their teams in order to achieve excellent execution. If more companies do that, we’ll see silos disappearing in a meaningful way, too, because more will be executing on their strategy with success.

=======

Heman Smith is a Strategist with VMware Accelerate Advisory Services and is based in Utah.  

Moving Beyond Infrastructure as a Service to Platform as a Service

By Brian Martinez

My VMware colleague Josh Miller recently explored how companies are extending a DevOps model into their infrastructure organizations and what can be done to speed that essential transition.

I want to talk about the step after that. Where do you go after achieving infrastructure-as-a-service?

Here’s how I think of it. Infrastructure as a service (IaaS) focuses on deploying infrastructure as quickly as possible and wrapping a service-oriented approach around it. That’s essential. But infrastructure in itself doesn’t add direct value to a business. Applications do that. In more and more industries the first company to release that new killer app is the one that wins or at least draws the most value.

So, while it’s essential that you deliver infrastructure quickly, its worth lies in helping deploy applications faster, build services around those applications, and speed time to market.

So you have IaaS; what’s next? Enter the concept of the platform as a service (PaaS). PaaS can be realized in a variety of ways. It might be through second-generation platforms such as database-as-a-service or middleware-as-a-service. Or it could be via third-generation platforms based on unstructured PaaS like containers (think Docker) or structured PaaS (think Pivotal Cloud Foundry).

The flexibility you have in terms of options here is significant, and your strategy should be based on the needs of your developers. Many times we see strategies built around a tool name instead of the outcomes needed from that tool. Listening to developers’ needs should help determine what the requirements are. Then build backwards from there. Often you won’t end up with the same tooling you thought you would.

All the approaches to PaaS, though, share a key feature: they are driven by both a holistic and a life-cycle view of IT. In other words, it’s dangerous to view any IT function today as either separate from any other, or as a one-time deal. Instead, we need to be thinking of everything as connected and at the same time being constantly iterated and improved.

Work from that perspective and it’s easier to navigate the often daunting array of options you have when it comes to PaaS.

Certainly, as you move along this path, it’s very possible to end up with multiple, small cloud-native apps deployed on multiple platforms spread across multiple different data sets – so be aware of the lifecycle.

One other note: there are so many different tools coming to market so quickly in this space that what you pick now may not be what you use in a couple of years. A lot of our customers are nervous about that. So it’s worth remembering that these tools are designed so that you can move your code, and the work that you’re doing with the code, to whatever platform is best suited to deliver it to your customer.

The bottom line: Encourage your customers to try things out so they can create DevOps learning experiences. Be responsive in enabling developers to access new tools, while setting the right boundaries on how they can use those tools (think service definition) and where they bring them to bear. Approaching PaaS with a unified culture of continuous iteration and improvement will equip your developers with the tools they need to move fast, without losing the control and stability essential to IT operations.

=======

Brian Martinez is a Strategist with VMware Advisory Services and is based in New York.

IT’s Payback Time – Calculating the ROI on IT Innovation – Part 1

To justify investment in new IT projects, we need to show that it pays off – here’s how to do that.

by Les Viszlai

“Innovation in IT pays for itself.” That’s something pretty much everyone in IT believes. But it’s also something that a surprising number of companies I visit aren’t in a position to prove. Why? Because most companies don’t actually know what it costs to provide their IT services and can’t quite put a figure on the benefits IT innovation projects can bring. Missing these key data points can make it very difficult to quantify the Return on Investment (ROI) or payback on any IT project, making it harder for IT to compete internally with other departments for scarce business funding. Many times, approved IT budgets get frozen or delayed because the business does not understand the value of the projects in question, and opportunities are missed or delayed.

In Part 1 of this blog series, let’s begin at a basic level in order to get you familiar with the topic of calculating ROI. We’ll dig into what you can do to calculate whether an IT project will be self-funding.

Calculating Basic ROI

Economists have used many formal models to calculate ROI (Return on Investment) and TCO (Total Cost of Ownership), as well as methods for determining IT business value and payback periods. For this conversation, let’s focus on basic ROI and ask the question: if we spend X dollars on a new IT project or service in order to get a new or existing capability, will we spend less money than we are paying now for the equivalent capability or service that we will replace? If an initiative does this, then we can easily make the case for moving forward with that innovation program.

To figure this out, we’ll look at these two areas:

  • Reduced or avoided capital and/or operational costs
  • Increased/Enhanced Revenue
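
As a minimal sketch of the arithmetic – all figures here are made up for illustration – the basic ROI question reduces to a few lines:

    def basic_roi(project_cost, annual_cost_savings, annual_revenue_gain, years=3):
        """Return total benefit, net gain, and simple ROI (%) over the period."""
        total_benefit = (annual_cost_savings + annual_revenue_gain) * years
        net_gain = total_benefit - project_cost
        return total_benefit, net_gain, net_gain / project_cost * 100

    # Example: a $500K project that avoids $150K/year in costs and adds
    # $100K/year in revenue pays for itself within the 3-year window.
    benefit, gain, roi = basic_roi(500_000, 150_000, 100_000, years=3)
    print(f"3-year benefit: ${benefit:,.0f}, net gain: ${gain:,.0f}, ROI: {roi:.0f}%")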

Hard Costs and Soft Costs

Hard cost is money we have to pay. Most hard cost savings or cost avoidance opportunities are fairly easy to quantify. These savings will include the cost of hardware and software you no longer need to pay for, and savings from staff reductions and licenses you will no longer need. However, don’t forget to factor in the added cost of the new hardware and software you are installing, any one-time professional services fees you will need to deploy everything, and any new staffing needs. But this should all be relatively easy to quantify from a hard cost standpoint.

Soft cost savings or cost avoidance is more complex, because the benefits accrued are harder to put actual numbers on, and it’s harder to get internal agreement on how they’re determined. In addition, most companies capture this information over a 3- to 5-year period, which may compete with short-term goals.

If you are already measuring soft costs today, then you’re ahead of the game.  However, you might be surprised by how often I see organizations failing to quantify them. The main reason, typically, is that nobody wants to do the work or no one understands the benefit. Quite often, I see companies look at an IT project purely from a hard cost savings perspective and say, “We can’t figure out how much time this will save, or how much happier this will make the client, so we’re not going to use these additional metrics as a measurement for this project.”

For those of you who want to start looking at this, I suggest reviewing the benefits below to see if they are addressed in the proposed project. These project benefits are easier to quantify and can easily add up to substantial savings over time. To calculate the savings for projects designed to improve existing capabilities, look at the current delivery time and associated costs and then subtract the new projected delivery time and costs from those numbers (see the sketch after the list below).

Will this IT project:

  • Provide faster delivery times?
    With simplified work flows and more repeatable processes handled by machine automation, we can look forward to faster delivery times. To calculate this, we multiply the current hourly FTE costs by the average delivery duration, by the number of requests on a yearly basis, and compare that to the new times and costs.
  • Reduce the cost of training?
    With a simplified system, we can reduce training times for people new to the company, likely employ more junior staff, and divert more senior staff to innovation activities. These savings can be quite high in organizations that have seasonal hiring needs or high staff turnover.
  • Lower regulatory and compliance costs?
    Automation and simplification activities can have a significant impact on reducing the cost of compliance, especially in regulation-intensive sectors like healthcare or finance. These savings can be calculated by tracking the current FTE time used to manually record and document audit-related activities and comparing that to the improvements driven by the project.
  • Reduce human and machine errors?
    With simpler, more repeatable processes being done more often by a machine, we can look forward to fewer failures. To calculate this, we multiply the current hourly loss, by the average downtime duration, by the number of times this happens on a yearly basis.
  • Drive faster resolution times?
    Using MTTR (how long it takes, on average, to restore a system), we multiply the number of incidents, by the time it takes to resolve each one, by the cost of personnel on a yearly basis.

The above is the short list of soft cost savings you can use as a starting point.  They are easier to quantify and get agreement on, and collectively they can seriously add up.
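
Here is the sketch promised above: the three formulas named in the list, expressed in Python with illustrative numbers. Plug in your own measurements; the point is that each benefit is a simple, agreeable multiplication.

    def faster_delivery_savings(hourly_fte_cost, old_hours, new_hours, requests_per_year):
        # (hourly FTE cost x delivery duration x yearly requests), old versus new
        return (old_hours - new_hours) * hourly_fte_cost * requests_per_year

    def error_reduction_savings(hourly_loss, avg_downtime_hours, old_failures, new_failures):
        # hourly loss x average downtime duration x yearly occurrences, old versus new
        return hourly_loss * avg_downtime_hours * (old_failures - new_failures)

    def faster_resolution_savings(hourly_personnel_cost, old_mttr_hours, new_mttr_hours, incidents_per_year):
        # number of incidents x time to resolve x cost of personnel, old versus new MTTR
        return (old_mttr_hours - new_mttr_hours) * hourly_personnel_cost * incidents_per_year

    total = (faster_delivery_savings(85, 40, 2, 120)        # $85/hr FTE, 40h -> 2h, 120 requests/yr
             + error_reduction_savings(10_000, 4, 12, 3)    # $10K/hr loss, 4h outages, 12 -> 3 per yr
             + faster_resolution_savings(85, 6, 1, 200))    # MTTR 6h -> 1h, 200 incidents/yr
    print(f"Estimated yearly soft-cost savings: ${total:,.0f}")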

Projecting Increases in Revenue

It should also be entirely possible to figure out what the IT project change will do for your revenues. Just to be clear: we’re not talking about the results of funding an entirely new product. We’re talking about the revenue enhancements that come with the cost avoidance/reductions and efficiencies tied to existing product/service lines.

Let’s take this scenario for quantifying IT project payback: A business owner is running a web store where it takes a customer 3 minutes to buy something, but 90% of customers abandon the sale after 38 seconds. Along comes the innovative IT team, offering a project that reduces the average time-to-purchase down to 30 seconds. It’s entirely feasible, then, to estimate the increased revenues that ought to accrue, all other things being equal, from the technology change and the faster buy time.
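
A rough projection for that scenario might look like the sketch below. The traffic, order value, and conversion figures are all assumptions I’ve added for illustration; the conversion lift in particular is the number you will need the business to agree on.

    visitors_per_year = 1_000_000     # assumed store traffic
    avg_order_value = 60.00           # assumed average basket

    old_conversion = 0.10             # 90% abandon the 3-minute checkout
    new_conversion = 0.14             # assumed lift from the 30-second checkout

    old_revenue = visitors_per_year * old_conversion * avg_order_value
    new_revenue = visitors_per_year * new_conversion * avg_order_value
    print(f"Projected incremental revenue: ${new_revenue - old_revenue:,.0f}/year")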

Again, the biggest thing that I see getting in the way of these kinds of calculations is that businesses first have to commit to doing them. I don’t think it really matters which method we use (ROI, EVA, TCO – they’re all fine). We just have to get agreement to pick one.

By doing the work upfront and having those numbers available for review, you put senior leadership in a better position to approve the IT project proposal.  It also leaves very little room for debate on the savings value of the project since we have established agreement within the organization on how the ROI is determined.

Key Take-Aways

  • Don’t forget to establish how you will calculate the expected ROI as you set an innovation strategy.
  • Don’t be hesitant to dive in.  Just pick an accounting method, get agreement on it within your organization, and then start doing the math.
  • This pays off!  In all likelihood it will help you prove that IT innovation does indeed pay for itself.
  • When IT innovation can pay for itself, this leads to more innovation, and that leads to increased customer satisfaction or added brand value, which of course will have a positive direct impact on your business.

Stay tuned. In my next blog we will dig into the obstacles to watch out for that impact our ability to achieve the projected savings.

=======

Les Viszlai is a principal strategist with VMware Advisory Services based in Atlanta, GA.

How DevOps is Changing Infrastructure and Providing Business Value

By Josh Miller

Infrastructure organizations are feeling more pressured than ever to innovate. They are being pushed by business unit leads and application teams to deliver on their part of software toolchain stacks at a faster pace. They are increasingly expected to be flexible and agile in how they operate and manage the platforms they engineer.

Despite this, many infrastructure groups still focus primarily on the delivery of physical hardware platforms rather than viewing their roles from a more holistic, ready-to-consume service perspective. In my opinion, that unwillingness to grow beyond engineering physical infrastructure, no longer a key differentiator within IT systems, is the single most limiting hurdle that infrastructure practices face today.

In this blog post I want to delve a little further into what I’m seeing in the field when it comes to changing infrastructure consumption models. I then suggest what I believe needs to happen for more companies to realize the tremendous advantages that a DevOps approach to infrastructure can bring.

Infrastructure Evolution

When I’m out performing assessments, I’m seeing companies at three stages:

  • While many customers are pushing at the boundaries of compute virtualization and often do have highly virtualized compute environments, the majority of VMware customers are still not taking advantage of the benefits that storage and network virtualization technologies offer in terms of abstracting, pooling, and creating the potential for automation of provisioning and management. In contrast, the most progressive infrastructure leaders respond to the needs of IT stakeholders by virtualizing the entirety of their physical infrastructure (compute, storage, and network). Doing so  adds a layer of software-defined abstraction across the board rather than in the singular silo of compute. Completing the final steps of the virtualization journey that began over a decade ago, then, is really the first step to becoming a DevOps-driven infrastructure practice.
  • With the foundation of virtualized compute, storage, and network platforms in place, the next step is to develop a service orientation. Infrastructure teams that are at this point package infrastructure capabilities into fully-defined services, enabling more advanced consumption models such as self-service consumption of infrastructure services (IaaS, PaaS, etc.). The services are exposed via portal-based user interfaces or via standardized APIs.
  • The final and perhaps the most important change that infrastructure leaders drive is bridging the gap between applications and operations teams that developed over the past few decades. They are creating cross-functional teams that include all of the skills required to deliver an end-to-end infrastructure service to market in a standardized, iterative fashion.

By initiating and driving these three key changes, infrastructure leads are opening the door for their practitioners to apply best practice DevOps principles. Examples include continuous integration and deployment and automated delivery of infrastructure services and capabilities.

Key Benefits of DevOps Approach

Consider an example of the very real benefits that the approach can bring: one of our clients adopted a DevOps-oriented, agile approach to development and reduced the delivery cycle for infrastructure services from months to weeks almost immediately upon completing the transition. This resulted in deploying more functionality to the newly developed cloud infrastructure platform during each four-week delivery cycle than they had delivered in the previous year’s worth of development. Application developers immediately recognized the effects of this change and the organization’s CTO significantly increased the team’s budget for the next financial year. The intent of that budget was to accelerate the deployment and adoption of private and public cloud services across IT.

Stories like this suggest where infrastructure organizations should increase focus in the future: moving towards fully embracing DevOps not so much as a sequence of particular steps to take in a specific order, but as a guide for the organization’s culture.

DevOps is not, after all, a prescriptive framework. It’s much more a way of doing things – “a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes,” to quote Wikipedia’s pretty good definition.

Easing the Shift for Infrastructure Teams

What I’m also seeing is that DevOps isn’t an easy shift for infrastructure teams. Fear of change and a lack of exposure to DevOps concepts and practices are very hard to overcome. The territorial boundaries developed by operations over years, operating in silos, become comfort zones that are not easily penetrated. Operations employees, like anyone else, are susceptible to a general feeling of hopelessness, thanks to the fact that they are usually buried in existing work (break/fix, project enablement, etc.) and have no time to spare for true innovation.

Infrastructure teams, therefore, need assistance. However, the help they need is not usually what they think they need – for example, just another tool, application, or quick fix. What is needed are targeted initiatives that jumpstart more holistic change across all the fronts of people, process, and technology. Further, they need ongoing mentoring and coaching to usher change from the initial stages of incubation to full adoption across the entirety of their organizations.

The payoff is tremendous. Successful DevOps transformations empower infrastructure organizations to deliver each release more robustly and better aligned to customers’ needs. When those needs change, they’re not stuck in a long delivery cycle, but can instead reprioritize and deliver something of immediate value in the next cycle. By increasing speed and frequency of releases, they offer better value per release, and better time to market – directly impacting business results. That, ultimately, is the only purpose that IT should be focused on, because without measurable business results, there may well be no business for IT to support.

=======

Josh Miller is a Business Solution Strategist within VMware’s Accelerate Advisory Services practice and is based in Oklahoma City, OK. You can connect with him on LinkedIn.

Why Should CIOs Invest in Network Virtualization with NSX?

By Kai Holthaus

Data-center virtualization is nearly all-encompassing by now. Most corporations have achieved a compute virtualization rate of over 80%. Only very few workloads remain on physical hardware instead of being handled by a virtual machine, and usually that’s because of very specialized requirements of the applications themselves. Storage is following closely behind.

The main holdout to the software-defined data center (SDDC) is the network infrastructure. Most networks are still being managed on the physical hardware itself, instead of virtualizing the network layer as well, and moving the management of the network into software. With NSX, VMware has the premier network virtualization software, and NSX can help you reap the benefits of a virtualized network.

But why would a CIO invest in network virtualization? This blog post will explore the main use and business cases.

Use Case 1: Security

The importance of good security has only grown in recent years. Practically every week we hear of data breaches and hackers gaining access to sensitive data in some way, shape or form. The average cost of such a data breach in the US is over $6.5M [1].

Security is complicated and costly. In a hardware-managed network environment, security must be designed in from the ground up, and implementing changes to the security setup quickly becomes a relatively big project.

With NSX, you can implement micro-segmentation of the network. Network administrators can easily define and implement strong firewalls on each deployed virtual machine and on the hypervisors running those virtual machines. Changes in security requirements can be implemented quickly, because they only require reconfiguring the NSX setup instead of the physical hardware. And since deploying those additional firewalls is handled in software, configuring stronger firewall rules becomes easier, and network administrators gain the ability to control the network traffic flowing between different VMs in a more granular fashion.
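
To illustrate the concept – this is a generic sketch of micro-segmentation expressed as software-defined rules, not the NSX API itself – the firewall policy becomes data that can be reviewed and changed without touching physical hardware:

    # Default deny: traffic passes only if a rule explicitly allows it.
    RULES = [
        {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
        {"src": "app-tier", "dst": "db-tier",  "port": 5432, "action": "allow"},
        # Note there is no web-tier -> db-tier rule: those segments stay
        # isolated even when the VMs share a hypervisor.
    ]

    def is_allowed(src, dst, port):
        return any(r["src"] == src and r["dst"] == dst and r["port"] == port
                   and r["action"] == "allow" for r in RULES)

    print(is_allowed("web-tier", "app-tier", 8443))   # True
    print(is_allowed("web-tier", "db-tier", 5432))    # False: blocked by default deny

Tightening the policy becomes an edit to the rule set, not a hardware project.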

For an easy-to-understand primer on micro-segmentation, check out my colleague’s blog on Understanding Software-Defined Networking for IT Leaders.

Use Case 2: Agility

The network is typically the bottleneck to rapidly deploying new virtual machines or new environments for virtual machines. This happens because the network is hardware-managed, which limits the ability of the network team to quickly change the network topology to accommodate new subnets or VLANs. It also means that provisioning a new VM cannot always be fully automated, because there is the potential for a manual reconfiguration of the network being required.

Moving management into software allows the full automation of the VM provisioning and configuration processes. Configuring new VMs now becomes a matter of minutes, if not seconds. Moving VMs between hosts can now easily be done, because NSX can automatically re-configure the network so that the VM keeps its network configuration, even when moving it somewhere else.

The ability to quickly set up and tear down entire networks, and to reconfigure the network on the fly, is an essential requirement for continuous deployment and integration. Techniques like this allow DevOps-centric organizations to rapidly implement new functionality for their applications, at rates of up to several changes to production systems within a single minute.
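
As a sketch of why this matters for continuous integration – the provision and teardown functions below are stand-ins of my own for whatever your automation tooling exposes, not a specific product API – an entire network plus its VMs becomes something a pipeline can create and destroy per test run:

    import uuid

    def create_network(name):
        print(f"created logical network {name}")    # software-defined: seconds, not tickets
        return name

    def create_vm(name, network):
        print(f"created VM {name} on {network}")
        return name

    def destroy(resources):
        for r in reversed(resources):
            print(f"destroyed {r}")

    def run_tests_in_ephemeral_environment():
        run_id = uuid.uuid4().hex[:8]
        net = create_network(f"test-net-{run_id}")  # a unique, isolated subnet per run
        resources = [net,
                     create_vm(f"app-{run_id}", net),
                     create_vm(f"db-{run_id}", net)]
        try:
            print("running integration tests in a production-like environment...")
        finally:
            destroy(resources)                      # tear down; nothing lingers

    run_tests_in_ephemeral_environment()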

Use Case 3: Availability / Disaster Recovery

Failing over to a Disaster Recovery (DR) site typically involves reconfiguring the network infrastructure to point at new servers. This is very time-consuming and error-prone. Moving management of the network into software allows network teams to leave the physical network infrastructure alone when failing over to DR resources. The network traffic will simply be routed to a different VM when the original VM becomes unavailable. Integrating NSX into DR plans, and into other data center management software, will therefore allow network teams to reduce the recovery time objective (RTO) significantly.

These are only three use cases for why virtualizing the network using NSX is a winning business proposition. There are additional use cases, like enabling hybrid cloud environments, which further improve your return on investment for NSX.

Broad adoption of compute virtualization took about 10 years. With these use cases and benefits, it should not take 10 years to reach broad adoption of network virtualization.

=======

Kai Holthaus is a Sr. Transformation Consultant with VMware Operations Transformation Services and is based in Oregon.

[1] 2015 Cost of a Data Breach Study, Ponemon Institute

 

The new culture of IT echoes the industry’s earliest days.

In many ways, it’s back to the future – but we also need some things to change.

by Reg Lo

To get a sense of what’s happening in IT today, it can help to have a long-term perspective. Think back to the earliest days of computing, for example, and you can see that we’ve almost come full circle – a reality that underscores the major cultural shift that the business is undergoing right now.

When enterprise computers were first commercially available, companies used to buy their hardware from someone else but write their own software, simply because there wasn’t very much packaged software out there to buy.

Then by the ’90s or so, it became the norm to purchase configurable software for the business to use. That worked well for a while, as companies in many different industries deployed similar software, e.g. ERP, CRM, etc.

Today we expect software to do a lot more. Moreover, we expect software to differentiate a business from its competitors – and that’s returning IT to its roots as a software developer. After all, the ability to create digital enterprise innovation requires software development skills. And so we’ve traced a full arc from a software development perspective.

The Expanding Reach of IT

Now add another historic change that we’re seeing: IT departments used to just provide services for their business, their internal customer, but the advent of the fully digital enterprise is expanding who gets touched by IT. IT departments now need to reach all the way to the customer of the business, the consumer. When we talk about omnichannel marketing, for example, we’re expecting IT to help maintain connections with consumers over web, phone, chat, social media, and more. The same goes for the Internet of Things, where it’s not so much the consumer as a remote device or sensor out in the field somewhere that IT needs to be worried about.

Both broad trends have changed the scope of IT and both are making IT much more visible. More importantly, they mean that IT is now driving revenue directly. If it’s successful, IT makes the business highly successful. But if IT fails, it will directly impede the business revenue flow.

Becoming Agile Innovators

That brings me to my last point. Here’s what hasn’t changed from the past: for the last 30 years or so, the mantra in IT cultures has been “Bigger is Better.” Software Development and Release processes got increasingly bureaucratic and terribly slow (think of those epic waits for the next ERP release). The standard mind-set was to package multiple changes into a single release that they’d roll out every six months or so, if they were lucky.

But that culture is also something that we need to be moving away from, precisely because the relationship between IT and the business it serves has changed. Businesses used to perceive IT as just a cost center that should be squeezed for more and more savings. But when IT touches the end-customer experience directly, business needs IT to be both cheaper and faster – to support and enable the kinds of innovation that will keep the business one step ahead.

We now have the technologies (cloud computing, cloud-native applications) and methodologies (agile development, DevOps) to make smaller, much more frequent, incremental releases that are simpler, less likely to be faulty, and easy to roll back if anything goes wrong.

What we’re still lacking – which I still see when I’m out in the field – is the widespread cultural change required for it to happen. Most importantly, that means adopting what I would call a DevOps mindset across the entire IT organization. At its essence, this mindset views the entire work of IT through a software lens. It makes everything, including infrastructure, code.

For IT long-timers, in many ways that’s simply returning software to the centrality it once enjoyed. But if it takes us back to the early days of computing, it also points us to what we must change if we’re to succeed in a future that’s entirely new.

=======

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

End User Computing Modernisation – Observations of Success

By Charles Barratt

As I come to the end of what has been a long customer engagement, I find myself reflecting on what went well, not so well and REALLY well. I engaged with a client who was struggling with desktop transformation, having been shackled to Windows XP for too long, and had little direction to move in apart from the tried and tested approach of fat client refresh and System Center Configuration Manager (SCCM) application delivery; hardly transformative or strategic. Compared to what they were doing in the datacenter, the desktop environment was light-years behind, yet they had the capability of a modern datacenter to deliver a transformative digital workspace.

All too often, I witness organisations treating the desktop as a second-class citizen to the datacenter, when in reality the datacenter is the servant to the endpoint. Those organisations that truly transform their end user computing (EUC) environments do so with three key principles in mind:

Engagement

All too often, IT starts with technology rather than thinking about what impact modernisation will have on users, their productivity and the financial model associated with end user IT. Gone are the days when we simply issued users with devices and mobile phones and never spoke to them again until they had an issue. Our end users are far more technically savvy and operate their own networks at home; they want to be engaged, they want a say on the appropriate application of technology and they want workplace flexibility. Happy workers tend to stay where they are.

Users deserve to be engaged, and by engaging them early in EUC transformation you create advocates who are part of the process and want to see it succeed. Don’t underestimate this vital stage. Simply put: “Stop starting with technology.”

Integration

It is no longer appropriate to operate end user computing environments in isolation from the rest of the IT organisation. Virtualisation stopped that trend when we saw the desktop move into the datacenter. As organisations start to consume different application and security models, your EUC environment needs to be close to the action for performance and operational gains.

To fully harness this change, we see organisations starting to build out a centre of excellence whose members span the many moving parts of an EUC environment: endpoint, applications, security, networks, datacenter and operations. In doing so you can be confident that there will not be overspending on technology, there will be appropriate capacity to support your requirements, and the best experience will be delivered to your end users.

Simplicity

I recently saw the lightbulb moment in my client’s eyes when discussing the simplification of application delivery; we were introducing AppVolumes. Rather than dazzle them with science, we gave a simple demonstration and had a discussion around capturing the time-tested install process of “Next, Next, Next, Finish” into an AppStack, and made them realize that the world has moved on.

As organisations look to re-architect critical applications, they need to think about simplifying application lifecycle management (ALM) for legacy applications, a key capability of AppVolumes. It brings the ability to shorten the ALM process significantly, from request fulfillment through patching and updates, to drive consistency and stability whilst minimizing the cost associated with lifecycle and change processes.

As with all technologies, you need to make sure the investment reduces the problem and the financial gain supports the change. The architecture and minimal impact on existing processes places AppVolumes in a very desirable place to solve application delivery challenges.

Opportunities to transform the end user computing environment don’t come along very often, but their impact is profound. There has never been a more exciting yet complicated time to be working in this space.

To use the words of the late Steve Jobs, “You have to start with the customer experience and work back towards the technology.”

=======

Charles Barratt is an EUC Business Solutions Strategist for VMware’s Advisory Services team and based in the UK.