
We’re going to the Cloud. Do I Still Need ITSM?

By Greg Link

As of this writing, Boeing is demonstrating its 787 Dreamliner at the Paris Air Show. After a normal takeoff roll, the aircraft jumps off the runway into what appears to be a near-vertical climb to the clouds! That is impressive. Recently a different form of cloud, cloud computing, has appeared on the horizon and appears to be here to stay. Forrester expects cloud computing to grow from approximately $41 billion this year to more than $240 billion in 2020, nearly a six-fold increase. That is impressive indeed. Traditional IT shops have embraced IT Service Management (ITSM) frameworks, such as the Information Technology Infrastructure Library (ITIL®), to help them respond to dynamic business requirements. But is this framework still needed as more and more companies turn to the cloud?

What is the purpose of ITSM?

ITSM and ITIL are often used interchangeably. They have become synonymous because ITIL is the de facto standard for the design, delivery and operation of quality IT services that meet the needs of customers and users. Its approach focuses on three major areas:

  • Taking a process-based approach
  • Delivering IT services rather than IT systems or applications
  • Stressing continual service improvement

With the success of ITSM initiatives across the globe, there is little reason to abandon these critical practices simply because of a shift in the way storage and computing are delivered. The cloud merely provides companies with a powerful utility for meeting business requirements and demand.

5 reasons why ITSM is still needed in the cloud

Your transformation to the cloud can take several paths. You can go the route of the Private Cloud, where you have total control over the infrastructure and applications used to do business. Another option is the Public Cloud, where you contract with a provider that hosts your storage and computing resources. The final option is the Hybrid Cloud, a combination of the two, which allows companies to meet seasonal or growing demand. Whichever path you choose will still require the basics covered in ITIL.

1) Service Strategy

Service Strategy helps in understanding the market and customer needs in order to create a vision of the services required to meet business objectives. In this phase we look at disciplines like Financial Management; the cloud is best served when costs and fees are known and predictable. This phase also covers Demand Management, which proactively manages how the service will be governed from a resource perspective. Both are critical to success in the cloud.

2) Service Design

Service Design is the practice of taking a holistic approach to end-to-end service design while considering such things as people, process, technology and vendor relationships. Processes within this phase that are critical with the cloud are:

  • Service Level Management – setting and keeping service targets
  • Supplier Management – essential if you are considering the Public Cloud, as the vendor you choose will be responsible for the delivery of your services.
  • Service Continuity Management – what is the backup plan and, if needed, the recovery plan to resume business services should there be a disruption?
  • Capacity and Availability Management – will resources be available and in quantities needed to meet the business requirements in a cost effective manner?

3) Service Transition

Service Transition assists in getting a service into production with processes such as:

  • Transition Planning and Support – planning and coordination of resources to ensure your IT service is market ready.
  • Evaluation – does the service perform and do what it is supposed to do (warranty and utility)?
  • Knowledge Management - make sure that people have the information they need, in whatever capacity, to support the service.
  • Change Management – Private Cloud users will follow existing processes; however, a well-coordinated change management process will be needed for Public and Hybrid Cloud users, because your vendor's changes may affect your application in unwanted ways. Additionally, if the cloud is to be used for provisioning servers and environments, Change Management should be optimized for agility and repeatability.

4) Service Operation

Service Operation manages how a company balances consistency and responsiveness. Processes that are important when considering deployment to a cloud environment include:

  • Incident Management – where do cloud users turn when things don't go as they should, and how is that managed?
  • Access Management – the method by which only authorized users are allowed access to use the application and other resources used in the delivery of services.

5) Continual Service Improvement 

Now that you've deployed to the cloud, how are customers reacting to your service? Is the service meeting their needs? Is it fast enough, clear enough, secure enough…?

IT Service Management principles will help guide you to a successful cloud experience with ease and confidence. Luckily, you're not alone. VMware Professional Services is not only expert in cloud innovation; we also have ITIL Experts on staff who can help you ensure ITSM best practices are applied throughout your operating model.


Greg Link is a Transformation Consultant based in Las Vegas, NV.

Understanding Software-Defined Networking for IT Leaders – Part 1

By Reg Lo

Software-defined networking (SDN) is revolutionizing the datacenter much like server virtualization has done. It is important for IT leaders to understand the basic concepts of SDN and the value of the technology: security, agility through automation and cost-savings. This blog post explores some of the security benefits of SDN using a simple analogy.

Courtesy of the game DomiNations, Nexon M, Inc.

My kids are playing DomiNations – a strategy game where you lead your nation from the Stone Age to the Space Age. I recruited their help to illustrate how SDN improves security. In this analogy, the city is the datacenter; walls are the firewall (defense against attackers/hackers), and the workloads are the people/workers.

The traditional way of defending a city is to create walls around the city. In the same manner, we create a perimeter defense around our datacenter using firewalls. However, imagine there is a farm outside the walls of the city. Workers need to leave the protection of the city walls to work in the farm. This leaves them vulnerable to attack. In the same way, as workloads or virtual machines are provisioned in public or hybrid clouds outside the datacenter firewalls, what is protecting these workloads from attack?

Courtesy of the game DomiNations, Nexon M, Inc.

In an ideal world, let’s say my kids have magical powers in the game and they enchant the city walls so they can expand and contract to continuously protect the workers. When a worker goes to the farm, the walls automatically extend to include the worker in the farm. When they return back to the city, the walls return to normal. SDN is like magic to your firewalls. Instead of your firewalls being defined by physical devices, a software-defined firewall can automatically expand into the public cloud (or the part of the hybrid cloud that is outside of your datacenter) to continuously protect your workloads.

This ability to easily and automatically configure your firewalls provides another benefit: micro-segmentation. As mentioned before, in a traditional city, the city walls provide a perimeter defense. Once an attacker breaches the wall, they have free rein to plunder the city. Traditional datacenters have a similar vulnerability. Once a hacker gets through the firewall, they have free rein to expand their malicious activity from one server to the next.

Courtesy of the game DomiNations, Nexon M, Inc.

Micro-segmentation of the network is like having city walls around each building. If an attacker breaches the outer perimeter, they can only destroy one building before having to re-start the expensive endeavor of attacking the next line of defense. In a similar fashion, if a hacker penetrates one application environment, micro-segmentation prevents them from gaining access to another application environment.
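To make the analogy more concrete, here is a minimal, purely illustrative sketch of the difference between a perimeter-only policy and a micro-segmented one. The workload names, segment assignments and allowed flows are assumptions for illustration, not an actual NSX configuration.

```python
# Illustrative sketch: perimeter-only vs. micro-segmented east-west policy.
# Workloads, segments and rules below are made-up examples.
SEGMENTS = {
    "web-01": "web-tier",
    "app-01": "app-tier",
    "db-01": "db-tier",
}

def perimeter_only_allowed(src, dst):
    # With only a perimeter firewall, anything already inside the
    # datacenter can reach anything else (free rein once breached).
    return True

# Micro-segmentation: deny by default, allow only explicit flows
# (walls around each "building").
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

def micro_segmented_allowed(src, dst):
    src_seg, dst_seg = SEGMENTS[src], SEGMENTS[dst]
    return src_seg == dst_seg or (src_seg, dst_seg) in ALLOWED_FLOWS

# A compromised web server can no longer jump straight to the database:
print(perimeter_only_allowed("web-01", "db-01"))   # True
print(micro_segmented_allowed("web-01", "db-01"))  # False
```

The point is simply that, with a software-defined firewall, the deny-by-default rules travel with each workload instead of living only at the perimeter.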

Software-defined networking can improve information security. Every few months there is a widely publicized security breach that damages a company’s brand. CIOs and other IT leaders have lost their jobs because of these breaches. SDN is a key technology to protect your company and your career.

In Parts 2 and 3 of this series, "Understanding Software-Defined Networking for IT Leaders," we'll explore how SDN increases agility and drives cost savings.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Solving the Shadow IT Problem: 4 Questions to Ask Yourself Now

By Sean Harris

Most IT organizations I speak to today admit they are concerned about the ever-increasing consumption of shadow IT services within the business; the ones that are not concerned, I suspect, are in denial. A common question is, "How do I compete with these services?" The answer I prefer is: "Build your own!"

What Would It Mean for My IT Organization to Truly Replace Shadow IT?

Totally displacing Shadow IT requires building an organization, infrastructure and services portfolio that fulfills the needs of the business as well as—or better than—external organizations can, at a similar or lower cost. In short, build your own in-house private cloud and IT-as-a-Service (ITaaS) organization to run alongside your traditional IT organization and infrastructure.

Surely your own in-house IT organization should be able to provide services that are a better fit for your own business than an external vendor.

IT service providers often provide a one-size-fits-all service for a variety of businesses in different verticals: commercial, non-commercial and consumer. And in many cases, your business has to make compromises on security and governance that may not be in its best interests. By definition, in-house solutions will comply with security and governance regulations. Additionally, the business will have visibility into the solution, so the benefits are clear.

How Do I Build My Own Services to Compete With Shadow IT?

Answering the technical part of this question is easy. There are plenty of vendors out there offering their own technical solutions to help you build a private cloud. The challenge is creating the organizational structure, developing in-house skills, and implementing the processes required to run a true ITaaS organization. Most traditional IT organizations lack key skills and organizational components to do this, and IT organizations are not typically structured for this. For example:

  • To capture current and future common service requirements and convert these into service definitions, product management-type skills and organizational infrastructure are needed.
  • To promote the adoption of these services by the business, product marketing and sales-type functions are required.

These functions are not typically present in traditional IT organizations. Building this capability alongside an existing IT organization has three main benefits:

  • It is less disruptive to the traditional organization.
  • It removes the pain of trying to drive long-term incremental change.
  • It will deliver measurable results to the business quicker.

What If External IT Services Really Are Better?

It may be discovered after analyzing the true needs of the business that an external provider really can deliver a service that is a better fit for the business needs – and maybe even at a lower cost than the internal IT organization can offer. In this case, a “service broker” function within the IT organization can integrate this service into the ITaaS suite offered by IT to the business far more seamlessly than a traditional IT organization can. The decision should be based on business facts rather than assumptions or feelings.

How Do We Get Started?

As part of VMware’s Advisory Services and Operation Transformation Services team, I work with customers every day to map out the “Why, What and How” of building your own ITaaS organization to compete with Shadow IT services:

  • Why
    • Measurable business benefits of change
    • Business case for change
  • What
    • Technology change
    • Organizational change
    • People, skills and process change
  • How
    • Building a strategy and roadmap for the future
    • Implementing the organization, skills, people and process
    • Measuring success

In the end, customers will always choose the services that best meet their needs and cause them the least amount of pain, be it financial or operational. Working to become your business’ preferred service provider will likely take time and resources, but in the long run, it can mean the difference between a role as a strategic partner to the business or the eventual extinction of the IT department as an antiquated cost center.


Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

What Is DevOps, and Why Should I Care? -- The IT Leadership Perspective

By Kai Holthaus

One of the newest buzzwords in IT organizations is "DevOps." The principles of DevOps are contrary to how IT has traditionally managed software development and deployment. So, why are more and more organizations looking to DevOps to help them deliver IT services to customers better, cheaper and faster?

But…what exactly is DevOps anyway? It is not a job, a tool or a market segment; it is best defined as a methodology, or an approach. It shares ideological elements with techniques like Six Sigma and Lean. More on this later.

Software Development and Deployment Today

In today's world, there's a "wall" between the teams that develop software and the teams that deploy software. Developers work in the "Dev" environment, which is like a sandbox where they can code, try, test, stand up servers and tear them down, as the coding work requires. Teams working in the "Ops" environment deploy the software into a production environment, where they also ensure the software is operational. Often, the developed software is literally 'thrown over the wall' to operations teams with little cooperation during the deployment.

Why Was It Set Up That Way?

This “wall” is an unintended consequence of the desire to allow developers to perform their tasks, and operational teams to do theirs. Developers are all about change; their job is to change the existing (and functional) code base to produce additional functionality. Operations teams, on the other hand, desire stability. Once the environment is stable, they would like to keep it that way, so that customers and users can do their work.

For that reason, developers usually do not even have access to the production environment. Their work—by nature—is considered too disruptive to the stability of the production environment.

The Problems with Today’s Development and Deployment

Because of the separation between developers and operations, deployment is often cumbersome and error-prone. While developers are usually asked to develop deployment techniques and scripts, these techniques first have to be adapted to the production environment. Then they have to be tested. And since the deployment teams don't usually understand the new code (or its requirements) as well as the developers, the risk of introducing errors rises. In the worst case, this leads to incidents, which are all too common.

Additionally, in today’s datacenters that are not yet software-defined, new infrastructure—such as compute, storage or network resources—is hard to set up and integrate into the environment. This further slows down deployments and raises the possibility of disruptions to the environment.

DevOps to the Rescue

At its core, DevOps is a new way of developing and maintaining software that stresses collaboration, integration and automation. It attempts to break down the “wall” between development and operations that exists today by removing the functional separation between the teams. It uses agile development and testing methodologies, such as Scrum, and relies on virtualization and automation to migrate entire environments instead of migrating just the codebase between environments.

The main goal of implementing a DevOps approach is to improve deployment frequency—up to "continuous deployment"—of small, incremental improvements to the functionality of software. These are essentially "dot releases," and with software updates evolving from manufactured media to online delivery, this makes complete sense.
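As a rough illustration of what a continuous-deployment flow for such small releases can look like, here is a minimal, hypothetical sketch; the stage names and functions are assumptions, not a specific VMware or vRealize Code Stream pipeline.

```python
# Hypothetical sketch of a continuous-deployment pipeline for small,
# incremental ("dot") releases; every stage is automated.

def build(commit):
    """Package the change into a versioned, deployable artifact."""
    return {"artifact": f"app-{commit[:7]}.tar.gz", "commit": commit}

def run_automated_tests(artifact):
    """Automated tests gate every release; a failure stops the pipeline."""
    return True  # placeholder for unit/integration/acceptance suites

def deploy(artifact, environment):
    """Push the packaged release to the target environment."""
    print(f"Deploying {artifact['artifact']} to {environment}")

def pipeline(commit):
    artifact = build(commit)
    if not run_automated_tests(artifact):
        raise RuntimeError("Tests failed; release not promoted.")
    deploy(artifact, "staging")
    deploy(artifact, "production")  # small change, deployed frequently

# Every merged change flows through the same automated path:
pipeline("9f2c1e7a0b4d")
```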


Why DevOps? Why Now?

With the increased availability and utilization of software to provision, manage and decommission resources—such as compute, storage and network in datacenters and IT environments—the DevOps approach is becoming more and more common. IT organizations are now able to create new resources, integrate them quickly into environments, and even move them between environments with the click of a button. This allows developers to develop their code in a particular technology stack, and then easily migrate the entire stack to the production environment, without disrupting the existing environment.

Sounds Great, How Do I Start?

The three main aspects that need to be addressed to implement a DevOps approach for the development and operations of software come down to the familiar elements of project management:

  • People
  • Process
  • Technology

On the people side, teams have to be established that have accountability over the software across its entire lifecycle. The same teams that develop the software will also assure the quality of the software and deploy the software into the production environment.

From a process perspective, an agile development methodology, such as Scrum, must be implemented to increase the frequency at which deployable packages of software are being created. Reducing the amount of change at each deployment cycle will also increase the success rate of the deployments, and reduce the number of incidents being created.

On the technology side, DevOps relies heavily on the Software-defined Datacenter (SDDC), including high levels of automation for the provisioning, management and decommissioning of datacenter resources.

VMware Is Here to Help

VMware has been the leader in providing the software to enable the SDDC. VMware also has the knowledge and technology to enable you to use DevOps principles to improve your software-based service delivery. VMware vRealize Code Stream enables continuous deployment of your software. And if you like to be on the leading edge of technology, check out VMware Photon for a technology preview of software that allows you to deploy new apps into your environments in seconds.


Kai Holthaus is a delivery manager with VMware Operations Transformation Services and is based in Oregon.

4 Key Elements of IT Capability Transformation

By John Worthington

As with any operational transformation, an IT organization should start with a roadmap that lays out each step required to gain the capabilities needed to achieve their desired IT and business outcomes. A common roadblock to success is that organizations can overlook one or more of the following key elements of IT capability transformation:

Technical

Technical capabilities describe what a technology does. The rate of technological change can drive frantic cycles of change in technical capabilities, but often, it is critical to examine other transformation elements to realize the value of technical capabilities.

People

A people-oriented view of a capability focuses on an organization’s workforce, including indicators of an organization’s readiness for performing critical business activities, the likely results from these activities, and the benefit from investments in process improvement, technology and training.

Transitioning people requires understanding what roles, organizational structure and knowledge will be needed at each step along the transformation path.

Process

A process sharpens the view of an organizational capability. Formally calibrating process capabilities and maturity requires processes be defined in terms of purpose/objectives, inputs/outputs and process interfaces. One advantage of a process view is that it combines other transformation elements, including people (roles) and technology (support for the process).

Transitioning processes is more about adaptation – assuming the organization has defined processes to adapt. While new working methods associated with cloud computing—such as agile development and continual deployment—are driving ‘adaptive’ process techniques, an organization’s process capability will remain a fundamental and primary driver of organizational maturity.

Service

A service consciously abstracts the internal operations of a capability; its focus is on the overall value proposition. A service view of a capability is directly tied to value, since services—by definition, according to ITIL—are a “means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs and risks.”

As you assess your organization’s readiness for transformation, and create a roadmap specific to your organization, it’s essential that these transformation elements are understood and addressed.


John Worthington is a VMware transformation consultant and is based in New Jersey. Follow @jMarcusWorthy and @VMwareCloudOps on Twitter.

What is the Difference Between Being Project-Oriented vs. Service-Oriented?

By Reg Lo

Today, IT is project-oriented. IT uses "projects" as the construct for managing work. These projects frequently begin their lifecycle as endeavors chartered to implement new applications or to complete major enhancements or upgrades to existing applications. Application projects trigger work in the infrastructure/engineering teams, e.g., provisioning new environments including compute, storage, network and security, with each project having its own discrete set of infrastructure provisioning activities.

Project-Oriented vs. Service-Oriented

This project-oriented approach results in many challenges:

  • There is a tendency to custom-build each environment for that specific application. The lack of standardization across the infrastructure for each application results in higher operational and support costs.
  • Project teams will over-provision infrastructure because they believe they have “one shot” at provisioning. Contrast this to a cloud computing mindset where capacity is elastic, i.e., you procure just enough capacity for your immediate needs and can easily add more capacity as the application grows.
  • The provisioned infrastructure is tied to the project or application. Virtualization allows IT to free up unused capacity and utilize it for other purposes, reducing the overall IT cost for the organization. However, the project or application team may feel like they "own" their infrastructure since it was funded by their project or for their application, so they are reluctant to "give up" the unused capacity. They do not have faith in the elasticity of the cloud, i.e., they do not believe that when they need more capacity, they can instantly get it; so they hoard capacity.
  • A project orientation makes an organization susceptible to understating the operations cost in the project business case.
  • It makes it difficult to compare internal costs with public or hybrid cloud alternatives – the latter being service-oriented costs.

When IT adopts a service-oriented mindset, it defines, designs and implements the service outside the construct of a specific application project. The service has its own lifecycle, separate from the application project lifecycles. Projects consume the standardized, pre-packaged service. While the service might have options, IT moves away from custom-building each application environment. IT needs to define its Service Lifecycle, just as it has defined its Project Lifecycle. You can use VMware's Service Lifecycle, illustrated below, as a starting point.

The Service Lifecycle

The Service-Oriented Mindset

This service-oriented mindset not only needs to be adopted by the infrastructure/operations team, but also by the application teams. In a service-oriented world, application teams no longer "own" the specific infrastructure for their application, e.g., a specific set of virtual machines with a given number of CPUs, RAM, storage, etc. Instead, they consume a service at a given service level, i.e., at a given level of availability, with a given level of performance, etc. With this mindset, IT can provide elastic capacity (adding capacity and repurposing unused capacity) without causing friction with the application teams.
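As a purely illustrative sketch of this shift, a standardized service might be described as a catalog entry with a few service-level options that projects consume, rather than as a custom-built set of virtual machines. The service name, tiers and targets below are assumptions, not VMware's actual Service Lifecycle artifacts.

```python
# Illustrative only: a standardized, pre-packaged service consumed at a
# chosen service level, instead of project-owned, custom-built VMs.
SERVICE_CATALOG = {
    "application-hosting": {
        "gold":   {"availability": "99.99%", "performance": "high",   "backup": "hourly"},
        "silver": {"availability": "99.9%",  "performance": "medium", "backup": "daily"},
        "bronze": {"availability": "99%",    "performance": "basic",  "backup": "weekly"},
    }
}

def request_service(project, service, tier):
    """A project asks for an outcome (a service level); IT keeps ownership
    of the underlying capacity and can rebalance it elastically."""
    return {"project": project, "service": service, "tier": tier,
            "slo": SERVICE_CATALOG[service][tier]}

# The project requests a service level, not specific hardware:
print(request_service("claims-portal", "application-hosting", "silver"))
```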

The transformation from a project-orientation to a service-orientation is a critical part of becoming a cloud-enabled strategic service provider to the business.  When IT provides end-to-end services to the business, the way the business and IT engage is simplified, services are provisioned faster and the overall cost of IT is reduced.


Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Managing Your Brand: Marketing for Today’s IT

Most IT departments lack expertise in how to market their capabilities and communicate value. Now, more than ever before, IT organizations have to contend with outside service providers that are typically more experienced in marketing their services and must accept that managing customer perception is essential to staying competitive. Marketing your IT services and capabilities is not just building out your IT implementation campaign; it’s changing the internal culture of your IT organization to think and act like a hungry service provider.

In this short video by Alex Salicrup—“Managing Your Brand: Marketing for Today’s IT”—you will learn about the key areas to consider as you build your marketing and communication strategy.

 


The Benefits of Linking IT Spend to Business Returns

By Sean Harris

For just a moment, consider the following fictitious organization, Widget Warehouse.

Widget Warehouse is making a gross margin of 15 percent and is able to spend five percent of its revenues on IT (you can replace these numbers with your own). The company would like to improve the financial performance of its business and is considering three IT programs to do that, as well as the likely impact on the CIO, CFO and CEO/shareholders.

 

  1. Leveraging IT agility to raise revenue by five percent without cutting IT spend – Assuming business costs rise with revenue but IT costs do not, this adds roughly 0.23 percentage points of gross margin, boosting it to about 15.23 percent. If neither business nor IT costs rise, gross margin climbs to roughly 19 percent. Most importantly, this shows a dynamic, growing business rather than a static one, unlike the following two scenarios.
  2. Leveraging improvements in IT agility, reliability and security to cut business costs by five percent (without cutting IT spend) – This delivers a four percentage point improvement in gross margin.
  3. Cutting IT spending by 20 percent – This improves gross margin by one percentage point. (A worked calculation of all three scenarios follows this list.)
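To make the arithmetic above concrete, here is a minimal sketch of the three scenarios. It assumes the fictitious Widget Warehouse baseline (revenue of 100, a 15 percent gross margin, IT spend of 5 percent of revenue) and, for scenario 1, that non-IT costs scale with revenue while IT costs stay flat.

```python
# Worked calculation for the fictitious Widget Warehouse scenarios above.
REVENUE = 100.0
GROSS_MARGIN = 0.15
IT_SPEND = 0.05 * REVENUE                                # 5
NON_IT_COSTS = REVENUE * (1 - GROSS_MARGIN) - IT_SPEND   # 80

def gross_margin(revenue, non_it_costs, it_costs):
    """Gross margin = (revenue - total costs) / revenue."""
    return (revenue - non_it_costs - it_costs) / revenue

# Scenario 1: revenue +5%, non-IT costs scale with revenue, IT costs flat.
s1 = gross_margin(REVENUE * 1.05, NON_IT_COSTS * 1.05, IT_SPEND)
# Scenario 1 variant: revenue +5% with no cost growth at all.
s1b = gross_margin(REVENUE * 1.05, NON_IT_COSTS, IT_SPEND)
# Scenario 2: business (non-IT) costs cut by 5%, IT costs flat.
s2 = gross_margin(REVENUE, NON_IT_COSTS * 0.95, IT_SPEND)
# Scenario 3: IT spend cut by 20%.
s3 = gross_margin(REVENUE, NON_IT_COSTS, IT_SPEND * 0.80)

print(f"Scenario 1:                {s1:.2%}")   # ~15.24%
print(f"Scenario 1 (no cost rise): {s1b:.2%}")  # ~19.05%
print(f"Scenario 2:                {s2:.2%}")   # 19.00%
print(f"Scenario 3:                {s3:.2%}")   # 16.00%
```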

Now, consider the reactions of the Widget Warehouse CIO, CFO and CEO/shareholders to the three scenarios.

  • Scenario 1 – The CEO and shareholders will be most interested in this one, seeing not only improved margin, but also a growing business. This will generate the most interest from the CFO as well, and the CIO is now recognized as a contributor to the growth.
  • Scenario 2 – This will still be of strong interest to the CFO, but of lesser interest to the CEO and shareholders. The CIO will still be seen in a very positive light, but not necessarily a contributor to growing the business.
  • Scenario 3 – It is still likely to be of some interest to the CFO, but of limited interest to the CEO and shareholders. It will more than likely generate a whole heap of pain for the CIO, since a chunk of the cost cutting will involve people and will inevitably damage morale (and productivity) in the IT department.

The most appealing scenario to all parties is a combination of scenarios one and two, which can be achieved in parallel.

So, having agreed that Widget Warehouse wants to focus on the first two scenarios, they now face a critical question: “How do we approach it?”

Shifting the Focus of IT Projects

It is widely accepted that the use of cloud computing—public, private and/or hybrid—and the delivery of IT-as-a-Service (ITaaS) should provide benefits on three axes:

  • Efficiency – Cost containment and reduction
  • Reliability – Reduced outage and improved availability
  • Agility – The ability to respond quicker to the needs of the business, customers and market

Using the software-defined datacenter to deliver the enterprise cloud adds a fourth axis, which is security.

Most IT organizations I speak to are always ready to discuss business cases or return on investment (ROI) based on the efficiency axis, and indeed ITaaS has much to offer in that space. For the purpose of this discussion, however, I will focus on the impact of agility, reliability and security, and how these can be linked to business benefits.

  • Reliability – Most organizations can easily measure the loss of business during an unplanned outage. The key here is to ensure you measure your availability in terms of business availability, and not IT service availability. For example, an IT group that is supporting five IT services—one of which is experiencing outages—might consider itself to be 80 percent available. However, if that one service happens to be authentication and authorization, then it is likely that no business service is available, so from the business's perspective IT is actually 100 percent unavailable. It is therefore vital as a first step to comprehensively map business services to IT services and systems (a short sketch of such a mapping follows the reliability discussion below).

The biggest impact of service outages on the business is on reputation and brand equity. Much has been published on the cost to the Royal Bank of Scotland of its 2012 outage. The bank has admitted that, due to decades of IT neglect, its systems crashed, leaving millions of customers unable to withdraw cash or pay for goods. What is the risk to your business if, during an outage, your customers try an alternative…and never return?

Another consideration is not unplanned downtime, but rather overall availability. Most IT departments do not consider planned downtime as having an impact on the business or on IT service reliability, but is it possible that by reducing planned downtime you could increase revenues? For example, you could extend trading hours or re-use infrastructure for new services.
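Returning to the availability example in the Reliability point above, here is a minimal sketch of why availability should be measured at the business-service level. The service names and the dependency mapping are purely illustrative assumptions.

```python
# Illustrative only: IT-service availability vs. business-service availability.
it_service_up = {
    "web-frontend": True,
    "order-processing": True,
    "billing": True,
    "reporting": True,
    "authentication": False,  # the one IT service experiencing an outage
}

# Each business service is only available if every IT service it
# depends on is available.
business_services = {
    "online-store":       {"web-frontend", "order-processing", "authentication"},
    "invoicing":          {"billing", "authentication"},
    "management-reports": {"reporting", "authentication"},
}

it_view = sum(it_service_up.values()) / len(it_service_up)
business_view = sum(
    all(it_service_up[dep] for dep in deps)
    for deps in business_services.values()
) / len(business_services)

print(f"IT-service view:       {it_view:.0%} available")        # 80%
print(f"Business-service view: {business_view:.0%} available")  # 0%
```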

  • Security – In addition to the loss of business during a security breach, consider the permanent reputation damage resulting from public disclosure. The 2014 security breaches at Sony will cost the company $35 million in IT repairs, in addition to the more intangible, but arguably more serious, harm to its brand's reputation.
  • Agility – While examining the reliability and security axes, it can seem as though you are focusing on the negative impacts IT can have on the business, whereas the agility axis looks squarely at delivering positive impact and business value. Generating the metrics in this space requires a new form of communication between IT and the business: the conversation must shift away from pure cost pressures on IT.
    • By delivering agility, what is the impact IT can have on improving business efficiency (scenario 2)?
    • By delivering agile IT, what is the impact on revenues that can result from shorter time-to-market? What is the long-term impact on market share by being first to market? The first player in a market will often maintain a market leadership position, and be an established premier brand long after others enter the market.

Hopefully this brief discussion has whetted your appetite for refocusing some of your IT transformation effort not just on driving greater efficiency in IT, but on using IT to drive greater efficiency in the business, or even to drive the business itself. This in turn will change the role of IT from being seen as a cost to the business (as it is in most organizations) to being an enabler and vital part of a successful business.

VMware Accelerate Advisory Services can help IT organizations like yours build a roadmap to transform IT into a business enabler and assist in building the business case for change – based not just on the cost of IT, but on the true value IT can contribute to the business.


Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

Establishing Cost Transparency and Changing Your Relationship with the Customer

Today’s IT organizations need a clear picture of what each service costs to facilitate strategic conversations with their customer, the business. This is the only way to continue innovating while operating within budget and competing with the growing prevalence of shadow IT.

To explore some of the most common use cases addressed by IT Financial Management (ITFM), join two of VMware’s most experienced ITFM consultants for this on-demand webinar as they discuss the business issues, solutions, challenges and benefits of a service costing system.

In this webinar, Michael Fiterman, Senior Consultant for vRealize Business, and Brendan O’Connor, Senior Technical Consultant for vRealize Business, will walk you through:

  • Cost transparency:
    • What is real cost transparency?
    • How can it be achieved, and what are the immediate benefits?
  • Customer intimacy:
    • How can we change the conversation with IT consumers?
    • How will it change our business?

IT Transformation in the Insurance and Financial Services Industries

By Gowrish Mallya

Insurance and Financial Services companies are undergoing rapid transformation due to the advent of technological innovations. By 2018, nearly one-third of the insurance industry's business is expected to be generated digitally. In order to be digitally competent, insurance companies need to:1

  • Reduce barriers to customer interaction
  • Use new business models

VMware’s Accelerate Benchmarking Database provides interesting insights into the current state of IT readiness of insurance and financial services companies – and their target state goals. Let’s take a closer look at the two requirements for digital competence.

1. Reduce Barriers to Customer Interaction

In a perfect environment, all Tier-1 applications would be written in lightweight, highly portable application frameworks and be capable of harnessing cloud connectivity and scalability. Virtualizing Tier-1 applications decouples the software stack from the hardware, easing operations like planned maintenance and resulting in tighter alignment between IT and business needs. IT is then able to develop applications that keep up with market needs and serve end customers better.

VMware’s Accelerate Benchmark Database shows that in Insurance and Financial Services industries, currently only 14 percent of the companies have 75 percent or more of their Tier-1 applications virtualized; the industry-wide company average is around 25 percent.

The data also shows that only 34 percent of the companies have executive or line-of-business support for cloud as a strategy. IT can contribute significantly to reducing computing cost, but without management support, cloud efforts will be difficult, as the true benefit potential cannot be effectively communicated to business units and end users.

By 2018, insurers anticipate nearly one-fifth (19.7 percent) of their business will be generated through Internet-connected PCs, up from 12.7 percent in 2013. Another 10.9 percent is expected to come via mobile channels, up from a mere 1.5 percent in 2013.2 Application virtualization is key to helping businesses cater to the exponential growth that will come from the Internet and mobile devices, as it helps reduce time to market for new features or products across all customer segments.

2. Use New Business Models

For organizations in this industry, making quick, informed decisions and acting swiftly separates success from mediocrity. Being able to deploy infrastructure at the earliest point in time helps organizations achieve their goals in the shortest time possible. To achieve higher levels of cost performance, agility and scalability, compute virtualization must be nearly ubiquitous.

Monitoring the deployed infrastructure is vital; it enables an organization to run in an optimal state by:

  • Providing early warning of capacity and provisioning issues
  • Providing transparency and control over cost, services and quality
  • Benchmarking IT systems performance

VMware’s Accelerate Benchmark Database shows that 92 percent of the companies are at least 40 percent compute virtualized. Also, 50 percent of the companies do not have storage virtualized, and 56 percent are not network virtualized. Virtualizing storage and network infrastructure can reduce day-to-day operational tasks and costs associated with important—but non-strategic—processes.

The data also shows that 78 percent of the companies have either no ability to meter IT usage, or they do it manually. Also shown is that 86 percent of the companies intend to partially or fully automate IT service metering. With metering of IT service usage completely automated, there is predictive capability to understand when usage will trigger an elastic event within the environment, thereby aiding in achieving a flexible and scalable IT infrastructure.
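As an illustration of how automated metering can feed an elastic event, here is a minimal, hypothetical sketch; the metric values, threshold and scale-out action are assumptions, not a specific VMware product API.

```python
# Hypothetical sketch: automated usage metering triggering an elastic
# scale-out event when utilization crosses a threshold.
SCALE_OUT_THRESHOLD = 0.80  # assumed utilization threshold

def meter_usage(service):
    """Stand-in for automated metering of an IT service's utilization."""
    samples = {"policy-quoting": 0.86, "claims-portal": 0.52}
    return samples[service]

def scale_out(service):
    print(f"Elastic event: adding capacity for {service}")

for service in ("policy-quoting", "claims-portal"):
    utilization = meter_usage(service)
    if utilization >= SCALE_OUT_THRESHOLD:
        scale_out(service)  # only the overloaded service scales out
    else:
        print(f"{service}: {utilization:.0%} utilized, no action needed")
```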

The insurance sector is witnessing new business models from new entrants. German company Friendsurance has implemented the concept of online peer-to-peer insurance. Friendsurance uses social media to link friends together to buy collective non-life policies from established insurers. A small amount of cash is set aside to cover small claims, and if the pool is untouched at year-end, it is shared among the group.3 In order to be agile, the companies need to focus mainly on infrastructure virtualization and analytics.

 

1 PwC white paper, "Insurance 2020: The digital prize – Taking customer connection to a new level"
2 Capgemini, World Insurance Report 2014
3 EY, Global Insurance Digital Survey 2013


Gowrish Mallya brings around 8 years of experience in value engineering and benchmarking. He works closely with account teams and strategists across AMER and EMEA to address VMware customers' IT challenges and demonstrate our solution value. Gowrish is currently a Value Engineering consultant within the Field Sales Services team in India.