
Monthly Archives: June 2015

Understanding Software-Defined Networking for IT Leaders – Part 1

By Reg Lo

Software-defined networking (SDN) is revolutionizing the datacenter much as server virtualization did. It is important for IT leaders to understand the basic concepts of SDN and the value the technology delivers: security, agility through automation, and cost savings. This blog post explores some of the security benefits of SDN using a simple analogy.


[Image courtesy of the game DomiNations, Nexon M, Inc.]

My kids are playing DomiNations – a strategy game in which you lead your nation from the Stone Age to the Space Age. I recruited their help to illustrate how SDN improves security. In this analogy, the city is the datacenter, the walls are the firewall (the defense against attackers/hackers), and the people/workers are the workloads.

The traditional way of defending a city is to build walls around it. In the same manner, we create a perimeter defense around the datacenter using firewalls. However, imagine there is a farm outside the walls of the city. Workers need to leave the protection of the city walls to work on the farm, which leaves them vulnerable to attack. In the same way, as workloads or virtual machines are provisioned in public or hybrid clouds outside the datacenter firewalls, what is protecting those workloads from attack?


[Image courtesy of the game DomiNations, Nexon M, Inc.]

In an ideal world, let’s say my kids have magical powers in the game and they enchant the city walls so the walls can expand and contract to continuously protect the workers. When a worker goes to the farm, the walls automatically extend to include the worker on the farm. When the worker returns to the city, the walls return to normal. SDN is like magic for your firewalls. Instead of your firewall being defined by physical devices, a software-defined firewall can automatically expand into the public cloud (or the part of the hybrid cloud that is outside your datacenter) to continuously protect your workloads.
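To make the idea of walls that follow the worker a little more concrete, here is a minimal sketch in plain Python (not any particular vendor's API) of a security policy attached to a workload's identity rather than to a physical device. The workload names, locations and rules are made up for illustration; the point is simply that the same policy renders the same protection wherever the workload happens to run.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    location: str               # e.g. "datacenter" or "public-cloud" (illustrative values)

@dataclass
class SecurityPolicy:
    name: str
    allowed_inbound: list       # e.g. ["tcp/443"]

def firewall_rules_for(workload: Workload, policy: SecurityPolicy) -> list:
    # The policy is bound to the workload, not to a physical device,
    # so the same rules are rendered wherever the workload runs.
    return [f"ALLOW {svc} -> {workload.name} ({workload.location})"
            for svc in policy.allowed_inbound]

web = Workload("web-01", "datacenter")
web_policy = SecurityPolicy("web-tier", ["tcp/443"])
print(firewall_rules_for(web, web_policy))

web.location = "public-cloud"              # the workload "leaves the city walls"
print(firewall_rules_for(web, web_policy)) # the protection follows it unchanged
```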

This ability to easily and automatically configure your firewalls provides another benefit: micro-segmentation. As mentioned before, in a traditional city the walls provide a perimeter defense. Once an attacker breaches the wall, they have free rein to plunder the city. Traditional datacenters have a similar vulnerability: once a hacker gets through the firewall, they have free rein to expand their malicious activity from one server to the next.


[Image courtesy of the game DomiNations, Nexon M, Inc.]

Micro-segmentation of the network is like having city walls around each building. If attackers breach the outer perimeter, they can only destroy one building before having to restart the expensive endeavor of breaching the next line of defense. In a similar fashion, if a hacker penetrates one application environment, micro-segmentation prevents them from gaining access to another.
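As a rough illustration (again plain Python rather than a real firewall configuration), micro-segmentation boils down to a default-deny rule set between segments, so a compromised workload in one segment cannot reach the next. The segment names and services below are hypothetical.

```python
# Illustrative only: each application tier is its own walled segment,
# and anything not explicitly allowed between segments is denied.
SEGMENT_RULES = {
    # (source segment, destination segment): allowed services
    ("internet", "web-tier"): ["tcp/443"],
    ("web-tier", "app-tier"): ["tcp/8443"],
    ("app-tier", "db-tier"):  ["tcp/5432"],
}

def is_allowed(src: str, dst: str, service: str) -> bool:
    # Default deny: unknown segment pairs return an empty rule list.
    return service in SEGMENT_RULES.get((src, dst), [])

print(is_allowed("internet", "web-tier", "tcp/443"))  # True: normal front-door traffic
print(is_allowed("web-tier", "db-tier", "tcp/5432"))  # False: cannot bypass the app tier
print(is_allowed("app-tier", "web-tier", "tcp/22"))   # False: lateral movement is blocked
```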

Software-defined networking can improve information security. Every few months there is a widely publicized security breach that damages a company’s brand. CIOs and other IT leaders have lost their jobs because of these breaches. SDN is a key technology to protect your company and your career.

In Parts 2 and 3 of this series, “Understanding Software-Defined Networking for IT Leaders,” we’ll explore how SDN increases agility and drives cost savings.

Reg Lo is the Director of VMware Accelerate Advisory Services and is based in San Diego, CA.  You can connect with him on LinkedIn.

Solving the Shadow IT Problem: 4 Questions to Ask Yourself Now

By Sean Harris

Most IT organizations I speak to today admit they are concerned about the ever-increasing consumption of shadow IT services within the business; the ones that are not concerned are, I suspect, in denial. A common question is, “How do I compete with these services?” The answer I prefer is: “Build your own!”

What Would It Mean for My IT Organization to Truly Replace Shadow IT?

Truly displacing shadow IT requires building an organization, infrastructure and services portfolio that fulfills the needs of the business as well as—or better than—external organizations can, at a similar or lower cost. In short, build your own in-house private cloud and IT-as-a-Service (ITaaS) organization to run alongside your traditional IT organization and infrastructure.

Surely your own in-house IT organization should be able to provide services that are a better fit for your business than an external vendor can.

IT service providers often deliver a one-size-fits-all service to a variety of businesses in different verticals: commercial, non-commercial and consumer. In many cases, your business has to make compromises on security and governance that may not be in its best interests. In-house solutions, by contrast, can be built to comply with your own security and governance requirements, and the business has visibility into the solution, so the benefits are clear.

How Do I Build My Own Services to Compete With Shadow IT?

Answering the technical part of this question is easy: there are plenty of vendors offering technical solutions to help you build a private cloud. The challenge is creating the organizational structure, developing in-house skills, and implementing the processes required to run a true ITaaS organization. Most traditional IT organizations lack the key skills and organizational components to do this, and they are not typically structured for it. For example:

  • To capture current and future common service requirements and convert these into service definitions, product management-type skills and organizational infrastructure are needed.
  • To promote the adoption of these services by the business, product marketing and sales-type functions are required.

These functions are not typically present in traditional IT organizations. Building them alongside the existing IT organization has three main benefits:

  • It is less disruptive to the traditional organization.
  • It removes the pain of trying to drive long-term incremental change.
  • It delivers measurable results to the business more quickly.

What If External IT Services Really Are Better?

After analyzing the true needs of the business, you may discover that an external provider really can deliver a service that is a better fit for the business needs – perhaps even at a lower cost than the internal IT organization can offer. In this case, a “service broker” function within the IT organization can integrate that service into the ITaaS suite offered to the business far more seamlessly than a traditional IT organization could. Either way, the decision should be based on business facts rather than assumptions or feelings.
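The broker decision can be made refreshingly mechanical. The sketch below is illustrative Python with made-up weights and numbers: it compares an internal and an external offer of the same service on fit, cost and compliance, so the outcome is recorded as a fact-based choice rather than a gut feeling.

```python
def score_offer(fit: float, monthly_cost: float, compliant: bool,
                cost_budget: float) -> float:
    # Toy scoring model: fit and cost each carry half the weight,
    # and non-compliant offers are excluded outright.
    if not compliant:
        return 0.0
    cost_score = min(cost_budget / monthly_cost, 1.0)  # at or under budget scores full marks
    return 0.5 * fit + 0.5 * cost_score

offers = {
    "internal-private-cloud": score_offer(fit=0.9, monthly_cost=1200, compliant=True, cost_budget=1000),
    "external-provider":      score_offer(fit=0.7, monthly_cost=800,  compliant=True, cost_budget=1000),
}
best = max(offers, key=offers.get)
print(offers, "->", best)
```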

How Do We Get Started?

As part of VMware’s Advisory Services and Operations Transformation Services team, I work with customers every day to map out the “Why, What and How” of building their own ITaaS organization to compete with shadow IT services:

  • Why
    • Measurable business benefits of change
    • Business case for change
  • What
    • Technology change
    • Organizational change
    • People, skills and process change
  • How
    • Building a strategy and roadmap for the future
    • Implementing the organization, skills, people and process
    • Measuring success

In the end, customers will always choose the services that best meet their needs and cause them the least amount of pain, be it financial or operational. Working to become your business’ preferred service provider will likely take time and resources, but in the long run, it can mean the difference between a role as a strategic partner to the business or the eventual extinction of the IT department as an antiquated cost center.

Sean Harris is a Business Solutions Strategist in EMEA based out of the United Kingdom.

What Is DevOps, and Why Should I Care? — The IT Leadership Perspective

By Kai Holthaus

One of the newest buzzwords in IT organizations is “DevOps.” The principles of DevOps run contrary to how IT has traditionally managed software development and deployment. So why are more and more organizations looking to DevOps to help them deliver IT services to customers better, cheaper and faster?

But…what exactly is DevOps anyway? It is not a job, a tool or a market segment; it is best defined as a methodology, or an approach. It shares ideological elements with techniques like Six Sigma and Lean. More on this later.

Software Development and Deployment Today

In today’s world, there is a “wall” between the teams that develop software and the teams that deploy software. Developers work in the “Dev” environment, which is like a sandbox where they can code, try, test, and stand up servers and tear them down as the coding work requires. Teams working in the “Ops” environment deploy the software into production, where they also ensure the software stays operational. Often, the developed software is simply ‘thrown over the wall’ to the operations teams, with little cooperation during deployment.

Why Was It Set Up That Way?

This “wall” is an unintended consequence of the desire to allow developers to perform their tasks, and operational teams to do theirs. Developers are all about change; their job is to change the existing (and functional) code base to produce additional functionality. Operations teams, on the other hand, desire stability. Once the environment is stable, they would like to keep it that way, so that customers and users can do their work.

For that reason, developers usually do not even have access to the production environment. Their work—by nature—is considered too disruptive to the stability of the production environment.

The Problems with Today’s Development and Deployment

Because of the separation between developers and operations, deployment is often cumbersome and error-prone. While developers are usually asked to develop deployment techniques and scripts, these first have to be adapted to the production environment and then tested. And since the deployment teams don’t usually understand the new code (or its requirements) as well as the developers do, the risk of introducing errors rises. In the worst case, this leads to incidents, which are all too common.

Additionally, in today’s datacenters that are not yet software-defined, new infrastructure—such as compute, storage or network resources—is hard to set up and integrate into the environment. This further slows down deployments and raises the possibility of disruptions to the environment.

DevOps to the Rescue

At its core, DevOps is a new way of developing and maintaining software that stresses collaboration, integration and automation. It attempts to break down the “wall” between development and operations by removing the functional separation between the teams. It uses agile development and testing methodologies, such as Scrum, and relies on virtualization and automation to migrate entire environments, rather than just the codebase, between Dev and production.
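One way to picture “migrating the entire environment instead of just the codebase” is an environment blueprint that bundles the code version with everything it runs on. The sketch below is deliberately generic Python with made-up field names (no specific tooling implied): promoting to production means copying the whole tested blueprint rather than hand-porting deployment scripts.

```python
from copy import deepcopy

# A hypothetical "environment blueprint": the code version plus the full
# stack it was developed and tested against.
dev_blueprint = {
    "app_version":  "2.4.1",
    "runtime":      "python-3.9",
    "os_image":     "ubuntu-20.04",
    "dependencies": ["flask==2.0", "psycopg2==2.9"],
    "network":      {"inbound": ["tcp/443"]},
}

def promote(blueprint: dict, target_env: str) -> dict:
    # Promotion copies the whole tested stack; nothing is re-assembled
    # by hand in the target environment.
    promoted = deepcopy(blueprint)
    promoted["environment"] = target_env
    return promoted

prod_blueprint = promote(dev_blueprint, "production")
print(prod_blueprint["environment"], prod_blueprint["app_version"])
```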

The main goal of implementing a DevOps approach is to increase deployment frequency—up to “continuous deployment”—of small, incremental improvements to the functionality of software. These are essentially “dot releases,” and with software delivery evolving from manufactured media to online distribution, this makes complete sense.
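The shape of a continuous deployment pipeline for those dot releases can be sketched in a few lines. This is a toy outline in Python with placeholder build, test and deploy steps (a real pipeline would call your build system and test suites); the point is that every small change flows through the same automated gates.

```python
def build(commit: str) -> str:
    print(f"building {commit}")
    return f"artifact-{commit}"

def run_tests(artifact: str) -> bool:
    print(f"testing {artifact}")
    return True          # placeholder: a real pipeline runs unit and integration tests here

def deploy(artifact: str, environment: str) -> None:
    print(f"deploying {artifact} to {environment}")

def on_commit(commit: str) -> None:
    # Triggered for every small change: build, test, and (only if green) deploy.
    artifact = build(commit)
    if run_tests(artifact):
        deploy(artifact, "production")
    else:
        print(f"{commit} rejected; production is left untouched")

on_commit("a1b2c3d")     # a single "dot release" flowing through the same automated gates
```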


Why DevOps? Why Now?

With the increased availability and utilization of software to provision, manage and decommission resources—such as compute, storage and network in datacenters and IT environments—the DevOps approach is becoming more and more common. IT organizations are now able to create new resources, integrate them quickly into environments, and even move them between environments with the click of a button. This allows developers to develop their code in a particular technology stack, and then easily migrate the entire stack to the production environment, without disrupting the existing environment.
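That “click of a button” is ultimately an API call. The fragment below defines a tiny stand-in client so it runs on its own; the class, method names and resource specs are all hypothetical, not any real automation product’s API. It simply shows the idea that compute, storage and network resources are created and decommissioned in code rather than through tickets and cabling.

```python
# Hypothetical stand-in for a datacenter automation API; real products
# expose richer clients, but the shape of the interaction is the same idea.
class IllustrativeSDDCClient:
    def create(self, kind: str, **spec) -> dict:
        resource = {"kind": kind, **spec}
        print("provisioned:", resource)
        return resource

    def delete(self, resource: dict) -> None:
        print("decommissioned:", resource["kind"], resource.get("name"))

sddc = IllustrativeSDDCClient()

# Stand up a small stack for a new environment entirely in code...
vm      = sddc.create("vm",      name="app-01",      cpus=4, memory_gb=16)
disk    = sddc.create("storage", name="app-01-data", size_gb=200)
segment = sddc.create("network", name="app-segment", cidr="10.0.42.0/24")

# ...and tear it down just as easily when it is no longer needed.
for resource in (segment, disk, vm):
    sddc.delete(resource)
```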

Sounds Great, How Do I Start?

The three main aspects that need to be addressed to implement a DevOps approach for the development and operations of software come down to the familiar elements of project management:

  • People
  • Process
  • Technology

On the people side, teams have to be established that have accountability over the software across its entire lifecycle. The same teams that develop the software will also assure the quality of the software and deploy the software into the production environment.

From a process perspective, an agile development methodology, such as Scrum, must be implemented to increase the frequency at which deployable packages of software are being created. Reducing the amount of change at each deployment cycle will also increase the success rate of the deployments, and reduce the number of incidents being created.

On the technology side, DevOps relies heavily on the Software-defined Datacenter (SDDC), including high levels of automation for the provisioning, management and decommissioning of datacenter resources.

VMware Is Here to Help

VMware has been the leader in providing the software to enable the SDDC. VMware also has the knowledge and technology to enable you to use DevOps principles to improve your software-based service delivery. VMware vRealize Code Stream enables continuous deployment of your software. And if you like to be on the leading edge of technology, check out VMware Photon for a technology preview of software that allows you to deploy new apps into your environments in seconds.

Kai Holthaus is a delivery manager with VMware Operations Transformation Services and is based in Oregon.

4 Key Elements of IT Capability Transformation

By John Worthington

As with any operational transformation, an IT organization should start with a roadmap that lays out each step required to gain the capabilities needed to achieve its desired IT and business outcomes. A common roadblock to success is that organizations overlook one or more of the following key elements of IT capability transformation:


Technology

Technical capabilities describe what a technology does. The rate of technological change can drive frantic cycles of change in technical capabilities, but it is often critical to examine the other transformation elements in order to realize the value of those technical capabilities.


People

A people-oriented view of a capability focuses on an organization’s workforce, including indicators of the organization’s readiness to perform critical business activities, the likely results of those activities, and the benefit of investments in process improvement, technology and training.

Transitioning people requires understanding what roles, organizational structure and knowledge will be needed at each step along the transformation path.


Process

A process view sharpens the picture of an organizational capability. Formally calibrating process capability and maturity requires that processes be defined in terms of purpose/objectives, inputs/outputs and process interfaces. One advantage of a process view is that it ties together the other transformation elements, including people (roles) and technology (support for the process).

Transitioning processes is more about adaptation – assuming the organization has defined processes to adapt. While new working methods associated with cloud computing—such as agile development and continual deployment—are driving ‘adaptive’ process techniques, an organization’s process capability will remain a fundamental and primary driver of organizational maturity.
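To make “defined in terms of purpose/objectives, inputs/outputs and process interfaces” concrete, here is a minimal illustrative record in Python. The fields and the Incident Management example are generic placeholders, not a prescribed template from any framework.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessDefinition:
    name: str
    purpose: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    interfaces: list = field(default_factory=list)  # other processes it hands off to
    roles: list = field(default_factory=list)       # the people element
    tooling: list = field(default_factory=list)     # the technology element

incident_mgmt = ProcessDefinition(
    name="Incident Management",
    purpose="Restore normal service operation as quickly as possible",
    inputs=["monitoring alerts", "user-reported issues"],
    outputs=["restored service", "incident records"],
    interfaces=["Problem Management", "Change Management"],
    roles=["Service Desk Agent", "Incident Manager"],
    tooling=["ITSM ticketing system"],
)
print(incident_mgmt.name, "->", incident_mgmt.interfaces)
```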


Service

A service consciously abstracts the internal operations of a capability; its focus is on the overall value proposition. A service view of a capability is directly tied to value, since services—by definition, according to ITIL—are a “means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs and risks.”

As you assess your organization’s readiness for transformation, and create a roadmap specific to your organization, it’s essential that these transformation elements are understood and addressed.

John Worthington is a VMware transformation consultant and is based in New Jersey. Follow @jMarcusWorthy and @VMwareCloudOps on Twitter.