Tag Archives: DevOps

3 Analogies for Cloud/DevOps Transformation That Can Turn Your Resisters into Champions


By Pierre Moncassin and Peter Stokolosa 

Resistance to change can sideline any project. Customers who embark on the transformation journey towards VMware’s cloud platforms, increasingly as a stepping stone towards DevOps, inevitably confront this resistance, which manifests in many forms and behaviors.

Fostering a change in mindset is key to a cloud/DevOps project’s success. No matter how much technical expertise the stakeholders bring to the project, success remains elusive until they can be persuaded not only to adopt new tools, but also to adapt to new ways of working, thinking and participating in the advancement of the project.

We have found that introducing meaningful analogies to explain the character and necessity of change can unlock motivational and behavioral changes in project teams.

Resistance to change usually boils down to two main factors:

  • fear of loss – change involves departing from a known environment, often perceived as a comfort zone;
  • inability to develop a clear vision of the future state and the steps needed to arrive there, which engenders passive resistance and a lack of motivation.

Well-crafted analogies can help tackle both factors. Analogies are grounded in known, familiar environments. They are reassuring because they build a cognitive bridge that starts from the known while offering a path to the future.

Next, let us share three frequently used analogies that have proven to resonate well with our audiences when discussing cloud transformation.

Introducing a commercial electric power grid to replace local power generation (as an analogy for switching from physical IT to cloud).

The utility metaphor has been popular since the early days of the cloud – in fact it predates the cloud era, having been articulated by John McCarthy as early as 1961.

Early in the 20th century, the common approach to generating electricity was to own a private generator. Switching to the public utility model meant giving up ownership and control of the private power generator. It implied trusting a third party to provide electricity consistently and reliably, at a reasonable cost.

The shift meant a radical change of focus from production to consumption. Consumed resources became commoditized, pervasive and always available.

It is worth noting that electricity consumption is also associated with simple metering – the ability to monitor consumption, and therefore costs, in real time. This is a useful introduction to cloud costing models.

A retail shop versus a factory (as an analogy for the two teams in a cloud organization: one customer facing and one, infrastructure focused).

One of the tenets of the cloud organization, as recommended by VMware best practices, is to define two teams with complementary objectives.

The first team drives the communication and relationship with the business; we call this the Cloud Service Team. They work closely with business stakeholders and meet customer requirements with innovative solutions. They can be equated to the “retail shop” of the cloud organization – their main task is to provide compelling services (products) that are constantly adapted to customer demand.

The second team manages the overall infrastructure; we call this the Cloud Service Infrastructure team. They can be equated to the “factory” of the cloud organization. Their objectives include standardization, efficiency and economies of scale in order to deliver cloud services with the best quality/cost ratio.

As with every analogy, this one has its limitations. It understates the agility of the cloud infrastructure services. As this team progresses towards ever-higher levels of automation, its day-to-day activities come to resemble an engineering room (focused on design activities) more than a traditional production chain (repetitive tasks are automated away).

Cloud Org Model

From Farm to Fork. The modernization journey of agriculture (as an analogy for the evolution of IT roles from managing physical IT to operating a cloud).

(Note – this analogy will resonate best in countries with a strong rural tradition – think of France for example!).

Before the 1930s, the farm ecosystem was largely run by local family businesses, with small units and limited mechanization. Farm professions were based close to the place of production: farmer, miller, carter (and many others). The path from farm to consumers (farm to fork) was relatively short and traceable: consumers could generally assume that their food was produced locally. Because production per farm was limited, many families needed to farm for a livelihood.

Within a generation, farming methods changed as mechanization brought significant increases in productivity, but it also meant that change to the métier of farming was inevitable. To respond to increasing customer demand, farms standardized and consolidated to produce greater quantities while implementing increased controls and quality norms.

Increases in both automation and market demand led to sweeping changes in the farming “workplace”. Many traditional jobs and activities became less relevant or obsolete (e.g., laborers with horses and carts). New, specialized jobs developed, or became significantly more visible: traders, operations managers (for processing factories). In general, there was a shift from labor-intensive production to sales, marketing, distribution, quality control and standardized, automated production.

It’s worth noting that the path from production to consumers became considerably extended. Consumers have very little awareness of where their food is grown (unless specific labeling shows this sourcing information). Although the system required to deliver the product became more complex, the consumption aspects of the product were simplified.

Compare this to ‘traditional’ ways of running IT in technical silos. IT tends to be operated by silos of local expertise with numerous, labor-intensive tasks. As a corollary of silos (fragmentation of work), there is little standardization and the path from production to consumers tends to be short. IT tends to be “sourced locally,” so consumers may be familiar with the hardware and cabling as well as with the administrators who operate the platform. We have all heard stories of business users walking over to the IT administrators to resolve their problems (rather than raise a ticket with a remote service desk!).

As cloud automation is introduced, these tasks and roles will evolve along similar trends:

  • Standardization of processes and architectures
  • Automation leveraged throughout
  • Increased consumer expectations (measurable, formalized service levels, control over costs, agility)
  • The path from production to consumers is significantly extended. In most instances cloud consumers are not aware of the location of their physical IT. There is a separation of accountabilities so that business lines do not usually communicate with systems operators – they would liaise via the service desk (for routine operations) and via the Cloud Service team (for more complex requests).

The technical transformation leads to new or transformed IT roles:

  • Focus shifts from production (hardware, infrastructure) to consumption (cloud services). The new cloud organization requires increased effort on “marketing” of cloud services and “distribution” (teams focus on finding ways of making the services consumable e.g. publishing them on self-service catalog portal).
  • There is growing demand for automation specialists who can translate the technical knowledge into workflows and scripts.
  • New roles emerge: such as Service Blueprint Manager, a specialist with skills to leverage automation in tools such as VMware’s vRealize Automation.
  • Traditional Computer Operations roles evolve, requiring more coding skills.
  • The mission of IT teams changes from “maintenance” to value creation.
  • Consumption is facilitated and simplified.

All in all, analogies are a powerful tool to help overcome resistance to change. One caveat though: do not over-use them, as they risk becoming oversimplified, losing their pertinence and distracting from the main issue of how an organization must adapt to address inevitable change.


  • Start from the point of view that resistance to change is predictably human and normal. It is part of the change process cycle and it is not a problem. It can turn into one, however, if it is not dealt with correctly.
  • Leverage analogies in order to bring a concrete dimension to abstract concepts such as cloud services; they can help to advance projects, but adapt your references with sensitivity to the audience’s culture, maturity and environment.
  • Set a clear remit for using your analogies. Keep in mind that all analogies have intrinsic limitations. Although they are useful tools to walk across the cognitive bridge, they have a limited shelf-life when it comes to getting a given message across to a new audience – so their use should be focused.


Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in Taiwan.

Peter Stokolosa is an operations architect with the VMware Operations Transformation Services and is based in France.

3 Common Mistakes when Breaking Organizational Silos for Cloud and DevOps

By Pierre Moncassin

Every customer’s journey to Cloud, DevOps or other transformative initiatives is unique to an extent. Yet all those journeys encounter a similar set of challenges. With the exception of truly green-field projects, each transformation to Cloud/DevOps involves dealing with the weight of legacy – organizational and technical silos that hamper the momentum towards change.

This is why I often hear from customer teams: “We know that we need to break down those silos – but exactly how do you break them?”

Whilst I do not advocate a one-size-fits-all answer, I want to share some recommendations on how to go about breaking those silos – and some mistakes to avoid along the way.

From where do silos come?

As discussed in earlier blogs, silos usually come into existence for valid reasons – at the origin. For example, when infrastructure administration relies on manual and highly specialized skills, it appears to make sense to group the skills together into clusters of deep expertise. Unix gurus, for example, might cluster together, as might Microsoft Windows experts, SQL database specialists and so on. These are examples of silos built around infrastructure skills – experts in all those areas need to align their mode of operation to support cloud infrastructure services.

Other examples of commonly found silos include:

  • Application Development to Operations: DevOps emerged precisely as a way to break down one of the ‘last great silos’ of IT – the persistent gap between the Development teams and Operations teams.
  • Business to IT: When IT becomes so reliant on a specialist set of skills (think mainframe programming) significant inefficiencies arise in cross-training IT staff to business or vice-versa. In transitioning to Cloud/DevOps, this is another of the ‘great silo risks’ that the transformation will mitigate and ultimately break down completely as Business, Application Development and Operations function as an integrated team.

Common mistakes when attempting to break down silos.

a) Toolset-only approach.

A frequent temptation for project teams is to install software-based tools and assume (or rather, hope) that the silos will just vanish by themselves. In cloud transitions, teams might install automated provisioning but forget to work across the business/IT silos; the result is that adoption by the business generally ends up minimal. In DevOps transition attempts, the technology approach might consist of deploying tools meant for continuous delivery, for example Jenkins or Code Stream, while failing to bridge the gap fully with day-two operations management – for example, by lacking governance around incident-handling or idempotent configuration management. Without a clear path to resolution that cuts across the silos, issues go unresolved, and the impact on customer satisfaction is predictably less than optimal.
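The idempotency gap mentioned above is worth making concrete. The sketch below (Python, with an invented `ensure_package` step standing in for a real configuration-management resource) shows the defining property: a step that checks desired state before acting can be re-run safely, which is what tools such as Puppet or Chef provide and ad-hoc scripts often do not.

```python
def ensure_package(state: dict, name: str, version: str) -> bool:
    """Idempotent step: act only when observed state differs from desired.

    Returns True if a change was made, False if already converged.
    """
    if state.get(name) == version:
        return False  # already at desired state; re-running is a no-op
    state[name] = version  # converge (stands in for a real install/upgrade)
    return True

server = {}                                       # observed state of one server
first = ensure_package(server, "nginx", "1.24")   # change applied
second = ensure_package(server, "nginx", "1.24")  # no drift, no change
```

Because the second run reports no change, such steps can sit safely inside a pipeline that is re-executed on every release.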

b) Overlook the value of ‘traditional’ skills

During the transition to Cloud/DevOps, especially when considering a toolset-only approach, it can appear at first sight that many of the legacy skills have become irrelevant. But this is often a mistaken perception. Legacy skills are likely still relevant; they simply need to be applied differently.

For example, traditional operating system skills are almost always relevant for the cloud; however, they will be applied differently. Instead of manually configuring servers, administrators will develop blueprints to provision servers automatically. They will use their knowledge to define standardized operating system builds.
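As a rough illustration of that shift, the administrator’s operating system knowledge can be captured as a declarative blueprint that a provisioning engine consumes. The field names below are invented for the sketch and are not a vRealize Automation schema.

```python
# Hypothetical declarative blueprint: the administrator's OS knowledge
# captured as data instead of manual configuration steps.
BLUEPRINT = {
    "name": "std-linux-web",
    "os": "ubuntu-22.04",
    "cpu": 2,
    "memory_gb": 4,
    "packages": ["nginx", "openssh-server"],
}

def provision(blueprint: dict) -> dict:
    """Stand-in for an automated provisioning engine: every server built
    from the same blueprint comes out identically configured."""
    return {
        "hostname": f"{blueprint['name']}-01",
        "os": blueprint["os"],
        "installed": sorted(blueprint["packages"]),
    }

server = provision(BLUEPRINT)
```

The point is not the data format but the change of role: the expert edits the blueprint once, and the engine repeats the build consistently.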

Traditional skills become all the more critical when we look into soft skills. The ability to manage stakeholder relationships, communicate across teams, organizational and business specific knowledge – are all essential to running an effective Cloud/DevOps organization.

c) Focusing on the problem, not the solution

This is a well-known principle of change management – focusing on the problem will not solve it. Rather than present the teams with a problem, for example the existence of a silo, it is often far more effective to work on the solution – a cross-silo organization and processes.

Does it work? I can certainly relate the experience of seeing ‘light bulb’ moments with highly specialized teams. Once they see the value of a cross-silo solution, the response is far more often “we can do this” than a defense of the status quo of individual silos.

In sum, focus on the vision, the end-state and the value of the end-to-end solutions.

Five recommendations to help break down silos.

  1. Shift from a silo mindset to Systems Thinking. Conceptually, all the ‘common mistakes’ that I mentioned above can be traced back to the persistence of a silo mindset – whether focusing on traditional (versus leading-edge) skills, new toolsets (versus legacy ones), or isolated ‘problem’ areas. The better approach is Systems Thinking, which implies an understanding that the overall organization is more than the sum of its parts. It means looking for ways not just to improve the efficiency of individual elements (skillsets, tools, process steps) but to optimize the way these elements interact.
  2. Create vision. As mentioned earlier, creating the vision is a vital step to get the team’s buy-in and to overcome silos. This can entail an initial catalog of services and outline workflows to fulfill these services. Potentially, it may be worth setting up a pilot platform to showcase some examples.
  3. Build momentum. Building the vision is important but not enough. Once initial acceptance is reached, the transformation team will need to build momentum, for example by recruiting ‘champions’ in each of the former silos.
  4. Proceed in incremental steps, building up a track record of ‘small wins’ and gradually increasing the pace of change.
  5. Establish the permanent structure. Once the change is in motion, it will be necessary to define the long-term roles that run the Cloud/DevOps operations. These roles are detailed in ‘Organizing for the Cloud’: https://www.vmware.com/files/pdf/services/VMware-Organizing-for-the-Cloud-Whitepaper.pdf.


  • Breaking silos is a result rather than the end. Start by building the vision to engage teams and motivate them to break the silos themselves.
  • Do not rely on technology alone. Toolsets (e.g. vRealize Code Stream, vRealize Automation and other VMware cloud automation tooling) augment processes, but they do not by themselves overcome silos unless they are leveraged to sustain the vision and constantly build momentum.
  • Leverage existing skills. Many of the legacy, previously siloed skills can be adapted to the future cloud/DevOps organization.


Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in the UK.

DevOps: The Operations Side

By Ahmed Al-Buheissi

DevOps is about getting Development and Operations to work together and avoiding conflicts in how each operates to achieve its goals. The most commonly noted objective is shifting to Agile processes, where applications are released more often and with better quality. While development and operations are of equal importance to a DevOps methodology, this article focuses on the role of Operations in facilitating an efficient and successful DevOps implementation.

In a DevOps environment, the operations team participates in the following activities:

Automation Tools

Automation is a cornerstone of DevOps, as it facilitates continuous integration and delivery of applications into various environments (dev, test, prod, etc.). An example of such automation tools is VMware’s vRealize Code Stream, which allows the creation of release pipelines (e.g., from dev, to test, to production), with various tasks to retrieve application builds, deploy environments, run automated tests, and so on. These tools are typically implemented and maintained by the operations teams.
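A release pipeline of this kind can be sketched in a few lines; the stage and task names below are illustrative and are not the vRealize Code Stream API. The essential behavior is that a failing task halts promotion to later environments.

```python
# Ordered stages, each running tasks; a failure stops promotion.
# All names are invented for illustration.
PIPELINE = [
    ("dev",  ["fetch-build", "deploy", "unit-tests"]),
    ("test", ["deploy", "integration-tests"]),
    ("prod", ["deploy", "smoke-tests"]),
]

def run_pipeline(pipeline, run_task):
    """Promote through stages in order; stop at the first failing task."""
    completed = []
    for stage, tasks in pipeline:
        for task in tasks:
            if not run_task(stage, task):
                return completed, (stage, task)  # failure halts promotion
        completed.append(stage)
    return completed, None

# Simulate a failure in integration testing: prod is never reached.
ok, failure = run_pipeline(
    PIPELINE,
    run_task=lambda stage, task: not (stage == "test" and task == "integration-tests"),
)
```

In a real tool, `run_task` would invoke build, deployment or test systems; the gating logic is what the pipeline contributes.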


Infrastructure and application blueprints may consist of a number of items, such as VM templates, configuration management code, or workflows. Configuration code, e.g., Puppet manifests or Chef cookbooks, is used to configure deployed VMs and the applications running on them. Configuration workflows may also be developed using tools such as vRealize Orchestrator. Dev and operations teams share responsibility for developing the blueprints to ensure deployed environments are correct and ready for use in the various release stages.
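One way to picture the shared artifact described above is as a single data structure that both teams edit and can validate. The structure below is invented for illustration, not a product schema.

```python
# Illustrative only: a blueprint as one shared artifact bundling the
# items listed above, with a validation check either team can run.
blueprint = {
    "template": "ubuntu-22.04-base",
    "config_management": {"tool": "puppet", "manifest": "webserver.pp"},
    "workflows": ["post-provision-hardening", "register-monitoring"],
}

REQUIRED = {"template", "config_management", "workflows"}

def validate(bp: dict) -> list:
    """Return missing blueprint items; an empty list means complete."""
    return sorted(REQUIRED - bp.keys())

missing = validate(blueprint)                         # complete blueprint
partial = validate({"template": "ubuntu-22.04-base"}) # incomplete one
```

Keeping template, configuration code and workflows in one versioned artifact is what makes the shared responsibility practical.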

Patching and Upgrading

Historically, operations teams held responsibility for maintaining the various tools used by the development and release teams, such as build tools, source-code management tools and automated testing systems. However, the lines are blurring here as developers take on more coding responsibility for such management. This means Operations teams increasingly house developers capable of building this management automation.


This is one of the areas that is frequently overlooked, or at least rarely mentioned, in a DevOps environment. Monitoring applications through the various promotion environments is very important to ensure a fail-fast approach: potential issues are reported and investigated in the early stages (dev and test), before they become real problems.

The operations team also builds dashboards for developers and operations so the application and its environment can be monitored throughout the continuous-integration/continuous-delivery process. This provides developers with feedback on the application’s impact on the environment in which it runs, allows operations to become familiar with the same from an environment (VM/vApp) perspective, and gives the operations team confidence that the process is working and there will be no issues when the application is released into production.
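A fail-fast check of the kind such dashboards surface might look like the following sketch; the thresholds and metric names are invented for illustration. A build that breaches its limits in dev or test is flagged before it ever reaches production.

```python
# Illustrative thresholds an operations team might set per environment.
THRESHOLDS = {"cpu_pct": 80, "error_rate_pct": 1.0, "p95_latency_ms": 500}

def check_environment(metrics: dict) -> list:
    """Return the metrics breaching their thresholds; empty means healthy."""
    return sorted(m for m, limit in THRESHOLDS.items()
                  if metrics.get(m, 0) > limit)

# Caught in the test environment, before it becomes a production problem:
test_env = {"cpu_pct": 45, "error_rate_pct": 3.2, "p95_latency_ms": 610}
breaches = check_environment(test_env)
```

Wiring a check like this into each promotion stage is what turns monitoring from a production afterthought into part of the release process.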

It is worth mentioning that collaboration between development and operations should start very early, as developers need to embed operations considerations in their application code (such as adequate logging information), while the operations team need to ensure infrastructure availability for developers to start their work.


Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.

3 Capabilities Needed for DevOps that You Should Already Have in Your Cloud Organization

By Pierre Moncassin

A number of enterprise customers have established dedicated organizations to leverage VMware’s cloud technology. As these organizations reach increasing levels of cloud maturity, we are more and more often asked by our customers: “how is our organization going to be impacted by DevOps?“

Whilst there are many facets – and interpretations – to DevOps, I will highlight in this blog that many of the skills needed for DevOps are already inherent to a fully-functioning cloud organization. Broadly speaking, my view is that we are looking at evolution, not revolution.

First, let’s outline briefly what we understand by DevOps from a people/process/technology point of view:

  • People: DevOps originated as an approach, even a philosophy, that aims to break down organization silos, specifically the traditional gap between application developers and operations teams. This is why it is often said that DevOps is first of all about people and culture. Application developers are sometimes depicted as “agents of change” whilst operations teams are seen as “guardians of stability” – opposing objectives that can lead to well-documented inefficiencies.
  • Process: From a methodology point of view, DevOps integrates principles such as agile development. Agile provides the methodological underpinning for Continuous Delivery, an approach that relies on the frequent release of production-ready code. Whilst agile development was originally about applications, DevOps extends the principle to infrastructure (leading to the idea of “agile infrastructure”).
  • Technology: DevOps processes necessarily incorporate the use of development and automation technologies such as: source code control and management (e.g., Git); code review systems (e.g., Gerrit); configuration management (e.g., Puppet, Chef, Ansible, SaltStack); task execution and management (e.g., Jenkins); artifact and application release tooling (e.g., VMware vRealize Code Stream); and others. In order to manage those tools as well as the applications generated by them, DevOps also incorporates operations tooling such as provisioning and monitoring of the underlying infrastructure (e.g., vRealize Automation and vRealize Operations).

The features of a cloud organization adapted for VMware’s cloud technology are described in detail in the white paper “Organizing for the Cloud”.


DevOps Organizational Model

Here are, in my view, some key capabilities in the cloud organization as recommended by VMware:

1) The rise of developers’ reach.

As development departments mature beyond writing strictly application code, their reach spans broader knowledge bases. This includes writing code that performs end-to-end automation of application development, deployment and management: applications and infrastructure as code. Developers utilize the same skills traditionally relied on in application teams and apply them towards cloud services:

  • Provisioning, for example with VMware vRealize Automation
  • Automating network configuration with VMware NSX
  • Automating monitoring and performance management with VMware vRealize Operations

This shift in reach from Ops to Dev forms the basis of ‘infrastructure-as-code’ – a now relatively standard cornerstone of DevOps.
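Infrastructure-as-code can be reduced to a minimal, tool-agnostic sketch: desired state is declared as data kept in version control, and a reconciler computes the actions needed to make the actual environment match it. The resource names below are invented for illustration.

```python
# Desired state lives in version-controlled code; a reconciler diffs it
# against reality and produces an action plan (illustrative names).
desired = {"web-01": "running", "web-02": "running", "db-01": "running"}
actual  = {"web-01": "running", "db-01": "stopped"}

def reconcile(desired: dict, actual: dict) -> list:
    """Diff desired vs. actual state into an ordered action plan."""
    plan = []
    for name, state in sorted(desired.items()):
        if actual.get(name) != state:
            plan.append(("ensure", name, state))
    for name in sorted(set(actual) - set(desired)):
        plan.append(("remove", name))  # not declared -> decommission
    return plan

plan = reconcile(desired, actual)
```

Tools such as Puppet or vRealize Automation embody this declare-then-reconcile pattern at much larger scale.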

2) Ability to work across silos

One of the defining capabilities of a cloud team – and a key skill required of all team members – is the ability to break the boundaries between silos:

  • Technical silos: for example the customer-facing team (Tenant Operations, also known as IT Service Center) will define end-to-end cloud services across technical silos such as compute (servers), networks and storage. Service Owners and Service Architects will define the scope and remit of such services; Service Developers will put together the workflows and scripts to allow end users to provision those services automatically.
  • Functional silos – merging “Design” and “Run”. Whilst traditional IT organizations tend to separate teams of architects/designers from operations team, the cloud development teams bring those skills together. Service Developers for example will build workflows that include not only the deployment of infrastructure, but automate its monitoring and configuration management at runtime. Service Owners are involved both in the definition of services but also act as point of contact in resolving incidents impacting those services.  DevOps takes this trend to the next level by merging the “dev” and “ops” teams.

3) Increased alignment with the business

Whilst all IT organizations aim to align with the business, a model organization (as described in “Organizing for the Cloud”) aligns business lines with practical structures and roles. For example, this model defines dedicated roles such as:

  • Service Architects who translate business requirements into functional and technical architectures.

DevOps continues this trend towards business alignment: in a context where business is increasingly driven by revenue-generating applications, application development becomes integral to the lines of business.

DevOps Organization

In sum, a well-functioning cloud team will have established many of the positive traits needed for DevOps – a preference for rapid development over fire-fighting, for bridging silos across technologies and processes, and for close cooperation across business lines.

Going one step further, DevOps pushes these traits to the extreme – preferring continuous improvement in the development and automation of applications and infrastructure. For example, a DevOps team might leverage VMware’s Cloud Native Apps capabilities to build applications optimized to run on the cloud from “day one” (for more details see https://www.vmware.com/cloudnative/technologies).

Take-away – practical ways to prepare your cloud team for DevOps:

  • Encourage job rotation of key team members across technical skills and functions.
  • Continuously expand your team’s knowledge and practice of cloud automation tools. This can include advanced training on tools such as vRealize Automation and vRealize Operations, as well as generic skills in analysis and design.
  • Ensure that key tenant operations roles (i.e. customer facing roles) are in place and give them increasing exposure to application development and business lines.
  • Develop an awareness of the Agile approach, for example through formal training and/or by nominating ‘champions’ in your team.
  • Build up a skill base in continuous delivery, for example by leveraging training or a pilot with vRealize Code Stream.

Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in the UK.

Top 3 Tips for Optimizing DevOps

More collaboration is a noble goal. Make the reality match the promise.

The concept of DevOps is so appealing. Who wouldn’t agree that better communication between development and operations teams will expedite release cycles, improve software quality, and make the business more agile? Just one question: why is DevOps still a “concept” at most companies rather than an operational reality? The short answer is that DevOps requires new ways of working, and that can create cultural upheaval.

Download 3 Top Tips for Optimizing DevOps from our Consultant Corner for guidance around addressing the people and process issues of DevOps in a VMware environment—so you can reap the business benefits sooner.

VMworld US – Day 2

Monday, August 31

From an Operations Transformation Services perspective, the first full day of VMworld was a cracker! (I’m British – that means very good!)

By Andy Troup

Our presenters had a number of insights to share (remember, with your VMworld conference pass you have access to recordings of any sessions you might have missed within 24 hours, found either on the VMworld mobile app or on vmworld.com). Dave Crane, one of our Operations Transformation Services solution architects, offered this advice in the Advanced Automation Use session this morning:

“If you only take away one key point from this session, it should be the importance of a reference framework.”

The reference framework is oriented around a specific capability (in the example presented in this session, the automated provisioning process). The reference framework document describes all of the steps in the capability, ensures business and IT alignment, and provides the baseline for your governance activities (leave a comment on this post if you have a specific question for Dave regarding this topic).

One of our customer presenters took us through their multi-year transformation journey story (people and process alignment featuring prominently, again!), and the critical role that the vRealize Operations tool plays in terms of visibility and management along that journey.

Tomorrow will be another really interesting day, with a variety of transformation topics. Of particular note, I’d like to call attention to the Organizational Change Group Discussion at 12:30 PM (OPT 4743) where a number of our solution architects with extensive on-site customer experience will share real-world organizational change insights, best practices and pitfalls in an interactive format.

Here’s the schedule for Tuesday, September 1:

  • 11:30 AM – OPT 4953:
    Operationalizing VMware NSX: Practical Strategies and Lessons from Real-World Implementations
  • 12:30 PM – OPT 4743:
    Organizational Change Group Discussion
  • 1:00 PM – OPT 4868:
    DevOps Transformation: Culture, Technology or Both?
  • 2:30 PM – OPT 4992:
    vRealize CodeStream: Is DevOps about Tools or Transformation?
  • 4:00 PM – OPT 5222:
    Keys to Successfully Marketing and Managing your vRealize Automation Service Catalog
  • 5:30 PM – OPT 5075:
    Six Steps to Establish Your IT Business Management Office (ITBMO) with vRealize Business

Visit the VMworld mobile app to locate these sessions, and be sure to follow us on Twitter to find more information and resources: @VMwareCloudOps.

See you at Moscone.

Andy Troup is a Cloud Operations Architect with over 25 years of IT experience. He specializes in Cloud Operations and Technology Consulting Service Development. Andy is also a vCAP DCA and VCP. Andy possesses a proven background in design, deployment and management of enterprise IT projects. Previously, Andy co-delivered the world’s first and subsequent vCloud Operational Assessments (Colt Telecomm & Norwegian Government Agency) to enable the early adoption of VMware’s vCloud implementation.


VMware vRealize Code Stream: Is DevOps Tools or Transformation?

By Ahmed Al-Buheissi

In a previous blog, I wrote about the need for DevOps to rapidly release software and argued that DevOps is first and foremost about transforming operations.

The Operations and Development teams can be roadblocks to each other’s work schedules. The Development team frequently requires infrastructure and platforms to test its code, while Operations does not always have the resources readily available to satisfy developers’ requirements. This means schedules slip and releases become few and far between, resulting in dissatisfied developers.

Since that blog, VMware released VMware vRealize Code Stream for facilitating DevOps, which I spent some time recently installing, configuring, and exploring—and I wanted to share my experience with you. Amazingly, I found that VMware’s response to DevOps is a tool that can be described in three words: automate, automate, and automate. Yes, vRealize Code Stream automates the entire software release process:

  • Infrastructure provisioning automation
  • Software build automation, and
  • Testing cycles automation

And these automation steps can be executed across all environments: Development, Testing, Staging, and Production, with the premise that automation will ensure consistency across all environments and prevent issues due to human errors. The figure below shows examples of tools used in software development, and how vRealize Code Stream integrates and automates these tools.

vRealize Code Stream SDLC


So is it all about the tools?
There are several tools available that will assist in implementing DevOps, including VMware vRealize Code Stream. These tools can automate and expedite the software release process, and apply the organization’s policies. But is that all there is to DevOps? As this is a new paradigm and a new way of operating, the organization will also need to transform. After all, DevOps disrupts the Software Development Lifecycle (SDLC) as we know it:

  • The process is not always linear (or circular); for example, the Design phase does not always precede Implementation; sometimes the design is written while the code is being developed.
  • Testing cycles (Unit Testing, System Testing, and User Acceptance Testing) are executed concurrently, and will test only random pieces of code.

Organizations implementing DevOps must also apply operational methodologies, which will clearly define the transformation processes and document the evolving roles within the IT organization.

Why transform operations?
Even though the deployment and release processes are automated, we still need to transform operations from both people and process perspectives, including:

  • The need to define the relationship between Operations and Development (as well as other teams). Development will depend on Operations only when creating the automation workflows, and subsequently they can release the software themselves using these scripts. Operations will focus more on automation project work, and much less on reactive day-to-day operations.
  • While the focus is on automating the release process, there is little emphasis on post-release monitoring. Developers need to have access to monitoring tools to ensure deployed software is performing adequately.

To conclude…consider these operations-related steps prior to adopting tools to implement DevOps:

  1. Determine the scope of your DevOps implementation; which teams, which applications, and which environments will be impacted.
  2. Document the process that will govern the interactions between various DevOps tasks and roles.
  3. Define the roles, for example Development, Operations, and Quality Assurance, as well as their responsibilities.

The figure below depicts the process of implementing DevOps, and steps required to roll out such environment.

Figure: DevOps implementation process


Also, the short video below provides high-level information about VMware vRealize Code Stream, along with the technical white paper “Releasing High Quality Applications More Quickly with vRealize Code Stream.”

Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.


Optimizing IT Services for DevOps, Agility, and Cloud Capabilities

By Reginald Lo

A recent Gartner survey[1] revealed that 85 percent of IT departments are pressured by their customers to deploy new or changed IT systems or services faster. As the speed of business continues to increase, IT is having a harder time keeping up. As a result, 41 percent of business leaders attribute faster service delivery time or time to market as the reason they use outside IT service providers[2]. IT must transform itself into an agile organization in order to become a strategic partner to the business.

In the last several years, there have been some key technology and IT management innovations to improve time to market. DevOps, Agile, and cloud computing are all attempts at increasing the speed of IT. However, IT needs to systematically design its services to exploit these innovations.

In this post I’ll explore how to change the way you design IT services so they are optimized for DevOps, Agile, cloud computing, and the service broker IT business model. I’ll also provide suggestions on how to start transforming IT into a nimble organization.

Service Design and DevOps

DevOps describes an approach for re-thinking the collaboration between App Dev, QA, and IT operations. Its purpose is to remove barriers between these teams and align them to the common goal of reducing time to market while maintaining service quality. This alignment sounds easy but is actually difficult because App Dev and IT Ops have traditionally had different objectives: the former strove for innovation and speed, while the latter preferred stability.

DevOps focuses on more frequent deployments, lower failure rate, faster mean time to restore and automated processes. In order to design an IT service for DevOps, you not only need to design the system that underpins the service, but also design how the release process will be automated. Nirvana is the ability to perform continuous deployments.

One mechanism you can leverage is the service request catalog and its back-end automation and orchestration. Since App Dev is familiar with provisioning IT services from the catalog, you can also use the catalog as the interface for App Dev to request automated deployments of specific systems. The same automated orchestration capability used to provision IT services can be used to automate the deployment process by taking the binary outputs from App Dev out of the source control system and deploying them into specified environments, such as test, prod, and so forth.
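To make the idea concrete, here is a minimal, hypothetical sketch of catalog-driven deployment: a catalog request names a build artifact and a target environment, and the same orchestration layer that provisions infrastructure pushes the binaries out. All artifact names, paths, and environments below are invented for illustration.

```python
# Hypothetical sketch: the catalog validates a deployment request, then hands
# the artifact (a binary output tracked in source control) to the orchestrator.

ARTIFACT_REPO = {
    # artifact id -> location of the binary output from the build system
    "webapp-1.4.2": "builds/webapp/1.4.2.tar.gz",
}

ALLOWED_ENVIRONMENTS = {"test", "staging", "prod"}

def handle_catalog_request(artifact, environment):
    """Validate a catalog deployment request and deploy the artifact."""
    if environment not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    if artifact not in ARTIFACT_REPO:
        raise ValueError(f"unknown artifact: {artifact}")
    # A real orchestrator would call the automation engine here; this sketch
    # just reports what would happen.
    return f"deployed {ARTIFACT_REPO[artifact]} to {environment}"
```

The design point is that App Dev depends on Operations only while these workflows are being created; afterwards, releases go through the catalog without a manual hand-off.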

When designing the support process for your new or changed IT service in a DevOps environment, consider a model that integrates both IT Ops and App Dev. In the past, IT Ops shielded App Dev from 24×7 support. However, to reduce failure rates and improve mean time to restore, increasing App Dev’s role in support creates accountability for App Dev to reduce the number of bugs and develop ways to restore service faster.

Service Design and Agile Software Development

The Agile methodology is characterized by multiple sprints leading to a release. In fact, with DevOps, a single sprint may result in a release. App Dev teams maintain a backlog of stories—a way of describing requirements. It is important to note that the Agile approach is particularly useful when the requirements are not fully known at the start of the project. The implication for service design is that you cannot expect the waterfall approach of gathering all the requirements upfront and then building the IT infrastructure to satisfy supposedly stable requirements. The very nature of the Agile approach means that requirements will be discovered or evolve over time. You need to plan and design with the assumption that requirements will change.

This has interesting implications on how infrastructure or supporting services are designed. IT has generally focused on the initial provisioning process when establishing its private cloud services. However, if we know requirements change, we should also invest effort into “day 2” activities, for example, making it easy to expand compute, memory, storage, or change network and security controls sometime after the initial provisioning. Hence, your catalog should not just be an entry point for provisioning requests; it also needs to be the entry point for change requests, continuous deployment requests (as I discussed with regards to DevOps), and retirement requests.
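As a sketch of this broader catalog, the entry point can dispatch several request types rather than provisioning alone. The request types and handlers below are hypothetical, chosen to mirror the lifecycle described above:

```python
# Illustrative only: a catalog that accepts provisioning, "day 2" change,
# continuous deployment, and retirement requests for a service.

def handle_request(request_type, service):
    """Dispatch a catalog request for a named service."""
    handlers = {
        "provision": f"provisioning {service}",
        "change": f"resizing or reconfiguring {service}",     # "day 2" activity
        "deploy": f"deploying a new release onto {service}",  # continuous deployment
        "retire": f"decommissioning {service}",
    }
    if request_type not in handlers:
        raise ValueError(f"unsupported request type: {request_type}")
    return handlers[request_type]
```

Treating all four request types as first-class catalog entries is what lets the service absorb requirement changes after initial provisioning.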

Service Design and Cloud Computing

Cloud computing can be a model for delivering business-facing IT services as software as a service (SaaS), or a way to deliver infrastructure and supporting services as infrastructure as a service (IaaS) and platform as a service (PaaS). Cloud-based services have certain characteristics that distinguish them from traditional IT services:

  • Self-service: The customer can easily request the service through a portal.
  • On-demand: The service is delivered instantly when it is requested.
  • Elastic capacity: Dynamically provision more resources (and release them when they are no longer required) based on fluctuations in demand.
  • Highly available and resilient: The service is architected so that it remains available to the customer even when underlying infrastructure components suffer an outage.
  • Pay as you go: The cost of the service is linked to the amount of the service that is consumed. This allows the business to make return on investment (ROI) decisions on how much of the service to consume. Contrast this with a cost allocation model for IT, where the business has no incentive to self-manage its demand for IT.

When designing an IT service, whether it is an IaaS, PaaS, or a business-facing service that sits on top of IaaS or PaaS, you should address these cloud characteristics.
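Elastic capacity, for example, boils down to a control loop that adds resources under load and releases them when demand drops. The thresholds and step sizes in this Python sketch are illustrative assumptions, not recommendations:

```python
# Minimal sketch of the "elastic capacity" characteristic: adjust the size of
# a resource pool based on measured utilization. Threshold values are assumed.

SCALE_UP_THRESHOLD = 0.80    # add capacity above 80% utilization
SCALE_DOWN_THRESHOLD = 0.30  # release capacity below 30% utilization

def adjust_capacity(current_instances, utilization):
    """Return the new instance count for the observed utilization (0.0-1.0)."""
    if utilization > SCALE_UP_THRESHOLD:
        return current_instances + 1          # provision more resources
    if utilization < SCALE_DOWN_THRESHOLD and current_instances > 1:
        return current_instances - 1          # release resources no longer needed
    return current_instances                  # demand is within bounds
```

Designing the service so capacity decisions can be made by a loop like this, rather than by a change ticket, is what separates a cloud service from a traditional one.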

Another aspect of service design is “Where is the service going to be hosted?” —in the private cloud, the hybrid cloud, or the public cloud?  The answer may not be straightforward. You may want to pilot the service in the public cloud and then when it grows, bring it back into the private cloud in order to manage costs. Or you may have a service hosted in the private cloud but have the ability to burst into the hybrid cloud to handle peaks in demand. These design decisions impact how App Dev might build the application. For example, if an application starts in the public cloud but may be migrated into the private cloud in the future, App Dev cannot use the public cloud provider’s proprietary technologies, such as an AWS NoSQL database, that isn’t available internally.

Service Design and Becoming a Service Broker

If your internal customers or lines of business are using shadow IT (external IT service providers) to meet their time-to-market requirements, your IT department should embrace these vendors and leverage the value they provide instead of taking an adversarial position against external cloud services. IT must become a service broker, helping your IT service consumers select the most appropriate platform for the service they require, whether it is private, hybrid, or public.

A service broker is more than just the ability to show VMware vCloud Air, AWS, or Azure services in your catalog. In fact, showing all the offerings and options from these vendors could become confusing to your customer. Instead, the catalog should ask questions about the requirements, such as:

  • Is the environment for dev, test, or prod?
  • Will the environment store any confidential information, such as personally identifiable information (PII) or data regulated under HIPAA?
  • Does the environment need to adhere to certain levels of compliance such as SOX, PCI?
  • What service levels do you need?

Then, based on the answers, the catalog can automatically provision the environment into the right cloud—private, hybrid, or public—through the automatic enforcement of policies, as shown below.
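A placement policy of this kind might look like the following sketch, where the catalog answers map to a target cloud. The rules here are invented for illustration; real placement policies are organization-specific.

```python
# Hypothetical service-broker placement policy: map the catalog's questions
# (environment type, data sensitivity, compliance needs) to a target cloud.

def choose_cloud(environment, has_confidential_data, needs_compliance):
    """Pick private, hybrid, or public cloud from the catalog answers."""
    if has_confidential_data or needs_compliance:
        return "private"    # keep regulated or sensitive workloads in-house
    if environment == "prod":
        return "hybrid"     # production can burst to handle peaks in demand
    return "public"         # dev/test can use the cheapest elastic option
```

Because the policy, not the requester, selects the provider, the consumer experiences “one cloud” regardless of where the environment actually lands.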

Figure 1: Service broker—providing the service regardless of where it resides


Becoming a service broker raises some interesting questions:

  • Does supplier management need to be matured in order to manage the cloud providers better and ensure they are meeting their service-level obligations?
  • How does IT provide a seamless user experience regardless of the underlying cloud provider?  How do you make the user perceive the private, hybrid, and public cloud as “one cloud”?
  • What is the support model—for example, are there hand-off points between IT and the cloud vendor?  How do you get visibility into the full incident lifecycle as it moves from IT to the cloud vendor?

Where Do You Start?

I’ve introduced how service design needs to change in order to take advantage of DevOps, Agile, cloud, and the service broker model. The next question to answer is: “How do I transform IT so that our services are designed differently?”  Here are some suggestions:

  • Build momentum and support—First, you need to educate stakeholders on what the vision of success will look like, the problems that this “new IT” will solve, and the value that this “new IT” will deliver. This article is a good starting point but you will need to give presentations, conduct workshops, and continue to provide information to stakeholders on where the IT industry is going.
  • Establish new roles—As part of the transformation, you will need to establish new roles such as service architect, service developer, automation engineer, and so forth. And it’s not sufficient just to define their responsibilities. You also need to give the people in these new roles the training and enablement to be successful.
  • Pilot the new service design model—It may be easier to start with industry-recognized services to demonstrate how the new operating model will work end to end, such as establishing IaaS or PaaS as an exemplar of the new way of delivering services.
  • Think “service lifecycle”—Traditional IT is project-based. Infrastructure is built in response to specific application projects. In a service lifecycle approach, the infrastructure services are designed and built outside the context of a specific application project. Once the infrastructure service is available, application projects then request the infrastructure service as needed. This challenges the way you fund the infrastructure, as the initial creation of the infrastructure service is not tied to the business justification of the application project.
    ITIL presents a service lifecycle, but it does not go into depth regarding the specific activities in each stage of the lifecycle (instead, it focuses on the service management processes in each stage). Your organization will need to develop a methodology that defines the specific activities within the service lifecycle.

Next, you will need to tie together the new roles and the new activities from the service lifecycle. Again, a high-level example is provided below.

Figure 2: Service lifecycle example



I’ve described how service design needs to change to take advantage of the innovations brought about through DevOps, Agile, and cloud, along with tips on how to become a service broker. And, I’ve given pointers on where to start your transformation journey. Ultimately, this transformation will help your IT organization deliver at the speed of business, enabling it to exploit revenue opportunities earlier and realize cost savings sooner.

Reginald Lo is Director of Service Management Transformation with VMware Accelerate Advisory Services and is based in California.

[1] Gartner, Inc., “2014 Service Transition Survey”

[2] IDG Research Services, “Dual Perspectives on ITaaS: The World According to IT and Business”

Incorporating DevOps, Agile, and Cloud Capabilities into Service Design

By Reg Lo

Shadow IT is becoming more prevalent because the business demands faster time-to-market and the latest innovations—and business stakeholders believe their internal IT department is behind the times or hard to deal with (or both!). The new IT requires new ways of designing and quickly deploying services that will meet and exceed customer requirements.

Check out my recent webcast to learn how to design IT services that:
• Take advantage of a DevOps approach
• Embody an Agile methodology to improve time-to-market
• Leverage cloud capabilities, such as elastic capacity and resiliency
• Remove the desire of the business to use shadow IT

BrightTalk webinar

Reg Lo is the Director of the Service Management practice for VMware Accelerate Advisory Services and is based in California.



Making IT Go Faster – Forrester Research Sheds Light on How

By Kurt Milne

Today’s IT managers face increasing pressure to be more responsive and move faster. However, most IT organizations have been built to promote control and safety. People, process, and tools have traditionally been deployed to strictly limit change in order to optimize service quality and efficiency. In fact, many of the most successful IT organizations have built their reputations by deploying elements of ITIL or other control frameworks to ensure critical system uptime.

The latest Forrester research lays out a path forward for IT organizations that want to increase agility without losing control

It is easy to say, “Let’s use the cloud to move faster and be more responsive to the business.” But how do those with an investment in ITIL, or who have thoughtfully developed process control methodologies, adapt to new demands for speed, demands that force IT to do things it may not be comfortable with? A new Forrester study based on interviews with 265 IT professionals in North America and Europe sheds some light on the best path forward.

Forrester found that:

  • IT organizations are quickly moving to on-demand, dynamic IT infrastructure
  • Users demand faster provisioning and want IT to be easy to consume
  • Those companies that have already deployed more dynamic change models are moving away from a centralized CMDB strategy
  • Developers are the primary consumers of ready-to-use application middleware stacks
  • IT can support rapid change without sacrificing configuration, compliance, and governance controls

If you have invested in IT process maturity and are looking to improve IT agility and deploy more automation without sacrificing control, read the full Forrester report.
Follow @kurtmilne on Twitter.