
Monthly Archives: June 2013

The Cloud Service Initiation Process – In Four Acts

By Andy Troup and David Crane

Business users’ demands on IT are fairly constant – as is the tension between the two. The business wants to do things quickly and cost effectively so that they have a competitive advantage. IT, meanwhile, wants to maintain control and proceed carefully to ensure security, performance, and service quality – which are table stakes for running IT.

The goal of IT as a cloud provider is much the same: to meet the business requirements in as efficient a manner as possible while still maintaining the control it needs. This way, the business user (now known as the cloud customer) gains the trust of the cloud provider and has no motivation to bypass IT and take on 3rd party “shadow IT” cloud implementations (i.e. removing its “IT overhead”, but leaving no control or policies in place).

So far, so good. But how do you get there? This post details a process for service initiation that, based on our experience with a variety of customers, allows IT to offer the business the services it needs without it having to resort to playing in the shadows.

Think of the process as a (hopefully, not too dramatic!) story in four acts.

Act 1- Outlining the Service

First, of course, you need to define the services you will deliver.

Just to be clear, by ‘service’ we’re referring to the creation of something that will ultimately be offered from within a portal and that end users will then be able to deploy. So when a business user has a requirement for a new service, it’s the responsibility of the customer relationship manager to have the conversation with them to understand exactly why (from a business point of view) the service is required, and what the service will look like when it’s been implemented.

The customer relationship manager (as we’re defining the role) will document this information in a service proposal, detailing things such as business benefits, financial benefits & costs, risks, organizational considerations, service overview, high level service requirements and high level functional & non-functional requirements.

Act 2 – Enter the Service Portfolio Manager

On completion, this proposal & business justification is handed off to the service portfolio manager for review. This is essential, because the service portfolio manager is in the unique position of having a full overview of the current portfolio of cloud services – including services that are in the pipeline (i.e. either being built, or planned to be built), services that are currently live and are being offered from the cloud service catalog for customers to deploy, and services that are no longer required and that have been retired.

Additionally, if the organization is of a reasonable size, it is very possible that a number of customer relationship managers, each with their own set of cloud customers, are documenting and passing on their own ideas for new services to the same service portfolio manager. The service portfolio manager, then, is uniquely positioned to view all the new services being requested across all the cloud customers, and thus to notice those with similar requirements that could be combined into a single service offering.

So now the service portfolio manager has:

  • The service proposal & business justification
  • Knowledge of the current portfolio of cloud services
  • Knowledge of other requested cloud services

That means he or she is now in an excellent position to make a decision on the service proposal.

It’s quite likely that service portfolio managers won’t make that decision in isolation (although they may). They may seek advice from different individuals depending on the service proposal, its requirements and any potential impact on the cloud implementation (it may have huge capacity requirements for example). There could also be executive sponsors to consult, business unit managers, the tenant operations leader, a project stakeholder board, etc.

Act 3 – The Assessment and its Aftermath

Now, though, comes the assessment. For that, we’ve established that the service portfolio manager needs to look at three considerations:

  1. Does the business justification stack up, and does the service portfolio manager think the benefits detailed will be realized?
  2. Does the service portfolio manager think the service requirements can be fulfilled by the cloud implementation they are managing the portfolio for?
  3. What is the demand for the service, both today and in the future, and does this demand warrant providing the service?
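As a rough illustration, the three-part assessment could be modeled as a simple decision function. The field names, demand threshold, and rejection messages below are hypothetical, not part of any prescribed VMware process:

```python
from dataclasses import dataclass

@dataclass
class ServiceProposal:
    """Hypothetical stand-in for the service proposal & business justification."""
    name: str
    benefits_credible: bool          # 1. does the business justification stack up?
    requirements_fulfillable: bool   # 2. can the cloud implementation fulfill it?
    projected_demand: int            # 3. expected consumers, today and in future

def assess(proposal: ServiceProposal, min_demand: int = 10) -> str:
    """Return 'approve', or a rejection reason the customer relationship
    manager can feed back to the cloud customer."""
    if not proposal.benefits_credible:
        return "reject: business justification does not stack up"
    if not proposal.requirements_fulfillable:
        return "reject: requirements cannot be fulfilled by the cloud implementation"
    if proposal.projected_demand < min_demand:
        return "reject: insufficient demand to warrant the service"
    return "approve"

print(assess(ServiceProposal("dev-sandbox", True, True, 40)))  # approve
```

Note that the rejection reason matters as much as the verdict: as described below, it drives whether the customer drops the idea or reworks the proposal.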

When the decision is made, if the service proposal is rejected, then the customer relationship manager will need to inform the customer and work with them to decide whether to move on, or review and update the proposal.

What they decide will likely depend on the feedback provided by the service portfolio manager as to why the proposal was rejected. For example, if the business case doesn’t stack up, then the service may be dropped. But if there are specific requirements that couldn’t be fulfilled, then a decision may be made to adjust the requirements so they can be satisfied.

If the service portfolio manager approves the service, the next decision to make is whether the service is required now, or whether this is a service for the future. Future services, like those that depend on future cloud capabilities, are placed into the service portfolio pipeline until the service needs to be created. This service portfolio pipeline now becomes your road map of cloud services and will develop in maturity, providing you with a good view of how the cloud services will change over time.
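The portfolio states described here (proposed, in the pipeline, live in the catalog, retired) can be sketched as a minimal lifecycle model. The state names and transition rules are illustrative assumptions, not a prescribed taxonomy:

```python
from enum import Enum

class ServiceState(Enum):
    PROPOSED = "proposed"   # service proposal under review
    PIPELINE = "pipeline"   # approved, but planned or still being built
    LIVE = "live"           # offered from the cloud service catalog
    RETIRED = "retired"     # no longer required

# Allowed transitions: the pipeline is the roadmap between approval
# and the catalog; any state can end in retirement.
TRANSITIONS = {
    ServiceState.PROPOSED: {ServiceState.PIPELINE, ServiceState.RETIRED},
    ServiceState.PIPELINE: {ServiceState.LIVE, ServiceState.RETIRED},
    ServiceState.LIVE: {ServiceState.RETIRED},
    ServiceState.RETIRED: set(),
}

def advance(current: ServiceState, target: ServiceState) -> ServiceState:
    """Move a service to a new lifecycle state, rejecting invalid jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move {current.value} -> {target.value}")
    return target
```

Tracking services this way is what turns the pipeline into a roadmap: at any moment you can list what is coming, what is live, and what has been retired.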

Act 4 – Assigning the Service Owner

The final act in the cloud service initiation process is to assign a service owner for services that are approved and required now. The tenant operations leader assigns the service owner to the service, and from that point forward the service owner is responsible for the overall life cycle of the service – from definition to creation, release, maintenance, and retirement, and all points in between.

To recap, here are the four steps to service initiation:

  • Outline the service
  • Hand off to the Service Portfolio Manager
  • Do the assessment
  • (If it’s a go), assign the service owner

Stay tuned for the sequel, where Khalid Hakim will start discussing how best to approach defining your cloud services.

For more information, see the related webcast by David Crane and Kurt Milne, Service Initiation: Understanding the People and Process Behind the Portal.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Note: This blog uses the roles that are part of the Tenant Operations organization as described in the VMware white paper, Organizing for the Cloud.

VMware CloudOps Is Heading to Silicon Valley DevOps Days


As you may have heard, we’ll be on-site in Santa Clara for this year’s DevOps Days, taking place tomorrow, June 21st and this Saturday, June 22nd. If you’re attending the conference, make sure to swing by our table to say hello and to learn more about VMware’s cloud operations solutions and services.

We’ll be live tweeting from the show floor with exclusive photos and videos, and we’ll also be covering the following sessions:

Day 1:

  • 10:15-10:45 – DevOps + Agile = Business Transformation
  • 12:00-12:30 – Leveling Up a New Engineer in a Devops Culture; Healthy Sustainability

Day 2:

  • 10:15-10:45 – Leading the Horses to Drink: A Practical Guide to Gaining Support and Initiating a DevOps Transformation
  • 11:30-12:00 – Analysis Techniques for Identifying Waste in Your Build Pipeline
  • 12:00-12:30 – Clusters, Developers and the Complexity in Infrastructure Automation

During each afternoon of DevOps Days, there are open spaces for attendees to propose sessions to present. Once these sessions have been selected, we’ll tweet the sessions we’ll be live-tweeting from.

We’re also giving away t-shirts at DevOps Days: follow us at @VMwareCloudOps during the conference and you could soon be the proud owner of one.

We hope to see you at DevOps Days!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Workload Assessment for Cloud Migration, Part 1: Identifying and Analyzing Your Workloads

By: Andy Troup

Conducting a thorough workload analysis can make or break the success of a cloud strategy.

Assess workloads well and place them in the appropriate private, hybrid, and public cloud environments, and you will be well on the way to fulfilling your cloud strategy, enabling greater agility and cost efficiency. If your assessment falls short, those benefits will be much harder to achieve, and you could see higher costs, lower performance and unhappy customers. Remember, success breeds success: if you have happy customers who are realizing the benefits of your cloud implementation, others will be knocking at your door. If you are unsuccessful, the pipeline of customers will very rapidly dry up.

In this four-part series, I’ll explain four main considerations that you should examine when performing a workload assessment. In this blog, I’ll suggest a framework to use to classify workloads as potential candidates for moving to a cloud environment. My next three blog posts in this series will cover service portfolio mapping, analyzing the cost and benefits of moving to the cloud, and last but not least, stakeholder analysis.

Common Questions

When assessing workloads to identify candidates, I often find myself asking:

  • What criteria should be considered when determining what workloads are a good fit for a new cloud environment?
  • What is the best way to capture and evaluate the criteria with minimal effort and impact on a busy IT department?

A thoughtful and efficient workload assessment framework can simplify and streamline the analysis. Without the right methodology, it can be difficult to know where to start, let alone where to finish. The larger the number of workloads, the more complex the prioritization task becomes.

Here are common considerations and requirements that factor into a potential migration:

Business Impact:

  1. Take a look at the workload and evaluate its impact on your business. Is it a business critical workload? How does it affect and impact your company? Take the answer to this question and assess it against where you are on your cloud journey. You wouldn’t want to move mission critical workloads into your cloud during your first days after “go live,” would you?
  2. For which application lifecycle phase will the workload be used (for example, development, test or production)? What are the different requirements for each environment?

Application Architecture:

  1. Is the application written for a cloud environment? If not, make sure you understand the impact of migrating it into the cloud.
  2. How hard or expensive is it to refactor the application for the new environment? For example, do you need to remove hard-coded resource paths? What are the scaling considerations: can you already scale horizontally, adding capacity by adding instances, or can you only scale up by adding more resources to a single instance?

Technical Aspects:

  1. What operating systems, databases or application servers are being consumed or provided, and how hard will it be to migrate them into the cloud as well?
  2. Do your database, application server and web server run on the same type of platform?
  3. What quantity of CPU, memory, network and storage is typically used or needed? Can your cloud implementation support this?
  4. What commercial and custom software supports the workload?
  5. What are the dependencies or integration touch points with other workloads?

Non-Functional Requirements:

  1. What are the required service levels, performance, capacity, transaction rates and response time? Again, can your cloud implementation support this?
  2. What are the supporting service requirements?  Backup, HA/DR, security or performance monitoring?  Are specific monitoring or security agents required?
  3. Are there encryption, isolation or other types of security and regulatory compliance requirements?

Support & Costs:

  1. What are the support resources and costs for a given workload? For example, two full-time equivalent employees per server – how much does this resource cost? Also, don’t forget licensing: how does the software vendor deal with cloud implementations of its software, and what are the cost implications?
  2. What are the operational costs for space, power, cooling and so on? What will be saved by migration?

One thing remains through all of this – the benefits of moving these workloads must always outweigh the costs and the risks.
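One lightweight way to operationalize the considerations above is a weighted scoring pass over each workload. The criterion names map to the categories listed earlier, but the weights, ratings, and threshold below are purely illustrative assumptions, not a VMware methodology:

```python
# Hypothetical weights over the assessment categories above (must sum to 1.0).
WEIGHTS = {
    "business_impact": 0.25,    # criticality vs. where you are on the cloud journey
    "app_architecture": 0.25,   # cloud-readiness, cost to refactor
    "technical_fit": 0.20,      # OS/DB/app-server portability, resource needs
    "non_functional": 0.15,     # SLAs, HA/DR, security, compliance
    "support_and_cost": 0.15,   # support effort, licensing, operational savings
}

def migration_score(ratings: dict) -> float:
    """Combine per-criterion ratings (0-10, higher = better cloud fit)
    into a single weighted score between 0 and 10."""
    return sum(WEIGHTS[name] * ratings[name] for name in WEIGHTS)

def classify(ratings: dict, threshold: float = 6.0) -> str:
    """Flag a workload as a migration candidate or defer it for now."""
    return "candidate" if migration_score(ratings) >= threshold else "defer"

ratings = {"business_impact": 7, "app_architecture": 8, "technical_fit": 6,
           "non_functional": 7, "support_and_cost": 5}
print(classify(ratings))  # with these hypothetical ratings: candidate
```

A sketch like this won’t replace judgment – the cost/benefit and stakeholder analysis in the later parts of this series still apply – but it keeps a large portfolio of workloads comparable and makes the prioritization tractable.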

To get started on the journey of migrating your workloads to the cloud, remember these takeaways:

  • Always think about how your workload directly affects your company. With a thorough review of each of your workloads, you’ll know what changes to anticipate when you begin the migration process.
  • Make sure you’re thinking in the cloud mindset. Before beginning the migration process, make sure your applications are cloud-ready. If they aren’t already, make sure you have the proper strategy in place to bring them up to cloud-ready speed.
  • Be prepared. Not only do your employees need to know about these changes, but make sure your cloud implementation is prepared for the capacity (including cost) it will take your company to migrate to the cloud.

Check out our list of great blogs on workload migration and stay tuned for Part 2 of this series, where we’ll look at service portfolio mapping and how to determine the target cloud service and deployment model for each candidate workload.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

DevOps and All The Other “Ops Religions”

By: Kurt Milne

I didn’t wake up yesterday thinking, “Today I’ll design a T-shirt for the DevOps Days event in Mountain View.”  But as it turns out – that is what happened.

Some thoughts on what went into my word cloud design:

1. DevOps is great. This will be my 4th year attending DevOps Days. I get the organic, bottom-up nature of the “movement.” I’ve been on the receiving end of the “throw it over the wall” scenario. A culture of collaboration and understanding goes a long way to address the shortcomings of swim lane diagrams, phase gate requirements and the mismatch of incentives that hamper effective app lifecycle execution. Continuous deployment is inspirational, and the creativity and power of the DevOps tool chain is very cool.

2. EnterpriseOps is still a mighty force. I remember an EnterpriseOps panel discussion at DevOps Days 2010. The general disdain for ITIL, coming from a crowd that was high off of 2 days of Web App goodness at Velocity 2010, was palpable. The participant from heavy equipment manufacturer Caterpillar asked the audience to raise their hand if they had an IT budget of more than $100M. No hands went up in the startup-dominated audience. His reply – “We have a $100M annual spend with multiple vendors.” The awkward silence suggested that EnterpriseOps is a different beast. It was. It still is. There is a lot EnterpriseOps can learn from DevOps, but the problems dealing with massive scale and legacy are just different.

3. InfraOps, AppOps, Service Ops. This model developed by James Urquhart makes sense to me. It especially makes sense in the era of Shape Shifting Killer Apps. We need a multi-tier model that addresses the challenges of running infrastructure (yes, even in the cloud era), the challenges of keeping the lights on behind the API in a distributed-component SOA environment, and the cool development techniques that shift uptime responsibility to developers, as pioneered by Netflix. Clear division of labor with separation of duties, and a bright light shining on the white space in between, is a model that seems to address the needs of every cloud era constituent.

4. Missing from this 3-tier model is ConsumerOps. Oops. Too late to update the shirt design. Many are consuming IT services offered by cloud service providers; there must be a set of Ops practices that help guide cloud consumption. Understanding and negotiating cloud vendor SLAs and architecting across multiple AWS availability zones immediately come to mind. Being a service broker and including 3rd party cloud services as part of an integrated service catalog is another.

5. Tenant Ops. As far as I can tell, this term was coined by Kevin Lees and the Cloud Operations Transformation services team at VMware. See pages 17 and 21 in Kevin’s paper on Organizing for the Cloud. It includes customer relationship management, service governance, design and release, as well as ongoing management of services in a multi-tenant environment. VMware internal IT uses the term to describe what they do running our private cloud internally. They have a pie chart that shows the percentage of compute units allocated to different tenants (development, marketing, sales, customer support, etc.). It works. It may be similar to ServiceOps in the three-tier model, but it feels different enough, with a focus on multi-tenancy rather than API-driven services, to deserve its own term.

6. Finally CloudOps. This term is meta. It encompasses many of the concepts and practices of all the others. This is a term that describes IT Operations in the Cloud Era. Not just in a cloud, or connected to a cloud. But in the cloud era. The distinction being that the “cloud era” is different than the “client server era,” and implies that many practices developed in the previous era no longer apply. Many still do. But dynamic service delivery models are a forcing function for operational change. That change is happening in five pillars of cloud ops: People, Process, Organization, Governance, and IT business.

So while some of the sessions at this year’s DevOps conference are focused on continuous deployment, I’d bet that all the topics of the “Ops religions” will be covered. Hence the focus on the term CloudOps.

We’ll be live tweeting from DevOps next Friday. Follow us @VMwareCloudOps or join the discussion using the #CloudOps hashtag.

Consider joining the new VMUG CloudOps SIG, or find out more about it during the VMUG webcast on June 27th.

Reaching Common Ground When Defining Services – Highlights from #CloudOpsChat

On May 30th, we hosted our monthly #CloudOpsChat on “Reaching Common Ground When Defining Services.” Thanks to all who participated for making it an informative and engaging conversation. We would also like to thank John Dixon (@GreenPagesIT) from GreenPages and Khalid Hakim from VMware (@KhalidHakim47) for co-hosting the chat with us.

To kick off the chat we asked, “What exactly is an IT service?”

Our co-host @KhalidHakim47 suggested they are intangible by nature, unlike products. Our other co-host, @GreenPagesIT, gave the textbook answer: IT services are an asset worthy of investment. He added that an application alone is not an IT service. @kurtmilne defined an IT service as something designed to deliver something to someone in a form or function that meets their need. @AngeloLuciani said that an IT service delivers a business outcome. @KongYang saw it as a bounded deliverable that states which things are being provided by whom and the support that’s to be rendered when things fail.

Next we asked, “Why should you define services in the first place?” Followed by, “What are the benefits of doing so for your users?”

@KhalidHakim47 started off by saying that you cannot claim you manage services until they are defined in the first place. @kurtmilne said service definitions set expectations, which are a key dependency for creating satisfied users. @jfrappier added to Khalid’s point, saying that you also can’t control your public cloud vendors, so as a consumer you need clear definitions. Khalid went on to say that without a service definition, the boundaries may be loose between IT deliverables – setting expectations becomes much clearer when you address a well-defined service. @harrowandy chipped in saying the definition of services helps to make sure that the customer and IT are expecting the same outcome, with which @alamo_jose agreed. Co-host @GreenPagesIT said IT services help to organize people around a delivery objective instead of a technology objective.

We then noted that multiple roles contribute to specifying a service definition and asked, “What roles are involved in defining each service?”

@KhalidHakim47 argued that the driving and accountable role for defining a service is the service owner/manager, but it is not a one-man show. According to Khalid, @CloudOpsVoice and @alamo_jose, some of the key roles involved include the Business Unit Liaison, IT Service Manager, Consumer Relationship Manager, Portfolio/Catalog Manager and Architect, the Service Liaison Manager and Service Catalog Manager. Co-host @GreenpagesIT explained that at first pass, it’s a small group that defines the service, but eventually more parties become involved as you roll into CSI. @harrowandy said the service must have an owner who takes the service from cradle to grave and from initiation to retirement.

We then asked our audience, “Are there recommended approaches to getting multiple groups of users to reach consensus in their service definition?”

@AngeloLuciani explained that groups need to be driven by the business strategy and outcomes. @harrowandy agreed, adding that if groups don’t know the business strategy, how can IT provide them what they want? Co-host @KhalidHakim47 suggested that during the service definition planning phase, all roles that are expected should be looped into the exercise with clear goals and outcomes. @KongYang made a great analogy, saying too many chefs in the kitchen will kill the service – instead, we should look to have one chef for one service, a point with which many of our participants agreed.

Next, co-host @GreenPagesIT wondered: “Are there recommended approaches to balancing the needs of both IT and service consumers?”

@kurtmilne said that IT can deliver fast and cheap if standardized, but slow and expensive if customized. Agreeing, @KhalidHakim47 said there’s a balancing act between packaging/standardizing and customizing. @harrowandy suggested using the “80/20” rule: you can get 80% of what you want now, or wait a certain number of weeks for the remaining 20%. Kurt also brought up the fact that IT service standardization gives users more flexibility at the business process level, with which @alamo_jose agreed, adding that IT must help the business understand that reality. Co-host @KhalidHakim47 noted that standardization drives efficiency, but allowing more service levels gives more freedom as well. Co-host @GreenPagesIT added that requirements, not specs, should be negotiated during service definition.

Switching gears, we then asked “What service components do you think should be included in a service definition?”

@kurtmilne stated that pricing services is key – pricing requires accurate costing, and costing requires clear service definition, thus making the whole process come full circle. @alamo_jose added that ownership, SLA/OLA, a clear definition, features, cost and related services should all be included. Co-host @GreenPagesIT said that knowledge of how to access the service is a necessary service component, as well as hours of operation.

To round off the chat we closed with the question, “What do you do after you define services? What are the next steps?”

For @jfrappier, the answer was, “IT needs to define, then document and automate.” @alamo_jose chipped in, saying that once the service is defined, it should be published in the Service Catalog, with @AngeloLuciani adding that IT also needs to educate and communicate on how to leverage the services. @ckulchar, however, had a very different answer – once services are defined and delivered, he suggested, users should drink beer and celebrate!

Thanks again to everybody who participated in our #CloudOpsChat, and stay tuned for details about our next #CloudOpsChat!

Feel free to tweet us at @VMwareCloudOps with any questions or feedback, and join the conversation by using the #CloudOps and #SDDC hashtags.

Transforming IT Services is More Effective with Org Changes

By: Kevin Lees

Last time, I wrote about the challenge of transforming a traditional IT Ops culture and the value of knowing what you’re up against.

Now I want to suggest some specific organizational changes that – given those cultural barriers – will help you successfully undertake your transformation.

At the heart of the model I’m suggesting is the notion of a Cloud Infrastructure Operation Center of Excellence. What’s key is that it can be adopted even when your org is still grouped into traditional functional silos. 

Aspiration Drives Excellence

A Cloud Infrastructure Operation Center of Excellence is a virtual team made up of the people occupying your IT org’s core cloud-focused roles: the cloud architect, cloud analyst, cloud developers and cloud administrators. They understand what it means to configure a cloud environment, and how to operate and proactively monitor one. They’re able to identify potential issues and fix them before they impact the service.

Starting out, each of these people can still be based in the existing silos that have grown up within the organization. Initially, you are just identifying specific champions to become virtual members of the Center of Excellence. But they are a team, interacting and meeting on a regular basis, so that from the very beginning they know what’s coming down the pipe in terms of increased capacity or capability of the cloud infrastructure itself, as opposed to demands for individual projects.

Just putting them together isn’t enough, though. We’ve found that it’s essential to make membership of the cloud team an aspirational goal for people within the IT organization. It needs to be a group that people want to be good enough to join and for which they are willing to improve their skills. Working with the cloud team needs to be the newest, greatest thing.

Then, as cloud becomes more prominent and the de facto way things are done, the Cloud Center of Excellence can expand and start absorbing pieces of the other functional teams. Eventually, you’ll have broken down the silos, the Cloud Center of Excellence will be the norm for IT, and everybody will be working together as an integrated unit.

Four Steps to Success

Here are four steps that can help ensure that your Cloud Infrastructure Operation Center of Excellence rollout is a success:

Step 1 – Get executive sponsorship

You need an enthusiastic, proactive executive sponsor for this kind of change.  Indeed, that’s your number one get – there has to be an executive involved who completely embraces this idea and the change it requires, and who’s committed to proactively supporting you.

Step 2 – Identify your team  

Next you need to identify the right individuals within the organization to join your Center of Excellence. IT organizations that go to cloud invariably already run a virtualized environment, which means they already employ people who are focused on virtualization. That’s a great starting point for identifying individuals who are best qualified to form the nucleus of this Center. So ask: Who from your existing virtualization team are the best candidates to start picking up responsibility for the cloud software that gets layered on top of the virtualized base?

Step 3 – Identify the key functional teams that your cloud team should interact with.

This is typically pretty easy because your cloud team has been interacting with these functional teams in the context of virtualization. But you need to formalize the connection and identify a champion within each of these functional teams to become a virtual member of the Center of Excellence. Very importantly, to make that work, the membership has to be part of that person’s job description. That’s a key piece that’s often missed: it can’t just be on top of their day job, or it will never happen. They have to be directly incentivized to make this successful.

Step 4 – Sell the idea

Your next step is basically marketing. The Center of Excellence and those functional team champions must now turn externally within IT and start educating everybody else – being very transparent about what they’re doing, how it has impacted them, how it will impact others within IT and how it can be a positive change for all. You can do brown bag lunches, or webinars that can be recorded and then downloaded and watched, but you need some kind of communication and marketing effort to start educating the others within IT on the new way of doing things, how it’s been successful, and why it’s good for IT in general to start shifting their mindset to this service orientation.

Don’t Forget Tenant Operations 

There’s one last action you need to put in place to really complete your service orientation: create a team that is exclusively focused outwards toward your IT end customers. It’s what we call Cloud Tenant Operations.

Tenant Ops, sometimes called “Service Ops,” is one of the three Ops tiers that enable effective operations in the cloud era.

One of the most important roles in this team is the customer relationship (or sometimes ‘collaboration’) manager who is directly responsible for working with the lines of business, understanding their goals and needs, and staying in regular contact with them, almost like a salesperson, and supporting that line of business in their on-boarding to, and use of, the cloud environment.

They can also provide demand information back to the Center of Excellence to help with forward capacity planning, helping the cloud team stay ahead of the demand curve by making sure they have the infrastructure in place when the lines of business need it.

Tenant Operations is really the counterpart to the Cloud Infrastructure Operation Center of Excellence from a service perspective – it needs to include someone who owns the services offered out to the end customers over their life cycle, a service architect, and service developers who understand the technical implications of the requirements. These requirements come from multiple sources, so the team needs to identify the common virtual applications that can be offered out and consumed by multiple organizations (and teams within organizations), as opposed to doing custom one-off virtual application development.

In a sense, Tenant Operations functions as the DevOps team from a cloud service perspective and really instantiates the concept of a service mindset, becoming the face of the cloud environment to its external end users.

These Changes are Doable

The bottom line here: transforming IT Ops is doable. I have worked with many IT organizations that are successfully making these changes. You can do it too.

Additional Resources

For a comprehensive look at how to best make the transition to a service-oriented cloud infrastructure, check out Kevin’s white paper, Organizing for the Cloud. 

Also look for the VMware Cloud Ops Journey study findings later this month, which highlight common operations capability changes and the drivers for those changes. For future updates, follow us on Twitter at @VMwareCloudOps, and join the conversation by using the #CloudOps and #SDDC hashtags.

Transforming IT Services Starts With a Culture Shift

By: Kevin Lees

It’s happening. In place of their traditional, project- and technology-based approach, IT organizations really are making the shift to deliver IT as a service.

My last post examined what an IT service looks like in practice. But what if you’ve only gone as far as deciding that you need to transform IT? How do you act on that decision?

Your first priority, I’d argue, is to understand how functional silos create an anchor for your organization’s culture, and how that may be your biggest barrier to change. That’s what I’ll be looking at here. In part 2, I’ll suggest a solution for specific organizational changes that address the culture shift problem.

Changing Minds to Change Behavior

For context, here’s the IT model you’re leaving behind: a project request comes in with specific technology or capacity requirements. You procure the infrastructure, build a custom environment, and then turn it over to the development team (often a back-and-forth affair between Dev and Ops, where the final solution doesn’t look much like the initial request). When the new capability moves into production, you take over the management and maintenance of that application and its underlying infrastructure environment.

Here’s where you’re going: well before you get any requests, you build an environment that can be reused across many different development teams. You deliver that environment as a highly standardized service that’s a best fit for all the teams you serve. They request and deploy on demand with little or no IT Ops involvement in the deployment. Developers can customize their deployment to some degree, by selecting from a small set of highly standardized service options or configuration choices.
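The "small set of highly standardized service options" can be made concrete with a sketch. The catalog below is purely hypothetical (the service names, sizes, and tiers are illustrative, not any actual VMware portal API); it shows the key idea that a self-service request is only validated against pre-approved options, never turned into a custom build:

```python
# Hypothetical service catalog: developers pick from a small, fixed set of
# options rather than requesting custom-built environments. All names here
# (service, sizes, storage tiers) are illustrative assumptions.
CATALOG = {
    "web-app-env": {
        "sizes": {
            "small":  {"vcpu": 2, "ram_gb": 4},
            "medium": {"vcpu": 4, "ram_gb": 8},
            "large":  {"vcpu": 8, "ram_gb": 16},
        },
        "storage_tiers": ["standard", "fast"],
    },
}

def request_deployment(service, size, storage_tier):
    """Validate a self-service request against the standardized options."""
    spec = CATALOG.get(service)
    if spec is None:
        raise ValueError(f"unknown service: {service}")
    if size not in spec["sizes"]:
        raise ValueError(f"size must be one of {sorted(spec['sizes'])}")
    if storage_tier not in spec["storage_tiers"]:
        raise ValueError(f"storage tier must be one of {spec['storage_tiers']}")
    # In a real cloud portal this step would trigger automated provisioning;
    # here we simply return the resolved, standardized specification.
    return {"service": service, "storage": storage_tier, **spec["sizes"][size]}
```

The point of the sketch is that IT Ops involvement ends at defining the catalog; anything outside it is rejected rather than escalated into a one-off build.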

Leaving the one behind and moving to the other requires new software tools, as well as hardware that can handle the demands of a pooled resource environment. But the real transformation is a shift in mindset. And it’s one that can be hugely challenging for an IT group to both make initially and sustain over time.

I’ve seen this at many IT groups I work directly with. The fact that “It’s just not the way we’ve done things in the past” in itself becomes the obstacle to change.

Breaking Structural Bonds

Team A, for example, has always done their thing and then handed it off to team B who does their thing, who hands it off to the next team. Even with carefully crafted swim lane diagrams, phase gate checklists, and continuous process improvement – it can literally take months to deploy an environment for a development team.

Over time, large IT organizations build a series of silos that develop deep expertise to facilitate that process: a network silo, a security silo, a storage silo, and so on. They optimize the steps and sub-optimize the process.

But you’re now looking to move to a situation where everyone works in a much more integrated way: together, not sequentially. After all, in a cloud services-oriented operation, things happen so fast, and in such an integrated way, that trying to work within these silos and linear processes does nothing but slow things down, which defeats the whole purpose of making the change.

So for change to happen, the silos have to go.

Fear, Uncertainty . . .  a Plan

Propose ditching silos, though, and people immediately start fearing for their job security. They won’t know what it takes to do well anymore: deepening expertise was a well-worn path to recognition, certifications, and a raise. Talk of breaking down this structure conjures that awful trinity: fear, uncertainty, doubt.

It’s an understandable reaction, and one that’s important to anticipate and plan for. But you now know 1) what you want and 2) what you’re up against. You’re ahead of the game.

It is time to own the problem!

In my next blog post, I’ll outline a concrete set of actions that will help you successfully change your organizational culture, re-engineering your Ops team to dynamically deliver services to end customers through a cloud infrastructure.

For future updates, be sure to follow @VMwareCloudOps on Twitter and use the #CloudOps and #SDDC hashtags to join the conversation.

Additional Resources

View Kevin Lees’ webcast, 5 Key Steps to Effective IT Ops in a Hybrid World, for more information about specific changes that can help IT be more service-oriented.