Author Archives: Kurt Milne

About Kurt Milne

Kurt Milne is the Director of Product Marketing at VMware, with more than 20 years of experience in executive management, engineering, and analyst positions at leading tech companies, including Hewlett-Packard, BMC Software, and several startups. In 2011, he released the book “Visible Ops Private Cloud: From Virtualization to Private Cloud in 4 Practical Steps.” Formerly the Managing Director of the IT Process Institute, he was the principal investigator and author of six major research studies on private cloud, virtualization, IT controls, change, configuration and release, strategic alignment, and IT governance.

Clouds are Like Babies

By: Kurt Milne

While preparing for the Datamation Google+ hangout about hybrid cloud with Andi Mann and David Linthicum that took place last week, I referred to Seema Jethani’s great presentation from DevOps Days in Mountain View.

Her presentation theme, “Clouds are Like Babies,” was brilliant: Each cloud is a little different, does things its own way, speaks its own language and of course, brings joy. Sometimes, however, clouds can also be hard to work with.

Her great examples got me thinking about where we’re at as an industry with respect to adopting hybrid cloud, and the challenges related to interoperability and multi-cloud environments.

My guess is that we will work through security concerns, and that customers with checkbooks will force vendors to address technical interoperability issues. But then we will realize that there are operational interoperability challenges as well. Beyond each cloud service provider’s decision about whether to adopt the AWS API set, there are tactical nuances that make it difficult to maintain a single runbook for cloud tasks across platforms.

From her presentation:

  • CloudSigma requires the server to be stopped before making an image
  • Terremark requires the server to be stopped before a volume can be attached
  • CloudCentral requires the volume to be attached to the server in order to take a snapshot

The availability of functions that are common in a standard virtualized environment – pausing a server, creating a snapshot, creating a load balancer, and so on – varies widely across cloud service providers.
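These tactical differences are exactly what make a single cross-cloud runbook hard to write. As a purely illustrative sketch (the client object, the provider flag, and the method names below are hypothetical, not any real cloud SDK), here is what even a simple “make an image” step tends to turn into:

```python
from dataclasses import dataclass

# Hypothetical provider descriptor; the flag name is illustrative only.
@dataclass
class Provider:
    name: str
    must_stop_before_image: bool   # e.g. a CloudSigma-style requirement

def make_image(client, provider: Provider, server_id: str, image_name: str):
    """One 'create machine image' runbook step, with per-provider branching."""
    if provider.must_stop_before_image:
        client.stop_server(server_id)        # precondition only on some clouds
    image = client.create_image(server_id, image_name)
    if provider.must_stop_before_image:
        client.start_server(server_id)       # restore the original state
    return image
```

Multiply that branching by every task in the runbook (attach a volume, take a snapshot, create a load balancer) and by every provider, and the operational interoperability problem becomes clear.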

We don’t even have a common lexicon: what AWS calls a “Machine Image,” VMware calls a “vApp Template,” OpenStack calls an “Image,” and CloudStack calls a “Template.”

So imagine an Ops meeting where you run an OpenStack-based public cloud and a CloudStack-based private cloud. You say, “we provision using templates, not images,” and someone from another team agrees that they do that too. How do you know whether they realize you are talking about different things? It confuses me even to write the sentence.

I led a panel discussion on “automated provisioning” at DevOps Days. Due to the templates/images/blueprints terminology confusion, we ended up using the terms “baked” (as in baked bread) to refer to provisioning from a single monolithic instance, and “fried” (as in stir-fried vegetables) to refer to building a release from multiple smaller components assembled before provisioning – just to be able to discuss automation!

Bottom line: why not avoid the whole multi-cloud, hybrid-cloud interoperability and ops mishmash, and use the vCloud Hybrid Service as the public cloud extension of your VMware implementation?

Don’t miss my sessions at VMworld this year:

  • “Moving Enterprise Application Dev/Test to VMware’s internal Private Cloud” with Venkat Gopalakrishnan
  • “VMware Customer Journey – Where Are We with ITaaS and Ops Transformation in the Cloud Era?” with Mike Hulme

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps, #SDDC, and #VMworld hashtags on Twitter.

DevOps and All The Other “Ops Religions”

By: Kurt Milne

I didn’t wake up yesterday thinking, “Today I’ll design a T-shirt for the DevOps Days event in Mountain View.”  But as it turns out – that is what happened.

Some thoughts on what went into my word cloud design:

1. DevOps is great. This will be my 4th year attending DevOps Days. I get the organic, bottom-up nature of the “movement.” I’ve been on the receiving end of the “throw it over the wall” scenario. A culture of collaboration and understanding goes a long way toward addressing the shortcomings of swim lane diagrams, phase gate requirements, and mismatched incentives that hamper effective app lifecycle execution. Continuous deployment is inspirational, and the creativity and power of the DevOps tool chain is very cool.

2. EnterpriseOps is still a mighty force. I remember an EnterpriseOps panel discussion at DevOps Days 2010. The general disdain for ITIL, coming from a crowd that was high off of 2 days of Web App goodness at Velocity 2010, was palpable. The participant from heavy equipment manufacturer Caterpillar asked the audience to raise their hand if they had an IT budget of more than $100M. No hands went up in the startup-dominated audience. His reply – “We have a $100M annual spend with multiple vendors.” The awkward silence suggested that EnterpriseOps is a different beast. It was. It still is. There is a lot EnterpriseOps can learn from DevOps, but the problems dealing with massive scale and legacy are just different.

3. InfraOps, AppOps, ServiceOps. This model, developed by James Urquhart, makes sense to me. It especially makes sense in the era of Shape Shifting Killer Apps. We need a multi-tier model that addresses the challenges of running infrastructure (yes, even in the cloud era), the challenges of keeping the lights on behind the API in a distributed-component SOA environment, and the cool development techniques that shift uptime responsibility to developers, as pioneered by Netflix. Clear division of labor with separation of duties, and a bright light shining on the white space in between, is a model that seems to address the needs of every cloud era constituent.

4. Missing from this 3-tier model is ConsumerOps. Oops. Too late to update the shirt design. Many are consuming IT services offered by cloud service providers, so there must be a set of Ops practices that help guide cloud consumption. Understanding and negotiating cloud vendor SLAs and architecting across multiple AWS availability zones immediately come to mind. Acting as a service broker and including 3rd-party cloud services as part of an integrated service catalog is another.

5. Tenant Ops. As far as I can tell, this term was coined by Kevin Lees and the Cloud Operations Transformation services team at VMware. See pages 17 and 21 in Kevin’s paper on Organizing for the Cloud. It includes customer relationship management, service governance, design and release, as well as ongoing management of services in a multi-tenant environment. VMware’s internal IT uses the term to describe how they run our private cloud. They have a pie chart that shows the percentage of compute units allocated to different tenants (development, marketing, sales, customer support, etc.). It works. It may be similar to ServiceOps in the three-tier model, but with its focus on multi-tenancy rather than API-driven services, it feels different enough to deserve its own term.

6. Finally, CloudOps. This term is meta. It encompasses many of the concepts and practices of all the others. It describes IT Operations in the cloud era. Not just in a cloud, or connected to a cloud, but in the cloud era. The distinction is that the “cloud era” is different from the “client-server era,” and implies that many practices developed in the previous era no longer apply. Many still do. But dynamic service delivery models are a forcing function for operational change. That change is happening across five pillars of cloud ops: People, Process, Organization, Governance, and IT Business.

So while some of the sessions at this year’s DevOps Days are focused on continuous deployment, I’d bet that all of the “Ops religions” will get covered. Hence the focus on the term CloudOps.

We’ll be live tweeting from DevOps next Friday. Follow us @VMwareCloudOps or join the discussion using the #CloudOps hashtag.

Consider joining the new VMUG CloudOps SIG, or find out more about it during the VMUG webcast on June 27th.

Refresher Course in Automation Economics

It’s a key question in developing a private or hybrid cloud strategy: “What processes should we automate?”

There are plenty of candidates: provisioning, resource scaling, workload movement. And what about automating responses to event storms? Incidents? Performance issues? Disaster recovery?

To answer the question, though, you need to first establish what you’re looking to gain through automation. There are two basic strategic approaches to automation, each with specific value propositions:

  • task automation – where the proposition is more, better, faster
  • service automation – where you’re looking to standardize and scale

In my last post, I looked at how the automation strategy determines your HR needs.

In this post, I’ll highlight a simple economic model that can be used to cost justify task automation decisions. Next time, I’ll refine the math to help analyze decisions about what to automate when pursuing a service automation strategy.

The Cost Justification for Task Automation – the Tipping Point

From a cost perspective, it makes sense to automate IT tasks if:

  • the execution of the automated task has a lower cost than the execution of a manual version of the task.
  • the automated process can be run a large number of times to spread the cost of development, testing, and ongoing maintenance of the automation capability.

Brown and Hellerstein at the IBM Thomas J. Watson Research Center expressed the idea in a simple model.[1] It compares the fixed and variable costs of a manual process versus an automated version of the same process. The cost calculation is based on the variable N, which represents the number of times the automated process will execute.

IT organizations typically automate existing manual processes, so we count the fixed cost of developing the manual process as part of the automated process costs as well.

Setting the two cost equations equal lets us solve for an automation tipping point, Nt: the number of times a process must be executed for automating it to become cost effective.
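The equations themselves don’t appear in this text, so here is a reconstruction, using F for fixed costs and V for per-execution variable costs (symbols mine, consistent with the description above):

```latex
% Reconstructed cost model (symbols mine): F = fixed cost, V = variable cost per execution
\mathrm{Cost_{manual}}(N) = F_{manual} + N \cdot V_{manual}
\qquad
\mathrm{Cost_{auto}}(N) = F_{manual} + F_{auto} + N \cdot V_{auto}

% Setting the two costs equal (the shared F_manual cancels) and solving for N:
N_t = \frac{F_{auto}}{V_{manual} - V_{auto}}
```

In words: automation pays off once the per-execution savings have been repeated often enough to cover the fixed cost of building and maintaining the automation.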

Changing the task automation tipping point

Now, what actions could we take that would shift the tipping point? We might:

1. Reduce automation fixed costs. If we can drive down automation fixed costs, automation becomes economically attractive at a lower number of process executions.

Automation fixed costs include purchasing and maintaining the automation platform, as well as standardizing process inputs, ensuring the process is repeatable, developing policies, coding automation workflows based on those policies, testing each workflow, documenting errors, and establishing exception-handling procedures. We also need to add in the ongoing maintenance and management of automation routines, which may change as IT processes evolve. If any of this work can be highly standardized, Nt will be lower, which in turn increases the scope of what can be cost-effectively automated.

2. Minimize automation variable costs. Reducing automation variable costs also makes automation attractive at a lower number of executions.

Variable costs include both the cost of each automation execution and the cost of managing exceptions, which are typically triaged via manual resolution processes. With a very large number of process executions, the variable cost of each incremental automated execution would be essentially zero, except for the costs of handling exceptions such as errors and process failures. Standardizing infrastructure and component configurations, and thus management processes, reduces exceptions and lowers the tipping point.

3. Pick the right tasks. Automating manual processes with a high cost of execution is an obvious win. The slower and harder the manual task, the higher the cost of each execution, and the lower the tipping point for automating the process.

Benefits other than cost reduction

Automation offers benefits beyond cost reduction, of course. In the cloud era, demand for agility and service quality are also driving changes in the delivery and consumption of IT services.

Automation for agility 

Agility is key when it comes to quickly provisioning a development or test environment, rolling it into production, avoiding the need for spec hardware, accelerating time to market, and reducing non-development work. Typically, 10-15% of total development team effort is spent just configuring the development environment and its attendant resources. Automation can make big inroads here. Note, too, that agility and speed-to-market factors, which generally have a revenue-related value driver, typically aren’t included in task automation tipping point calculations.

Automation for service quality

Automation promises greater consistency of execution and reduced human error, quality-related benefits that also aren’t factored into the calculations above. Downtime has a cost, after all. Deploying people with different skills and variable (often ad hoc) work procedures at different datacenter facilities, for example, directly impacts service quality. Automated work procedures reduce both human error and downtime.

Back to the math

Really, we should add the quality-related costs of error and inconsistency to our manual variable costs, since they mirror how automation error-recovery costs are counted on the automated side.

To account for manual process quality costs, the tipping point calculation can replace “Manual variable costs” with “(Manual variable costs + Manual quality costs)” in the denominator.

Doing that further lowers the tipping point number that justifies automation.
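In the same reconstructed notation as above, with Q_manual standing for the per-execution quality cost of the manual process, the adjusted tipping point would be:

```latex
N_t' = \frac{F_{auto}}{(V_{manual} + Q_{manual}) - V_{auto}}
```

A bigger denominator means a smaller N_t', i.e. fewer executions are needed before automation pays for itself.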

Here’s how I sum up these concepts as applied to a task automation environment:

  • If a manual task is easy, it is difficult to justify automating it, because the tipping point number will be very high or never reached
  • If a manual process is hard and error prone, it is easy to justify automation, i.e. Nt is a low number
  • If a large percentage of process executions hit exceptions that require manual intervention, it is harder to justify automation
  • If automation routines are hard to program, or take a lot of time and effort to tweak and maintain over time due to ad hoc runbook procedures, it is harder to justify automation

In the next post, I’ll explore the economic justifications for automation under a service automation strategy.

Follow @VMwareCloudOps for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.


[1] Brown, A. B. and Hellerstein, J. L. “Reducing the Cost of IT Operations – Is Automation Always the Answer?” IBM Thomas J. Watson Research Center. Proceedings of the 10th Conference on Hot Topics in Operating Systems, June 12-15, 2005, Santa Fe, NM.

VMware #CloudOps Friday Reading List – Standardization in the Cloud Era

I’ve been reviewing submissions for the Ops Transformation track at VMworld 2013. It is a fascinating look at what a bunch of really smart people think is important in the cloud era. Based on a review of the proposed panel discussions and breakout sessions, there seems to be some consensus that standardization is a key dependency for successfully deploying an automated and scalable service strategy.

The quantity and variety of topics suggests there isn’t yet consensus on how the concept of standardization should be applied. But some of the submitted topics suggest that standardization of service definitions and infrastructure configurations is what makes innovation possible at the business process level – where it counts.

Related reading topics:

Monitoring Strategies in the Cloud by Michael Kavis
Michael Kavis takes a look at best practices when dealing with the cloud, including standardizing as much as possible in cloud-based systems so that a high level of automation can be put in place.

What Goes Around Comes Around Part 2: Is Standardization Still a Valid Strategy? By Theo Priestley
Theo argues that standardizing business processes reduces innovation. Note: the VMworld paper submissions suggest, in contrast, that standardizing IT services and infrastructure enables greater business process innovation.

Resilience Engineering Part 1 and Part 2 By John Allspaw
Great insights on how resiliency, automation, and standardization are all tightly linked.

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.

IT Automation Roles Depend on Service Delivery Strategy

Many of the agility and cost reduction benefits realized by deploying standardized and virtualized infrastructure (compute, network and storage) come from automating management processes.

But those benefits don’t come (forgive the pun) automatically. Automation is an approach to achieving a goal, and your reasons for deploying automation greatly influence both the tactics you follow and the impact on the people, roles, and skills required to function in a more automated environment.

Strategy Impact on the Automation Manager

I recently spoke with the IT Director of a large multinational bank. As a leader in the IT Operations organization, he is creating a new ‘Automation Manager’ function.

Until last year, he’d been working to get key bank ITIL processes to level 3, or in some cases level 4, maturity, and he’d achieved a state of advanced operations that benchmarked well against peers. As part of that effort, he’d deployed a group of ITSM process owners in a matrix organization structure, with central process owners placed part-time in various business units. That approach worked well.

Now, though, his focus has shifted from process maturity to process automation. Yes, the department had a foundation of mature and consistent processes, created by expert staff. But to get to the next level of efficiency, he told me, they needed to increase automation. To help with that, the director created a new role: the Automation Manager.

This wasn’t the first time that I’d heard about such a job. This particular conversation, however, highlighted various questions shaping the new role:

  1. What type of automation? There are different types of automation: PowerShell scripts, cron jobs, workflow tools, policy-based orchestration, configuration automation. There are different activities that can be automated: provisioning, maintenance, scaling resources, proactive incident response. And there are different degrees of automation: automating just a few actions, partial workflows, or going end-to-end.
  2. What is the job scope? Automation has a lifecycle: intake, classification, resourcing, version control, tracking benefits. What part of it does an Automation Manager own?
  3. Where does the role fit in the organization? An automation champion ideally owns the overall program, but is it also someone others can turn to for technical automation advice? The IT Director’s idea is to create a central role. But where does it fit? The ITSM process group? The technology team?

All of these questions had me stepping back and thinking at a higher level. The answers depend more on the strategic goals and less on the tactics. Given that, I think two primary automation strategies frame decisions about the Automation Manager role:

  1. Task Automation. Here, automation helps staff do existing work more, faster, better. With this strategy, the same people continue doing pretty much the same admin jobs they did before. The new automation manager becomes an overlay function that helps each admin use new tools to turbo boost their existing work.
  2. Service Automation. Moving from a technology or infrastructure focus to a service orientation requires more standardization and automation. This strategy is about automating new processes that didn’t exist before. In many cases, automation enables workflow that wasn’t even possible in a manual process approach. The automation capabilities support admin roles that may be largely new, or may be a combination of previously separate roles.

The Value Proposition Tradeoff

One way to think about these options is to revisit the basic tradeoff between speed and efficiency on one hand, and customization on the other.

Consider the analogy of the clothing business. If your customers want affordable clothes now, you can offer “off the rack” garments in standard sizes. If they don’t mind paying more and waiting, you can offer something custom-made. They’re simply different value propositions.

In the clothing business, there are uses for automation that help tailors deliver custom work faster, with fewer errors. But automation can also be deployed as part of a strategy to create goods in standard sizes on a massive scale. They are two different automation strategies. The role of the automation manager is either to help tailors improve custom work, or to build a factory that mass-produces standard sizes.

Similarly, IT can deploy automation to help execute existing work faster, better, and with less effort. Or IT can use automation to deliver highly standardized services at scale. Either way, if you’re clear about the strategy, the details of the Automation Manager role will come into focus.

For one, helping existing staff do more, better, faster requires a role largely focused on implementing tools, training users, and offering support. For the other, building an IT factory that delivers standard “off the rack” services at scale requires a process and systems engineer who builds and maintains the factory robots that do the work.

Clearly, the IT Director I spoke with has a more, faster, better automation strategy. What strategy do you have?

What can IT admins do to better position themselves for their new responsibilities in the cloud era? Find out by joining a live Twitter #CloudOpsChat, on “The Changing Role of the IT Admin” – Thursday, April 25th at 11am PT.

We’ll address questions such as:

  1. How does increasing automation change the IT admin job? #CloudOpsChat
  2. Is increasing automation and virtualization good or bad for your career? #CloudOpsChat
  3. Do abstraction and better tools decrease the need for deep expertise? #CloudOpsChat
  4. Does a cloud admin need programming skills? #CloudOpsChat
  5. What skills are needed for scripting vs. automation and orchestration? #CloudOpsChat
  6. What specific automation skills do IT admins need today in order to meet the demand for virtualization and cloud technologies? #CloudOpsChat

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.

VMware #CloudOps Friday Reading Topic – Service Definition Process

Increasingly, we see the service definition process as a key dependency in the success of a hybrid cloud or SDDC strategy. Standardization of service offerings (and thus configurations, as well as management and maintenance processes) is key to simultaneously achieving agility and efficiency benefits.

Here are some interesting Friday reads related to standardization and the service definition process.

Putting The Service Back In “as-a-Service” by CloudTweaks
Pete Chadwick offers advice on how to utilize a service-oriented approach to ensure the business can easily access and rapidly deploy what it needs.

Preventing Epidemics in Cloud Architectures by Gordon Haff
Gordon digs into a recent presentation by Netflix’s ubiquitous Adrian Cockcroft. Understand the tension between the benefits of standardized services and the inherent weakness of a homogeneous environment.

ITSM Goodness: How To Up Your IT Service Management Game In 7 Steps by Barclay Rae
To achieve ITSMGoodness – start by listening to customers, and structure services based on business outcomes. Services trump SLAs.  Good perspective from visionary Barclay Rae!

Service Initiation: Understanding the People and Process Behind the Portal by David Crane and Kurt Milne
In VMware’s CloudOps operating model, Service Definition is one part of the multi-part service initiation process.  Listen to this webcast to understand how these four processes fit together.

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.

VMware #CloudOps Friday Reading Topic – It’s Time for Change

As more organizations leverage software-defined datacenter technology to increase resource utilization and automate IT processes, what does this mean for how IT can organize itself to optimize results?

There are a variety of ways IT can transform itself to increase agility, reduce costs, and improve quality of service.

Cloud Computing: 4 Ways To Overcome IT Resistance (Kyle Falkenhagen, ReadWrite)
Enterprise cloud adoption is a transformative shift – these organizational change strategies can help IT departments fight fear as they move to cloud computing.

Secrets of a DevOps Ninja: Four Techniques to Overcome Deployment Roadblocks (Jonathan Thorpe, Serena Software)
Process consistency and automation help development and operations work closely together to get software that delivers value to customers faster.

On IT’s Influence on Technology Buying Decisions. Role #1: Get Out of the way (Ben Kepes, Diversity Limited)
IT needs to help set parameters, then get out of the way and let the business and users drive the process.

The Orchestrated Cloud (Venyu)
The Software Defined Admin – orchestrates provisioning, scaling, incident response and disaster recovery.

When all resources in the datacenter can be manipulated via API (the software-defined data center), the traditional role of the IT admin, and how admins are grouped in the IT organization, will change.

This means that IT has a great opportunity to reinvent itself as a strategic business enabler. The question is whether you’re ready to rise to the occasion.

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.

A New Kind of Sys Admin

I’m going out on a limb. I predict that demand for IT professionals who keep complex systems running will grow for the next 5 years. Or 10 years. Or forever. Until people and businesses realize that tech is a fad, and start relying LESS on technology to do good work, connect with people, and make life better.

For this topic, let’s accept the claim that new technologies that abstract and automate resources in the data center or the cloud simultaneously reduce costs AND improve IT responsiveness.  Double value.  Good for business.

But what about people? Are new technologies good for careers in Infrastructure and Ops? More importantly, are they good for YOUR career?

Assume that a growing global population, coupled with a bigger global “tech footprint,” means an ever-growing IT industry and, overall, more jobs. More specifically, for IT admins pondering the impact of “the cloud” on their future prospects, here is how I rate the job prospects for:

  • Single system specialists – cool
  • Multi-function generalists – warm
  • Admins who can program a little, and get things done with tools that abstract away system details – hot hot hot

Bottom line – even though much of the savings derived from more dynamic and distributed service delivery models (read “the cloud”) is OpEx savings, there is and will continue to be exciting opportunity working in IT.

There will be more opportunity to focus on adding business value, and less focus on managing the fine grained details of compute, storage, or network functions.

Here are a couple links for further reading:

Luke from Puppet Labs describes, “The rise of a new kind of administrator.”
Jasmine McTigue discusses, “IT Automation – good for business and IT careers.”

IT joke – what people say about the proverbial glass of water:

  • Optimist – sees a glass half full with lots of opportunity
  • Pessimist – sees a glass half empty with lots of waste
  • IT Engineer – sees a glass that is twice as big as the required capacity

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.

VMware #CloudOps Friday Reading Topic – Workload Migration

What existing enterprise applications should be moved to the public cloud?

Your enterprise cloud strategy should be based on a clear understanding of what is realistic. Verify your assumptions about what can be and what shouldn’t be moved into a public Infrastructure as a Service (IaaS) cloud environment as you build a business case and a plan.

Q: Which Apps Should I Move to the Cloud? A: Wrong Question (James Staten, Forrester)
Asking “which apps to move to the cloud” shows we still have much to learn about public cloud environments.

Which Apps to Move to the Cloud? (Ben Kepes, Diversity Limited)
Think in terms of “peeling an onion” and “baby steps.”

Checklist: Is my app ready for the cloud? (Marco Meinardi, Joyent)
A three-point checklist. Number 1: is the application written for the cloud?

How To Pick a Project for Your First Public Cloud Migration (David Linthicum, Blue Mountain Labs)
Start with low visibility, low risk, low complexity.

With all the hype about the cloud, the dirty little secret is that most enterprise applications won’t be forklifted into a public cloud environment.

From an IT operations perspective – that means as your company pursues a cloud strategy, or increases the use of automation and standardization to gain agility, you will still need to manage a mix of physical, virtual and cloud environments. You’ll support a broad mix of existing enterprise applications and new killer apps designed for the cloud. And regardless of where the workload physically lives, you will be responsible for making sure it all works and works together.

It is an exciting time to be an IT Ops professional. Use of technology grows unabated. Complexity seems to grow without limits. That sounds like job security to me.

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.

The Shape Shifting Killer App

In a classic August 20, 2011 Wall Street Journal editorial, Marc Andreessen pointed out that software is eating the world. He is right.  It is an exciting time to be a software developer.

What that means to you is that somewhere out there right now, someone is furiously coding the next killer app with the intent of turning your industry on its ear.

The key question: how can you and your company make software that eats the world, faster, better and cheaper?

One way is to write a different kind of app. Not the legacy application that fills your datacenter – code written in the developer’s favorite language, using the middleware or web server of choice and a database optimized for that one use, with both the code and the infrastructure tailored and custom fit for purpose.

The traditional enterprise application typically ends up as a monolithic blob that is:

  • Brittle – any change to the application, middleware, or infrastructure has a very real probability of causing a service failure.
  • Hard to support – extensive documentation and training are required for a new developer to make changes, or for the ops support team to maintain it over time and recover from an outage.
  • Hard to scale – it does not sense and respond as business needs and usage levels ebb and flow. Even with virtual servers, adding or removing resources and moving work from one cluster to another are based largely on manual processes.

Your datacenter is full of them.

By contrast, the killer app that will disrupt your industry is likely to be:

  • A mashup of a loosely coupled set of components that each perform a simple task very well.
  • That call on-premises, hosted, or public services (i.e., a hybrid service-oriented architecture).
  • That are designed for highly variable load conditions (e.g., rapid prototype, then fail or scale).
  • That leverage virtualized resources (compute, storage, network, security) that can be added, configured, and removed via API call.

The net result is that your killer app will be different. It will be architected to leverage services that rely on virtual resources (on premises, or somewhere out there in the cloud) that join and leave the application as conditions change, causing the application topology to constantly shift.
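To make the “shape shifting” idea concrete, here is a minimal sketch of a reconciliation loop that grows or shrinks an app’s footprint via API calls. The `api` client and its methods are hypothetical, standing in for whatever provisioning interface the platform exposes:

```python
def reconcile(api, app: str, target_capacity: int) -> None:
    """Adjust an app's worker pool to match demand via API calls.

    The application topology changes continuously as components join and
    leave; no human runs a manual provisioning procedure.
    """
    workers = api.list_instances(tag=app)
    if len(workers) < target_capacity:
        # Scale out: create components and wire them into the service.
        for _ in range(target_capacity - len(workers)):
            node = api.create_instance(image=f"{app}-component", tag=app)
            api.register_with_load_balancer(app, node)
    elif len(workers) > target_capacity:
        # Scale in: drain and remove surplus components.
        for node in workers[target_capacity:]:
            api.deregister_from_load_balancer(app, node)
            api.destroy_instance(node)

# Something like this would run on a schedule, driven by observed load:
#   reconcile(api, "checkout-service", capacity_from_metrics())
```

The point for operations is that the “shape” of the app at any moment is the output of code like this, not of a ticket queue.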

Ponder that for a moment. The app that is going to delight your customers, make IT a strategic contributor to your business, and drive your stock’s P/E multiple far above your competitors’ is going to be a shape shifter.

For an IT operations professional, the shape shifting killer app requires profound changes that need to be addressed head on. Right now.

As a result, VMware is investing in CloudOps based on four key premises:

  • Process and procedure are more important than ever before. How we do things matters. Ad hoc operations won’t cut it when managing a shape shifting killer app.
  • Many of the best practices that implement a “change control” based resiliency strategy won’t carry forward to shape shifting apps. It may be time to let go of some things near and dear that have worked well for us in the past, but that may be holding us back.
  • We need a new IT operating model. This may be a controversial statement. But a service lifecycle perspective becomes an important part of a revised model that recognizes and optimizes a fundamentally new set of practices at the apps management layer, the service management layer, and the infrastructure management layer. Something like this may be a good starting point for a conceptual framework.
  • And we need a set of management principles and working assumptions that optimize the separation of concerns and the white space between those who focus on apps, services, and infrastructure – not the white space between dev and test, or between functional silos.

How do we operate in this new world? Let’s work together and figure it out!

Follow us on Twitter at @VMwareCloudOps for future updates, and join the discussion by using the #cloudops and #SDDC hashtags.
