Workload Assessment for Cloud Migration, Part 4: Getting Stakeholder Buy-in

By: Andy Troup

Successfully assessing workloads and placing them in the appropriate private, hybrid, and public cloud environments can make or break a cloud strategy, enabling greater agility and cost efficiency.

In this series, I’ve been reviewing four key areas to consider when performing a workload assessment. In the first three parts, I suggested a framework for classifying workloads as potential candidates for moving to a cloud environment, looked at service portfolio mapping, and examined how to assess the costs and benefits of moving workloads to a target cloud model.

In this final part of the series, let’s take a look at stakeholder analysis. How do you get stakeholder buy-in on your cloud strategy and roadmap?

First, Identify Your Stakeholders

When running a migration project, the first thing you should do is understand who your stakeholders are so you can make a judgment about:

  • The risks that specific stakeholders carry or perceive;
  • Any unsupportive attitudes they might hold towards the proposed system;
  • How their positions indicate the overall socio-political feasibility of the system.

Here’s a sequence you can follow to be sure you have everyone (and their relative influence on the project) accounted for:

  • Identify each stakeholder by title and name.
  • What are his/her interests?
  • What level of influence and power will each have in the project?
  • In what capacity might each be most effective for the project?
  • How do you plan to communicate with each of them?

Don’t forget: There will be key stakeholders for the entire program of work as well as stakeholders for individual workloads (e.g. business units, application owners etc.).

Different Stakeholder Types

Stakeholders can be divided into four different groups. Each has their own set of concerns and drivers for putting workloads into the cloud:

  • Business Stakeholders:
    • Concerned about service disruption and service levels.
    • Drivers: Reducing costs and time to market.
  • Governance:
    • Concerned about compliance, risk impact, data security, provider certifications (SOX etc).
    • Drivers: Same as their concerns.
  • Technology:
    • Concerned about technical feasibility.
    • Drivers: Greater agility; not just keeping the lights on, but being able to add new capabilities thanks to reduced firefighting; plus efficiencies and cost controls.
  • Operational:
    • Concerned about vendor stability/lock-in, SLAs, availability.
    • Drivers: Same as their concerns.

The more you understand your stakeholders’ drivers and, especially, their concerns, the better equipped you are to ensure they are on board with your migration programme.

Applying Governance – A Matrix

Once you’ve identified your stakeholders and their concerns/drivers, you can place them in a matrix that calibrates their levels of interest and influence. This matrix will help you understand how to monitor and/or manage their concerns:

[Figure: stakeholder interest/influence matrix]
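To make the matrix concrete, here is a minimal sketch (in Python, purely illustrative) of how stakeholders might be scored and mapped onto the usual four engagement strategies. The names, roles, scores and threshold are assumptions for the example, not data from a real programme.

    from dataclasses import dataclass

    @dataclass
    class Stakeholder:
        name: str
        role: str
        interest: int   # 1 (low) to 5 (high)
        influence: int  # 1 (low) to 5 (high)

    def quadrant(s: Stakeholder, threshold: int = 3) -> str:
        """Map a stakeholder onto the interest/influence matrix."""
        if s.influence >= threshold and s.interest >= threshold:
            return "Manage closely"
        if s.influence >= threshold:
            return "Keep satisfied"
        if s.interest >= threshold:
            return "Keep informed"
        return "Monitor"

    stakeholders = [
        Stakeholder("A. Smith", "CFO (business)", interest=4, influence=5),
        Stakeholder("B. Jones", "Security lead (governance)", interest=5, influence=3),
        Stakeholder("C. Patel", "Support engineer (operational)", interest=4, influence=2),
    ]

    for s in stakeholders:
        print(f"{s.name:10} {s.role:30} -> {quadrant(s)}")

The output is only a starting point for deciding how actively to manage each stakeholder; the conversations you have with them matter far more than the scores.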
 

An Example

Take, for example, the case of supporting a decision around the feasibility of migrating your client-server application. Completing a stakeholder analysis will reveal that your proposed cloud migration will have many implications for the organization, including non-technical areas, such as the finance and marketing departments.

Overall, a positive net benefit may be clear to the business development functions of the enterprise and the more junior levels of the IT support functions. Yet the project management and support management functions of the enterprise might see a net zero benefit, while the technical manager and the support engineer functions might see a negative net benefit.

By doing your analysis, though, you will have identified all potential benefits and risks associated with the migration, and thus are able to accurately inform all stakeholders of factors that might either confirm or challenge their initial impressions.

The result: All stakeholders are heard and their perceptions accounted for, but none get to control the outcome without merit.

Review

Remember the following points for successful stakeholder analysis:

  • Identify all stakeholders for both your entire program and your individual workloads;
  • Understand the concerns and drivers of your various stakeholder types;
  • Calibrate your stakeholders’ levels of interest and influence in order to best decide how to monitor and/or manage their concerns.

Finally, this blog is really trying to guide you in whom to communicate with, how to communicate with them, and when. If I can leave you with one message, it is that communication is key to your success. The more information you can impart, the more confident your stakeholders will be in the success of the project. Tell them:

  • Why the project is important;
  • How the project will run;
  • The benefits for them and their department;
  • The benefits for the organization as a whole.

When you’ve finished telling them, start over and tell them again. Communicate, communicate, communicate.

Hopefully this series of articles has provided you with some insight into how to run your migration program with some snippets of information that you can take away and use. If you missed the earlier parts of this series, you can find part one here, part two here, and part three here. Also, check out our list of blogs on workload migration.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Workload Assessment for Cloud Migration, Part 2: Service Portfolio Mapping

By: Andy Troup

Successfully assessing workloads and placing them in the appropriate private, hybrid, and public cloud environments can make or break a cloud strategy, enabling greater agility and cost efficiency.

In this series, I’m reviewing four key areas to look at when performing a workload assessment. Last time, I suggested a framework for classifying workloads as potential candidates for moving to a cloud environment. Next time I’ll look at the approach for assessing the costs and benefits of moving particular workloads to the cloud, and then in the final part I’ll cover stakeholder analysis.

For now, though, let’s think about service portfolio mapping and how to determine the target cloud service and deployment model for each candidate workload.

First Steps

Let’s assume you’ve established which workloads you’d like to put in the cloud. Now you need to establish a catalog of standardized services that will be placed in the cloud to offer to your customers, as well as to assist in your workload migration.

A service catalog may already exist. But if it doesn’t, this post shows you how to go about establishing one, and how it evolves over the lifespan of a migration project.

To do this successfully, you need to understand the impact that workloads have on the service catalog and vice versa, including:

  • Placement strategy – where workloads can be placed & the impact of placements on the service catalog
  • Workload strategy – how to fit workload analysis with placement strategy
  • How to use the workload and placement strategy to build a service catalog
  • How to use the service catalog to help with workload analysis

Defining Your Placement Strategy

Your workload placement and cloud strategy are based on trade-offs around cost, quality of service and risk.

It’s best to first establish what types of cloud services would be most appropriate to provide and build a roadmap of service types into your cloud strategy. This strategy should be very closely aligned with the requirements of your business partners to ensure you can service their needs when required. So you need to ask: what’s your service model?

  • Infrastructure as a Service (IaaS) only? Offering IaaS services is the most common first service type for new cloud adopters. This is especially true when these adopters already have workloads running on their virtual infrastructure, as they have gained experience of offering virtual machines to their customers.
  • Platform as a Service (PaaS)? This is typically a follow up to providing IaaS, as you can build on your lessons learned and the technology stack you have already created. However, if you have business demands for PaaS over and above IaaS, then this service type should be taken on board right away.
  • Software as a Service (SaaS)? Whether you provide SaaS services is directly influenced by the business requirements that exist. For example, certain application environments might need to be upgraded/replaced and a logical replacement SaaS offering would need to exist in the marketplace. Initially, these SaaS services will more likely be procured from public cloud providers rather than hosting them yourself.

Then you need to formulate your deployment strategy to decide where those services should run:

  • Private Cloud: (most common initially for IT) where dedicated cloud infrastructure is operated to provide cloud resources for a single organization. It may be managed by the organization or a third party and may exist on or off premise.
  • Public Cloud: the cloud resources are made available to the general public or a large industry group over the internet and are owned by an organization selling cloud services.
  • Community Cloud: the cloud resources are shared by several organizations and support a specific community that has shared concerns (e.g., educational organizations, government organizations). They may be managed by the organizations or a third party and may exist on or off premise.
  • Hybrid Cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology to provide cloud resources that enable data and application portability (e.g., cloud bursting for load-balancing between clouds).

For the official definitions of both the service models and deployment models, please refer to The NIST Definition of Cloud Computing.
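As a simple illustration of how a placement strategy can be turned into a repeatable first-pass rule, here is a hedged Python sketch. The workload attributes and the rules themselves are assumptions for the example; your own strategy will weigh different factors (cost, latency, existing contracts and so on).

    # Minimal sketch: map a workload's attributes to a candidate NIST deployment
    # model. The attributes and rules below are illustrative assumptions.

    def suggest_deployment(workload: dict) -> str:
        """Return a candidate deployment model for a workload."""
        if workload.get("regulated_data") or workload.get("compliance") == "strict":
            return "private"
        if workload.get("shared_with_partners"):
            return "community"
        if workload.get("bursty_demand") and workload.get("steady_baseline"):
            return "hybrid"   # steady baseline in private, bursts to public
        return "public"

    examples = [
        {"name": "hr-payroll", "regulated_data": True},
        {"name": "campaign-site", "bursty_demand": True, "steady_baseline": False},
        {"name": "reporting", "bursty_demand": True, "steady_baseline": True},
    ]

    for w in examples:
        print(f"{w['name']:15} -> {suggest_deployment(w)} cloud")

The point is not these specific rules, but that the strategy is written down once and then applied consistently to every candidate workload.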

Assessing Workloads for Best Fit

To be sure that you select the right location to host your services, you also need to analyze your proposed workloads in more detail. In Part 1 of this series I provided some thoughts about how to identify and analyze workloads. Now that you have also established a placement strategy, you can start asking some additional questions:

Benefits:

  1. Will migration into the cloud give me the benefits I expect?
  2. Will this migration help me achieve my goals for cloud migrations? For example, will the service be more reliable, will it be more agile, will I reduce my overall costs?

Migration:

  1. How will the migration be performed?
  2. How difficult will it be? For example, if a large amount of data is to be moved, the move may not be achievable in the outage window provided (a quick feasibility check is sketched after this list).
  3. What challenges will you face?
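As a quick illustration of the data-move question above, here is a back-of-the-envelope Python sketch that checks whether a bulk transfer fits an agreed outage window. The data size, link speed, efficiency factor and window are all assumptions for the example.

    # Minimal sketch: will the data move fit inside the outage window?

    def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
        """Estimate transfer time for data_gb over a WAN link of link_mbps.

        efficiency accounts for protocol overhead and contention (assumed 70%).
        """
        effective_mbps = link_mbps * efficiency
        seconds = (data_gb * 8 * 1024) / effective_mbps  # GB -> megabits
        return seconds / 3600

    data_gb = 2000        # 2 TB of application data (assumption)
    link_mbps = 500       # usable WAN bandwidth (assumption)
    outage_window_h = 8   # agreed maintenance window (assumption)

    needed = transfer_hours(data_gb, link_mbps)
    print(f"Estimated transfer time: {needed:.1f} h (window: {outage_window_h} h)")
    if needed > outage_window_h:
        print("Won't fit - consider seeding data in advance, replication, or re-planning.")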

Asking these questions might change where you decide to place your workloads and your approach to their migration.

  • The workload placement might need to change due to particular functional or non-functional requirements.
  • You might need to reevaluate your selection of cloud providers.
  • You might need to renegotiate with your cloud provider.
  • There might be a requirement to re-architect the workload.
  • Risk of migration might be high, thus requiring additional remediation activities.
  • The migration approach might change from being a migration to instantiating a new service from the service catalog.
  • The migration might simply not be worth performing.
  • The workload SLAs (if they exist) may need to be renegotiated.

Using Your Workload and Placement Strategy to Build a Service Catalog

By taking the approach I’ve described, you’re now in a position to start thinking about how the workloads you’re assessing will impact the offerings you want in your service catalog.

During the process, you’ll discover workloads with attributes and functions that are in demand from some of your other customers. For example, you may find a LAMP-stack application running your organization’s standard version of RHEL, with versions of Apache, MySQL, and Perl that make it a workload you’d like to offer as a standardized service to other customers. In that case, you would want to prioritize its migration and also place the workload in the service catalog (after performing any clean-up work that may be required).

By taking this approach, you are ensuring that your org’s cloud service catalog is being populated with the best-of-breed workloads that have been deployed into your environment.

Leveraging Your Service Catalog to Analyze Workloads

Whether you already have a service catalog for your cloud services or are putting it together while performing migration, you can work the other way around, and leverage your catalog to analyze and migrate workloads.

When it comes to assessing workloads, you can use the service catalog more and more as it grows, deciding whether a migration is required or whether deploying the service from the catalog would be a better approach, giving you a best-of-breed implementation based on your agreed standards.

Also, depending on your deployment strategy, you may have a number of different potential destinations for your workloads, with each having its own unique service catalog. Your goal should be to mask the complexity of all those catalogs by presenting a single enterprise service catalog to the users of your cloud. This puts you in control of the destination for all the workloads being instantiated into the cloud.

Finally, you’ll be able to compare the requirements that you have for each workload against the services you’ve established for it in the enterprise service catalog. You’ll get two possible answers:

  • A component exists within the enterprise service catalog that matches the requirement.
    • The service might be provided from the public cloud.
    • The service could be already within the private cloud service catalog.
  • No component exists within the enterprise service catalog that matches the requirement, i.e. neither the public cloud provider nor the private cloud provider has this workload in its catalog.
    • You might want to develop this service within the private cloud and add it to the private cloud service catalog.
    • You may choose not to develop this service within the private cloud.

For a private solution, you can test which workloads fit the service catalog you have defined and potentially alter the catalog based on your results.

For a public or managed solution, you would need to understand which workloads fit the technical and nonfunctional requirements of your targeted public or managed cloud.

For all services, you will want to consider any service-level agreements and penalties for noncompliance that might influence price or cost.
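To make the matching exercise concrete, here is a minimal Python sketch that compares a workload’s requirements against components in an enterprise service catalog. The catalog entries and requirement keys are illustrative assumptions; a real catalog would carry far richer metadata (SLAs, cost, compliance attributes and so on).

    # Minimal sketch: match a workload's requirements against catalog components.

    catalog = [
        {"name": "RHEL 6 LAMP stack", "provider": "private", "os": "rhel6",
         "tier": "web", "max_iops": 5000},
        {"name": "Windows IIS web tier", "provider": "public", "os": "win2012",
         "tier": "web", "max_iops": 3000},
    ]

    def find_matches(requirement: dict, catalog: list) -> list:
        """Return catalog components that satisfy every stated requirement."""
        matches = []
        for component in catalog:
            if (component["os"] == requirement["os"]
                    and component["tier"] == requirement["tier"]
                    and component["max_iops"] >= requirement["min_iops"]):
                matches.append(component)
        return matches

    workload_requirement = {"os": "rhel6", "tier": "web", "min_iops": 4000}
    matches = find_matches(workload_requirement, catalog)

    if matches:
        for m in matches:
            print(f"Deploy from catalog: {m['name']} ({m['provider']} cloud)")
    else:
        print("No match - decide whether to develop the service for the private "
              "cloud catalog or leave the workload where it is.")

A match means you can consider deploying from the catalog instead of migrating; no match feeds the “develop it or not” decision described above.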

Set For Life

As you can see, over time, the service catalog will mature and feature more and more services within it that you have accepted and that are properly compliant with your organization’s policies.

Review:

To create a service catalog that evolves over the lifespan of a migration project consider:

  • Placement strategy – ask where workloads can be placed & the impact of placements on the service catalog
  • Workload strategy – ask how to fit workload analysis with placement strategy
  • Use your workload and placement strategy to build a service catalog
  • Leverage your resulting service catalog to help with workload analysis

If you missed part one of this series on identifying and analyzing your workloads, you can find it here.

Check out our list of blogs on workload migration and stay tuned for Part 3, where we’ll look at cost/benefit analysis.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Automation – The Scripting, Orchestration, and Technology Love Triangle

By Andy Troup

In speaking with some of my customers, one message comes across resoundingly: “WE WANT TO AUTOMATE.” This is the sweet spot for cloud solutions, as they have built-in automation to provide the defining benefits of cloud computing, such as on-demand self-service, resource pooling and rapid elasticity (as defined by NIST here).

However, scratching the surface and digging a little deeper, I’ve found that when I’m told “yes, we’ve got automation,” it typically means a lot of effort has gone into a whole heap of scripts written to solve very specific problems. This, I would argue, is not the best way to achieve automation.

A few weeks ago I was in conversation with a customer who wanted to automate a particular part of their provisioning process, and my recommendation to them was “DON’T DO IT.” Why did I say this? Well, the process was broken and inefficient, relied on spreadsheets and scripts, and meant constant rework to end up with a satisfactorily provisioned system. Their provisioning process took weeks and weeks. There was no point in automating this broken process – the process had to be fixed or changed first. I won’t go into any more detail about this particular problem, but my point is that sometimes you have to take a step back and see whether there are other ways of solving it.

In summary – there’s no point in automating a broken process.

So, why do we want to automate our IT systems and the provisioning of them anyway? Primarily because we want two things:

  1. To take the boring, repeatable activities that many IT administrators undertake and get a system to do them instead. This frees up time for the administrator to do the more interesting and difficult things.
  2. To remove the potential for errors. Anything done as a manual activity involving people is liable to be inconsistent and error-prone (I say liable, but really we all know that it will be inconsistent and error-prone). Cloud solutions are all based on the premise that everything is standardized, so we need to remove any activity that introduces unreliability.

OK, so we’ve now established that automation is a good thing. All we need to do now is work out HOW we’re going to automate, and this may introduce some difficult decisions.

So what are the automation options? Well, in my mind automation comes in three different flavours which should be used together to solve the automation challenge. Here they are with some definitions I found:

  1. Script – programs written for a special runtime environment that can interpret and automate the execution of tasks which could alternatively be executed one-by-one by a human operator. (http://en.wikipedia.org/wiki/Script_(computing))
  2. Orchestration – describes the automated arrangement, coordination, and management of complex computer systems, middleware, and services. (http://en.wikipedia.org/wiki/Orchestration_(computing))
  3. Policy – Policy-based management is an administrative approach that is used to simplify the management of a given endeavor by establishing policies to deal with situations that are likely to occur. (http://whatis.techtarget.com/definition/policy-based-management)

In terms of their use, the image below shows how I believe they should be used and in what quantities. As you can see, we should be aiming for as much policy implementation as possible with as little script as we can achieve.

If you have a process you’d like to automate, work up the pyramid from the bottom to find the right solution.

So the first question you should ask yourself is “can I create a policy or several policies to solve the problem?” This will depend on the technology available to apply the policy, but it should be the first port of call. It may even be worth investing in the technology to make the policy implementation possible. The overhead of creating and maintaining policies is small, and they will provide a robust solution to your problem with reliability and consistency.

If it isn’t possible to create a policy to solve the challenge, next consider orchestrating a solution. This will provide a reusable, standardized capability that has an element of management/maintenance overhead and will be reliable.

Finally, if neither policy nor orchestration will work for you, then use scripting as a last resort. Why a last resort? Scripting is a tactical, bespoke solution for a specific requirement, and it will require management and maintenance during its entire life, which in turn incurs cost and is less reliable.
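To make the contrast concrete, here is a minimal Python sketch of the same idea: a reclamation task expressed as a declarative policy evaluated by a small, generic engine, rather than hard-coded in a one-off script. The policy fields, thresholds and inventory are assumptions for the example.

    # Minimal sketch: policy-based management instead of a bespoke script.

    IDLE_POLICY = {
        "name": "reclaim-idle-dev-vms",
        "applies_to": {"environment": "dev"},
        "condition": {"cpu_avg_30d_pct": {"lt": 2}},
        "action": "power_off",
    }

    def matches(vm: dict, policy: dict) -> bool:
        """Return True if the VM is in scope and meets the policy condition."""
        in_scope = all(vm.get(k) == v for k, v in policy["applies_to"].items())
        (metric, test), = policy["condition"].items()
        below = vm.get(metric, 100) < test["lt"]
        return in_scope and below

    inventory = [  # normally pulled from the cloud platform's API
        {"name": "dev-web-01", "environment": "dev", "cpu_avg_30d_pct": 0.5},
        {"name": "prd-db-01", "environment": "prod", "cpu_avg_30d_pct": 45.0},
    ]

    for vm in inventory:
        if matches(vm, IDLE_POLICY):
            print(f"{IDLE_POLICY['action']}: {vm['name']}")

The value is that new situations are handled by adding or editing policies while the evaluation logic stays the same; a bespoke script would need to be rewritten each time.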

So in summary, when you are considering automating a process:

  • Step back from the automation challenge and consider the options. They may not be what you expected.
  • Work up the “Love Triangle” from the bottom.
  • If you can’t implement a policy, consider orchestration and use scripting as a last resort.

For more great insight on automation, see our previous posts highlighting automation economics and IT automation roles.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

Workload Assessment for Cloud Migration, Part 1: Identifying and Analyzing Your Workloads

By: Andy Troup

Conducting a thorough workload analysis can make or break the success of a cloud strategy.

If you are successful in assessing workloads and placing them in the appropriate private, hybrid and public cloud environments, you will help fulfill your cloud strategy and enable greater agility and cost efficiency. If your assessment is unsuccessful, these benefits will be much harder to achieve, and you could see higher costs, lower performance and unhappy customers. Remember, success breeds success: if you have happy customers who are realizing the benefits of your cloud implementation, others will be knocking at your door. If you are unsuccessful, the pipeline of customers will very rapidly dry up.

In this four-part series, I’ll explain four main considerations that you should examine when performing a workload assessment. In this blog, I’ll suggest a framework to use to classify workloads as potential candidates for moving to a cloud environment. My next three blog posts in this series will cover service portfolio mapping, analyzing the cost and benefits of moving to the cloud, and last but not least, stakeholder analysis.

Common Questions

When assessing workloads to identify candidates, I often find myself asking:

  • What criteria should be considered when determining what workloads are a good fit for a new cloud environment?
  • What is the best way to capture and evaluate the criteria with minimal effort and impact on a busy IT department?

A thoughtful and efficient workload assessment framework can simplify and streamline the analysis. Without the right methodology, it can be difficult to know where to start, let alone where to finish. The larger the number of workloads, the more complex the prioritization task becomes.

Here are common considerations and requirements that factor into a potential migration:

Business Impact:

  1. Take a look at the workload and evaluate its impact on your business. Is it a business-critical workload? How does it affect your company? Take the answer to this question and assess it against where you are on your cloud journey. You wouldn’t want to move mission-critical workloads into your cloud during your first days after “go live”, would you?
  2. For which application lifecycle phase will the workload be used (for example, development, test or production)? What are the different requirements for each environment?

Application Architecture:

  1. Is the application written for a cloud environment? If not, make sure you understand the impact of migrating it into the cloud.
  2. How hard or expensive is it to refactor the application for the new environment? For example, do you need to remove hard-coded resource paths? What are the scaling considerations: can you already scale horizontally by adding instances, or can you only scale up by adding more resource to a single instance?

Technical Aspects:

  1. What operating systems, databases or application servers are being consumed or provided, and how hard will it be to also migrate them into the cloud?
  2. Do your database, application server and web server run on the same type of platform?
  3. What quantity of CPU, memory, network and storage is typically used/needed? Can your cloud implementation support this?
  4. What commercial and custom software support the workload?
  5. What are the dependencies or integration touch points with other workloads?

Non-Functional Requirements:

  1. What are the required service levels, performance, capacity, transaction rates and response time? Again, can your cloud implementation support this?
  2. What are the supporting service requirements?  Backup, HA/DR, security or performance monitoring?  Are specific monitoring or security agents required?
  3. Are there encryption, isolation or other types of security and regulatory compliance requirements?

Support & Costs:

  1. What are the support resources and costs for a given workload? For example, two full-time equivalent employees per server – how much does that resource cost? Also, don’t forget licensing: how does the software vendor deal with cloud implementations of their software, and what are the cost implications?
  2. What are the operational costs for space, power, cooling and so on? What will be saved by migration?

One thing remains constant through all of this – the benefits of moving these workloads must always outweigh the costs and the risks.
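One way to pull the criteria above together is a simple weighted-scoring pass that ranks workloads as migration candidates. The sketch below is purely illustrative; the criteria, weights and scores are assumptions you would replace with your own framework.

    # Minimal sketch: rank workloads as cloud-migration candidates by weighted score.

    WEIGHTS = {
        "business_impact": 0.25,   # lower criticality scores higher early on
        "cloud_readiness": 0.30,   # application architecture fit
        "technical_fit": 0.20,     # OS/database/platform supported by the cloud
        "nonfunctional_fit": 0.15, # SLAs, security, compliance achievable
        "cost_benefit": 0.10,      # expected savings vs. migration cost
    }

    workloads = {
        # each criterion scored 1 (poor fit) to 5 (good fit) - illustrative values
        "intranet-web": {"business_impact": 5, "cloud_readiness": 4,
                         "technical_fit": 5, "nonfunctional_fit": 4, "cost_benefit": 4},
        "core-billing": {"business_impact": 1, "cloud_readiness": 2,
                         "technical_fit": 3, "nonfunctional_fit": 2, "cost_benefit": 3},
    }

    def score(criteria: dict) -> float:
        return sum(WEIGHTS[c] * v for c, v in criteria.items())

    for name, criteria in sorted(workloads.items(), key=lambda kv: -score(kv[1])):
        print(f"{name:15} {score(criteria):.2f}")

High scorers are the early, low-risk wins; low scorers either wait until your cloud matures or stay where they are.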

To get started on the journey of migrating your workloads to the cloud, remember these takeaways:

  • Always think about how your workload directly affects your company. With a thorough review of each of your workloads, you’ll know what changes to anticipate when you begin the migration process.
  • Make sure you’re thinking in the cloud mindset. Before beginning the migration process, make sure your applications are cloud-ready. If they aren’t already, make sure you have the proper strategy in place to bring them up to cloud-ready speed.
  • Be prepared. Not only do your employees need to know about these changes, but make sure your cloud implementation is prepared for the capacity (including cost) it will take your company to migrate to the cloud.

Check out our list of great blogs on workload migration and stay tuned for Part 2 of this series, where we’ll look at service portfolio mapping and how to determine the target cloud service and deployment model for each candidate workload.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

The Illusion of Unlimited Capacity

By: Andy Troup 

I was at a customer workshop last week, and I used a phrase that I’ve used a few times to describe one of the crucial capabilities of a successful cloud computing service, namely “The Illusion of Unlimited Capacity.” It got a bit of a reaction, and people seemed to understand the concept quite easily. So apart from its sounding quite cool (maybe I should get out more), why do I keep on using this term?

Well, in cloud computing, we all know that there is no such thing as unlimited capacity – everything is finite. Every cloud provider only has a limited number of servers, a limited amount of storage capacity, and a limited number of virtual and physical network ports – you get the idea, it’s all limited, right?

Paradoxically, though, providers of cloud resources have to make sure their customers believe the opposite: that there is no end to what can be consumed.

The National Institute of Standards and Technology (NIST) defines one of the characteristics of cloud computing as on-demand self-service; i.e. the user can consume what they want, when they want it. Now, for cloud providers to provide on-demand self-service, they need to be confident that they can fulfill all the requests coming from all their consumers, immediately. They need to maintain, in other words, an illusion of unlimited capacity.

If at any point a consumer makes a request, and the cloud portal they use responds with a “NO” because it’s run out of cloud resources, this illusion has gone. That has real consequences. As it is very easy for consumers to move between cloud providers, it’s very likely that the provider will have lost them as customers and will find it very hard to get them back. Remember, even for internal IT cloud providers, it’s a competitive market place and the customer is king.

So, when defining your cloud strategy, you want to make sure that maintaining ‘the illusion of unlimited capacity’ is on your list. It may not be something you need to consider initially, but when demand for your services increases, you need to be ready to deal with the challenge. To prepare for it, here are 5 things you should start thinking about:

  • Understand your customers – build a strong relationship with your customers, understand their business plans, and use this information to understand the impact those plans will have on the demand for your cloud services.
  • Implement the appropriate tooling – so you can not only understand demand for your cloud capacity today, but also forecast future demand.
  • Consider the Hybrid Cloud – think about how you would burst services in and out of a hybrid cloud and when you would need to do it. Before you actually need to do this, make sure you plan, prepare and automate (where possible), so that everything is in place when it’s needed. Don’t wait until it’s too late.
  • Train users on service consumption etiquette – if they know they can get what they need when they need it, they will be less inclined to hoard resources. And if they aren’t hoarding resources, the pressure to predict their future demand (which can be difficult) is reduced, because resources are being used more efficiently. Why not agree that they won’t have to plan capacity if they “turn it off” when done, freeing resources back to the pool and further increasing spare capacity?
  • Kill zombie workloads – be aware of services that aren’t being used and turn them off (after having a conversation with the customer). Also, encourage the use of leases for temporary services when appropriate; a simple detection sketch follows this list.
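As an example of that last point, here is a minimal Python sketch that flags candidate zombie workloads from utilisation data so they can be raised with their owners before anything is powered off. The metrics, thresholds and dates are assumptions for the example; in practice the data would come from your monitoring and capacity tooling.

    # Minimal sketch: flag idle "zombie" VM candidates for an owner conversation.

    from datetime import date

    usage = [  # normally exported from monitoring/capacity tooling
        {"vm": "test-app-07", "owner": "qa-team", "cpu_avg_30d_pct": 0.3,
         "net_io_30d_mb": 12, "last_login": date(2013, 6, 1)},
        {"vm": "prd-web-02", "owner": "web-team", "cpu_avg_30d_pct": 35.0,
         "net_io_30d_mb": 90000, "last_login": date(2013, 9, 28)},
    ]

    def is_zombie(vm: dict, today: date = date(2013, 10, 1)) -> bool:
        """A VM is a zombie candidate only if it is idle on every signal we track."""
        idle_cpu = vm["cpu_avg_30d_pct"] < 1.0
        idle_net = vm["net_io_30d_mb"] < 100
        stale = (today - vm["last_login"]).days > 60
        return idle_cpu and idle_net and stale

    for vm in usage:
        if is_zombie(vm):
            print(f"Candidate zombie: {vm['vm']} - contact {vm['owner']} before reclaiming")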

Finally, going back to the essential characteristics of cloud computing as defined by the National Institute of Standards and Technology (NIST) (here is the very short document for those of you that haven’t read it), one other characteristic is rapid elasticity.

If you think about it, this article is really all about rapid elasticity. It’s just another way of saying that you need to maintain the illusion of unlimited capacity. Now, put on your top hat, hold on to your magic wand, and keep the illusion going.

For future updates, follow @VMwareCloudOps on Twitter and join the conversation using the #CloudOps and #SDDC hashtags.