
10 Factors to Consider When Estimating IT Staff Ratios Needed to Operate a Cloud Platform

By Pierre Moncassin

In this post, I want to share some “rule of thumb” estimates of how many full-time equivalent (FTE) positions an IT organization may need to operate a cloud platform. Note: this is not an exact science, so I want to give you the practitioner’s approach. What are the general guidelines? What do I need to take into account?

As a starting point, readers can find more detail on the different roles in the cloud management team in the VMware white paper “Organizing for the Cloud.” Here I use the generic terms “administrator” and “operator” to broadly describe the technicians/analysts/operators who manage and configure the tools on a daily basis. Here’s my list of factors to consider when estimating IT staff ratios:

  1. Number of lines of business. It stands to reason that the higher the number of distinct business units (lines of business) using the cloud, the higher the number and complexity of workflows to support, the more user profiles to manage, reports to produce, and so forth.
  2. Number of data centers. If the toolsets must manage multiple data centers, there is added complexity in managing multiple environments, often in different locations.
  3. Level of staff skill/experience. The more experienced the operators, the larger and more complex the infrastructure they can manage. In other words, IT should require fewer FTEs to manage the same level of complexity in a cloud infrastructure. (This is a topic that deserves a separate article: “How the IT Organization Learns to Use Cloud Management Tools Over Time.”)
  4. Number of services. By this I mean cloud-type services, as in IT-as-a-service or applications. As a starting point, determine how many services will be offered in the cloud service catalog.
  5. Workflow complexity. Factor in the internal complexity of the automated workflows. For example, on a scale of 1-5 (5 being most complex), a workflow with multiple approval points might score a 5, whereas a basic workflow might score a 1.
  6. Internal process complexity. Within IT, an organization with a higher number of mandatory internal process steps (which might all be in place for good reason) will likely need more staff (or its staff will take longer) to carry out the same tasks as an organization with fewer internal process steps. A higher degree of complexity often develops in highly regulated environments, be it defense or civil administration, or where an outsourcing provider requires rigid contractual relationships with inflexible approvals. Process and workflow complexity are related but separate considerations (not all processes are automated into workflows).
  7. Number of third-party integrations. The more integrations that need to be built into the automation workflows, the higher the workload for the operators.
  8. Rate of change. Change may be due to business change (mergers, acquisitions, new products, new applications), but also technological change (such as internal transformation programs). These may impact FTE requirements.
  9. Number of virtual machines under management. It may help to group into broad ranges: less than 100, 100 to 1,000, 1,000 to 10,000, and above 10,000. That range will impact FTE requirements.
  10. Number of user dashboards/reports to maintain. This can range from a couple of basic reports to dozens of dashboards and complex reports. If the reporting is not sufficiently automated, the “unfortunate” administrators may need to spend a substantial part of their time producing custom reports for various user groups.

For those readers keen on modeling, each of the factors above can quite easily be rated on a 1-to-5 scale and combined into a formula, as sketched below. Others may be satisfied with applying them as simple rules of thumb.
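For illustration, here is a minimal sketch in Python of what such a model could look like. The factor weights, the scaling constant, and the baseline FTE count are hypothetical placeholders, not VMware guidance; each organization would need to calibrate them against its own staffing data.

```
# Illustrative only: a hypothetical weighted-scoring model for FTE estimation.
# Factor weights, the 0.5 scaling constant, and the baseline are assumptions.

FACTOR_WEIGHTS = {
    "lines_of_business": 0.8,
    "data_centers": 0.6,
    "staff_experience": -0.7,     # more experienced staff reduce the FTE need
    "services_in_catalog": 0.9,
    "workflow_complexity": 1.0,
    "process_complexity": 0.9,
    "third_party_integrations": 0.7,
    "rate_of_change": 0.6,
    "vms_under_management": 1.2,
    "dashboards_and_reports": 0.5,
}

def estimate_ftes(scores, baseline=1.0):
    """Estimate operator FTEs from factor ratings on a 1-5 scale."""
    weighted = sum(FACTOR_WEIGHTS[f] * (scores[f] - 1) for f in FACTOR_WEIGHTS)
    return max(baseline, baseline + 0.5 * weighted)

# Example: rate every factor a middling 3, then adjust two of them.
scores = {factor: 3 for factor in FACTOR_WEIGHTS}
scores["vms_under_management"] = 4   # 1,000 to 10,000 VMs
scores["staff_experience"] = 4       # seasoned operators
print(f"Estimated operator FTEs: {estimate_ftes(scores):.1f}")
```

In practice you would tune the weights by comparing the model’s output against staffing levels in environments you already operate.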

My approach can be extended to VMware vRealize Automation or vRealize Operations management products, as well as other management tools. Stay tuned for a future article, as I am also working on breaking down the roles far more precisely than “administrators.”

Meanwhile, consider the above factors I’ve outlined as basic guidelines. And a call to action for practitioners: Compare my guidelines to your metrics, and send me your feedback!

—-
Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in the UK.

Transforming Operations to Optimize DevOps

By Ahmed Al-Buheissi

DevOps. It’s the latest buzzword in IT and, as usual, the industry is either skeptical or confused as to its meaning. In simple terms, DevOps is a concept that allows IT organizations to develop and release software rapidly. By acknowledging the pressure the Development and Operations teams within IT place on each other, the DevOps approach enables the two teams to work closely together. IT organizations put policies for shared and delegated responsibilities in place, with an emphasis on communication, collaboration, and integration.

Developers have no problem writing code and pushing it out; however, their demand for infrastructure causes conflict with the Operations team. Traditionally it is the Operations team that releases code to the various environments, including Development, Test, UAT, and Production. As developers want to continuously push functionality through the various environments, it is only natural that Operations gets inundated with requests for more infrastructure. When you add Quality Assurance teams into the mix, efficiency suffers further.

Why the rush to release code?
Rapid application development is a must. The face of IT is changing quickly and will continue to change even faster. Businesses need to innovate fast and introduce products and services into the market to beat the competition and meet the demands of their customers.

Here are four reasons rapid application development and release is fundamental:

  1. This is the social media age. Bad code and bugs can no longer be ignored and scheduled for future major releases; when defects are found, word will spread fast through Twitter and blogs.
  2. Mobile applications are changing the way we work and require a different kind of design—one that fits on a smaller screen and is intuitive. If a user doesn’t like one application, they’ll download the next.
  3. Much of the software developed today is modular and highly dependent on readily-available modules and packages. When an issue is discovered with a particular module, word spreads fast among user communities, and solutions need to be developed immediately.
  4. Last and most important, this is the cloud era. The very existence of the Operations team is at stake, because if it cannot provide infrastructure when Development needs it, developers will opt to use a publicly available cloud service. It is that easy.

So what is DevOps again?
DevOps is not a “something” that can be purchased — it’s an approach that requires new ways of working as an IT organization. As an IT leader, you will need to “operationalize” your Development team and bring them closer to your Operations team. As an example, your developers will need the capability to provision infrastructure based on new operations policies. DevOps also means you will need to move some development functions to the Operations team. For example, the Operations team will need to start writing the workflows and associated scripts/code used to automate the deployment process for the development team.

While there are adequate tools that will facilitate the journey to DevOps, DevOps is more about processes and people.

How to implement DevOps
The IT organization needs to undergo both people and process changes to implement DevOps, and it cannot happen all at once — the change needs to be gradual. It is also very difficult to measure “DevOps maturity.” As an IT leader, you will know when your organization becomes DevOps-capable — it happens when your developers have the necessary tools to release software at the speed of business, and your Operations team is focused on innovation rather than reacting to infrastructure deployment requirements.

Also, your test environment will evolve to a “continuous integration” environment, where developers can deploy their code and have it tested in an automated and continuous process.

I make the following recommendations to my clients for process, people, and tools required for a DevOps approach:

Process
The diagram below illustrates a process for DevOps, in which the Operations team develops automated deployment workflows, and the Development team uses the workflows to deploy to the Test and UAT environments. The final deployment to production is carried out by the Operations team; in fact, Operations should continue to be the only team with direct access to production infrastructure.


Service Release Process – Service Access Validation

However, it is critical that Development have access to monitoring tools in production to allow them to monitor applications. These monitoring tools may allow tracking of application performance and its impact on underlying infrastructure resources, network response, and server/application log files. This will allow your developers to monitor the performance of their applications, as well as diagnose issues, without having to consume Operations resources.

Finally, it is assumed that the DevOps tools and workflows will be used for all deployments, including production. This means that the Development and Operations teams must use the same tools to deploy to all environments to ensure consistency and continuity as well as “rehearse” the production release.
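As a minimal illustration, the sketch below shows one way such a shared workflow could keep the deployment steps identical in every environment while gating production to Operations. The environment names, role check, and step functions are hypothetical; a real workflow would call the organization’s automation tooling rather than print messages.

```
# Hypothetical sketch: one deployment workflow shared by Dev and Ops.
ENVIRONMENTS = ["test", "uat", "production"]

def provision_infrastructure(environment):
    print(f"[{environment}] provisioning compute, storage, and network")

def install_artifact(artifact, environment):
    print(f"[{environment}] installing {artifact}")

def run_smoke_tests(environment):
    print(f"[{environment}] smoke tests passed")

def deploy(artifact, environment, team):
    """Run identical steps everywhere; only Operations may target production."""
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    if environment == "production" and team != "operations":
        raise PermissionError("only Operations may deploy to production")
    provision_infrastructure(environment)    # the same automated steps run in
    install_artifact(artifact, environment)  # every environment, so test and
    run_smoke_tests(environment)             # UAT rehearse the production release

# Development deploys to the continuous-integration and UAT environments...
deploy("payroll-app-1.4.2", "uat", team="development")
# ...and Operations runs the identical workflow against production.
deploy("payroll-app-1.4.2", "production", team="operations")
```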

People

The following roles are the main players in facilitating a DevOps approach:

  • Operations: The DevOps process starts with the Operations team. Their first responsibility is to develop workflows that automate the deployment of a complete application environment. In order to develop these workflows, Operations must become involved earlier in the development cycle and will therefore have to work more closely with Development to understand their infrastructure requirements.
  • Development: The Development team will use their development environment to determine the infrastructure required for the application; for example, database version, web server type, and application monitoring requirements. This information will assist the Operations team in determining the capacity required and in developing the deployment workflows. It will also help with implementing the custom dashboards and metrics reporting capabilities Development needs to monitor their applications. The Development team will be able to develop and deploy to the “continuous integration” and UAT environments without having to utilize Operations resources. They can “rip and replace” applications in these environments as many times as needed by QA and end users in order to be production-ready.
  • Quality Assurance (QA): Given the high quality of the automated test scripts used for testing in such an environment, the QA team can play a lesser role in a DevOps environment, spot-checking applications rather than testing every release. QA will also need to test and verify the deployment workflows to ensure the infrastructure configuration used matches the design.
  • End Users: End-user testing can likewise be reduced in a DevOps environment to random spot checks. Once DevOps is in place, however, end users should notice a vast improvement in the quality and speed of the applications produced.

Tools
VMware vRealize™ Code Stream™ targets IT organizations that are transforming to DevOps to accelerate application releases for business agility. Some of the features it offers include:

  • Automation and governance of the entire application release process
  • A dashboard for end-to-end visibility of the release process across Development and Operations organizations
  • Artifact management and tracking

For IT leaders, vRealize Code Stream can help transform the IT organization through a DevOps approach. The “continuous integration” cycle is a completely automated package that will deploy, validate, and test applications being developed.

DevOps can also benefit greatly from using platform-as-a-service (PaaS) providers. Developing and releasing software on a PaaS guarantees consistency, because the platform layer (as well as the layers beneath it) is always the same. Pivotal CF, for example, allows users and DevOps teams to publish and manage applications running on the Cloud Foundry platform across distributed infrastructure.

Conclusion
Although DevOps is a relatively new concept, it’s really just the next step after agile software development methods. As the workforce becomes more mobile, and social media brings customers and users closer, it’s necessary for IT organizations to be able to quickly release applications and adapt to changing market dynamics. (Learn how the VMware IT DevOps teams are using the cloud to automate dev test provisioning and streamline application development in the short video below.)

Many organizations have tackled the issues associated with running internal development teams by outsourcing software development. I now see the reverse happening, as organizations want to reach the market more quickly and have started to build internal development teams again.

For the majority of my clients, it’s not a matter of “if” but of “how quickly” they will introduce DevOps. By adopting DevOps principles, their development teams will be able to efficiently release features as demanded by the business, at the speed of business.

====
Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.

Leveraging Proactive Analytics to Optimize IT Response

By Rich Benoit

While ushering in the cloud era means a lot of different things to a lot of different people, one thing is for sure: operations can’t stay the same. To leverage the value and power of the cloud, IT organizations need to:

  1. Solve the challenge of too many alerts with dynamic thresholds
  2. Collect the right information
  3. Understand how to best use the new alerts
  4. Improve the use of dynamic thresholds
  5. Ensure the team has the right roles to support the changing environment

These steps can often be addressed by using the functionality within VMware vRealize Operations Manager, as described below.

1) Solve the challenge of too many alerts with dynamic thresholds
In the past, when we tried to alert on the value of a particular metric, we found that it tended to generate too many false positives. Since false positives tend to lead to alerts being ignored, we raised the hard threshold for the alert until we no longer got false positives. The problem: users were then calling in before the alert actually triggered, defeating the purpose of the alert in the first place. As a result, we tended to monitor very few metrics, because finding a satisfactory threshold was so difficult.

Now, however, we can leverage dynamic thresholds generated by analytics. These dynamic thresholds identify the normal range for a wide range of metrics according to the results of competing algorithms, each trying to best model the behavior of each metric over time. Some algorithms are based on time (such as day of the week), while others are based on mathematical formulas. The result is a range of expected behavior for each metric for a particular time period.

One of the great use cases for dynamic thresholds is that they identify the signature of applications. For example, they can show that the application always runs slow on Monday mornings or during month-end processing. Each metric outside of the normal signature constitutes an anomaly. If enough anomalies occur, an early warning smart alert can be generated within vRealize Operations Manager that indicates that something has changed significantly within the application and someone should investigate to see if there’s a problem.
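To make the mechanics concrete, here is a minimal sketch of the general technique: learn a normal range per time bucket (hour of week, say) from history, count out-of-range samples as anomalies, and raise an early warning when enough of them accumulate. The mean-plus-three-standard-deviations band and the alert threshold are illustrative assumptions, not the actual competing algorithms inside vRealize Operations Manager.

```
# Toy model of dynamic thresholds and anomaly-based early warning.
from collections import defaultdict
from statistics import mean, stdev

def learn_ranges(history, k=3.0):
    """Learn a normal range per time bucket from (bucket, value) samples."""
    buckets = defaultdict(list)
    for bucket, value in history:
        buckets[bucket].append(value)
    return {b: (mean(v) - k * stdev(v), mean(v) + k * stdev(v))
            for b, v in buckets.items() if len(v) >= 2}

def early_warning(ranges, samples, min_anomalies=3):
    """Flag a warning when enough samples fall outside their normal range."""
    anomalies = [(b, v) for b, v in samples
                 if b in ranges and not (ranges[b][0] <= v <= ranges[b][1])]
    return len(anomalies) >= min_anomalies, anomalies

# Example: Monday-9am response times learned from history, then checked live.
history = [("mon-09", v) for v in (410, 395, 420, 405, 415, 400)]
ranges = learn_ranges(history)
alert, anomalies = early_warning(
    ranges, [("mon-09", 900), ("mon-09", 880), ("mon-09", 910)])
print("early warning" if alert else "within normal range", anomalies)
```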

2) Collect the right information
As we move from more traditional, client-server era environments to cloud era environments, many teams still use monitoring that has been optimized for the previous era (and tends to be siloed and component-based, too).

It’s not enough to just look at what’s happening with a particular domain or what’s going on with up-down indicators. In the cloud era, you need to look at performance that’s more aligned with the business and the user experience, and move away from a view focused on a particular functional silo or resource.

By putting those metrics into a form that an end user can relate to, you can give your audience better visibility and improve their experience. For example, if you were to measure the response time of a particular transaction, when a user calls in and says, “It’s slow today,” you can check the dynamic thresholds generated by the analytics that show the normal behavior for that transaction and time period. If indeed the response times are within the normal range, you can show the user that although the system may seem slow, it’s the expected behavior. If on the other hand the response times are higher than normal, a ticket could be generated for the appropriate support team to investigate. Ideally, the system would have already generated an alert that was being researched if a KPI Smart Alert had been set up within vRealize Operations Manager for that transaction response time.

3) Understand how to best use the new alerts

You may be wondering: Now that I have these great new alerts enabled by dynamic thresholds, how can I best leverage them? Although they are far more actionable than previous metric-based alerts, the new alerts may still need some form of human interaction to make sure that the proper action is taken. For example, it is often suggested that when a particular cluster in a virtualized environment starts having performance issues, an alert should be generated that triggers a capacity burst. The problem with this approach is that although performance issues can indicate a capacity issue, they can also indicate a break in the environment.

The idea is to give the user as much info as they need when an alert is generated to make a quick, well-informed decision and then have automations available to quickly and accurately carry out their decision. Over time, automations can include more and more intelligence, but it’s still hard to replace the human touch when it comes to decision making.

4) Improve the use of dynamic thresholds
A lot of monitoring tools are used only after an issue materializes. Implementing proactive processes instead gives you the opportunity to identify or fix an issue before it impacts users. It’s essential that the link to problem management be very strong so the processes can be tightly integrated, as shown in Figure 1.


Figure 1: Event incident problem cycle

During the Problem Management Root Cause Analysis process, behaviors or metrics are often identified that are leading indicators of imminent impacts to the user experience. As mentioned earlier, vRealize Operations Manager, as the analytics engine, can create both KPI and Early Warning smart alerts at the infrastructure, application, and end-user levels to alert on these behaviors or metrics. By instrumenting these key metrics within the tool, you can create actionable alerts in the environment.

5) Ensure the team has the right roles to support the changing environment
With the newfound capabilities enabled by an analytics engine like vRealize Operations Manager, the roles and their structure become more critical. As shown in Figure 2 below, the analyst role should be there to identify and document opportunities for improvement, as well as report on the KPIs that indicate the effectiveness of the alerts already in place. In addition, developers are needed to develop the new alerts and other content within vRealize Operations Manager.


Figure 2: New roles to support the changing environment

In a small organization, one person may be performing all of these functions, while in a larger organization, an entire team may perform a single role. This structure can be flexible depending on the size of the organization, but these roles are all critical to leveraging the capabilities of vRealize Operations Manager.

By implementing the right metrics, right KPIs, right level of automation, and putting the right team in place, you’ll be primed for success in the cloud era.

—-
Richard Benoit is an Operations Architect with the VMware Operations Transformation global practice.

Marketing and Communications of a Successful IT Provider

By Alex Salicrup

Communication is the single most important pillar of being a service-driven IT organization. While technical aptitude and service are both vital, being able to communicate effectively about value internally and to consumers is the key to IT becoming a true business partner.

IT has always struggled because its culture is one of fragmented thought leadership, not to mention that those in the IT profession are often reactive, detail-oriented, and risk averse. Overcoming these obstacles requires careful management of IT’s internal brand.

Traditional IT is control-driven and customized. Go to a third-party cloud service provider and knock on the door, and they aren’t going to hand you a customized solution. The majority of them have a solution that they have predicted you will need. They have created a small number of services that will satisfy most of their consumers.

Now is the time to take a cue from those vendors and shift to a service-oriented model of IT by truly understanding user needs and perceptions first, then designing services around them. Manage IT like it’s your own business. Be competitive, proactive, and innovative. Manage customer perceptions. Remember that risks are opportunities.

Change is Difficult
It’s difficult to change negative perceptions, but marketing campaigns do that every day; they are designed to put a new, positive perception in your head. It’s time to start your own IT marketing campaign to manage how your company views IT and help foster change.

Here are the five components you’ll need to think about as you start your IT brand campaign.

  1. Brand:
    Admit where you are now and where you want your brand to go. Your name, symbol, and color palette are all part of the perception. So are all of the ways you communicate, including emails and templates.
  2. Catalyst of change:
    Why would your stakeholders want change? The place where this is most often a problem is within IT itself.
  3. Vision:
    In order to create a good catalyst, you need a vision that you can communicate. “We have to change because…” Many people are nervous about cloud, for example, but there is an opportunity for it to be that positive catalyst for change, that differentiator that tackles business issues, not just IT issues. Your vision needs to be something that is trackable. It can’t be something too absolute, like being the best cloud provider in the world. You will also need to determine who will communicate the vision and to whom.
  4. Targeted services:
    Know your niche. There are all sorts of cloud services available, so find out what your consumers’ needs are and your value proposition to them. Too often in IT, we buy the architecture first and then tell people what their needs are. Now that consumers have options, that strategy is not competitive.
  5. Effective communication:
    Craft a cohesive message, tailored to different audiences, that helps position the value of your services.

Let’s look a little more closely at the three types of individuals you will be communicating with across your organization. Of course, the first step is to get your own people on board with what you are going to sell.

  • The complacent are happy with the status quo; they are the most resistant to change and unwilling to look at its benefits. If you tell them they are going to do something new, they say “no way.” They pose the biggest threat to consumer adoption at your organization.
  • The blind followers, on the other hand, can get behind any vision but aren’t able to articulate it. They are tactical so the high-level vision is likely too broad for them.
  • Lastly, you may have a small group of competent followers who may be emerging leaders or IT loyalists (or both). They understand the business units, and are highly interested in the team and organizational results. They can help you manage the other two groups.

Go out and create evangelists. Executives and directors cannot carry the whole load. The individual contributors — those who will be using the services — can be your most influential advocates.

Pave the Way Forward
Now that we’ve looked at the individual types of stakeholders and the five components of your brand campaign, let’s take a look at how to get your message across.

Acknowledge IT’s current state

  • Tell stakeholders your transformation plans from start to finish.
  • Admit challenges to make IT more credible.

The plan should communicate:

  • Product or service
  • Target consumers
  • Your competition and how IT compares
  • IT services value over competition

Three main stages:

  1. Identify critical success factors: What must be right in order to meet forecast and grow?
  2. Value proposition: Which aspects of your products make the IT consumer focus on the services rather than the prices?
  3. Prepare a service uptake forecast: Lay out the best path that IT can realistically achieve.

These IT marketing concepts may seem simple or common sense, but they are also reasonable and achievable. When you prove value through effective communications and marketing, the business starts looking at you like a true partner.

=====
Alex Salicrup is a transformation strategist with VMware Accelerate Advisory Services and is based in California.

The Evolution of the SDLC

By Kai Holthaus

The definition of SDLC is changing. SDLC often stands for “software development life cycle,” a methodology to develop and implement software that is currently in use by many IT organizations. IT organizations have realized, however, that this narrow focus on software is insufficient in today’s IT service delivery.

Figure 1: The SDLC Continuum

In my opinion, the next logical evolutionary step for an SDLC is to look at a “solution development life cycle,” which not only considers functionality requirements for the software, but also requirements for the underlying hardware and infrastructure systems, such as storage or networks. Solution development focuses on a more complete solution, but there is still further to go in maturing the approach. Ultimately, SDLC should really stand for “service development life cycle,” with a goal of developing, implementing, maintaining, and supporting all aspects of an IT service in order to bring real value to IT customers.

Software Development Life Cycle
Project teams often use a form of a software development life cycle to provide functionality in the form of software to users. The goal of the SDLC in this form is to use repeatable, predictable processes that improve software development productivity and software quality. Project teams will commonly incorporate aspects of project management frameworks into the SDLC, because without effective project management, it is very easy to deliver software projects late and/or over budget.

This approach to SDLC typically uses a methodology that takes the software development through multiple phases, such as planning, requirements, design, building, testing, deploying, and maintaining. The phases may be organized in a waterfall model, in a spiral model, or a combination of the two. Additionally, project teams may incorporate rapid application development or agile methods, such as Scrum. There are a number of publicly available standards that can be applied to the SDLC, such as ISO 12207 (the international standard describing the method of selecting, implementing, and monitoring the life cycle for software), as well as process improvement guidance, such as the Capability Maturity Model Integration for Development (CMMI-DEV) or ISO 15504 (Information Technology Process Assessment).

It is important to note that the software development life cycle is really only concerned with software functionality. The framework provides guidance for developing this functionality regardless of the underlying systems, such as the servers or network the software needs in order to function. While the SDLC in this form might (and should) describe the requirements for such systems, the provisioning of such systems is typically not part of this form of the SDLC. This does not necessarily mean that these requirements are not addressed at all, but typically the project team deals with them in a very separate way. This can lead to miscommunication and a lack of coordination between different groups, and eventually to poorly delivered results. Another pitfall of such an approach is that the infrastructure side is handled in an entirely ad hoc way, which can leave the project team forced to fix things up in production, or result in severely under-performing services because the infrastructure cannot support what is truly needed.

Solution Development Life Cycle
In a solution development life cycle (sometimes also known as systems development life cycle), the scope of the methodology is expanded from a narrow focus on software functionality to include the underlying systems, such as hardware and infrastructure.  The SDLC in this form is seen as a process to develop an information system, aiming to produce a high quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, and is inexpensive to maintain and cost-effective to enhance.

The solution development life cycle approach will be similar to the software development life cycle, in that phases for requirements, design, development, building, testing, deploying, and maintaining will be defined by the project team, and in that guidance from project management methodologies or process improvement methodologies can also be incorporated. However, the focus here is still on providing software functionality to users and customers at a given point in time, and not business value in the form of delivering ongoing services.

The benefit of a solution development life cycle over a software development life cycle is that requirements for underlying systems are defined along with requirements for software functionality, and the entire solution will be developed, thus reducing the risk that these underlying systems can derail the delivery of the desired functionality late in the life cycle. The solution development life cycle therefore ensures a more complete view of how the software functionality is delivered, thereby improving the user experience.

Service Development Life Cycle
Further maturing the SDLC leads to a true service development life cycle, which, while still concerned with the software application(s) needed for success, focuses on the definition, design, build, operation, and improvement of a complete IT service, providing outcomes that customers want to achieve. This view is much bigger than the view being taken in the software or solutions development life cycle. The central idea is for the project team to figure out the end results that their customers need to accomplish and then deliver and manage IT services to achieve those outcomes.  This holistic approach requires the team to consider not only the technical aspects of the service, but also the non-technical aspects such as training, documentation, support, communications, or processes.

For Example…
Here’s an example to further illustrate the difference between these three approaches. Let’s take a look at a payroll application. When using software development life cycle methods, the project team’s focus is on functionality provided by the application. For instance, an improvement to the payroll application might be to expand state tax calculation from handling a single state to also include other states, because the company is opening offices in more states. The focus is solely on the calculations in the software.

Moving to the solutions development life cycle approach, more aspects are looked at for this change. Since adding this functionality most likely means more users and more employee records being managed by the application, the project team would also consider additional space requirements for the database storing the information about employees, additional network bandwidth that may be required for additional users, and more CPU power being needed for those users.

Taking a service development life cycle approach would require that the project team understand the outcomes customers want to achieve (e.g., “process payroll for all employees”), taking a holistic view of the current IT service landscape, and then determining how those outcomes can be best achieved within that landscape. Besides just application functionality, other aspects of service delivery come into focus, such as availability or continuity requirements, training, support, and even marketing new capabilities in the organization.

In conclusion, the evolution of the SDLC takes us from the traditional “software development life cycle” with its focus on developing and implementing functionality provided by software to defining, designing, implementing, and maintaining services that provide value to IT customers. Doing so means expanding the processes and roles described in these life cycles to ensure this value can be realized.

====
Kai Holthaus is a transformation consultant with VMware Accelerate Advisory Services and is based in Oregon. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

A Click Away—The IT Vending Machine Experience

IT users increasingly expect a more consumer-like enterprise technology experience. A user-friendly, cloud-based experience will increase productivity, efficiency, and customer satisfaction.

This infographic reveals why the consumerization of IT is a key development that organizations can leverage to ensure that IT is successful.

[Infographic: VMware On-Demand IT Services]

5 Steps to Shape Your IT Organization for the Software-Defined Data Center

By Tim Jones

One aspect of the software-defined data center (SDDC) that is not solved through software and automation is how to support what is being built. The abstraction of the data center into software managed by policy, integrated through automation, and delivered as a service directly to customers requires a realignment of the existing support structure.

The traditional IT organizational model does not support bundling compute, network, storage, and security into easily consumable packages. Each of these components is owned by a separate team with its own charter and with management chains that don’t merge until they reach the CTO. The storage team is required to support the storage needs of the virtualized environment as well as physical servers, the backup storage, and replication of data between sites. The network team has core, distribution, top-of-rack, and edge switches to support in addition to any routers or firewalls. And someone has to support the storage network, whether it is IP, InfiniBand, or Fibre Channel. None of these teams has only the software-defined data center to support. The next logical question is: What does an organization look like that can support the SDDC?

While there is no simple answer that allows you to fill a specific set of roles with staff possessing skill sets from a checklist, there are many organizational models that can be modified to support your SDDC. In order to modify an organizational model, or to build your own model to meet your IT organization’s requirements, certain questions need to be answered. Working through the following five steps will help shape your new organizational model:

  1. Define what your new IT organization will offer.
    Although this sounds elementary, it is necessary to understand what will be offered in order to know what is necessary to provide support. Will infrastructure as a service (IaaS) be the only offering, or will database as a service (DBaaS) and platform as a service (PaaS) also be offered? Does support stop at the infrastructure layer, or will operating system, platform, or database support be required? Who will the customer work with to utilize the services or to request and design additional services?
  2. Identify the existing organizational model.
    A thorough understanding of the existing support structure will help identify what support customers will expect based on their current experience, as well as any challenges associated with the model. Are there silos within it that negatively impact customers? What skills currently exist in the organization? Identifying the existing organization and defining what the new organization will offer will help to identify what gaps exist.
  3. Leverage what is already working.
    If there are components of the existing organization that can either be replicated or consumed by the new organization, take advantage of the option. For example, if there is already a functioning group that works with the customers and supports the operating system, then evaluate how to best incorporate them into the new organization. Or if certain support is outsourced, then incorporate that into the new organizational model.
  4. Evaluate beyond the technical.
    The inclusion of service architects, process designers, business analysts, and project managers can be critical to the success of your new organization. These resources could be consumed from existing internal groups such as a central PMO. But overlooking the non-technical organizational requirements can inhibit the ability of the IT organization to deliver on its service roadmap.
  5. Create a new IT organization.
    Don’t accept the status quo with your current organization. If the storage, compute, and virtualization teams all report through separate management chains in the current organization, the new organization should leverage a single management chain for all three teams. Removing silos within the IT organization fosters a collaborative spirit that results in better support and better service offerings for customers.

Although there is no one-size-fits-all organizational model for the software-defined data center, understanding where your IT organization is currently and where it is headed will enable you to create an organizational model capable of supporting the service roadmap.

====
Tim Jones is a business transformation architect with VMware Accelerate Advisory Services and is based in California.

What Does It Mean for IT to Be Customer-Focused?

By John Worthington

By definition, a service is a means of delivering outcomes that a customer wants to achieve, so for IT to be customer-focused it’s important not to forget where these outcomes originate.

Transforming IT from a technology-oriented to a services-oriented organization is at the heart of IT service management.  The “specialized organizational capabilities for delivering value to customers in the form of services” must be developed, refined, and continually improved with business outcomes in mind.

If IT is working well, with a true service orientation, your customer will see that:

  • IT actions align with the business, particularly in ways that help the business serve external business customers
  • IT costs are controlled and reduced wherever possible
  • Quality of end-to-end IT services is improved
  • IT agility in responding to business needs is improved
  • IT is focused on customer results
  • Prioritization of IT expenditures and actions is based on business priorities

For the IT organization, this service orientation starts with defining what constitutes a “service” in the context of the particular business and cataloging all the services available. Then the resulting service catalog, and the full service portfolio of which it is a part, become the means of ensuring that IT and the business are always completely in synch around IT services and their value.

What your customers want from IT
When I work with IT organizations that are building their initial catalog of services, I’m interested to see who views whom as the “customer.” This is fundamental, since it is IT’s customer who defines value.

Frequently, service definition work is driven by particular IT groups, which can essentially confine the entire effort within the boundaries of the IT organization, as illustrated in Figure 1. This can result in an internally focused view of the customer/supplier relationship. The focus is on supporting services, and parts of the IT organization itself end up being treated as “customers” of other parts of IT.

Figure 1 – Supporting Service Focus

Undoubtedly supporting services are important, since these are the building blocks that provide the capabilities enabling the customer-facing, outcome-oriented services. But they are not what the business is ultimately concerned with. Enumerating supporting services does not provide the benefits the business expects – surely we don’t intend the service catalog to be limited to the IT organization!

Another approach I see my clients commonly take is to begin defining services that face the internal customers — the business — which establishes service catalog boundaries within the enterprise as illustrated in Figure 2.  Services are defined as what IT does for the business itself, without reference to the external customers of the business.

Figure 2 – Internal Customer Focus

This approach reflects a critical step in the evolution of an IT organization’s maturity as a service provider.  IT has begun to look at customer outcomes, with the customer being the business the IT organization serves.  I believe such an approach can lead to a more coordinated, collaborative way of working within IT; the various IT groups focus their attention on end-to-end service provisioning, not merely on their own IT silos.

So while initial service catalogs often start with the existing applications and infrastructure and package them for customers, the best-practice approach I recommend is to begin with the outcomes customers desire and define services based on those outcomes, as illustrated in Figure 3.

Figure 3 – External Customer Focus

The truth is, there will need to be multiple cycles of service definition and redefinition, continuing indefinitely, since customers’ desired outcomes and perceptions are constantly changing.

Defining services from the top down, starting with external services, is also a recommended approach. But this is easier said than done, since it quickly exposes a need to define both internal customer-facing services and supporting services.
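As a minimal illustration of that top-down decomposition, a customer-facing service entry might be captured as in the hypothetical sketch below. The field names and values are illustrative assumptions, not a VMware or ITIL schema.

```
# Hypothetical sketch of an outcome-oriented service catalog entry.
payroll_service = {
    "name": "Payroll Processing",
    "customer": "Finance (on behalf of all employees)",  # the customer defines value
    "outcome": "Process payroll for all employees, on time",
    "customer_facing": True,
    "supporting_services": [       # internal building blocks, defined separately
        "database hosting",        # rather than offered to the business as
        "virtual infrastructure",  # catalog items in their own right
        "backup and recovery",
    ],
    "service_levels": {"availability": "99.9%", "support_hours": "24x7"},
}
```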

Accelerating the journey to IT as a Service
This is the exciting part of being at VMware. By establishing re-usable supporting IT services enabled by a software-defined data center, along with transformation road maps that make sure people and process changes are in place to realize the IT-as-a-service vision, I can help the IT organizations I work with accelerate their ability to be truly customer-focused.


John Worthington is a VMware transformation consultant and is based in New Jersey. Follow @jMarcusWorthy and @VMwareCloudOps on Twitter.

A New Angle on the Classic Challenge of Retained IT

By Pierre Moncassin

When discussing organizational models for managing cloud infrastructure with customers, I have come across situations where some, if not all, infrastructure services are outsourced to a third party. In these situations my customers often ask: does your (VMware) operating model still apply? Should I retain cloud-related skills in-house? If so, which ones?

The short answer is: Yes. The advice I give my customers is that their IT organization should establish a core organization modeled on the “tenant operations” team as defined in Organizing for the Cloud, a VMware white paper by my colleague Kevin Lees.

Let’s assume a relatively simple scenario where a single outsourcer is providing “standard” infrastructure services — such as computing, storage, and backups. In this scenario, the outsourcer has agreed to transform at least some of its services towards the software-defined data center (SDDC), which is by no means an easy step (I will return to that point later).

For now let’s also assume a cooperative situation where customer and outsourcer are collaboratively working towards a cloud model. The question is — what skills and functions should the customer retain in-house? Which skills can be handed over to the outsourcer?

The question is a classic one. In traditional infrastructure outsourcing, we would talk about a “retained IT” organization.  For the SDDC environment, here are some skill groups that I believe have to be preserved within the core, in-house team:

  • Service Design and Self-service Provisioning is clearly a skillset to keep in-house. The in-house team must be able to work with the business to define services end-to-end, but the team should also be able to accurately grasp the possibilities that automation offers with software such as VMware vCloud Automation Center. Though I am not suggesting that the core team needs to be expert in all aspects of workflows, APIs, or scripting, they do need a solid grasp of the possibilities of automation.
  • Process Automation and Optimization.  A solid working knowledge of automation software is useful but not enough.  The in-house teams are required to decide which processes to automate and how. They need to make business-level decisions. Which processes are worth automating? What is the benefit of automation versus its cost?
  • Security and Compliance is often a top priority for cloud adopters. The cloud-based services need to align with enterprise policies and standards. The retained IT function must be able to demonstrate compliance and, where needed, enforce those standards in the cloud infrastructure.
  • Service Level Management and Trend Analysis. Whilst the retained IT organization does not need to be involved in the day-to-day monitoring and troubleshooting, they need to be able to monitor key service levels. Specifically, the business users will be highly sensitive to the performance of some business-critical applications. The retained IT organization will need to keep enough knowledge of these applications and of performance monitoring tools to ensure that application performance is measured adequately.
  • Application Life Cycle (DevOps). We have assumed in our scenario an infrastructure-only outsourcing — the skills for application development remain in-house. In the SDDC environment, the tenant operations team will work closely with the application development teams. Amongst other skills, the retained IT organization will need detailed knowledge not only of application provisioning, but also of the architectures, configuration dependencies, and patching policies required to maintain those applications.

I have reviewed the skill groups that are needed as more automation is used; at the same time, there will be less reliance on skills that relate to routine tasks and troubleshooting. Skills that can typically be outsourced include:

  • Routine scripting and monitoring
  • System (middleware) configuration
  • Routine network administration

The diagram below is a (very simplified) summary of the evolution from traditional retained IT to tenant operations for SDDC environments.

It is also worth noting that the transformation from traditional infrastructure outsourcing to SDDC is a far from obvious step from the point of view of an outsourcer. Why should the outsourcer invest time and cost to streamline services if the end customer has already contracted to pay for the full cost of service? Gaining buy-in from the outsourcer to transform its model can be a significant challenge. Therefore it is prudent to gain acceptance either:
- early in the contract negotiations, so that the provider can build a cloud delivery model into its service offering,
- or towards the end of a contract, when the outsourcer is often highly motivated to obtain a renewal.

Finally, outsourcers may initiate their own technology refresh programs, which can create a win-win situation when both sides are prepared to invest in modernization towards the SDDC.

3 Key Take-Aways

  1. Organizations that undertake their journey to the SDDC with an outsourcer are advised to establish a core SDDC organization that includes most tenant operations skills; a key focus is to leverage automation (whilst routine, repetitive tasks can be outsourced).
  2. The exact profile of the tenant operations (retained IT) will depend on the scope of the outsourcing contract.
  3. Early contract negotiations, renewals, or technology refresh can create opportunities to encourage an outsourcer to move towards the SDDC model.

———
Pierre Moncassin is an operations architect with VMware’s Global Operations Transformation Practice and is based in the UK. Follow @VMwareCloudOps on Twitter for future updates.

VMware vCenter Operations Manager Users: Raise Your Hands!

By Choong Keng Leong

I innocently asked attendees in a workshop I was delivering at one of my clients, “Who uses the VMware vCenter Operations Management Suite in your company?” I got two simple answers: “the cloud administrator” or “the VM administrator.” This triggered me to write this post, which I hope will change your thinking if you would give the same answers.

The vCenter Operations Management Suite consists of four components:

  • vCenter Operations Manager: Allows you to monitor and manage the performance, capacity, and health of your SDDC infrastructure, operating systems, and applications
  • vCenter Configuration Manager: Enables you to automate configuration management across virtual and physical servers, and to continuously assess them for compliance with IT policies and regulatory and security requirements
  • vCenter Hyperic: Helps to monitor operating systems, databases, and applications
  • vCenter Infrastructure Navigator: Automatically discovers and visualizes application components and infrastructure dependencies

If I were to map the vCenter Operations Management Suite to the IT processes it can support, it would look like the matrix shown in Table 1:

Table 1: A Possible vCenter Operations Management Suite to Process Mapping

What Table 1 also implies is that multiple roles will be using and accessing vCenter Operations Manager, or will be recipients of its outputs (i.e., reports). For example, the IT Director can access the vCenter Operations Manager dashboard to view the overall health of the infrastructure. The Application Support team accesses it via a custom dashboard to understand application status and performance. The IT Compliance Manager reviews the compliance status of IT systems on the vCenter Operations Manager dashboard and gets more details from vCenter Configuration Manager to initiate remediation of the systems.

Table 2 below shows a possible list of roles accessing the vCenter Operations Management Suite.

Table 2: Possible List of Roles Using vCenter Operations Management Suite

Tables 1 and 2 illustrate clearly that vCenter Operations Management Suite is not just another lightweight app for the cloud or VM administrator — it supports multiple IT operational processes and roles.

Taking this a step further, you need to embed the vCenter Operations Management Suite into operational procedures to take maximum advantage of the tools’ full potential and integrated approach to performance, capacity, and configuration management. To draw an analogy: if you deploy a new SAP system without defining the triggers or use cases for a user to access the SAP system, establishing the procedural steps for which modules to access and how to navigate the system, what to input, and how to query and report, it is unlikely the system will be rolled out successfully.

Although the vCenter Operations Management Suite is not as complex, the concept is the same. You need to define procedures with tight linkage to the tools to ensure they are used consistently and in the way they were designed and configured for.

I hope that my blog motivates you to start thinking about transforming your IT operations to make full use of the capabilities of your VMware technology investment.

========
Choong Keng Leong is an operations architect with VMware Professional Services and is based in Singapore. You can connect with him on LinkedIn.