
Transforming Operations and Perception of the IT Organization

By David Crane

A recent engagement with a long-established telecommunications firm presented a huge challenge—the solution for which is a great example of how operations transformation can drive technical transformation. The firm’s customer base spans various global regions, each of which presented a different customer experience. The IT organization functioned in extremely siloed environments, having grown organically over 25 years to support an aging, fragmented infrastructure.

A frustrated but motivated CIO laid down the following requirements for the VMware consulting services team, to be met over an aggressive six-month timeline:

  • Reduce operational costs
  • Improve agility
  • Provide more service offerings
  • Help IT become a service broker and eliminate shadow IT
  • Build a flexible architecture to meet the needs of the business
  • Reduce total number of physical data centers
  • Gain more control and compliance of IT infrastructure environments

The internal IT team lacked the expertise and resources required to implement a software-defined data center (SDDC) solution. Their service request process was time-consuming, manual, and inconsistent. Add to that an average provisioning time for a full end-to-end server of eight weeks, and it’s no surprise that internal customers were seeking out external solution providers for their IT needs.

The VMware team set out to remedy all of this with the following solution:

  • Implement a production SDDC platform
  • Make self-service automated provisioning the first available service
  • Assess the customers’ operating processes
  • Introduce an optimized organizational structure
  • Integrate operations transformation and technical implementation
  • Take a phased approach to the project with clearly defined milestones to deliver immediate results
  • Ensure the VMware team worked closely with internal groups

Transforming the Operating Model
Breaking down the siloed IT organization and introducing horizontal, cross-departmental communications was the first step toward helping the customer become service-focused.

The team did have the business analyst concept, but the analysts sat outside the IT organization. They didn’t understand IT and weren’t incentivized to do so. As a result, rogue users were going out and doing things themselves, leading to compliance and governance issues.

We introduced the concept of infrastructure operations and tenant operations. These were cross-functional teams that talked to each other—a virtual center of excellence within the IT organization. As part of this organizational change, we brought in new roles, the two most important being the customer relationship manager and the service owner. We brought customer relationship management back into IT, so the person in that role started to understand IT and what it could deliver (and how) against customer requirements.

One revelation from gathering these requirements was that customers did not really have an interest in availability. This was not because they didn’t care, but because IT has become robust enough over the years that availability is simply expected. What customers really cared about was the speed and standardization of the service provisioning lifecycle, because that was what allowed them to respond quickly to market demands and support the business objective of being first to market with new products.

This led to a technical requirement: the IT organization’s customers asked to see this information in a dashboard format, so that the provisioning process could be monitored proactively.

Transforming Infrastructure Operations
The service owners played a key role in pointing out that VMware vRealize Operations only looks at infrastructure—which created a demand for changes within VMware vRealize Automation.

However, the dashboards needed to be delivered through vRealize Operations. To meet the technical requirement, we focused on the self-service provisioning portal and allowed consumers to monitor the status of their ordered services via that portal. To do that, we needed a dashboard in VMware vRealize Operations to monitor the KPIs involved in service provisioning. In order to build the dashboard to monitor provisioning time, we had to create a custom solution using vRealize Automation. The technical solution was necessary to enable the operating framework and the organizational model that supports it.

Dashboard Solution
We ended up with a provisioned resources dashboard, shown in Figure 1 below, that lists each virtual machine (VM) and the number of minutes it took to be provisioned. Less than 30 minutes shows green, less than two hours shows yellow, and over two hours shows red. The dashboard also shows the average, minimum, and maximum times to provision. (A small code sketch of these calculations follows the figure.)

Figure 1: Provisioned resources dashboard
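
For readers who want to prototype similar provisioning-time reporting outside the vRealize tools, the color bands above translate directly into a few lines of code. The sketch below is illustrative only: the VM names and timings are invented, and a real implementation would pull its data from the automation platform’s request records.

```python
from statistics import mean

# Invented sample data: provisioning time in minutes per virtual machine.
# A real report would read these from the provisioning platform's records.
provision_minutes = {
    "vm-web-01": 22,
    "vm-web-02": 27,
    "vm-db-01": 95,
    "vm-app-01": 135,
}

def status(minutes: float) -> str:
    """Map a provisioning time onto the dashboard's color bands."""
    if minutes < 30:
        return "green"   # within the 30-minute target
    if minutes < 120:
        return "yellow"  # slower than target, but under two hours
    return "red"         # over two hours

for vm, minutes in sorted(provision_minutes.items()):
    print(f"{vm}: {minutes} min ({status(minutes)})")

times = provision_minutes.values()
print(f"avg={mean(times):.1f} min, min={min(times)} min, max={max(times)} min")
```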

The dashboard also enabled the customer to use data to feed back into the service life cycle process. For example, they started to understand service demand. Service owners—who were expected to forecast demand for services—could now do so with more accuracy. Now that the team was forecasting capacity demand more accurately, they were able to increase credibility by sharing this information with the infrastructure team. And ultimately they saved money by having a better handle on demand.

The dashboard also allowed IT to develop proactive operational processes. On several occasions the service owners started to see a degradation in performance of the provisioning process, while the infrastructure monitoring dashboards were still showing a healthy ecosystem.

Further analysis showed that changes to the underlying infrastructure, while staying within tolerance and SLA for the IT infrastructure teams, were having a cumulative impact further down the chain on the service provisioning process.

The provisioning dashboard and further integration with the customers’ service desk platform and event, incident, and problem management processes allowed the IT infrastructure teams to tune the change management process so that service provisioning would not be affected.

In the end, IT became service-oriented because of the dashboard. Because internal customers could use that tool to see the incredible accuracy with which the IT team was meeting its 30-minutes-or-less goal, it had a huge impact on the way IT was perceived within the business. IT’s credibility skyrocketed, and suddenly it became easier to drive initiatives like the “cloud first” policy within the organization.

======
David Crane is an operations architect with the VMware Operations Transformation global practice and is based in the U.K.

How to Avoid 5 Common Mistakes When Implementing an SDDC Solution

By Jose Alamo

Implementing a software-defined data center (SDDC) is much more than implementing or installing a set of technologies — an SDDC solution requires clear changes to the organization’s vision, policies, processes, operations, and organizational readiness. Today’s CIO needs to spend a good amount of time understanding the business needs, the IT organization’s culture, and how to establish the vision and strategy that will guide the organization to make the adjustments required to meet the needs of the business.

The software-defined data center is an open architecture that impacts the way IT operates today. And as such, the IT organization needs to create a plan that will utilize the investments in people, process, and technology already made to deliver both legacy and new applications while meeting vital IT responsibilities. Below is a list of five common mistakes that I’ve come across working with organizations that are implementing SDDC solutions, and my recommendations on how to avoid their adverse impacts:

1. Failure to develop the vision and strategy—including the technology, process, and people aspects
Many times organizations implement solutions without setting the right expectation and a clear direction for the program. The CIO must use all the resources available within the IT organization to create a vision and strategy, and in some cases it is necessary to bring in external resources that have experience in the subject. The vision and strategy must align with the business needs, and it should identify the different areas that must be analyzed to ensure a successful adoption of an SDDC solution.

In my experience working with clients, it is imperative that a full assessment be conducted as part of planning, covering the areas of people, process, and technology. A SWOT analysis should also be completed to fully understand the organization’s strengths, weaknesses, opportunities, and threats. Armed with this insight, the CIO and IT team will be able to express the direction that must be taken to be successful, including the changes required across people, process, and technology.

Failing to complete this step will add complexity and lack of clarity for those who will be responsible for implementing the solution.

2. Limited time spent reviewing and understanding the current policies
There are often many policies within the IT organization that can prevent moving forward with the implementation of SDDC solutions. In such cases, the organization needs to conduct an in-depth review of the current policies governing the business and IT day-to-day operations. The IT team also needs to devote a significant amount of time to working with the company’s security and compliance team to understand their concerns and what measures need to be taken to make the necessary adjustments to support the implementation of the solutions. For example, the IT organization needs to look at its change policies; some older policies could prevent the deployment of the process automation that is key to the SDDC solution. When these issues are identified from the beginning, IT can start the negotiation with the lines of business to either change the policies or create workarounds that will allow the solution to provide the expected value.

Performing these activities at the beginning of the project will allow IT leadership to make smart choices and avoid delays or workarounds when deploying future SDDC solutions.

3. Lack of maturity around the IT organization’s service management processes
The software-defined data center redefines IT infrastructure and enables the IT organization to combine technology and a new way of operating to become more service-oriented and more focused on business value. To support this transformation, mature service management processes need to be established.

After the assessment of current processes, the IT organization will be able to determine which processes will require a higher level of maturity, which will need to be adapted to the SDDC environment, and which are missing and will need to be established in order to support the new environment.

Special attention will be required for the following processes: financial management, demand management, service catalog management, service level management, capacity management, change management, configuration management, event management, request fulfillment, and continuous service improvement.

Ensure ownership is identified for each process, with KPIs and measurable metrics established—and keep the IT team involved as new processes are developed.

4. Managing the new solution as a retrofit within the current environment
Many IT organizations will embrace a new technology and/or solution only to attempt to retrofit it into their current operational model. This is typically a major mistake, especially if the organization is expecting better efficiency, more flexibility, lower cost to operate, transparency, and tighter compliance as potential benefits from an SDDC.

Organizations must assess their current requirements and determine whether they still apply to the new solution. Most processes, roles, audit controls, reports, and policies are in place to support the current/legacy environment, and each must be assessed to determine its purpose and value to the business, and whether it is required for the new solution.

IT leadership should ask themselves: If the new solution is going to be retrofitted into the current operational model, then why do we need a new solution?  What business problems are we going to resolve if we don’t change the way we operate?

My recommendation to my clients is to start lean, minimize the red tape, reduce complex processes, automate as much as possible, clearly identify new roles, implement basic reporting, and establish strict change policies. The IT organization needs to commit to minimize the number of changes to the new solution to ensure only changes that are truly required get implemented.

5. No assessment of the IT organization’s capabilities and no plan to fill the skill set gaps
The most important resource to the IT organization is its people. IT management can implement the greatest technologies, but their organizations will not be successful if their people are not trained and empowered to operate, maintain, and enhance the new solution.

The IT organization needs to first assess current skill sets, then work with internal resources and/or vendors to determine how the organization needs to evolve in order to achieve its desired state. Once that gap has been identified, the IT management team can develop an enablement plan to begin to bridge it. Enablement plans typically include formal “train the trainer” models to cascade knowledge within the organization, as well as shadowing vendors for organizational insight and guidance, along with knowledge transfer sessions to develop self-sufficiency. In some cases it may be necessary to bring in external resources to augment the IT team’s expertise.

In conclusion, implementing a software-defined data center solution will require a new approach to implementing processes, technologies, skill sets, and even IT organizational structures. I hope these practical tips on how to avoid common mistakes will help guide your successful SDDC solution implementations.

====
Jose Alamo is a senior transformation consultant with VMware Accelerate Advisory Services and is based in Florida. Follow Jose on Twitter @alamo_jose or connect on LinkedIn.

It’s Time for IT to Come Out of the Shadows

Chances are shadow IT is happening right now at your company. No longer content waiting for their companies’ IT help, today’s employees are taking matters into their own hands by finding and using their own technology to solve work challenges as they arise—a trend that likely isn’t fading into the shadows anytime soon.


10 Factors to Consider When Estimating IT Staff Ratios Needed to Operate a Cloud Platform

By Pierre Moncassin

In this post, I want to share with you some “rule of thumb” estimates on how many full-time equivalent (FTE) positions an IT organization may need to operate a cloud platform. Note: this is not an exact science, so I wanted to give you the practitioner’s approach. What are the general guidelines? What do I need to take into account?

As a starting point, readers can learn more about the different roles in the cloud management team in the VMware white paper “Organizing for the Cloud.” Here I use the generic terms “administrator” and “operator” to broadly describe the technicians/analysts/operators who manage and configure the tools on a daily basis. Here’s my list of factors to consider when estimating IT staff ratios:

  1. Number of lines of business. It stands to reason that the higher the number of distinct business units (lines of business) using the cloud, the higher the number and complexity of workflows to support, and the more user profiles to manage, reports to produce, and so forth.
  2. Number of data centers. If the toolsets must manage multiple data centers, there is added complexity in managing multiple environments, often in different locations.
  3. Level of staff skill/experience. The more experienced the operators, the larger and more complex the infrastructure they can manage. In other words, IT should require fewer FTEs to manage the same level of complexity in a cloud infrastructure. (This is a topic that deserves a separate article: “How the IT Organization Learns to Use Cloud Management Tools — and Over Time.”)
  4. Number of services. By this I mean cloud-type services, as in IT-as-a-service or applications. As a starter, determine how many services will be offered in the cloud service catalog.
  5. Workflow complexity. Factor in the internal complexity of the automated workflows. For example, on a scale of 1-5 (5 being most complex), a workflow with multiple approval points might score as 5, whereas a basic workflow as 1.
  6. Internal process complexity. Within IT, the organization with a higher number of mandatory internal process steps (which might all be in place for good reason) will likely need more staff (or it will take their staff longer) to carry out the same tasks as the organization with fewer internal process steps. A higher degree of complexity often develops in highly regulated environments, be it defense or civil administrations, or where an outsourcing provider requires rigid contractual relationships with inflexible approvals. Process and workflow complexity are related but separate considerations (not all processes are automated into workflows).
  7. Number of third-party integrations. The more integrations that need to be built into the automation workflows, the higher the workload for the operators.
  8. Rate of change. Change may be due to business change (mergers, acquisitions, new products, new applications), but also technological change (such as internal transformation programs). These may impact FTE requirements.
  9. Number of virtual machines under management. It may help to group VMs into broad ranges: less than 100, 100 to 1,000, 1,000 to 10,000, and above 10,000. That range will impact FTE requirements.
  10. Number of user dashboards/reports to maintain. This can range from a couple of basic reports to dozens of dashboards and complex reports. If the reporting is not sufficiently automated, the “unfortunate” administrators may need to spend a substantial part of their time producing custom reports for various user groups.

For those readers keen on modeling, each factor I’ve provided can quite easily be prorated on a 1-to-5 scale and turned into a formula; see the sketch below. Others may be satisfied with applying them as simple rules of thumb.
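
To illustrate that modeling approach, here is a minimal sketch of such a formula. Everything numeric in it is an assumption: the weights, the baseline, and the sample scores are placeholders that a practitioner would calibrate against real staffing data, not figures from any actual engagement.

```python
# Factors from the list above, each scored 1 (simplest) to 5 (most complex).
# The weights and baseline below are illustrative assumptions only.
FACTOR_WEIGHTS = {
    "lines_of_business": 0.4,
    "data_centers": 0.3,
    "staff_experience": -0.5,  # more experience lowers the estimate
    "services": 0.4,
    "workflow_complexity": 0.5,
    "process_complexity": 0.5,
    "third_party_integrations": 0.4,
    "rate_of_change": 0.3,
    "vms_under_management": 0.6,
    "reports_to_maintain": 0.3,
}

def estimate_ftes(scores: dict, baseline: float = 2.0) -> float:
    """Rule-of-thumb FTE estimate: a baseline plus weighted factor scores."""
    for name, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{name} must be scored 1-5, got {score}")
    return baseline + sum(FACTOR_WEIGHTS[name] * scores[name] for name in scores)

# Example: an environment of average complexity with 1,000-10,000 VMs.
scores = {name: 3 for name in FACTOR_WEIGHTS}
scores["vms_under_management"] = 4
print(f"Estimated operations FTEs: {estimate_ftes(scores):.1f}")
```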

My approach can be extended to the VMware vRealize Automation and vRealize Operations management products, as well as other management tools. Stay tuned for a future article, as I am also working to break down the roles far more precisely than “administrators.”

Meanwhile, consider the above factors I’ve outlined as basic guidelines. And a call to action for practitioners: Compare my guidelines to your metrics, and send me your feedback!

====
Pierre Moncassin is an operations architect with the VMware Operations Transformation global practice and is based in the UK.

Transforming Operations to Optimize DevOps

By Ahmed Al-Buheissi

DevOps. It’s the latest buzzword in IT and, as usual, the industry is either skeptical or confused as to its meaning. In simple terms, DevOps is a concept that allows IT organizations to develop and release software rapidly. By acknowledging the pressure the Development and Operations teams within IT place on each other, the DevOps approach enables the two teams to work closely together. IT organizations put policies for shared and delegated responsibilities in place, with an emphasis on communication, collaboration, and integration.

Developers have no problem writing code and pushing it out; however, their demand for infrastructure causes conflict with the Operations team. Traditionally it is the Operations team that releases code to the various environments, including Development, Test, UAT, and Production. As developers want to continuously push functionality through the various environments, it is only natural that Operations gets inundated with requests for more infrastructure. When you add Quality Assurance teams into the mix, efficiency is negatively impacted.

Why the rush to release code?
Rapid application development is now a requisite. The face of IT is changing very quickly and will continue to change even faster. Businesses need to innovate fast and introduce products and services into the market to beat the competition and meet the demands of their customers.

Here are four reasons rapid application development and release is fundamental:

  1. This is the social media age. Bad code and bugs can no longer be ignored and scheduled for future major releases; when defects are found, word will spread fast through Twitter and blogs.
  2. Mobile applications are changing the way we work and require a different kind of design—one that fits on a smaller screen and is intuitive. If a user doesn’t like one application, they’ll download the next.
  3. Much of the software developed today is modular and highly dependent on readily-available modules and packages. When an issue is discovered with a particular module, word spreads fast among user communities, and solutions need to be developed immediately.
  4. Last and most important, this is the cloud era. The very existence of the Operations team is at stake, because if it cannot provide infrastructure when Development needs it, developers will opt to use a publicly available cloud service. It is that easy.

So what is DevOps again?
DevOps is not a “something” that can be purchased — it’s an approach that requires new ways of working as an IT organization. As an IT leader, you will need to “operationalize” your Development team and bring them closer to your Operations team. As an example, your developers will need the capability to provision infrastructure based on new operations policies. DevOps also means you will need to move some of your development functionalities to the Operations team. For example, the Operations team will need to start writing workflows and associated scripts/code that will be used to automate the deployment process for the development team.

While there are adequate tools that will facilitate the journey to DevOps, DevOps is more about processes and people.

How to implement DevOps
The IT organization needs to undergo both people and process changes to implement DevOps — and it cannot happen all at once — the change needs to be gradual. It is also very difficult to measure “DevOps maturity.” As an IT leader, you will know it when your organization becomes DevOps capable — it happens when your developers have the necessary tools to release software at the speed of business, and your Operations team is focused on innovation rather than being reactive to infrastructure deployment requirements.

Also, your test environment will evolve to a “continuous integration” environment, where developers can deploy their code and have it tested in an automated and continuous process.

I make the following recommendations to my clients for process, people, and tools required for a DevOps approach:

Process
The diagram below illustrates a process for DevOps, in which the Operations team develops automated deployment workflows, and the Development team uses the workflows to deploy to the Test and UAT environments. The final deployment to production is carried out by the Operations team; in fact Operations should continue to be the only team with direct access to production infrastructure.

Service Release Process – Service Access Validation

However, it is critical that Development have access to monitoring tools in production to allow them to monitor applications. These monitoring tools may allow tracking of application performance and its impact on underlying infrastructure resources, network response, and server/application log files. This will allow your developers to monitor the performance of their applications, as well as diagnose issues, without having to consume Operations resources.

Finally, it is assumed that the DevOps tools and workflows will be used for all deployments, including production. This means that the Development and Operations teams must use the same tools to deploy to all environments to ensure consistency and continuity as well as “rehearse” the production release.
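
As a concrete illustration of that principle, the sketch below models a single shared deployment workflow with production access restricted to Operations. It is a simplified assumption of how such gating might look, not a representation of any specific tool; the environment names and role model are invented.

```python
from dataclasses import dataclass

# Environments and the roles allowed to deploy to each; this permission
# model is an assumption for illustration.
ALLOWED_ROLES = {
    "development": {"dev", "ops"},
    "test": {"dev", "ops"},
    "uat": {"dev", "ops"},
    "production": {"ops"},  # only Operations deploys to production
}

@dataclass
class Release:
    app: str
    version: str

def deploy(release: Release, environment: str, role: str) -> None:
    """Run the same workflow for every environment, so each
    pre-production deployment rehearses the production release."""
    if role not in ALLOWED_ROLES[environment]:
        raise PermissionError(f"role '{role}' may not deploy to {environment}")
    # Placeholder for the real automated steps: provision, configure, validate.
    print(f"Deploying {release.app} {release.version} to {environment} as {role}")

release = Release("billing-api", "2.4.1")
deploy(release, "uat", "dev")         # Development pushes to UAT on its own
deploy(release, "production", "ops")  # Operations performs the final release
```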

People

The following roles are the main players in facilitating a DevOps approach:

  • Operations: The DevOps process starts with the Operations team. Their first responsibility is to develop workflows that will automate the deployment of a complete application environment. In order to develop these workflows, Operations must join the development cycle earlier and will therefore have to work more closely with Development in order to understand their infrastructure requirements.
  • Development: The Development team will use their development environment to determine the infrastructure required for the application; for example database version, web server type, and application monitoring requirements. This information will assist the Operations team in determining the capacity required and in developing the deployment workflows. It will help with implementing the custom dashboards and metrics reporting capabilities Development needs to monitor their applications. The Development team will be able to develop and deploy to the “continuous integration” and UAT environments without having to utilize Operations resources. They can “rip and replace” applications to these environments as many times as needed by QA and end-users in order to be production-ready.
  • Quality Assurance (QA): Because high-quality automated test scripts handle most testing in such an environment, the QA team can play a lesser role in a DevOps environment, spot-checking applications at random. QA will also need to test and verify the deployment workflows to ensure the infrastructure configuration used is as per the design.
  • End Users: End-user testing can likewise be reduced in a DevOps environment to random spot checks. However, once DevOps is in place, end users should notice a vast improvement in the quality and speed of the applications produced.

Tools
VMware vRealize™ Code Stream™ targets IT organizations that are transforming to DevOps to accelerate application releases for business agility. Some of the features it offers include:

  • Automation and governance of the entire application release process
  • A dashboard for end-to-end visibility of the release process across Development and Operations organizations
  • Artifact management and tracking

For IT leaders, vRealize Code Stream can help transform the IT organization through a DevOps approach. The “continuous integration” cycle is a completely automated package that will deploy, validate, and test applications being developed.

DevOps can also benefit greatly from using platform-as-a-service (PaaS) providers. Developing and releasing software on PaaS guarantees consistency, because the platform layer (as well as the layers beneath it) is always the same. Pivotal CF, for example, allows users and DevOps teams to publish and manage applications running on the Cloud Foundry platform across distributed infrastructure.

Conclusion
Although DevOps is a relatively new concept, it’s really just the next step after agile software development methods. As the workforce becomes more mobile, and social media brings customers and users closer, it’s necessary for IT organizations to be able to quickly release applications and adapt to changing market dynamics. (Learn how the VMware IT DevOps teams are using the cloud to automate dev test provisioning and streamline application development in the short video below.)

Many organizations have tackled the issues associated with running internal development teams by outsourcing software development. I now see the reverse happening, as organizations want to reach the market more quickly and have started to build internal development teams again.

For the majority of my clients, it’s not a matter of “if” but “how quickly” will they introduce DevOps. By adopting DevOps principles, their development teams will be able to efficiently release features as demanded by the business, at the speed of business.

====
Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.

 

Leveraging Proactive Analytics to Optimize IT Response

By Rich Benoit

While ushering in the cloud era means a lot of different things to a lot of different people, one thing is for sure: operations can’t stay the same. To leverage the value and power of the cloud, IT organizations need to:

  1. Solve the challenge of too many alerts with dynamic thresholds
  2. Collect the right information
  3. Understand how to best use the new alerts
  4. Improve the use of dynamic thresholds
  5. Ensure the team has the right roles to support the changing environment

These steps can often be addressed by using the functionality within VMware vRealize Operations Manager, as described below.

1) Solve the challenge of too many alerts with dynamic thresholds
In the past, when we tried to alert based on the value of a particular metric, we found that it tended to generate too many false positives. Since false positives tend to lead to alerts being ignored, we raised the hard threshold for the alert until we no longer got false positives. The problem was that users were then calling in before the alert actually triggered, defeating the purpose of the alert in the first place. As a result, we tended to monitor very few metrics because of the difficulty in finding a satisfactory result.

However, now we can leverage dynamic thresholds generated by analytics. These dynamic thresholds identify the normal range for a wide range of metrics, based on the results of competing algorithms that each try to best model the behavior of a given metric over time. Some algorithms are based on time, such as day of the week, while others are based on mathematical formulas. The result is a range of expected behavior for each metric for a particular time period.

One of the great use cases for dynamic thresholds is that they identify the signature of applications. For example, they can show that the application always runs slow on Monday mornings or during month-end processing. Each metric outside of the normal signature constitutes an anomaly. If enough anomalies occur, an early warning smart alert can be generated within vRealize Operations Manager that indicates that something has changed significantly within the application and someone should investigate to see if there’s a problem.
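
The sketch below conveys the general idea behind a dynamic threshold; it is not how vRealize Operations Manager actually computes its ranges (the product runs several competing algorithms, as described above). Here a normal band is learned from invented history for one time slot, and values outside the band count as anomalies.

```python
from statistics import mean, stdev

def normal_range(history, k=2.0):
    """Expected band for a metric: mean plus/minus k standard deviations."""
    m, s = mean(history), stdev(history)
    return m - k * s, m + k * s

# Invented Monday-morning response times (ms) for one transaction.
monday_history = [820, 790, 860, 845, 805, 830, 815]
low, high = normal_range(monday_history)

observed = 1450.0
if low <= observed <= high:
    print(f"{observed} ms is expected behavior ({low:.0f}-{high:.0f} ms)")
else:
    print(f"Anomaly: {observed} ms is outside the normal range "
          f"({low:.0f}-{high:.0f} ms)")
```

The same comparison answers the “it’s slow today” call discussed in the next section: a value inside the band is expected behavior, while a value outside it justifies a ticket.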

2) Collect the right information
As we move from more traditional, client-server era environments to cloud era environments, many teams still use monitoring that has been optimized for the previous era (and tends to be siloed and component-based, too).

It’s not enough to just look at what’s happening with a particular domain or what’s going on with up-down indicators. In the cloud era, you need to look at performance that’s more aligned with the business and the user experience, and move away from a view focused on a particular functional silo or resource.

By putting those metrics into a form that an end user can relate to, you can give your audience better visibility and improve their experience. For example, if you were to measure the response time of a particular transaction, when a user calls in and says, “It’s slow today,” you can check the dynamic thresholds generated by the analytics that show the normal behavior for that transaction and time period. If indeed the response times are within the normal range, you can show the user that although the system may seem slow, it’s the expected behavior. If on the other hand the response times are higher than normal, a ticket could be generated for the appropriate support team to investigate. Ideally, the system would have already generated an alert that was being researched if a KPI Smart Alert had been set up within vRealize Operations Manager for that transaction response time.

3) Understand how to best use the new alerts

You may be wondering: Now that I have these great new alerts enabled by dynamic thresholds, how can I best leverage them? Although they are far more actionable than previous metric-based alerts, the new alerts may still need some form of human interaction to make sure that the proper action is taken. For example, it is often suggested that when a particular cluster in a virtualized environment starts having performance issues, an alert should be generated that bursts its capacity. The problem with this approach is that although performance issues can indicate a capacity issue, they can also indicate a break in the environment.

The idea is to give the user as much info as they need when an alert is generated to make a quick, well-informed decision and then have automations available to quickly and accurately carry out their decision. Over time, automations can include more and more intelligence, but it’s still hard to replace the human touch when it comes to decision making.
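
As a small sketch of that pattern (the action names and wiring are invented, not a vRealize Operations feature), an alert handler can present the information, let a human decide, and then execute the chosen remediation consistently:

```python
# Invented remediation actions wired to a cluster performance alert.
def burst_cluster_capacity(cluster: str) -> None:
    print(f"Adding capacity to {cluster}")  # placeholder for real automation

def open_investigation(cluster: str) -> None:
    print(f"Opening an incident ticket for {cluster}")

ACTIONS = {"burst": burst_cluster_capacity, "investigate": open_investigation}

def handle_alert(cluster: str, decision: str) -> None:
    """A human makes the call; the automation carries it out consistently."""
    ACTIONS[decision](cluster)

# The operator reviews the alert details first, then chooses an action.
handle_alert("cluster-07", "investigate")
```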

4) Improve the use of dynamic thresholds
A lot of monitoring tools are used after an issue materializes. But implementing proactive processes gives you the opportunity to identify or fix an issue before it impacts users. It’s essential that the link to problem management be very strong so processes can be tightly integrated, as shown in figure 1.

Figure 1: Event incident problem cycle

During the Problem Management Root Cause Analysis process, behaviors or metrics are often identified that are leading indicators for imminent impacts to the user experience. As mentioned earlier, vRealize Operations Manager, as the analytics engine, can create both KPI and Early Warning smart alerts, at the infrastructure, application, and end-user level to alert on these behaviors or metrics. By instrumenting these key metrics within the tool you can create actionable alerts in the environment.

5) Ensure the team has the right roles to support the changing environment
With the newfound abilities enabled by an analytics engine like vRealize Operations Manager, the right roles and structure become more critical. As shown in Figure 2 below, the analyst role should be there to identify and document opportunities for improvement, as well as report on the KPIs that indicate the effectiveness of the alerts already in place. In addition, developers are needed to develop the new alerts and other content within vRealize Operations Manager.

Figure 2: New roles to support the changing environment

In a small organization, one person may be performing all of these functions, while in a larger organization, an entire team may perform a single role. This structure can be flexible depending on the size of the organization, but these roles are all critical to leveraging the capabilities of vRealize Operations Manager.

By implementing the right metrics, right KPIs, right level of automation, and putting the right team in place, you’ll be primed for success in the cloud era.

====
Richard Benoit is an Operations Architect with the VMware Operations Transformation global practice.

Marketing and Communications of a Successful IT Provider

By Alex Salicrup

Communication is the single most important pillar of being a service-driven IT organization. While technical aptitude and service are both vital, being able to communicate effectively about value internally and to consumers is the key to IT becoming a true business partner.

IT has always struggled because its culture is one of fragmented thought leadership, not to mention that those in the IT profession are often reactive, detail-oriented, and risk averse. Overcoming these obstacles requires careful management of IT’s internal brand.

Traditional IT is control-driven and customized. Go to a third-party cloud service provider and knock on the door, and they aren’t going to hand you a customized solution. The majority of them have a solution that they have predicted you will need. They have created a small number of services that will satisfy most of their consumers.

Now is the time to take a cue from those vendors and shift to a service-oriented model of IT by truly understanding user needs and perceptions first, then designing services around them. Manage IT like it’s your own business. Be competitive, proactive, and innovative. Manage customer perceptions. Remember that risks are opportunities.

Change is Difficult
It’s difficult to change negative perceptions, but marketing campaigns do that every day; they are designed to put a new, positive perception in your head. It’s time to start your own IT marketing campaign to manage how your company views IT and help foster change.

Here are the five components you’ll need to think about as you start your IT brand campaign.

  1. Brand:
    Admit where you are now and where you want your brand to go. Your name, symbol, and color palette are all part of the perception. So are all of the ways you communicate, including emails and templates.
  2. Catalyst of change:
    What are the reasons your stakeholders would want change? The place where this is the biggest problem is within IT itself.
  3. Vision:
    In order to create a good catalyst, you need a vision that you can communicate. “We have to change because…” Many people are nervous about cloud, for example, but there is an opportunity for it to be that positive catalyst for change, that differentiator that tackles business issues, not just IT issues. Your vision needs to be something that is trackable. It can’t be something too absolute, like being the best cloud provider in the world. You will also need to determine who will communicate the vision and to whom.
  4. Targeted services:
    Know your niche. There are all sorts of cloud services available, so find out what the needs of your consumers are and your value proposition to them. A lot of times in IT, we buy the architecture first and then tell people what their needs are. Now that consumers have options, that strategy is not competitive.
  5. Effective communication:
    A cohesive message to communicate with different audiences to help position service values.

Let’s look a little more closely at the three types of individuals you will be communicating with across your organization. Of course, the first step is to get your own people on board: they will have to eat the dog food you are going to sell.

  • The complacent are happy with the status quo; they are the most resistant to change and unwilling to look at the benefits of change. If you tell them they are going to do something new, they say “no way.” They pose the biggest threat to consumer adoption at your organization.
  • The blind followers, on the other hand, can get behind any vision but aren’t able to articulate it. They are tactical so the high-level vision is likely too broad for them.
  • Lastly, you may have a small group of competent followers who may be emerging leaders or IT loyalists (or both). They understand the business units, and are highly interested in the team and organizational results. They can help you manage the other two groups.

Go out and create evangelists. Executives and directors cannot carry the whole load. The individual contributors—those who will be using the services—can be your most influential advocates.

Pave the Way Forward
Now that we’ve looked at the individual types of stakeholders and the five components of your brand campaign, let’s take a look at how to get your message across.

Acknowledge IT’s current state

  • Tell stakeholders your transformation plans from start to finish.
  • Admit challenges to make IT more credible.

The plan should communicate:

  • Product or service
  • Target consumers
  • Your competition and how IT compares
  • IT services value over competition

Three main stages:

  1. Identify critical success factors: What must go right in order to meet forecasts and grow?
  2. Value proposition: Which aspects of your products make the IT consumer focus on the services rather than the prices?
  3. Prepare a service uptake forecast: Lay out the best path that IT can realistically achieve.

These IT marketing concepts may seem simple or common sense, but they are also reasonable and achievable. When you prove value through effective communications and marketing, the business starts looking at you like a true partner.

=====
Alex Salicrup is a transformation strategist with VMware Accelerate Advisory Services and is based in California.

The Evolution of the SDLC

By Kai Holthaus

The definition of SDLC is changing. SDLC often stands for the “software development life cycle,” a methodology to develop and implement software, currently in use by many IT organizations. IT organizations have realized, however, that this narrow focus on software is insufficient in today’s IT service delivery.

Figure 1: The SDLC Continuum

In my opinion, the next logical evolutionary step for an SDLC is to look at a “solution development life cycle,” which not only considers functionality requirements for the software, but also requirements for the underlying hardware and infrastructure systems, such as storage or networks. Solution development focuses on a more complete solution, but there is still further to go in maturing the approach. Ultimately, SDLC should really stand for “service development life cycle,” with a goal of developing, implementing, maintaining, and supporting all aspects of an IT service in order to bring real value to IT customers.

Software Development Life Cycle
Project teams often use a form of a software development life cycle to provide functionality in the form of software to users. The goal of the SDLC in this form is to use repeatable, predictable processes that improve software development productivity and software quality. Project teams will commonly incorporate aspects of project management frameworks into the SDLC, because without effective project management, it is very easy to deliver software projects late and/or over budget.

This approach to SDLC typically uses a methodology that will take the software development through multiple phases, such as planning, requirements, design, building, testing, deploying, and maintaining. The phases may be organized in a waterfall model, in a spiral model, or a combination of the two. Additionally, project teams may incorporate rapid application development or agile methods, such as Scrum. There are a number of publicly available standards that can be applied to the SDLC, such as ISO 12207 (the international standard describing the method of selecting, implementing, and monitoring the life cycle for software), as well as process improvement guidance, such as the Capability Maturity Model Integration for Development (CMMI-DEV) or ISO 15504 (Information Technology Process Assessment).

It is important to note that the software development life cycle is really only concerned with software functionality. The framework provides guidance for developing this functionality regardless of the underlying systems, such as the servers or network that the software needs in order to function. While the SDLC in this form might (and should) describe the requirements for such systems, the provisioning of such systems is typically not a part of this form of the SDLC. This does not necessarily mean that these requirements are not addressed at all, but typically the project team deals with them in a very separate way. This can potentially lead to miscommunication and a lack of coordination between different groups, and eventually to poorly delivered results. Another pitfall in such an approach is that the infrastructure side is handled in an entirely ad hoc way, which can leave the project team forced to fix things up in production, or result in severely under-performing services because the infrastructure cannot support what is truly needed.

Solution Development Life Cycle
In a solution development life cycle (sometimes also known as systems development life cycle), the scope of the methodology is expanded from a narrow focus on software functionality to include the underlying systems, such as hardware and infrastructure. The SDLC in this form is seen as a process to develop an information system, aiming to produce a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, and is inexpensive to maintain and cost-effective to enhance.

The solution development life cycle approach will be similar to the software development life cycle, in that phases for requirements, design, development, building, testing, deploying, and maintaining will be defined by the project team, and in that guidance from project management methodologies or process improvement methodologies can also be incorporated. However, the focus here is still on providing software functionality to users and customers at a given point in time, and not business value in the form of delivering ongoing services.

The benefit of a solution development life cycle over a software development life cycle is that requirements for underlying systems are defined along with requirements for software functionality, and the entire solution will be developed, thus reducing the risk that these underlying systems can derail the delivery of the desired functionality late in the life cycle. The solution development life cycle therefore ensures a more complete view of how the software functionality is delivered, thereby improving the user experience.

Service Development Life Cycle
Further maturing the SDLC leads to a true service development life cycle, which, while still concerned with the software application(s) needed for success, focuses on the definition, design, build, operation, and improvement of a complete IT service, providing outcomes that customers want to achieve. This view is much bigger than the view being taken in the software or solutions development life cycle. The central idea is for the project team to figure out the end results that their customers need to accomplish and then deliver and manage IT services to achieve those outcomes. This holistic approach requires the team to consider not only the technical aspects of the service, but also the non-technical aspects such as training, documentation, support, communications, or processes.

For Example…
Here’s an example to further illustrate the difference between these three approaches. Let’s take a look at a payroll application. When using software development life cycle methods, the project team’s focus is on functionality provided by the application. For instance, an improvement to the payroll application might be to expand state tax calculation from handling a single state to also include other states, because the company will now also open offices in more states. The focus is solely on the calculations in the software.

Moving to the solutions development life cycle approach, more aspects are looked at for this change. Since adding this functionality most likely means more users and more employee records being managed by the application, the project team would also consider additional space requirements for the database storing the information about employees, additional network bandwidth that may be required for additional users, and more CPU power being needed for those users.

Taking a service development life cycle approach would require that the project team understand the outcomes customers want to achieve (e.g., “process payroll for all employees”), taking a holistic view of the current IT service landscape, and then determining how those outcomes can be best achieved within that landscape. Besides just application functionality, other aspects of service delivery come into focus, such as availability or continuity requirements, training, support, and even marketing new capabilities in the organization.

In conclusion, the evolution of the SDLC takes us from the traditional “software development life cycle” with its focus on developing and implementing functionality provided by software to defining, designing, implementing, and maintaining services that provide value to IT customers. Doing so means expanding the processes and roles described in these life cycles to ensure this value can be realized.

====
Kai Holthaus is a transformation consultant with VMware Accelerate Advisory Services and is based in Oregon. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

A Click Away—The IT Vending Machine Experience

IT users increasingly expect a more consumer-like enterprise technology experience. A user-friendly, cloud-based experience will increase productivity, efficiency, and customer satisfaction.

This infographic reveals why the consumerization of IT is a key development that organizations can leverage to ensure that IT is successful.

 

Infographic: VMware On-Demand IT Services

5 Steps to Shape Your IT Organization for the Software-Defined Data Center

by Tim Jones

One aspect of the software-defined data center (SDDC) that is not solved through software and automation is how to support what is being built. The abstraction of the data center into software managed by policy, integrated through automation, and delivered as a service directly to customers requires a realignment of the existing support structure.

The traditional IT organizational model does not support bundling compute, network, storage, and security into easily consumable packages. Each of these components is owned by a separate team with its own charter and with management chains that don’t merge until they reach the CTO. The storage team is required to support the storage needs of the virtualized environment as well as physical servers, the backup storage, and replication of data between sites. The network team has core, distribution, top of rack, and edge switches to support in addition to any routers or firewalls. And someone has to support the storage network whether it is IP, InfiniBand, or Fibre Channel. None of these teams has only the software-defined data center to support. The next logical question asked is: What does an organization look like that can support SDDC?

While there is no simple answer that allows you to fill a specific set of roles with staff possessing skill sets from a checklist, there are many organizational models that can be modified to support your SDDC. In order to modify an organizational model, or to build your own model to meet your IT organization’s requirements, certain questions need to be answered. Working through the following five steps will help shape your new organizational model:

  1. Define what your new IT organization will offer.
    Although this sounds elementary, it is necessary to understand what is planned to be offered in order to know what support will be required. Will infrastructure as a service (IaaS) be the only offering, or will database as a service (DBaaS) and platform as a service (PaaS) also be offered? Does support stop at the infrastructure layer, or will operating system, platform, or database support be required? Who will the customer work with to utilize the services or to request and design additional services?
  2. Identify the existing organizational model.
    A thorough understanding of the existing support structure will help identify what support customers will expect based on their current experience, and any challenges associated with the model. Are there silos within it that negatively impact customers? What skills currently exist in the organization? Identifying the existing organization and defining what will be offered by the new organization will help to identify what gaps exist.
  3. Leverage what is already working.
    If there are components of the existing organization that can either be replicated or consumed by the new organization, take advantage of the option. For example, if there is already a functioning group that works with the customers and supports the operating system, then evaluate how to best incorporate them into the new organization. Or if certain support is outsourced, then incorporate that into the new organizational model.
  4. Evaluate beyond the technical.
    The inclusion of service architects, process designers, business analysts, and project managers can be critical to the success of your new organization. These resources could be consumed from existing internal groups such as a central PMO. But overlooking the non-technical organizational requirements can inhibit the ability of the IT organization to deliver on its service roadmap.
  5. Create a new IT organization.
    Don’t accept the status quo with your current organization. If the storage, compute, and virtualization teams all report through separate management chains in the current organization, the new organization should leverage a single management chain for all three teams. Removing silos within the IT organization fosters a collaborative spirit that results in better support and better service offerings for customers.

Although there is no one size fits all organizational model for the software-defined data center, understanding where your IT organization is currently and where it is headed will enable you to create an organizational model capable of supporting the service roadmap.

====
Tim Jones is a business transformation architect with VMware Accelerate Advisory Services and is based in California.