
Tag Archives: software-defined data center

Best Practice Approaches to Transformation with the Software-Defined Data Center

By: Kevin Lees

VMworld is almost upon us. Technology continues, of course, to be a key enabler in helping IT on its transformation journey, whether that journey is toward offering IT as a service, moving more fully to cloud, supporting cloud-native applications and continuous delivery, adopting DevOps, or any number of other initiatives focused on providing increasing value to the business. It's also the primary reason you attend VMworld. But as we work with our customers across the world, we continue to see how integral people and process changes are to making this journey successful and to truly providing the value the business is increasingly demanding of IT.

To help you make the most of the great technology and solutions VMware provides and will be showcasing at VMworld, we’re hosting an Operations Transformation track again this year. As the Principal Architect for our Global Operations Transformation Practice, this track is near and dear to my heart. We have a great lineup of sessions focused on the practical aspects of applying organizational, people, and process change to get the most out of VMware’s technology.

I have several sessions this year, but one that might be of particular interest is focused on the key best practices we’ve learned while helping some of our biggest IT customers transform their value proposition to their business customers by deploying a Software-Defined Data Center (SDDC). I’ll discuss what not to do and what to watch out for, as well as what you should do to be successful. I’ll present lessons learned through real customer examples (though the names will be changed to protect the innocent) and provide guidance on how you can avoid learning the same lessons – the hard way. Of course I’ll address the organizational, people, and process aspects but will also dive into some of the technical challenges we overcame or avoided along the way. This particular session is OPT 5361 on Wednesday at 11 a.m.

I know you're looking forward to VMworld as much as I am. I hope to see you in one or more of my sessions, but more importantly, check out the Operations Transformation track in the online Schedule Builder. You won't be disappointed, and you could be the hero "back at the office," because IT's success isn't just about the technology.

Quick guide to my sessions:

Tuesday Sept. 1

  • 12:30 OPT 4743 Organizational Change Group Discussion
  • 2:30 OPT 4992 vRealize CodeStream: Is DevOps about Tools or Transformation?

Wednesday Sept. 2

  • 8:00 OPT 5232 Cloud Native Apps, MicroServices and Twelve-Factor Apps: What Do They Mean for your SDDC/Cloud Ops?
  • 11:00 OPT 5361 Best Practice Approaches to Transformation with the SDDC
  • 2:30 OPT 5972 80K VMs and Growing: VMware's Internal Cloud Journey Told by the People on the Frontlines

=====
Kevin Lees is principal architect for VMware’s global Operations Transformation Practice and is based in Colorado.

When to Engage Your Organization in Their Cloud Journey

By Yohanna Emkies

The most common question I hear from my customers is: "What's going to happen to me (read: my organization) if we introduce the cloud?" Closely followed by: "How are we going to begin the planning process…?" These are fair questions that have to be discussed and worked out.

A question that is often underestimated, although it's no less important than the "what" and the "how," is the "when." When is the right time to tackle operational readiness and organizational questions?

I notice two types of customers when it comes to addressing operational and organizational topics. Many simply omit or keep postponing the subject until they are in the midst of cloud technical go-lives. At some point they realize that they need to cover a number of basics in order to move on and are forced to rely on improvisation. I call them the "late awakeners."

Others—"early birds" keen to plan for the change—will come up with good questions very early on, but expect all the answers to be concrete before they even start their cloud journey. Here are my observations on each type:

1. Let’s start with the late awakeners.
Quite naturally, the customers I’m working with tend to focus on the technical aspects of the software-defined data center (SDDC), deploying all their resources, putting all the other things on hold, working hard for the technical go-live to succeed, until…

“Hold on a minute, who will take care of the operational tasks once the service is deployed? What is the incident management process? How are we going to measure our service levels? What if adoption is too rapid? And what if we don’t get enough adoption?”

In such cases, critical questions are raised very late in the process, when resources are already under pressure from heavy workloads and increasing uncertainty. These customers end up calling for our support urgently but at the same time find themselves unable to free up resources and attention to address the transformation. And when they do, they fail to look at the big picture, getting caught up in very short-term questions instead of defining services or processes properly.

Doing a first tour of these organizations and mapping the gaps, we may discover entire subjects that have been left aside because they are too complex to be addressed on the fly. Even worse, some subjects have already been treated because they were critical, but neither consciously nor fully. The teams may feel that they don't have time for these questions, and that they take focus away from the "important stuff." In reality, that's mostly because they are not aware that they are ALREADY spending a lot of time on these same questions; they just don't focus their effort on them.

That results not only in poor awareness and maturity at day 1, but also in a low capacity to grow that maturity over time, because no framework has been put in place.

Putting things back on track may eventually take more time and focus than if they had been addressed properly in the first place. But it is still feasible.

Clearly, it is an IT senior manager’s role to provide strategic direction, while project managers must include these important work streams in their planning from the start. Ultimately, it’s all part of one holistic project.

2. The early birds are also a tough catch.
From accompanying many organizations through different types of transformations, I cannot advocate loudly enough the need to plan and design before doing. Being mature in terms of the "what" before running to the "how" is undoubtedly the right approach.

A key lesson learnt is that in order to reorganize successfully for the cloud you have to accept some level of uncertainty while you are making your journey.

Some organizations get stuck upfront with one recurring question: “What will our future organization look like?” Relax.

First, no pre-set organization design, even one roughly customized to your needs, should be taken for granted. Second, no design, however accurate, will ever bring the move about by itself. It's the people who support the organization who are the critical success factor.

Don't get me wrong: giving insight, best practices, and direction will definitely help management envision the future organization, which is essential. But at the same time, an organization is a living thing by definition. There is also a psychological impact: when you start raising words like "people" and "organization," concern and fear about change come with them.

Sometimes it is even trickier, because some organizations are already (or still) in the midst of other transformations started a few years back and lingering on. In that case, the impression of "yet another change" may be perceived negatively by the core team, put them under stress, and stop them from moving forward. What if your team has just finished redesigning and implementing incident management processes, only to realize that they have to do it all again to adapt to the cloud?

It will take time for the organization to mature. Embracing the cloud is a big change, but no drastic overnight revolution will take you there. Moving to the cloud is not "yet another re-org" but an ongoing, spreading move that relies on existing assets, and it's here to last.

Your organization will evolve as you grow, and your skills will improve as your service portfolio and cloud adoption increase. And this will happen organically as long as you put the right foundations in place: the right people, the right processes, the right metrics…and the right mindset.

The right balance to the "when" lies somewhere between the two behaviors of the late awakeners and the early birds. Here are some of the most important best practices that I share with my customers:

  1. Gain and maintain the full commitment of senior management sponsors who will support your vision and guarantee focus all along the journey.
  2. Plan your effort and get help: dealing with operational readiness and with technical readiness should be one holistic project, and for the most part, it involves the same people. The project has to integrate both streams together from the start and wisely split effort among the teams to avoid bottlenecks, rework, and wastage.
  3. Opt for an iterative approach: be strategic and pragmatic. Designing as much as you can while you start implementing your cloud, and then refining as you go, will provide a more agile approach and guarantee you reach your goals more efficiently.
  4. Practice full awareness: create a common language on the project, hit important communication milestones, and reward intermediate achievements, so people feel they contribute and see the progress. It is key that your cloud project is seen positively in the organization and that the people involved in it convey a positive image.
  5. Engage your people, engage your people, and engage your people.

As is often said, timing is everything. When dealing with people and their capacity to change, it's even more critical to find a balance between building momentum and keeping some distance. Your teams will need to embrace the vision, feel the success, and at some point also breathe…and when you empower them effectively throughout the process, you will have the best configuration for success.

=====
Yohanna Emkies is an operations architect in the VMware Operations Transformation Services global practice and is based in Tel Aviv, Israel.

4 Ways to Maximize the Value of VMware vRealize Operations Manager

By Rich Benoit

When installing an enterprise IT solution like VMware vRealize Operations Manager (formerly vCenter Operations Manager), supporting the technology implementation with people and process changes is paramount to your organization's success.

We all have to think about impacts beyond the technology any time we make a change to our systems, but enterprise products require more planning than most. Take, for example, the difference between installing VMware vSphere and installing an enterprise product. The users affected by vSphere generally sit in one organization, the toolset is fairly simple, little to no training is required, and the time from installation to extracting value is a matter of days. Extend this thinking to enterprise products and you have many more users and groups affected, a much more complex toolset, training required for most users, and weeks or months from deployment to extracting real value from the product. Breaking it down like this, it's easy to see the need to address supporting teams and processes to maximize value.

Here's a recent example from a technology client I worked with, one that is very typical of the customers I talk to. Management felt they were getting very little value from vRealize Operations Manager. Here's what I learned:

  • Application dashboards in vRealize Operations Manager were not being used (despite extensive custom development).
  • The only team using the tool was virtual infrastructure (very typical).
  • They had not defined roles or processes to enable the technology to be successful outside of the virtual infrastructure team.
  • There was no training or documentation for ongoing operations.
  • The customer was not enabled to maintain or expand the tool or its content.

My recommendations were as follows, and this goes for anyone implementing vRealize Operations Manager:

  1. Establish ongoing training and documentation for all users.
  2. Establish an analyst role to define, measure, and report on processes and effectiveness related to vRealize Operations Manager, and to establish relationships with potential users and process areas for vRealize Operations Manager content.
  3. Establish a developer role to create and modify content based on the analyst’s collected requirements and fully leverage the extensive functionality vRealize Operations Manager provides.
  4. Establish an architecture board to coordinate an overall enterprise management approach, including vRealize Operations Manager.

The key takeaway here: IT transformation isn’t a plug-and-play proposition, and technology alone isn’t enough to make it happen. This applies especially to a potentially enterprise-level tool like vRealize Operations Manager. In order to maximize value and avoid it becoming just another silo-based tool, think about the human and process factors. This way you’ll be well on the way towards true transformational success for your enterprise.

====
Rich Benoit is an Operations Architect with the VMware Operations Transformation global practice.

Building Service-based Cost Models to Accelerate Your IT Transformation

By Khalid Hakim

“Why is this so expensive?”

As IT moves towards a service-based model, this is the refrain that IT financial managers often hear. It’s a difficult question to answer if you don’t have the data and structure that you need to clearly and accurately defend the numbers. Fighting this perception, and building trust with the line of business, requires a change in how IT approaches cost management that will match the new IT-as-a-service format.

The first and most important step in building service-based cost models is defining what exactly a service is, and what it is not. For example, the onboarding process: is this a service, a process, or an application? Drawing the lines of what service means within your organization, and making it consistent and scalable, will allow you to calculate unit costs. Businesses are already doing cost management by department, by product, by technology, but what about the base costs, such as labor, facilities, or technology within a software-defined data center? Your final service cost should include all these components in a transparent way, so that other parts of the business can understand what exactly they are getting for their money.
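To make the arithmetic concrete, here is a minimal sketch (in Python) of how base costs can roll up into a transparent unit cost. All cost figures, categories, and volumes below are hypothetical; a real model would pull them from your financial systems:

```python
# Illustrative only: rolling hypothetical base costs up into a unit cost.

# Assumed monthly base costs (USD) allocated to one service
base_costs = {
    "labor": 42_000,       # allocated share of admin/engineering time
    "facilities": 8_500,   # power, cooling, floor space
    "hardware": 23_000,    # amortized compute, storage, and network gear
    "software": 11_000,    # licenses and support contracts
}

units_delivered = 640  # e.g., standard VMs consumed this month (assumed)

total_cost = sum(base_costs.values())
unit_cost = total_cost / units_delivered

for component, cost in base_costs.items():
    print(f"{component:12s} ${cost:>9,.2f}")
print(f"{'total':12s} ${total_cost:>9,.2f}")
print(f"Cost per unit: ${unit_cost:,.2f}")
```

Publishing the component breakdown alongside the unit price is what gives the line of business the transparency described above.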

Building these base costs into your service cost requires an in-depth look into how service-to-service allocation will work. For example, how do you allocate the cost of the network, which is delivered to desktops, client environments, wireless, VPN, and data centers? Before you can start to bring in a tool to automate costing out your services, map out how each service affects another, and define units and cost points for them. While it's often tempting to jump straight into service pricing and consider yourself done once it's complete, it's important to start with a well-defined service catalog, including costs for each service, and then to continue to manage and optimize once the pricing has been implemented. Service costing helps you classify your costs: to understand what is fixed, what is variable, what is direct, what is indirect, and so forth.
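Continuing the network example, here is a small sketch of one common approach: proportional allocation of a shared (indirect) cost by a consumption driver. The driver and all figures are assumptions for illustration only; your allocation keys will differ:

```python
# Illustrative only: allocating a shared network cost to consuming services
# in proportion to an assumed consumption driver (e.g., bandwidth share).

shared_network_cost = 60_000  # hypothetical monthly network cost (USD)

consumption = {  # hypothetical driver units per consuming service
    "desktops": 300,
    "client environments": 150,
    "wireless": 100,
    "VPN": 50,
    "data center": 400,
}

total_units = sum(consumption.values())
for service, units in consumption.items():
    allocated = shared_network_cost * units / total_units
    print(f"{service:20s} ${allocated:>10,.2f}")
```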

So we’ve allocated the shared cost (indirect cost in accounting language) of services across the catalog. Now it’s time to bring in the service managers—the people who really understand what is being delivered. Just as a manufacturing company would expect a product manager to understand their product end to end, service managers should understand their entire service holistically. Once you’ve built a costing process, the service manager should be able to apply that process to their service.

In the past, service managers have really only been required to understand the technology involved. Bringing them into this process may require them to understand new elements of their service, such as how to sell the service, what it costs, and how to market it. Mapping out the service in a visual way helps service managers understand their own service better, and it also identifies the points at which new costs should be built into the pricing model. Once you understand the service itself, decide how you want to package it, the SLAs around it, and what the cost of a single unit will be. When relevant, create pre-defined packages that customers will be able to choose from.

Once the costing has been implemented, you can circle back and use the data you're gathering to help optimize the costs. This is where automation can offer a lot of value. VMware vRealize Business (formerly IT Business Management Suite) helps you align IT spending with business priorities by giving you full transparency of infrastructure and application cost and service quality. At a high level, it helps you build "what if" cost models, which automatically identify potential areas for cost reduction through virtualization or consolidation. The dashboard view offers the transparency needed to quickly understand cost by service and to be able to justify your costs across the business.
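As a toy illustration of the kind of "what if" comparison such a tool automates (this is not vRealize Business itself, just a sketch with invented inputs), consider a simple consolidation scenario:

```python
# Illustrative only: a "what if" consolidation model with invented inputs.

physical_servers = 200
cost_per_physical = 9_000        # assumed annual fully loaded cost (USD)

consolidation_ratio = 10         # assumed VMs hosted per virtualized host
cost_per_virtual_host = 14_000   # assumed annual cost per host

current_cost = physical_servers * cost_per_physical
hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
proposed_cost = hosts_needed * cost_per_virtual_host

print(f"Current annual cost:  ${current_cost:,}")
print(f"Proposed annual cost: ${proposed_cost:,}")
print(f"Potential savings:    ${current_cost - proposed_cost:,}")
```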

Service-based cost models are a major component of full IT transformation, which requires more than just new technology. You need an integrated approach that includes modernization of people, process, and technology. In this short video below, I share some basic steps that you need to jumpstart your business acumen and deliver your IT services like a business.

For more in-depth guidance, you can also access my white paper: Real IT Transformation Requires a Real IT Service Costing Process, as a resource on your journey to IT as a service.

====
Khalid Hakim is an operations architect with the VMware Operations Transformation global practice and is based in Dallas. You can follow him on Twitter @KhalidHakim47.


How to Avoid 5 Common Mistakes When Implementing an SDDC Solution

By Jose Alamo

Implementing a software-defined data center (SDDC) is much more than implementing or installing a set of technologies: an SDDC solution requires clear changes to the organization's vision, policies, processes, operations, and organizational readiness. Today's CIO needs to spend a good amount of time understanding the business needs, the IT organization's culture, and how to establish the vision and strategy that will guide the organization to make the adjustments required to meet the needs of the business.

The software-defined data center is an open architecture that impacts the way IT operates today. As such, the IT organization needs to create a plan that will utilize the investments in people, process, and technology already made to deliver both legacy and new applications while meeting vital IT responsibilities. Below is a list of five common mistakes that I've come across working with organizations that are implementing SDDC solutions, and my recommendations on how to avoid their adverse impacts:

1. Failure to develop the vision and strategy—including the technology, process, and people aspects
Many times organizations implement solutions without setting the right expectation and a clear direction for the program. The CIO must use all the resources available within the IT organization to create a vision and strategy, and in some cases it is necessary to bring in external resources that have experience in the subject. The vision and strategy must align with the business needs, and it should identify the different areas that must be analyzed to ensure a successful adoption of an SDDC solution.

In my experience working with clients, it is imperative that a full assessment is conducted as part of the planning, and it must include the areas of people, process, and technology. A SWOT analysis should also be completed to fully understand the organization's strengths, weaknesses, opportunities, and threats. Armed with this insight, the CIO and IT team will be able to express the direction that must be taken to be successful, including the changes required across people, process, and technology.

Failing to complete this step will add complexity and a lack of clarity for those who will be responsible for implementing the solution.

2. Limited time spent reviewing and understanding the current policies
There are often many policies within the IT organization that can prevent moving forward with the implementation of SDDC solutions. In such cases, the organization needs to conduct an in-depth review of the current policies governing the business and IT day-to-day operations. The IT team also needs to ensure it spends a significant amount of time with the company's security and compliance team to understand their concerns and what measures need to be taken to make the necessary adjustments to support the implementation of the solutions. For example, the IT organization needs to look at its change policies; some older policies could prevent the deployment of the process automation that is key to the SDDC solution. When these issues are identified from the beginning, IT can start the negotiation with the lines of business to either change its policies or create workarounds that will allow the solution to provide the expected value.

Performing these activities at the beginning of the project will allow IT leadership to make smart choices and avoid delays or workarounds when deploying future SDDC solutions.

3. Lack of maturity around the IT organization’s service management processes
The software-defined data center redefines IT infrastructure and enables the IT organization to combine technology and a new way of operating to become more service-oriented and more focused on business value. To support this transformation, mature service management processes need to be established.

After the assessment of current processes, the IT organization will be able to determine which processes will require a higher level of maturity, which will need to be adapted to the SDDC environment, and which are missing and will need to be established in order to support the new environment.

Special attention will be required for the following processes: financial management, demand management, service catalog management, service level management, capacity management, change management, configuration management, event management, request fulfillment, and continuous service improvement.

Ensure ownership is identified for each process, with KPIs and measurable metrics established, and keep the IT team involved as new processes are developed.

4. Managing the new solution as a retrofit within the current environment
Many IT organizations will embrace a new technology and/or solution only to attempt to retrofit it into their current operational model. This is typically a major mistake, especially if the organization is expecting better efficiency, more flexibility, lower cost to operate, transparency, and tighter compliance as potential benefits from an SDDC.

Organizations must assess their current requirements and determine if they will be required for the new solutions. Most processes, roles, audit controls, reports, and policies are in place to support the current/legacy environment, and each must be assessed to determine its purpose and value to the business, and to determine whether it is required for the new solution.

IT leadership should ask themselves: If the new solution is going to be retrofitted into the current operational model, then why do we need a new solution?  What business problems are we going to resolve if we don’t change the way we operate?

My recommendation to my clients is to start lean: minimize the red tape, reduce complex processes, automate as much as possible, clearly identify new roles, implement basic reporting, and establish strict change policies. The IT organization needs to commit to minimizing the number of changes to the new solution, to ensure that only changes that are truly required get implemented.

5. No assessment of the IT organization’s capabilities and no plan to fill the skill set gaps
The most important resource to the IT organization is its people. IT management can implement the greatest technologies, but their organizations will not be successful if their people are not trained and empowered to operate, maintain, and enhance the new solution.

The IT organization needs to first assess current skill sets, then work with internal resources and/or vendors to determine how the organization needs to evolve in order to achieve its desired state. Once that gap has been identified, the IT management team can develop an enablement plan to begin to bridge it. Enablement plans typically include formal "train the trainer" models to cascade knowledge within the organization, as well as shadowing vendors for organizational insight and guidance, along with knowledge transfer sessions to develop self-sufficiency. In some cases it may be necessary to bring in external resources to augment the IT team's expertise.

In conclusion, implementing a software-defined data center solution will require a new approach to implementing processes, technologies, skill sets, and even IT organizational structures. I hope these practical tips on how to avoid common mistakes will help guide your successful SDDC solution implementations.

====
Jose Alamo is a senior transformation consultant with VMware Accelerate Advisory Services and is based in Florida. Follow Jose on Twitter @alamo_jose  or connect on LinkedIn.

5 Steps to Shape Your IT Organization for the Software-Defined Data Center

By Tim Jones

One aspect of the software-defined data center (SDDC) that is not solved through software and automation is how to support what is being built. The abstraction of the data center into software managed by policy, integrated through automation, and delivered as a service directly to customers requires a realignment of the existing support structure.

The traditional IT organizational model does not support bundling compute, network, storage, and security into easily consumable packages. Each of these components is owned by a separate team with its own charter and with management chains that don’t merge until they reach the CTO. The storage team is required to support the storage needs of the virtualized environment as well as physical servers, the backup storage, and replication of data between sites. The network team has core, distribution, top of rack, and edge switches to support in addition to any routers or firewalls. And someone has to support the storage network whether it is IP, InfiniBand, or Fibre Channel. None of these teams has only the software-defined data center to support. The next logical question asked is: What does an organization look like that can support SDDC?

While there is no simple answer that allows you to fill a specific set of roles with staff possessing skill sets from a checklist, there are many organizational models that can be modified to support your SDDC. In order to modify an organizational model, or to build your own model that meets your IT organization's requirements, certain questions need to be answered. Working through the following five steps will help shape your new organizational model:

  1. Define what your new IT organization will offer.
    Although this sounds elementary, it is necessary to understand what you plan to offer in order to know what is necessary to support it. Will infrastructure as a service (IaaS) be the only offering, or will database as a service (DBaaS) and platform as a service (PaaS) also be offered? Does support stop at the infrastructure layer, or will operating system, platform, or database support be required? Who will the customer work with to utilize the services or to request and design additional services?
  2. Identify the existing organizational model.
    A thorough understanding of the existing support structure will help identify what support customers will expect based on their current experience, as well as any challenges associated with the model. Are there silos within the organization that negatively impact customers? What skills currently exist in the organization? Identifying the existing organization and defining what the new organization will offer will help to identify what gaps exist.
  3. Leverage what is already working.
    If there are components of the existing organization that can either be replicated or consumed by the new organization, take advantage of the option. For example, if there is already a functioning group that works with the customers and supports the operating system, then evaluate how to best incorporate them into the new organization. Or if certain support is outsourced, then incorporate that into the new organizational model.
  4. Evaluate beyond the technical.
    The inclusion of service architects, process designers, business analysts, and project managers can be critical to the success of your new organization. These resources could be consumed from existing internal groups such as a central PMO. But overlooking the non-technical organizational requirements can inhibit the ability of the IT organization to deliver on its service roadmap.
  5. Create a new IT organization.
    Don’t accept the status quo with your current organization. If the storage, compute, and virtualization teams all report through separate management chains in the current organization, the new organization should leverage a single management chain for all three teams. Removing silos within the IT organization fosters a collaborative spirit that results in better support and better service offerings for customers.

Although there is no one-size-fits-all organizational model for the software-defined data center, understanding where your IT organization is currently and where it is headed will enable you to create an organizational model capable of supporting the service roadmap.

====
Tim Jones is business transformation architect with VMware Accelerate Advisory Services and is based in California.

A New Angle on the Classic Challenge of Retained IT

By Pierre Moncassin

When discussing organization models for managing cloud infrastructure with customers, I have come across situations where some, if not all, infrastructure services are outsourced to a third party. In these situations my customers often ask: does your (VMware) operating model still apply? Should I retain cloud-related skills in-house? If so, which ones?

The short answer is: Yes. The advice I give my customers is that their IT organization should establish a core organization modeled on the “tenant operations” team as defined in Organizing for the Cloud, a VMware white paper by my colleague Kevin Lees.

Let's assume a relatively simple scenario where a single outsourcer is providing "standard" infrastructure services such as computing, storage, and backups. In this scenario, the outsourcer has agreed to transform at least some of its services towards the software-defined data center (SDDC), which is by no means an easy step (I will return to that point later).

For now let’s also assume a cooperative situation where customer and outsourcer are collaboratively working towards a cloud model. The question is — what skills and functions should the customer retain in-house? Which skills can be handed over to the outsourcer?

The question is a classic one. In traditional infrastructure outsourcing, we would talk about a "retained IT" organization. For the SDDC environment, here are some skill groups that I believe have to be preserved within the core, in-house team:

  • Service Design and Self-service Provisioning is clearly a skill set to keep in-house. The in-house team must be able to work with the business to define services end-to-end, but the team should also be able to grasp accurately the possibilities that automation offers with software such as VMware vCloud Automation Center. Though I am not suggesting that the core team needs to be expert in all aspects of workflows, APIs, or scripting, they do need a solid grasp of the possibilities of automation.
  • Process Automation and Optimization. A solid working knowledge of automation software is useful but not enough. The in-house teams are required to decide which processes to automate and how. They need to make business-level decisions: Which processes are worth automating? What is the benefit of automation versus its cost? (A simple break-even sketch follows this list.)
  • Security and Compliance is often a top priority for cloud adopters. The cloud-based services need to align with enterprise policies and standards. The retained IT function must be able to demonstrate compliance and, where needed, enforce those standards in the cloud infrastructure.
  • Service Level Management and Trend Analysis. Whilst the retained IT organization does not need to be involved in the day-to-day monitoring and troubleshooting, it needs to be able to monitor key service levels. Specifically, the business users will be highly sensitive to the performance of some business-critical applications. The retained IT organization will need to keep enough knowledge of these applications and of performance monitoring tools to ensure that application performance is measured adequately.
  • Application Life Cycle (DevOps). We have assumed in our scenario an infrastructure-only outsourcing, with the skills for application development remaining in-house. In the SDDC environment, the tenant operations team will work closely with the application development teams. Amongst other skills, the retained IT team will need detailed knowledge not only of application provisioning, but also of the architectures, configuration dependencies, and patching policies required to maintain those applications.
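To illustrate the cost-versus-benefit question raised above, here is a minimal break-even sketch in Python. Every input is a hypothetical placeholder; plug in your own effort estimates and rates:

```python
# Illustrative only: a simple break-even test for "is this process worth
# automating?" All inputs below are hypothetical assumptions.

manual_minutes_per_run = 45      # hands-on time per manual execution
runs_per_month = 120             # how often the process runs
hourly_rate = 75.0               # loaded engineer cost (USD/hour)

automation_build_hours = 80      # one-time effort to build the automation
automation_maint_hours = 4       # ongoing upkeep per month

monthly_manual_cost = manual_minutes_per_run / 60 * runs_per_month * hourly_rate
monthly_savings = monthly_manual_cost - automation_maint_hours * hourly_rate
build_cost = automation_build_hours * hourly_rate

if monthly_savings > 0:
    print(f"Monthly savings: ${monthly_savings:,.2f}")
    print(f"Break-even after {build_cost / monthly_savings:.1f} months")
else:
    print("At these volumes, automation does not pay back.")
```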

I have reviewed the skill groups that need to be retained as more automation is used; by contrast, there will be less reliance on skills that relate to routine tasks and troubleshooting. Skills that can typically be outsourced include:

  • Routine scripting and monitoring
  • System (middleware) configuration
  • Routine network administration

The diagram below is a (very simplified) summary of the evolution from traditional retained IT to tenant operations for SDDC environments.

[Figure: Retained IT model]

It is also worth noting that the transformation from traditional infrastructure outsourcing to SDDC is a far from obvious step from the point of view of an outsourcer. Why should the outsourcer invest time and cost to streamline services if the end customer has already contracted to pay for the full cost of the service? Gaining buy-in from the outsourcer to transform its model can be a significant challenge. Therefore it is key to gain acceptance either:
–  early in the contract negotiations, so that the provider can build in a cloud delivery model in its service offering,
– or towards the end of a contract when the outsourcer is often highly motivated to obtain a renewal.

Finally, outsourcers may initiate their own technology refresh programs, which can create a win-win situation when both sides are prepared to invest in modernization towards SDDC.

3 Key Take-Aways

  1. Organizations that undertake their journey to SDDC with an outsourcer are advised to establish a core SDDC organization that includes most tenant operations skills; a key focus is to leverage automation (whilst routine, repetitive tasks can be outsourced).
  2. The exact profile of the tenant operations (retained IT) will depend on the scope of the outsourcing contract.
  3. Early contract negotiations, renewals, or technology refresh can create opportunities to encourage an outsourcer to move towards the SDDC model.

———
Pierre Moncassin is an operations architect with VMware's Global Operations Transformation Practice and is based in the UK. Follow @VMwareCloudOps on Twitter for future updates.