
Build Your Operations Transformation Agenda for VMworld 2015

By: Andy Troup

VMworld 2015

VMworld 2015 is nearly upon us and I’d like to give you an overview of the Operations Transformation (OPT) Track that will be running again this year to help you get the most out of what’s on offer.

As a reminder, the track is focused on helping you understand how the VMware Software-Defined Data Center is redefining IT infrastructure, and how it enables IT organizations to combine technology and a new way of operating to become more service-oriented and focused on business value. This track offers unique opportunities to learn the latest best practices and key considerations from experienced VMware experts, practitioners, and the real-world experiences of customers transforming their IT infrastructures and operational processes.

This year in San Francisco, the OPT track is offering 3 different types of sessions. There are 23 breakout sessions and one Group Discussion session, all of which last an hour. In addition to these, and new for this year, there are also 4 Quick Talk sessions, which last 30 minutes and take place on Sunday 30th August.

The focus for this year’s OPT track is around a number of different areas, which I’ll give you a quick insight into.

Operations Transformation

The track as a whole is about how to transform the way you operate so that you can really start to get the benefits of your technology investment and become a service provider to your customers. A number of sessions cover how transformation is achieved. Customers will give you a view of the transformations they have undertaken and how they approached them, including a session covering VMware’s own transformation and the “OneCloud” implementation. Some of VMware’s transformation specialists, who have helped many customers undertake a transformation, will also provide details of best practices and pitfalls to watch out for. Check out the following sessions:

  • OPT4682-QT – A Roadmap for Transformation – Planning Your Future State and Ensuring Governance
  • OPT4684 – Engineers in The Cloud – The New Model of Datacenter Operation
  • OPT5010-QT – The Lifecycle of Cloud Services
  • OPT5069 – Enterprise Hybrid Cloud—Federal Case Study
  • OPT5238 – VMware IT DevOps Transformation: A VMware on VMware Showcase
  • OPT5361 – Best Practice Approaches to Transformation with the Software-Defined Data Center
  • OPT5509 – Building an Enterprise Hybrid Cloud Strategy and Operating Model
  • OPT5709 – Customer Experience—Building a Software Defined Data Center with CIT
  • OPT5814-QT – AGILE for Infrastructure: Utilizing Agile Methods to Drive Iterative Infrastructure Development and IT Service Delivery
  • OPT5972 – 80,000 VMs and Growing! VMware’s Internal Cloud Journey Told by the People on the Frontline

DevOps

DevOps is a big theme this year, and the OPT track will cover how the technology enables the operational change needed to make DevOps a reality. If you’re new to DevOps, one of our specialists has a session covering the DevOps concept. Customers, as well as VMware IT, will talk about how they were able to embrace DevOps, and several of our specialists will cover how VMware’s technology is helping DevOps transformations. Check out the following sessions:

  • OPT4868 – Your DevOps Transformation:  Culture, Technology or Both?
  • OPT4992 – VMware vRealize Code Stream:  Is DevOps about Tools or Transformation?
  • OPT5235 – Cloud-Native Apps, Microservices and Twelve-Factor Apps: What Do They Mean for Your SDDC/Cloud Operations?
  • OPT5238 – VMware IT DevOps Transformation: A VMware on VMware Showcase
  • OPT5960 – VMware NSX with a DevOps Mentality:  Streamline Your Operations for Zero Downtime Networking
  • OPT6227 – Developing a new IT:  How the Boeing Company IT Department is empowering its Customers through internal cloud and services

vRealize Suite

The vRealize suite of products features in the OPT track this year, covering vRealize Automation, vRealize Business, vRealize Operations, and vRealize Code Stream, and how they have been instrumental in enabling operational transformation. Sessions will cover how vRealize Business can help you become service focused and really manage IT as a business, as well as how to build effective cost models.

Other sessions will show how implementing vRealize Operations has enabled customers to undertake their transformations and manage the services they offer. Another session will cover how close integration between vRealize Operations and vRealize Automation yields a clearer understanding of the service provisioning process and the operational benefits that come with it.

Continuing the automation theme, there is a panel session with a number of healthcare customers who will discuss automation in a highly regulated environment. Check out the following sessions:

  • OPT4680 – Advanced Automated Approvals Use Case—Using vRealize Operations and vRealize Automation to Seize Back the Approval Charter
  • OPT4707 – Integrating vRealize Automation with Service Catalogs:  Does Your Implementation Strategy Align with Your Integration Needs?
  • OPT4992 – VMware vRealize Code Stream:  Is DevOps about Tools or Transformation?
  • OPT5029 – How to Use Service Definitions in VMware vRealize Business to Build Highly Effective, Service-Based Cost Models
  • OPT5075 – 6 Steps to Establish Your IT Business Management Office (ITBMO) with VMware vRealize Business
  • OPT5222 – Keys to Successfully Marketing and Managing Your vRealize Automation Service Catalog
  • OPT5369 – Pro-Active Monitoring of a Service: People, Process and Technology
  • OPT5279 – Chargeback in the Department of Defense
  • OPT5387-QT – Talking Security’s Language Using NSX, LogInsight and vRealize Tools
  • OPT5519 – Nimble Automation in a Regulated Environment:  Good, Fast and Cheap.  Pick Any Two.
  • OPT6226 – Kaiser:  Metrics-driven Transformation: Using vROps as the Foundation for Operations Transformation

NSX

NSX has become front of mind for many people, and there is a growing realization that this technology is having a big impact on the way that IT groups operate. The OPT track is offering sessions that will provide real-world experiences of how this takes shape.

  • OPT4953 – Operationalizing VMware NSX:  Practical Strategies and Lessons from Real-World Implementations
  • OPT5387-QT – Talking Security’s Language Using NSX, LogInsight and vRealize Tools
  • OPT5960 – VMware NSX with a DevOps Mentality:  Streamline Your Operations for Zero Downtime Networking

SDDC

The impact that implementing the Software-Defined Data Center has on organizational structure is a common discussion point, and this year the OPT track offers both a session covering organizational change management and a group discussion with leading organizational change specialists who have a vast amount of experience with many customers.

  • OPT4743-GD – Organizational Change Group Discussion
  • OPT5793 – Organizational Change Management and SDDC:  Why Getting Your Organization and People Aligned Are the Key Ingredient in Ensuring Maximum Value

As you can see there’s a large selection of sessions covering a number of different topics. If you’re lucky enough to be attending in San Francisco and you’d like to build your event around the operations transformation track, download this handy PDF.

===========

Andy Troup is a senior solution architect with the Operations Transformation Services practice based in the UK.

Best Practice Approaches to Transformation with the Software-Defined Data Center

By: Kevin Lees

VMworld is almost upon us. Technology continues, of course, to be a key enabler in helping IT on its transformation journey, whether that journey is to offering IT as a Service, moving more fully to Cloud, supporting cloud native applications and continuous delivery, DevOps, or any number of other initiatives focused on providing increasing value to the business. It’s also the primary reason you attend VMworld. But, as we work with our customers across the world, we continue to see how integral people and process changes are to really making this journey successful and to truly providing the value business is increasingly demanding of IT.

To help you make the most of the great technology and solutions VMware provides and will be showcasing at VMworld, we’re hosting an Operations Transformation track again this year. As the Principal Architect for our Global Operations Transformation Practice, this track is near and dear to my heart. We have a great lineup of sessions focused on the practical aspects of applying organizational, people, and process change to get the most out of VMware’s technology.

I have several sessions this year, but one that might be of particular interest is focused on the key best practices we’ve learned while helping some of our biggest IT customers transform their value proposition to their business customers by deploying a Software-Defined Data Center (SDDC). I’ll discuss what not to do and what to watch out for, as well as what you should do to be successful. I’ll present lessons learned through real customer examples (though the names will be changed to protect the innocent) and provide guidance on how you can avoid learning the same lessons – the hard way. Of course I’ll address the organizational, people, and process aspects but will also dive into some of the technical challenges we overcame or avoided along the way. This particular session is OPT 5361 on Wednesday at 11 a.m.

I know you’re looking forward to VMworld as much as I am. I hope to see you in one or more of my sessions, but more importantly, check out the Operations Transformation track in the online Schedule Builder. You won’t be disappointed, and you could be the hero “back at the office” because IT’s success isn’t just about the technology.

Quick guide to my sessions:

Tuesday Sept. 1

  • 12:30 OPT 4743 Organizational Change Group Discussion
  • 2:30 OPT 4992 vRealize CodeStream:  Is DevOps about Tools or Transformation?

Wednesday Sept. 2

  • 8:00 OPT 5232 Cloud Native Apps, MicroServices and Twelve-Factor Apps: What Do They Mean for your SDDC/Cloud Ops?
  • 11:00 OPT 5361 Best Practice Approaches to Transformation with the SDDC
  • 2:30 OPT 5972 80K VMs and Growing: VMware’s Internal Cloud Journey Told by the People on the Frontlines

=====
Kevin Lees is principal architect for VMware’s global Operations Transformation Practice and is based in Colorado.

When to Engage Your Organization in Their Cloud Journey

By Yohanna Emkies

The most common question I hear from my customers is: “What’s going to happen to me (read: my organization) if we introduce the cloud?”  Closely followed by:

“How are we going to begin the planning process…?” These are fair questions, which have to be discussed and worked out.

A question that is often underestimated, although it’s no less important than the “what” and the “how,” is the “when.” When is the right time to tackle operational readiness and organizational questions?

I notice two types of customers when it comes to addressing operational and organizational topics. Many simply omit or keep postponing the subject until they are in the midst of technical cloud go-lives. At some point they realize that they need to cover a number of basics in order to move on and are forced to rely on improvisation. I call them the “late awakeners.”

Others—“early birds” keen to plan for the change—will come up with good questions very early on, but expect all the answers to be concrete before they even start their cloud journey. Here are my observations on each type:

1. Let’s start with the late awakeners.
Quite naturally, the customers I’m working with tend to focus on the technical aspects of the software-defined data center (SDDC), deploying all their resources, putting all the other things on hold, working hard for the technical go-live to succeed, until…

“Hold on a minute, who will take care of the operational tasks once the service is deployed? What is the incident management process? How are we going to measure our service levels? What if adoption is too rapid? And what if we don’t get enough adoption?”

In such cases, critical questions are raised very late in the process, when resources are already under pressure from heavy workloads and increasing uncertainty. These customers end up calling for our support urgently but at the same time find themselves unable to free up resources and attention to address the transformation. And when they do, they fail to look at the big picture, getting caught up in very short-term questions instead of defining services or processes properly.

Doing a first tour of these organizations and mapping the gaps, we may discover entire subjects that have been left aside because they are too complex to address on the fly. Even worse, some subjects have already been treated because they were critical… but neither consciously nor fully. The teams may feel that they don’t have time for these questions and that they take focus away from the “important stuff,” but in reality that’s mostly because they are not aware that they are ALREADY spending a lot of time on these same questions; they just don’t focus their effort on them.

That results not only in poor awareness and maturity at day 1, but also in a low capacity to grow this maturity over time because no framework has been put in place.

Putting things back on track may eventually take more time and focus than if they had been addressed properly in the first place. But it is still feasible.

Clearly, it is an IT senior manager’s role to provide strategic direction, while project managers must include these important work streams in their planning from the start. Ultimately, it’s all part of one holistic project.

2. The early birds are also a tough catch.
Having accompanied many organizations through different types of transformations, I cannot advocate strongly enough for planning and designing before doing. Being mature on the “what” before running to the “how” is undoubtedly the right approach.

A key lesson learnt is that in order to reorganize successfully for the cloud you have to accept some level of uncertainty while you are making your journey.

Some organizations get stuck upfront with one recurring question: “What will our future organization look like?” Relax.

First, no pre-set organization design, even roughly customized to your needs, should be taken for granted. Second, no design—however accurate—will ever bring the move about by itself. It’s the people who support the organization who are the critical success factor.

Don’t get me wrong: giving insight, best practices, and direction will definitely help management envision the future organization, which is essential. But at the same time, an organization is a living thing by definition. There is also a psychological impact. When you start raising words like “people” and “organization,” concern and fear about change come with them.

Sometimes it is even trickier because some organizations are already—or still—in the midst of other transformations started a few years back and lingering. In that case, the impression of “yet another change” may be perceived negatively by the core team and may put them in a situation of stress and stop them from moving forward. What if your team has just finished redesigning and implementing incident management processes, only to realize that they have to do it again to adapt to the cloud?

It will take time for the organization to mature. Embracing the cloud is a big change, but no drastic overnight revolution will take you there. Moving to the cloud is not “yet another re-org” but an ongoing, spreading move, which relies on existing assets, and it’s here to last.

Your organization will evolve as you grow, your skills will improve as your service portfolio and cloud adoption increases. And this will happen organically as long as you put the right foundations in place: the right people, the right processes, the right metrics…and the right mindset.

The right balance on the “when” lies somewhere between the two behaviors of the late awakeners and the early birds. Here are some of the most important best practices that I share with my customers:

  1. Gain and maintain the full commitment of senior management sponsors who will support your vision and guarantee focus all along the journey.
  2. Plan your effort and get help: dealing with operational readiness and with technical readiness should be one holistic project, and for the most part, it involves the same people. The project has to integrate both streams together from the start and wisely split effort among the teams to avoid bottlenecks, rework, and wastage.
  3. Opt for an iterative approach: be strategic and pragmatic. Designing as much as you can while you start implementing your cloud, and then refining as you go, will provide a more agile approach and guarantee you reach your goals more efficiently.
  4. Practice full awareness: create a common language on the project, hit important communication milestones, and reward intermediate achievements, so people feel they contribute and see the progress. It is key that your cloud project is seen positively in the organization and that the people involved in it convey a positive image.
  5. Engage your people, engage your people, and engage your people.

As is often said, timing is everything. When dealing with people and their capacity to change, it’s even more critical to find a balance between building momentum and keeping the distance. Your teams will equally need to embrace the vision, feel the success, and at some point also breathe…and when you empower them efficiently across the process you will have the best configuration for success.

=====
Yohanna Emkies is an operations architect in the VMware Operations Transformation Services global practice and is based in Tel Aviv, Israel.

Building Service-based Cost Models to Accelerate Your IT Transformation

By Khalid Hakim

“Why is this so expensive?”

As IT moves towards a service-based model, this is the refrain that IT financial managers often hear. It’s a difficult question to answer if you don’t have the data and structure that you need to clearly and accurately defend the numbers. Fighting this perception, and building trust with the line of business, requires a change in how IT approaches cost management that will match the new IT-as-a-service format.

The first and most important step in building service-based cost models is defining what exactly a service is, and what it is not. For example, the onboarding process: is this a service, a process, or an application? Drawing the lines of what service means within your organization, and making it consistent and scalable, will allow you to calculate unit costs. Businesses are already doing cost management by department, by product, by technology, but what about the base costs, such as labor, facilities, or technology within a software-defined data center? Your final service cost should include all these components in a transparent way, so that other parts of the business can understand what exactly they are getting for their money.

Building these base costs into your service cost requires an in-depth look into how service-to-service allocation will work. For example, how do you allocate the cost of the network, which is delivered to desktops, client environments, wireless, VPN, and data centers? Before you can start to bring in a tool to automate costing out your services, map out how each service affects another, and define units and cost points for them. While it’s often tempting to jump straight into service pricing and consider yourself done once it’s complete, it’s important to start with a well-defined service catalog, including costs for each service, and then to continue to manage and optimize once the pricing has been implemented. Service costing helps to classify your costs, to understand what is fixed, what is variable, direct, indirect, and so forth.
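To make the unit-cost arithmetic concrete, here is a minimal sketch in Python. All service names, figures, and the single proportional allocation key are hypothetical illustrations, not guidance from the white paper or a feature of any VMware tool:

```python
# Hypothetical service-based cost model: direct costs per service plus
# shared (indirect) costs allocated in proportion to direct spend.

direct_costs = {                      # annual direct cost per service ($)
    "Compute (IaaS)":  1_200_000,     # hosts, hypervisor licenses
    "Backup":            300_000,     # backup storage and software
    "Virtual desktops":  500_000,     # brokers, client infrastructure
}

shared_costs = {                      # indirect costs with no single owner ($)
    "Network":    400_000,
    "Facilities": 250_000,            # power, cooling, floor space
    "Ops labor":  350_000,
}

units_delivered = {                   # unit of consumption per service
    "Compute (IaaS)":   2_000,        # VMs under management
    "Backup":           1_500,        # protected VMs
    "Virtual desktops": 1_000,        # desktops
}

total_direct = sum(direct_costs.values())
total_shared = sum(shared_costs.values())

for service, direct in direct_costs.items():
    share = direct / total_direct                   # allocation key
    fully_loaded = direct + total_shared * share    # direct + allocated indirect
    per_unit = fully_loaded / units_delivered[service]
    print(f"{service}: fully loaded ${fully_loaded:,.0f}, ${per_unit:,.2f} per unit")
```

In practice each shared cost would get its own allocation key (network by bandwidth consumed, facilities by rack units, and so on); defining those keys is exactly the service-to-service mapping recommended above before bringing in a tool.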

So we’ve allocated the shared cost (indirect cost in accounting language) of services across the catalog. Now it’s time to bring in the service managers—the people who really understand what is being delivered. Just as a manufacturing company would expect a product manager to understand their product end to end, service managers should understand their entire service holistically. Once you’ve built a costing process, the service manager should be able to apply that process to their service.

In the past, service managers have really only been required to understand the technology involved. Bringing them into this process may require them to understand new elements of their service, such as how to sell the service, what it costs, and how to market it. It helps to map out the service in a visual way, which helps the service managers understand their own service better, and also identifies the points at which new costs should be built into the pricing model. Once you understand the service itself, then decide how you want to package it, the SLAs around it, and what the cost of a single unit will be. When relevant, create pre-defined packages that customers will be able to choose from.

Once the costing has been implemented, you can circle back and use the data you’re gathering to help optimize the costs. This is where automation can offer a lot of value. VMware vRealize Business (formerly IT Business Management Suite) helps you align IT spending with business priorities by giving you full transparency of infrastructure and application cost and service quality. At a high level, it helps you build “what if” cost models, which automatically identify potential areas for cost reduction through virtualization or consolidation. The dashboard view offers the transparency needed to quickly understand cost by service and to be able to justify your costs across the business.

Service-based cost models are a major component of full IT transformation, which requires more than just new technology. You need an integrated approach that includes modernization of people, process, and technology. In this short video below, I share some basic steps that you need to jumpstart your business acumen and deliver your IT services like a business.

For more in-depth guidance, you can also access my white paper: Real IT Transformation Requires a Real IT Service Costing Process, as a resource on your journey to IT as a service.

====
Khalid Hakim is an operations architect with the VMware Operations Transformation global practice and is based in Dallas. You can follow him on Twitter @KhalidHakim47.

 

How to Avoid 5 Common Mistakes When Implementing an SDDC Solution

By Jose Alamo

Implementing a software-defined data center (SDDC) is much more than implementing or installing a set of technologies — an SDDC solution requires clear changes to the organization’s vision, policies, processes, operations, and readiness. Today’s CIO needs to spend a good amount of time understanding the business needs, the IT organization’s culture, and how to establish the vision and strategy that will guide the organization to make the adjustments required to meet the needs of the business.

The software-defined data center is an open architecture that impacts the way IT operates today. And as such, the IT organization needs to create a plan that will utilize the investments in people, process, and technology already made to deliver both legacy and new applications while meeting vital IT responsibilities. Below is a list of five common mistakes that I’ve come across working with organizations that are implementing SDDC solutions, and my recommendations on how to avoid their adverse impacts:

1. Failure to develop the vision and strategy—including the technology, process, and people aspects
Many times organizations implement solutions without setting the right expectation and a clear direction for the program. The CIO must use all the resources available within the IT organization to create a vision and strategy, and in some cases it is necessary to bring in external resources that have experience in the subject. The vision and strategy must align with the business needs, and it should identify the different areas that must be analyzed to ensure a successful adoption of an SDDC solution.

In my experience working with clients, it is imperative that a full assessment be conducted as part of the planning, covering the areas of people, process, and technology. A SWOT analysis should also be completed to fully understand the organization’s strengths, weaknesses, opportunities, and threats. Armed with this insight, the CIO and IT team will be able to articulate the direction that must be taken to be successful, including the changes required across people, process, and technology.

Failing to complete this step will create complexity and a lack of clarity for those who will be responsible for implementing the solution.

2. Limited time spent reviewing and understanding the current policies
There are often many policies within the IT organization that can prevent moving forward with the implementation of SDDC solutions. In such cases, the organization needs to conduct an in-depth review of the current policies governing the business and IT day-to-day operations. The IT team also needs to devote significant time to working with the company’s security and compliance team to understand their concerns and what measures need to be taken to make the necessary adjustments to support the implementation of the solution. For example, the IT organization needs to look at its change policies; some older policies could prevent the deployment of the process automation that is key to the SDDC solution. When these issues are identified from the beginning, IT can start negotiating with the lines of business to either change the policies or create workarounds that will allow the solution to provide the expected value.

Performing these activities at the beginning of the project will allow IT leadership to make smart choices and avoid delays or workarounds when deploying future SDDC solutions.

3. Lack of maturity around the IT organization’s service management processes
The software-defined data center redefines IT infrastructure and enables the IT organization to combine technology and a new way of operating to become more service-oriented and more focused on business value. To support this transformation, mature service management processes need to be established.

After the assessment of current processes, the IT organization will be able to determine which process will require a higher level of maturity, which process will need to be adapted to the SDDC environment, and which processes are missing and will need to be established in order to support the new environment.

Special attention will be required for the following processes:  financial management, demand management, service catalog management, service level management, capacity management, change management, configuration management, event management, request fulfillment, and continuous service improvement.

Ensure ownership is identified for each process, with KPIs and measurable metrics established—and keep the IT team involved as new processes are developed.

4. Managing the new solution as a retrofit within the current environment
Many IT organizations will embrace a new technology and/or solution only to attempt to retrofit it into their current operational model. This is typically a major mistake, especially if the organization is expecting better efficiency, more flexibility, lower cost to operate, transparency, and tighter compliance as potential benefits from an SDDC.

Organizations must assess their current requirements and determine whether they still apply to the new solution. Most processes, roles, audit controls, reports, and policies are in place to support the current/legacy environment, and each must be assessed to determine its purpose and value to the business, and whether it is required for the new solution.

IT leadership should ask themselves: If the new solution is going to be retrofitted into the current operational model, then why do we need a new solution?  What business problems are we going to resolve if we don’t change the way we operate?

My recommendation to my clients is to start lean, minimize the red tape, reduce complex processes, automate as much as possible, clearly identify new roles, implement basic reporting, and establish strict change policies. The IT organization needs to commit to minimize the number of changes to the new solution to ensure only changes that are truly required get implemented.

5. No assessment of the IT organization’s capabilities and no plan to fill the skill set gaps
The most important resource to the IT organization is its people. IT management can implement the greatest technologies, but their organizations will not be successful if their people are not trained and empowered to operate, maintain, and enhance the new solution.

The IT organization needs to first assess current skill sets. Then work with internal resources and/or vendors to determine how the organization needs to evolve in order to achieve its desired state. Once that gap has been identified, the IT management team can develop an enablement plan to begin to bridge the gap. Enablement plans typically include formal “train the trainer” models to cascade knowledge within the organization, as well as shadowing vendors for organizational insight and guidance along with knowledge transfer sessions to develop self-sufficiency. In some cases it may be necessary to bring in external resources to augment the IT team’s expertise.

In conclusion, implementing a software-defined data center solution will require a new approach to implementing processes, technologies, skill sets, and even IT organizational structures. I hope these practical tips on how to avoid common mistakes will help guide your successful SDDC solution implementations.

====
Jose Alamo is a senior transformation consultant with VMware Accelerate Advisory Services and is based in Florida. Follow Jose on Twitter @alamo_jose  or connect on LinkedIn.

Transforming Operations to Optimize DevOps

By Ahmed Al-Buheissi

DevOps. It’s the latest buzzword in IT and, as usual, the industry is either skeptical or confused as to its meaning. In simple terms, DevOps is a concept that allows IT organizations to develop and release software rapidly. By acknowledging the pressure the Development and Operations teams within IT place on each other, the DevOps approach enables the two teams to work closely together. IT organizations put policies for shared and delegated responsibilities in place, with an emphasis on communication, collaboration, and integration.

Developers have no problem writing code and pushing it out; however, their demand for infrastructure causes conflict with the Operations team. Traditionally it is the Operations team that releases code to the various environments, including Development, Test, UAT, and Production. As developers want to continuously push functionality through the various environments, it is only natural that Operations gets inundated with requests for more infrastructure. When you add Quality Assurance teams into the mix, efficiency is impacted further.

Why the rush to release code?
Rapid application development is now a requisite. The face of IT is changing very quickly and will continue to change even faster. Businesses need to innovate fast and introduce products and services into the market to beat the competition and meet the demands of their customers.

Here are four reasons rapid application development and release is fundamental:

  1. This is the social media age. Bad code and bugs can no longer be ignored and scheduled for future major releases; when defects are found, word will spread fast through Twitter and blogs.
  2. Mobile applications are changing the way we work and require a different kind of design—one that fits on a smaller screen and is intuitive. If a user doesn’t like one application, they’ll download the next.
  3. Much of the software developed today is modular and highly dependent on readily-available modules and packages. When an issue is discovered with a particular module, word spreads fast among user communities, and solutions need to be developed immediately.
  4. Last and most important, this is the cloud era. The very existence of the Operations team is at stake, because if it cannot provide infrastructure when Development needs it, developers will opt to use a publicly available cloud service. It is that easy.

So what is DevOps again?
DevOps is not a “something” that can be purchased — it’s an approach that requires new ways of working as an IT organization. As an IT leader, you will need to “operationalize” your Development team and bring them closer to your Operations team. As an example, your developers will need the capability to provision infrastructure based on new operations policies. DevOps also means you will need to move some of your development functionalities to the Operations team. For example, the Operations team will need to start writing workflows and associated scripts/code that will be used to automate the deployment process for the development team.
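As an illustration of the kind of deployment workflow Operations might write and hand to Development, here is a minimal, hypothetical sketch in Python. The environment names, the policy rule, and the step commands are all invented for this example; in a real shop this logic would typically live in an orchestration tool rather than a standalone script:

```python
# Hypothetical self-service deployment workflow: Operations authors it,
# Development runs it against non-production environments.

SELF_SERVICE_ENVS = {"dev", "test", "uat"}   # Development may deploy here
ALL_ENVS = SELF_SERVICE_ENVS | {"prod"}      # production stays with Operations

def run(step):
    """Stand-in for invoking an orchestrator; just prints the step."""
    print("running:", " ".join(step))

def deploy(app, version, env, team):
    """Deploy a packaged build to an environment, enforcing the policy."""
    if env not in ALL_ENVS:
        raise ValueError("unknown environment: " + env)
    if env == "prod" and team != "operations":
        raise PermissionError("production releases are run by Operations only")

    # The same steps run in every environment, so each test/UAT deployment
    # also rehearses the eventual production release.
    for step in (
        ["provision-infra", "--blueprint", app, "--env", env],   # hypothetical steps
        ["install-app", app, "--version", version, "--env", env],
        ["smoke-test", app, "--env", env],
    ):
        run(step)

# Example: a developer pushing build 1.4.2 to UAT without raising a ticket.
deploy("billing-portal", "1.4.2", "uat", team="development")
```

The point is less the code than the division of labor it encodes: Operations owns the workflow and the production gate, while Development consumes it on demand.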

While there are adequate tools that will facilitate the journey to DevOps, DevOps is more about processes and people.

How to implement DevOps
The IT organization needs to undergo both people and process changes to implement DevOps — and it cannot happen all at once — the change needs to be gradual. It is also very difficult to measure “DevOps maturity.” As an IT leader, you will know it when your organization becomes DevOps capable — it happens when your developers have the necessary tools to release software at the speed of business, and your Operations team is focused on innovation rather than being reactive to infrastructure deployment requirements.

Also, your test environment will evolve to a “continuous integration” environment, where developers can deploy their code and have it tested in an automated and continuous process.

I make the following recommendations to my clients for process, people, and tools required for a DevOps approach:

Process
The diagram below illustrates a process for DevOps, in which the Operations team develops automated deployment workflows, and the Development team uses the workflows to deploy to the Test and UAT environments. The final deployment to production is carried out by the Operations team; in fact Operations should continue to be the only team with direct access to production infrastructure.

[Diagram: DevOps flow – Service Release Process, Service Access Validation]

However, it is critical that Development have access to monitoring tools in production to allow them to monitor applications. These monitoring tools may allow tracking of application performance and its impact on underlying infrastructure resources, network response, and server/application log files. This will allow your developers to monitor the performance of their applications, as well as diagnose issues, without having to consume Operations resources.

Finally, it is assumed that the DevOps tools and workflows will be used for all deployments, including production. This means that the Development and Operations teams must use the same tools to deploy to all environments to ensure consistency and continuity as well as “rehearse” the production release.

People

The following roles are the main players in facilitating a DevOps approach:

  • Operations: The DevOps process starts with the Operations team. Their first responsibility is to develop workflows that will automate the deployment of a complete application environment. In order to develop these workflows, Operations must join the development cycle earlier and work more closely with Development to understand its infrastructure requirements.
  • Development: The Development team will use their development environment to determine the infrastructure required for the application; for example, database version, web server type, and application monitoring requirements. This information will assist the Operations team in determining the capacity required and in developing the deployment workflows (a sketch of such a requirements hand-off follows this list). It will also help with implementing the custom dashboards and metrics reporting capabilities Development needs to monitor their applications. The Development team will be able to develop and deploy to the “continuous integration” and UAT environments without having to utilize Operations resources. They can “rip and replace” applications in these environments as many times as needed by QA and end users in order to be production-ready.
  • Quality Assurance (QA): Because high-quality automated test scripts handle most testing in such an environment, the QA team can play a lesser role, spot-checking applications rather than testing every release. QA will also need to test and verify the deployment workflows to ensure the infrastructure configuration matches the design.
  • End Users: End-user testing can likewise be reduced to spot checks in a DevOps environment. However, once DevOps is in place, end users should notice a vast improvement in the quality and speed of the applications produced.
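As a sketch of the hand-off described in the Development bullet above, the infrastructure requirements can be captured as a simple, machine-readable blueprint that the deployment workflows consume. Every field and value here is hypothetical:

```python
# Hypothetical application blueprint: Development declares what the
# application needs; Operations' deployment workflows consume it.

blueprint = {
    "app": "billing-portal",
    "tiers": {
        "web": {"server": "nginx",        "instances": 2, "cpu": 2, "ram_gb": 4},
        "app": {"runtime": "java-8",      "instances": 2, "cpu": 4, "ram_gb": 8},
        "db":  {"engine": "postgres-9.4", "instances": 1, "cpu": 4, "ram_gb": 16},
    },
    "monitoring": {                          # what Development wants to watch
        "dashboards": ["response-time", "error-rate"],
        "log_files": ["/var/log/app/*.log"],
    },
}

def capacity(bp):
    """Total vCPU and RAM the blueprint requests, to help Ops plan capacity."""
    cpus = sum(t["instances"] * t["cpu"] for t in bp["tiers"].values())
    ram = sum(t["instances"] * t["ram_gb"] for t in bp["tiers"].values())
    return cpus, ram

print("requested capacity (vCPU, GB RAM):", capacity(blueprint))
```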

Tools
VMware vRealize™ Code Stream™ targets IT organizations that are transforming to DevOps to accelerate application releases for business agility. Some of the features it offers include:

  • Automation and governance of the entire application release process
  • A dashboard for end-to-end visibility of the release process across Development and Operations organizations
  • Artifact management and tracking

For IT leaders, vRealize Code Stream can help transform the IT organization through a DevOps approach. The “continuous integration” cycle is a completely automated package that will deploy, validate, and test applications being developed.

DevOps can also benefit greatly from using platform-as-a-service (PaaS) providers. Developing and releasing software on PaaS guarantees consistency, because the platform layer (as well as the layers below it) is always the same. Pivotal CF, for example, allows users and DevOps teams to publish and manage applications running on the Cloud Foundry platform across distributed infrastructure.

Conclusion
Although DevOps is a relatively new concept, it’s really just the next step after agile software development methods. As the workforce becomes more mobile, and social media brings customers and users closer, it’s necessary for IT organizations to be able to quickly release applications and adapt to changing market dynamics. (Learn how the VMware IT DevOps teams are using the cloud to automate dev test provisioning and streamline application development in the short video below.)

Many organizations have tackled the issues associated with running internal development teams by outsourcing software development. I now see the reverse happening, as organizations want to reach the market more quickly and have started to build internal development teams again.

For the majority of my clients, it’s not a matter of “if” but “how quickly” they will introduce DevOps. By adopting DevOps principles, their development teams will be able to efficiently release features as demanded by the business, at the speed of business.

====
Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.

 

5 Steps to Shape Your IT Organization for the Software-Defined Data Center

by Tim Jones

One aspect of the software-defined data center (SDDC) that is not solved through software and automation is how to support what is being built. The abstraction of the data center into software managed by policy, integrated through automation, and delivered as a service directly to customers requires a realignment of the existing support structure.

The traditional IT organizational model does not support bundling compute, network, storage, and security into easily consumable packages. Each of these components is owned by a separate team with its own charter and with management chains that don’t merge until they reach the CTO. The storage team is required to support the storage needs of the virtualized environment as well as physical servers, the backup storage, and replication of data between sites. The network team has core, distribution, top of rack, and edge switches to support in addition to any routers or firewalls. And someone has to support the storage network whether it is IP, InfiniBand, or Fibre Channel. None of these teams has only the software-defined data center to support. The next logical question asked is: What does an organization look like that can support SDDC?

While there is no simple answer that allows you to fill a specific set of roles with staff possessing skill sets from a checklist, there are many organizational models that can be modified to support your SDDC. In order to modify an organizational model, or to build your own model that meets your IT organization’s requirements, certain questions need to be answered. Working through the following five steps will help shape your new organizational model:

  1. Define what your new IT organization will offer.
    Although this sounds elementary, it is necessary to understand what you plan to offer in order to know what is required to support it. Will infrastructure as a service (IaaS) be the only offering, or will database as a service (DBaaS) and platform as a service (PaaS) also be offered? Does support stop at the infrastructure layer, or will operating system, platform, or database support be required? Who will the customer work with to utilize the services or to request and design additional services?
  2. Identify the existing organizational model.
    A thorough understanding of the existing support structure will help identify what support customers will expect based on their current experience, along with any challenges associated with the model. Are there silos within it that negatively impact customers? What skills currently exist in the organization? Identifying the existing organization and defining what the new organization will offer will help to identify what gaps exist.
  3. Leverage what is already working.
    If there are components of the existing organization that can either be replicated or consumed by the new organization, take advantage of the option. For example, if there is already a functioning group that works with the customers and supports the operating system, then evaluate how to best incorporate them into the new organization. Or if certain support is outsourced, then incorporate that into the new organizational model.
  4. Evaluate beyond the technical.
    The inclusion of service architects, process designers, business analysts, and project managers can be critical to the success of your new organization. These resources could be drawn from existing internal groups such as a central PMO. But overlooking the non-technical organizational requirements can inhibit the ability of the IT organization to deliver on its service roadmap.
  5. Create a new IT organization.
    Don’t accept the status quo with your current organization. If the storage, compute, and virtualization teams all report through separate management chains in the current organization, the new organization should leverage a single management chain for all three teams. Removing silos within the IT organization fosters a collaborative spirit that results in better support and better service offerings for customers.

Although there is no one size fits all organizational model for the software-defined data center, understanding where your IT organization is currently and where it is headed will enable you to create an organizational model capable of supporting the service roadmap.

====
Tim Jones is business transformation architect with VMware Accelerate Advisory Services and is based in California.

A New Angle on the Classic Challenge of Retained IT

By Pierre Moncassin

When discussing organization models for managing cloud infrastructure with customers, I have come across situations where some, if not all, infrastructure services are outsourced to a third party. In these situations my customers often ask: does your (VMware) operating model still apply? Should I retain cloud-related skills in-house? If so, which ones?

The short answer is: Yes. The advice I give my customers is that their IT organization should establish a core organization modeled on the “tenant operations” team as defined in Organizing for the Cloud, a VMware white paper by my colleague Kevin Lees.

Let’s assume a relatively simple scenario where a single outsourcer is providing “standard” infrastructure services — such as computing, storage, and backups. In this scenario, the outsourcer has agreed to transform at least some of its services towards the software-defined data center (SDDC), which is by no means an easy step (I will return to that point later).

For now let’s also assume a cooperative situation where customer and outsourcer are collaboratively working towards a cloud model. The question is — what skills and functions should the customer retain in-house? Which skills can be handed over to the outsourcer?

The question is a classic one. In traditional infrastructure outsourcing, we would talk about a “retained IT” organization.  For the SDDC environment, here are some skill groups that I believe have to be preserved within the core, in-house team:

  • Service Design and Self-service Provisioning is clearly a skillset to keep in-house. The in-house team must be able to work with the business to define services end-to-end, but the team should also be able to grasp accurately the possibilities that automation offers with software such as VMware vCloud Automation Center.  Though I am not suggesting that the core team needs to be expert in all aspects of workflows, APIs or scripting, they do need a solid grasp of the possibilities of automation.
  • Process Automation and Optimization.  A solid working knowledge of automation software is useful but not enough.  The in-house teams are required to decide which processes to automate and how. They need to make business-level decisions. Which processes are worth automating? What is the benefit of automation versus its cost?
  • Security and Compliance is often a top priority for cloud adopters. The cloud-based services need to align with enterprise policies and standards.  The retained IT function must be able to demonstrate compliance and where needed, enforce those standards in the cloud infrastructure.
  • Service Level Management and Trend Analysis. Whilst the retained IT organization does not need to be involved in the day-to-day monitoring and troubleshooting, they need to be able to monitor key service levels. Specifically, the business users will be highly sensitive to the performance of some business-critical applications. The retained IT organization will need to keep enough knowledge of these applications and of performance monitoring tools to ensure that application performance is measured adequately.
  • Application Life Cycle (DevOps). We have assumed in our scenario an infrastructure-only outsourcing — the skills for application development remaining in-house.  In the SDDC environment, the tenant operations team will work closely with the application development teams. Amongst other skills, the retained IT will need detailed knowledge not only of application provisioning, but also the architectures, configuration dependencies, and patching policies required to maintain those applications.

I have reviewed the skill groups that become more important as automation increases; conversely, there will be less reliance on skills that relate to routine tasks and troubleshooting. Skills that can typically be outsourced include:

  • Routine scripting and monitoring
  • System (middleware) configuration
  • Routine network administration

The diagram below is a (very simplified) summary of the evolution from traditional retained IT to tenant operations for SDDC environments.

It is also worth noting that the transformation from traditional infrastructure outsourcing to SDDC is a far from obvious step from the point of view of an outsourcer. Why should the outsourcer invest time and cost to streamline services if the end customer has already contracted to pay for the full cost of service? Gaining buy-in from the outsourcer to transform its model can be a significant challenge. Therefore it is prudent to gain acceptance either:
–  early in the contract negotiations, so that the provider can build in a cloud delivery model in its service offering,
– or towards the end of a contract when the outsourcer is often highly motivated to obtain a renewal.

Finally, outsourcers may initiate their own technology refresh programs, which can create a win-win situation when both sides are prepared to invest in modernization towards SDDC.

3 Key Take-Aways

  1. Organizations that undertake their journey to SDDC with an outsourcer are advised to establish a core SDDC organization including most tenant operations skills; a key focus is to leverage automation (whilst routine, repetitive tasks can be outsourced).
  2. The exact profile of the tenant operations (retained IT) will depend on the scope of the outsourcing contract.
  3. Early contract negotiations, renewals, or technology refresh can create opportunities to encourage an outsourcer to move towards the SDDC model.

———
Pierre Moncassin is an operations architect with VMware’s Global Operations Transformation Practice and is based in the UK. Follow @VMwareCloudOps on Twitter for future updates.

SDDC: Changing Organizational Cultures

By Tim Jones

I like to think of SDDC as “service-driven data center” in addition to “software-defined data center.” The vision for SDDC expands beyond technical implementation, encompassing the transformation from IT shop to service provider and from cost center to business enabler. The idea of “service-driven” opens the conversation to include the business logic that drives how the entire service is offered. Organizations have to consider the business processes that form the basis of what to automate. They must define the roles required to support both the infrastructure and the automation. There are financial models and financial maturity necessary to drive behavior on both the customer and the service provider side. And finally, the service definitions should be derived from use cases that enable customers to use the technology and define what the infrastructure should support.

When you think through all of the above, you’re really redefining how you do business, which requires a certain amount of cultural change across the entire organization. If you don’t change the thinking about how and why you offer the technology, then you will introduce new problems alongside the problems you were trying to alleviate. (Of course the same problems will happen faster and will be delivered automatically.)

I compare the move to SDDC with the shift that occurred when VMware first introduced x86 virtualization. The shift toward deploying multiple virtual machines, to make efficient use of resources previously wasted on physical servers, gathered momentum very quickly. But based on my experience, the companies that truly benefited were those that implemented new processes for server requisitioning. They worked with their customers to help them understand that they no longer needed to buy today what they might need in three years, because resources could easily be added in a virtual environment.

The successful IT shops actively managed their environments to ensure that resources weren’t wasted on unnecessary servers. They also anticipated future customer needs and planned ahead. These same shops understood the need to train support staff to manage the virtualized environment efficiently, with quick response times and personal service that matched the technology advances. They instituted a “virtualization first” mentality to drive more cost savings and extend the benefits of virtualization to the broadest possible audience. And they evangelized. They believed in the benefits virtualization offered and helped change the culture of their IT shops and the business they supported from the bottom up.

The IT shops that didn’t achieve these things ended up with VM sprawl and over-sized virtual machines designed as if they were physical servers. The environment became as expensive or more expensive than the physical-server-only environment it replaced.

The same types of things will happen with this next shift from virtualized servers to virtualized, automated infrastructure. The ability for users to deploy virtual machines without IT intervention requires strict controls around chargeback and lifecycle management. Security vulnerabilities are introduced because systems aren’t added to monitoring or virus scanning applications. Time and effort—which equate to cost—are wasted because IT continues to design services without engaging the business. Instead of shadow IT, you end up with shadow applications or platforms that self-service users create because what they need isn’t offered.

The primary way to avoid these mistakes is to remake the culture of IT—and by extension the business—to support the broader vision of offering ITaaS and not just IaaS.

Tim Jones is business transformation architect with VMware Accelerate Advisory Services and is based in California. Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags.

Task Automation Vs. Process Automation – Highlights from #CloudOpsChat

After a successful automation-themed #CloudOpsChat in September, we decided to take a deeper dive into automation for this month’s edition, discussing “Task Automation Vs. Process Automation.” Thanks to everyone who participated, and thank you especially to Rich Pleasants (@CloudOpsVoice), Business Solutions Architect and Operations Lead for Accelerate Advisory Services at VMware, for co-hosting!

To begin the chat, we asked: “What IT tasks or processes has your company successfully automated?”

@Andrea_Mauro jumped right in, asking how task automation compares to process automation. @kurtmilne offered VMware’s take, saying “VMware IT has fully automated provisioning of complex workloads on private cloud,” and clarified that the most complex workloads were “Oracle ERP with web portals, and over 80 blueprints.” @venkatgvm also elaborated on VMware’s automation story: “VMware instance provisioning had over 20 major steps, each of them were executed by siloed teams.”

Co-host @CloudOpsVoice took the question further, asking, “Are people automating day to day maintenance activities or actual steps in the process?”

@vHamburger gave his advice on where to begin with automation, saying “[day-to-day automation] is a good starting point. Nominate your top 10 time-consuming tasks for automation.” @Andrea_Mauro replied, suggesting that “task automation is more for repeatable operations and day by day [tasks].” He followed up by offering a definition of process automation: “Process automation could be more related to organization level and blueprint usage.” @kurtmilne also chimed in with business-related definitions of task and process automation: “Task automation math includes cost/time of single task vs. developing automation capability…Process automation math includes business benefit of overall improved agility, service quality – as well as cost.” @CloudOpsVoice broke his definition of automation into three parts: “day-to-day, build and run.”
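Kurt’s “automation math” can be made concrete with a simple break-even calculation; all figures below are invented purely for illustration:

```python
# Hypothetical break-even math for automating a single task: one-time
# build effort versus recurring manual effort.

manual_minutes_per_run = 45    # admin time per manual execution
runs_per_month = 30            # how often the task occurs
hourly_rate = 80.0             # loaded cost of an admin hour ($)
build_hours = 120              # one-time effort to build the workflow

monthly_manual_cost = manual_minutes_per_run / 60 * runs_per_month * hourly_rate
build_cost = build_hours * hourly_rate

print(f"manual cost: ${monthly_manual_cost:,.0f}/month")              # $1,800
print(f"build cost:  ${build_cost:,.0f} one-time")                    # $9,600
print(f"break-even:  {build_cost / monthly_manual_cost:.1f} months")  # 5.3
```

Process automation math, as noted in the chat, layers business benefits such as agility and service quality on top of this per-task view, which is also why eliminating a task can sometimes beat automating it.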

@CloudOpsVoice next asked, “What technique do you use primarily for automation? Policy, orchestration or scripting? How do App blueprints impact it?”

@kurtmilne noted the value of blueprints and scripting: “Blueprints and scripting allow app provisioning automation – not just VM provisioning.” @thinkingvirtual also offered sound advice on how to select what to automate at your company: “Always make sure your automation efforts provide real value. Don’t automate for automation’s sake.” Elaborating on this, @kurtmilne discussed the value of automation, stating that automation’s “real value” is “ideally measured in business outcomes, and not IT efficiency.” @vHamburger also warned against bottlenecks preventing automation: “every enhancement after your bottleneck is not efficient – know your bottlenecks!”

@vHamburger went on to mention task workflow: “Clean task workflow with documented steps is always preferred over scripts,” he suggested, because it’s “easier and repeatable for new admins.” @Andrea_Mauro countered by saying that sometimes a “‘quick and dirty’ solution could be good enough,” to which @vHamburger replied, “In my experience ‘quick and dirty’ always leads to fire fighting ;).” @kurtmilne then vouched for “leaning out” a process: “‘Leaning out’ an IT process is good. But sometimes it’s better to use automation to eliminate tasks vs. automate tasks,” he wrote. @thinkingvirtual also noted how important communication is to successful automation: “Often forgotten: keep your business in the loop. Show back the value continuously to broaden the relationship.”

@AngeloLuciani kept things moving by asking, “Do you pick a tool to fit the process or a process to fit the tool?”

@JonathanFrappier enthusiastically went with the latter: “Process to fit the tool! Processes can change, tools have to live on until more budget is approved!” @kurtmilne added, “Tool/process construct doesn’t make sense with full automation. You can do things with automation you can’t do with manual tasks: For example, you don’t figure out manual horizontal scaling process in cloud – then look for tool to automate.”

#CloudOpsChat ended with one last great tip (and a nod to VMworld!) from @thinkingvirtual: “Automation skills are a huge career opportunity. Don’t avoid automation, defy convention.”

Thanks again to everybody who participated in this latest #CloudOpsChat, and stay tuned for details of our next meet up. If you have suggestions for future #CloudOpsChat topics, let us know in the comments.


In the meantime, feel free to tweet us at @VMwareCloudOps with questions or feedback, and join the conversation by using the #CloudOps and #SDDC hashtags. For more from Rich Pleasants, head over to the VMware Accelerate blog.