
Transforming Operations to Optimize DevOps

By Ahmed Al-Buheissi

DevOps. It’s the latest buzzword in IT and, as usual, the industry is either skeptical or confused about its meaning. In simple terms, DevOps is an approach that allows IT organizations to develop and release software rapidly. By acknowledging the pressure the Development and Operations teams within IT place on each other, DevOps enables the two teams to work closely together. IT organizations put policies for shared and delegated responsibilities in place, with an emphasis on communication, collaboration, and integration.

Developers have no problem writing code and pushing it out; however, their demand for infrastructure causes conflict with the Operations team. Traditionally it is the Operations team that releases code to the various environments, including Development, Test, UAT, and Production. As developers want to continuously push functionality through those environments, it is only natural that Operations gets inundated with requests for more infrastructure. When you add Quality Assurance teams into the mix, efficiency suffers further.

Why the rush to release code?
Rapid application development is now a requisite. The face of IT is changing quickly and will continue to change even faster. Businesses need to innovate fast and introduce products and services into the market to beat the competition and meet the demands of their customers.

Here are four reasons rapid application development and release is fundamental:

  1. This is the social media age. Bad code and bugs can no longer be ignored and scheduled for future major releases; when defects are found, word will spread fast through Twitter and blogs.
  2. Mobile applications are changing the way we work and require a different kind of design—one that fits on a smaller screen and is intuitive. If a user doesn’t like one application, they’ll download the next.
  3. Much of the software developed today is modular and highly dependent on readily-available modules and packages. When an issue is discovered with a particular module, word spreads fast among user communities, and solutions need to be developed immediately.
  4. Last and most important, this is the cloud era. The very existence of the Operations team is at stake, because if it cannot provide infrastructure when Development needs it, developers will opt to use a publicly available cloud service. It is that easy.

So what is DevOps again?
DevOps is not a “something” that can be purchased — it’s an approach that requires new ways of working as an IT organization. As an IT leader, you will need to “operationalize” your Development team and bring it closer to your Operations team. For example, your developers will need the capability to provision infrastructure based on new operations policies. DevOps also means you will need to move some development functions to the Operations team; for example, Operations will need to start writing the workflows and associated scripts that automate the deployment process for the development team.
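To make the idea concrete, here is a minimal sketch of an Operations-authored deployment workflow that developers can invoke for non-production environments. The application names, environment names, and steps are all illustrative assumptions, not a specific VMware API:

```python
# Hypothetical workflow: Operations writes the deployment steps once;
# Development runs them self-service for non-production environments.
ALLOWED_ENVIRONMENTS = {"dev", "test", "uat"}  # production stays with Operations

def deploy(app, version, environment):
    """Run the standard deployment steps for one environment."""
    if environment not in ALLOWED_ENVIRONMENTS:
        raise PermissionError(f"{environment}: deployment reserved for Operations")
    steps = [
        f"provision infrastructure for {app}",
        f"install {app} {version}",
        f"run smoke tests for {app}",
    ]
    return [f"[{environment}] {step}" for step in steps]

log = deploy("billing-api", "1.4.2", "test")
```

The policy lives in one place (`ALLOWED_ENVIRONMENTS`), so delegated responsibility is explicit: developers deploy freely to Dev, Test, and UAT, while production remains an Operations-only action.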

While there are adequate tools to facilitate the journey, DevOps is more about processes and people.

How to implement DevOps
The IT organization needs to undergo both people and process changes to implement DevOps, and it cannot happen all at once; the change needs to be gradual. It is also very difficult to measure “DevOps maturity.” As an IT leader, you will know it when your organization becomes DevOps capable: it happens when your developers have the necessary tools to release software at the speed of business, and your Operations team is focused on innovation rather than reacting to infrastructure deployment requests.

Also, your test environment will evolve to a “continuous integration” environment, where developers can deploy their code and have it tested in an automated and continuous process.
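The “continuous integration” environment described above can be sketched as a simple gate: every code push triggers the full automated test suite, and only a fully green run is promoted onward. The suite contents and commit format below are illustrative assumptions:

```python
def ci_pipeline(commit, tests):
    """Run every automated test against a commit; promote only if all pass."""
    results = {name: test(commit) for name, test in tests.items()}
    promoted = all(results.values())
    return {"commit": commit, "results": results, "promoted": promoted}

# Illustrative test suite: each check inspects the commit and returns pass/fail.
suite = {
    "unit": lambda c: True,
    "integration": lambda c: True,
    "style": lambda c: len(c) == 7,  # e.g. require a 7-character short hash
}
outcome = ci_pipeline("a1b2c3d", suite)
```

The point of the sketch is that promotion is automatic and binary; no person decides whether a build moves forward, which is what lets developers deploy continuously.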

I make the following recommendations to my clients for process, people, and tools required for a DevOps approach:

Process
The diagram below illustrates a process for DevOps, in which the Operations team develops automated deployment workflows, and the Development team uses the workflows to deploy to the Test and UAT environments. The final deployment to production is carried out by the Operations team; in fact Operations should continue to be the only team with direct access to production infrastructure.

[Diagram: DevOps flow]

Service Release Process – Service Access Validation

However, it is critical that Development have access to monitoring tools in production to allow them to monitor applications. These monitoring tools may allow tracking of application performance and its impact on underlying infrastructure resources, network response, and server/application log files. This will allow your developers to monitor the performance of their applications, as well as diagnose issues, without having to consume Operations resources.
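Read-only production visibility for developers might look like the following sketch: a health check that reads an application’s production metrics and flags anything out of bounds, without any ability to change the infrastructure. The metric names and thresholds are assumptions for illustration:

```python
# Sketch: developers get read-only visibility into production metrics,
# without the ability to change production infrastructure.
PRODUCTION_METRICS = {
    "billing-api": {"p95_latency_ms": 420, "error_rate": 0.002, "cpu_pct": 61},
}

def check_health(app, max_latency_ms=500, max_error_rate=0.01):
    """Read an app's production metrics and flag anything out of bounds."""
    m = PRODUCTION_METRICS[app]
    issues = []
    if m["p95_latency_ms"] > max_latency_ms:
        issues.append("latency above threshold")
    if m["error_rate"] > max_error_rate:
        issues.append("error rate above threshold")
    return issues  # empty list means healthy

issues = check_health("billing-api")
```

Because the check only reads metrics, developers can diagnose their own applications without consuming Operations resources or gaining production access.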

Finally, it is assumed that the DevOps tools and workflows will be used for all deployments, including production. This means that the Development and Operations teams must use the same tools to deploy to all environments to ensure consistency and continuity as well as “rehearse” the production release.

People

The following roles are the main players in facilitating a DevOps approach:

  • Operations: The DevOps process starts with the Operations team. Their first responsibility is to develop workflows that automate the deployment of a complete application environment. In order to develop these workflows, Operations will need to join the development cycle earlier and therefore work more closely with Development to understand its infrastructure requirements.
  • Development: The Development team will use their development environment to determine the infrastructure required for the application; for example database version, web server type, and application monitoring requirements. This information will assist the Operations team in determining the capacity required and in developing the deployment workflows. It will help with implementing the custom dashboards and metrics reporting capabilities Development needs to monitor their applications. The Development team will be able to develop and deploy to the “continuous integration” and UAT environments without having to utilize Operations resources. They can “rip and replace” applications to these environments as many times as needed by QA and end-users in order to be production-ready.
  • Quality Assurance (QA): With high-quality automated test scripts doing most of the testing in such an environment, the QA team can play a lesser role, spot-testing applications rather than testing every release by hand. QA will also need to test and verify the deployment workflows to ensure the infrastructure configuration matches the design.
  • End Users: End-user testing can likewise be reduced in a DevOps environment to random spot checks. Once DevOps is in place, however, end users should notice a vast improvement in the quality and speed of the applications produced.

Tools
VMware vRealize™ Code Stream™ targets IT organizations that are transforming to DevOps, helping them accelerate application release for business agility. Some of the features it offers include:

  • Automation and governance of the entire application release process
  • A dashboard for end-to-end visibility of the release process across Development and Operations organizations
  • Artifact management and tracking

For IT leaders, vRealize Code Stream can help transform the IT organization through a DevOps approach. The “continuous integration” cycle is a completely automated package that will deploy, validate, and test applications being developed.

DevOps can also benefit greatly from platform-as-a-service (PaaS) providers. By developing and releasing software on PaaS, teams are guaranteed a consistent platform layer (and everything below it) across environments. Pivotal CF, for example, allows users and DevOps teams to publish and manage applications running on the Cloud Foundry platform across distributed infrastructure.

Conclusion
Although DevOps is a relatively new concept, it’s really just the next step after agile software development methods. As the workforce becomes more mobile, and social media brings customers and users closer, it’s necessary for IT organizations to be able to quickly release applications and adapt to changing market dynamics. (Learn how the VMware IT DevOps teams are using the cloud to automate dev/test provisioning and streamline application development in the short video below.)

Many organizations have tackled the issues associated with running internal development teams by outsourcing software development. I now see the reverse happening, as organizations want to reach the market more quickly and have started to build internal development teams again.

For the majority of my clients, it’s not a matter of “if” but “how quickly” will they introduce DevOps. By adopting DevOps principles, their development teams will be able to efficiently release features as demanded by the business, at the speed of business.

====
Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.

 

What I Learned from VMware’s Internal Private Cloud Deployment

By Kurt Milne

For seven years as an industry analyst, I studied top-performing IT organizations to figure out what made them best-in-class. And after studying 30 private cloud early adopters in 2011, I co-authored a book about how to deploy private cloud.

But after joining VMware last year, I’ve had the opportunity to spend six months working closely with VMware’s IT team to get an in-depth understanding of our internal private vCloud Suite deployment.

In this multi-part blog series, I’ll write about what I’ve learned.

Lesson learned – The most important thing I learned, and what really reframed much of my thinking about IT spending, is that VMware IT invested in our private cloud strategy to increase business agility.  And that effort drastically lowered our costs.

Breaking it down:

1. We made a strategic decision to try something different.

Over the years, I’ve studied companies that use every form of squeezing IT budgets there is. But what happens with a “cut till it hurts” or a “cut until something important breaks” approach is that the primary objective of lowering IT budgets is often achieved. But it also leaves IT hamstrung and unable to meet the needs of the business. An unbalanced focus on cost cutting reduces IT’s ability to deliver. That in turn lowers business perception of IT value, which further focuses efforts on cost cutting. Define “death spiral.”

VMware didn’t follow that path when we decided to invest in private cloud. We justified our “Project OneCloud” based on the belief that the traditional way of growing IT capabilities wouldn’t scale to meet our growth objectives. We have doubled revenue and headcount many times over the last 10 years. The IT executive team had the insight to realize that a linear approach of increasing capacity by buying more boxes and adding more headcount would not support business needs as we double in size yet again. We are no longer a startup; we have grown up as a company. We had to try a different approach.

Apparently VMware IT is not alone in this thinking. The McKinsey Global Survey “IT Under Pressure” shows a marked shift in 2013, with IT organizations using IT to improve business effectiveness and efficiency, not just manage costs.

2. Effective service design drove adoption.

What really enabled our private cloud success was broad adoption. Private cloud requires a level of commitment and investment that only broad adoption can justify. The promise of delivering IT services the same old way at lower cost didn’t drive adoption. What drove adoption was a new operating model focused on delivering and consuming IT as a service: abstracting infrastructure into basic compute, network, and storage delivered as a service, then designing IT services for specific groups of consumers that let them get what they need, when they need it. That included application stacks, dev/test environments, and any other business function that depends on IT infrastructure (almost all do in the mobile-cloud era). We strove to eliminate the need to call IT, and also eliminated tickets between functional groups within IT.

Ten different business functions — from sales, marketing, and product delivery, to support and training — have moved their workloads to the cloud. Many have their own service catalog with a focused set of services as a front end to the private cloud. Many have their own operations team that monitors and supports the automation and processes built on top of the infrastructure services.

Carefully designing IT services, then giving people access to get what they need when they need it without having to call IT — is key to success.
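The “get what you need without calling IT” model can be sketched as a self-service catalog: each business group sees only its own curated menu of services, and a valid request provisions directly instead of opening a ticket. The group and service names here are illustrative assumptions:

```python
# Hypothetical self-service catalog: each tenant group gets a focused
# menu of services it can provision without filing a ticket.
CATALOG = {
    "sales": ["demo-environment", "crm-sandbox"],
    "training": ["lab-pod"],
}

def request_service(group, service):
    """Provision a service if it is in the group's catalog; no ticket needed."""
    if service not in CATALOG.get(group, []):
        raise LookupError(f"{service!r} is not offered to {group!r}")
    return {"group": group, "service": service, "status": "provisioned"}

order = request_service("sales", "demo-environment")
```

Keeping each catalog small and group-specific is what makes the service design “careful”: consumers see only services designed for them, which is what drove adoption.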

3. Broad adoption drove down costs via scale economies.

We started with one business group deploying sales demos and put their work in a service catalog front end on the private cloud. Then we expanded, onboarding other functional groups to the cloud. One trick: develop a relationship with procurement. Any time someone orders hardware within the company, get in front of the order and see if they will deploy on private cloud instead.

Make IT customers’ jobs easier. Accelerate their time to desired results. Build trust by setting realistic expectations, then delivering per expectation.

Three primary milestones:

  1. Once we onboarded a few key tenants and got to ~10,000 VMs in our cloud, we lowered cost per general purpose VM by roughly 50 percent. With a new infrastructure as a service model that allowed consumers to “outsource infrastructure” to our central cloud team — and at a much lower cost per VM — word got out, and multiple other business groups wanted to move to the cloud.
  2. Once we onboarded another handful of tenants and got to ~50,000 VMs in our private cloud, we lowered cost per general purpose VM by another 50 percent. We were surprised by how fast demand grew and how fast we scaled from 10,000 to 50,000 VMs.
  3. We are “all in” and now on track to meet our goal of having around 95 percent of all our corporate workloads in private or hybrid cloud (vCloud Hybrid Service) – for a total of around 80,000 to 90,000 VMs. We expect cost per VM to drop another 50 percent.
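The three milestones compound: each ~50 percent cut applies to the already reduced cost, so three successive halvings remove roughly 87.5 percent of the original cost per VM, in line with the ~85 percent figure cited for the overall program. A quick check:

```python
# Compounding three successive ~50% reductions in cost per VM.
cost = 1.0  # starting cost per VM, normalized
for cut in (0.5, 0.5, 0.5):  # milestones 1, 2, and 3
    cost *= (1 - cut)
total_reduction = 1 - cost  # fraction of the original cost eliminated
# cost is now 0.125, i.e. an 87.5% total reduction
```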

So we set out to increase agility and better meet the needs of the business, delivered services that made IT consumers’ jobs easier, and as a result we dropped our cost per VM by ~85 percent.

Key takeaways:

  • Our private cloud goal was to reshape IT to better meet revenue growth objectives.
  • We transformed IT to deliver IT services in a way that abstracted the infrastructure layer and allowed various business teams to “outsource infrastructure.”
  • Ten different internal business groups have moved workloads to private cloud.
  • Less focus on infrastructure and easy access to personalized services made it easier for IT service consumers to do their jobs and focus more on their customers.
  • A new operating model for IT and effective service design drove adoption.
  • Broad adoption drove down costs. By ~85 percent.

Below are links to two short videos of VMware IT executives sharing their lessons learned related to cost and agility. In my next post, I’ll talk about what I learned about a new operating model for IT.

—-
Follow @VMwareCloudOps and @kurtmilne on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.


Automated Development and Test in Private Cloud: Join VMUG CloudOps SIG Webinar 7/25

Are you an IT practitioner considering private cloud?

Hear from VMware IT leadership about what they learned using VMware products to build a private cloud on SDDC architecture. They will discuss how they transformed people, process, organizational structure, governance, and financial model to make VMware’s private cloud IaaS successful. This Thursday, join Venkat Gopalakrishnan, Director of the Software Defined Data Center (SDDC) and IT Transformation Initiatives, and Kurt Milne, Director of the VMware CloudOps Program, for an exclusive webinar.

The webinar will cover:

  • SDLC – supporting dev/test for 600 developers.
  • Using vCloud Suite to automate end-to-end dev/test instance provisioning for complex application stacks.
  • Moving 4,000 non-production dev/test VMs from traditional virtualization to private cloud.
  • Improving agility and service quality, while also saving $6M in annual infrastructure and operating costs.

BONUS: This is a sneak peek of OPT5194 – VMware Private Cloud – Operations Transformation – one of the biggest sessions at VMworld 2013.

Register for this VMUG CloudOps SIG webinar today to see how you can take the private cloud from operational to transformational and learn how the private cloud can fit into your work environment. For a head start, take a look at our recent post, “Automated Deployment and Testing Big ‘Hairball’ Application Stacks” to hear more about the deployment from Venkat, one of the webinar’s speakers.

We will also be live tweeting during the event via @VMwareCloudOps for anyone who is unable to attend the webcast. Feel free to join the conversation using the #CloudOps and #SDDC hashtags.