Tag Archives: DevOps

Transforming Operations to Optimize DevOps

By Ahmed Al-Buheissi

DevOps. It’s the latest buzzword in IT and, as usual, the industry is either skeptical or confused as to its meaning. In simple terms, DevOps is a concept that allows IT organizations to develop and release software rapidly. By acknowledging the pressure the Development and Operations teams within IT place on each other, the DevOps approach enables the two teams to work closely together. IT organizations put policies for shared and delegated responsibilities in place, with an emphasis on communication, collaboration, and integration.

Developers have no problem writing code and pushing it out; however, their demand for infrastructure causes conflict with the Operations team. Traditionally, it is the Operations team that releases code to the various environments, including Development, Test, UAT, and Production. As developers want to continuously push functionality through these environments, it is only natural that Operations gets inundated with requests for more infrastructure. When you add Quality Assurance teams into the mix, efficiency suffers further.

Why the rush to release code?
Rapid application development is now a requisite. The face of IT is changing quickly and will continue to change even faster. Businesses need to innovate fast and introduce products and services into the market to beat the competition and meet the demands of their customers.

Here are four reasons rapid application development and release is fundamental:

  1. This is the social media age. Bad code and bugs can no longer be ignored and scheduled for future major releases; when defects are found, word will spread fast through Twitter and blogs.
  2. Mobile applications are changing the way we work and require a different kind of design—one that fits on a smaller screen and is intuitive. If a user doesn’t like one application, they’ll download the next.
  3. Much of the software developed today is modular and highly dependent on readily-available modules and packages. When an issue is discovered with a particular module, word spreads fast among user communities, and solutions need to be developed immediately.
  4. Last and most important, this is the cloud era. The very existence of the Operations team is at stake, because if it cannot provide infrastructure when Development needs it, developers will opt to use a publicly available cloud service. It is that easy.

So what is DevOps again?
DevOps is not a “something” that can be purchased; it’s an approach that requires new ways of working as an IT organization. As an IT leader, you will need to “operationalize” your Development team and bring them closer to your Operations team. For example, your developers will need the capability to provision infrastructure themselves, within new operations policies. DevOps also means moving some development capabilities to the Operations team: Operations will need to start writing the workflows and associated scripts/code that automate the deployment process for the Development team.

While there are plenty of tools to facilitate the journey to DevOps, DevOps is more about processes and people.

How to implement DevOps
To implement DevOps, the IT organization needs to undergo both people and process changes, and it cannot happen all at once; the change needs to be gradual. It is also very difficult to measure “DevOps maturity.” As an IT leader, you will know when your organization has become DevOps capable: it happens when your developers have the tools they need to release software at the speed of business, and your Operations team is focused on innovation rather than reacting to infrastructure deployment requests.

Also, your test environment will evolve into a “continuous integration” environment, where developers can deploy their code and have it tested in an automated, continuous process.

I make the following recommendations to my clients for process, people, and tools required for a DevOps approach:

Process
The diagram below illustrates a process for DevOps in which the Operations team develops automated deployment workflows, and the Development team uses those workflows to deploy to the Test and UAT environments. The final deployment to production is carried out by the Operations team; in fact, Operations should remain the only team with direct access to production infrastructure.

[Diagram: Service Release Process – Service Access Validation]

However, it is critical that Development have access to monitoring tools in production so they can monitor their applications. These tools might track application performance and its impact on underlying infrastructure resources and network response, and provide access to server and application log files. This allows your developers to monitor the performance of their applications and diagnose issues without having to consume Operations resources.
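
As a hedged sketch of what that kind of developer-facing, read-only access could look like, the snippet below compares a metrics snapshot against a few alert thresholds. The snapshot file, metric names, and thresholds are purely illustrative; in practice the data would come from whatever monitoring tools Operations exposes.

    # Sketch: a read-only health check a developer might run against production
    # monitoring data. The snapshot file, metric names, and thresholds are
    # illustrative only.
    import json

    THRESHOLDS = {
        "avg_response_time_ms": 500,   # flag if average response time exceeds 500 ms
        "error_rate_pct": 1.0,         # flag if more than 1% of requests fail
        "app_server_cpu_pct": 85,      # flag if application server CPU exceeds 85%
    }

    def check_health(snapshot_path: str = "prod_metrics_snapshot.json") -> list[str]:
        """Return a list of threshold breaches found in the latest metrics snapshot."""
        with open(snapshot_path) as f:
            metrics = json.load(f)
        return [
            f"{name}={metrics[name]} exceeds threshold {limit}"
            for name, limit in THRESHOLDS.items()
            if name in metrics and metrics[name] > limit
        ]

    if __name__ == "__main__":
        for breach in check_health():
            print("ALERT:", breach)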

Finally, it is assumed that the DevOps tools and workflows will be used for all deployments, including production. This means that the Development and Operations teams must use the same tools to deploy to all environments to ensure consistency and continuity as well as “rehearse” the production release.
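
As a rough illustration of what such a shared, parameterized workflow might look like, here is a minimal Python sketch. The environment names, hosts, deployment commands, and smoke-test script are all hypothetical, and the sketch is not tied to any particular VMware tool; the point is simply that the same steps run in every environment, with production gated by an explicit Operations approval.

    # Sketch: one parameterized deployment workflow reused across Test, UAT, and
    # Production. Hosts, commands, and the smoke-test script are hypothetical.
    import subprocess
    import sys

    ENVIRONMENTS = {
        "test":       {"host": "test.example.internal", "needs_approval": False},
        "uat":        {"host": "uat.example.internal",  "needs_approval": False},
        "production": {"host": "prod.example.internal", "needs_approval": True},
    }

    def deploy(artifact: str, environment: str) -> None:
        env = ENVIRONMENTS[environment]
        if env["needs_approval"]:
            # Production stays under Operations control: require explicit sign-off.
            answer = input(f"Deploy {artifact} to {environment}? [y/N] ")
            if answer.lower() != "y":
                sys.exit("Release aborted by Operations.")
        # The same steps run in every environment, so each lower-environment
        # deployment is a rehearsal of the production release.
        subprocess.run(["scp", artifact, f"deploy@{env['host']}:/opt/app/"], check=True)
        subprocess.run(["ssh", f"deploy@{env['host']}", "sudo systemctl restart app"],
                       check=True)
        # Automated smoke tests gate every deployment, including production.
        subprocess.run(["python", "tests/smoke_tests.py", "--target", env["host"]],
                       check=True)

    if __name__ == "__main__":
        deploy("app-1.4.2.tar.gz", "test")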

People

The following roles are the main players in facilitating a DevOps approach:

  • Operations: The DevOps process starts with the Operations team. Their first responsibility is to develop workflows that automate the deployment of a complete application environment. To develop these workflows, Operations must be involved earlier in the development cycle and will therefore need to work more closely with Development to understand its infrastructure requirements.
  • Development: The Development team will use their development environment to determine the infrastructure required for the application; for example, database version, web server type, and application monitoring requirements. This information will help the Operations team determine the capacity required and develop the deployment workflows, and it will inform the custom dashboards and metrics reporting capabilities Development needs to monitor their applications. The Development team will be able to develop and deploy to the “continuous integration” and UAT environments without consuming Operations resources, and can “rip and replace” applications in these environments as many times as QA and end users need in order to be production-ready.
  • Quality Assurance (QA): Due to the high quality of the automated test scripts used in such an environment, the QA team can play a lesser role, randomly spot-testing applications. QA will also need to test and verify the deployment workflows to ensure the infrastructure configuration used matches the design.
  • End Users: End-user testing can also be reduced in a DevOps environment to random spot checks. Once DevOps is in place, however, end users should notice a vast improvement in the quality and speed of the applications produced.

Tools
VMware vRealize™ Code Stream™ targets IT organizations that are transforming to DevOps, helping them accelerate application releases for business agility. Some of the features it offers include:

  • Automation and governance of the entire application release process
  • A dashboard for end-to-end visibility of the release process across Development and Operations organizations
  • Artifact management and tracking

For IT leaders, vRealize Code Stream can help transform the IT organization through a DevOps approach. The “continuous integration” cycle becomes a fully automated process that deploys, validates, and tests applications as they are developed.

DevOps can also benefit greatly from platform-as-a-service (PaaS) providers. Developing and releasing software on a PaaS guarantees consistency, because the platform layer (as well as the layers beneath it) is always the same. Pivotal CF, for example, allows users and DevOps teams to publish and manage applications running on the Cloud Foundry platform across distributed infrastructure.

Conclusion
Although DevOps is a relatively new concept, it’s really just the next step after agile software development methods. As the workforce becomes more mobile, and social media brings customers and users closer, IT organizations need to be able to release applications quickly and adapt to changing market dynamics. (Learn how the VMware IT DevOps teams are using the cloud to automate dev/test provisioning and streamline application development in the short video below.)

Many organizations have tackled the issues associated with running internal development teams by outsourcing software development. I now see the reverse happening, as organizations want to reach the market more quickly and have started to build internal development teams again.

For the majority of my clients, it’s not a matter of “if” but of “how quickly” they will introduce DevOps. By adopting DevOps principles, their development teams will be able to efficiently release the features the business demands, at the speed of business.

====
Ahmed Al-Buheissi is an operations technical architect with the VMware Operations Transformation global practice and is based in Melbourne, Australia.

 

A New Angle on the Classic Challenge of Retained IT

By Pierre Moncassin

When discussing organization models for managing cloud infrastructure with customers, I have come across situations where some, if not all, infrastructure services are outsourced to a third party. In these situations my customers often ask: does your (VMware) operating model still apply? Should I retain cloud-related skills in-house? If so, which ones?

The short answer is: Yes. The advice I give my customers is that their IT organization should establish a core organization modeled on the “tenant operations” team as defined in Organizing for the Cloud, a VMware white paper by my colleague Kevin Lees.

Let’s assume a relatively simple scenario where a single outsourcer provides “standard” infrastructure services such as compute, storage, and backups. In this scenario, the outsourcer has agreed to transform at least some of its services towards the software-defined data center (SDDC), which is by no means an easy step (I will return to that point later).

For now let’s also assume a cooperative situation where customer and outsourcer are collaboratively working towards a cloud model. The question is — what skills and functions should the customer retain in-house? Which skills can be handed over to the outsourcer?

The question is a classic one. In traditional infrastructure outsourcing, we would talk about a “retained IT” organization.  For the SDDC environment, here are some skill groups that I believe have to be preserved within the core, in-house team:

  • Service Design and Self-service Provisioning is clearly a skillset to keep in-house. The in-house team must be able to work with the business to define services end-to-end, and it must also accurately grasp the possibilities that automation software such as VMware vCloud Automation Center offers. I am not suggesting that the core team needs to be expert in every aspect of workflows, APIs, or scripting, but it does need a solid grasp of what automation makes possible.
  • Process Automation and Optimization. A solid working knowledge of automation software is useful but not enough. The in-house team must decide which processes to automate and how; these are business-level decisions. Which processes are worth automating? What is the benefit of automation versus its cost? (A simple break-even sketch follows this list.)
  • Security and Compliance is often a top priority for cloud adopters. Cloud-based services need to align with enterprise policies and standards. The retained IT function must be able to demonstrate compliance and, where needed, enforce those standards in the cloud infrastructure.
  • Service Level Management and Trend Analysis. Whilst the retained IT organization does not need to be involved in the day-to-day monitoring and troubleshooting, they need to be able to monitor key service levels. Specifically, the business users will be highly sensitive to the performance of some business-critical applications. The retained IT organization will need to keep enough knowledge of these applications and of performance monitoring tools to ensure that application performance is measured adequately.
  • Application Life Cycle (DevOps). We have assumed in our scenario an infrastructure-only outsourcing, with the skills for application development remaining in-house. In the SDDC environment, the tenant operations team will work closely with the application development teams. Among other skills, the retained IT organization will need detailed knowledge not only of application provisioning but also of the architectures, configuration dependencies, and patching policies required to maintain those applications.
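
To make that cost-versus-benefit question concrete, here is a minimal worked example (in Python, with made-up figures) of the break-even arithmetic the retained team might apply before deciding to automate a process.

    # Sketch: break-even arithmetic for an automation decision.
    # All figures below are made-up examples, not benchmarks.

    def breakeven_runs(build_cost_hours: float,
                       manual_hours_per_run: float,
                       automated_hours_per_run: float) -> float:
        """Number of runs after which building the automation pays for itself."""
        saving_per_run = manual_hours_per_run - automated_hours_per_run
        if saving_per_run <= 0:
            return float("inf")  # the automation never pays off
        return build_cost_hours / saving_per_run

    if __name__ == "__main__":
        # Example: a provisioning workflow that takes 40 hours to build, replaces
        # 3 hours of manual work per request, and still needs 15 minutes of
        # oversight per automated run: 40 / (3.0 - 0.25) is roughly 15 runs.
        runs = breakeven_runs(build_cost_hours=40,
                              manual_hours_per_run=3.0,
                              automated_hours_per_run=0.25)
        print(f"The workflow pays for itself after about {runs:.0f} runs")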

I have reviewed the skill groups that are needed as more automation is used; at the same time, there will be less reliance on skills related to routine tasks and troubleshooting. Skills that can typically be outsourced include:

  • Routine scripting and monitoring
  • System (middleware) configuration
  • Routine network administration

The diagram below is a (very simplified) summary of the evolution from traditional retained IT to tenant operations for SDDC environments.

[Diagram: Retained IT model]

It is also worth noting that the transformation from traditional infrastructure outsourcing to SDDC is far from an obvious step from the outsourcer’s point of view. Why should the outsourcer invest time and money to streamline services if the end customer has already contracted to pay the full cost of the service? Gaining buy-in from the outsourcer to transform its model can be a significant challenge. It is therefore prudent to gain acceptance either:
  • early in the contract negotiations, so that the provider can build a cloud delivery model into its service offering, or
  • towards the end of a contract, when the outsourcer is often highly motivated to obtain a renewal.

Finally, outsourcers may initiate their own technology refresh programs, which can create a win-win situation when both sides are prepared to invest in modernization towards SDDC.

3 Key Take-Aways

  1. Organizations that undertake their journey to SDDC with an outsourcer are advised to establish a core SDDC organization that includes most tenant operations skills; a key focus is to leverage automation (while routine, repetitive tasks can be outsourced).
  2. The exact profile of the tenant operations (retained IT) will depend on the scope of the outsourcing contract.
  3. Early contract negotiations, renewals, or technology refresh can create opportunities to encourage an outsourcer to move towards the SDDC model.

———
Pierre Moncassin is an operations architect with VMware’s Global Operations Transformation Practice and is based in the UK. Follow @VMwareCloudOps on Twitter for future updates.

Using vCloud Suite to Streamline DevOps

By: Jennifer Galvin

A few weeks ago I was discussing mobile app development and deployment with a friend. This particular friend works for a company that develops mobile applications for all platforms on a contract-by-contract basis. It’s a good business. But one of their key challenges is the time and effort required to set up a client’s development and test environment before development can even start. Multiple platforms need to be provisioned, and development and testing tools that may be unique to each platform must be installed. This often means maintaining large teams with specialized skills and a broad range of dev/test environments.

I have always been aware that VMware’s vCloud Suite can speed up deployment of applications (even complex application stacks), but I didn’t know whether long setup times were common in the mobile application business. So I started to ask around:

“What is the shortest possible time it would take for your development teams to make a minor change to a mobile application, on ALL mobile platforms – Android, iPhone, Windows, BlackBerry, etc.?”

The answers ranged between “months” and “never”.

Some time later, after presenting VMware’s Software Defined Datacenter vision at a tech meetup in Washington, D.C., a gentleman approached me to discuss the question I had posed. While he liked the SDDC vision, he wondered if I knew of a way to use vCloud Suite and software-controlled everything to speed up mobile development. So I decided to sketch out how the blueprint and automated provisioning capabilities of the vCloud Suite could help speed up application development on multiple platforms.

First, let’s figure out why this is so hard in the first place. After all, mobile development SDKs are frameworks, and while it takes a developer to write an app, the SDK is still doing a lot of the heavy lifting. So why is this still taking so long? As it turns out, there are some major obstacles to deal with:

  • Mobile applications always need a server-side application to test against: unless that server-side application is already a stable, multi-tenant application that can handle the extra performance drain of 32 developers running amok (and you don’t mind upsetting your existing customers), you’re going to need to point the developers at a completely separate environment.
  • The server-side application is complex and lengthy to deploy: A 3-tier web application with infrastructure (including networking and storage), scrubbed production database data to provide some working data, and front-end load balancing is the same kind of deployment you did when the application initially went into production. You’re not going to start development on your application any time soon unless this process speeds up (and gets more automated).

Let’s solve these problems by getting a copy of the application (and a copy of production-scrubbed data) out into a new testing area so the developers can get access to it, fast. vCloud Suite provides a framework for the server-side application developers to express the application’s deployment as a blueprint, capable of deploying not just the code but all the properties needed to automate the deployment, and of consuming capacity from on-premises resources as well as the public cloud. That means that when it comes time to deploy a new copy (with the database refreshed and available), it’s as easy as a single click of a button.
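
To illustrate the blueprint idea without tying it to any particular product’s format, here is a hedged Python sketch: the three-tier environment is described as plain data, and a single function walks that description to provision every tier. The tier names, sizes, snapshot name, and the provision_vm helper are all hypothetical placeholders.

    # Sketch: a server-side test environment expressed as a blueprint (plain data)
    # and deployed with a single call. Tier names, sizes, and provision_vm() are
    # hypothetical placeholders, not any product's actual blueprint format.

    BLUEPRINT = {
        "name": "mobile-backend-test",
        "tiers": [
            {"role": "load-balancer", "count": 1, "cpu": 2, "ram_gb": 4},
            {"role": "web",           "count": 2, "cpu": 2, "ram_gb": 8},
            {"role": "app",           "count": 2, "cpu": 4, "ram_gb": 16},
            {"role": "database",      "count": 1, "cpu": 8, "ram_gb": 32,
             "restore_from": "scrubbed-prod-snapshot"},
        ],
    }

    def provision_vm(name: str, cpu: int, ram_gb: int, **options) -> str:
        """Placeholder for the call into whatever provisioning API is in use."""
        print(f"provisioning {name}: {cpu} vCPU, {ram_gb} GB RAM, options={options}")
        return name

    def deploy_blueprint(blueprint: dict) -> list[str]:
        """Provision every instance of every tier described in the blueprint."""
        deployed = []
        for tier in blueprint["tiers"]:
            extras = {k: v for k, v in tier.items()
                      if k not in ("role", "count", "cpu", "ram_gb")}
            for i in range(tier["count"]):
                name = f"{blueprint['name']}-{tier['role']}-{i + 1}"
                deployed.append(provision_vm(name, tier["cpu"], tier["ram_gb"], **extras))
        return deployed

    if __name__ == "__main__":
        deploy_blueprint(BLUEPRINT)  # the "single click" for a fresh test copy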

Since the underlying infrastructure is virtualized, compute resources are used or migrated to make room for the new server-side application. Other testing environments can even be briefly powered down so that this testing (which is our top priority) can occur.

Anyone can deploy the application, and what used to take hours and teams of engineers can now be done by one person. However, we are still aiming to deploy this on all mobile platforms. In order to put all of our developers on this challenge, we first need to ensure they have the right tools and configurations. In the mobile world, that means more than just installing a few software packages and adjusting some settings. In some cases, that could mean you need new desktops, with entirely different operating systems.

Not every mobile vendor offers an SDK on every operating system, and in fact there is no single operating system on which you can develop for all of today’s top-selling mobile phones.

For example, you can only develop iOS applications using Xcode, which runs only on Mac OS X. Windows and Android development both rely on SDKs that run on Windows, and each depends on external libraries to function (especially Android). Many developers favor MacBooks running VMware Fusion to accommodate all of these different environments, but what if, to rewrite the application quickly, you decide you need some temporary contractors? Those contractors are going to need development environments with the right SDKs and testing tools.

This is also where vCloud Suite shines. It provides Desktop as a Service to those new contractors. The same platform that allowed us to provision the entire server-side application allows us to provision any client-side resources they might need.
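
As a rough sketch of how that client-side piece could be captured (the platform names, desktop images, SDK lists, and request format are illustrative only), the per-platform requirements are recorded once and then reused for every new contractor:

    # Sketch: map each target mobile platform to the desktop image and SDKs a new
    # contractor needs. Image names, SDK lists, and the request format are
    # illustrative only.

    PLATFORM_DESKTOPS = {
        "ios":     {"image": "macos-dev-desktop",   "sdks": ["Xcode"]},
        "android": {"image": "windows-dev-desktop", "sdks": ["Android SDK", "JDK"]},
        "windows": {"image": "windows-dev-desktop", "sdks": ["Windows Phone SDK"]},
    }

    def desktop_requests(contractor: str, platforms: list[str]) -> list[dict]:
        """Build one desktop provisioning request per platform the contractor covers."""
        return [
            {
                "owner": contractor,
                "image": PLATFORM_DESKTOPS[p]["image"],
                "install": PLATFORM_DESKTOPS[p]["sdks"],
            }
            for p in platforms
        ]

    if __name__ == "__main__":
        for request in desktop_requests("contractor-01", ["ios", "android"]):
            print(request)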

With all of the infrastructure provisioned at once, we are now ready to redevelop our mobile app. We can spend developer time on development and testing, making it the best app possible, instead of wasting resources on setting up work environments.


Now, let’s think back to that challenge I laid out earlier. Once you start deploying your applications using VMware’s vCloud Suite, how long will it take to improve your mobile applications across all platforms? I bet we’re not measuring that time in months any longer. Instead, mobile applications are improved in just a week or two.

Your call to action is clear:

  • Implement vCloud Suite on top of your existing infrastructure and public cloud deployments.
  • Streamline your application development process by using vCloud Suite to deploy both server- and client-side applications, dev and test environments, dev and test tools, and sample databases – for all platforms – at the click of a button.

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

The Top 10 CloudOps Blogs of 2013

What a year it’s been for the CloudOps team! Since launching the CloudOps blog earlier this year, we’ve published 63 items and have seen a tremendous response from the larger IT and cloud operations community.

Looking back on 2013, we wanted to highlight some of the top performing content and topics from the CloudOps blog this past year:

1. “Workload Assessment for Cloud Migration Part 1: Identifying and Analyzing Your Workloads” by Andy Troup
2. “Automation – The Scripting, Orchestration, and Technology Love Triangle” by Andy Troup
3. “IT Automation Roles Depend on Service Delivery Strategy” by Kurt Milne
4. “Workload Assessment for Cloud Migration, Part 2: Service Portfolio Mapping” by Andy Troup
5. “Tips for Using KPIs to Filter Noise with vCenter Operations Manager” by Michael Steinberg and Pierre Moncassin
6. “Automated Deployment and Testing Big ‘Hairball’ Application Stacks” by Venkat Gopalakrishnan
7. “Rethinking IT for the Cloud, Pt. 1 – Calculating Your Cloud Service Costs” by Khalid Hakim
8. “The Illusion of Unlimited Capacity” by Andy Troup
9. “Transforming IT Services is More Effective with Org Changes” by Kevin Lees
10. “A VMware Perspective on IT as a Service, Part 1: The Journey” by Paul Chapman

As we look forward to 2014, we want to thank you, our readers, for taking the time to follow, share, comment, and react to all of our content. We’ve enjoyed reading your feedback and helping build the conversation around how today’s IT admins can take full advantage of cloud technologies.

From IT automation to patch management to IT-as-a-Service and beyond, we’re looking forward to bringing you even more insights from our VMware CloudOps pros in the New Year. Happy Holidays to all – we’ll see you in 2014!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

VMware CloudOps Is Heading to Silicon Valley DevOps Days

(Photo from DevOpsDays.org)

As you may have heard, we’ll be on-site in Santa Clara for this year’s DevOps Days, taking place tomorrow, June 21st and this Saturday, June 22nd. If you’re attending the conference, make sure to swing by our table to say hello and to learn more about VMware’s cloud operations solutions and services.

We’ll be live tweeting from the show floor with exclusive photos and videos, and we’ll also be covering the following sessions:

Day 1:

  • 10:15-10:45 – DevOps + Agile = Business Transformation
  • 12:00-12:30 – Leveling Up a New Engineer in a Devops Culture; Healthy Sustainability

Day 2:

  • 10:15-10:45 – Leading the Horses to Drink: A Practical Guide to Gaining Support and Initiating a DevOps Transformation
  • 11:30-12:00 – Analysis Techniques for Identifying Waste in Your Build Pipeline
  • 12:00-12:30 – Clusters, Developers and the Complexity in Infrastructure Automation

During each afternoon of DevOps Days, there are open spaces where attendees can propose sessions to present. Once those sessions have been selected, we’ll tweet which ones we’ll be covering live.

We’re also giving away t-shirts at DevOps Days: follow us at @VMwareCloudOps during the conference and you could soon be the proud owner of one of these shirts:

We hope to see you at DevOps Days!

Follow @VMwareCloudOps on Twitter for future updates, and join the conversation by using the #CloudOps and #SDDC hashtags on Twitter.

DevOps and All The Other “Ops Religions”

By: Kurt Milne

I didn’t wake up yesterday thinking, “Today I’ll design a T-shirt for the DevOps Days event in Mountain View.”  But as it turns out – that is what happened.

Some thoughts on what went into my word cloud design:

1. DevOps is great. This will be my 4th year attending DevOps Days. I get the organic, bottom-up nature of the “movement.” I’ve been on the receiving end of the “throw it over the wall” scenario. A culture of collaboration and understanding goes a long way toward addressing the shortcomings of swim-lane diagrams, phase-gate requirements, and mismatched incentives that hamper effective app lifecycle execution. Continuous deployment is inspirational, and the creativity and power of the DevOps tool chain is very cool.

2. EnterpriseOps is still a mighty force. I remember an EnterpriseOps panel discussion at DevOps Days 2010. The general disdain for ITIL, coming from a crowd that was high off of two days of web-app goodness at Velocity 2010, was palpable. The participant from heavy-equipment manufacturer Caterpillar asked the audience to raise their hands if they had an IT budget of more than $100M. No hands went up in the startup-dominated audience. His reply: “We have a $100M annual spend with multiple vendors.” The awkward silence suggested that EnterpriseOps is a different beast. It was. It still is. There is a lot EnterpriseOps can learn from DevOps, but the problems of dealing with massive scale and legacy are just different.

3. InfraOps, AppOps, ServiceOps. This model, developed by James Urquhart, makes sense to me. It especially makes sense in the era of Shape Shifting Killer Apps. We need a multi-tier model that addresses the challenges of running infrastructure (yes, even in the cloud era), the challenges of keeping the lights on behind the API in a distributed-component SOA environment, and the cool development techniques that shift uptime responsibility to developers, as pioneered by Netflix. Clear division of labor with separation of duties, and a bright light shining on the white space in between, is a model that seems to address the needs of every cloud-era constituent.

4. Missing from this three-tier model is ConsumerOps. Oops. Too late to update the shirt design. Many organizations are consuming IT services offered by cloud service providers, so there must be a set of Ops practices that help guide cloud consumption. Understanding and negotiating cloud vendor SLAs and architecting across multiple AWS availability zones immediately come to mind. Acting as a service broker and including third-party cloud services as part of an integrated service catalog is another.

5. Tenant Ops. As far as I can tell, this term was coined by Kevin Lees and the Cloud Operations Transformation services team at VMware. See pages 17 and 21 in Kevin’s paper on Organizing for the Cloud. It includes customer relationship management, service governance, design and release, as well as ongoing management of services in a multi-tenant environment. VMware’s internal IT uses the term to describe what they do running our private cloud internally. They have a pie chart that shows the percentage of compute units allocated to different tenants (development, marketing, sales, customer support, etc.); a toy version of that breakdown appears after this numbered list. It works. It may be similar to ServiceOps in the three-tier model, but it feels different enough, with its focus on multi-tenancy rather than API-driven services, to deserve its own term.

6. Finally, CloudOps. This term is meta. It encompasses many of the concepts and practices of all the others. It describes IT operations in the cloud era: not just in a cloud, or connected to a cloud, but in the cloud era. The distinction is that the “cloud era” is different from the “client-server era,” and implies that many practices developed in the previous era no longer apply. Many still do. But dynamic service delivery models are a forcing function for operational change. That change is happening across five pillars of cloud ops: People, Process, Organization, Governance, and IT business.
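
As a toy illustration of the tenant-allocation view mentioned in point 5 (tenant names and compute-unit counts are made up), the chart behind it is just a percentage breakdown:

    # Sketch: percentage of compute units allocated per tenant.
    # Tenant names and unit counts are made-up examples.
    ALLOCATIONS = {"development": 420, "marketing": 90, "sales": 60, "customer support": 30}

    total = sum(ALLOCATIONS.values())
    for tenant, units in sorted(ALLOCATIONS.items(), key=lambda item: -item[1]):
        print(f"{tenant:>16}: {units:4d} units ({100 * units / total:.0f}%)")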

So while some of the sessions at this year’s DevOps conference are focused on continuous deployment, I’d bet that all of the “Ops religions” will be covered. Hence the focus on the term CloudOps.

We’ll be live-tweeting from DevOps Days next Friday. Follow us @VMwareCloudOps or join the discussion using the #CloudOps hashtag.

Consider joining the new VMUG CloudOps SIG, or find out more about it during the VMUG webcast on June 27th.

VMware #CloudOps Friday Reading Topic – It’s Time for Change

As more organizations leverage software-defined datacenter technology to increase resource utilization and automate IT processes, what does this mean for how IT can organize itself to optimize results?

There are a variety of ways IT can transform itself to increase agility, reduce costs, and improve quality of service.

Cloud Computing: 4 Ways To Overcome IT Resistance (Kyle Falkenhagen, ReadWrite)
Enterprise cloud adoption is a transformative shift – these organizational change strategies can help IT departments fight fear as they move to cloud computing.

Secrets of a DevOps Ninja: Four Techniques to Overcome Deployment Roadblocks (Jonathan Thorpe, Serena Software)
Process consistency and automation help development and operations work closely together to get software that delivers value to customers faster.

On IT’s Influence on Technology Buying Decisions. Role #1: Get Out of the way (Ben Kepes, Diversity Limited)
IT needs to help set parameters, then get out of the way and let the business and users drive the process.

The Orchestrated Cloud (Venyu)
The Software Defined Admin – orchestrates provisioning, scaling, incident response and disaster recovery.

When all resources in the datacenter can be manipulated via API (Software-defined data center), the traditional role of the IT admin and how admins are grouped in the IT organization will change.

This means that IT has a great opportunity to reinvent itself as a strategic business enabler. The question is whether you’re ready to rise to the occasion.

Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.