Our goal at Intel was to let software developers get an innovative idea into production in less than a day.
A large part of meeting this goal is delivering a robust PaaS solution. In mid-2011 we decided that PaaS would enable this goal, and given our wide range of data and security requirements, running it in our private cloud was paramount. In searching for a PaaS solution for our enterprise private cloud, we conducted a study of solutions that could be deployed within an enterprise. We specifically wanted a solution that could run on our IaaS and help address our key challenges.
We needed greater agility, simplicity, standardization, and efficiency, and these needs served as the impetus for our Cloud Foundry cloud. Though our journey from proof of concept to enterprise standard is still underway, we are sharing our vision of “how to help developers get apps to production in one day” at VMworld, along with lessons learned and technical approaches (APP-CAP3310 – Intel Enterprise PaaS with Cloud Foundry). This post provides additional detail on the business drivers and on what led us to select Cloud Foundry; attend our session at VMworld for a much greater level of detail.
The 4 Drivers
1. More Agility
When custom applications were first developed at Intel, they took several months to deploy into production, and these delays unquestionably hurt business metrics. A deployment involved 75 individual steps and little automation. The process could take 130 to 140 days for new custom applications and 30 to 40 days for version updates (see Figure 1 for the steps in each lifecycle stage). Maintenance, new releases, and end-of-life deployment processes all faced error-prone, manual slow-downs. Business units and functional departments needed to move more quickly.
2. More Simplicity
Application development teams were responsible for provisioning their own infrastructure and had to understand that infrastructure in detail. For example:
- How much storage area network (SAN) and network-attached storage (NAS) capacity?
- What middleware, and how will it interface with the infrastructure?
- Which IP addresses and device names?
Determining this information was not straightforward. On top of that, our developers would hard-code the infrastructure details, which made migrations difficult. Governance review processes were also lengthy and rigorous regardless of an application's scope and size. Simplicity promised real benefit, but we were far from it.
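Cloud Foundry tackles the hard-coding problem by injecting bound service credentials into the application at runtime through the VCAP_SERVICES environment variable, so connection details live outside the code. A minimal sketch in Python (the `p-mysql` service label and the credential layout are illustrative assumptions; the actual labels depend on the services bound to the app):

```python
import json
import os

def database_uri(service_label="p-mysql"):
    """Look up a bound database URI from Cloud Foundry's VCAP_SERVICES
    environment variable at runtime, instead of hard-coding it."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instance in services.get(service_label, []):
        # Each bound service instance carries its own credentials block.
        return instance["credentials"]["uri"]
    return None  # service not bound to this app
```

Because the app reads its wiring from the environment, migrating it between foundations or swapping the backing database requires only rebinding the service, not a code change.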
3. More Standardization
Without standard templates, processes, step-by-step instructions, policies, or configurations, every development team did things a little differently. This high degree of variation created problems. For example:
- Back-ups and business continuity tests became more cumbersome and expensive due to the variation. It simply takes more work when there are hundreds or thousands of completely different configurations instead of a few standards.
- Subcontracted web applications and microsites added variation, and increased complexity further when they were hard-coded to various third-party infrastructures.
- No standard monitoring existed, making centralized application support a challenge.
- Some applications and environments had redundancy; some didn’t. The proliferation of redundancy models produced no economies of scale.
4. Greater Efficiency
Historically, our development teams had no easy way to automatically scale or add resources to applications, so we significantly over-estimated resource requirements such as compute and storage: teams would order infrastructure sized for high-use scenarios. When applications weren’t heavily used, the servers sat idle. Likewise, additional instances, such as development and test environments, sat idle for the life of the app even though they were only needed a few days per release.
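To make the cost of sizing for peak demand concrete, here is a small illustrative calculation (all numbers are hypothetical, not Intel's actual figures):

```python
# Hypothetical demand profile: servers needed in each hour of a business day.
hourly_demand = [2, 2, 3, 8, 10, 9, 4, 2]

# Static provisioning: order enough servers for the busiest hour and run
# them around the clock.
peak_servers = max(hourly_demand)
static_server_hours = peak_servers * len(hourly_demand)

# Elastic scaling: add and remove instances so capacity tracks demand.
elastic_server_hours = sum(hourly_demand)

print(f"static:  {static_server_hours} server-hours")
print(f"elastic: {elastic_server_hours} server-hours")
print(f"utilization under static provisioning: "
      f"{elastic_server_hours / static_server_hours:.0%}")
```

In this made-up profile, static provisioning burns twice the server-hours that the workload actually needs. On a PaaS, the elastic behavior comes from adjusting instance counts on demand (for example, with Cloud Foundry's `cf scale` command) rather than ordering hardware for the peak.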
So, how did we choose a complete application stack? Of course, we knew which chipset to use, but what about frameworks, middleware, application servers, and other components? To answer this question, we began looking at an open source model for a cloud environment. Like others, our journey led us to CloudFoundry.org.
We narrowed the field down to several possible options, and after running a proof of concept (POC) we ultimately chose Cloud Foundry as the basis for private PaaS in our next-phase pilot. Cloud Foundry met our requirements in terms of technical capabilities and was differentiated by its availability as open source software and its array of supported programming languages.
Open source enables Intel to benefit from the fast pace of community updates while remaining open to customizing the solution for Intel’s business. Cloud Foundry’s vibrant community provides many advantages including feature contributions, knowledge sharing, and 3rd party support options. While many IT shops are wary of the risks associated with open source, for many years Intel has had great success using open source software in our Design Grid. Our past experience makes us comfortable with the open source approach and the Cloud Foundry open source project.
The other key factor in our decision to use Cloud Foundry is that its flexible architecture supports a number of popular programming languages and frameworks, such as Java, Ruby, Python, and PHP. This aligns with Intel developer requirements, with the exception of a gap in support for .NET applications. Fortunately, near the end of our POC, a new open source project called Iron Foundry became available, which extends Cloud Foundry to .NET applications. This means that all developers can use the same toolset and platform for application deployment, a huge win in platform flexibility that addresses the current and emerging needs of Intel’s developer community. While there is still much work to do to meet all of our requirements for a Hybrid PaaS solution, we have made great progress establishing our initial PaaS offering for Intel software developers.
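Because deployment on the platform is declarative, moving between languages is largely a matter of changing which runtime handles the app, while the push workflow stays the same. A hypothetical application manifest sketch, pushed with `cf push` (the app name and all values here are invented for illustration, not taken from Intel's environment):

```yaml
# manifest.yml -- hypothetical app, deployed with `cf push`
applications:
- name: expense-portal        # illustrative application name
  memory: 512M
  instances: 2
  buildpack: java_buildpack   # e.g. ruby_buildpack or python_buildpack instead
```

A Java team and a Ruby team can use the same manifest shape, CLI, and platform; only the buildpack line differs, which is what makes a single toolset viable across Intel's developer community.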