Gimme, gimme never gets… unless you’re a developer.
Being a CIO is tough. It’s a high-pressure job with a complex problem set and typically short tenures. Perhaps unfairly, CIOs rank second to last in job tenure among the C-suite, according to the Korn Ferry Institute.
There are many reasons the average enterprise CIO doesn’t last long in the role. Budget overruns, project failures, and unrealistic transformation objectives rank among them. Not being able to deliver new services and respond to the needs of the business is a recurring theme. A common refrain from developers is that whenever they ask the CIO or the IT organization for resources, they receive the same response: “Wait.”
“Wait.” is usually interpreted as “No.”
So the developer, or the line of business (LoB), just goes to the public cloud. And the CIO might just get the bill.
How can CIOs and their teams stop being seen as an obstacle to innovation? Moreover, how can they start saying “Yes” to the growing list of requests coming their way, without creating even more problems for themselves by breaking good governance practices? Simply put, cloud teams, and their CIOs, fundamentally want two things:
- To enable cloud consumption.
- To maintain cloud control.
Let’s take a look at how the Cloud Operating Model can make this possible.
Restoring ‘Day Zero’ with the Cloud Operating Model
I think it is fair to say that developers embraced a key element of cloud operations first. Most coders will check out a piece of code, add their changes, and check that code back into a shared repository, such as Git. Through a series of decision gates, their code is promoted through compile, test, and staging steps before reaching production. The entire development team has visibility into the process and can collaborate, iterate, and improve the code over time. This is the heart of the innovation engine that sends daily updates to the apps on my mobile phone.
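As a rough sketch of that gated flow (illustrative Python only, not any particular CI/CD product; the stage names and gate results are assumptions):

```python
# Illustrative sketch of gated promotion; stage names and recorded gate
# results are assumptions, not the behavior of any specific CI/CD tool.
STAGES = ["compile", "test", "staging"]

def promote(build: dict) -> str:
    """Advance a build through each decision gate; stop at the first failure."""
    for stage in STAGES:
        if not build["results"].get(stage, False):   # the gate for this stage
            return f"held at {stage}"
        build["history"].append(stage)               # visible to the whole team
    return "released to production"

build = {"commit": "abc123",
         "results": {"compile": True, "test": True, "staging": True},
         "history": []}
print(promote(build))  # -> released to production
```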
The answer to the CIO’s inability to say “Yes” to increasing developer and LoB requests is found in this aspect of the Cloud Operating Model. What if the IT organization were just as adept at rapid iteration and improvement of its service designs, policies, and provisioning as developers are with their code? What if developers could easily leverage the good work that the IT, network, and security organizations are already doing, and apply it to both private and public cloud workloads?
What if the answer to every request from the developer and the Line of Business was “Yes”?
In Part Two of this series, I reviewed Self-Driving Operations and how it enables IT organizations to deliver a competitive cloud experience to developers. I also discussed how many lines of business are already going directly to the cloud at Day 2 (Run), bypassing the best practices that IT offers through Day 0 (Plan) and Day 1 (Build).
What if the CIO could restore the policy, efficiency, and control benefits of Day 0 and Day 1 and say “Yes” to every request from the developers for resources?
Saying “Yes” to every request can look like the following (a rough sketch of the lease and placement logic follows the list):
- Chris, the developer, wants a new environment. “Gimme!”
- Test and dev applications and infrastructure are instantly deployed to a legacy data center, or possibly to an underutilized public cloud endpoint. (Many enterprises have unused cloud credits embedded in their enterprise agreements with various vendors.)
- Chris’s new test environment has a lease associated with it. The lease expires after a week, and her team lead or director renews it each subsequent week as long as the project is active. Or it turns out to be just a science project, in which case the lease expires and the environment is automatically decommissioned.
- Or… this might be an initiative that has legs. The test/dev build that started as a “Gimme!” slowly matures through several generations of development, possibly into a system that is about to launch from staging into production.
- Based on intelligent policy managed by the cloud team, the “gimme” app is deployed into production in a secure, highly performant, and cost-effective tier-one data center, automatically and transparently, adhering to performance, cost, and security policy.
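To make the lease and placement steps above concrete, here is a minimal sketch. The field names, the one-week term, and the placement rules are assumptions chosen for illustration; they are not vRealize Automation’s actual API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical lease record for Chris's test environment: a one-week term,
# renewable by a team lead or director while the project is active.
lease = {"owner": "chris",
         "expires": datetime.now(timezone.utc) + timedelta(days=7)}

def renew(lease: dict, approver: str) -> None:
    """Extend the lease by another week when a lead or director approves."""
    lease["expires"] = datetime.now(timezone.utc) + timedelta(days=7)
    lease["renewed_by"] = approver

def expired(lease: dict) -> bool:
    """Expired, unrenewed environments become candidates for automatic decommissioning."""
    return datetime.now(timezone.utc) > lease["expires"]

# Hypothetical placement policy: test/dev lands on spare capacity,
# production lands on the tier-one data center with full policy applied.
def place(workload: dict) -> str:
    if workload["stage"] in ("test", "dev"):
        return "legacy-dc-or-unused-cloud-credits"
    return "tier-one-data-center"  # performance, cost, and security policy enforced

print(place({"stage": "test"}))        # -> legacy-dc-or-unused-cloud-credits
print(place({"stage": "production"}))  # -> tier-one-data-center
```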
Through every step, Chris gets the resources she needs. If it comes time to switch clouds for policy, performance, or cost reasons, her code is portable. There’s no need to start over. There are no security or data sovereignty issues as her development environment has the appropriate best practices and compliance standards applied to it automatically.
Day 0 and Day 1 (Plan and Build) make their big return to the enterprise through the Cloud Operating Model. Not only can the developer and the line of business get what they want, but the architecture, operations, and security practices can work together efficiently at developer pace, building their designs and policies directly into the “gimme” environments Chris needs. Remarkably, all of this effort is reusable across multiple clouds.
This is how the VMware platform allows the CIO to say “Yes” to every resource request and simultaneously restore design excellence, security, due diligence, and cost management.
With the next-generation self-service functions available in VMware vRealize Automation, the work done to create and improve workload policies is applicable across data center and public cloud environments. Policy and provisioning efforts are performed once, not duplicated for each environment. This eliminates the time and cost of reskilling and avoids data center and cloud operational silos, enabling IT teams to focus on innovation that impacts the business.
The planning, best practices, and build efforts you initiate on VMware’s platform can be reintroduced into your public cloud endpoints, now that VMware supports AWS, Azure, GCP, and IBM. Provisioning is iterated on and further enhanced with each deployment, keeping designs and policies easier to deploy, maintain, and enforce without impeding developers or the line of business. No matter the cloud, the process remains consistent. Plans from the past won’t be lost to memory and turnover, because infrastructure and application deployments can be managed centrally with a high level of visibility.
Self-Service Automation
If Self-Driving Operations primarily addresses the needs of the infrastructure group, Self-Service Automation supports the needs of the cloud and development teams. The platform allows organizations to automatically apply the right policies, to the right workload, at the right time—incorporating any cloud endpoint or service.
Cross-functional teams can significantly reduce time-to-market and provisioning costs. With vRealize Automation’s next-generation architecture, the work to create and apply policies can be reused and improved after the first deployment. Provisioning automation is applicable across both data center and multiple public cloud environments, including common native cloud services such as AWS RDS, Route 53, and EC2, as well as Azure VMs, SQL, and Redis. This eliminates the time and cost of reskilling and avoids data center and cloud operational silos.
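As a simplified illustration of that reuse, one abstract workload definition can carry its policies to whichever endpoint it lands on. The structure, names, and service mappings below are made up for this example; they are not vRealize Automation’s blueprint schema.

```python
# One abstract workload definition, with governance policies attached once.
workload = {
    "name": "orders-api",
    "compute": {"cpu": 4, "memory_gb": 16},
    "database": "managed-sql",
    "policies": ["encrypt-at-rest", "tag:cost-center", "standard-network-segmentation"],
}

# Hypothetical mapping from the abstract request to endpoint-native services.
ENDPOINT_SERVICES = {
    "vsphere": {"compute": "vSphere VM", "managed-sql": "in-guest SQL"},
    "aws":     {"compute": "EC2",        "managed-sql": "RDS"},
    "azure":   {"compute": "Azure VM",   "managed-sql": "Azure SQL"},
}

def render(workload: dict, endpoint: str) -> dict:
    """Resolve the same definition to endpoint-native services; policies travel with it."""
    services = ENDPOINT_SERVICES[endpoint]
    return {
        "endpoint": endpoint,
        "compute": services["compute"],
        "database": services[workload["database"]],
        "policies": workload["policies"],  # same governance on every cloud
    }

for endpoint in ENDPOINT_SERVICES:
    print(render(workload, endpoint))
```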
vRealize Automation and the Cloud Operating Model
Some teams encounter resistance when explaining how their cloud strategy will handle all the products and services already deployed. The problem underlying siloed clouds is the diverse array of systems and equipment required to run the workloads your company has today in the data center, in the cloud, and at the edge.
VMware’s platform can help break down these silos and unify your clouds into one operational environment – spanning multiple endpoints, hardware investments, and geographies.
We’ve found that some teams have struggled to succeed in their early multi-cloud and Infrastructure as Code (IaC) initiatives for a straightforward reason. It has been challenging to determine the design of their existing deployments, and no one has the time to rewrite them all from scratch manually.
vRealize Automation can automatically search your vSphere, AWS, and Azure environments, discover your applications, and bring them under the continuous improvement process described above. The platform will onboard your multi-cloud deployments using a rules-based process that can be guided by the tagging you already have in place. It will even create the blueprints and IaC artifacts for you.
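Conceptually, tag-guided onboarding can be thought of as rule evaluation like the sketch below. The tags, project names, and rule format are invented for illustration and do not reflect the product’s internals.

```python
# Hypothetical onboarding rules keyed off tags that already exist on the workloads.
RULES = [
    {"match": {"env": "prod"}, "project": "production", "generate_iac": True},
    {"match": {"env": "test"}, "project": "test-dev",   "generate_iac": True},
    {"match": {},              "project": "unassigned", "generate_iac": False},  # catch-all
]

def onboard(discovered: dict) -> dict:
    """Assign a discovered workload to a project based on its existing tags."""
    tags = discovered.get("tags", {})
    for rule in RULES:
        if all(tags.get(key) == value for key, value in rule["match"].items()):
            return {
                "workload": discovered["name"],
                "project": rule["project"],
                "create_iac_artifact": rule["generate_iac"],
            }

print(onboard({"name": "web-01", "tags": {"env": "prod", "owner": "chris"}}))
# -> {'workload': 'web-01', 'project': 'production', 'create_iac_artifact': True}
```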
At VMware, we’re working hard to make it easy for infrastructure, operations, and security teams to bring the benefits of the Cloud Operating Model into their practice areas. Our objective is to empower you to carry your existing talent and investments into new deployment methodologies and innovation opportunities across your organization. Furthermore, the performance and impact of the applications remain the primary concern of the LoB or the app owner, and ultimately of the organization.
Our next view into the Cloud Operating Model will focus on economics, starting with debt. See ‘Cloud Is’ – Part Four – Technical Debt
VMware vRealize Automation is a multi-cloud automation platform that transforms IT service delivery by reducing the complexity of app, OS, compute, network, and storage delivery while streamlining cloud processes.
With vRealize Automation, you can:
- Eliminate error-prone manual IT tasks and processes
- Onboard workloads to the cloud quickly and consistently
- Enable consistent visibility and governance for workloads deployed across any environment
- Reduce day-to-day administration, support, and maintenance
To learn more about how VMware delivers Self-Service Automation, see:
https://www.vmware.com/products/vrealize-automation.html
To learn more about using Infrastructure as Code in vRealize Automation, see:
https://blogs.vmware.com/management/2020/01/infrastructure-as-code-and-vrealize-automation.html
The full ‘Cloud Is’ series is available here: https://blogs.vmware.com/management/author/rquerin