In my recent interactions with customers regarding their cloud strategies, the conversation inevitably turns towards the subject of “Cloud Cost” and its management. A common trend I observe is the prevalence of targets centred around the volume of workloads shifted to the cloud. Although this is an easy-to-measure KPI, it raises the question – but why? This is also a question I often pose to customers: why are you migrating to the cloud? While the responses vary, they often align with goals such as cost reduction, overall digitalisation, application modernisation, and increased security.
Studies corroborate that companies embracing the cloud outperform their counterparts (as indicated in the State of DevOps reports). This raises two questions: what is the cloud, and why does it foster more success? According to the National Institute of Standards and Technology (NIST), cloud computing is defined by five essential characteristics:
- On-demand Self-Service
- Broad Network Access
- Resource Pooling
- Rapid Elasticity
- Measured Service
Inspired by the insightful words of my esteemed colleague, Paul Nothard, I’d argue that the cloud is not a place, but a consumption model. With this perspective, it becomes clear that these characteristics enhance flexibility and shift responsibility towards the product owner through self-service and measured services. Similarly, resource pooling and rapid elasticity move operational effort away from specialised silos (network, storage, compute) and consolidate it in a platform team.
So how does this model contribute to business success?
From my point of view, the cloud’s power lies in its dynamism. It empowers businesses to accelerate, but it also demands that they keep pace: cloud services evolve quickly, and legacy functionality rapidly becomes obsolete or is withdrawn altogether.
Is your business up to speed with all its applications? If so, that’s fantastic news! However, based on my interactions with clients, they typically touch only 10% – 30% of their complete application portfolio each year. This suggests that 70% – 90% of applications generally don’t require the cloud’s dynamic characteristics. Nevertheless, if you plan to lift and shift everything to the cloud, you will incur cloud costs for all applications. This realisation led to the genesis of the “Cloud Smart” doctrine, which advocates moving only what will benefit from the cloud. But this approach only holds if you perceive the cloud as a location.
When you examine legacy applications, it’s evident that resource pooling is still desirable, though without the need for dynamism. Moreover, you’d want modern methods to isolate and secure machines to enhance security. And when it comes to updating software that hasn’t been touched for a while, testing it in an isolated environment is ideal. Given that legacy applications typically have predictable resource requirements, there’s no need for rapid elasticity or self-service. What you need is a robust hosting environment.
The solution lies in flexibility and freedom of choice
At VMware, our vision is to deliver the best of both worlds. We offer a fully automated Software Defined Data Center (SDDC) on-premises for predictable loads at a forecastable price. Additionally, we provide self-service with a standardised catalogue and seamless management interface for engaging with hyperscalers like Amazon, Microsoft, Google, and others. Some might argue that this approach undermines the goal of migrating 80% of workloads to the cloud. However, this is only true if you view the cloud as a place and not an operating model. With our SDDC stack, you get all the key cloud characteristics defined by NIST. Moving your legacy workloads to this SDDC stack equates to transitioning your workloads to an on-premises cloud.
One potential concern is rapid elasticity in on-premises data centres. Although achievable, it comes at a cost, because ample white-space capacity must be kept available. A more intelligent approach is a burst strategy that leverages hyperscalers: our products can combine on-premises and hyperscaler resources into a single logical entity, creating a seamless integration.
Business cases indicate that, given a certain scale (from around 200 virtual servers up to thousands – the more, the better), legacy workloads can be operated at 30% less cost than with hyperscalers. That is to say, an on-premises cloud can often be cheaper and more efficient than a public cloud. With such a strategy, you can counter high legacy workload costs by migrating those workloads to your private cloud. For your cloud-native software development teams, with their fast-changing requirements and typical use of cloud-native services, hyperscalers are often the ideal host. However, wouldn’t it be convenient to have a unified management layer rather than building separate operational teams?
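The shape of such a business case can be sketched with a toy break-even model. Every number below (the fixed platform cost and both per-VM prices) is a hypothetical placeholder, not a real VMware or hyperscaler price; the point is only that a private cloud trades a fixed platform cost for a lower marginal cost per VM, so it wins past a certain scale.

```python
# Toy cost model - all figures are illustrative placeholders, not real prices.

def monthly_cost_private(vm_count, fixed_platform=20_000.0, per_vm=60.0):
    """Private cloud: large fixed platform cost, low marginal cost per VM."""
    return fixed_platform + vm_count * per_vm

def monthly_cost_public(vm_count, per_vm=150.0):
    """Public cloud: negligible fixed cost, higher marginal cost per VM."""
    return vm_count * per_vm

def break_even(fixed_platform=20_000.0, private_per_vm=60.0, public_per_vm=150.0):
    """VM count above which the fixed platform cost is fully amortised."""
    return fixed_platform / (public_per_vm - private_per_vm)

for n in (50, 200, 1_000):
    priv, pub = monthly_cost_private(n), monthly_cost_public(n)
    print(f"{n:>5} VMs: private {priv:>9,.0f}  public {pub:>9,.0f}")

print(f"break-even at ~{break_even():.0f} VMs")
```

With these placeholder prices the crossover lands in the low hundreds of VMs, which matches the intuition in the text: below that scale the fixed platform cost dominates, above it the lower marginal cost per VM takes over.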
Dropbox provides an illustrative case of a company that started cloud-native and eventually realised the financial benefits of repatriating workloads from hyperscalers. They now operate most of their storage in their own data centres, which, from my perspective, would be considered a private cloud, as it aligns with the five characteristics discussed above.
In conclusion, the question of Cloud First or Cloud Smart itself underlines the misconception of the cloud as a place. If you see the cloud as an operating model, then the cloud is certainly the right place to host your applications. Being smart, in this context, means understanding the application landscape and deciding the right placement – the landing zone – for each application, depending on its characteristics. A private cloud on-premises is likely the best fit for steady-state workloads, while hyperscalers can be better suited to the dynamic loads of developers’ cloud-native applications.