Cloud native represents a fundamental change in the way communications service providers (CSPs) design, deploy, and manage applications and services. Its goal is to enable the production of highly scalable and flexible applications for deployment in a public, private, or hybrid cloud.
For service providers with networks that are highly monolithic and inflexible, this transformation isn’t an easy one, which is why we worked with Heavy Reading to get an analyst’s perspective on how service providers are approaching it from a platform standpoint.
The outcome of our engagement with Heavy Reading is a white paper produced by industry-recognized analyst Jennifer Clark. One of the biggest questions service providers must tackle when building their transformation strategies is the choice between bare-metal and virtualized infrastructure. In the paper, Jennifer examines this question and evaluates the relative advantages of running containerized solutions in the 5G RAN on a virtualized infrastructure versus on bare metal.
Let’s take a look at what you’ll learn in the paper.
The Move to Containers Is Underway
CSPs are committed to the migration to containers as part of their overall cloud native transition strategy, as shown in the results of a Heavy Reading survey of 92 service providers conducted in 4Q21.
This data shows that containers and microservices are being implemented throughout the CSP organization. While many of these implementations are trials, confined to a few nodes and/or to centralized mobile core functions, most have a two- to five-year implementation schedule.
Realities of Bare Metal
For the CSPs who have expressed interest in deploying on bare metal, the benefits cited are similar to those for containerization and cloud native networking in general. CSPs expect the bare-metal infrastructure to be compact, draw less power, be highly automated, and be both easier and faster to deploy. The assumption is also that bare-metal infrastructure will be easier to manage, particularly for highly distributed applications such as edge computing and the RAN.
As the CSPs start to deal with the realities of bare-metal infrastructure implementation, however, they face a variety of hurdles:
- Leveraging existing investment in virtualization infrastructure
- Cost of scaling
- Managing hardware lifecycles
- CSPs’ internal skillsets
- Concerns about the security and isolation of applications on bare metal
- Kubernetes implementation
- Support for end-to-end network slicing
Rising investments in 5G infrastructure, together with the functional requirements of the CNFs that CSPs need today, make bare-metal infrastructure without a virtualization layer and hypervisor a deployment option for the future or for greenfield network operators.
Meeting Service Provider Objectives for Cloud Native
In the paper, Jennifer addresses the eight primary objectives of moving to cloud native and then highlights how virtualization supports those objectives.
- Simplify – The use of virtualization and a hypervisor allows a server to support multiple tenants and multiple OSs, which gives CSPs an environment that adapts easily to constant changes in applications and workloads.
- Scale – A highly virtualized environment with VMs and a hypervisor is more agile: capacity can be shared across VMs (controlled by the policies governing those VMs) and instantiated in minutes, when and where it is needed.
- Performance and Latency – While there has historically been concern about “noisy neighbors”, recent advancements in virtualization have shown there is no hit to performance or latency, even for demanding applications such as RAN workloads.
- Security – VM-based infrastructure has strong embedded security: the separation of applications into independent VMs guarantees application isolation along with high levels of service.
- Manage and Automate – Virtualized solutions offer rich management, visualization, analytics, and capacity planning/control that enable improved resiliency and recovery. Abstracting the application and OS from the hardware also simplifies updates and enables support for multiple versions of the OS and/or Kubernetes.
- Multi-Cloud Support – A multi-cloud environment requires management, tools, and orchestration that can be used across multiple clouds to simplify management from a single location, and these solutions already exist for virtualized environments.
- Avoid Vendor Lock-in – By abstracting an application and its associated OS from the hardware, CSPs can build an ecosystem of CNFs and avoid being locked into a limited set of one or two vendors.
- Lower TCO – Fewer servers on a virtualized infrastructure mean savings in OPEX as well as CAPEX thanks to a smaller footprint and reduced power draw, which also supports sustainability goals.
Virtualization allows CSPs to leverage cloud economics by sharing a pool of resources. The investments that CSPs have already made in NFV, along with the management and tooling expertise that they have acquired, will encourage them to deploy containers in a virtualized, VM-enabled platform.
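To make the “containers on a VM-enabled platform” pattern a little more concrete, here is a minimal sketch, not taken from the white paper, assuming a Kubernetes cluster whose worker nodes run as VMs and carry a hypothetical label such as infra=vm, a placeholder CNF image named example.com/cnf-upf:1.0, and the open-source Kubernetes Python client. It shows how a containerized network function could be scheduled onto the VM-backed workers with explicit resource requests, so the VM layer and the container layer work together rather than competing.

```python
# Illustrative sketch only: deploy a containerized network function (CNF)
# onto VM-backed Kubernetes worker nodes. Assumes the official Kubernetes
# Python client, an existing namespace, and a hypothetical node label
# "infra=vm" applied to the virtualized worker pool; the image is a placeholder.
from kubernetes import client, config


def deploy_cnf_on_vm_nodes(namespace: str = "cnf-demo") -> None:
    config.load_kube_config()  # use the current kubeconfig context

    container = client.V1Container(
        name="upf",
        image="example.com/cnf-upf:1.0",  # placeholder CNF image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "2", "memory": "4Gi"},
            limits={"cpu": "4", "memory": "8Gi"},
        ),
    )

    pod_spec = client.V1PodSpec(
        containers=[container],
        node_selector={"infra": "vm"},  # pin pods to VM-backed workers
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="upf", labels={"app": "upf"}),
        spec=client.V1DeploymentSpec(
            replicas=3,  # scale out across the shared VM pool
            selector=client.V1LabelSelector(match_labels={"app": "upf"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "upf"}),
                spec=pod_spec,
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace=namespace, body=deployment
    )


if __name__ == "__main__":
    deploy_cnf_on_vm_nodes()
```

Whether placement is expressed through node labels, separate node pools, or cluster-level policies is a platform design choice; the point is simply that containerized workloads can be scheduled onto, and scaled across, the virtualized infrastructure CSPs already operate.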
Learn more by reading the Heavy Reading White Paper: A Platform for Change