Reduce CNF Customization Time by 75% and Accelerate the Journey to vRAN
Today’s Communication Service Providers (CSPs) face a difficult task: how can the Radio Access Network (RAN) be virtualized to enable next-generation 5G services, such as Ultra-Reliable Low-Latency Communication (URLLC) applications or Massive Machine-Type Communications (mMTC)? CSPs understand that migrating to a virtualized RAN (vRAN) will not only improve their own operations but also unlock new 5G revenue sources through differentiated services. In fact, according to a recent Analysys Mason survey, CSPs identified the need to reduce expenses, decrease time to deploy new services, improve network scalability and enable new enterprise revenue streams—especially at the edge—as the primary reasons for vRAN adoption. It comes as no surprise, then, that vRAN is set to become a $22B global market by 2025.
Operators understand vRAN’s potential benefits and plan to invest accordingly, but they must also confront important operational challenges: deploying a vRAN requires a highly optimized environment. If CSPs are to enable 5G-dependent applications, they need to offer differentiated services leveraging network slicing, strong reliability, low latency, tight security and, most importantly, a robust method for handling massively distributed deployments.
To enable 5G and Open vRAN, most network functions follow cloud-native principles and run as containers in Kubernetes clusters, which ease deployments, improve resiliency and simplify operations. Unlike typical enterprise applications, these network functions must meet stringent carrier-grade requirements, running on specific OS kernel and hardware configurations such as accelerated network cards, SmartNICs (for vRAN) and GPUs (for MEC). A generic container platform is not enough to host vRAN workloads—the platform must allow for centralized management to onboard, instantiate and dynamically customize infrastructure to fit application requirements.
Many vRAN network function vendors have chosen to build their software components on cloud-native technologies. To complement the Container-as-a-Service (CaaS) automation in VMware Telco Cloud Automation (TCA), VMware developed automated customization of the CaaS layer, applied during workload instantiation to fit each workload’s requirements. This crucial capability for a cloud-native network is called “late-binding.”
How it Works
As demonstrated in the demo above, late-binding addresses a fundamental issue for network operators: how to balance a set of heterogeneous vRAN and 5G core vendor requirements with consistent operations. TCA’s late-binding configures cloud resources on-demand based on the network function requirements and then automates this process on an ongoing basis.
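Conceptually, late-binding defers node configuration until the workload’s requirements are known, then reconciles those requirements against the current state of the infrastructure. The following Python sketch illustrates that idea in miniature; the function and field names are illustrative assumptions, not TCA’s actual API.

```python
# Hypothetical sketch of the late-binding idea: CNF infrastructure
# requirements declared up front are reconciled against a node's current
# profile only at instantiation time, yielding the minimal set of
# customizations the platform must apply.

def plan_customizations(cnf_requirements: dict, node_profile: dict) -> dict:
    """Return only the settings that differ from the node's current state."""
    return {
        key: wanted
        for key, wanted in cnf_requirements.items()
        if node_profile.get(key) != wanted
    }

# Example: a vDU asking for a real-time kernel, SR-IOV and PTP on a stock node.
vdu_requirements = {"kernel": "rt", "sriov": True, "ptp": "enabled"}
stock_node = {"kernel": "generic", "sriov": False}

print(plan_customizations(vdu_requirements, stock_node))
# → {'kernel': 'rt', 'sriov': True, 'ptp': 'enabled'}
```

Because the delta is computed per node at instantiation time, the same automation handles both a freshly provisioned node and one that already satisfies some of the requirements.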
Traditionally, Kubernetes node customization was a highly complex task requiring combined application, CaaS and infrastructure expertise. These activities usually required professional services and still led to misconfigurations and rollbacks. Figure 1 below illustrates this legacy onboarding and instantiation process, in which infrastructure, cluster configuration and application provisioning were disjointed and required manual reconciliation. In this manual workflow, an operator must create the Kubernetes worker nodes, customize them, create a Kubernetes cluster from those nodes and then, finally, deploy the CNF—a process that can take roughly four hours to complete.
Figure 1 – Manual CNF Validation & Instantiation
VMware’s automated late-binding solves these manual headaches for operators through automated customization of items such as OS kernels, network adapters, or Precision-Time-Protocol (PTP) configuration in response to workload demands. As highlighted in Figure 2 below, a CNF vendor can define these late-binding customizations quickly and easily in the Designer interface or via a template.
Figure 2 – CNF Customization Interface
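The kinds of customizations a vendor might declare—real-time kernel settings, SR-IOV network adapters, PTP timing—can be pictured as a declarative specification. The shape below is purely illustrative and does not reflect TCA’s actual template schema; every field name is an assumption for the sake of the example.

```python
# Illustrative only -- not TCA's actual template format. A CNF vendor might
# declare per-workload infrastructure customizations like these, which the
# platform then applies to worker nodes during instantiation (late-binding).

vdu_customizations = {
    "kernel": {
        "type": "linux-rt",                       # real-time kernel for a vDU
        "args": ["isolcpus=2-15", "nohz_full=2-15"],
    },
    "network_adapters": [
        {"device": "sriov", "vf_count": 8},       # SR-IOV virtual functions
    ],
    "ptp": {"enabled": True, "interface": "ens1f0"},  # timing sync for RAN
}

def render_summary(spec: dict) -> list[str]:
    """Flatten the spec into human-readable lines for an operator to review."""
    return [f"{section}: {value!r}" for section, value in spec.items()]

for line in render_summary(vdu_customizations):
    print(line)
```

Declaring the customizations once, in a template like this, is what makes them repeatable across every node and cluster that hosts the same workload.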
Figure 3, moreover, demonstrates the same onboarding and instantiation process streamlined by applying VMware’s Ready for Telco Cloud certification in the earlier steps to validate the CNF artifacts and ensure that the PaaS and CaaS layers support all of the CNF’s required extensions, or plug-ins. Late-binding occurs during the instantiation phase and automatically aligns the application and infrastructure layers—thus eliminating the manual configuration processes and typically reducing CNF customization time by 75%.
Figure 3 – Late-Binding Workflow Process: Reducing CNF Customization Time by 75%
Let’s now walk through an example use case of an operator using late-binding to customize a virtual distributed unit (vDU)—a necessary component of vRAN and Open RAN (O-RAN) deployments. The operator in this scenario is deploying an Open vRAN network, which requires customizing the vDU hosting infrastructure: it must onboard and deploy its vendor’s vDU, vCU-UP and vCU-CP workloads across hundreds of cell sites. Deploying these vCUs and vDUs requires scores of customizations—without late-binding automation, the Kubernetes nodes and VMs at each cell site would have to be customized manually, a process repeated across hundreds or thousands of instances, making deployments and upgrades time-consuming and costly.
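The scale of the problem is easy to quantify. Using the figures cited elsewhere in this post—roughly four hours per manual customization and a 75% reduction with late-binding—the sketch below tallies the effort across a hypothetical cell-site footprint; the site count and per-site times are illustrative assumptions.

```python
# Back-of-the-envelope tally of the scale problem: the same customization
# must be repeated per cell site. The per-site times are assumptions for
# illustration (~4 h manual vs. ~1 h automated, i.e. the ~75% reduction).

MANUAL_HOURS_PER_SITE = 4.0
AUTOMATED_HOURS_PER_SITE = 1.0

def total_hours(sites: int, hours_per_site: float) -> float:
    """Total customization effort across a uniform cell-site footprint."""
    return sites * hours_per_site

sites = 500  # hypothetical Open vRAN deployment
manual = total_hours(sites, MANUAL_HOURS_PER_SITE)
automated = total_hours(sites, AUTOMATED_HOURS_PER_SITE)
print(f"manual: {manual} h, automated: {automated} h "
      f"({1 - automated / manual:.0%} saved)")
# → manual: 2000.0 h, automated: 500.0 h (75% saved)
```

Even at these rough numbers, the per-site savings compound into weeks of recovered engineering time at vRAN scale.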
Late-binding constitutes an integral part of a CSP’s design of new cloud-native networks, as it provides repeatability, consistency and uniformity of cluster customization based on CNF requirements. Late-binding automates complex operational tasks while providing an essential abstraction between the cloud infrastructure resources and the network function. Without TCA’s late-binding, customizing Kubernetes nodes manually could take several hours; with it, the customization process can occur in 75% less time. Without late-binding, an operator must know a cluster’s purpose before deployment, and all the nodes in the cluster would be configured identically; with late-binding, an operator can adapt the nodes on the fly, as needed, based on workload requirements. As a result, an operator’s costs decrease: late-binding saves time and resources, increases the frequency of application roll-outs, enables the adoption of CI/CD operations methods and reduces the complexity of deploying and maintaining applications.
As operators continue to migrate to cloud-native networks and from RAN to vRAN to O-RAN, late-binding will prove crucial to these journeys through unparalleled automation.
See: Analysys Mason report, “Implementing the vRAN Cloud: Strategies for Success,” by Caroline Chappell, Caroline Gabriel and Gorkem Yigit, January 18, 2021, p. 10.
“Implementing the vRAN Cloud,” p. 4.