Infrastructure for Agility: How Hypervisors Benefit Radio Access Networks

For communications service providers that are opening their radio access networks (RANs) to better support 5G, the hypervisor is the unheralded workhorse. Though it rarely takes center stage, it supplies the underlying foundation for building an open RAN. The benefits of the hypervisor are multi-faceted:

  • Supports disaggregation of the RAN in a multi-vendor network 
  • Runs multiple workloads on the same COTS hardware 
  • Runs multiple versions of operating systems 
  • Provides the agility and automation needed to reap the full potential of 5G 
  • Delivers high performance and low latency for telecom workloads 
  • Improves security for containerized workloads through isolation and boundaries 

An open RAN embraces the industry trajectory toward virtualization and software-defined networking, while the standards for 5G point toward a cloud-native future. Part of that future is the disaggregation of the RAN. Virtualization and software-defined networking let you replace costly, purpose-built RAN hardware with cheaper commodity servers.

Consolidation on a common, shared hardware layer created by hypervisors improves utilization and increases efficiency, and the software-defined nature of virtual machines (VMs) lets you manage your RAN stack with more agility, as a new paper from IDC titled Empowering Telecom Operators to Deploy vRAN on Cloud and Edge Infrastructure makes clear.

By virtualizing and disaggregating RAN functions, you can lower cost, deploy network functions for the RAN at their optimal locations, manage the functions at scale from a central location, and automate such things as elasticity and security. 

Supporting Multi-Vendor Networks 

The establishment of multi-vendor networks is a guiding principle of the efforts by the O-RAN Alliance and the Open RAN Policy Coalition to disaggregate and open the RAN. And it is the hypervisor that turns this guiding principle into an obtainable reality. Simply put, the hypervisor supplies infrastructure that can immediately support multiple vendors and extract your operations from the vendor lock-in trap of a closed, monolithic stack.

A multi-vendor RAN stack and network are also key to security. The security problems of a single-vendor RAN stack are compounded if the components are closed systems that give the operator no software transparency and no rapid disclosure of discovered security vulnerabilities and their patches, as our recent white paper on open RAN security pointed out.

Multi-vendor environments that flourish through standardized open interfaces and the resulting interoperability lead to better-protected, more secure telecommunications infrastructure, so much so that federal governments are beginning to either require or support them.

Flexible Workload Workhorse 

The hypervisor is also the underlying workhorse that lets you run and easily manage multiple workloads on the same hardware, using different operating systems to support different network functions. Multiple distributed units (DUs), for example, can run on the same server, and they can do so with high performance and cost-effective density. Physical device pass-through and hardware offload can be shared across multiple DUs running on the same hypervisor to increase density and minimize server costs.

RAN workloads have stringent performance requirements measured in microseconds. The VMware hypervisor, VMware ESXi, has been optimized to remove jitter and meet the demanding requirements of RAN applications. With VMware Telco Cloud Platform RAN, the Topology Manager optimally allocates CPU, memory, and device resources on the same NUMA node to support performance-sensitive workloads.  
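
VMware Telco Cloud Platform RAN builds its clusters on Kubernetes, so NUMA-aware placement of this kind is generally expressed through the standard Kubernetes CPU, Memory, and Topology Managers. The kubelet configuration below is a minimal sketch of that alignment, not a VMware-specific setting; the reserved CPU list is illustrative.

```yaml
# KubeletConfiguration sketch: give latency-sensitive RAN pods exclusive CPUs
# and keep CPU, memory, and device allocations on a single NUMA node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static                 # exclusive cores for Guaranteed pods
memoryManagerPolicy: Static              # reserve memory on the chosen NUMA node
topologyManagerPolicy: single-numa-node  # reject placements that span NUMA nodes
topologyManagerScope: pod                # evaluate alignment for the whole pod
reservedSystemCPUs: "0,1"                # keep housekeeping off the RAN cores
```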

Timing accuracy is also fundamental to running RAN workloads. The VMware hypervisor uses the Precision Time Protocol (PTP) to meet the levels of accuracy required by the RAN, and a dedicated port is not required to use PTP with the Intel E810 NIC.
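
For illustration, this is roughly what PTP synchronization looks like when driven from a Linux host with the open-source linuxptp tools; the interface name is a placeholder, and on ESXi the hypervisor's PTP support is configured through vSphere rather than with these commands.

```sh
# Run ptp4l as a PTP client with hardware timestamping on the NIC port
# (ens1f0 is a placeholder for an E810 port).
ptp4l -i ens1f0 -H -s -m

# Discipline the system clock from the NIC's PTP hardware clock,
# waiting for ptp4l to synchronize first.
phc2sys -s ens1f0 -w -m
```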

And then there’s the elasticity built into VMware vSphere, which scales virtual machines and hypervisors to improve performance or to minimize resource costs:

* Workloads can be migrated through automation to maximize performance. 

* Workloads can be consolidated through automation to minimize resource costs. 

The outcome is cost-effective performance: Workloads that need high performance get it, and workloads that don’t need high performance can be run at a lower cost.  
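
In vSphere, this rebalancing is normally automated by DRS and vMotion. As a rough sketch of the underlying operation, the open-source govc CLI can trigger an equivalent live migration by hand; the VM name and host inventory path here are hypothetical.

```sh
# Live-migrate a DU workload VM to a less loaded host (names are placeholders).
govc vm.migrate -host /dc1/host/ran-cluster/esxi-02.example.com du-vm-01
```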

High Performance and Low Latency for RAN Workloads 

VMware ran industry-standard real-time micro-benchmarks, namely cyclictest and oslat, to compare the performance of RAN workloads on VMware vSphere and bare metal and found that performance is equivalent. 
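
Both tools come from the Linux rt-tests suite. The commands below are a rough sketch of how such a comparison might be run on a guest and on bare metal; the CPU list, priority, and duration are illustrative, and option names can vary between rt-tests versions.

```sh
# Scheduling-latency benchmark: one measurement thread per listed CPU,
# memory locked, results collected in a histogram.
cyclictest --mlockall --priority=95 --interval=200 \
           --affinity=2-5 --threads --duration=1h --histogram=1000

# OS-noise benchmark on the same isolated CPUs.
oslat --cpu-list 2-5 --rtprio 95 --duration 1h
```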

In addition to performance, operating CNFs in production requires security, lifecycle management, high availability, resource management, data persistence, networking, and automation, all of which are an integral part of VMware vSphere and the VMware Telco Cloud.

When it comes to RAN workloads, why would you want to use bare metal when you can get all the upside of a hypervisor, including security, centralized management, availability, and automation, with no risk to performance and no impact on latency? The choice is obvious.  

Management and Automation 

The management and automation capabilities of using hypervisors and virtual machines can significantly reduce the pain and problems that come with trying to use bare metal. With ESXi 7.0U3, you can, for instance, manage the NIC firmware and image directly. And with VMware Telco Cloud Automation, which is part of our RAN stack, late binding of node configurations can be accomplished with IPv6 support.  

VMware Telco Cloud Platform RAN also optimizes the performance of large Kubernetes clusters and mixed workloads. Programmable resource provisioning optimizes the placement of 5G services and CNFs to maximize resource utilization and RAN performance through the following steps:

1. Assess a service’s requirements. 

2. Gauge the resources of Kubernetes and the hardware and infrastructure. 

3. Deploy a performance-optimized Kubernetes cluster. 

4. Place the service on the cluster. 

As for the components of a virtualized RAN, programmable resource provisioning optimizes where to locate DUs and centralized units (CUs). When you onboard a virtualized RAN function, you can programmatically adjust the underlying availability and resource configuration based on the function’s requirements.

To meet high-performance, low-latency requirements, DUs can be placed at the far edge near users. CUs, meanwhile, can be automatically placed or dynamically moved closer to the core to maximize resource utilization. These late-binding capabilities let you dynamically move DU and CU resources on demand to improve resource utilization or to add more resources when necessary. 
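
In plain Kubernetes terms, those placement decisions ultimately resolve into constraints like the sketch below, which pins a DU pod to far-edge nodes; the labels, names, and image are hypothetical, and VMware Telco Cloud Automation applies such constraints programmatically rather than through hand-written manifests.

```yaml
# Placement sketch: keep the DU at the far edge, on nodes labeled for it,
# with Guaranteed QoS so NUMA alignment applies.
apiVersion: v1
kind: Pod
metadata:
  name: du-example
spec:
  nodeSelector:
    ran.example.com/site-type: far-edge       # hypothetical site label
  containers:
  - name: du
    image: registry.example.com/ran/du:1.0    # placeholder image
    resources:
      requests:
        cpu: "8"
        memory: 16Gi
      limits:
        cpu: "8"
        memory: 16Gi
```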

Boosting Performance by Selecting a Linux Kernel Version

With containers, a crucial factor is the version of the container host’s kernel and its performance characteristics. Another performance advantage of running containers on virtual machines is that you can not only easily select the container host that you want to use but also maximize the CPU resources of the underlying hardware by running multiple virtual machines, each with its own choice of container host.  

In other words, each virtual machine on vSphere can run a different kernel; a bare-metal server, however, can run only one. By using virtual machines, you can easily select the container host operating system that works best for the performance demands of your CNF.
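
One quick way to see this in practice is to list the kernel that each container host (each worker node being a VM) is running; the command below is a generic Kubernetes check, not a VMware-specific tool.

```sh
# Show the container host kernel version for each worker node VM.
kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion
```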

Real-Time Operating System Optimized for the RAN 

On vSphere, one example is Photon OS, a security-hardened, minimalist host operating system. Its Linux kernel is optimized for performance on vSphere and the VMware hypervisor, and it supports additional platforms such as ARM64 devices (for example, the Raspberry Pi 3) to help enable Internet of Things applications at edge sites.

Photon OS provides a secure Linux runtime environment for running containers, including CNFs, and a real-time kernel flavor called linux-rt to support low-latency RAN workloads. The linux-rt kernel is based on the Linux PREEMPT_RT patch set, which turns Linux into a real-time operating system. In addition to the real-time kernel, Photon OS 3.0 supports several userspace packages that are useful for configuring the operating system for real-time workloads. The linux-rt kernel and the associated userspace packages together are referred to as Photon Real Time (RT).
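
As an illustration, switching a Photon OS container host to the real-time flavor is typically a package install followed by a reboot; the package name below follows the Photon documentation's linux-rt naming, but availability and exact names can vary between Photon releases.

```sh
# Install the real-time kernel flavor on Photon OS, then reboot into it.
tdnf install -y linux-rt
reboot
```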

Conclusion  

Hypervisors solve infrastructure-related problems by better utilizing servers, improving infrastructure management, and streamlining IT operations. In short, hypervisors are designed to help you reap the full potential of new revenue streams in the 5G era; to find out more, see this paper from IDC.

To wrap up, here’s a summary of the key benefits of using hypervisors and virtual machines for RAN:  

* Scale CNFs without the pain of adding, configuring, and managing physical hardware  

* Select the best Linux kernel version for your workload  

* Optimize the performance of large Kubernetes clusters and mixed workloads on shared infrastructure  

* Automate lifecycle management of Kubernetes clusters, RAN functions, and 5G services  

* Streamline operations and reduce OpEx  

Previous blog posts have highlighted additional benefits of virtualization over bare metal. Finally, check out how our tests show that RAN workload performance on the VMware hypervisor is equivalent to bare metal.