AirHop is a 2024 Nominee for the TECHCOnnect Innovation Showcase.
Reduce power consumption and operating expenses, while making the RAN more sustainable.
Massive MIMO (Multiple Input Multiple Output) technology marks a transformative step in
wireless communication, but its benefits come with one primary challenge: elevated power
consumption. Its high spectral efficiency enables simultaneous transmission to multiple users,
delivering higher data rates, greater network capacity, and lower latency. Deployment, however,
demands substantial infrastructure investment, including large antenna arrays and sophisticated
signal processing equipment, which drives up both energy consumption and operating costs.
As networks strive for sustainability and efficiency, managing and mitigating the increased
power demands of Massive MIMO becomes a crucial consideration: operators must balance its
performance benefits against the associated power-related costs.
What if …
Wouldn’t it be great if a CSP could dynamically reduce power consumption by adjusting the RF
configuration of the Tx/Rx arrays? And what if this adjustment were driven by RAN traffic
demand, always using the most energy-efficient configuration that still meets that demand?
The good news: AirHop has developed an AI-powered rApp called Energy Saving with MIMO
Adaptation (ESMA) that does exactly this. The ESMA rApp is designed for deployment on the
non-real-time RAN Intelligent Controller (RIC) in the O-RAN Alliance deployment architecture,
as provided by VMware as part of its SMO Framework, including the VMware Centralized RIC.
The ESMA rApp is the latest addition to AirHop’s Auptim portfolio of xApps and rApps.
How does the Auptim ESMA rApp work?
The Auptim ESMA rApp harnesses artificial intelligence through Deep Reinforcement Learning
(DRL). The ESMA DRL agent considers the Radio Access Network (RAN) state, characterized by
historical traffic data, to formulate actions: dynamic switches between the Radio Frequency
(RF) configurations of the radio units’ (O-RUs) Tx/Rx arrays, such as 16T16R, 32T32R, 64T64R,
and so on. The agent then receives feedback from the network in the form of rewards. The
reward function is designed to simultaneously minimize RAN power consumption and uphold
users’ Quality of Service (QoS), so the agent’s decisions optimize RF configurations for both
efficient power usage and a seamless user experience. That balance is the focus of AirHop’s
Efficiency Excellence innovation.
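To make the reward idea concrete, here is a minimal sketch of the kind of reward shaping such an agent could use. This is not AirHop’s actual formula; the function name, the penalty value, and all wattages are illustrative assumptions.

```python
# Hypothetical reward shaping for a power-saving RF-configuration agent
# (a sketch, not AirHop's actual formula): reward grows as power drops,
# but any QoS violation is penalized heavily, so the agent learns to save
# energy only when traffic demand is still met.

def reward(power_watts: float, max_power_watts: float, qos_met: bool,
           qos_penalty: float = 10.0) -> float:
    """Power-saving reward in [0, 1], minus a large penalty on QoS violation."""
    power_saving = 1.0 - power_watts / max_power_watts  # 1.0 = no power used
    return power_saving - (0.0 if qos_met else qos_penalty)

# Example: a config drawing 300 W against a 1000 W full-array baseline
# earns a positive reward only while QoS is still met.
print(reward(300.0, 1000.0, qos_met=True))   # ≈ 0.7
print(reward(300.0, 1000.0, qos_met=False))  # ≈ -9.3
```

The asymmetric penalty encodes the priority stated above: saving power is never worth degrading the user experience.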
The first step in ESMA rApp operation is for the DRL agent to learn the best RF configuration
to use (the action) based on the current and predicted traffic demand (the state) and the
feedback received from its previous actions (the reward). This training is performed offline
using a simulated network that generates a multitude of traffic patterns and user QoS
requirements, together with various RF configurations and their respective power consumption
profiles.
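The offline training loop can be sketched with a toy tabular agent, standing in for the DRL agent, trained against a trivial simulated network. All power draws, capacities, and hyperparameters are hypothetical; a real system would use a deep network and a far richer traffic model.

```python
# Toy offline training: a tabular epsilon-greedy agent learns which RF
# configuration (action) to use for each traffic-load bucket (state) in a
# simulated network. All numbers are hypothetical illustrations.
import random

CONFIGS = ["4T4R", "16T16R", "64T64R"]                   # the action space
POWER_W = {"4T4R": 150, "16T16R": 300, "64T64R": 1000}   # assumed draw (watts)
CAPACITY = {"4T4R": 0.3, "16T16R": 0.6, "64T64R": 1.0}   # assumed servable load

N_STATES = 10  # offered traffic load discretized into 10 buckets (the state)
q = [[0.0] * len(CONFIGS) for _ in range(N_STATES)]

def reward(cfg: str, load: float) -> float:
    """Reward power saving, but penalize any capacity (QoS) violation."""
    saving = 1.0 - POWER_W[cfg] / POWER_W["64T64R"]
    return saving if CAPACITY[cfg] >= load else -10.0

random.seed(0)
alpha, eps = 0.1, 0.2  # learning rate and exploration probability
for _ in range(20_000):                       # simulated traffic samples
    load = random.random()                    # offered load in [0, 1)
    s = min(int(load * N_STATES), N_STATES - 1)
    if random.random() < eps:                 # explore a random config
        a = random.randrange(len(CONFIGS))
    else:                                     # exploit current estimate
        a = max(range(len(CONFIGS)), key=lambda i: q[s][i])
    q[s][a] += alpha * (reward(CONFIGS[a], load) - q[s][a])  # one-step update

# Greedy policy after training: lowest-power config that still meets demand.
policy = {s: CONFIGS[max(range(len(CONFIGS)), key=lambda i: q[s][i])]
          for s in range(N_STATES)}
print(policy)
```

After training, low-load buckets map to the low-power 4T4R configuration and high-load buckets to 64T64R, mirroring the state/action/reward loop described above.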
The second step in ESMA rApp operation is to deploy the trained AI agent as an rApp on the
RIC function in VMware’s SMO Framework, the VMware Centralized RIC, where it decides the
RF configuration for each cell at a granularity of every 5 minutes to an hour, depending on
the operator’s requirements and constraints.
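Once trained, the per-interval decision step could look like the following minimal sketch. The cell IDs, loads, and `toy_policy` thresholds are all illustrative assumptions; real integration would go through the RIC’s interfaces, which are not modeled here.

```python
# Deployment-side sketch (hypothetical names, not VMware's actual RIC APIs):
# each decision interval, the rApp maps every cell's current load to an RF
# configuration chosen by the trained policy.
from typing import Callable, Dict

def decide_configs(cell_loads: Dict[str, float],
                   policy: Callable[[float], str]) -> Dict[str, str]:
    """One decision round: pick an RF configuration per cell."""
    return {cell: policy(load) for cell, load in cell_loads.items()}

# Stand-in for a trained agent: thresholds it might have learned.
def toy_policy(load: float) -> str:
    return "4T4R" if load < 0.3 else "16T16R" if load < 0.6 else "64T64R"

# One round; in practice this would run every 5 minutes to an hour.
print(decide_configs({"cell-1": 0.10, "cell-2": 0.75}, toy_policy))
# {'cell-1': '4T4R', 'cell-2': '64T64R'}
```

The decision interval is simply how often this round executes; the function itself is stateless, which keeps per-cell decisions independent.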
AI innovation matters.
Developing a solution that optimizes the RF configuration in a massive MIMO network poses
multifaceted challenges. The trade-off between minimizing power consumption and maintaining
quality of service demands a delicate balance. The sheer complexity of massive MIMO networks,
with their vast number of antennas and potential configurations, amplifies the challenge.
Furthermore, the dynamic nature of wireless communication environments introduces
uncertainties that complicate the learning process.
Achieving a robust and adaptive solution necessitates addressing these intricacies, considering
real-time network variations, and designing algorithms that can learn and adapt efficiently to
deliver optimal RF configurations for sustainable power usage without compromising service
quality. To understand how well the ESMA rApp works, its performance is measured in a
network with varying traffic load using three RF configurations: an 8×8 64Tx/Rx array for
high load, a 4×4 16Tx/Rx array for mid load, and a 2×2 4Tx/Rx array for low-load traffic
conditions. Compared with a fixed 64T64R configuration, the ESMA rApp demonstrates power
savings of close to 40% while keeping end-user quality of service above 99.9%. With the same
traffic profile, a fixed 4T4R configuration would drop end-user quality of service to 64%.
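Back-of-the-envelope arithmetic shows how savings of roughly this magnitude can arise from time-sharing configurations by load. The per-configuration power draws and time shares below are hypothetical assumptions, not AirHop’s measurements.

```python
# Hypothetical per-configuration power draws and traffic time shares,
# illustrating how load-based switching can approach the reported savings
# versus a fixed 64T64R baseline. None of these numbers are measured data.
POWER_W = {"4T4R": 150, "16T16R": 300, "64T64R": 1000}   # assumed draw (watts)
# Assumed fraction of time spent at each traffic level (low/mid/high):
time_share = {"4T4R": 0.2, "16T16R": 0.3, "64T64R": 0.5}

adaptive = sum(POWER_W[cfg] * t for cfg, t in time_share.items())  # 620 W average
fixed = POWER_W["64T64R"]                                          # 1000 W always
print(f"savings: {1 - adaptive / fixed:.0%}")  # savings: 38%
```

The exact figure depends entirely on the traffic profile and hardware; the point is only that the weighted average power of an adaptive schedule can sit well below the full-array baseline.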
About AirHop
AirHop Communications provides AI-powered, cloud-native open RAN automation and real-time
optimization software solutions that deliver improved network performance, lower operating
costs, and better end-user quality of experience for 4G and 5G mobile networks. AirHop’s
solutions include the Auptim family of O-RAN standard-compliant xApps and rApps, and eSON
and eSON360 for pre-standard O-RAN architecture deployments.