Tune your NFVi for Optimum Performance

Start your engines! I mean, your NFs.

You wouldn’t think of racing in a Grand Prix without tuning every aspect of your car, so why would you run your network without tuning every aspect of your network functions (NFs)? Over the past 15 years, networks have shifted from hardware-based architectures to software-based ones. As commercial off-the-shelf computers have become more powerful, network administrators have virtualized network functions to run on them, removing the limitations of hardware-based designs. This shift has created what is called a network functions virtualization infrastructure (NFVi). As network functions move to computer-based networks, these components must be tuned properly to deliver optimum performance. This article discusses how to improve the performance of an NFVi. For a general background on NFV, please see our earlier blog post, “Telco Transformation to the Cloud.”

Software-based networks have many benefits, but without proper tuning they can adversely affect traffic performance. Telco workloads are sensitive to latency, packet drops and jitter, and they often require high throughput. To meet the demands for high bandwidth and proper QoS, all parts of the network must be tuned for optimal performance, including any servers that are part of the NFVi. It is crucial to study and understand how different traffic behaves in the environment. Depending on the applications running in the infrastructure, different types of traffic will be involved, and infrastructure resources will need to be tuned accordingly. For example, signaling traffic (the control traffic generated when a communication starts from the user end-device, and again when it terminates) is highly CPU intensive, so signal-processing VNFs need ample CPU. Data traffic (the actual packets that traverse the network carrying user data, such as voice and data files) consumes large amounts of memory, so data plane workloads need ample memory. Based on the type of traffic, plan your infrastructure to cater to the right traffic mix.


Just as you would not expect optimum performance from a sports car without properly tuning the engine, the same is true of an NFVi. Some of the areas that need tuning in a telco infrastructure are discussed below. These are focused specifically on computing (hypervisor) areas, not network tuning parameters.

There are seven key areas to optimize when tuning your network: CPU, memory, NUMA, huge pages, the Data Plane Development Kit (DPDK), SR-IOV and storage. Several other parameters can be tuned, but these seven have the most impact.

CPU tuning

CPU reservation and CPU pinning can both improve CPU performance. CPU reservation is a parameter set on the hypervisor to reserve CPU cores for a particular VNF. CPU pinning is tied to CPU scheduling in the base OS: it dedicates specific CPU cores to a service (i.e., a VNF) running on the hypervisor OS. When tuning the CPU, be sure to analyze the VNF’s functionality and its traffic. This is especially important for VNF vendors, who should validate CPU-intensive VNF traffic with both CPU pinning and CPU reservation before finalizing the CPU tuning.
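As a concrete sketch on a KVM/libvirt hypervisor, pinning can be applied with `virsh`; the domain name and core numbers below are illustrative assumptions, not a recommendation for any particular VNF.

```shell
# Pin a hypothetical VNF's vCPUs to dedicated host cores (KVM/libvirt).
# "vnf-signaling" and the core numbers are illustrative assumptions.
virsh vcpupin vnf-signaling 0 4    # pin vCPU 0 to host core 4
virsh vcpupin vnf-signaling 1 5    # pin vCPU 1 to host core 5
virsh emulatorpin vnf-signaling 6  # keep emulator threads off the pinned cores
```

Keeping the emulator threads on a separate core prevents hypervisor housekeeping from stealing cycles from the pinned vCPUs.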


Memory tuning

Memory reservation guarantees the memory allocation to the VM (VNF) and helps the hypervisor prioritize VMs when allocating memory. The CPU needs memory to store data temporarily while processing it, and performance depends mainly on the location of that memory and the speed of the bus between the CPU and memory. Centralized memory with a single shared bus between CPUs may not deliver optimum performance.

When the CPU, RAM and the bus all deliver optimum performance, the VNF (VM) benefits the most. This leads to the concept of NUMA, or non-uniform memory access. With a NUMA configuration, the CPU uses the memory closest to it rather than memory in a central location or a different slot. This boosts the performance of the VNF and supports the telco’s service commitments.

Properly allocating the entire set of resources for a VNF, including its network devices, from the same NUMA node helps achieve optimum performance.
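On a Linux/KVM host, you can inspect the NUMA topology and constrain a guest’s memory to a single node; the domain name and node number below are hypothetical assumptions.

```shell
# Inspect host NUMA topology before placing a VNF (requires numactl).
numactl --hardware    # lists NUMA nodes with their CPUs and memory sizes

# Hypothetical example: bind the memory of guest "vnf-dataplane" to node 0
# so its vCPUs and RAM stay on the same socket.
virsh numatune vnf-dataplane --mode strict --nodeset 0
```

Pairing this with CPU pinning on cores from the same node keeps memory accesses local.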

Huge pages

This performance enabler helps memory-centric applications that move huge amounts of data. By default, the memory page size is 4K, but it can be configured to 1G to help memory-intensive VNFs. Remember, however, that configuring huge pages on a system that does not need them will hurt performance. Plan the allocation so that memory-hungry VNFs are placed on hypervisors configured for huge pages and other VNFs are placed on hypervisors with the default page size. Proper testing is recommended before deciding.
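On a Linux hypervisor, huge pages can be reserved at boot or at runtime; the page counts below are illustrative only and should be sized to the VNFs you plan to host.

```shell
# Reserve 1G huge pages at boot by adding these kernel command-line options
# (the count of 16 is an illustrative assumption):
#   default_hugepagesz=1G hugepagesz=1G hugepages=16

# Or allocate 2M pages at runtime (as root):
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Verify what was actually reserved:
grep Huge /proc/meminfo
```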


DPDK

The Data Plane Development Kit, or DPDK, is a NIC-level performance enabler that makes transaction-intensive VNFs/CNFs more efficient. In a traditional operating system, kernel space manages the network drivers and libraries, so applications running in user space depend on kernel space. With DPDK, VNF/CNF traffic is sent directly to the network cards from user space. This removes the burden from kernel space, freeing it to handle other OS processes.

Examples of processes that use DPDK are the DU node, NSX Edge node and Ericsson vEPG.
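As a sketch on a Linux host, a NIC is typically handed over to DPDK by rebinding it from its kernel driver to a user-space-capable driver; the PCI address below is an assumption, so check your own with `dpdk-devbind.py --status`.

```shell
# Load the user-space I/O driver that DPDK applications can attach to.
modprobe vfio-pci

# List NICs and the drivers currently bound to them.
dpdk-devbind.py --status

# Hypothetical PCI address: move this port from kernel to user-space control.
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
```

Once bound, the kernel no longer sees the port; the DPDK poll-mode driver inside the VNF/CNF drives it directly.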


SR-IOV

For latency-sensitive applications, one can configure SR-IOV or passthrough to get optimum performance. SR-IOV stands for single root I/O virtualization, and it helps provide guaranteed bandwidth, a crucial need for some VNFs and CNFs. A physical NIC, technically called a PNIC, can be divided into virtual functions (VFs), with the PNIC’s bandwidth shared among the VFs. With this configuration, we bypass the virtual switch and dedicate bandwidth to the desired NFs.

One use case for SR-IOV is Mavenir, which uses SR-IOV and passthrough for the DU and CU in cell sites with an MTU of 9,000. As another example, the Altiostar CU uses SR-IOV, while the Altiostar DU uses PCI passthrough for PTP. Also consider jumbo frames and MTU configuration when planning for telco traffic in the NFVi environment.
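On a Linux host with an SR-IOV-capable NIC, VFs are created through sysfs; the interface name, VF count and MTU below are illustrative assumptions echoing the 9,000-byte cell-site configuration mentioned above.

```shell
# Hypothetical sketch: carve 8 virtual functions out of physical NIC enp3s0f0.
echo 8 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# The VFs now appear under the physical function.
ip link show enp3s0f0

# Enable jumbo frames on the physical function for telco traffic.
ip link set enp3s0f0 mtu 9000
```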


Storage

Some VNFs/CNFs demand high input/output operations per second (IOPS). To achieve optimum performance, make sure the provided storage can serve the required IOPS. For example, a Nokia AAA VNF requires disks of at least 7,200 RPM and works with concurrent IOPS rather than sequential IOPS. Be sure to understand the storage requirements of the NFs to plan the types of disks needed. There are trade-offs, of course: flash drives deliver much better IOPS than SATA spinning disks, but are more expensive. Plan the disk groups and map each NF to the appropriate disk group based on its requirements.
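One way to verify that a datastore can serve an NF’s IOPS requirement is a synthetic benchmark such as `fio`; the file path and workload parameters below are assumptions to adapt to your environment and the NF’s actual I/O pattern.

```shell
# Hypothetical fio run measuring concurrent random-read IOPS on a candidate
# datastore. Path, size, block size and queue depth are illustrative.
fio --name=randread --filename=/mnt/nfvi/testfile --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```

Compare the reported IOPS against the NF vendor’s stated requirement before assigning it to that disk group.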

As you can see from the details above, there are quite a few things that you should tune to give your network the best performance. Proper setup can take a while, but once done, your NFVis should be better prepared for whatever your network throws at them. If you think you could use some help optimizing your network, VMware Professional Services is here to help. Contact us today.

VMware is excited to announce that we are racing into the future with McLaren Formula 1 team as an official partner.

