
What is the Impact of the VMKlinux Driver Stack Deprecation?

Current versions of ESXi 6.x ship with both the VMKlinux and the Native driver stacks. Modern hardware, in many cases, uses the new Native drivers. However, older hardware may still depend on a VMKlinux driver module. We announced the deprecation of the VMKlinux driver stack back in 2017. This blog post goes into detail about what that means and provides a way to gain insight into the possible impact on your current vSphere clusters.

VMKlinux

Since the first days of ESX, we have used Linux-derived driver modules to provide support for a large range of hardware devices. Doing so gave us broad hardware compatibility, but at the cost of introducing an additional layer of driver emulation. Because ESX is not Linux, we needed a translation layer to handle communication between the VMkernel and the Linux driver modules. That translation layer is VMKlinux. With vSphere 5.5, we introduced the Native driver stack with the plan to move away from the VMKlinux driver stack. Please review the following blog posts for more in-depth information about both driver stacks:

The vSphere 6.7 family will be the last set of vSphere releases that includes the VMKlinux driver stack. Future releases will ship with the Native driver stack only.

Native Driver Stack

Why are we moving forward with the Native driver stack only? It is all about keeping the ESXi VMkernel footprint as lean and mean as possible. Using only Native drivers reduces overall VMkernel CPU and memory usage, while providing support for new hardware device features like RDMA and 100GbE NICs.

Since vSphere 5.5, we have been replacing VMKlinux drivers with Native drivers. Starting with vSphere 6.5, the VMKlinux module is not even started unless a VMKlinux driver module is needed. New releases typically come with new and updated Native driver modules, and the vSphere 6.7 Update 2 release is no exception; it ships with the following driver updates:

  • Solarflare Network Adapters (sfvmk)
  • Broadcom 100G Network Adapters (bnxtnet)
  • RoCE Mellanox nmlx4_rdma 40G RDMA driver update
  • RoCE Mellanox nmlx5_rdma 100G RDMA driver update
  • Mellanox nmlx4_rdma 40G driver update
  • Mellanox nmlx5_rdma 100G driver update
  • Broadcom lpfc and brcmfcoe
  • Various Broadcom lsi drivers
  • Cavium qlnativefc
  • Cisco nfnic
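
To check which of the driver VIBs above are installed on a host, and at which version, you can query the host from PowerCLI. The snippet below is a minimal sketch, assuming a PowerCLI session that is already connected to vCenter; the host name is hypothetical and the name filter is just a subset of the drivers listed above:

    # List the driver VIBs installed on a host, filtered on a few of the driver names above
    $esxcli = Get-EsxCli -VMHost "esxi01.lab.local" -V2    # hypothetical host name
    $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Name -match "sfvmk|bnxtnet|nmlx|lpfc|brcmfcoe|qlnativefc|nfnic" } |
        Select-Object Name, Version, Vendor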

How to Check the Impact on Your Cluster

If you run the following command, it will show you whether the VMKlinux module is loaded on that specific ESXi host.
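
A host-level check along these lines can be run from the ESXi shell over SSH; matching broadly on the module name vmklinux is an assumption here, so adjust it to the module name reported on your host:

    # Show whether a VMKlinux compatibility module is loaded on this host
    esxcli system module list | grep vmklinux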

When it returns nothing, the host is already running in full Native mode and there are no VMKlinux dependencies. This, however, is a per-host approach only. We came up with the following script to assist you in verifying an entire cluster.

With the help of William Lam, we created a script that identifies ESXi hosts with VMKlinux driver modules loaded. The usage of the script is pretty straightforward. Get, and contribute to, the script here: https://github.com/lamw/vghetto-scripts/blob/master/powershell/VMKLinuxDrivers.ps1.

  1. Download the script and load the function into your PowerCLI session (for example by dot-sourcing the downloaded .ps1 file).
  2. Make sure that you are connected to vCenter. Now you can run the function against a vSphere cluster, e.g. Get-VMKlinuxDrivers -Cluster <cluster-name> (a simplified sketch of what the function does follows this list).
  3. The output clearly shows that one host, nh-esx-03 in the example screenshot, is running the igb (Intel Gbit NIC) VMKlinux driver module. Zooming in on the ESXi host in question, I can now easily identify the NICs that are running this driver module.
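
For reference, the core of such a cluster-wide check boils down to something like the sketch below. This is a simplified illustration under a few assumptions: the function name is made up here, and it only checks whether a VMKlinux compatibility module is loaded on each host rather than reproducing the full per-driver reporting of the linked script.

    # Sketch: report hosts in a cluster that still have a VMKlinux module loaded
    # (simplified illustration; see the linked VMKLinuxDrivers.ps1 for the real script)
    Function Get-VMKlinuxModuleStatus {
        param([string]$Cluster)
        foreach ($vmhost in (Get-Cluster -Name $Cluster | Get-VMHost)) {
            $esxcli = Get-EsxCli -VMHost $vmhost -V2
            $esxcli.system.module.list.Invoke() |
                Where-Object { $_.IsLoaded -eq "true" -and $_.Name -match "vmklinux" } |
                Select-Object @{Name="VMHost";Expression={$vmhost.Name}}, Name
        }
    }

Load it into a connected PowerCLI session and run it with the -Cluster parameter, just like the real function above.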

In the example above, I disabled the igbn Native driver for the sake of the argument. If the script returns nothing, you are already running in full Native driver mode.
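
To see which driver each physical NIC on a flagged host is bound to, a quick PowerCLI check like the one below works; it assumes a connected session and reuses the host name from the example above:

    # Show which driver each physical NIC on the host is using
    $esxcli = Get-EsxCli -VMHost "nh-esx-03" -V2
    $esxcli.network.nic.list.Invoke() | Select-Object Name, Driver, Description

A NIC listing igb as its driver is still on the VMKlinux module, while igbn indicates the Native driver.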

To Conclude

As stated before, modern hardware typically does not rely on VMKlinux drivers anymore. However, it is important to verify whether your environment is still using VMKlinux driver modules. Some VMKlinux drivers have no Native driver replacement because the entire hardware family has already been declared End of Life (EOL) by the hardware vendor.

Please refer to the VMware Compatibility Guide to review hardware device compatibility using either an Inbox Native driver that is shipped with ESXi, or an Async Native driver that is provided by the hardware vendor.