ESXi-Arm Fling 1.12 Refresh

Today we released a new refresh to the ESXi-Arm Fling. Both new installs and upgrades from a previous ESXi-Arm 1.x Fling installation are supported. You can now update using the published Offline Bundle (zip); see below for instructions. Do not use VUM or vSphere Lifecycle Manager.

Tip: if your evaluation period has expired, you can perform a new installation; choose to preserve the VMFS filesystems, and re-register your VMs afterward.

NVMe on an Arm edge platform

Improved virtualization

  • Various fixes related to Arm SystemReady compliance for virtual hardware exposed to guests
  • Compatibility fixes related to secure boot

Host support improvements

New platforms

  • EXPERIMENTAL support for HPE ProLiant RL300 Gen11 servers
  • EXPERIMENTAL support for Marvell OCTEON 10 based platforms

NVMe

  • Support for NVMe on non-cache coherent PCIe root complexes (e.g. Rockchip RK3566 systems like Pine64 Quartz64 and Firefly Station M2)
  • Added a workaround for devices with PCI vendor/device ID 126f:2263 (e.g. Patriot M.2 P300) that report non-unique EUI64/NGUID identifiers, which prevented more than one disk from being detected on systems with multiple such devices
    • When upgrading to 1.12 from a prior Fling release with one of these devices present, datastores on the device will not be mounted by default. See the Upgrading section below for how to mount the volumes after the upgrade completes.

Miscellaneous

  • ESXi-Arm Offline Bundle (zip) now available
  • Fixed cache size detection for Armv8.3+ based systems
  • Relaxed processor speed uniformity checks for DVFS-enabled systems
  • Added support for additional PHY modes in the mvpp2 driver
  • Fixed IPv6 LRO handling in the eqos driver
  • Identified some new CPU models

Upgrading

Existing installs of the v1.x Fling can be upgraded in one of two ways:

  • Copy the ISO to boot media, run the installer, and select upgrade
  • Use the published Offline Bundle (zip) file

To upgrade with the Offline Bundle:
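A typical sequence from the ESXi shell is sketched below. The datastore path and image-profile name are illustrative only; query the bundle itself for the exact profile name shipped with this release.

```shell
# Copy the Offline Bundle (zip) to a datastore first, then, from an ESXi shell:

# List the image profiles contained in the bundle
# (the path below is an example; adjust it for your datastore)
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-Arm-offline-bundle.zip

# Apply the update, using the profile name reported by the previous command
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-Arm-offline-bundle.zip -p <profile-name>
```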

Reboot to complete the upgrade process.

NOTE: If you are using NVMe storage on a device with PCI vendor/device ID 126f:2263, datastores may not be mounted by default after upgrade. To restore them, find the VMFS UUID:
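A sketch of how this can be done from the ESXi shell, assuming the standard esxcfg-volume tool (volume names and UUIDs will vary per host):

```shell
# List VMFS volumes that were detected as snapshot/unresolved copies,
# including their UUIDs and labels
esxcfg-volume -l
```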

Mount the volume (and make it persistent) with the following command:
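For example, again assuming the standard esxcfg-volume tool; substitute the UUID reported by the listing above:

```shell
# Mount the volume persistently (the -M flag keeps the mount across reboots)
esxcfg-volume -M <VMFS-UUID>
```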


Contact us

We’d love to hear more about your plans for ESXi on Arm. Please reach out to us at [email protected] with the following details to help us shape future releases:

  • Company name
  • Arm platform of choice
  • Scale of deployment (number of servers/VMs)
  • Types of workloads and use cases
  • vSphere features of interest (HA, DRS, NSX, vSAN, Tanzu, vCD, etc.)

Supported systems

The ESXi-Arm Fling should boot on any system that is compliant (or nearly compliant) with the Arm SystemReady ES or Arm SystemReady SR specifications. The following systems are known to work:

Datacenter and cloud

  • Ampere Computing Altra and Altra Max-based systems
  • Ampere Computing eMAG-based systems
  • Arm Neoverse N1 SDP

Near edge

  • SolidRun Honeycomb LX2
  • SolidRun MACCHIATObin or CN9132 EVB
  • NVIDIA Jetson AGX Xavier Development Kit

Far edge

  • Raspberry Pi 4B (4GB or 8GB models) and Pi 400
  • NVIDIA Jetson Xavier NX Development Kit
  • NXP LS1046A Freeway Board and Reference Design Board
  • Socionext SynQuacer Developerbox
  • PINE64 Quartz64 Model A and SOQuartz (4GB or 8GB models)
  • Firefly Station M2 (4GB and 8GB models)

If you are running the Fling on a system not on this list, please let us know!


Known issues

  • Ampere Altra-based systems may PSOD when AHCI disks are used
  • In 1.11 we mentioned that the kernel included with the Ubuntu for Arm 22.04.1 LTS installer had an issue that prevented graphics from initializing properly. Ubuntu for Arm 22.04.2 LTS has since been released and includes a fix for this issue.
  • FreeBSD 13.1-RELEASE has a known bug with PVSCSI support and large I/O requests. There are a few ways to work around this issue:
    • Upgrade to FreeBSD 13.2-RC1 or later, which includes a fix
    • Set the tunable kern.maxphys="131072" to limit the maximum I/O request size
    • Use AHCI instead of PVSCSI