

VMFS Heap Considerations

[Updated 29-April-2013] By default, prior to 5.0p5 & 5.1U1, an ESXi host has 80MB of VMFS heap at its disposal. This is defined in the advanced setting VMFS3.MaxHeapSizeMB. The main consumers of VMFS heap are the pointer blocks, which are used to address file blocks in very large files/VMDKs on a VMFS filesystem. Therefore, the larger your VMDKs, the more VMFS heap you can consume. This is even more true on VMFS-5, where double-indirect pointers exist to allow the unified 1MB file block size to back a 2TB VMDK.
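For reference, the current value can be checked in the vSphere Client (Configuration > Software > Advanced Settings > VMFS3) or from the ESXi shell. A minimal sketch, assuming the standard esxcli advanced-settings namespace on ESXi 5.x:

# show the current VMFS heap size setting on this host
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB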

As a rule of thumb, we conservatively estimate that a single ESXi host should have enough default heap space to address around 10TB of open files/VMDKs on a VMFS-5 volume.

If you change the advanced setting VMFS3.MaxHeapSizeMB to the maximum amount of heap (which is 256MB), we again conservatively estimate that about 30TB of open files/VMDKs can be addressed by that single ESXi host.
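To make that change from the ESXi shell rather than the vSphere Client, something along these lines should work (a sketch assuming the esxcli advanced-settings namespace; a host reboot is generally needed before the new heap size takes effect):

# raise the VMFS heap to its 256MB maximum (pre-5.0p5/5.1U1 hosts)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256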

The point to keep in mind is that this is a per-ESXi-host setting – so each ESXi host can address this amount of open files/VMDKs per VMFS volume. Changes are coming to reduce heap consumption and allow a single ESXi host to address even larger VMDKs with less heap. However, VMFS heap should be a factor taken into account when sizing deployments.

With the release of ESXi 5.0p5 in March 2013 & 5.1U1 in April 2013, both the default and the maximum size of the heap have been increased to 640MB. This means that the full 64TB of a VMFS volume may be addressed. Note that if you upgrade from a previous ESXi version, the maximum may remain at the previously configured setting and you may have to manually increase it. For new installs, the new settings should be in place.
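If you have upgraded rather than installed fresh, it is worth verifying the value and, if it is still at the old maximum, raising it to the new 640MB limit. A sketch, again assuming the esxcli advanced-settings namespace:

# check the value carried over from the upgrade
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB
# if still at the old maximum, raise it to the new 640MB limit (reboot to apply)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 640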

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

Cormac Hogan

About Cormac Hogan

Cormac Hogan is a senior technical marketing architect within the Cloud Infrastructure Product Marketing group at VMware. He is responsible for storage in general, with a focus on core VMware vSphere storage technologies and virtual storage, including the VMware vSphere® Storage Appliance. He has been at VMware since 2005 and in technical marketing since 2011.

16 thoughts on “VMFS Heap Considerations”

  1. Conor

    If you use an ESXi host with a large amount of storage (say 24 × 3TB disks) to serve a single VM which acts as a storage appliance, you may have greater than 30TB addressed by a single host, depending on RAID characteristics. That being the case, how should you address the VMFS3.MaxHeapSizeMB calculation? Or is this inadvisable, or impossible?

    Reply
  2. Cormac Hogan

    Hi Conor,
    In this case, presenting those LUNs directly to a VM would necessitate the use of RDMs, which do not require VMFS heap. Therefore you do not need to take VMFS heap into consideration in that case.

    Reply
  3. Loren

    Hi, this is great info! Could we clarify a bit what types of storage and disks are and are not affected by this? From the comments, it sounds like it does not affect RDMs…does that include both physical and virtual mode RDMs? Also, from the name of the option, can I correctly assume that it does not affect NFS storage?

    Thanks!
    -Loren

    Reply
      1. Ron

        I’ll take that as a “no” :-(. For us this means that we will need to change to using NFS on our new storage system (Nexsan NST5530). Of course there are other reasons for us to choose NFS over iSCSI, but this is a pretty good one. In our case we have virtualized all of our file services which collectively add up to about 44TB.

        Out of curiosity, what happens if the ESXi host is presented with more open VMDKs on VMFS-5 than it has RAM to manage? E.g. say the 64TB max of a single datastore/LUN?

        Also, the KB should be updated to state that the VMDK limit of 25TB on ESXi 5.x is applicable to datastores formatted with VMFS-5. It seems to me that if the datastore is formatted with VMFS-3 and an 8MB block size then the open VMDK limit could be 64TB, since there is no memory penalty from the double-indirect pointers imposed by VMFS-5. Correct?

        Thanks,

        Ron

        Reply
        1. Cormac

          If you have only a single ESXi host, and you require VMs to address 44TB of open VMDKs simultaneously, then yes, NFS or RDMs would be advisable. Remember however that this is not a datastore limit but an ESXi host limit, so once you have two or more ESXi hosts addressing the VMFS datastore, the issue/restriction is mitigated. Regarding large VMFS-3 datastores, the issue is still there but not as prevalent.

          Reply
  4. Eric K. Miller

    In case anyone runs into this issue (like we just did), there is a circumstance where, if you use extents, the pointer block allocation doesn’t get adjusted and the maximum number of pointer blocks can be reached. The Datastore will act as though it is full when it really isn’t (resulting in a “no space left on device” message).

    You can see the maximum and free number of blocks using:
    vmkfstools -Pv 10 /vmfs/volumes/

    This was a real shocker when we had a 31TB volume which had 7.5TB full and started to get messages indicating the Datastore was out of available space – similar to what you see when the maximum number of inodes is reached.

    Reply
  5. Joseph Hartford

    This article says:

    The point to keep in mind is that this is a per ESXi host setting – so each ESXi host can address this amount of open files/VMDKs per VMFS volume.

    Does that mean if I have my MaxHeapSizeMB set to 256MB, and I have 4 VMFS volumes presented to the ESX host, I can get:

    30TB * 4 VMFS volumes = 120TB usable on the ESX host?

    KB article 1004424 says that with a MaxHeapSizeMB of 256 the entire host can only use 25-30TB; it says nothing about a per-VMFS-volume limit.

    Reply
