Those of you who have been using NFS with vSphere over the past several years will be aware that VMware currently supports only NFS v3 over TCP. There is no multipathing with this version of NFS, and although NIC teaming can be used on the virtual switch, it provides failover only, not load balancing.

To achieve some semblance of load balancing, one could mount NFS datastores via different network interfaces. For instance, NFS datastore1 could be mounted via controller1 on subnet A, and NFS datastore2 could be mounted via controller2 of the same NFS server on subnet B. This would allow you to balance the load, but it is a very manual process. Could we automate this in any way?
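As a sketch of that manual approach, the mounts might look something like this from the ESX host (the IP addresses, share paths and datastore names here are hypothetical; the commands use the standard `esxcli storage nfs` namespace):

```shell
# Mount two NFS datastores from the same filer via different controller
# interfaces, so traffic is split across the two subnets.
# (Hypothetical addresses and share names.)

# datastore1 via controller1 on subnet A (192.168.1.0/24)
esxcli storage nfs add --host 192.168.1.10 --share /vol/ds1 --volume-name datastore1

# datastore2 via controller2 on subnet B (192.168.2.0/24)
esxcli storage nfs add --host 192.168.2.10 --share /vol/ds2 --volume-name datastore2

# Verify which server address each datastore is mounted against
esxcli storage nfs list
```

The drawback, as noted above, is that every new datastore needs this per-interface placement decision made by hand.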

What about using round-robin DNS where each request to resolve a Fully Qualified Domain Name (FQDN) would result in the DNS server supplying the next IP address in a list of IP addresses associated with that FQDN? Interestingly, I had this query twice last week.

First, some background on how NFS behaves in vSphere. If a user specifies the DNS name for an NFS server, we persist that DNS name in the vCenter database. Once the datastore is instantiated on an ESX host, we resolve the DNS name once. So even if the datastore is temporarily unmounted and remounted (say, via esxcli), we would use the same IP address. If the ESX host is restarted, or if the datastore is removed and re-added later, we would resolve the FQDN again, which may return a different IP address if the DNS server was configured for round-robin.

Also note that DNS resolution is done on a per-datastore basis. We don't have a DNS name lookup cache in NFS that is shared between multiple mount points. Therefore, different ESX hosts mounting the same NFS datastore may resolve to different IPs under round-robin. Likewise, mounting different datastores by FQDN from the same ESX host will cause each mount to resolve the FQDN and possibly pick up a different IP under a round-robin DNS configuration.
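The per-mount behaviour described above can be sketched with a small simulation (the FQDN and IP addresses are hypothetical; `itertools.cycle` stands in for a DNS server rotating its A-record list on each query):

```python
import itertools

# Hypothetical controller IPs published behind one FQDN.
# A real round-robin DNS server rotates the A-record list on each query;
# itertools.cycle approximates that rotation here.
RECORDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_rotation = itertools.cycle(RECORDS)

def resolve(fqdn):
    """Return the 'next' IP for the FQDN, as round-robin DNS would."""
    return next(_rotation)

# Each datastore mount resolves the FQDN independently (no shared
# lookup cache), so successive mounts land on different controllers.
mounts = {name: resolve("nfs-server.example.com")
          for name in ("datastore1", "datastore2", "datastore3")}
print(mounts)
```

Because each mount performs its own lookup, the three datastores end up spread across the three controller addresses with no manual placement.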

So overall, DNS round-robin should work just fine if you want to do some automated load balancing with NFS.

Get notification of these blog postings and more VMware Storage information by following me on Twitter: @VMwareStorage

About the Author

Cormac Hogan

Cormac Hogan is a Senior Staff Engineer in the Office of the CTO in the Storage and Availability Business Unit (SABU) at VMware. He has been with VMware since April 2005 and has previously held roles in VMware’s Technical Marketing and Technical Support organizations. He has written a number of storage-related white papers and has given numerous presentations on storage best practices and vSphere storage features. He is also the co-author of the “Essential Virtual SAN” book published by VMware Press.