Although most of my time is dedicated to Virtual SAN (VSAN) these days, I am still very interested in the core storage features that are part of vSphere. I reached out earlier to a number of core storage product managers and engineers to find out what new and exciting features are included in vSphere 6.0. The first feature is one that I know a lot of customers are waiting on – NFS v4.1. Yes, it’s finally here.
Many readers will know that VMware has only supported NFS v3 for the longest time (I think it was first introduced in ESX 3.0, way back in the day). Finally we have support for NFS v4.1.
Caution: do not mix protocols
A word of caution before we get into the details. Be aware that an NFS volume should not be mounted as NFS v3 on one ESXi host and as NFS v4.1 on another. A best practice is to configure any NFS/NAS array to allow only one NFS protocol for access, either NFS v3 or v4.1, but not both. The reason is locking: NFS v3 uses proprietary client-side co-operative locking, whereas NFS v4.1 uses server-side locking. When creating an NFS datastore, this is clearly called out in the Add Storage wizard:
[Screenshot: choosing NFS v3 or NFS v4.1 in the Add Storage wizard]
Yes – that does say “data corruption” folks, so let’s be careful out there.
Multipathing and Load-balancing
Now onto the improvements. NFS v4.1 introduces better performance and availability through load balancing and multipathing. Note that this is not pNFS (parallel NFS). pNFS support is not in vSphere 6.0.
[Screenshot: Setup NFS v4.1 datastore]
In the Server(s) field, add a comma-separated list of IP addresses for the server if you wish to use load-balancing and multipathing.
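For those who prefer the command line to the wizard, the same multipath mount can be done with esxcli. This is just a minimal sketch – the IP addresses, export path and datastore name below are hypothetical placeholders for your own array’s details:

esxcli storage nfs41 add --hosts=192.168.1.10,192.168.1.11 --share=/vol/datastore1 --volume-name=NFS41-DS01
esxcli storage nfs41 list

The second command simply lists the mounted NFS v4.1 volumes, and should show both server addresses against the new datastore.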
Security/Kerberos
Another major enhancement with NFS v4.1 is on the security side. With this version, Kerberos, and thus non-root user authentication, is supported. With NFS v3, remote files were accessed with root permissions, and servers had to be configured with the no_root_squash option to allow root access to files; this is known as the AUTH_SYS mechanism. While AUTH_SYS is still supported with NFS v4.1, Kerberos is a much more secure mechanism. An NFS user is now defined on each ESXi host using esxcfg-nas -U -v 4.1, and this is the user that is used for remote file access. The same user should be used on all hosts; if two hosts are configured with different users, you might find that vMotion tasks fail.
There is a dependency on Active Directory for this to work: each ESXi host must be joined to the AD domain. Kerberos is enabled when the NFS v4.1 datastore is mounted to the ESXi host.
[Screenshot: enable Kerberos]
Note the warning message that each host mounting this datastore needs to be part of an AD domain.
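For command-line mounts, the security mechanism can be requested with the --sec option on the same esxcli command shown earlier. Again a sketch only, with hypothetical server and share details, and assuming the SEC_KRB5 value to request Kerberos (AUTH_SYS being the default):

esxcli storage nfs41 add --hosts=192.168.1.10 --share=/vol/secure1 --volume-name=NFS41-KRB01 --sec=SEC_KRB5

If --sec is omitted, or AUTH_SYS is specified, you get the traditional root-based access described above.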
Interoperability
There are some limitations when using NFS v4.1 datastores with other core vSphere 6.0 features, however. While NFS v4.1 volumes can be used with features like DRS and HA, they are not supported with Storage DRS, Storage I/O Control, Site Recovery Manager or Virtual Volumes.
[Update – March 20th, 2015] I had a few questions about interop with Fault Tolerance. VMs on NFS v4.1 do support FT, as long as it is the new FT mechanism introduced in vSphere 6.0; VMs running on NFS v4.1 do not support the old, legacy FT mechanism. In vSphere 6.0, the newer Fault Tolerance mechanism can accommodate symmetric multiprocessor (SMP) virtual machines with up to four vCPUs. Earlier versions of vSphere used a different technology for Fault Tolerance (now known as legacy FT), with different requirements and characteristics, including a limitation of a single vCPU for legacy FT VMs.
So lots of nice new features with NFS v4.1 around performance, multipathing, load balancing and security, and we can finally move away from using NFS v3.
[Update] There have been a few questions about whether or not multiple datastores can be presented to ESXi hosts over NFS v4.1. The answer is yes. We certainly support multiple NFS v4.1 datastores per array.
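To illustrate, a second export from the same array can simply be mounted as another datastore (hypothetical share and volume names once more):

esxcli storage nfs41 add --hosts=192.168.1.10,192.168.1.11 --share=/vol/datastore2 --volume-name=NFS41-DS02
esxcli storage nfs41 list

The list output should now show both NFS41-DS01 and NFS41-DS02.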