How to set up NFS on a Synology NAS for a VMware ESXi lab

I’ve been asked several times how and why I set up my home lab to use NFS on my Synology NAS, and I thought a post detailing the steps would be best. First, the why: when I purchased my Synology DS412+ about two years ago, I recall seeing several people stating that NFS was outperforming iSCSI on the Synology (like this post). It was strictly from reading other people’s findings that I started with NFS, and I have continued to use NFS without any issue. In fact, I’ve been very happy with my DS412+ in a RAID 10 setup.

How I set up NFS on the Synology for my ESXi home lab is pretty simple as well.

Configuring Synology NFS access

  1. Log into the Synology DiskStation and go to Control Panel > File Services, located under “File Sharing”.
  2. NFS is disabled by default, so we need to enable it first. Expand NFS Services, check “Enable NFS”, and click Apply.
    [Screenshot: enabling NFS on the Synology]
  3. From the Synology control panel, go to “Shared Folder” and create a new folder. A name is the only required field, but I like to give it a description, hide the shared folder from my network places, and hide files from users without permission. Then click OK.
    [Screenshot: creating a shared folder on the Synology]
  4. Once your folder has been created, go to the “Permissions” tab and change all users except your admin account to “No Access”.
  5. Then change to the “NFS Permissions” tab, click Create, type the hostname or IP address of your ESXi host, and click OK.
    [Screenshot: Synology NFS permission rule]
  6. After you click OK in the step above, you will be taken back to “Edit Shared Folder (FOLDER NAME)”. You can either click Create to add permissions for another ESXi host if needed, or click OK if not. Before clicking OK to close the window, be sure to make note of the “Mount Path”.
    [Screenshot: Synology NFS mount path]

That completes the steps on the Synology side of things. Repeat steps 3–6 for each additional NFS share you wish to create. Next, we need to add the datastore to the ESXi host.

Note: The above steps were completed using DSM version 5.1-5022 Update 1; your steps may vary if you are running a different version.
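
Before moving on to ESXi, it can be worth confirming the export is actually reachable. Below is a minimal Python sketch you can run from any Linux or macOS machine on the same network; the NAS IP and mount path are placeholder values, so substitute your own.

```python
#!/usr/bin/env python3
"""Sanity-check the Synology NFS export before mounting it in ESXi."""
import socket
import subprocess

NAS_IP = "192.168.1.100"           # placeholder: your Synology's IP
MOUNT_PATH = "/volume1/NFS-Share"  # placeholder: the "Mount Path" from step 6

# 1. Confirm something is listening on the NFS port (TCP 2049).
with socket.create_connection((NAS_IP, 2049), timeout=5):
    print(f"NFS port 2049 is open on {NAS_IP}")

# 2. List the exports and confirm the mount path appears among them.
#    Requires showmount (nfs-common on Debian/Ubuntu, nfs-utils on RHEL).
exports = subprocess.run(
    ["showmount", "-e", NAS_IP],
    capture_output=True, text=True, check=True,
).stdout
if MOUNT_PATH in exports:
    print(f"{MOUNT_PATH} is exported and ready to mount from ESXi")
else:
    print(f"{MOUNT_PATH} not found in exports:\n{exports}")
```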

Add NFS datastore(s) to your VMware ESXi host

  1. Log into the vSphere Web Client.
  2. Under Inventories, click on “Hosts and Clusters”.
  3. Right-click on your cluster name and select “New Datastore”.
    [Screenshot: VMware new datastore wizard]
  4. For Type, select “NFS” then click on Next.
    [Screenshot: selecting NFS as the datastore type]
  5. Give the NFS datastore a name, type in the IP of your Synology NAS, and for the folder, enter the “Mount Path” you took note of in step 6 above, then press Next.
    [Screenshot: NFS datastore name and configuration]
  6. Under Host accessibility, select which host(s) you want to add the new NFS datastore to, then press Next.
    [Screenshot: datastore host accessibility]
  7. Finally press Finish.

That’s it, quick and easy!
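
If you manage more than a couple of hosts, you can also script the mount through the vSphere API instead of clicking through the wizard. Below is a minimal pyVmomi sketch (pip install pyvmomi); the vCenter address, credentials, NAS IP, and datastore name are all placeholder values, not anything specific to this post.

```python
#!/usr/bin/env python3
"""Mount a Synology NFS export as a datastore on every host via pyVmomi."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"      # placeholder vCenter address
USER, PASSWORD = "administrator@vsphere.local", "changeme"
NAS_IP = "192.168.1.100"           # placeholder Synology IP
MOUNT_PATH = "/volume1/NFS-Share"  # the Mount Path noted on the Synology
DATASTORE_NAME = "Synology-NFS"    # name the datastore will get in vSphere

# Home labs often run self-signed certs, so skip verification here.
ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Collect every ESXi host in the inventory and mount the share on each.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    spec = vim.host.NasVolume.Specification(
        remoteHost=NAS_IP,
        remotePath=MOUNT_PATH,
        localPath=DATASTORE_NAME,
        accessMode="readWrite",  # must match the Synology NFS rule
        # NFSv3 -- per the comments below, Synology did not support
        # NFS 4.1 at the time this was written.
        type="NFS",
    )
    for host in view.view:
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
        print(f"Mounted {NAS_IP}:{MOUNT_PATH} on {host.name}")
    view.Destroy()
finally:
    Disconnect(si)
```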

You may also want to install the Synology NFS VAAI plugin if you haven’t already. It enables creating thick-provisioned disks and improves cloning tasks.
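
If you want to script that step as well, the VIB can be installed over SSH. Below is a minimal paramiko sketch, assuming SSH is enabled on the host and the plugin VIB has already been uploaded to a datastore; the host address, credentials, and VIB path are placeholders, so check the actual filename that Synology ships.

```python
#!/usr/bin/env python3
"""Install the Synology NFS VAAI plugin VIB on an ESXi host over SSH."""
import paramiko

ESXI_HOST = "esxi01.lab.local"  # placeholder ESXi host address
USER, PASSWORD = "root", "changeme"
# Placeholder path: upload the VIB to a datastore first and adjust this.
VIB_PATH = "/vmfs/volumes/datastore1/esx-nfsplugin.vib"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ESXI_HOST, username=USER, password=PASSWORD)
try:
    # Install the VIB; the host typically needs a reboot afterwards.
    _, stdout, stderr = client.exec_command(
        f"esxcli software vib install -v {VIB_PATH}")
    print(stdout.read().decode())
    print(stderr.read().decode())
finally:
    client.close()
```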

Similar Posts

  • VMware Workstation 8.0.4 released

    VMware has just released its fourth minor update for VMware Workstation 8, bringing it up to 8.0.4. The release looks to contain mostly bug and security fixes.

    General Issues

    • Linux guests running the Linux kernel version 2.6.34 or later could not be pinged from the host via an IPv6 address.
    • On rare occasions, Linux guests would suddenly fail to Autofit or enter Unity.
    • Unity mode would exit if the title bar of an application contained certain non-UTF-8-encoded extended ASCII characters.
    • On Windows hosts, the VMware Workstation user interface sometimes became unresponsive when minimized from full-screen mode if the suggestion balloon was being displayed.
    • On Windows hosts, the user interface sometimes became unresponsive if the application was rendered on an extended display that was abruptly disconnected.

    Read More “VMware Workstation 8.0.4 released”

  • Install Synology NFS VAAI Plug-in for VMware

    In the recent DSM update (5.1), Synology added VMware VAAI support for NFS volumes using two primitives: Full File Clone and Reserve Space. What do these VAAI primitives offer?

    • Full File Clone enables virtual disks to be cloned by the NAS, albeit only while the virtual machine is powered off.
    • Reserve Space allows you to create a thick VMDK file; however, it does not off-load the work to the array. The benefit of thick VMDKs is that many use eager-zeroed disks for high I/O performance needs.

    On the Synology side of things you just need to update to DSM 5.1, but in order to take advantage of VAAI you still need to install the VIB plugin on your ESXi 5.5 hosts.

    Read More “Install Synology NFS VAAI Plug-in for VMware”

  • How to upgrade ESXi 6.5 to ESXi 6.7

    VMware released ESXi 6.7 a little while ago, but it’s only recently that I have started deploying it in my home and work lab environments. Below are two ways to easily upgrade your ESXi 6.5 hosts to ESXi 6.7: using the command line, or using the VMware ESXi offline bundle.

    Read More “How to upgrade ESXi 6.5 to ESXi 6.7”

  • Synology DSM 5.1-5021 update released

    Synology released the DSM 5.1-5021 update, as well as Cloud Station 3.1-3320, today. This update includes all the updates since 5.1-5004, as well as fixes for a number of vulnerabilities in PHP and OpenVPN and other security improvements. It also improves Amazon S3 backup stability, along with a number of other fixes and improvements.

    Read More “Synology DSM 5.1-5021 update released”

  • How to upgrade vCenter Server Appliance 6.7 to 7.0

    VMware vCenter 7.0 has been out for several months now, and I figured it was about time I upgraded my home lab to the latest version.

    This post will detail all the steps needed to upgrade vCenter Server Appliance 6.7 to 7.0 without any issues.

    Getting Started

    Before beginning any upgrade, I HIGHLY recommend you first check the VMware Interoperability Matrix to verify compatibility with other VMware products.

    Then go download the VCSA 7.0 ISO if you haven’t already, and let’s get started with the upgrade!

    Read More “How to upgrade vCenter Server Appliance 6.7 to 7.0”

  • My VMware ESXi Home Lab Upgrade

    Although the focus of my career right now is certainly more on the cloud, with Amazon Web Services and Azure, I still use my home lab a lot.

    For the last 5+ years my home lab consisted of 3x Intel NUCs (i5 DC53427HYE), a Synology NAS for shared storage, and an HP ProCurve switch. This setup served me well for most of those years. It allowed me to get many of the certifications I have, progress in my career, and have fun as well.

    At the start of this year I decided it was time to give the home lab an overhaul. At first I looked at the newest generation of Intel NUCs, but I really wasn’t looking forward to dropping over $1,300 on just partial compute (I’d still need to buy RAM for each of the 3 NUCs). I also wanted something that just worked: no more fooling around with network adapter drivers or doing this tweak or that tweak.

    I also no longer needed something with a tiny footprint, and I questioned whether I really needed multiple physical ESXi hosts. My home lab isn’t running anything mission critical, and if I really wanted to, I could always build additional nested VMware ESXi hosts on one powerful machine.

    So in the end, the below is what I settled on: replacing all of my compute, most of my networking, and adding more storage!

    Read More “My VMware ESXi Home Lab Upgrade”

12 Comments

  1. Good post, thanks!

    Fwiw, I had issues with step 5 (NFS permissions). I ended up having to use * rather than separate individual rules for my two host IPs.

    1. Big problem for me as well. I can’t get vSphere to mount with anything other than * for hosts. This is a security concern I have not been able to find a workaround for.

  2. When I try to add an NFS volume I get “33389)WARNING: NFS41: NFS41ExidNFSProcess:2022: Server doesn’t support the NFS 4.1 protocol”. It looks like vSphere 6U2 uses NFS 4.1, and from what I’m reading, Synology doesn’t support that. Has anyone had luck getting it to work, or do you end up using NFS 3?

    1. Hi Matt, I’m in the same boat: vSphere 6.5 and DSM 6.0.2-8451 Update 9. I’m receiving the error “NFS41ExidNFSProcess:2053: Server doesn’t support the NFS 4.1 protocol”. No solution here; I’m backing down to NFS 3 until Synology gets their act together or someone finds a proper solution.

  3. I’m trying to create a content library via NFS and am getting the following error; any advice would be appreciated:
    Content Library Service does not have write permission on this storage backing. This might be because the user who is running Content Library Service on the system does not have write permission on it.

    I’m using the vCenter appliance 6.5, btw.

  4. It keeps failing for me. I get “Failed to mount NFS datastore SynoDS1 – NFS mount 192.168.0.1:/volume1/ESXi failed: Unable to connect to NFS server.” despite the fact that I keep telling it the server IP is 192.168.0.250.

  5. Mike – I am not getting very far here.

    vCenter is version 6.5, and the Synology is running DSM 6.1.4 Update 1.

    I have added 10GbE PCIe NICs to my RS3617. The built-in NICs are on a 10.0.5.x network at 1GbE; the 10GbE NICs are on a physically separated 10.0.4.x network. The host is able to ping both the 10GbE ports and the 1GbE ports on the Synology. I have one volume set up and running with iSCSI connections, and it works, but performance is lacking, and I get a lot of dropped heartbeats which sometimes cause severe problems.

    I have tried to connect a second volume with an NFS share set up per your write-up, but the operation fails, complaining it is “unable to complete Sysinfo operation”.

    I have been trying many combinations of VMFS versions, host IP addresses, and permissions without any happy faces.

    If you are still monitoring this thread I could use some guidance.

  6. How about performance? Is it actually outperforming iSCSI?
    Can you show some test results?
    And how would you set up iSCSI with multipathing?