The other day, one of our volumes in the lab environment filled up. This volume has a couple of large VMs on it, along with a few different Veeam backup jobs: some use the native Veeam backup methods, while others use NetApp SnapMirror to snapshot the volume and then use Veeam to ship the data out to Azure.
At any rate, the volume filled up to the point where vCenter wasn’t allowing me to migrate VMs off the datastore. I really didn’t want to expand the volume just to move VMs off of it.
Instead, I decided to delete some of the older proof-of-concept snapshots left behind by SnapMirror. Below are the quick and easy steps to clear out some unused snapshots and free up space on the datastore.
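For reference, on a clustered Data ONTAP system the cleanup looks roughly like the sketch below; the SVM, volume, and snapshot names are placeholders for illustration (on 7-Mode the equivalents are snap list and snap delete):

```
# List the snapshots on the volume along with how much space each one holds
volume snapshot show -vserver svm1 -volume lab_datastore -fields size

# Delete an old proof-of-concept snapshot that is no longer needed
volume snapshot delete -vserver svm1 -volume lab_datastore -snapshot poc_test.0
```

Keep in mind that the snapshot SnapMirror is currently using as its baseline will show as busy and shouldn’t be removed; the older, unused ones are fair game.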
I’ve been asked several times how and why I set up my home lab to use NFS on my Synology NAS, and I thought a post detailing the steps would be best. First, the why: when I purchased my Synology DS412+ about two years ago, I recall seeing several people report that NFS was outperforming iSCSI on the Synology (like this post). It was strictly from reading other people’s findings that I started with NFS, and I’ve continued to use it without any issue. In fact, I’ve been very happy with my DS412+ in a RAID 10 setup.
How I set up NFS on the Synology for my ESXi home lab is pretty simple as well.
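At a high level: create a shared folder on the Synology, enable the NFS service, grant the ESXi hosts’ IP addresses access in the folder’s NFS permissions, and then mount the export on each host. A minimal sketch of the mount step is below; the IP address, export path, and datastore name are just examples:

```
# Mount the Synology NFS export as a datastore on the ESXi host
esxcli storage nfs add -H 192.168.1.50 -s /volume1/vmware_nfs -v synology_nfs

# Confirm the datastore is mounted
esxcli storage nfs list
```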
In the recent DSM update (5.1), Synology added VMware VAAI support for NFS volumes via two primitives: Full File Clone and Reserve Space. What do these VAAI primitives offer?
- Full File Clone enables virtual disks to be cloned by the NAS itself, although only while the virtual machine is powered off.
- Reserve Space allows you to create a thick-provisioned VMDK file on an NFS datastore. However, Reserve Space does not off-load the work to the array. The benefit of thick VMDKs is that many people use eager-zeroed disks for high I/O performance needs (see the example after this list).
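As a rough illustration, with the plugin in place an eager-zeroed thick disk can be created on the NFS datastore with vmkfstools; the datastore and VM names below are made up:

```
# Create a 40 GB eager-zeroed thick VMDK on the NFS datastore
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/synology_nfs/testvm/testvm_data.vmdk
```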
On the Synology side of things you just need to update to DSM 5.1, but in order to take advantage of VAAI you still need to install the NFS VAAI plugin VIB on your ESXi 5.5 hosts.
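Installation is a standard VIB install over SSH; the path and file name below are placeholders for wherever you copied Synology’s NFS VAAI plugin, and the host typically needs a reboot afterwards:

```
# Install the Synology NFS VAAI plugin VIB (copied to /tmp in this example)
esxcli software vib install -v /tmp/esx-nfsplugin.vib

# After rebooting, verify the plugin shows up in the installed VIB list
esxcli software vib list | grep -i nfs
```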