FYI - a possible issue, and how to fix it, if you use NFS for your Repositories. The error pointed to a problem with rpc.statd.
"[2017-10-12 14:50:56 10057] DEBUG (nfs_linux:48) cmd = /bin/mount -t nfs -v 192.168.5.67:/production /OVS/Repositories/01c66b45--420f--dafsdfasd -o nosharecache,soft,fg,retry=1,tcp,vers=3 DONE; status 32
[2017-10-12 14:50:56 10057] DEBUG (nfs_linux:89) mount output: mount.nfs: timeout set for Thu Oct 12 14:51:56 2017
; status 32
[2017-10-12 14:50:56 10057] DEBUG (nfs_linux:94) mount failed, error mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified"
There was a process running rpc.statd... However, the problem ended up being that rpcbind had died!
I eventually noticed this when I ran "service --status-all".
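For reference, the checks I used looked roughly like this (commands only, since the exact status wording varies by release; the grep just narrows the output to the rpc services):

    service rpcbind status
    service --status-all | grep -i rpc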
So if you are getting an error about rpc.statd not running, this may be your fix:
- Restart the rpcbind service.
- At this point it still will not work, because rpc.statd must also be restarted. I issued the command "pkill -HUP rpc.statd", which simply kills the statd process.
- Now try an NFS mount from the CLI and see whether it starts rpc.statd back up and completes the mount (see the command sketch after this list).
- If successful, umount the mount you just made as a test.
- Perform these steps (except the test mount/umount) on each host in the cluster where "service rpcbind status" or "service --status-all" shows rpcbind as dead. In my case every host was in the same predicament.
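Putting the steps together, the sequence on each host looked roughly like the following. The server IP, export path, and mount options are taken from the agent log above; the /mnt/nfstest mount point is just a scratch directory for illustration, so substitute your own:

    service rpcbind restart
    pkill -HUP rpc.statd
    mkdir -p /mnt/nfstest
    mount -t nfs -o nosharecache,soft,fg,retry=1,tcp,vers=3 192.168.5.67:/production /mnt/nfstest
    umount /mnt/nfstest

The test mount should start rpc.statd again on its own; if it still complains about rpc.statd, rpcbind is most likely still down.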
After doing this the File Server Discovery was able to complete and the new repository was created.
For background: I upgraded my OVM cluster to 3.4.4 on 9/21, and today was the first time in a long while that I had added a new volume from our NetApp storage to the cluster.