Forum Discussion
modac1
May 20, 2008Aspirant
ReadyNAS NV+ and VMware ESX NFS Share
I'm trying to create an NFS datastore on my ReadyNAS NV+ with ESX 3.5.
I'm using RAIDiator 4.01c1-p2 and have created an NFS share (VMStore) with default access of Read/Write, with the ESX server (10.10.10.181) listed under Root privilege-enabled hosts.
On the ESX server I've created a VMkernel port (10.10.10.181) on the Service Console virtual switch (0).
Using the IC, I navigate to the ESX server > Configuration > Storage > Add Storage > Network File System, IP address 10.10.10.150 (ReadyNAS NV+), Folder /VMStore, Datastore NFSStore.
I receive the error:
Error during the configuration of the host: NFS Error : Unable to Mount filesystem : Unable to connect to NFS server
From the ESX console I can ping the ReadyNAS NV+ successfully.
Any ideas what I need to do in order to mount the share as an NFS datastore ?
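Since the error is "Unable to connect to NFS server", a first sanity check is whether the NAS's NFS-related ports are reachable at all from the ESX host. A minimal illustrative sketch (the IP address is the one from this post; probing TCP 111/2049 is my assumption about where the failure lies, not something confirmed in the thread):

```python
"""Quick reachability probe for an NFS server's ports.

"Unable to connect to NFS server" often just means the portmapper
(TCP/UDP 111) or nfsd (TCP 2049) on the NAS cannot be reached.
The address below is the ReadyNAS IP from this post; adjust as needed.
"""
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    nas = "10.10.10.150"
    for name, port in (("portmapper", 111), ("nfsd", 2049)):
        state = "reachable" if port_open(nas, port) else "NOT reachable"
        print(f"{name} (tcp/{port}) on {nas}: {state}")
```

If the portmapper port is unreachable while ping works, the problem is a filtered port or a daemon that isn't running, not basic IP connectivity.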
35 Replies
- ewok (NETGEAR Expert): Do you have another machine (real, not VMWare) that you can use to try and mount that NFS share?
- modac1 (Aspirant): No - we only run Windows. I don't have a UNIX/LINUX box to try and map the drive.
When I fill out the Root privilege-enabled hosts with the VMWare ip address - does that give full control to the share ?
- btaroli (Prodigy):
ewok wrote: Do you have another machine (real, not VMWare) that you can use to try and mount that NFS share?
That belies a misunderstanding of what he's talking about. ESX hosts (2.5, 3.0, or 3.5) are RHEL4 builds. The "service console" is the actual Linux installation inside which the ESX vmkernel runs as a kernel module. Of course, VMware has a good deal of custom code going on, especially as it relates to shared repositories for VM hosts.
I wonder if the OP has tried to manually use "mount" from the service console, versus configuring the volume as a datastore within VirtualCenter. I have tested the root option from Mac OS X, Solaris, and Fedora (at home) and it seems to do exactly what one would expect.
The basic mount command from the service console should behave exactly as any other RHEL installation, so I'm very curious whether you find it works that way but not as a datastore. If it is just the latter that is an issue, then this would be the result of some incompatibility between ESX (not the service console itself) and the ReadyNAS.
- btaroli (Prodigy):
modac wrote: No - we only run Windows. I don't have a UNIX/LINUX box to try and map the drive.
When I fill out the Root privilege-enabled hosts with the VMWare ip address - does that give full control to the share ?
Probably not in the way you're thinking of it. The "root" option on an NFS export enables the "root" user (usually UID 0) on an NFS client mounting the export to execute the usual root activities (changing the UID/GID of files, etc.). This is as close as you get to "full control" on an NFS mount. ;)
- btaroli (Prodigy):
modac wrote: Any ideas what I need to do in order to mount the share as an NFS datastore ?
It could just be an NFS compatibility issue between the ReadyNAS and ESX (see my earlier post). Have you checked your vmkernel log file to see if it says anything about the NFS mount when it tries to hook up? You may wind up having to report this issue to VMware in order to learn more about what is happening and determine a workaround/fix. VMware has a custom device driver they use for NFS as a datastore, which enables shared use across all ESX servers in a cluster.
- modac1 (Aspirant): Here is some additional information -
I've opened the NFS Client on the ESX firewall
I've edited /etc/exports with
/VMStore 10.10.10.0/24(rw,no_root_squash,sync)
from the ESX Console I type
exportfs -r
then
mount -t nfs 10.10.10.9:/VMStore /mntpoint
it replies with
mount: RPC: Timed out
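For what it's worth, "RPC: Timed out" means the client never got an answer from the portmapper on the server. That can be probed directly; here is a hand-rolled, illustrative sketch of a portmapper NULL call (wire format per RFC 1057 - not something from this thread, and note the thread uses both 10.10.10.150 and 10.10.10.9 for the NAS):

```python
"""Minimal ONC RPC "NULL" ping of the portmapper (UDP/111).

mount's "RPC: Timed out" means exactly this call went unanswered.
A reply proves the rpc daemon on the server is alive; silence means
it is down or the packet is being filtered. Wire format per RFC 1057.
"""
import socket
import struct

XID = 0x20080520  # arbitrary transaction id

def rpc_null_call(xid: int = XID) -> bytes:
    """Build a portmapper v2 NULL-procedure call message."""
    return struct.pack(
        ">10I",
        xid,     # transaction id, echoed back in the reply
        0,       # message type: 0 = CALL
        2,       # RPC protocol version
        100000,  # program number: portmapper
        2,       # program version
        0,       # procedure 0: NULL ("ping")
        0, 0,    # credentials: AUTH_NONE, zero length
        0, 0,    # verifier: AUTH_NONE, zero length
    )

def ping_portmapper(host: str, timeout: float = 3.0) -> bool:
    """True if the portmapper on `host` answers the NULL call."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(rpc_null_call(), (host, 111))
        try:
            reply, _ = s.recvfrom(256)
        except OSError:
            return False
    # A valid reply echoes our xid and has message type 1 (REPLY).
    return struct.unpack(">2I", reply[:8]) == (XID, 1)

if __name__ == "__main__":
    # Point this at whichever address the ReadyNAS really has.
    print("portmapper replied" if ping_portmapper("10.10.10.150")
          else "no RPC reply (timed out)")
```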
Any ideas ?
- ewok (NETGEAR Expert):
btaroli wrote:
That belies a misunderstanding of what he's talking about. ESX hosts (2.5, 3.0, or 3.5) are RHEL4 builds. The "service console" is the actual Linux installation inside which the ESX vmkernel runs as a kernel module. Of course, VMware has a good deal of custom code going on, especially as it relates to shared repositories for VM hosts.
It's good to see someone else here has ESX experience. I certainly have none. :D
- btaroli (Prodigy):
ewok wrote: It's good to see someone else here has ESX experience. I certainly have none. :D
But you're still cute and furry! ;) :worship:
- btaroli (Prodigy):
modac wrote: Here is some additional information -
I've opened the NFS Client on the ESX firewall
I've edited /etc/exports with
/VMStore 10.10.10.0/24(rw,no_root_squash,sync)
from the ESX Console I type
exportfs -r
then
mount -t nfs 10.10.10.9:/VMStore /mntpoint
it replies with
mount: RPC: Timed out
Any ideas ?
Well, I'm not sure you have to fool with the firewall in order to configure NFS. I found references to two overviews of the process (both for NAS, one for NetApp), and they seem to suggest that the important factors are:
- You must have the NFS device on the same subnet as the VMkernel interface on each ESX host.
- You must give rw and root permissions on the NFS export.
- You may have to force the NFS version, but that's usually done on the client side, so I think such references had to do with Linux-based mounts and not those done via the vmkernel.
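The same-subnet point in the list above is easy to check mechanically. A small illustrative sketch using the addresses from this thread (the /24 mask is an assumption; use whatever mask the VMkernel interface actually has):

```python
"""Check whether the NFS server and the VMkernel interface share a subnet.

Addresses are the ones from this thread; the /24 prefix is an assumption.
"""
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """True if both addresses fall inside the same network of the given prefix."""
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

if __name__ == "__main__":
    vmkernel, nas = "10.10.10.181", "10.10.10.150"
    print(same_subnet(vmkernel, nas))  # both in 10.10.10.0/24 -> True
```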
The reference notes I found are:
NFS is a gray area for me with ESX because we mostly use Fibre Channel. I might, though, suggest you take a peek at the VMware Communities, which is a kickass customer forum.
- Gecko1 (Aspirant): Hahah, this is funny, because I thought I was the only crazy guy running ESX with the NV+ as my storage... OK guys, here's the scoop with this error, or at least my experience the last time this happened to me: nfsd was running but rpcd was not on my ReadyNAS NV+ (RAIDiator 4.01c1-p1 [1.00a041]), even though the share I set up was NFS. When I tried to mount this filesystem on my FreeBSD box, I would also get an RPC timeout error. There isn't a way to gracefully restart a subsystem like NFS on the ReadyNAS, because it usually just works. It would be nice if there were a process that monitored services and, if one died, tried to restart it X times, and if that failed, logged an error you could read under Health > Logs. A simple restart of my ReadyNAS via the web interface resolved the problem.
On the ESX server, log in via the console and become root so that all the esxcfg* commands are in your path and you can execute them, then type "esxcfg-nas -l", which will list all NFS storage containers you set up via the VIC. For example:
[root@flyingZ root]# esxcfg-nas -l
Falcor is /artax from 10.0.0.31 mounted
If the filesystem is failing to mount, you'll see "unmounted" instead of "mounted". You can type "esxcfg-nas -r" to attempt to remount the NFS volume/container, but remember that ESX 3.x will continually retry mounting automatically, so you should never really have to run this.
Anyway, the bottom line is I installed the SSH add-on on my ReadyNAS NV+ and ran 'ps' looking for rpc and nfsd; only nfsd was there. After a graceful reboot of the ReadyNAS NV+, both processes were running :) and my ESX automatically remounted. All is good.
L8r!
Gecko
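Gecko's manual "ps over SSH" check above can be scripted. A hedged sketch that reads /proc directly on a Linux box (the process-name patterns are assumptions and may differ between RAIDiator firmware versions):

```python
"""Look for the rpc and nfsd daemons in the process table.

Reads /proc directly, so it works on any Linux system (e.g. the
ReadyNAS over SSH once the SSH add-on is installed). The patterns
are assumptions; adjust to whatever your firmware actually names them.
"""
from pathlib import Path

def running(patterns=("portmap", "rpc", "nfsd")) -> dict:
    """Map each pattern to True if some process command line contains it."""
    names = []
    for p in Path("/proc").iterdir():
        if not p.name.isdigit():
            continue  # not a process directory
        try:
            cmd = (p / "cmdline").read_bytes().replace(b"\0", b" ").decode()
            if not cmd.strip():  # kernel threads have an empty cmdline
                cmd = (p / "comm").read_text()
            names.append(cmd.lower())
        except OSError:
            continue  # process exited while we were looking
    return {pat: any(pat in n for n in names) for pat in patterns}

if __name__ == "__main__":
    status = running()
    for name, up in status.items():
        print(f"{name:10s} {'running' if up else 'MISSING'}")
    if status["nfsd"] and not (status["portmap"] or status["rpc"]):
        print("nfsd is up but no rpc daemon -- a reboot brought it back for Gecko")
```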