VMware
ReadyNAS loses network connectivity
We're using a ReadyNAS Ultra 6 Plus for backups of ESX servers (using ghettoVCB with the ReadyNAS mounted as an NFS datastore) and a ReadyNAS Ultra 4 Plus as an NFS datastore. Unfortunately, both ReadyNAS devices occasionally (every few months) disappear from the network (i.e. become unreachable via NFS, HTTP, ICMP, etc.). The disks are OK, and if the device is rebooted then everything is fine again. This is bad when it's being used for backups, but even worse when it's an NFS datastore. To detect the problem and automate recovery, I installed monit (apt-get install monit) and configured it to ping the default gateway to check whether the network is OK - if the ping fails several times in a row, it reboots the ReadyNAS (a rough sketch of the check is below). This happened last night during a backup. It's pretty alarming that both ReadyNAS devices just go offline like this. Is anyone else experiencing such issues? Both are running 4.2.19 firmware.
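A minimal monit host check along those lines looks roughly like this - the gateway address, counts and timeouts are illustrative only, so adjust them to your own network:

    check host gateway with address 192.168.0.1
        if failed icmp type echo count 3 with timeout 5 seconds for 3 cycles
            then exec "/sbin/reboot"

With monit polling every minute or so, the reboot only fires after several consecutive failed pings, so a single dropped packet won't trigger it.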
ESXi reports "All Paths Down" for ReadyNAS hosted NFS share
Hiya - looking for some feedback from the community on an issue I'm seeing. Thanks in advance for any insights.

Some background: We're using two ReadyNAS 3200s to host virtual machines via NFS. The hosts are running ESXi 5.1. The ReadyNAS units are running 4.2.19 and have "adaptive load balancing" set on the NICs.

Issue: I'm seeing some of the ESXi hosts report that NFS shares enter the "All Paths Down" state for 6-7 seconds before exiting this status and reconnecting. This happens for BOTH ReadyNAS units and on 9 ESXi hosts, with no solid pattern as to which host is impacted OR which ReadyNAS shows as "All Paths Down". It DOES appear to be related to the current load on the ReadyNAS. For example, if I start a backup job, I can expect to see this error on at least 3-4 ESXi hosts. I believe this has been happening for a while without anyone noticing, but it caused a HUGE issue 2 weeks ago, when one of the ReadyNAS units entered/exited the "All Paths Down" state nonstop while backups were running. (I opened a support case with NETGEAR and submitted the logs, but they could not explain why this happened.)

Current theory: From what I can tell, adaptive load balancing causes the ReadyNAS to change which MAC address (and NIC) receives a certain percentage of the overall traffic. My guess is that when I run backups (or do anything else load intensive), the ReadyNAS attempts to rebalance some of the traffic going to the ESXi hosts. The resulting change in the MAC address presented to the ESXi host causes ESXi to report "All Paths Down" briefly, until the new MAC address/NIC resolves correctly. The issue we experienced must have been due to a glitch or bug in the load balancing, which caused the ReadyNAS to fail to "stabilize" the load balancing correctly. I was only able to stabilize the unit by power cycling it.

Questions:
1.) Does this sound like a plausible theory? My current thinking is that I should disable load balancing and go to an active-backup configuration to see if this resolves the issue (a quick way to check the current bond state is sketched below).
2.) Will a firmware update resolve this issue? I reviewed the firmware patch notes and none of them mention NFS stability with NIC teaming.
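The bond mode and the MAC addresses currently in use can be confirmed from an SSH session on the ReadyNAS - this assumes the teamed interface shows up as bond0, which may differ on your unit:

    cat /proc/net/bonding/bond0
    ifconfig -a | grep HWaddr

In adaptive-load-balancing (balance-alb) mode the slaves keep distinct MAC addresses and the one answered to a given client can change when traffic is rebalanced, which is consistent with the theory above; in active-backup mode all slaves present the bond's single MAC by default, so there is nothing to re-learn.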
Readynas 2100, ESXi 5.5u2?
Hey, we've got a ReadyNAS 2100 that we use as an iSCSI datastore for a single ESXi host. We're currently running ESXi 4.2, about to upgrade to 5.1. One of our future paths involves upgrading some VM guests to Debian 7, which requires ESXi 5+. The ReadyNAS 2100 isn't certified for ESXi 5.5 - but is there a reason it should or shouldn't work? Thanks
4200 iSCSI performance solved getting 100 MB/s per NIC
I was never really happy with the performance I was getting on the ReadyNAS 4200 - about 95 MB/s read. It was OK, but not great for a box with multiple NICs. So I finally did some optimization this weekend. The trick was to turn off teaming on the ReadyNAS and put each NIC on a separate VLAN, then set up ESXi for MPIO round robin in the GUI with 4 NICs on separate VLANs. You would think this is enough, but it is not. I was still getting 95 MB/s, only now I saw 4 NICs at 25% utilization instead of 1 NIC at 100%. This had me stumped until I found an article about the MPIO round-robin IOPS setting and how it defaults to 1000. It needs to be set to 1. So this ESXi 5 command must be run on each host from SSH:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; do esxcli storage nmp psp roundrobin deviceconfig set -d $i --iops 1 --type iops ; done

Once I did that, my read performance went from 95 MB/s to 400 MB/s with 4 NICs, using the VMware OpenPerformanceTest.icf in Iometer. Now I am thrilled with the performance of the ReadyNAS 4200 and feel comfortable virtualizing my Exchange server. A detailed step-by-step guide has been posted here: https://sites.google.com/site/abraindum ... erformance
Troy
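To sanity-check that the setting actually stuck, the same loop can read the config back per device (this assumes the same naa.600 naming convention as above):

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; do esxcli storage nmp psp roundrobin deviceconfig get -d $i ; done

Each device should come back showing an IOPS limit of 1 with a limit type of iops; if one doesn't, the set command didn't take on that LUN.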
ReadyNAS NVX problems with ESXi 5.0 [Case #16549549]
Hello, I'm trying out ESXi 5.0 with iSCSI on a ReadyNAS NVX, but it seems it is not able to write to the VMFS v3.46 volume, which it should be able to. I can add the target perfectly fine and ESXi can see all the LUNs, but once I try to write or create a VM it just keeps showing "In Progress", and after a while it says "Unable to access file xxx/xxx.vmxf". I have two other ESXi 4.1 hosts on the ReadyNAS volume and they are working perfectly fine. Furthermore, when this ESXi 5.0 machine was still ESXi 4.1 it also worked fine alongside the two other machines on the same VMFS volume. I tried adding a new LUN using VMFS v5 and it seems to write fine, but there I only have one machine accessing the VMFS LUN.

Some errors I see on ESXi 5.0 for the VMFS 3.46 volume are as follows:

2011-08-27T15:28:49.063Z cpu0:2056)ScsiDeviceIO: 2305: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.083Z cpu0:2056)NMP: nmp_ThrottleLogForDevice:2318: Cmd 0x93 (0x412400749ac0) to dev "naa.60014052e2962c00248d003000000000" on path "vmhba34:C0:T1:L1" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0. Act:NONE
2011-08-27T15:28:49.099Z cpu0:2056)ScsiDeviceIO: 2305: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.103Z cpu0:2056)ScsiDeviceIO: 2305: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.123Z cpu0:2056)ScsiDeviceIO: 2316: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x3 D:0x0 P:0x0 Possible sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.123Z cpu6:2177)ScsiDeviceIO: 2949: CmdSN 0x24f to device naa.60014052e2962c00248d003000000000 timed out: expiry time occurs 1ms in the past
2011-08-27T15:28:49.123Z cpu6:4241)FS3DM: 2427: status Timeout zeroing 1 extents (65536 each)
2011-08-27T15:28:49.123Z cpu6:4241)J3: 2601: Aborting txn (0x410018c24c50) callerID: 0xc1d00006 due to failure pre-committing: Timeout
2011-08-27T15:28:49.123Z cpu6:4241)Fil3: 13341: Max timeout retries exceeded for caller Fil3_FileIO (status 'Timeout')
2011-08-27T15:28:49.123Z cpu6:4241)BC: 4347: Failed to flush 1 buffers of size 8192 each for object '1' f530 28 3 4e04f371 46875229 1c00457f dca691c0 3401444 13 0 0 0 0 0: Timeout

Does anyone have any idea why this is happening on the ReadyNAS NVX with ESXi 5.0, or whether the ReadyNAS isn't really compatible with ESXi 5.0? Thanks,
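For what it's worth, SCSI opcode 0x93 is WRITE SAME(16), the command ESXi uses for offloaded block zeroing when VAAI hardware acceleration is in play, and the "FS3DM ... zeroing" timeout above is that code path. One hedged way to check whether that primitive is involved, and to rule it out as a reversible test rather than a fix, is from the ESXi 5.0 shell (the device ID is taken from the log above; whether the NVX advertises VAAI support at all is not confirmed here):

esxcli storage core device vaai status get -d naa.60014052e2962c00248d003000000000
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0

Setting the two advanced options back to 1 re-enables the offloads, so this only tells you whether the zeroing offload is the trigger.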
Bonding multiple NIC's w/ Cisco 3750
I have a new ReadyNAS 4220. I'm trying to bond all four 1GbE NIC ports to a Cisco 3750G switch. I'm connecting to the management tool via the 10GbE port, so the tool's connectivity isn't affected by my tweaking.

I have configured the 4 ports on the switch using the Cisco Network Assistant by going to the "Etherchannels..." menu, adding a group 1, assigning ports 3, 4, 5 and 6 to the group and configuring all 4 as LACP. All 4 ports show green... suggesting they aren't disabled ;).

When I go into the ReadyNAS configuration:
- select the "Network" tab
- click eth0 and select "New Bond..."
- add the eth1, eth2 and eth3 interfaces to the new group
- select IEEE 802.3ad LACP
- select Layer 2
- click Create.

The busy wheel starts spinning, the lights on the 3750 go orange for a bit, then turn green. Now I don't have connectivity (can't find DHCP server). I change the IP address to static and configure the IP settings, but still no connectivity. I am unable to ping the interface. Before I bond them, I can ping it just fine. What setting am I missing? I bonded the two 10GbE ports to my Juniper EX4550 switch this same way and that seems to work. Thoughts?
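For comparison, the CLI equivalent of the Etherchannel group described above looks roughly like this - the port numbers, VLAN and access mode are assumptions, so adjust to the actual setup and verify with "show etherchannel summary" afterwards:

interface range GigabitEthernet1/0/3 - 6
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10

One thing worth confirming in the running config is that the channel-group mode is "active" (or "passive") rather than "on" - mode "on" is static Etherchannel without LACP, and pairing that with IEEE 802.3ad on the ReadyNAS can produce exactly this kind of no-connectivity result.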
ReadyNAS Pro 4.2.24, iSCSI and VMware vSphere ESXi 5.5
I just upgraded to VMware vSphere ESXi 5.5 and tried to access an iSCSI target (LUN) that was originally created with ESXi 5.0. I added the new initiator IQN to the access control list for the iSCSI target on the ReadyNAS Pro, which is running RAIDiator 4.2.24, but ESXi did not see the device or the datastore. I then deleted the iSCSI target and created a new one, but ESXi still did not see the device. Oddly enough, VMware does show the target name in the iSCSI initiator properties. I've refreshed, rescanned and rebooted (both sides) several times. Is RAIDiator going to need an update for ESXi 5.5?
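A few things that can be checked from the ESXi 5.5 shell to narrow this down - the adapter name vmhba33 is just an example, so use whatever the software iSCSI adapter is called on the host:

esxcli iscsi adapter discovery rediscover -A vmhba33
esxcli storage core adapter rescan --all
esxcli storage core device list
esxcli storage vmfs snapshot list

If the LUN appears in the device list but the datastore never mounts, the snapshot list output is the interesting part: a VMFS volume whose backing LUN identity has changed (for example after a target was deleted and recreated) can be detected as a snapshot copy and left unmounted until it is resignatured or force-mounted.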
Readynas 4200 losing iSCSI connection for 5 secs (15072737)
Hello, we have 2x VMware ESXi 4.1 U1 servers with 4 GbE paths each to the iSCSI SAN (ReadyNAS 4200). We now have the problem that we sometimes lose the connection and the entire VMware environment hangs for 5 seconds. We let VMware analyse the logs and they came up with this:

"Hello Jeremy, vm-support logs show that the ESXi host lost access to the NetGear ReadyNAS 4200 iSCSI array:

messages.2:Mar 21 14:39:26 vmkernel: 4:19:08:18.685 cpu11:4844)WARNING: iscsi_vmk: iscsivmk_StopConnection: vmhba37:CH:1 T:0 CN:0: iSCSI connection is being marked "OFFLINE" (Event:6)
messages.2:Mar 21 14:39:26 vmkernel: 4:19:08:18.685 cpu11:4844)WARNING: iscsi_vmk: iscsivmk_StopConnection: Sess [ISID: 00023d000003 TARGET: iqn.2010-12.BIGFOOT:vmware.lun0 TPGT: 1 TSIH: 0]
messages.2:Mar 21 14:39:26 vmkernel: 4:19:08:18.685 cpu11:4844)WARNING: iscsi_vmk: iscsivmk_StopConnection: Conn [CID: 0 L: 192.168.0.52:51125 R: 192.168.0.3:3260]
messages.1:Mar 21 14:39:28 vmkernel: 4:19:08:20.654 cpu7:4103)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x41027f1ca540) to NMP device "naa.60014052e10897000c6d002000000000" failed on physical path "vmhba37:C1:T0:L1" H:0x2 D:0x0 P:0x0 Possible sense data: 0
messages.1:Mar 21 14:39:28 vmkernel: 4:19:08:20.654 cpu7:4103)ScsiDeviceIO: 1672: Command 0x28 to device "naa.60014052e10897000c6d002000000000" failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
messages.1:Mar 21 14:39:28 vmkernel: 4:19:08:20.676 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x12 (0x41027f9cc840) to NMP device "naa.60014052e10897000c6d003000000000" failed on physical path "vmhba37:C1:T0:L2" H:0x2 D:0x0 P:0x0 Possible sense data: 0
messages.1:Mar 21 14:39:28 vmkernel: 4:19:08:20.676 cpu1:4097)ScsiDeviceIO: 1672: Command 0x12 to device "naa.60014052e10897000c6d003000000000" failed H:0x2 D:0x0 P:0x0 Possible sense data: 0x5 0x24 0x0.
messages.1:Mar 21 14:39:31 vmkernel: 4:19:08:22.974 cpu2:4844)WARNING: iscsi_vmk: iscsivmk_StartConnection: vmhba37:CH:1 T:0 CN:0: iSCSI connection is being marked "ONLINE"

On the other hand, the logs also show that the iSCSI LUNs are using MRU as the path policy and the storage array is detected as VMW_SATP_ALUA. This is something that needs to be validated with your SAN vendor, as our HCL states that the storage array type should be VMW_SATP_DEFAULT_AA and the path policy should be VMW_PSP_FIXED. Please engage your SAN vendor to have this validated by them."

Please assist in fixing this! We are on 4.2.15-SP1.
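For reference, on ESXi 4.1 the current claiming can be inspected, and the path selection policy changed per device, from the CLI. The device ID below is taken from the log above and the exact 4.1 flags should be double-checked against the esxcli reference for that release, so treat this as a sketch:

esxcli nmp device list -d naa.60014052e10897000c6d002000000000
esxcli nmp device setpolicy -d naa.60014052e10897000c6d002000000000 --psp VMW_PSP_FIXED

Whether the array should really be claimed as VMW_SATP_DEFAULT_AA rather than VMW_SATP_ALUA is exactly what VMware wants the SAN vendor to confirm, so a SATP claim rule change is best left until NETGEAR weighs in.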
SBS 2011 and ESXi
I was thinking of maybe getting a ReadyNAS to use as NFS storage for a new ESXi server to replace an ageing setup. Which ReadyNAS would you recommend for this? Currently we're using an ageing SBS 2003 OEM server with a Pentium IV 3 GHz processor and 4 GB of RAM installed (3.25 GB usable). It works fine, but it doesn't meet the requirements of some new software we need to use. So I was thinking that with SBS 2011 now released it would be a good time to upgrade our systems. I was thinking maybe this server would be good: http://betterit.com.au/scripts/prodview ... uct=141001 Should I get SAS disks (if so, which ones) or enterprise SATA disks?

I was a bit shocked when I read the system requirements for SBS 2011: http://www.microsoft.com/sbs/en/us/syst ... ments.aspx What's so special about SBS 2011 that it needs a quad-core processor? Why wouldn't a dual-core suffice?

Ideally I would've liked to just migrate our SBS 2003 installation onto a new server; however, as we have the OEM version, this isn't possible. So I'd like to get a Retail or Volume License version of SBS 2011 (which do you recommend?). Also, I was thinking that using ESXi would be a better way of doing things, in case we wish to get the Premium Add-on down the track, and to make it easier to move to a new server if the current one fails. I'd like to be able to separate hardware upgrades from OS upgrades when it comes to the server, as an SBS migration is a time-consuming task. Any advice on how to migrate from SBS 2003 to 2011, such as good sites to look at? Also, how much downtime will this lead to?