using with a virtual hypervisor
RN3220 / RN4200 Crippling iSCSI Write Performance
I've got a mixed OEM ecosystem with several Tier 2 RN3220s and RN4200s alongside Tier 1 EqualLogic storage appliances. All of the ReadyNAS appliances have 12-drive complements of Seagate Enterprise 4TB drives in them. I wanted to be able to use the ReadyNAS units as iSCSI options, particularly for maintenance cycles on the storage network, so I re-purposed them from NFS use. I factory reset them onto 6.6 firmware and they have subsequently been upgraded to 6.7.1 with no change in symptoms.

Current setup:
- The drives have no errors.
- It's a new install, so there are no fragmentation issues etc.
- The NASes use X-RAID.
- I've set them up with 2-NIC 1GbE MPIO from the Hyper-V side (I've also tried 10GbE and a single 1GbE NIC with no impact).
- 9k jumbo frames are on.
- There are no teams on the RNs.
- Checksum is off.
- Quota is off.
- iSCSI target LUNs are all thick.
- Write caching is on (UPSes are in place).
- Sync writes are disabled.
- As many services as I can find are off; there are no SMB shares.
- No NETGEAR apps or cloud services.
- No snapshots.

The short summary of the problem: if I migrate a VM's VHDX storage from the EqualLogic onto any of the ReadyNAS appliances, across the same storage network and running from the same hypervisor, the data migration takes an eternity, and once the move completes the write performance is utterly, utterly crippled. Here are the ATTO stats: same VM, on the same 1GbE network. In the case of the above, this is over 1x 1GbE, single path, all running on the same switches. Moving the VM back to the EqualLogic storage takes a fraction of the time, and the performance of the VM is instantly restored. According to the ATTO data above, the ReadyNAS should be offering better performance than the substantially more expensive EqualLogic, yet in this condition these units are next to useless. If I create an SMB share and perform a file copy over the storage network, I can get 113MB/s on a 1x 1GbE NIC with no problems, so it does not look like a network issue. Does anyone have any ideas? I have one of the 3220s in a state where I can do anything to it. The only thing that I haven't tried is Flex-RAID instead of X-RAID, but these numbers do not look like a RAID parity calculation issue. Many thanks.
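One way to narrow this down further is to measure raw write throughput on the NAS itself, taking iSCSI and Hyper-V out of the picture entirely. A minimal sketch, assuming SSH access is enabled on the ReadyNAS and that /data is the volume backing the LUNs (both assumptions; adjust for your unit):

# Run on the ReadyNAS over SSH: write 4 GiB with direct I/O so the page cache doesn't mask disk/RAID speed
dd if=/dev/zero of=/data/ddtest.bin bs=1M count=4096 oflag=direct
# Remove the test file afterwards
rm /data/ddtest.bin

If the local dd figure is healthy but iSCSI writes still crawl, that points at the iSCSI target stack or initiator settings rather than the drives or the RAID layout.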
ESXi connectivity problems
Hi, I am experiencing a weird problem with an RN4220S (10Gbps SFP connection through an XSM7224 switch): sometimes the device becomes inaccessible from VMware (no NFS connectivity, so virtual machines are killed) and the web management interface states that the device is offline. The device does not recover by itself; last time I had to reboot it manually. NETGEAR tech support refused to help because the setup warranty has already ended. However, the problem persists (and is critical enough not to put the device into production). If someone has faced the same (or has a solution), please let me know.
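When the datastore drops like this, it can help to establish whether the whole storage path goes dark or only the NFS service on the NAS. A couple of read-only checks from the ESXi shell while the problem is occurring (a sketch; vmk1 and 192.168.10.20 are placeholders for your storage VMkernel port and the ReadyNAS IP, and the -I option needs a reasonably recent ESXi build):

# Is the NAS reachable at all from the storage vmkernel interface?
vmkping -I vmk1 192.168.10.20
# Do jumbo frames still pass end to end? (8972-byte payload + headers = 9000; -d sets don't-fragment)
vmkping -I vmk1 -d -s 8972 192.168.10.20

If ping fails too, the issue is below NFS (NIC, switch port, or the NAS locking up entirely), which would be consistent with the web UI also going offline.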
ReadyNAS 4220 - NFS vs. iSCSI - VMware
NETGEAR Community,
We are in the process of changing our VMware infrastructure and I have run into a few performance questions regarding our ReadyNAS setup. I am hoping the experienced members can offer some advice and insight.

Currently we have two ESXi 5.5 hosts running virtual machines from both internal (SAS) storage and a ReadyNAS 3100 using NFS (default settings). Our main (critical) database servers are running on the internal datastores. Our lightweight virtual machines are running from the 3100. We have purchased a ReadyNAS 4220 and set up 10GbE connections to the two ESXi hosts. We would like to set up centralized storage for all of our virtual machines and eliminate the need for local storage on the ESXi hosts.

I have done a great deal of research and testing. However, this has led me to a great deal of confusion. The ReadyNAS 4220 has 12 WD 2TB Black drives installed in RAID 10. I have been very impressed with the performance I am getting while testing iSCSI. However, the NFS write speeds are not good (no difference between a 1Gb and 10Gb connection, and well below iSCSI). I understand that this is a potential limitation of NFS in general. The NFS performance problems are resolved by enabling async, although I have read that this can lead to data corruption and problems. Ultimately we will see a performance increase running the VMs from the 4220, but I don't want to risk a major corruption problem if we are using async to gain the desired performance with NFS. The ESXi hosts are connected to battery backups and will automatically shut down all VMs in the event of a power failure. The ReadyNAS is not set up to shut down (but I am looking into a solution).

So my essential question is: should I enable async to gain the performance using NFS, or restructure our setup and use iSCSI? I like the flexibility NFS offers and have read it is the recommended setup for VMware, but I will need the performance achieved by enabling async to make it a viable solution. I greatly appreciate any help/insight the community can provide and will be more than happy to provide additional details as needed. Thank you, Kevin
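For context on what the async toggle actually changes: on a Linux-based NFS server, sync vs. async is an export option. The lines below are purely illustrative (hypothetical export path and subnet; on the ReadyNAS the setting is driven by the GUI toggle rather than by hand-editing exports), but they show the trade-off the question is really about:

# Illustrative /etc/exports entries on a generic Linux NFS server
/data/vmstore  10.0.0.0/24(rw,sync,no_root_squash)     # writes are committed to disk before being acknowledged
/data/vmstore  10.0.0.0/24(rw,async,no_root_squash)    # writes are acknowledged from memory; faster, but in-flight data is lost on a crash

With sync, ESXi only considers a write complete once the NAS has it on stable storage; with async, a NAS-side crash or power loss can silently drop writes that ESXi believes were committed, which is where the corruption risk comes from.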
ReadyNAS loses network connectivity
We're using a ReadyNAS Ultra 6 Plus for backup of ESX servers (using ghettoVCB with the ReadyNAS mounted as an NFS datastore) and a ReadyNAS Ultra 4 Plus as an NFS datastore. Unfortunately, both ReadyNAS devices occasionally (every few months) disappear from the network (i.e. become unreachable via NFS, HTTP, ICMP etc.). The disks are OK, and if the device is rebooted then everything is fine again. This is bad when it's being used for backup, but even worse when it's an NFS datastore. To detect and automate the recovery, I installed monit (apt-get install monit) and configured it to ping the default gateway to check if all is OK with the network; if it fails several times in a row, it reboots the ReadyNAS. This happened last night during a backup. It's pretty alarming that both ReadyNAS devices just go offline like this. Anyone else experiencing such issues? Both are running 4.2.19 firmware.
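For anyone wanting to reproduce the watchdog described above, a monit host check along these lines is roughly what is meant. This is only a sketch: the gateway address is a placeholder, and the exact ping/ICMP test syntax differs a little between monit versions, so verify it against the documentation for the monit build that apt-get installs on the ReadyNAS:

# Fragment for /etc/monit/monitrc (illustrative)
set daemon 60                                   # check every 60 seconds
check host gateway with address 192.168.1.1     # placeholder: use your default gateway
    if failed icmp type echo count 5 with timeout 5 seconds
        then exec "/sbin/reboot"

Rebooting on a failed ping is a blunt instrument: it recovers the NAS, but it does nothing for NFS datastores or backup jobs that were mid-write when the box went away.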
ESXi reports "All Paths Down" for ReadyNAS hosted NFS share
Hiya - looking for some feedback from the community on an issue I'm seeing. Thanks in advance for any insights.

Some background: We're using two ReadyNAS 3200s to host virtual machines via NFS. The hosts are running ESXi 5.1. The ReadyNAS units are running 4.2.19 and have "adaptive load balancing" set on the NICs.

Issue: I'm seeing some of the ESXi hosts report that NFS shares enter "All Paths Down" state for 6-7 seconds before exiting this status and reconnecting. This happens for BOTH ReadyNAS units and on 9 ESXi hosts, with no solid pattern on which host is impacted OR which ReadyNAS shows as "All Paths Down". It DOES appear to be related to the current load on the ReadyNAS; for example, if I start a backup job, I can expect to see this error on at least 3-4 ESXi hosts. I believe this has been happening for a while without anyone noticing, but it caused a HUGE issue 2 weeks ago, when one of the ReadyNAS units entered/exited "All Paths Down" state nonstop while backups were running. (I opened a support case with NETGEAR and submitted the logs, but they could not explain why this happened.)

Current theory: From what I can tell, adaptive load balancing causes the ReadyNAS to change which MAC address (and NIC) is receiving traffic for a certain percentage of the overall traffic. My guess is that when I run backups (or do anything else load intensive), the ReadyNAS attempts to load balance some of the traffic going to the ESXi hosts. The resulting change to the MAC address reported to the ESXi host causes ESXi to report "All Paths Down" briefly before the new MAC address/NIC resolves correctly. The issue we experienced must have been due to a glitch or bug in the load balancing, which caused the ReadyNAS to fail to "stabilize" the load balancing correctly; I was only able to stabilize the unit by power cycling it.

Questions:
1.) Does this sound like a plausible theory? My current thinking is I should disable load balancing and go to an active-backup configuration to see if this resolves the issue.
2.) Will a firmware update resolve this issue? I reviewed the firmware patch notes and none of them mention NFS stability with NIC teaming.
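To gather evidence for or against the load-balancing theory, it may help to correlate the APD windows with what ESXi itself records. A couple of read-only checks from the ESXi shell during a backup window (a sketch; the log paths are the ESXi 5.x defaults):

# Do the NFS datastores show as accessible right now?
esxcli storage nfs list
# Any all-paths-down or NFS connectivity messages logged around the time of the event?
grep -i "APD" /var/log/vmkernel.log
grep -i "nfs" /var/log/vobd.log

If the APD entries line up with the moments the ReadyNAS rebalances traffic, that would support switching the bond to active-backup as proposed in question 1.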
Readynas 2100, ESXi 5.5u2?
Hey, we've got a ReadyNAS 2100 that we use as an iSCSI datastore for a single ESXi host. We're currently running ESXi 4.2, about to upgrade to 5.1. One of our future paths involves upgrading some VM guests to Debian 7, which requires ESXi 5+. Although the ReadyNAS 2100 isn't certified to run 5.5, is there a reason it should or shouldn't work? Thanks
4200 iSCSI performance solved - getting 100 MB/s per NIC
I was never really happy with the performance I was getting on the ReadyNAS 4200: about 95 MB/s read. It was OK, but not great for having multiple NICs. So I finally did some optimization this weekend. The trick was to turn off teaming on the ReadyNAS and put each NIC on a separate VLAN, then set up ESXi for MPIO round robin in the GUI with 4 NICs on separate VLANs. You would think this is enough, but it is not. I was still getting 95 MB/s, but instead of seeing 1 NIC 100% utilized I saw 4 NICs 25% utilized. This had me stumped until I found an article about MPIO round robin IOPS and how it is set to 1000 by default. It needs to be set to 1, so this ESXi 5 command must be run on each host from SSH:

for i in `ls /vmfs/devices/disks/ | grep naa.600` ; do esxcli storage nmp psp roundrobin deviceconfig set -d $i --iops 1 --type iops; done

Once I did that, my read performance went from 95 MB/s to 400 MB/s with 4 NICs, using the VMware OpenPerformanceTest.icf in Iometer. Now I am thrilled with the performance of the ReadyNAS 4200 and feel comfortable virtualizing my Exchange server. A detailed step-by-step guide has been posted here: https://sites.google.com/site/abraindum ... erformance
Troy
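For anyone applying the same change, it is worth confirming the policy actually took effect on each LUN before re-running the benchmark. A sketch using standard ESXi 5.x esxcli calls (the naa identifier below is a placeholder; use the IDs enumerated by the loop above):

# Confirm the path selection policy on the ReadyNAS LUNs is round robin (VMW_PSP_RR)
esxcli storage nmp device list | grep -A 2 naa.600
# Show the round robin settings for one device; the IOPS limit should now read 1
esxcli storage nmp psp roundrobin deviceconfig get -d naa.600140500000000000000000000000

Note the setting is per device and per host, so it has to be re-applied (or scripted) for any LUN added later.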
HyperV/4220 - Remote SMB share does not support resiliency.
I have a 4220 I want to use for Hyper-V (2012 R2 Standard) over SMB. I tried to move a virtual machine from local storage over to the 4220 SMB share and received the error: "Remote SMB share does not support resiliency." This KB article (https://support.microsoft.com/kb/2920193) seems to point to this being fixed with SMB3, and that it should only occur with SMB1 and 2. But I have installed SMB Plus and confirmed my 4220 is using SMB3. I upgraded to OS 6.1.9 and the same issue occurs.
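One thing that can be double-checked from the NAS side (rather than from the Hyper-V host) is what protocol ceiling Samba is actually configured with, since the ReadyNAS OS 6 SMB service is Samba under the hood. A sketch, assuming SSH access is enabled on the 4220; also note that negotiating the SMB3 dialect is not by itself the same thing as the server implementing SMB3 resiliency (resilient handles), which is what Hyper-V is asking for here:

# Run on the ReadyNAS over SSH: dump the effective Samba settings and look for the protocol ceiling
testparm -s -v 2>/dev/null | grep -i "protocol"

If the dialect really is SMB3 and the error persists, the limitation is likely in the server-side feature support rather than in the Hyper-V configuration.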
ReadyNAS NVX problems with ESXi 5.0 [Case #16549549]
Hello, I'm trying out ESXi 5.0 with iSCSI on a ReadyNAS NVX, but it seems that it is not able to write to the VMFS v3.46 volume, which it should be able to. I can add the target perfectly fine and ESXi can see all the LUNs, but once I try to write or create a VM it just keeps showing "In Progress", and after a while it says "Unable to access file xxx/xxx.vmxf". I have two other ESXi 4.1 hosts on the ReadyNAS volume and they are working perfectly fine. Furthermore, when this ESXi 5.0 machine was running ESXi 4.1 it was also working fine together with the two other machines on the same VMFS volume. I tried adding a new LUN using VMFS v5 and it seems to be writing fine, but there I only have one machine accessing the VMFS LUN.

Some errors I see on ESXi 5.0 for the VMFS 3.46 volume are as follows:

2011-08-27T15:28:49.063Z cpu0:2056)ScsiDeviceIO: 2305: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.083Z cpu0:2056)NMP: nmp_ThrottleLogForDevice:2318: Cmd 0x93 (0x412400749ac0) to dev "naa.60014052e2962c00248d003000000000" on path "vmhba34:C0:T1:L1" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.Act:NONE
2011-08-27T15:28:49.099Z cpu0:2056)ScsiDeviceIO: 2305: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.103Z cpu0:2056)ScsiDeviceIO: 2305: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.123Z cpu0:2056)ScsiDeviceIO: 2316: Cmd(0x412400749ac0) 0x93, CmdSN 0x226 to dev "naa.60014052e2962c00248d003000000000" failed H:0x3 D:0x0 P:0x0 Possible sense data: 0xb 0x24 0x0.
2011-08-27T15:28:49.123Z cpu6:2177)ScsiDeviceIO: 2949: CmdSN 0x24f to device naa.60014052e2962c00248d003000000000 timed out: expiry time occurs 1ms in the past
2011-08-27T15:28:49.123Z cpu6:4241)FS3DM: 2427: status Timeout zeroing 1 extents (65536 each)
2011-08-27T15:28:49.123Z cpu6:4241)J3: 2601: Aborting txn (0x410018c24c50) callerID: 0xc1d00006 due to failure pre-committing: Timeout
2011-08-27T15:28:49.123Z cpu6:4241)Fil3: 13341: Max timeout retries exceeded for caller Fil3_FileIO (status 'Timeout')
2011-08-27T15:28:49.123Z cpu6:4241)BC: 4347: Failed to flush 1 buffers of size 8192 each for object '1' f530 28 3 4e04f371 46875229 1c00457f dca691c0 3401444 13 0 0 0 0 0: Timeout

Does anyone have any idea why this is happening on the ReadyNAS NVX with ESXi 5.0, or whether the ReadyNAS isn't really compatible with ESXi 5.0? Thanks.
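One note on the log excerpt above: opcode 0x93 is SCSI WRITE SAME(16), which ESXi 5.0 issues for VAAI hardware-accelerated block zeroing and which some older iSCSI targets do not handle, and the FS3DM line shows exactly that zeroing operation timing out. As an experiment (an assumption on my part, not a confirmed fix for the NVX), the block-zeroing primitive can be turned off so ESXi falls back to software zeroing, and the VM creation retried:

# Check whether hardware-accelerated zeroing (the "Init" primitive) is currently enabled
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
# Disable it (use -i 1 to re-enable later)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0

If the errors stop after that, the NVX firmware simply may not support that primitive, rather than being incompatible with ESXi 5.0 as a whole.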