Bug Report
[BUG] RAID Resync Progress Output in 6.8.0
In my quest to find half-usable iSCSI performance on the 3220 (https://community.netgear.com/t5/ReadyNAS-in-Business/RN3220-RN4200-Crippling-iSCSI-Write-Performance/m-p/1289066), I upgraded to FW 6.8.0, blew away the array and rebuilt it - into a RAID 10 this time - and left it re-syncing overnight. I came back this morning, and this is what the re-sync operation was still reporting: if you jump in and out of the tab, the % and timer are still ticking. So it looks like a bug. I assume the UI update code was never correctly informed that the operation was complete, so the counters just kept ticking.
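If anyone else hits this and wants to confirm the array really has finished, here is a rough sketch (my own guess at a double-check over SSH, not anything from the ReadyNAS firmware) that parses /proc/mdstat and reports any md device that is still resyncing. The function and device names are mine, purely for illustration:

#!/usr/bin/env python3
# Sketch: report the real md resync state, independent of the web UI.
# Assumes SSH access to the NAS and a stock /proc/mdstat layout.
import re

def resync_status(mdstat_path="/proc/mdstat"):
    """Return {md_device: status_line} for any array still resyncing or recovering."""
    active = {}
    current_dev = None
    with open(mdstat_path) as f:
        for line in f:
            m = re.match(r"^(md\d+)\s*:", line)
            if m:
                current_dev = m.group(1)
            elif current_dev and ("resync" in line or "recovery" in line):
                active[current_dev] = line.strip()
    return active

if __name__ == "__main__":
    busy = resync_status()
    if busy:
        for dev, status in busy.items():
            print(f"{dev}: {status}")
    else:
        print("No resync/recovery in progress - the UI counter is stale.")

If that prints nothing while the web UI keeps counting, the problem is purely in the UI refresh, not in md itself.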
Bonding/VLAN bug in 6.6.0
I have found a UI bug (IMO anyway) with the Network/Bonding/VLAN setup in 6.6.0.

First the disclaimer (caveat): this is running on a modified, unsupported (ancient) Pro Pioneer, upgraded with a 3.0 GHz E7600, 2 GB of 1333 MHz DDR2, a VGA connector, fan mods, etc. The only non-factory app installed is linux-dash. None of this has anything to do with the bug, but I don't want to be accused of leaving anything potentially important out, so... Note: full details below, FYI only.

Attached is a pic that should say it all! While playing around with various bonding modes (lots of reboots with a single NIC attached to recover admin control), I now have lingering "bonds" with a single NIC (eth0 or eth1 ONLY). I can't delete them, change their settings, add a VLAN to them, or modify them in any way (I just want them to go away!). Anything I try results in a crash dialog popup. The correctly bonded "bond0" works fine, as do VLANs added to it with higher IDs than the stuck single-NIC "bonds".

BTW, if anyone is curious, the E7600 is pretty fast! Even running maxed out at a fairly well sustained 110 MBytes/sec, I never see CPU usage hit over 50%. I'm pretty sure the old Seagate 1TB ES drives (7200 RPM, built like tanks!) aren't the bottleneck either. In my case it's clear (again IMO) that the NICs are the limiting element. As has been explained before, due to the way bonding/teaming is implemented (even with LACP/802.3ad and the layer2+3 hash policy), it's hard to see increased net throughput via bonding, even with static LAGs, VLANs, a fast external network (4 teamed i350 server NICs on a Windows Server 2016 TP5 server box, Z170 chipset, 6x 4TB WD Red 5400 RPM "NAS" SATA-III 6Gbit drives, overclocked i3-6300), and a fast Layer 2+ smart switch (NG M4100-D12G).

It is especially disappointing that the LACP layer3+4 hash policy (supposed to load balance on transmit from a single client) doesn't appear to work in 6.6.0. I see the following in dmesg.log:

bond0: Setting xmit hash policy to layer3+4 (1)
bond0: option primary: mode dependency failed, not supported in mode 802.3ad(4)

Kernel log FYI on the E7600:

Oct 07 12:08:31 SIMATH_NAS kernel: smpboot: CPU0: Intel(R) Core(TM)2 Duo CPU E7600 @ 3.06GHz (fam: 06, model: 17, stepping: 0a)
Oct 07 12:08:31 SIMATH_NAS kernel: Performance Events: PEBS fmt0+, 4-deep LBR, Core2 events, Intel PMU driver.
Oct 07 12:08:31 SIMATH_NAS kernel: ... version: 2
Oct 07 12:08:31 SIMATH_NAS kernel: ... bit width: 40
Oct 07 12:08:31 SIMATH_NAS kernel: ... generic registers: 2
Oct 07 12:08:31 SIMATH_NAS kernel: ... value mask: 000000ffffffffff
Oct 07 12:08:31 SIMATH_NAS kernel: ... max period: 000000007fffffff
Oct 07 12:08:31 SIMATH_NAS kernel: ... fixed-purpose events: 3
Oct 07 12:08:31 SIMATH_NAS kernel: ... event mask: 0000000700000003
Oct 07 12:08:31 SIMATH_NAS kernel: x86: Booting SMP configuration:
Oct 07 12:08:31 SIMATH_NAS kernel: .... node #0, CPUs: #1
Oct 07 12:08:31 SIMATH_NAS kernel: x86: Booted up 1 node, 2 CPUs
Oct 07 12:08:31 SIMATH_NAS kernel: smpboot: Total of 2 processors activated (12235.88 BogoMIPS)
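For anyone chasing the same hash-policy question: here is a quick, rough sketch that dumps what the standard Linux bonding driver actually applied, by parsing /proc/net/bonding/bond0. The interface name bond0 comes from this post; the helper name and field selection are my own, so treat it as a starting point rather than anything official:

#!/usr/bin/env python3
# Sketch: show the bonding mode and xmit hash policy the driver actually applied.
# Assumes the standard Linux bonding driver; adjust the interface name if needed.

def bond_settings(iface="bond0"):
    wanted = ("Bonding Mode", "Transmit Hash Policy", "MII Status")
    settings = {}
    with open(f"/proc/net/bonding/{iface}") as f:
        for line in f:
            # Stop at the first per-slave section; we only want the bond-wide values.
            if line.startswith("Slave Interface"):
                break
            if ":" in line:
                key, value = line.split(":", 1)
                if key.strip() in wanted:
                    settings[key.strip()] = value.strip()
    return settings

if __name__ == "__main__":
    for key, value in bond_settings().items():
        print(f"{key}: {value}")

The Transmit Hash Policy line should show whether layer3+4 actually took effect, regardless of what the UI claims.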