Forum Discussion
Retired_Member
Jul 22, 2018
XRAID Hard-Drive Speeds?
Hi all,
Ever since the NAS OS was re-installed and I did some maintenance on the NAS (test disks -> scrub -> balance -> defrag), I've been experiencing slow transfer rates.
I have run th...
- Retired_Member
Jul 22, 2018
Removed network bonding; it appears to have sped up network transfers considerably (46Mbs).
StephenB
Jul 23, 2018
Guru - Experienced User
Retired_Member wrote:
If anyone else could please post their drive speed test results to compare with, it would be appreciated.
Not sure why you are using oflag=dsync, so I ran the command both ways. This is on an RN526x with 4x6TB WD60EFRX drives.
root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 28.2462 s, 9.5 MB/s
root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.61322 s, 438 MB/s
root@NAS:/data/Test#
Retired_Member wrote:
Removed network bonding; it appears to have sped up network transfers considerably (46Mbs).
You should be seeing ~100 MB/sec for large sequential file transfers over a gigabit network. Perhaps re-run iperf and see what your raw network speeds are without bonding.
What bonding mode were you using on the NAS (and what mode did you have enabled on the switch)?
Also, what MTU are you using on the NAS?
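For reference, a minimal sketch of that check (assuming the classic iperf v2 is installed on both ends; substitute your own NAS address for 10.1.1.250):

# On the NAS (over SSH): start the iperf server, which listens on TCP port 5001
iperf -s

# On the PC: run a bidirectional test against the NAS, reporting every 5 seconds
iperf -c 10.1.1.250 -i 5 -d

Anything well below ~940 Mbits/sec on a gigabit link points at the network rather than the disks.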
- Retired_Member
Jul 24, 2018
StephenB wrote:
Not sure why you are using oflag=dsync, so I ran the command both ways. This is on an RN526x with 4x6TB WD60EFRX drives.
root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 28.2462 s, 9.5 MB/s
root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.61322 s, 438 MB/s
root@NAS:/data/Test#
I believe dsync (data-only sync, no metadata) writes sequentially and ensures the write cache isn't used, or is kept to a minimum (the buffer is flushed out every block), whereas direct bypasses the Linux page cache but writes concurrently. I'm not sure if there is an option to write sequentially and bypass the Linux page cache altogether, so I used dsync for sequential writing to get a more accurate write speed. Not that it really matters when comparing the same tests, but it does minimize variance.
(testing with some unavoidable hard-drive use):
***@NAS:~$ dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 22.7576 s, 11.8 MB/s
***@NAS:~$ dd bs=1M count=256 if=/dev/zero of=test oflag=direct
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 4.33843 s, 61.9 MB/s
***@NAS:~$ dd bs=1M count=256 if=/dev/zero of=test
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.849287 s, 316 MB/s

Appears the hard-drive speeds are comparable - phew :)
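(For completeness, a read-side sketch of the same kind of test, assuming root access and the 'test' file written above; dropping the page cache first keeps the read from being served out of RAM:)

sync                                         # flush any pending writes
echo 3 > /proc/sys/vm/drop_caches            # drop the page cache (root only)
dd bs=1M if=test of=/dev/null                # buffered sequential read
dd bs=1M if=test of=/dev/null iflag=direct   # or bypass the page cache per-read instead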
StephenB wrote:
Retired_Member wrote:
Removed network bonding; it appears to have sped up network transfers considerably (46Mbs).
You should be seeing ~100 MB/sec for large sequential file transfers over a gigabit network. Perhaps re-run iperf and see what your raw network speeds are without bonding.
What bonding mode were you using on the NAS (and what mode did you have enabled on the switch)?
Also, what MTU are you using on the NAS?
Config: NAS MTU=1500, same as router.
CPE: NAS 2x 1Gb <-> 2x Cat6 <-> 2x 1Gb router ports <-> Cat6 <-> 1x 1Gb PC
I was using round-robin bonding, but it appears OK now after swapping out the Ethernet cables. I'm getting around 750-890 Mbps, which is roughly 100 MB/sec.
***@PC: iperf -c 10.1.1.250 -i 5 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 208 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.1.250, TCP port 5001
TCP window size: 208 KByte (default)
------------------------------------------------------------
[  4] local 10.1.1.89 port 3937 connected with 10.1.1.250 port 5001
[  5] local 10.1.1.89 port 5001 connected with 10.1.1.250 port 33836
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 5.0 sec   447 MBytes   749 Mbits/sec
[  4]  5.0-10.0 sec   531 MBytes   891 Mbits/sec
[  4]  0.0-10.0 sec   978 MBytes   820 Mbits/sec
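(A quick sketch for anyone wanting to double-check those MTU figures, assuming an SSH shell on the NAS and a Windows PC on the client side:)

# On the NAS: show each interface and its MTU (look for "mtu 1500")
ip link show

# On a Windows PC: list interface MTUs
netsh interface ipv4 show subinterfaces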
- StephenB
Jul 24, 2018
Guru - Experienced User
It looks like you've sorted it out.
On the bonding - if you are using round-robin in the NAS, then you need to have a static LAG set up on the switch/router. I don't know of any home router that supports static LAGs. A couple of home routers (for instance the Netgear R8500) support LACP bonding - if you have one of those, you could try that. Or you could try TLB or ALB, as those are the two modes that don't require router/switch support.
Though as I mentioned earlier, with link aggregation the router->client connection becomes the bottleneck - so you won't see any speed increase for a single connection using wireless or gigabit. You would see a gain only if you have multiple clients accessing the NAS simultaneously. So you could just leave well enough alone, and use a single ethernet connection on the NAS.
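If you do revisit bonding later, one quick way to confirm which mode the NAS is actually running is the kernel's bonding status file (a sketch assuming the bond interface is named bond0, which may differ on your unit):

# Print the active bonding mode and per-slave status from the kernel bonding driver
cat /proc/net/bonding/bond0
# The "Bonding Mode:" line should read e.g. "load balancing (round-robin)",
# "transmit load balancing", "adaptive load balancing",
# or "IEEE 802.3ad Dynamic link aggregation" for LACP.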