
Post your performance results

kgoncher
Aspirant

Re: Post your performance results

I tried a different Cat6 cable and a different port on my D-LINK DGS-1016D gigabit switch; still amber, i.e. 100 Mbit/s.
I tried a different Cat6 cable connected to Ethernet port 2 on the RNU6, and the switch still shows 100 Mbit/s amber.
The rest of the ports on the DGS-1016D are green for gigabit Ethernet.
Any ideas why my RNU6 links at 100 Mbit/s? (The specs say 1000 Mbit/s.)
Message 301 of 309
kgoncher
Aspirant

Re: Post your performance results

I tried a different Cat6 cable and a different port on my D-LINK DGS-1016D gigabit switch; still amber, i.e. 100 Mbit/s.
I tried a different Cat6 cable connected to Ethernet port 2 on the RNU6, and the switch still shows 100 Mbit/s amber.
The rest of the ports on the DGS-1016D are green for gigabit Ethernet.
I have now also tried:
connecting to a gigabit router: the RNU6 is still at 100 Mbit/s
connecting a second switch (a Rosewill 8-port gigabit switch): the RNU6 is still at 100 Mbit/s

Any ideas on how to get my RNU6 to 1000 Mbit/s?
Message 302 of 309
StephenB
Guru

Re: Post your performance results

Have you chosen "Auto-negotiation" on the FrontView Interfaces tab?
Message 303 of 309
kgoncher
Aspirant

Re: Post your performance results

Yes: Network > Interfaces > Speed/Duplex mode: "Auto-negotiation".
The pull-down only gives me the choices of Full/Half Duplex at 100 Mbit/s.
There is no way for me to choose 1000 Mbit/s.
Message 304 of 309
kgoncher
Aspirant

Re: Post your performance results

I had a support ticket, and Mary was patient. The final fix was to go to Networking > Interfaces > Ethernet tab and, at the bottom under Performance Settings (I had to scroll down, it was not in view), check "Enable jumbo frames".
She said this was the only way to get gigabit Ethernet, and it worked.
Somewhere along the way it got switched off, not sure how or when,
but a 3 GB file now takes 30 seconds instead of 30 minutes.
Back up and running with transfer speeds of 110 MB/s.
Thanks for all the help. Kurt
Message 305 of 309
kgoncher
Aspirant

Re: Post your performance results

Update: it turns out I don't need jumbo frames. With a "good" Cat6 cable I'm getting 110+ MB/s reads; writes vary from 65 to 80 MB/s.
Message 306 of 309
mmartinezv
Aspirant

Re: Post your performance results

Hi, I've been doing some performance tests with two ReadyNAS 2100 units and a ReadyNAS Pro 4. I'm a bit confused: the network performance is very good, and the disks also give high performance, but combined the results are not as good. I'm sharing the information here in the hope of clarifying these results and, if possible, improving the performance.

ReadyNAS 2100 with 4x 4 TB Western Digital Red hard disks. Local disk performance test:

/c/test# time dd if=/dev/zero of=test_10G bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 75.0244 s, 140 MB/s

real 1m15.470s
user 0m0.146s
sys 1m6.038s
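One caveat worth noting (my addition, not part of the original test): dd writing to a file without a sync flag measures the Linux page cache as much as the disks, so the 140 MB/s figure may be optimistic. A minimal sketch with GNU dd's conv=fdatasync, which flushes to disk before reporting (the path and the scaled-down size here are placeholders):

```shell
# conv=fdatasync makes dd call fdatasync() before printing its MB/s
# figure, so the number reflects disk throughput, not cached writes.
# /tmp and the 100 MB size are illustrative, not the original test.
dd if=/dev/zero of=/tmp/dd_sync_test bs=1M count=100 conv=fdatasync
```

On a NAS with limited RAM a 10 GB write mostly defeats the cache anyway, but fdatasync removes the ambiguity.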

Same ReadyNAS connected to a Gigabit network (Netgear 724). Network performance test 1:

On the NAS: nc -v -v -l -n -p 2222 >/dev/null
On a server: cat /dev/zero | nc -v -v -n ip_address 2222 >/dev/null

Collecting stats: dstat -N eth0,eth1,bond0 -D md2,md3
----total-cpu-usage---- --dsk/md2-----dsk/md3-- --net/eth0----net/eth1---net/bond0- ---paging-- ---system--
usr sys idl wai hiq siq|_read _writ:_read _writ|_recv _send:_recv _send:_recv _send|__in_ _out_|_int_ _csw_
0 3 91 3 0 2|8092k 491k: 690k 100k| 0 0 : 0 0 : 0 0 | 0.1 5.4B|5745 3255
0 1 97 1 0 1|1132k 0 : 0 0 |2212k 312k: 140B 214B:2212k 313k| 0 0 |4108 2074
1 19 63 2 0 16| 52k 0 : 0 0 | 117M 2479k: 62k 69k: 117M 2547k| 0 0 | 13k 4134
1 17 66 0 0 16| 0 0 : 0 0 | 111M 2332k: 70B 214B: 111M 2330k| 0 0 | 12k 4245
1 18 66 1 0 16| 104k 0 : 0 0 | 117M 2858k: 258B 402B: 117M 2861k| 0 0 | 12k 4216
1 18 64 3 0 15| 92k 52k: 0 0 | 118M 2479k: 25k 27k: 118M 2505k| 0 0 | 12k 3467
1 17 67 0 0 15| 0 0 :4096B 0 | 117M 2478k: 22k 24k: 117M 2501k| 0 0 | 12k 3486
1 17 69 0 0 14|4096B 0 : 0 0 | 117M 2497k: 11k 12k: 117M 2510k| 0 0 | 12k 3159
4 17 65 0 0 14| 16k 0 : 0 0 | 115M 2303k:1744B 647B: 115M 2304k| 0 0 | 12k 2713

As you can see, the network is working well, using one gigabit interface of the bond at max speed.
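As a quick sanity check on those dstat numbers (my arithmetic, not from the original post), 117 MB/s of payload corresponds almost exactly to what a gigabit link can deliver after framing overhead:

```shell
# 117 MB/s * 8 bits/byte = 936 Mbit/s, which is essentially the
# ~940 Mbit/s of TCP payload a 1000 Mbit/s link can carry once
# Ethernet/IP/TCP header overhead is subtracted.
echo "$((117 * 8)) Mbit/s"
# prints: 936 Mbit/s
```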

I can repeat the experiment with dd, specifying the block size.

Same scenario. Network performance test 2, using a 1 MB block size (as in the local disk test).
On the NAS: nc -l -n -p 2222 | dd of=/dev/null bs=1M
0+258465 records in
0+258465 records out
2588194952 bytes (2.6 GB) copied, 25.0691 s, 103 MB/s

On the server: time dd if=/dev/zero bs=1M |nc -n 192.168.200.101 2222 >/dev/null
2469+0 records in
2468+0 records out
2587885568 bytes (2.6 GB) copied, 23.5151 s, 110 MB/s
(the transfer time observed on the NAS is higher because I'm starting the commands by hand in different terminals, but the real time is 23.5 s -> 110 MB/s)

Similar results. Command: dstat -N eth0,eth1,bond0 -D md2,md3
----total-cpu-usage---- --dsk/md2-----dsk/md3-- --net/eth0----net/eth1---net/bond0- ---paging-- ---system--
usr sys idl wai hiq siq|_read _writ:_read _writ|_recv _send:_recv _send:_recv _send|__in_ _out_|_int_ _csw_
1 28 56 0 0 15| 0 0 : 0 0 | 114M 1083k: 280B 461B: 114M 1084k| 0 0 | 15k 23k
2 27 56 0 0 15| 0 0 :4096B 0 | 114M 1067k: 70B 214B: 114M 1067k| 0 0 | 15k 24k
2 28 55 0 0 15|8192B 12k: 0 0 | 113M 1045k: 11k 2648B: 113M 1048k| 0 0 | 15k 23k
2 28 55 0 0 15| 0 1668k: 0 120k| 112M 1066k: 70B 214B: 112M 1067k| 0 0 | 15k 23k
2 26 57 0 0 15| 0 64k: 0 0 | 112M 1038k: 362B 372B: 112M 1039k| 0 0 | 15k 23k
1 28 56 0 0 15| 0 0 : 0 0 | 112M 1054k: 346B 332B: 112M 1054k| 0 0 | 15k 23k
2 29 54 0 0 15| 0 0 :4096B 0 | 113M 1185k: 70B 214B: 113M 1185k| 0 0 | 14k 24k

Summary

The local disk test shows a write speed of 140 MB/s.
The network tests show transfers of 110 MB/s, with one gigabit network interface at max speed.
With these results I was expecting to be able to do network writes to the NAS disks at near 100 MB/s, since I understand the bottleneck to be the network; but as you will see in the next combined test, this is not what I get.

Network writes test (combined test).

On the server: time dd if=/dev/zero bs=1M |nc -n 192.168.200.101 2222 >/dev/null
^C

real 0m26.680s
user 0m0.062s
sys 0m4.103s

On the NAS: nc -l -n -p 2222 | dd of=/c/test/test_1G bs=1M
0+97988 records in
0+97988 records out
1884771088 bytes (1.9 GB) copied, 29.1939 s, 64.6 MB/s

Poor results. Command: dstat -N eth0,eth1,bond0 -D md2,md3
----total-cpu-usage---- --dsk/md2-----dsk/md3-- --net/eth0----net/eth1---net/bond0- ---paging-- ---system--
usr sys idl wai hiq siq|_read _writ:_read _writ|_recv _send:_recv _send:_recv _send|__in_ _out_|_int_ _csw_
1 46 33 3 0 17|4096B 3924k:4096B 56M| 79M 905k: 12k 3946B: 79M 909k| 0 0 | 12k 7648
1 45 33 6 0 16| 0 1560k:4096B 28M| 79M 795k: 842B 1162B: 79M 796k| 0 0 | 13k 4730
1 33 46 11 0 9| 0 0 : 0 73M| 35M 448k: 258B 402B: 35M 449k| 0 0 |8245 4689
1 53 29 0 0 17| 0 164k: 0 95M| 71M 750k: 70B 214B: 71M 750k| 0 0 | 12k 6083
1 46 33 4 0 16| 192k 4096B: 0 31M| 86M 963k: 19k 21k: 86M 984k| 0 0 | 12k 5863
1 49 33 0 0 17| 0 308k: 0 46M| 80M 797k: 134B 214B: 80M 797k| 0 0 | 12k 5571
1 52 31 0 0 16| 0 396k:8192B 91M| 64M 678k:2084B 2268B: 64M 680k| 0 0 | 11k 5981

As you can see, the only modification is redirecting the network output to a file, using the same block size as in the previous tests, and the utilization of the network goes down significantly. The CPU is still about 31% idle, and the write statistics are lower than in the first test.
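The dd record counts give a hint (my reading, not confirmed on the ReadyNAS): "0+97988 records in" means every one of dd's reads from the pipe returned less than the 1 MB block size, so dd issued writes averaging only about 19 KB (1884771088 / 97988). GNU dd's iflag=fullblock makes it accumulate a full block before each write; a local demo of the effect:

```shell
# Without iflag=fullblock, each read from a pipe returns at most one
# pipe buffer's worth (~64 KiB), so dd's writes stay small despite bs=1M.
# With it, dd keeps reading until each 1 MiB block is complete.
head -c 2097152 /dev/zero | dd of=/tmp/fullblock_demo bs=1M iflag=fullblock
# With GNU dd this should report "2+0 records in" (two full blocks).
```

On the NAS side the hypothetical equivalent would be `nc -l -n -p 2222 | dd of=/c/test/test_1G bs=1M iflag=fullblock`; whether the larger writes actually raise throughput will depend on how the md/RAID layer handles small writes.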

With the ReadyNAS Pro 4 (with RAIDiator 4.2.27 or ReadyNAS OS 6) we get similar results. The local disk writes are a bit higher (near 200 MB/s) but the network writes (combined test) are about 60 MB/s.

Can somebody explain these results to me? I'd like to improve the ReadyNAS performance if possible.

Thanks in advance,

Manuel Martínez
Message 307 of 309
mdgm-ntgr
NETGEAR Employee Retired

Re: Post your performance results

What disk(s) are in your client PC?
Message 308 of 309
mmartinezv
Aspirant

Re: Post your performance results

Hi,

In fact I'm not using the disks on the client PC. It's a Linux box, and in all the tests I'm simply redirecting from the fast /dev/zero device.
Message 309 of 309