
Forum Discussion

Retired_Member
Jul 22, 2018
Solved

XRAID Hard-Drive Speeds?

Hi all,

 

Ever since the NAS OS was re-installed and I did some maintenance on the NAS (test disks -> scrub -> balance -> defrag), I've been experiencing slow transfer rates.

 

 

I have run the following hard-drive speed test on the NAS, to see if there is any read/write issue, but I don't know whether these results are slow or normal:

***@NAS:/media/USB_HDD_2$ dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 4.55748 s, 58.9 MB/s
***@NAS:/media/USB_HDD_2$ cd /data/Sharing/
***@NAS:/data/Sharing$ dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
256+0 records in
256+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 21.7408 s, 12.3 MB/s
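For what it's worth, read speed can be checked the same way in reverse (a sketch only; the cache drop needs root, otherwise the read may come straight from RAM rather than the disks):

```shell
# Write a 256 MiB test file with synchronous output, as above
dd bs=1M count=256 if=/dev/zero of=test oflag=dsync

# Drop the page cache so the read test actually hits the disks
# (root only; silently skipped if not permitted)
sync; echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true

# Read the file back and report throughput, then clean up
dd bs=1M count=256 if=test of=/dev/null
rm test
```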

 

If anyone else can please post their drive speed test results to compare with, it would be appreciated.

 

Thanks

 

 

  • Retired_Member
    Jul 22, 2018

    Removed network bonding, and it appears to have sped up network transfers considerably (46 MB/s).

5 Replies

Replies have been turned off for this discussion
  • Retired_Member

     

    Just some further info:

    Here are the (concerning) transfer rates across the LAN (albeit it's PC <-> wireless <-> router (gig) <-> 2x Cat 6 -> NAS; but see below, as there are no PC Wi-Fi issues with the internet speeds):

    ***@PC: NAS performance tester
    Running a 100MB file write on Z: once...
    -----------------------------
    Average (W): 0.29 MB/sec
    -----------------------------
    Running a 100MB file read on Z: once...
    -----------------------------
    Average (R): 0.38 MB/sec
    -----------------------------

     

    The network speeds - NAS (client): ***

    ***@NAS: iperf -c
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    ------------------------------------------------------------
    Client connecting to 10.1.1.4, TCP port 5001
    TCP window size: 196 KByte (default)
    ------------------------------------------------------------
    [  5] local 10.1.1.250 port 38606 connected with 10.1.1.4 port 5001
    [  4] local 10.1.1.250 port 5001 connected with 10.1.1.4 port 13830
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.0 sec  3.88 MBytes  3.25 Mbits/sec
    [  5]  0.0-10.0 sec  1.50 MBytes  1.26 Mbits/sec
    [  4] 10.0-20.0 sec  3.12 MBytes  2.62 Mbits/sec
    [  5] 10.0-20.0 sec  1.12 MBytes   944 Kbits/sec
    [  4] 20.0-30.0 sec  4.12 MBytes  3.46 Mbits/sec
    [  5] 20.0-30.0 sec  1.38 MBytes  1.15 Mbits/sec
    [  4] 30.0-40.0 sec  3.00 MBytes  2.52 Mbits/sec
    [  5] 30.0-40.0 sec  1.88 MBytes  1.57 Mbits/sec
    [  4] 40.0-50.0 sec  3.10 MBytes  2.60 Mbits/sec
    [  5] 40.0-50.0 sec  2.75 MBytes  2.31 Mbits/sec
    [  4] 50.0-60.0 sec  3.34 MBytes  2.80 Mbits/sec
    [  4]  0.0-60.5 sec  20.8 MBytes  2.88 Mbits/sec
    [  5] 50.0-60.0 sec  1.75 MBytes  1.47 Mbits/sec
    [  5]  0.0-61.4 sec  10.5 MBytes  1.43 Mbits/sec

    with (PC) server results:

    ***@PC: iperf -s
    Server listening on TCP port 5001
    TCP window size: 208 KByte (default)
    ------------------------------------------------------------
    [  4] local 10.1.1.4 port 5001 connected with 10.1.1.250 port 38606
    ------------------------------------------------------------
    Client connecting to 10.1.1.250, TCP port 5001
    TCP window size: 208 KByte (default)
    ------------------------------------------------------------
    [  5] local 10.1.1.4 port 13830 connected with 10.1.1.250 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  5]  0.0-60.3 sec  20.8 MBytes  2.89 Mbits/sec
    [  4]  0.0-63.1 sec  10.5 MBytes  1.40 Mbits/sec

     

    Internet Speeds - NAS (2xCat6): ***

     

    ***@NAS: wget -O
    Testing download speed...
    Download: 3.87 Mbit/s
    Testing upload speed...
    Upload: 3.73 Mbit/s

     

    Internet Speeds - PC (wireless)

     

    ***@PC: testmy.net
    :::.. Internet Speed Test Result Details ..:::
    Download Connection Speed:: 14951 kbps or 15 Mbps
    Download Speed Test Size:: 54 MB or 55296 kB or 56623104 bytes
    Download Binary File Transfer Speed:: 1869 kB/s or 1.9 MB/s
    Upload Connection Speed:: 12794 kbps or 12.8 Mbps
    Upload Speed Test Size:: 6.3 MB or 6464 kB or 6619136 bytes
    Upload Binary File Transfer Speed:: 1599 kB/s or 1.6 MB/s
    Timed:: Download: 30.298 seconds | Upload: 4.139 seconds

     

     

    *** Appears there is an issue with the NAS / NIC / Cat 6 cables, or possibly the router's Ethernet ports (2x 1000 Mbps, although no errors/failures show in the router stats):

    ***@Router: Network Statistics
    Interface	Rx Bytes	Tx Bytes	Rx Packets	Tx Packets	Rx Errors	Tx Errors
    Port 1	121842790	352682623	180885	298305	0	0
    Port 2	121461967	342355930	180585	287286	0	0
    WiFi-2.4Ghz	440897786	843543447	543961	816870	0	0
    WiFi-5Ghz	50735	6453765	527	16359	0	0

     


  • StephenB
    Guru - Experienced User

    Retired_Member wrote:

     

    If anyone else can please post their drive speed test results to compare with, it would be appreciated.

     


    Not sure why you are using oflag=dsync, so I ran the command both ways.  This is on an RN526x with 4x6TB WD60EFRX drives.

     

    root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 28.2462 s, 9.5 MB/s
    root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test
    256+0 records in
    256+0 records out
    268435456 bytes (268 MB, 256 MiB) copied, 0.61322 s, 438 MB/s
    root@NAS:/data/Test# 

    Retired_Member wrote:

     

    Removed network bonding, and it appears to have sped up network transfers considerably (46 MB/s).

     

    You should be seeing ~100 MB/sec for large sequential file transfers over a gigabit network.  Perhaps re-run iperf and see what your raw network speeds are without bonding.

     

    What bonding mode were you using on the NAS (and what mode did you have enabled on the switch)?

     

    Also, what MTU are you using on the NAS?
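
    Both of those can be checked from an SSH session on the NAS (a sketch; bond0 is an assumed interface name, so list /proc/net/bonding/ to confirm it):

```shell
# Report the active bonding mode, if a bond interface exists
grep -i "bonding mode" /proc/net/bonding/bond0 2>/dev/null || echo "no bond0 present"

# Report the MTU of every network interface via sysfs
for dev in /sys/class/net/*; do
    printf '%s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```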

    • Retired_Member

      StephenB wrote:
      Not sure why you are using oflag=dsync, so I ran the command both ways.  This is on an RN526x with 4x6TB WD60EFRX drives.
      root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
      256+0 records in
      256+0 records out
      268435456 bytes (268 MB, 256 MiB) copied, 28.2462 s, 9.5 MB/s
      root@NAS:/data/Test# dd bs=1M count=256 if=/dev/zero of=test
      256+0 records in
      256+0 records out
      268435456 bytes (268 MB, 256 MiB) copied, 0.61322 s, 438 MB/s
      root@NAS:/data/Test# 

      I believe dsync (data-only sync, no metadata) writes sequentially and ensures the write cache isn't used, or is kept to a minimum (the buffer is flushed after every block), whereas direct bypasses the Linux page cache but writes concurrently. I'm not sure there is an option to both write sequentially and bypass the Linux page cache altogether, so I used dsync for sequential writing to get a more accurate write speed. Not that it really matters when comparing the same tests, but it does minimize variance in influence.

       

      (testing with some unavoidable hard-drive use):

       

      ***@NAS:~$ dd bs=1M count=256 if=/dev/zero of=test oflag=dsync
      256+0 records in
      256+0 records out
      268435456 bytes (268 MB, 256 MiB) copied, 22.7576 s, 11.8 MB/s
      ***@NAS:~$ dd bs=1M count=256 if=/dev/zero of=test oflag=direct
      256+0 records in
      256+0 records out
      268435456 bytes (268 MB, 256 MiB) copied, 4.33843 s, 61.9 MB/s
      ***@NAS:~$ dd bs=1M count=256 if=/dev/zero of=test
      256+0 records in
      256+0 records out
      268435456 bytes (268 MB, 256 MiB) copied, 0.849287 s, 316 MB/s

      Appears the hard-drive speeds are comparable - phew :)

       

       

       

      StephenB wrote:
      Retired_Member wrote:

       

      Removed network bonding, and it appears to have sped up network transfers considerably (46 MB/s).

       

      You should be seeing ~100 MB/sec for large sequential file transfers over a gigabit network.  Perhaps re-run iperf and see what your raw network speeds are without bonding.

       

      What bonding mode were you using on the NAS (and what mode did you have enabled on the switch)?

       

      Also, what MTU are you using on the NAS?

      Config: NAS MTU=1500, same as router. 

      CPE: NAS 2x 1 Gb <-> 2x Cat 6 <-> 2x 1 Gb router ports <-> Cat 6 <-> 1 Gb PC

       

      I was using round-robin bonding, but it appears OK now after swapping out the Ethernet cables. I'm getting around the 750-890 Mbps range, which is roughly 100 MB/sec.

       

       

      ***@PC: iperf -c 10.1.1.250 -i 5 -d
      ------------------------------------------------------------
      Server listening on TCP port 5001
      TCP window size: 208 KByte (default)
      ------------------------------------------------------------
      ------------------------------------------------------------
      Client connecting to 10.1.1.250, TCP port 5001
      TCP window size: 208 KByte (default)
      ------------------------------------------------------------
      [ 4] local 10.1.1.89 port 3937 connected with 10.1.1.250 port 5001
      [ 5] local 10.1.1.89 port 5001 connected with 10.1.1.250 port 33836
      [ ID] Interval Transfer Bandwidth
      [ 4] 0.0- 5.0 sec 447 MBytes 749 Mbits/sec
      [ 4] 5.0-10.0 sec 531 MBytes 891 Mbits/sec
      [ 4] 0.0-10.0 sec 978 MBytes 820 Mbits/sec
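
      As a quick sanity check on that conversion (line rate in Mbit/s divided by 8 gives MB/s):

```shell
# Convert the iperf readings above from Mbit/s to MB/s
for mbps in 749 891 820; do
    awk -v m="$mbps" 'BEGIN { printf "%d Mbit/s ~= %.1f MB/s\n", m, m / 8 }'
done
```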
      
      
      • StephenB
        Guru - Experienced User

        It looks like you've sorted it out.

         

        On the bonding - if you are using round-robin on the NAS, then you need a static LAG set up on the switch/router. I don't know of any home router that supports static LAGs. A couple of home routers (for instance, the Netgear R8500) support LACP bonding - if you have one of those, you could try that. Or you could try TLB or ALB, as those are the two modes that don't require router/switch support.
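
        On a generic Linux box, the ALB mode can be brought up with iproute2 along these lines (a sketch only; on a ReadyNAS, bonding is normally configured through the admin UI, and eth0/eth1 are assumed interface names):

```shell
# Create a bond in adaptive load balancing mode (needs no switch/router support)
ip link add bond0 type bond mode balance-alb miimon 100

# Enslave both NICs (interfaces must be down first), then bring the bond up
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
```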

         

        Though as I mentioned earlier, with link aggregation the router->client connection becomes the bottleneck - so you won't see any speed increase for a single connection using wireless or gigabit.  You would see a gain only if you have multiple clients accessing the NAS simultaneously.  So you could just leave well enough alone, and use a single ethernet connection on the NAS.
