
Forum Discussion

phistrom
Jun 28, 2017
Solved

ReadyNAS 4312X Initial Sync Incredibly Slow

To create a RAID 10 array of 12x 6TB drives, the web interface is giving me an estimate of 178 hours. After waiting a day it dropped to 154 hours, so I guess it's pretty accurate.

 

Is there anything I can do to speed up this initial sync? How can it possibly take this long? Do I really have to wait an entire week before I can start using this NAS? Something seems very, very wrong.

 

I am running 6.7.5 and did a fresh factory reset with this firmware. The drives are new Seagate IronWolf Pro 6TB and are on the compatible devices list.

  • I created a RAID 10 array and it completed syncing in almost precisely 2 days (48 hours, 21 minutes, and 16 seconds).

     

    The way I did it was by following the advice here. I'll paraphrase it below in case that page ever goes down.

     

    1. Create a RAID 10 array using the ReadyNAS GUI as usual.
    2. Stare, mouth agape, at how long that volume will take to finish its initial sync.
    3. Enable SSH under the settings tab so you can get shell access to your NAS.
    4. Log in with the user name "root" and the same password you gave to the "admin" user of your NAS.
    5. Set the "read ahead" of your array to 32 mebibytes with the following command:
      blockdev --setra 65536 /dev/md127
      (it's 65,536 times 512 bytes which comes out to 32 mebibytes)
    6. Disable NCQ on all your drives. I'd recommend you make a script for this as follows:
      #!/bin/bash
      for drive in sd{a..l}
      do
          echo 1 > /sys/block/$drive/device/queue_depth
      done
      The {a..l} part assumes you have 12 drives; change the l to the letter of your last disk.
      According to some sources, this doesn't actually fully disable NCQ; that would require a kernel parameter, and I have no idea how to set kernel parameters on ReadyNAS OS 6. The kernel parameter is:
      libata.force=noncq
      So maybe at some point the Netgear devs can implement it. The rationale is that mdadm can apparently manage the queuing better than the hard drives can, so you want NCQ disabled on all the disks.

    One of the recommendations from that link (Tip #5) is to create a bitmap. I believe that only speeds up resyncs and rebuilds, not the initial sync, so I would just ignore it.
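For what it's worth, steps 5 and 6 can be rolled into one script that discovers the member disks from sysfs instead of hard-coding sd{a..l}. This is just a sketch, assuming the array device is /dev/md127 as it was on mine; the entries under slaves/ are partitions like sda3, so the trailing digits get stripped to reach the whole-disk queue_depth:

```shell
#!/bin/bash
# Sketch combining steps 5 and 6; assumes the array device is /dev/md127.
MD=md127

# 65536 sectors * 512 bytes = 32 MiB read-ahead on the array device.
[ -b /dev/$MD ] && blockdev --setra 65536 /dev/$MD

# Member entries under slaves/ are partitions (e.g. sda3); strip the
# trailing digits to get the whole-disk name before setting queue_depth.
for part in /sys/block/$MD/slaves/*; do
    [ -e "$part" ] || continue
    disk=$(basename "$part" | sed 's/[0-9]*$//')
    echo 1 > /sys/block/$disk/device/queue_depth
done
```

The guards mean the script is a no-op on a machine where /dev/md127 doesn't exist, so it's safe to test before running it for real.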

9 Replies

  • StephenB
    Guru - Experienced User

    You are asking it to do 72 TB worth of disk I/O. The sync speed estimate works out to 115 MB/s.
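That figure is easy to sanity-check from the numbers in the original post (a back-of-envelope calculation, assuming decimal units as the drive makers count them):

```shell
# 12 drives x 6 TB = 72 TB of raw disk I/O for the mirror sync.
total_bytes=$((12 * 6 * 1000**4))      # 72 TB, decimal units
est_seconds=$((178 * 3600))            # the GUI's 178-hour estimate
rate_mb_s=$((total_bytes / est_seconds / 1000000))
echo "$rate_mb_s MB/s"                 # ~112 MB/s, in line with the 115 MB/s figure
```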

    • phistrom
      Star

      I spent a lot of time playing around with SSH on this NAS (since I've got a lot of downtime with it anyway) and discovered it is using mdadm under the hood. From what I found online, that initial sync is just how mdadm works when you create an array like this. I guess I'm just disappointed that it isn't a little more parallel. 115 MB/sec is what I'd expect from 1 disk, not 12.
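If anyone else wants to poke at the sync over SSH, these are the stock Linux md interfaces (standard kernel paths, not anything ReadyNAS-specific; raising the speed floor is only a guess at a speed-up, so note the old values first):

```shell
# Standard Linux md interfaces; harmless if md isn't loaded on this machine.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat                           # resync progress and current speed
    cat /proc/sys/dev/raid/speed_limit_min     # floor the kernel tries to sustain, in KB/s
    cat /proc/sys/dev/raid/speed_limit_max     # ceiling, in KB/s
fi
# Raising the floor can help if the disks/CPU have headroom, e.g. to ask
# for at least ~200 MB/s (the value is in KB/s):
# echo 200000 > /proc/sys/dev/raid/speed_limit_min
```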

      As a side note, I may be remembering wrong, but I thought the estimate was about 14 hours when it was in X-RAID mode after the factory reset. Do you happen to know what it is using under the hood that makes the X-RAID (RAID 6) implementation go so much faster? Is it still mdadm, and initial sync for RAID 6 is just faster than for RAID 10?

      • StephenB
        Guru - Experienced User

        I'd expect RAID-6 sync time to be longer than RAID-10, so I don't know why yours is running so slowly.

         

        But I don't think 12x6TB RAID-6 resync would actually have completed in 14 hours either.

         


    • Sandshark
      Sensei - Experienced User

      WoW!  Thanks for sharing.  I wonder if that would have helped with my 10-day conversion of a 3TB x 6 RAID5 to a 3TB x 7 RAID6.  Too late now, it's done.

      • phistrom
        Star


        WoW!  Thanks for sharing.


        You're very welcome.

         

        It's entirely possible that you could still see some performance benefits from making the changes I list above. Just be careful to write down the current settings first, in case the changes have a negative effect on performance. For instance, you can safely run:

        • blockdev --getra /dev/md127 to see the current read-ahead value for your array.
        • cat /sys/block/sdX/device/queue_depth to see the current queue_depth for one of your drives (substitute X with a, b, c, etc.).
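Building on those two commands, here's a sketch of a script that records everything in one go before you change anything (it assumes 12 disks sda..sdl and /dev/md127, so adjust both for your box; the output file name is just my choice):

```shell
#!/bin/bash
# Record the current tuning values so they can be restored later if the
# changes hurt performance. Assumes /dev/md127 and disks sda..sdl.
OUT="$HOME/tuning-before.txt"
{
    [ -b /dev/md127 ] && echo "readahead: $(blockdev --getra /dev/md127)"
    for drive in sd{a..l}; do
        qd=/sys/block/$drive/device/queue_depth
        [ -r "$qd" ] && echo "$drive queue_depth: $(cat "$qd")"
    done
} > "$OUT"
cat "$OUT"
```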

        If you take a look at the article linked in my previous post, Tip #3 mentions setting stripe_cache_size for RAID 5 or RAID 6 arrays. I skipped it because I was dealing with a RAID 10 array here. It looks like you need to be careful with the value: it improves performance by using more RAM, and setting it too high can cause out-of-memory conditions.
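For reference, the memory cost of stripe_cache_size is easy to work out from the standard md formula: entries × page size × number of member disks. The 8192 below is just an example value to show the arithmetic, not a recommendation:

```shell
stripe_cache_size=8192   # example value, not a recommendation
disks=12
page=4096                # bytes; each stripe cache entry holds one page per disk
mem_mib=$(( stripe_cache_size * page * disks / 1024 / 1024 ))
echo "${mem_mib} MiB"    # 8192 entries across 12 disks -> 384 MiB
```

So an aggressive setting on a 12-disk array can eat hundreds of MiB, which is why picking the value carelessly can push a NAS with limited RAM into out-of-memory territory.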
