
Forum Discussion

wdolson
Guide
Nov 07, 2023
Solved

Problem Adding a Drive to ReadyNAS RN628X

I've had a ReadyNAS RN628X for a few years. The first 4 drives were 12 TB and I expanded with 2 16TB drives. The size of the volume expanded as expected each time. It was around 58 TB with 6 drives. The 12 TB drives have 10.9 TB usable and the 16 TB drives have 14.6 TB. It's an X-Raid, so the first 16 TB drive is used for the stripe drive and isn't part of the volume.

 

Two days ago I added a new 16 TB drive and it took two days to rebuild the volume. It finished and the entire capacity only expanded by less than 4 TB instead of 14.6 TB. Does anyone know why this happened? And if so, is there anything that can be done about it (short of rebuilding the entire thing and losing all my data)?

 

I haven't done anything unusual, like installing another OS.  I'm running 6.10.8.

 

Thanks

  • OK, so this is the command you need to issue from SSH to change it to RAID5:

     

    mdadm --grow /dev/md127 --level=5 --raid-devices=7 --force --verbose

     

    That will cause it to re-sync to a RAID5 configuration.  For some reason, you may need to issue it again once the sync is complete for it to be properly recognized as RAID5, but it won't have to re-sync again.  If you add yet another drive, you'll have to do it again (with --raid-devices=8, obviously) and you'll likely need to do it for md126, too (with 4 devices), as it will convert to RAID6 when the 4th drive is added (at least that's what happens on my 12-bay NAS).
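
    For the md126 case mentioned above, the command would look something like this (a sketch; double-check the md number and device count against your own cat /proc/mdstat output first):

    mdadm --grow /dev/md126 --level=5 --raid-devices=4 --force --verbose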

     

    As with any sync, a power cycle can be deadly to your volume and things can always go wrong.  So ensure your backup is up to date and your NAS is on a UPS before you do this.
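
    To keep an eye on the conversion while it runs, cat /proc/mdstat should show the reshape progress, and mdadm --detail /dev/md127 reports the current RAID level (both are standard Linux mdadm tools, nothing ReadyNAS-specific).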

     

    If that doesn't work, as it didn't once for me, you will have to convert to FlexRAID and remove one drive (logically, not physically) from the volume and re-install it.

     

    For reference, see here: RAID6-to-RAID5-without-volume-re-creation-is-possible

     

12 Replies

  • Some more information.  When I added the new drive the messages were different from previous drive adds:

    Nov 07, 2023 11:12:45 AM
    Volume: Volume data is resynced.
    Nov 06, 2023 07:20:03 PM
    Volume: Volume data health changed from Degraded to Redundant.
    Nov 06, 2023 01:00:35 AM
    Volume: Volume data is Degraded.
    Nov 05, 2023 03:50:27 AM
    Volume: Volume data health changed from Redundant to Degraded.
    Nov 05, 2023 03:50:27 AM
    Volume: Resyncing started for Volume data.

     

    I understand that a degraded message happens when a drive has failed, but it's reporting that all drives are OK, and the Degraded message appears to have gone away now that the volume has been rebuilt.  It's just much smaller than expected.


  • wdolson wrote:

    I've had a ReadyNAS RN628X for a few years. The first 4 drives were 12 TB and I expanded with 2 16TB drives.

     

    Two days ago I added a new 16 TB drive and it took two days to rebuild the volume. It finished and the entire capacity only expanded by less than 4 TB instead of 14.6 TB.


    To make sure I understand this: You currently have 3x16TB + 4x12TB in the array, correct?  

     

    Once you go over 6 drives, XRAID starts using dual redundancy in a desktop ReadyNAS.

     

    I suspect that what happened is that the 7x12TB RAID group was converted to RAID-6 (dual redundancy), and the 3x4 TB RAID group that fills the remaining space on the 16 TB drives was converted from RAID-1 to RAID-5.  That is consistent with the volume growth of only 3.6 TiB.
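
    Working the numbers (roughly): before the expansion the volume was a 6-drive RAID-5 group on the 12 TB partitions (5 x 10.9 TiB, about 54.6 TiB) plus a 2-drive RAID-1 group on the extra ~3.6 TiB slice of each 16 TB drive, about 58 TiB in total.  After the expansion the first group is a 7-drive RAID-6 (still 5 x 10.9, about 54.6 TiB) and the second is a 3-drive RAID-5 (2 x 3.6, about 7.3 TiB), about 62 TiB in total, so the volume only grew by roughly 3.6 TiB.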

     

    You can confirm that by downloading the full log zip and looking at mdstat.log.  If you need help interpreting it, then just post it in a reply.
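
    (If SSH is enabled on the NAS, cat /proc/mdstat from an SSH session should show the same RAID-group layout as mdstat.log.)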

     

    The simplest thing to do is add a fourth 16 TB drive.  You'll end up with 8x12TB RAID-6 + 4x4TB RAID-6 - total volume size of 80 TB (~72.7 TiB).
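
    (Working that out: 6 x 10.9 TiB from the 12 TB partitions plus 2 x 3.6 TiB from the 16 TB remainders comes to roughly 72.7 TiB.)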

     

    It likely is possible to modify the RAID configuration using ssh (after switching to flexraid), but it is risky, and could result in data loss.  Sandshark could provide more information on this.

     

     


    wdolson wrote:

    The 12 TB drives have 10.9 TB usable and the 16 TB drives have 14.6 TB. It's an X-Raid, so the first 16 TB drive is used for the stripe drive and isn't part of the volume.

     


    Two slight errors here. 

    • X-RAID doesn't dedicate one drive to parity.  Parity blocks are evenly distributed across all the drives.  That improves write performance, and also evens out the drive workloads.
    • The difference between 12 TB and 10.9 (and 16 vs 14.6) has nothing to do with "usable".  It's simply a units difference.  Manufacturers use TB (1000*1000*1000*1000 bytes) units.  The ReadyNAS (like Windows) uses TiB (1024*1024*1024*1024 bytes), but unfortunately still labels it as TB.  12 TB is the same as 10.9 TiB; 16 TB is the same as 14.55 TiB (see the quick check below).
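
    Quick check of that conversion: 12 TB = 12 x 1000^4 bytes, and dividing by 1024^4 gives about 10.91 TiB; the same arithmetic turns 16 TB into about 14.55 TiB.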

     

     

    • Sandshark
      Sensei

      Actually, it turns out it isn't normally necessary to disable XRAID to use SSH to convert from RAID6 to RAID5.  I don't know why I had to the one time I did it, but I've done it since and didn't need to.  Every time I upgrade a drive in my XRAID/RAID5 12-bay NAS, the RAID group that expands is made into RAID6 instead of RAID5, so I have to switch it back.  This seems to be a bug in the OS, making the second RAID group RAID6 once more than 4 drives are in it even though the first group is RAID5.

       

      If you post the results of the cat /proc/mdstat command, I can give you the specific command necessary to do it.

       

       

      • wdolson
        Guide

        I thought I posted a reply, but it doesn't look like it went through.

         

        I did once know about the RAID 5 parity vs stripe drive thing, but had a brain fart.  And I was imprecise with my language about the size.  I was pretty tired yesterday and I don't work with this stuff every day...

         

        The issue with X-RAID converting the array to RAID 6 explains everything.  I don't think we need the double protection of RAID 6, so converting it back to RAID 5 is probably a better solution.

         

        Here is the result of the cat /proc/mdstat command:

         

        Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md126 : active raid5 sde4[0] sdg4[2] sdf4[1]
        7813728128 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

         

        md127 : active raid6 sda3[0] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
        58570168000 blocks super 1.2 level 6, 64k chunk, algorithm 2 [7/7] [UUUUUUU]

         

        md1 : active raid10 sda2[0] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
        1827840 blocks super 1.2 512K chunks 2 near-copies [7/7] [UUUUUUU]

         

        md0 : active raid1 sda1[0] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
        4190208 blocks super 1.2 [7/7] [UUUUUUU]

         

        I assume the RAID 10 and RAID 1 are something the OS uses?  They look small.

    • wdolson
      Guide

      I'm getting rusty.  I did at one time know that RAID 5 doesn't use a stripe drive, but I did remember that effectively the volume size is the total of all the drives minus one because of the parity data.  I also forgot about 1 K meaning either 1024 or 1000 depending on the context.  They did cover this in my Electronic Engineering courses, but that was 35 years ago...

       

      Anyway, the conversion to RAID 6 is something I didn't know about, and that explains things.  I was concerned something was going wobbly.

       

      The info on how to switch it back to RAID 5 is great.  I don't think we need the double redundancy of RAID 6.  The result of the SSH command is:

       

      Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md126 : active raid5 sde4[0] sdg4[2] sdf4[1]
      7813728128 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

       

      md127 : active raid6 sda3[0] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
      58570168000 blocks super 1.2 level 6, 64k chunk, algorithm 2 [7/7] [UUUUUUU]

       

      md1 : active raid10 sda2[0] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
      1827840 blocks super 1.2 512K chunks 2 near-copies [7/7] [UUUUUUU]

       

      md0 : active raid1 sda1[0] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
      4190208 blocks super 1.2 [7/7] [UUUUUUU]

       

      I was surprised there was a RAID 1 and a RAID 10 too.  Those being very small, I assume the OS uses them internally.
