
Forum Discussion

Basso6054
Aspirant
Jan 26, 2020
Solved

RN104 Capacity does not increase when swapping out to larger disks

I had 4 disks of varying capacities running RAID 5 (it appears to have defaulted to that). I have replaced a failed drive with a 4TB drive and thought that whilst I'm at it I would swap out the 1.5TB for another 4TB.

The other disks are 3TB and 2TB. 

I swapped the disks out systematically, allowing each to synchronise, and the whole thing reports as healthy, so no problem there. However, I have seen no increase in the total volume of the array. I have tried rebooting the system and am in the process of doing a backup so that I can do a factory reset, but I would prefer to avoid that if at all possible.

Any suggestions? Is this a known problem?

 


  • Basso6054 wrote:

    Do I need to let it complete its "Resync Data" process before I log in and start the setup wizard and configuration restoration? Judging by the current progress it looks like it will take about 20 to 30 hours. 

    Again if there's any risk I can wait. 

    Thanks. 


    It's building the RAID groups and computing the parity blocks for each of them now - that requires either reading or writing every block on every disk.  With your particular combination of disk sizes, there are three different groups that need to be built (4x2TB, 3x1TB, 2x1TB).  The completion time is hard to estimate - usually the NAS is reporting the percentage completion of the group it is currently working on, and not the whole set.

     

    You can start the setup wizard, then reinstall any apps, and finally restore the configuration before the resync completes.  While you could also start restoring the data, it usually works out faster if you wait for the resync to finish (doing both at the same time causes a lot of disk thrashing).
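 

    If SSH is enabled on the NAS (the ReadyNAS UI warns this may affect support), the progress of each group can also be read straight from /proc/mdstat. A minimal sketch, purely illustrative:

        # List every md RAID group on the NAS. A group that is still being
        # built shows an extra "resync = N%" line with an estimated finish
        # time; groups that are already in sync just report their member
        # state, e.g. [UUUU].
        cat /proc/mdstat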

13 Replies

  • StephenB
    Guru - Experienced User

    It looks like you are running flexraid - is that the case?  (If you are running XRAID there will be a green stripe across the XRAID control on the volume page).

     

    If you are running flexraid, then you need to expand manually - creating more RAID groups, and concatenating them to the volume.

     

    Can you download the log zip file and post mdstat.log here (copy/paste it into a reply)?
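 

    As a very rough sketch of what that expansion amounts to under the hood (ReadyNAS OS 6 layers BTRFS on top of mdadm RAID groups) - assuming hypothetical partition names sda6/sdb6 for the unused space on the two 4TB disks and the default /data mount point, and not something I'd suggest running by hand:

        # Build a new RAID group from the spare partitions on the two 4TB
        # disks (hypothetical device names):
        mdadm --create /dev/md/data-3 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
        # Concatenate the new group onto the existing BTRFS data volume
        # (assumes the volume is mounted at /data):
        btrfs device add /dev/md/data-3 /data

    In practice it is usually safer to back up, factory reset and restore than to hand-edit the RAID layout.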

    • Basso6054
      Aspirant

      Thanks for replying so quickly.

      I am running flexraid; I can't use XRAID because I "have expanded volumes".

      I'm not sure how to expand the volumes manually. I have had a look and cannot amend the RAID volumes as noted in some previous posts.

      After a couple of goes I managed to attach the log file (I didn't realise I had to unzip it and then find the specific file). Again, thanks for any help you can give me.

       

      Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md125 : active raid5 sda3[0] sdd3[4] sdc3[5] sdb3[6]
      2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

      md126 : active raid1 sdc5[2] sda5[1]
      488244928 blocks super 1.2 [2/2] [UU]

      md127 : active raid5 sdc4[3] sda4[2] sdb4[4]
      976493824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

      md1 : active raid10 sda2[0] sdb2[3] sdc2[2] sdd2[1]
      1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

      md0 : active raid1 sda1[0] sdc1[3] sdd1[4] sdb1[5]
      4190208 blocks super 1.2 [4/4] [UUUU]

      unused devices: <none>
      /dev/md/0:
      Version : 1.2
      Creation Time : Sat Apr 22 14:58:34 2017
      Raid Level : raid1
      Array Size : 4190208 (4.00 GiB 4.29 GB)
      Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Sun Jan 26 19:10:55 2020
      State : clean
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0

      Consistency Policy : unknown

      Name : 2fe63934:0 (local to host 2fe63934)
      UUID : f89c2816:d4b7589d:a7cdacd4:7de29871
      Events : 10761

      Number Major Minor RaidDevice State
      0 8 1 0 active sync /dev/sda1
      5 8 17 1 active sync /dev/sdb1
      4 8 49 2 active sync /dev/sdd1
      3 8 33 3 active sync /dev/sdc1
      /dev/md/1:
      Version : 1.2
      Creation Time : Fri Jan 24 17:26:57 2020
      Raid Level : raid10
      Array Size : 1044480 (1020.00 MiB 1069.55 MB)
      Used Dev Size : 522240 (510.00 MiB 534.77 MB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Sun Jan 26 13:29:07 2020
      State : clean
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0

      Layout : near=2
      Chunk Size : 512K

      Consistency Policy : unknown

      Name : 2fe63934:1 (local to host 2fe63934)
      UUID : 74e9786a:53faadcc:f15aa224:56a355d7
      Events : 19

      Number Major Minor RaidDevice State
      0 8 2 0 active sync set-A /dev/sda2
      1 8 50 1 active sync set-B /dev/sdd2
      2 8 34 2 active sync set-A /dev/sdc2
      3 8 18 3 active sync set-B /dev/sdb2
      /dev/md/data-0:
      Version : 1.2
      Creation Time : Sat Apr 22 14:58:35 2017
      Raid Level : raid5
      Array Size : 2915732352 (2780.66 GiB 2985.71 GB)
      Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Sun Jan 26 18:37:30 2020
      State : clean
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 64K

      Consistency Policy : unknown

      Name : 2fe63934:data-0 (local to host 2fe63934)
      UUID : 6d5262bb:0e644935:003dcf8e:f771831c
      Events : 1628

      Number Major Minor RaidDevice State
      0 8 3 0 active sync /dev/sda3
      6 8 19 1 active sync /dev/sdb3
      5 8 35 2 active sync /dev/sdc3
      4 8 51 3 active sync /dev/sdd3
      /dev/md/data-1:
      Version : 1.2
      Creation Time : Sat Apr 22 15:02:30 2017
      Raid Level : raid5
      Array Size : 976493824 (931.26 GiB 999.93 GB)
      Used Dev Size : 488246912 (465.63 GiB 499.96 GB)
      Raid Devices : 3
      Total Devices : 3
      Persistence : Superblock is persistent

      Update Time : Sun Jan 26 18:37:30 2020
      State : clean
      Active Devices : 3
      Working Devices : 3
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 64K

      Consistency Policy : unknown

      Name : 2fe63934:data-1 (local to host 2fe63934)
      UUID : 43b7a210:f13eb953:e3200ae2:cc2cac06
      Events : 846

      Number Major Minor RaidDevice State
      3 8 36 0 active sync /dev/sdc4
      4 8 20 1 active sync /dev/sdb4
      2 8 4 2 active sync /dev/sda4
      /dev/md/data-2:
      Version : 1.2
      Creation Time : Sun Apr 23 12:47:02 2017
      Raid Level : raid1
      Array Size : 488244928 (465.63 GiB 499.96 GB)
      Used Dev Size : 488244928 (465.63 GiB 499.96 GB)
      Raid Devices : 2
      Total Devices : 2
      Persistence : Superblock is persistent

      Update Time : Sun Jan 26 18:37:30 2020
      State : clean
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0

      Consistency Policy : unknown

      Name : 2fe63934:data-2 (local to host 2fe63934)
      UUID : 4c6afb5b:0c768580:c07829fc:2101116c
      Events : 282

      Number Major Minor RaidDevice State
      2 8 37 0 active sync /dev/sdc5
      1 8 5 1 active sync /dev/sda5

       
