Forum Discussion
Basso6054
Jan 26, 2020 · Aspirant
RN104 Capacity does not increase when swapping out to larger disks
I had 4 disks of varying capacities running RAID5 (it appears to have defaulted to that). I have changed out a failed drive for a 4TB and thought, whilst I'm at it, I will swap out the 1.5TB to anothe...
- Jan 27, 2020
Basso6054 wrote:
Do I need to let it complete its "Resync Data" process before I log in and start the setup wizard and configuration restoration? Judging by the current progress it looks like it will take about 20 to 30 hours.
Again if there's any risk I can wait.
Thanks.
It's building the RAID groups and computing the parity blocks for each of them now - that requires either reading or writing every block on every disk. With your particular combination of disk sizes, there are three different groups that need to be built (4x2TB, 3x1TB, 2x1TB). The completion time is hard to estimate - usually the NAS is reporting the percentage completion of the group it is currently working on, and not the whole set.
You can start the setup wizard, then reinstall any apps, and finally restore the configuration before the resync completes. While you could also start restoring the data, it usually works out faster if you wait for the resync to finish (doing both at the same time causes a lot of disk thrashing).
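If you're comfortable with SSH, you can also watch the progress directly from the command line (the md device name below is just an example; use whichever group shows up in your own mdstat):
cat /proc/mdstat            # shows which group is currently resyncing, with a rough percentage and ETA
mdadm --detail /dev/md125   # per-group detail, including rebuild/resync progress for that one group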
StephenB
Jan 26, 2020 · Guru - Experienced User
It looks like you are running flexraid - is that the case? (If you are running XRAID there will be a green stripe across the XRAID control on the volume page).
If you are running flexraid, then you need to expand manually - creating more RAID groups, and concatenating them to the volume.
Can you download the log zip file, and post mdstat.log here (copy/paste it into a reply).
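(If you have SSH enabled, a quick check from the command line works too, assuming the default data volume mounted at /data:
cat /proc/mdstat              # lists the existing RAID groups
btrfs filesystem show /data   # shows which md devices are concatenated into the volume
But the log zip is fine, and safer if you are not used to the shell.)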
- Basso6054 · Jan 26, 2020 · Aspirant
Thanks for replying so quickly.
I am running flexraid; I can't use XRAID because I "have expanded volumes".
I'm not sure how to expand the volumes manually. I have had a look and cannot amend the RAID volumes as noted in some previous posts.
After a couple of goes I managed to attach the log file (I didn't realise I had to unzip it and then find the specific file). Again, thanks for any help you can give me.
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid5 sda3[0] sdd3[4] sdc3[5] sdb3[6]
2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md126 : active raid1 sdc5[2] sda5[1]
488244928 blocks super 1.2 [2/2] [UU]
md127 : active raid5 sdc4[3] sda4[2] sdb4[4]
976493824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid10 sda2[0] sdb2[3] sdc2[2] sdd2[1]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1[0] sdc1[3] sdd1[4] sdb1[5]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Sat Apr 22 14:58:34 2017
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 26 19:10:55 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe63934:0 (local to host 2fe63934)
UUID : f89c2816:d4b7589d:a7cdacd4:7de29871
Events : 10761
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
3 8 33 3 active sync /dev/sdc1
/dev/md/1:
Version : 1.2
Creation Time : Fri Jan 24 17:26:57 2020
Raid Level : raid10
Array Size : 1044480 (1020.00 MiB 1069.55 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 26 13:29:07 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 2fe63934:1 (local to host 2fe63934)
UUID : 74e9786a:53faadcc:f15aa224:56a355d7
Events : 19
Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 50 1 active sync set-B /dev/sdd2
2 8 34 2 active sync set-A /dev/sdc2
3 8 18 3 active sync set-B /dev/sdb2
/dev/md/data-0:
Version : 1.2
Creation Time : Sat Apr 22 14:58:35 2017
Raid Level : raid5
Array Size : 2915732352 (2780.66 GiB 2985.71 GB)
Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 26 18:37:30 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 2fe63934:data-0 (local to host 2fe63934)
UUID : 6d5262bb:0e644935:003dcf8e:f771831c
Events : 1628
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
6 8 19 1 active sync /dev/sdb3
5 8 35 2 active sync /dev/sdc3
4 8 51 3 active sync /dev/sdd3
/dev/md/data-1:
Version : 1.2
Creation Time : Sat Apr 22 15:02:30 2017
Raid Level : raid5
Array Size : 976493824 (931.26 GiB 999.93 GB)
Used Dev Size : 488246912 (465.63 GiB 499.96 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Jan 26 18:37:30 2020
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 2fe63934:data-1 (local to host 2fe63934)
UUID : 43b7a210:f13eb953:e3200ae2:cc2cac06
Events : 846
Number Major Minor RaidDevice State
3 8 36 0 active sync /dev/sdc4
4 8 20 1 active sync /dev/sdb4
2 8 4 2 active sync /dev/sda4
/dev/md/data-2:
Version : 1.2
Creation Time : Sun Apr 23 12:47:02 2017
Raid Level : raid1
Array Size : 488244928 (465.63 GiB 499.96 GB)
Used Dev Size : 488244928 (465.63 GiB 499.96 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jan 26 18:37:30 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe63934:data-2 (local to host 2fe63934)
UUID : 4c6afb5b:0c768580:c07829fc:2101116c
Events : 282
Number Major Minor RaidDevice State
2 8 37 0 active sync /dev/sdc5
1 8 5 1 active sync /dev/sda5
- Sandshark · Jan 26, 2020 · Sensei - Experienced User
Was the first drive you removed 1TB? It looks like you must have started with at least one 1TB at some point.
You currently have one layer with 4 x 1TB, one with 3 x 0.5TB, and one with 2 x 0.5TB. That sounds like what you'd get from 1TB + 1.5TB + 2TB + 3TB. You'd also have 1TB of the 3TB unused at that point.
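To put rough numbers on that, reading the mdstat.log above (sizes rounded):
data-0: 4 x ~1TB RAID5 -> ~3TB usable
data-1: 3 x ~0.5TB RAID5 -> ~1TB usable
data-2: 2 x ~0.5TB RAID1 -> ~0.5TB usable
That's the ~4.5TB you have now. The extra space on the 3TB and the two 4TB drives isn't part of any RAID group yet, which is why the volume hasn't grown.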
When you swapped in the first 4TB, you had an opportunity to create another 1TB group from that unused portion of the 3TB and 1TB of the 4TB. You'd still have 1TB of the 4TB unused. But you didn't do that, and that may be the rub. Now that you have a second 4TB in, you need to create a 3 x 1TB layer from the 3TB and the two 4TB drives, then another 1TB group from the two 4TBs. I'm not sure the GUI is smart enough to let you do that. One of the complicating factors is that going to a fifth partition on some drives mandates using logical partitions, since there can only be four primary ones.
Can you also paste in the contents of partitions.log?
Are you comfortable using SSH and the command line? If so, there are commands you can use from there to do the expansion. But if the partitions are not already created, I'm not sure how to accomplish that and stay consistent with the way the OS normally does it.
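For reference, once the partitions do exist, the expansion itself is basically two commands (the device names here are placeholders, not something to run as-is, and this assumes the data volume is mounted at /data):
mdadm --create /dev/md/data-3 --level=5 --raid-devices=3 /dev/sdX6 /dev/sdY6 /dev/sdZ6   # build the new RAID group from the spare partitions (placeholder names)
btrfs device add /dev/md/data-3 /data                                                    # concatenate the new group onto the existing data volume
The tricky part is creating those partitions the same way the OS would, which is why I'd wait for the partitions.log before trying anything.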
If you have or can create a backup and can accept the down time, a factory default will clean things up a lot and let you go back to XRAID. That is what I recommend.