Re: RN104 Capacity does not increase when swapping out to larger disks
I had four disks of varying capacities running RAID 5 (it appears to have defaulted to that). I replaced a failed drive with a 4TB disk and thought that, while I was at it, I would swap out the 1.5TB for another 4TB as well.
The other disks are 3TB and 2TB.
I swapped the drives out one at a time, allowing each to synchronise, and the whole thing reports as healthy, so no problem there. However, I have seen no increase in the array's total volume. I have tried rebooting the system, and I am in the process of taking a backup so that I can do a factory reset, but I would prefer to avoid that if at all possible.
Any suggestions? Is this a known problem?
Solved! Go to Solution.
Accepted Solutions
@Basso6054 wrote:
Do I need to let it complete its "Resync Data" process before I log in and start the setup wizard and configuration restoration? Judging by the current progress it looks like it will take about 20 to 30 hours.
Again if there's any risk I can wait.
Thanks.
It's building the RAID groups and computing the parity blocks for each of them now - that requires either reading or writing every block on every disk. With your particular combination of disk sizes, there are three different groups that need to be built (4x2TB, 3x1TB, 2x1TB). The completion time is hard to estimate - usually the NAS is reporting the percentage completion of the group it is currently working on, and not the whole set.
You can start the setup wizard, then reinstall any apps, and finally restore the configuration before the resync completes. While you could also start restoring the data, it usually works out faster if you wait for the resync to finish (doing both at the same time causes a lot of disk thrashing).
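The layered RAID-group scheme described above can be checked with a little arithmetic. The sketch below (a hypothetical helper, not ReadyNAS code) carves tiers the way the answer describes: each tier spans every disk that still has free space, sized to the smallest remaining share, using RAID-5 when three or more disks participate and RAID-1 for two.

```python
def layered_groups(disk_tb):
    """Sketch of X-RAID-style layering: repeatedly carve a tier sized to
    the smallest remaining space across all disks that still have space.
    RAID-5 for 3+ disks (n-1 disks' worth usable), RAID-1 for 2 disks."""
    remaining = sorted(disk_tb)
    groups = []
    while True:
        remaining = [r for r in remaining if r > 0]
        n = len(remaining)
        if n < 2:
            break
        tier = min(remaining)
        usable = (n - 1) * tier if n >= 3 else tier
        groups.append((n, tier, usable))
        remaining = [r - tier for r in remaining]
    return groups

# Final disk set in this thread: 2TB + 3TB + 4TB + 4TB
for n, tier, usable in layered_groups([2, 3, 4, 4]):
    print(f"{n} x {tier}TB -> {usable}TB usable")
```

Run against the final 2TB/3TB/4TB/4TB set, this yields exactly the three groups named above (4x2TB, 3x1TB, 2x1TB), for 9TB of usable space in total.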
All Replies
It looks like you are running FlexRAID - is that the case? (If you are running XRAID there will be a green stripe across the XRAID control on the volume page.)
If you are running FlexRAID, then you need to expand manually - creating more RAID groups and concatenating them onto the volume.
Could you download the log zip file and post mdstat.log here (copy/paste it into a reply)?
Thanks for replying so quickly.
I am running FlexRAID; I can't use XRAID because I "have expanded volumes".
I'm not sure how to expand volumes manually. I have had a look and cannot amend the RAID volumes as noted in some previous posts.
After a couple of goes I managed to attach the log file (I didn't realise I had to unzip it and then find the specific file). Again, thanks for any help you can give me.
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid5 sda3[0] sdd3[4] sdc3[5] sdb3[6]
2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md126 : active raid1 sdc5[2] sda5[1]
488244928 blocks super 1.2 [2/2] [UU]
md127 : active raid5 sdc4[3] sda4[2] sdb4[4]
976493824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid10 sda2[0] sdb2[3] sdc2[2] sdd2[1]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1[0] sdc1[3] sdd1[4] sdb1[5]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Sat Apr 22 14:58:34 2017
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 26 19:10:55 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe63934:0 (local to host 2fe63934)
UUID : f89c2816:d4b7589d:a7cdacd4:7de29871
Events : 10761
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
3 8 33 3 active sync /dev/sdc1
/dev/md/1:
Version : 1.2
Creation Time : Fri Jan 24 17:26:57 2020
Raid Level : raid10
Array Size : 1044480 (1020.00 MiB 1069.55 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 26 13:29:07 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 2fe63934:1 (local to host 2fe63934)
UUID : 74e9786a:53faadcc:f15aa224:56a355d7
Events : 19
Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 50 1 active sync set-B /dev/sdd2
2 8 34 2 active sync set-A /dev/sdc2
3 8 18 3 active sync set-B /dev/sdb2
/dev/md/data-0:
Version : 1.2
Creation Time : Sat Apr 22 14:58:35 2017
Raid Level : raid5
Array Size : 2915732352 (2780.66 GiB 2985.71 GB)
Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 26 18:37:30 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 2fe63934:data-0 (local to host 2fe63934)
UUID : 6d5262bb:0e644935:003dcf8e:f771831c
Events : 1628
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
6 8 19 1 active sync /dev/sdb3
5 8 35 2 active sync /dev/sdc3
4 8 51 3 active sync /dev/sdd3
/dev/md/data-1:
Version : 1.2
Creation Time : Sat Apr 22 15:02:30 2017
Raid Level : raid5
Array Size : 976493824 (931.26 GiB 999.93 GB)
Used Dev Size : 488246912 (465.63 GiB 499.96 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Jan 26 18:37:30 2020
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 2fe63934:data-1 (local to host 2fe63934)
UUID : 43b7a210:f13eb953:e3200ae2:cc2cac06
Events : 846
Number Major Minor RaidDevice State
3 8 36 0 active sync /dev/sdc4
4 8 20 1 active sync /dev/sdb4
2 8 4 2 active sync /dev/sda4
/dev/md/data-2:
Version : 1.2
Creation Time : Sun Apr 23 12:47:02 2017
Raid Level : raid1
Array Size : 488244928 (465.63 GiB 499.96 GB)
Used Dev Size : 488244928 (465.63 GiB 499.96 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Jan 26 18:37:30 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe63934:data-2 (local to host 2fe63934)
UUID : 4c6afb5b:0c768580:c07829fc:2101116c
Events : 282
Number Major Minor RaidDevice State
2 8 37 0 active sync /dev/sdc5
1 8 5 1 active sync /dev/sda5
Was the first drive you removed 1TB? It looks like you must have started with at least one 1TB at some point.
You currently have one layer with 4x1TB, one with 3x0.5TB and one with 2x0.5TB. That sounds like what you'd get from 1TB + 1.5TB + 2TB + 3TB. You'd also have 1TB of the 3TB drive unused at that point.
When you swapped in the first 4TB, you had an opportunity to create another 1TB group from that unused portion of the 3TB and 1TB of the 4TB. You'd still have 1TB of the 4TB unused. But you didn't do that, and that may be the rub. Now that you have a second 4TB in, you need to create a 3x1TB layer from the 3TB and 4TB drives, then another 1TB layer from the two 4TBs. I'm not sure the GUI is smart enough to let you do that. One of the complicating factors is that going to a fifth partition on some drives mandates using logical partitions, since there can only be four primary ones.
Can you also paste in the contents of partitions.log?
Are you comfortable using SSH and the command line? If so, there are commands you can use from there to do the expansion. But if the partitions are not already created, I'm not sure how to accomplish that and stay consistent with the way the OS normally does it.
If you have or can create a backup and can accept the down time, a factory default will clean things up a lot and let you go back to XRAID. That is what I recommend.
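The layer arithmetic in the analysis above can be verified with a quick sketch (the helper names are mine, not ReadyNAS terminology; the tier sizes are the per-disk figures from the post):

```python
# Per-disk tier allocation for the original 1TB + 1.5TB + 2TB + 3TB set,
# as described above: a 4x1TB RAID-5 layer, a 3x0.5TB RAID-5 layer,
# and a 2x0.5TB RAID-1 layer.
def raid5(n, tier_tb):
    return (n - 1) * tier_tb  # usable space: n-1 disks' worth

def raid1(tier_tb):
    return tier_tb            # mirrored pair: one disk's worth

volume = raid5(4, 1.0) + raid5(3, 0.5) + raid1(0.5)
print(volume)  # 4.5 (TB) - the volume size before the swaps

# The three layers consume 1 + 0.5 + 0.5 = 2TB per disk,
# so the 3TB drive has 1TB sitting unused, as noted above.
unused_on_3tb = 3.0 - (1.0 + 0.5 + 0.5)
print(unused_on_3tb)  # 1.0
```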
I'm thinking I'll do a backup and factory default. The downtime doesn't bother me. As it appears that others have had this issue, I will keep the thread going.
See log below:
major minor #blocks name
31 0 1536 mtdblock0
31 1 512 mtdblock1
31 2 6144 mtdblock2
31 3 4096 mtdblock3
31 4 118784 mtdblock4
8 0 1953514584 sda
8 1 4194304 sda1
8 2 524288 sda2
8 3 972041912 sda3
8 4 488378024 sda4
8 5 488376000 sda5
8 16 3907018584 sdb
8 17 4194304 sdb1
8 18 524288 sdb2
8 19 972041912 sdb3
8 20 488378024 sdb4
8 32 3907018584 sdc
8 33 4194304 sdc1
8 34 524288 sdc2
8 35 972041912 sdc3
8 36 488378024 sdc4
8 37 488376000 sdc5
8 48 2930266584 sdd
8 49 4194304 sdd1
8 50 524288 sdd2
8 51 972041912 sdd3
8 64 976762584 sde
8 65 976760001 sde1
9 0 4190208 md0
9 1 1044480 md1
9 127 976493824 md127
9 126 488244928 md126
9 125 2915732352 md125
8 80 1465138584 sdf
8 81 1465137152 sdf1
Thanks for your time
@Sandshark wrote:
You currently have one layer with 4x1TB, one with 3x.5TB and one with 2 x .5TB. That sounds like what you'd get from 1TB + 1.5TB +2TB + 3TB. You'd also have 1TB of the 3TB unused at that point.
Right. Likely at that point you ( @Basso6054 ) switched to flexraid, which probably was a mistake.
@Sandshark wrote:
I'm not sure the GUI is smart enough to let you do that.
I'm using XRAID myself, so it's not something I've tried. But I think you should be able to expand the existing RAID groups and make one more from the GUI. Whether you can make two more depends on the logical partition issue that @Sandshark brought up.
If you want to go forward with flexraid, then you'd add the second 4 TB drive. Then after resync you'd
- expand the second RAID group from 3x.5TB RAID-5 to 4x.5TB RAID-5.
- expand the third RAID group from 2x.5TB RAID-1 to 4x.5TB RAID-5.
That would increase your volume from 4.5 TB to 6 TB (5.46 TiB).
I'd then create a 2x2TB RAID-1 RAID group with the two 4TB drives, and add that to the existing volume. That would bring it up to 8 TB (7.28 TiB). That wastes 1 TB of space on the 3 TB drive, but avoids the risk that you can't create two more RAID groups. Then later on you can upgrade the two remaining drives to 4 TB, which would give you a 12 TB volume (using all the space on all the disks).
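The capacity figures in this plan can be checked step by step with a quick arithmetic sketch (the helper names are mine, not ReadyNAS terminology):

```python
def raid5(n, tier_tb):
    return (n - 1) * tier_tb  # usable space: n-1 disks' worth

def raid1(tier_tb):
    return tier_tb            # mirrored pair: one disk's worth

TIB = 10**12 / 2**40          # TB -> TiB conversion factor

# Today: 4x1TB RAID-5 + 3x0.5TB RAID-5 + 2x0.5TB RAID-1
now = raid5(4, 1.0) + raid5(3, 0.5) + raid1(0.5)           # 4.5 TB

# After expanding the second and third groups to 4x0.5TB RAID-5
expanded = raid5(4, 1.0) + raid5(4, 0.5) + raid5(4, 0.5)   # 6.0 TB
print(round(expanded * TIB, 2))                            # 5.46 (TiB)

# Plus a 2x2TB RAID-1 group on the two 4TB drives
with_mirror = expanded + raid1(2.0)                        # 8.0 TB
print(round(with_mirror * TIB, 2))                         # 7.28 (TiB)

# Later, with all four drives at 4TB, that mirror can grow
# to a 4x2TB RAID-5 group, using all the space on all disks
eventual = expanded + raid5(4, 2.0)                        # 12.0 TB
```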
@Sandshark wrote:
If you have or can create a backup and can accept the down time, a factory default will clean things up a lot and let you go back to XRAID. That is what I recommend.
Yes, I agree. You'd end up with simpler disk structure (removing two unnecessary partitions/RAID groups), and you wouldn't be wasting any space.
Though if you have no backup now, I strongly recommend making one. RAID isn't enough to keep your data safe - many folks here have learned that the hard way.
Okay, so I've taken a data and configuration backup and am running a restore to factory default. Will I need to move all the data back on, or should it recover that itself?
Will my configuration backup return the array to FlexRAID, or allow me to use XRAID?
I don't mind starting from scratch; I would rather patiently build this from the ground up if necessary than try to short-cut anything.
The NAS display is currently showing "Resync data 35.65%" and progressing slowly (which is fine). I can log in using the original factory password, but I have not run the setup wizard as I do not want to overwrite any recovery process it is working on.
Once again thanks for your help.
If you have any apps installed, make sure you re-install them before restoring the configuration backup. I don't think the configuration restoration will change you to FlexRAID, but if it does, you should be able to switch back to XRAID since you now have a simpler volume.
Then, yes, the last thing you will need to do is restore all the data.
Do I need to let it complete its "Resync Data" process before I log in and start the setup wizard and configuration restoration? Judging by the current progress it looks like it will take about 20 to 30 hours.
Again if there's any risk I can wait.
Thanks.
You can do it concurrently, but it'll just slow things down. It's best to wait.
Okay, so I have restored the RN104 and gained full capacity, from 4TB to over 8TB.
Some lessons learnt:
- Don't forget to back up your configuration file.
- Keep a full backup (obviously).
- Keep a note of the exact share names; it helps with any paths you may have established in mapped drives on your computers.
- Have patience and let each step take its course; it took a few days to let the drives restore, then reload the configuration file and all the backed-up data.
- Cloud accounts need to be re-established; not hard, just something to be aware of.
- Any automatic backup jobs to USB-connected drives that were set up in your old configuration will likely have their USB drive references remapped; you will need to delete and recreate them.
- Check any automatic computer backups you run through the computer application.
Finally thanks to StephenB for his advice, it was much appreciated.
To the lessons above, if appropriate, add that you need to re-install any apps before you restore the configuration file, so you don't end up with something in the configuration that points to a phantom app.