Forum Discussion
Yoshi900
May 05, 2020 · Aspirant
ReadyNAS not using full capacity
Hi, it seems like my ReadyNAS 314 will not expand to the new size. I started with 6TB - 2TB - 6TB - 2TB, which gave me ~9TB of space. Then I wanted to upgrade the 2TB disks to 4TB disks. I did thi...
Yoshi900
May 06, 2020 · Aspirant
OK, looks like I didn't know about this limitation.
I have taken out a 6TB drive and used a couple of other drives I had lying around (old 2TB drives) to back up all the data.
I will then do a factory reset and copy everything back.
I need to do this sooner rather than later, as otherwise I will soon have too much data to back up.
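(The output below looks like the standard Linux md reporting, so it was presumably gathered over SSH with something along these lines:)

cat /proc/mdstat                  # summary of all md arrays
mdadm --detail /dev/md/data-0     # per-array detail; repeat for /dev/md/0, /dev/md/1, data-1, data-2, data-3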
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md124 : active raid1 sda5[0] sdd5[2]
1953371840 blocks super 1.2 [2/2] [UU]
md125 : active raid5 sdb4[6] sda4[7] sdd4[5] sdc4[4]
2929868352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md126 : active raid5 sda3[4] sdd3[6] sdc3[5] sdb3[7]
2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md127 : active raid1 sdb5[0] sdc5[1]
3906876864 blocks super 1.2 [2/2] [UU]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1[4] sdd1[6] sdc1[5] sdb1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Tue Sep 16 06:37:21 2014
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue May 5 14:02:21 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 7c6ee9a2:0 (local to host 7c6ee9a2)
UUID : ea4dc9bb:b9641fec:7678b6d9:3510a15e
Events : 4993
Number Major Minor RaidDevice State
4 8 1 0 active sync /dev/sda1
7 8 17 1 active sync /dev/sdb1
5 8 33 2 active sync /dev/sdc1
6 8 49 3 active sync /dev/sdd1
/dev/md/1:
Version : 1.2
Creation Time : Mon May 4 20:23:18 2020
Raid Level : raid10
Array Size : 1044480 (1020.00 MiB 1069.55 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue May 5 03:30:36 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 7c6ee9a2:1 (local to host 7c6ee9a2)
UUID : 828b88db:80c85fc5:2148475e:2be97175
Events : 19
Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 18 1 active sync set-B /dev/sdb2
2 8 34 2 active sync set-A /dev/sdc2
3 8 50 3 active sync set-B /dev/sdd2
/dev/md/data-0:
Version : 1.2
Creation Time : Tue Sep 16 06:37:21 2014
Raid Level : raid5
Array Size : 2915732352 (2780.66 GiB 2985.71 GB)
Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue May 5 13:56:12 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 7c6ee9a2:data-0 (local to host 7c6ee9a2)
UUID : c986ca70:d8d44a62:afac6818:4191584b
Events : 22588
Number Major Minor RaidDevice State
4 8 3 0 active sync /dev/sda3
7 8 19 1 active sync /dev/sdb3
5 8 35 2 active sync /dev/sdc3
6 8 51 3 active sync /dev/sdd3
/dev/md/data-1:
Version : 1.2
Creation Time : Tue Sep 16 16:50:37 2014
Raid Level : raid5
Array Size : 2929868352 (2794.14 GiB 3000.19 GB)
Used Dev Size : 976622784 (931.38 GiB 1000.06 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue May 5 13:56:12 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 7c6ee9a2:data-1 (local to host 7c6ee9a2)
UUID : 0b86abbf:17ce2701:6a9dacf8:4258271e
Events : 21681
Number Major Minor RaidDevice State
6 8 20 0 active sync /dev/sdb4
4 8 36 1 active sync /dev/sdc4
5 8 52 2 active sync /dev/sdd4
7 8 4 3 active sync /dev/sda4
/dev/md/data-2:
Version : 1.2
Creation Time : Tue Nov 7 05:15:19 2017
Raid Level : raid1
Array Size : 3906876864 (3725.89 GiB 4000.64 GB)
Used Dev Size : 3906876864 (3725.89 GiB 4000.64 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue May 5 13:56:12 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 7c6ee9a2:data-2 (local to host 7c6ee9a2)
UUID : 6b4b7a26:ecf55b8b:602146d5:15b86742
Events : 948
Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
/dev/md/data-3:
Version : 1.2
Creation Time : Fri Jan 10 04:50:20 2020
Raid Level : raid1
Array Size : 1953371840 (1862.88 GiB 2000.25 GB)
Used Dev Size : 1953371840 (1862.88 GiB 2000.25 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue May 5 13:56:12 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 7c6ee9a2:data-3 (local to host 7c6ee9a2)
UUID : 5dec4438:e75b6dae:40c357e0:ec4754d2
Events : 738
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
2 8 53 1 active sync /dev/sdd5

Here is the log from rn-expand:
-- Reboot --
May 05 11:52:04 GNAS rn-expand[3516]: Trying auto-expand (in-place)
May 05 11:52:04 GNAS rn-expand[3516]: Considering inplace auto-expansion for data
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sda is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sda is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sda is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdb is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Trying auto-extend (grow onto additional disks)
May 05 11:52:04 GNAS rn-expand[3516]: auto_extend: Checking disk sda...
May 05 11:52:04 GNAS rn-expand[3516]: auto_extend: Checking disk sdb...
May 05 11:52:04 GNAS rn-expand[3516]: auto_extend: Checking disk sdc...
May 05 11:52:04 GNAS rn-expand[3516]: auto_extend: Checking disk sdd...
May 05 11:52:04 GNAS rn-expand[3516]: Trying xraid-expand (tiered expansion)
May 05 11:52:04 GNAS rn-expand[3516]: Considering X-RAID auto-expansion for data
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sda is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdb is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdc is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdd is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: No enough disks for data-0 to expand [need 2, have 0]
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sda is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdb is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdc is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdd is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: No enough disks for data-1 to expand [need 2, have 0]
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sda is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdd is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: No enough disks for data-3 to expand [need 2, have 0]
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdb is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: Checking if RAID disk sdc is expandable...
May 05 11:52:04 GNAS rn-expand[3516]: No enough disks for data-2 to expand [need 2, have 0]
May 05 11:52:04 GNAS rn-expand[3516]: 0 disks expandable in data
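(Side note, my own arithmetic rather than anything from the logs: the four data arrays in the /proc/mdstat output above already add up to roughly 12 TB of raw RAID space, short of what a clean 6-4-6-4 X-RAID layout would normally give. A quick sketch:)

# Sketch: sum the data-array sizes (KiB blocks) reported in /proc/mdstat above
echo "2915732352 2929868352 3906876864 1953371840" |
  awk '{ for (i = 1; i <= NF; i++) kib += $i
         printf "%.2f TB (%.2f TiB)\n", kib * 1024 / 1e12, kib / (1024*1024*1024) }'
# -> roughly 11.99 TB (10.90 TiB)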
Even though I am doing the workaround (factory reset), there might be something in the logs that someone else can try if they are having the same issue.
- StephenB · May 07, 2020 · Guru - Experienced User
It did something I've never seen before:
/dev/md/data-0:
Creation Time : Tue Sep 16 06:37:21 2014
Raid Level : raid5
Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
/dev/md/data-1:
Creation Time : Tue Sep 16 16:50:37 2014
Raid Level : raid5
Used Dev Size : 976622784 (931.38 GiB 1000.06 GB)
/dev/md/data-2:
Creation Time : Tue Nov 7 05:15:19 2017
Raid Level : raid1
Used Dev Size : 3906876864 (3725.89 GiB 4000.64 GB)
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
/dev/md/data-3:
Creation Time : Fri Jan 10 04:50:20 2020
Raid Level : raid1
Used Dev Size : 1953371840 (1862.88 GiB 2000.25 GB)
0 8 5 0 active sync /dev/sda5
2 8 53 1 active sync /dev/sdd5

Normally XRAID just won't expand when you add an intermediate-size disk. But in your case, it apparently did create a JBOD RAID group using the first 2 TB of the 4 TB drive, and then converted that to RAID-1 when you installed the second 4 TB drive. That explains the expansion you saw on the first upgrade, and also why the size didn't follow the normal capacity rule.
This apparently happened some months ago, so the info in rn_expand.log isn't relevant.
A factory reset is the best way to get the full volume space. You'll end up with two RAID groups - 4x4TB RAID-5 + 2x2TB RAID-1.
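(As a rough illustration of the normal capacity rule mentioned above, and purely my own back-of-the-envelope sketch assuming the usual X-RAID tiering, the original 6-2-6-2 layout works out to the ~9TB the OP reported once the TB/TiB difference is taken into account:)

# Sketch, assuming standard X-RAID tiering for 6-2-6-2 TB disks:
#   tier 1: RAID-5 across 4 x 2TB slices            -> (4-1) x 2 = 6 TB
#   tier 2: RAID-1 across the 2 x 4TB remainders
#           of the two 6TB disks                    -> 4 TB
echo $(( (4-1)*2 + 4 ))                                               # -> 10 (TB, decimal)
awk 'BEGIN { printf "%.2f TiB\n", 10e12 / (1024*1024*1024*1024) }'    # -> ~9.09 TiB, roughly the "~9tb" reported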
- Sandshark · May 08, 2020 · Sensei
OK, so that final configuration is what I said was actually possible, but not implemented in XRAID. So, maybe Netgear has added some logic in a recent OS update to implement it. But using the extra 2TB of the first 4TB as JBOD was a really bad thing if you were assuming you had RAID redundancy, since that 2TB did not have redundancy until you added the second 4TB. At this point, it is anyone's guess what the next step would be if you were to further expand. Unless you need the additional space right away, you can hold off on the factory default. But I recommend that you figure out a good time to do it down the road, both to gain the full capacity and to align the volume with a more typical configuration.
- Yoshi900 · May 09, 2020 · Aspirant
I have already done the factory reset; I was just able to copy all the data onto the 6TB drive (which I pulled out of the NAS) plus some extra space I had on my server. If I had left this any longer I wouldn't have had the space to do the backup and factory reset.
I have since copied everything back and done a resync (auto task).
Now I have added the 6TB back and it is syncing again; the current config is 4-6-6-4TB.
I have 30 hours to go before the resync completes; then I will hopefully have 12TB, and I will report back here.
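(For anyone following the same route: assuming SSH access to the NAS is enabled, the resync progress can be watched with the same standard md tools whose output is quoted above. A minimal sketch:)

# Watch the md rebuild/resync progress, refreshing every 60 seconds
watch -n 60 cat /proc/mdstat
# Or query one array directly for its state and rebuild percentage
mdadm --detail /dev/md/data-0 | grep -i -E 'state|rebuild'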