Forum Discussion
TheStillMan
Jan 06, 2022 · Aspirant
2 volume groups and no Xraid
I have an RN316 (firmware 6.9.6) set up with a RAID-6 XRAID volume. It had 4 x 12TB drives and 2 x 4TB drives. I swapped out one of the 4TB drives for another 12TB and let it resync. Now, on the volume p...
StephenB
Jan 06, 2022 · Guru - Experienced User
Can you post a screenshot of what you are seeing?
Is your data accessible? Also, how large is the data volume?
Is the remaining 4 TB drive in slot 1?
TheStillMan
Jan 06, 2022 · Aspirant
Yes, data is accessible. The size in the screenshot is the same as it was before adding the bigger drive - I wasn't sure if it would expand, since there's still that last 4TB drive left, which is in bay 6.
- StephenB · Jan 07, 2022 · Guru - Experienced User
There should be two RAID groups - normally I'd expect the first one to be 6x4TB RAID-6, and the second to be 5x8TB RAID-6. But it doesn't seem to have automatically expanded.
Can you download the log zip file and post mdstat.log?
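If you have SSH enabled on the NAS, you can also read the same details directly; something like the following should do it (the /dev/md/Data-0 and /dev/md/Data-1 names assume the usual ReadyNAS OS6 layout):
cat /proc/mdstat
mdadm --detail /dev/md/Data-0
mdadm --detail /dev/md/Data-1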
- TheStillMan · Jan 07, 2022 · Aspirant
I can't attach the log, but here is a copy/paste of it.
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid6 sda3[11] sdf3[6] sde3[7] sdd3[8] sdc3[9] sdb3[10]
15608675328 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md127 : active raid6 sdb4[0] sde4[3] sdd4[2] sdc4[1]
15623471360 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
1569792 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sdf1[6] sda1[10] sdb1[9] sdc1[8] sdd1[7] sde1[5]
4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Fri Jun 15 09:34:07 2018
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Jan 7 06:47:54 2022
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Name : 2fe67fe8:0 (local to host 2fe67fe8)
UUID : f5237630:c64bb4be:4fed641f:b3365079
Events : 1057
Number Major Minor RaidDevice State
6 8 81 0 active sync /dev/sdf1
5 8 65 1 active sync /dev/sde1
7 8 49 2 active sync /dev/sdd1
8 8 33 3 active sync /dev/sdc1
9 8 17 4 active sync /dev/sdb1
10 8 1 5 active sync /dev/sda1
/dev/md/1:
Version : 1.2
Creation Time : Wed Jan 5 13:22:16 2022
Raid Level : raid10
Array Size : 1569792 (1533.00 MiB 1607.47 MB)
Used Dev Size : 523264 (511.00 MiB 535.82 MB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Wed Jan 5 23:51:58 2022
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : 2fe67fe8:1 (local to host 2fe67fe8)
UUID : 6243d5a4:9492aa54:4687fe9f:751334fc
Events : 19
Number Major Minor RaidDevice State
0 8 18 0 active sync set-A /dev/sdb2
1 8 34 1 active sync set-B /dev/sdc2
2 8 50 2 active sync set-A /dev/sdd2
3 8 66 3 active sync set-B /dev/sde2
4 8 82 4 active sync set-A /dev/sdf2
5 8 2 5 active sync set-B /dev/sda2
/dev/md/Data-0:
Version : 1.2
Creation Time : Fri Jun 15 09:46:36 2018
Raid Level : raid6
Array Size : 15608675328 (14885.59 GiB 15983.28 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Jan 7 06:48:35 2022
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 2fe67fe8:Data-0 (local to host 2fe67fe8)
UUID : 84dfd3a6:cd9b72d9:45a515b8:1310b289
Events : 44955
Number Major Minor RaidDevice State
11 8 3 0 active sync /dev/sda3
10 8 19 1 active sync /dev/sdb3
9 8 35 2 active sync /dev/sdc3
8 8 51 3 active sync /dev/sdd3
7 8 67 4 active sync /dev/sde3
6 8 83 5 active sync /dev/sdf3
/dev/md/Data-1:
Version : 1.2
Creation Time : Fri Jan 22 09:02:00 2021
Raid Level : raid6
Array Size : 15623471360 (14899.70 GiB 15998.43 GB)
Used Dev Size : 7811735680 (7449.85 GiB 7999.22 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Jan 7 06:48:35 2022
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 2fe67fe8:Data-1 (local to host 2fe67fe8)
UUID : deb6a4dd:c5b46702:1226c73e:928b7ffb
Events : 441
Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
2 8 52 2 active sync /dev/sdd4
3 8 68 3 active sync /dev/sde4
- Sandshark · Jan 07, 2022 · Sensei
Multiple RAID groups are how the NAS handles drives of different sizes. You definitely don't want to remove either of them (which you'd find very hard to do anyway), as they both contain your data.
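To see how that squares with the numbers in your mdstat.log: Data-0 is RAID-6 across six ~3721 GiB partitions, so (6 - 2) x 3721 GiB is roughly 14,886 GiB usable, and Data-1 is RAID-6 across four ~7450 GiB partitions, so (4 - 2) x 7450 GiB is roughly 14,900 GiB. Together that's about 29,786 GiB, which should match the unchanged size you're seeing - presumably because the new drive's extra space hasn't been added to either group yet.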
Did you turn off XRAID in an attempt to expand the volume manually? If so, you've run into an unfortunate (and, I believe, unnecessary, perhaps even buggy) restriction on switching back to XRAID. Even though your volume was created by XRAID, the NAS doesn't bother to check whether it still meets the XRAID criteria so that a switch back is possible (or perhaps it incorrectly determines that it doesn't). It didn't always behave that way, though I don't know when the restriction was introduced. In any case, now that you're in FlexRAID mode, you're attempting a type of expansion (an incremental drive-size increase) that FlexRAID doesn't support. There is an SSH command that will likely do it, but let's see that mdstat.log first to be sure.
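Very roughly, the sequence would likely look something like this - don't run anything yet, since it assumes the new 12TB drive got a fourth ~8TB partition (shown here as /dev/sda4, which is exactly what lsblk.log needs to confirm) and that the data volume is mounted at the usual /data:
mdadm /dev/md127 --add /dev/sda4
mdadm --grow /dev/md127 --raid-devices=5
btrfs filesystem resize max /data
The btrfs resize only makes sense once the md reshape has finished.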
- Sandshark · Jan 07, 2022 · Sensei
Looks like you were posting that while I was replying. Can you also post the contents of lsblk.log, so we can verify that the new drive was partitioned properly and confirm which device it is? If it was, there is a simple command via SSH that should expand the volume.
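If SSH is already set up, running something like this on the NAS shows the same layout as lsblk.log, in case that's easier than pulling the whole log zip:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT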