Forum Discussion
HighOctane
Mar 30, 2025 · Aspirant
Remove Inactive Volumes to use the disk - RN316
Reading this forum, it sounds like this is not an uncommon situation... I recently had a hard drive fail on my RN316 - replaced it, knowing another was expected to fail soon, and about 2 hours afte...
StephenB
Mar 30, 2025 · Guru - Experienced User
Data Recovery is probably the safest path.
I suspect there might be a third RAID group given your mix of disks. Check the partitions on one of the 12 TB disks.
Did you do a btrfs device scan before you tried to mount md127?
Can you post mdstat.log?
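Something along these lines would cover all three checks; this is only a sketch assuming root SSH access, with /dev/sdX standing in for whichever slot holds one of the 12 TB disks on your unit:
# Partition layout on one of the 12 TB disks
parted -s /dev/sdX print
# What array does each data partition think it belongs to?
mdadm --examine /dev/sdX3 /dev/sdX4
# Let btrfs register every member device before any mount attempt
btrfs device scan
# Capture the RAID status to post here
cat /proc/mdstat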
- HighOctane · Mar 30, 2025 · Aspirant
This is the breakdown from lsblk:
root@HighOctaneNAS:~# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sda2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sda3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdb
├─sdb1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdb2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sdb3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdc
├─sdc1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdc2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sdc3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdd
├─sdd1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdd2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
├─sdd3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
│ └─md127
└─sdd4 linux_raid_member 43f67c14:data-1 09dd9a16-97d5-ab01-52fa-5df224195452
└─md126 btrfs 43f67c14:data 2211f852-4973-412d-97ec-e340df756809
sde
├─sde1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sde2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sde3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdf
├─sdf1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
├─sdf2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
├─sdf3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
│ └─md127
└─sdf4 linux_raid_member 43f67c14:data-1 09dd9a16-97d5-ab01-52fa-5df224195452
└─md126 btrfs 43f67c14:data 2211f852-4973-412d-97ec-e340df756809
- Sandshark · Mar 30, 2025 · Sensei
What were the drive sizes and positions before and after the first and second swap? Did you re-boot after the first re-sync?
The issue may be that the OS was actually still syncing a second RAID group when you thought the resync was done. I have often seen the notice that the resync is complete when only the first RAID group has finished.
Something else that may be confusing matters: when you swap out a drive, the device names may not end up in the order you expect (e.g., sda may not be the drive in bay 1) until you reboot.
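A hedged sketch of how to confirm both points over SSH (smartctl may or may not be installed on your build; the /dev/disk/by-id listing works either way):
# Any "resync" or "recovery" line here means a RAID group is still rebuilding
cat /proc/mdstat
cat /sys/block/md*/md/sync_action
# Tie a kernel device name to a physical drive via its serial number
smartctl -i /dev/sda | grep -i serial
ls -l /dev/disk/by-id/ | grep -i ata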
- StephenB · Mar 30, 2025 · Guru - Experienced User
I am also wondering if you reversed md126 and md127 (maybe sdX3 should have been assembled as md126, and sdX4 as md127).
My own systems have md127 as the final RAID group.
md126 : active raid5 sda3[0] sdb3[6] sdf3[4] sde3[3] sdd3[2] sdc3[7]
14627084160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md127 : active raid5 sda4[0] sdc4[2] sdb4[1]
9767255808 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
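One way to check (just a sketch; sdd is taken from your lsblk output, since it carries both a partition 3 and a partition 4) is to compare what the assembled arrays report against what the on-disk superblocks claim:
# What each assembled array contains and the name recorded in its superblock
mdadm --detail /dev/md126
mdadm --detail /dev/md127
# What the superblocks on the two data partitions of one disk say they belong to
mdadm --examine /dev/sdd3 /dev/sdd4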
- HighOctane · Mar 30, 2025 · Aspirant
Sorry for not posting this one earlier, StephenB - here's mdstat.log:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid5 sdd4[0] sdf4[3]
11718576384 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]
md127 : active raid5 sdb3[0] sda3[6] sdf3[7] sde3[3] sdd3[2] sdc3[1]
19510827520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid10 sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sdb1[6] sdc1[5] sdd1[7] sda1[8] sde1[1]
4190208 blocks super 1.2 [5/5] [UUUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Sat Jan 3 12:35:26 2015
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Thu Mar 27 06:32:40 2025
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 43f67c14:0 (local to host 43f67c14)
UUID : e6e3fdb5:875ced54:ab63493a:b1f7bff3
Events : 30662
Number Major Minor RaidDevice State
6 8 17 0 active sync /dev/sdb1
1 8 65 1 active sync /dev/sde1
8 8 1 2 active sync /dev/sda1
7 8 49 3 active sync /dev/sdd1
5 8 33 4 active sync /dev/sdc1
/dev/md/1:
Version : 1.2
Creation Time : Mon Mar 24 18:05:45 2025
Raid Level : raid10
Array Size : 1566720 (1530.00 MiB 1604.32 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Mar 25 17:39:40 2025
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 43f67c14:1 (local to host 43f67c14)
UUID : ca988b5d:26138aec:9aff2560:000789c7
Events : 19
Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 18 1 active sync set-B /dev/sdb2
2 8 34 2 active sync set-A /dev/sdc2
3 8 50 3 active sync set-B /dev/sdd2
4 8 66 4 active sync set-A /dev/sde2
5 8 82 5 active sync set-B /dev/sdf2
/dev/md/127:
Version : 1.2
Creation Time : Sun Mar 23 17:37:41 2025
Raid Level : raid5
Array Size : 19510827520 (18606.98 GiB 19979.09 GB)
Used Dev Size : 3902165504 (3721.40 GiB 3995.82 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Mar 25 22:40:48 2025
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : unknown
Name : 43f67c14:127 (local to host 43f67c14)
UUID : b5e891a1:66389bd0:7f2532a1:f84f92db
Events : 375
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 8 51 2 active sync /dev/sdd3
3 8 67 3 active sync /dev/sde3
7 8 83 4 active sync /dev/sdf3
6 8 3 5 active sync /dev/sda3
/dev/md/data-1:
Version : 1.2
Creation Time : Sun Apr 9 05:46:36 2023
Raid Level : raid5
Array Size : 11718576384 (11175.71 GiB 11999.82 GB)
Used Dev Size : 5859288192 (5587.85 GiB 5999.91 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Mar 25 22:40:48 2025
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 43f67c14:data-1 (local to host 43f67c14)
UUID : 09dd9a16:97d5ab01:52fa5df2:24195452
Events : 19934
Number Major Minor RaidDevice State
0 8 52 0 active sync /dev/sdd4
3 8 84 1 active sync /dev/sdf4
- 0 0 2 removed
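Since data-1 is assembled but degraded, a cautious, non-destructive next step would be a read-only mount attempt so the data can be copied off. This is only a sketch (the mount point name is illustrative; ro and degraded are standard btrfs mount options), and if the mount fails, stop there and fall back to the data-recovery route StephenB mentioned rather than forcing anything:
btrfs device scan                        # make sure btrfs sees every member device
mkdir -p /mnt/recover                    # temporary mount point (name is illustrative)
mount -o ro,degraded /dev/md126 /mnt/recover
ls /mnt/recover                          # if the shares show up, copy the data off now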