Forum Discussion
HighOctane
Mar 30, 2025 · Aspirant
Remove Inactive Volumes to use the disk - RN316
Reading this forum, it sounds like this is not an uncommon situation... I recently had a hard drive fail on my RN316 - replaced it, knowing another was expected to fail soon, and about 2 hours afte...
StephenB
Mar 30, 2025 · Guru - Experienced User
Data Recovery is probably the safest path.
I suspect there might be a third RAID group given your mix of disks. Check the partitions on one of the 12 TB disks.
Did you do a btrfs device scan before you tried to mount md127?
Can you post mdstat.log?
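For reference, those checks might look something like this over SSH (sdd and the /mnt mount point are just examples; substitute whichever 12 TB member and path apply on your system):

# Partition layout on one of the 12 TB disks (sdd as an example)
fdisk -l /dev/sdd

# Register all btrfs member devices with the kernel before mounting
btrfs device scan

# Current assembly/sync state of every md array
cat /proc/mdstat

# If you retry the mount, do it read-only so nothing is written to a degraded volume
mount -o ro /dev/md127 /mnt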
HighOctane
Mar 30, 2025 · Aspirant
This is the breakdown from lsblk:
root@HighOctaneNAS:~# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sda2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sda3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdb
├─sdb1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdb2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sdb3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdc
├─sdc1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdc2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sdc3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdd
├─sdd1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdd2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
├─sdd3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
│ └─md127
└─sdd4 linux_raid_member 43f67c14:data-1 09dd9a16-97d5-ab01-52fa-5df224195452
└─md126 btrfs 43f67c14:data 2211f852-4973-412d-97ec-e340df756809
sde
├─sde1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sde2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sde3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdf
├─sdf1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
├─sdf2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
├─sdf3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
│ └─md127
└─sdf4 linux_raid_member 43f67c14:data-1 09dd9a16-97d5-ab01-52fa-5df224195452
└─md126 btrfs 43f67c14:data 2211f852-4973-412d-97ec-e340df756809
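(lsblk -f doesn't include sizes; if those would help, something like this, with the same device names as above, should show them:)

# Per-partition sizes; confirms the extra data-1 partition on the 12 TB disks
lsblk -o NAME,SIZE,FSTYPE,LABEL /dev/sdd /dev/sdf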
- Sandshark · Mar 30, 2025 · Sensei
What were the drive sizes and positions before and after the first and second swap? Did you re-boot after the first re-sync?
The issue may be that the OS was actually still syncing a second RAID group when you thought it was done. I have often seen a notice that re-sync is complete when only the first group is.
And something that may be confusing the issue is that when you swap out a drive, it can cause the drives not to be in the order you think (e.g. sda may not be in bay 1) until you re-boot.
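A rough way to check both points before pulling the next disk (array names per the lsblk output above; this is only a sketch) would be:

# Any "resync" or "recovery" line here means a group is still rebuilding,
# even if the UI has already reported the sync complete
cat /proc/mdstat

# Per-array state, plus which sdX partitions are currently members of each group
mdadm --detail /dev/md126
mdadm --detail /dev/md127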
- StephenB · Mar 30, 2025 · Guru - Experienced User
I am also wondering if you reversed md126 and md127 (maybe sdX3 should have been assembled as md126, and sdX4 as md127).
My own systems have md127 as the final RAID group.
md126 : active raid5 sda3[0] sdb3[6] sdf3[4] sde3[3] sdd3[2] sdc3[7]
      14627084160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md127 : active raid5 sda4[0] sdc4[2] sdb4[1]
      9767255808 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

- HighOctane · Mar 30, 2025 · Aspirant
StephenB -
This could well be part of the issue. I had seen md127 as the most common name and md126 as the expansion array, but I'm definitely ready to accept that I have that backwards if your experience shows it the other way.
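If it helps settle it, the md superblock records which group each partition belongs to, so something like this (partition names as in my lsblk output above) should show which array is which:

# The superblock "Name" field shows the group (e.g. 43f67c14:data-1) each partition claims
mdadm --examine /dev/sda3 | grep -i name
mdadm --examine /dev/sdd4 | grep -i name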
- HighOctane · Mar 30, 2025 · Aspirant
Sandshark - Drive 4 went from 10 TB to 12 TB; this was the first drive replaced. Drive 1 had been giving errors and warning that it was going to fail, but when I grabbed a drive and went to replace drive 1, drive 4 had actually fallen over, so I swapped that out and bought another drive to replace drive 1 as well.
Drive 4 had finished resyncing about 2 hours before drive 1 actually fell over. I only know this from the logs, though; the re-sync finished overnight, and the next failure occurred before I saw the NAS in the morning...