Forum Discussion
HighOctane
Mar 30, 2025, Aspirant
Remove Inactive Volumes to use the disk - RN316
Reading this forum, it sounds like this is not an uncommon situation... I recently had a hard drive fail on my RN316 - replaced it, knowing another was expected to fail soon, and about 2 hours afte...
HighOctane
Mar 30, 2025, Aspirant
This is the breakdown from lsblk..
root@HighOctaneNAS:~# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sda2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sda3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdb
├─sdb1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdb2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sdb3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdc
├─sdc1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdc2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sdc3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdd
├─sdd1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sdd2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
├─sdd3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
│ └─md127
└─sdd4 linux_raid_member 43f67c14:data-1 09dd9a16-97d5-ab01-52fa-5df224195452
└─md126 btrfs 43f67c14:data 2211f852-4973-412d-97ec-e340df756809
sde
├─sde1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
│ └─md0 btrfs 43f67c14:root 91b6ea7f-9705-4e67-8714-8cc5a9ad584c /
├─sde2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
└─sde3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
└─md127
sdf
├─sdf1 linux_raid_member 43f67c14:0 e6e3fdb5-875c-ed54-ab63-493ab1f7bff3
├─sdf2 linux_raid_member 43f67c14:1 ca988b5d-2613-8aec-9aff-2560000789c7
│ └─md1 swap swap 3f098acf-1ece-46e4-920a-31e7f71ab658 [SWAP]
├─sdf3 linux_raid_member 43f67c14:127 b5e891a1-6638-9bd0-7f25-32a1f84f92db
│ └─md127
└─sdf4 linux_raid_member 43f67c14:data-1 09dd9a16-97d5-ab01-52fa-5df224195452
└─md126 btrfs 43f67c14:data 2211f852-4973-412d-97ec-e340df756809
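For reference, the assembled state of each of these RAID groups can also be queried directly with mdadm. This is a general suggestion rather than a command from the thread; device names follow the lsblk output above:
mdadm --detail /dev/md127    # the 6-drive sdX3 group, which shows no filesystem above
mdadm --detail /dev/md126    # the sdd4/sdf4 expansion group labelled 43f67c14:data
cat /proc/mdstat             # sync/recovery status of every array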
Sandshark
Mar 30, 2025, Sensei
What were the drive sizes and positions before and after the first and second swap? Did you re-boot after the first re-sync?
The issue may be that the OS was actually still syncing a second RAID group when you thought it was done. I have often seen the notice that re-sync is complete appear when only the first group has finished.
And something that may be confusing the issue is that when you swap out a drive, it can cause the drives not to be in the order you think (e.g. sda may not be in bay 1) until you re-boot.
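One way to confirm that every RAID group has finished syncing, not just the first, is to check each array's sync_action in sysfs. A minimal sketch, assuming shell access to the NAS:
cat /proc/mdstat                          # any resync/reshape in progress shows a progress line here
for f in /sys/block/md*/md/sync_action; do
    printf '%s: %s\n' "$f" "$(cat "$f")"  # "idle" on every array means nothing is still syncing
done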
- StephenB, Mar 30, 2025, Guru - Experienced User
I am also wondering if you reversed md126 and md127 (maybe sdX3 should have been assembled as md126, and sdX4 as md127).
My own systems have md127 as the final RAID group.
md126 : active raid5 sda3[0] sdb3[6] sdf3[4] sde3[3] sdd3[2] sdc3[7]
      14627084160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md127 : active raid5 sda4[0] sdc4[2] sdb4[1]
      9767255808 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
- HighOctane, Mar 30, 2025, Aspirant
StephenB -
This could well be part of the issue. I had seen md127 as the most common name and md126 as the expansion array - but I'm definitely ready to accept that I have that backwards if your experience shows it the other way.
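One way to check which group a given partition was created in is mdadm --examine: the superblock's Name field (e.g. 43f67c14:data-1) and Array UUID identify the parent array. A minimal sketch with illustrative device names:
mdadm --examine /dev/sda3 | grep -E 'Array UUID|Name|Raid Devices'
mdadm --examine /dev/sdd4 | grep -E 'Array UUID|Name|Raid Devices'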
- Sandshark, Mar 30, 2025, Sensei
But, as I said in my original message, the NAS can (and usually does) report that RAID sync is complete when only one of them is. Based on md126 showing a [3/2] status, it may still have been in the process of expanding when you pulled another drive. By replacing the 4TB drive in bay 1, it would have also wanted to sync a third time, but it never got there. Did you see that two re-syncs were completed?
But something is still off in your sequence: you say drive 4 went to 12TB, yet your screen grab shows it as 10TB. Also, please start from the beginning and list the drive swaps you have made, starting from what I suspect were six 4TB drives. That will help decipher how you got to where you are.
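If an expansion or re-sync is still running, /proc/mdstat shows a recovery/reshape progress line under the affected array, which makes it easy to tell when the second (and any third) sync really finishes. A hedged example, assuming watch is available on the NAS:
watch -n 60 cat /proc/mdstat    # refresh every 60 seconds; look for "recovery" or "reshape" lines under each mdX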
- HighOctane, Mar 31, 2025, Aspirant
StephenB wrote:
I am also wondering if you reversed md126 and md127 (maybe sdX3 should have been assembled as md126, and sdX4 as md127).
I stopped md126 and md127, and arranged them the other way - md126 with 6 drives (sdX3) and md127 with 2 (sdX4) - then rebooted.
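Roughly, that stop/re-assemble step corresponds to commands along the following lines. This is a sketch rather than the exact commands used; the device lists follow the earlier lsblk output, and a degraded array may also need --run to start:
mdadm --stop /dev/md126
mdadm --stop /dev/md127
mdadm --assemble /dev/md126 /dev/sd[a-f]3                # the 6-drive sdX3 group
mdadm --assemble --run /dev/md127 /dev/sdd4 /dev/sdf4    # the 2-of-3 sdX4 group, degraded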
On reboot, the ReadyNAS OS has reversed it so that we again have...
root@HighOctaneNAS:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid5 sdd4[0] sdf4[3]
      11718576384 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]
md127 : active raid5 sdb3[0] sda3[6] sdf3[7] sde3[3] sdd3[2] sdc3[1]
      19510827520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid10 sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
      1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sdb1[6] sdc1[5] sdd1[7] sda1[8] sde1[1]
      4190208 blocks super 1.2 [5/5] [UUUUU]
- StephenB, Mar 31, 2025, Guru - Experienced User
HighOctane wrote:
I am also wondering if you reversed md126 and md127 (maybe sdX3 should have been assembled as md126, and sdX4 as md127).
I stopped md126 and md127, and arranged them the other way - md126 with 6 drives (sdX3) and md127 with 2 (sdX4) - then rebooted.
On reboot, the ReadyNAS OS has reversed it...
Can you post what happens when you try
btrfs device scan
mount /dev/md127 /data
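If the mount fails, the reason usually shows up in the kernel log, and btrfs filesystem show lists the btrfs volumes the scan found. A suggested follow-up, not from the original post:
dmesg | tail -n 20       # kernel messages from the failed mount attempt
btrfs filesystem show    # btrfs volumes and member devices detected by the scan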
- HighOctane, Mar 30, 2025, Aspirant
Sandshark - Drive 4 went from 10TB to 12TB; this was the first drive replaced. Drive 1 had been giving errors and warning it was going to fail, but when I grabbed a drive and went to replace 1, 4 had actually fallen over, so I swapped it out and bought another drive to replace 1 as well.
Drive 4 had finished resyncing about 2 hours before drive 1 actually fell over. I only know this from the logs, though - it finished overnight and the next failure occurred before I saw the NAS in the morning...
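Since the failures were preceded by SMART warnings, drive health can also be re-checked from the shell. A hedged example, assuming smartctl is installed and with an illustrative device name:
smartctl -H -A /dev/sda    # overall health assessment plus SMART attributes (reallocated/pending sectors, etc.)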