Forum Discussion
bearklaw23
Jan 30, 2021 · Aspirant
Volume went from Redundant to Dead overnight - recoverable?
I have a ReadyNAS 516 with 6x 4TB WD Red drives. This morning the shares had vanished from my network, and the ReadyNAS wasn't responsive to the front panel buttons. Drive 6 had a red light next to...
bearklaw23
Jan 30, 2021 · Aspirant
Update - the logs show it is rejecting disk 2 at boot time, even though the disk shows as healthy. With disks 2 and 6 not registering, the volume is indeed dead. I did notice there is a lot of dust in the unit - I'll take all the drives out, clean it out with compressed air, reseat all the drives, and see what happens.
I wish I knew why it is rejecting disk 2; all the logs show is that it boots up, sees all five drives, then "kicking non-fresh sdb3 from array!" Some form of corruption?
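For what it's worth, "non-fresh" in the md driver means that member's event counter has fallen behind the rest of the array (typically because it dropped out earlier and missed writes). The counters can be compared with mdadm's read-only examine mode. A minimal sketch, assuming the device names from the boot log below (sda3 through sdf3) and that this is run from SSH on the NAS:

```shell
# Read-only: dump the md superblock of each member partition and
# compare the "Events" counters. A member whose counter lags the
# others is what the kernel calls "non-fresh" and kicks at assembly.
# Device names are taken from this unit's boot log; adjust as needed.
for dev in /dev/sd[a-f]3; do
    echo "== $dev =="
    mdadm --examine "$dev" | grep -E 'Events|Update Time'
done
```

This changes nothing on disk; it only tells you how far behind sdb3 and sdf3 are, which matters when deciding whether a forced assembly is worth attempting.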
- bearklaw23 · Jan 30, 2021 · Aspirant
Cleaned everything up, made sure the drives were seated properly - same condition. On startup the logs show:
Jan 30 13:13:46 nasgul kernel: md: md126 stopped.
Jan 30 13:13:46 nasgul kernel: md: bind<sdb3>
Jan 30 13:13:46 nasgul kernel: md: bind<sdc3>
Jan 30 13:13:46 nasgul kernel: md: bind<sdd3>
Jan 30 13:13:46 nasgul kernel: md: bind<sde3>
Jan 30 13:13:46 nasgul kernel: md: bind<sdf3>
Jan 30 13:13:46 nasgul kernel: md: bind<sda3>
Jan 30 13:13:46 nasgul kernel: md: kicking non-fresh sdf3 from array!
Jan 30 13:13:46 nasgul kernel: md: unbind<sdf3>
Jan 30 13:13:46 nasgul kernel: md: export_rdev(sdf3)
Jan 30 13:13:46 nasgul kernel: md: kicking non-fresh sdb3 from array!
Jan 30 13:13:46 nasgul kernel: md: unbind<sdb3>
Jan 30 13:13:46 nasgul kernel: md: export_rdev(sdb3)
Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sda3 operational as raid disk 0
Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sde3 operational as raid disk 4
Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sdd3 operational as raid disk 3
Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sdc3 operational as raid disk 2
Jan 30 13:13:46 nasgul kernel: md/raid:md126: allocated 6474kB
Jan 30 13:13:46 nasgul kernel: md/raid:md126: not enough operational devices (2/6 failed)
Jan 30 13:13:46 nasgul kernel: RAID conf printout:
Jan 30 13:13:46 nasgul kernel: --- level:5 rd:6 wd:4
Jan 30 13:13:46 nasgul kernel: disk 0, o:1, dev:sda3
Jan 30 13:13:46 nasgul kernel: disk 2, o:1, dev:sdc3
Jan 30 13:13:46 nasgul kernel: disk 3, o:1, dev:sdd3
Jan 30 13:13:46 nasgul kernel: disk 4, o:1, dev:sde3
Jan 30 13:13:46 nasgul kernel: md/raid:md126: failed to run raid set.
Jan 30 13:13:46 nasgul kernel: md: pers->run() failed ...
Jan 30 13:13:46 nasgul kernel: md: md126 stopped.
Both suspect drives (2 and 6) show no ATA errors, but started reporting "1 Current Pending Sectors" in the last 3 days. I'm guessing I need to get /dev/md126 to accept one of the two drives to access my data, but I'm not sure how to accomplish this. Could mdadm be used to try and assemble the array again?
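The pending-sector counts can be watched directly with smartctl while deciding what to do. A sketch, assuming the suspect drive is /dev/sdb (substitute the actual device) and smartmontools is available on the unit:

```shell
# Read-only SMART query. Attribute 197 (Current_Pending_Sector) counts
# sectors the drive failed to read and is waiting to remap; a nonzero
# value on a RAID member is often enough for md to drop it.
# The device name here is an assumption - point it at the suspect drive.
smartctl -A /dev/sdb | grep -i -E 'pending|reallocated'
```

A rising pending count suggests the drive is actively degrading; a stable count of 1 may just be a single bad sector.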
- StephenB · Jan 30, 2021 · Guru - Experienced User
bearklaw23 wrote:
Could mdadm be used to try and assemble the array again?
There is a flag that would force the assembly. You could end up with file system corruption though.
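The flag in question is mdadm's --force option to --assemble, which lets mdadm bring in members with stale event counters. A minimal sketch, assuming the array and device names from the boot log above, and that the drives have been imaged or backed up first:

```shell
# Forced assembly rewrites event counters on stale members, so any
# writes the array missed while those members were out are lost -
# hence the file system corruption risk. Image the drives first if
# the data matters. Names below come from this unit's boot log.
mdadm --stop /dev/md126
mdadm --assemble --force /dev/md126 /dev/sd[a-f]3

# If assembly succeeds, check the array state and mount read-only
# before trusting it with writes.
cat /proc/mdstat
```

Of the two kicked members, the one with the higher event counter (see mdadm --examine) is the better candidate if you assemble with only five drives instead of all six.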