
Forum Discussion

bearklaw23
Aspirant
Jan 30, 2021

Volume went from Redundant to Dead overnight - recoverable?

I have a ReadyNAS 516 with 6x 4TB WD Red drives.  This morning the shares had vanished from my network, and the ReadyNAS wasn't responding to the front panel buttons.  Drive 6 had a red light next to it as if it had failed.  I eventually had to turn the power off to get the box to respond; when it restarted it showed all six drives as healthy, but there was no volume, and it had split the drives from a single X-RAID volume into a RAID 5 volume and a RAID 1 volume.  The logs simply show "Volume data health changed from Redundant to Degraded" with no indication of an error, and about an hour later "Volume data health changed from Degraded to Dead" at the same time it shows drive 6 as having failed.  I've tried rebooting both with drive 6 present and with it removed; neither makes any difference when the system comes up.

 

I have a fairly recent backup so my data is safe, but I would rather recover the volume than restore 18GB of data.  Is there any chance of salvaging my data at this point?
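(In case it helps anyone reading along: assuming SSH access is enabled on the NAS, the underlying RAID state can be inspected directly from the shell.  The md device name below is the data array as it appears in the boot log later in this thread and may differ on other units.)

# Summary of all md arrays and which member partitions each one currently sees
cat /proc/mdstat

# Detailed state of the data array (device name taken from the boot log below)
mdadm --detail /dev/md126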

3 Replies

  • Update - the logs show it is rejecting disk 2 at boot time, even though the disk shows as healthy.  With 2 and 6 not registering, the volume is indeed dead.  I did notice there is a lot of dust in the unit - I'll take all the drives out, clean it out with compressed air, reseat all the drives, and see what happens.

     

    I wish I knew why it was rejecting disk 2; all the logs show is that it boots up, sees all five drives, then "kicking non-fresh sdb3 from array!"  Some form of corruption?
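    A "non-fresh" member usually means its md event counter has fallen behind the rest of the array.  Assuming SSH access, the counters can be compared with mdadm --examine (the sdX3 names below are the ones that show up in the boot log and may not be stable across reboots):

    # Print each data partition's update time and event count
    mdadm --examine /dev/sd[a-f]3 | grep -E '/dev/sd|Update Time|Events'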

     

     

    • bearklaw23
      Aspirant

      Cleaned everything up, made sure the drives were seated properly; same condition.  On startup the logs show:

       

      Jan 30 13:13:46 nasgul kernel: md: md126 stopped.
      Jan 30 13:13:46 nasgul kernel: md: bind<sdb3>
      Jan 30 13:13:46 nasgul kernel: md: bind<sdc3>
      Jan 30 13:13:46 nasgul kernel: md: bind<sdd3>
      Jan 30 13:13:46 nasgul kernel: md: bind<sde3>
      Jan 30 13:13:46 nasgul kernel: md: bind<sdf3>
      Jan 30 13:13:46 nasgul kernel: md: bind<sda3>
      Jan 30 13:13:46 nasgul kernel: md: kicking non-fresh sdf3 from array!
      Jan 30 13:13:46 nasgul kernel: md: unbind<sdf3>
      Jan 30 13:13:46 nasgul kernel: md: export_rdev(sdf3)
      Jan 30 13:13:46 nasgul kernel: md: kicking non-fresh sdb3 from array!
      Jan 30 13:13:46 nasgul kernel: md: unbind<sdb3>
      Jan 30 13:13:46 nasgul kernel: md: export_rdev(sdb3)
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sda3 operational as raid disk 0
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sde3 operational as raid disk 4
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sdd3 operational as raid disk 3
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: device sdc3 operational as raid disk 2
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: allocated 6474kB
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: not enough operational devices (2/6 failed)
      Jan 30 13:13:46 nasgul kernel: RAID conf printout:
      Jan 30 13:13:46 nasgul kernel:  --- level:5 rd:6 wd:4
      Jan 30 13:13:46 nasgul kernel:  disk 0, o:1, dev:sda3
      Jan 30 13:13:46 nasgul kernel:  disk 2, o:1, dev:sdc3
      Jan 30 13:13:46 nasgul kernel:  disk 3, o:1, dev:sdd3
      Jan 30 13:13:46 nasgul kernel:  disk 4, o:1, dev:sde3
      Jan 30 13:13:46 nasgul kernel: md/raid:md126: failed to run raid set.
      Jan 30 13:13:46 nasgul kernel: md: pers->run() failed ...
      Jan 30 13:13:46 nasgul kernel: md: md126 stopped.

      Both suspect drives (2 and 6) show no ATA errors, but each started reporting "1 Current Pending Sector" in the last 3 days.  I'm guessing I need to get /dev/md126 to accept one of the two drives back in order to access my data, but I'm not sure how to accomplish this.  Could mdadm be used to try and assemble the array again?
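      For what it's worth, the pending-sector counts can be watched from the shell with smartctl, assuming it is present in the firmware (drive letters below match the boot log above and should be double-checked):

      # Attribute 197 (Current_Pending_Sector) for the two suspect disks
      smartctl -A /dev/sdb | grep -i pending
      smartctl -A /dev/sdf | grep -i pending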

      • StephenB
        Guru - Experienced User

        bearklaw23 wrote:

        Could mdadm be used to try and assemble the array again? 


        There is a flag that would force the assembly.  You could end up with file system corruption though.
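        Roughly along these lines, with the half-assembled array stopped first (member names are taken from your boot log, so verify them before running anything; this is a sketch, not something I have run against your box):

        # Stop the partially assembled array
        mdadm --stop /dev/md126
        # Force assembly from the listed members; --force lets mdadm accept
        # members whose event counts are slightly out of date
        mdadm --assemble --force /dev/md126 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3
        # Check the result before mounting or using the volume
        cat /proc/mdstat

        If it does assemble, letting the resync finish and checking the file system would be prudent before trusting the data.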
