
Forum Discussion

poojaratele
Aspirant
Jul 08, 2017

ReadyNAS 516 : Remove Inactive Volume to use the disk #1,2,3,4,5,6

First, disk 5 failed, so I replaced it and a resync started. Before the resync finished, disk 6 also failed, and after that I got a message saying the resync had completed ...

 

But now I am getting the message: "ReadyNAS 516 : Remove Inactive Volume to use the disk #1,2,3,4,5,6"

 

Can anyone help me solve this issue and get the data back?

7 Replies

    • poojaratele
      Aspirant

      [Sat Jul 8 13:37:27 2017] md: md127 stopped.
      [Sat Jul 8 13:37:27 2017] md: bind<sdb3>
      [Sat Jul 8 13:37:27 2017] md: bind<sdd3>
      [Sat Jul 8 13:37:27 2017] md: bind<sdc3>
      [Sat Jul 8 13:37:27 2017] md: bind<sdf3>
      [Sat Jul 8 13:37:27 2017] md: bind<sde3>
      [Sat Jul 8 13:37:27 2017] md: bind<sda3>
      [Sat Jul 8 13:37:27 2017] md: kicking non-fresh sdf3 from array!
      [Sat Jul 8 13:37:27 2017] md: unbind<sdf3>
      [Sat Jul 8 13:37:27 2017] md: export_rdev(sdf3)
      [Sat Jul 8 13:37:27 2017] md/raid:md127: device sda3 operational as raid disk 0
      [Sat Jul 8 13:37:27 2017] md/raid:md127: device sdc3 operational as raid disk 4
      [Sat Jul 8 13:37:27 2017] md/raid:md127: device sdd3 operational as raid disk 3
      [Sat Jul 8 13:37:27 2017] md/raid:md127: device sdb3 operational as raid disk 1
      [Sat Jul 8 13:37:27 2017] md/raid:md127: allocated 6474kB
      [Sat Jul 8 13:37:27 2017] md/raid:md127: not enough operational devices (2/6 failed)
      [Sat Jul 8 13:37:27 2017] RAID conf printout:
      [Sat Jul 8 13:37:27 2017] --- level:5 rd:6 wd:4
      [Sat Jul 8 13:37:27 2017] disk 0, o:1, dev:sda3
      [Sat Jul 8 13:37:27 2017] disk 1, o:1, dev:sdb3
      [Sat Jul 8 13:37:27 2017] disk 3, o:1, dev:sdd3
      [Sat Jul 8 13:37:27 2017] disk 4, o:1, dev:sdc3
      [Sat Jul 8 13:37:27 2017] md/raid:md127: failed to run raid set.
      [Sat Jul 8 13:37:27 2017] md: pers->run() failed ...
      [Sat Jul 8 13:37:27 2017] md: md127 stopped.
      [Sat Jul 8 13:37:27 2017] md: unbind<sda3>
      [Sat Jul 8 13:37:27 2017] md: export_rdev(sda3)
      [Sat Jul 8 13:37:27 2017] md: unbind<sde3>
      [Sat Jul 8 13:37:27 2017] md: export_rdev(sde3)
      [Sat Jul 8 13:37:27 2017] md: unbind<sdc3>
      [Sat Jul 8 13:37:27 2017] md: export_rdev(sdc3)
      [Sat Jul 8 13:37:27 2017] md: unbind<sdd3>
      [Sat Jul 8 13:37:27 2017] md: export_rdev(sdd3)
      [Sat Jul 8 13:37:27 2017] md: unbind<sdb3>
      [Sat Jul 8 13:37:27 2017] md: export_rdev(sdb3)
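
      The log shows the kernel kicking sdf3 as "non-fresh" and refusing to start md127 with only 4 of 6 members. As a read-only diagnostic sketch (device names assumed from the log above; run over SSH as root), the event counters in each member's superblock show how far the kicked members are behind:

      ```shell
      # Read-only inspection; safe to run. Shows the current array state
      # and each member's event count -- members with a lower "Events"
      # value are out of sync with the rest.
      cat /proc/mdstat
      for d in /dev/sd[a-f]3; do
          echo "== $d =="
          mdadm --examine "$d" | grep -E 'Events|Array State|Device Role|Update Time'
      done
      ```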

      • jak0lantash
        Mentor

        Yep, that's it.

        sde3 is out of the RAID and sdf3 is out of sync.

        Any errors on sdf?
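
        One way to check sdf for hardware errors is its SMART data (a read-only sketch, assuming smartmontools is available on the system):

        ```shell
        # Read-only SMART checks on the suspect disk.
        smartctl -H /dev/sdf                # overall health verdict
        smartctl -A /dev/sdf | grep -Ei 'reallocat|pending|uncorrect'
        smartctl -l error /dev/sdf          # drive's internal error log
        ```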

        I presume sde is the new one (you can check channel number in disk_info.log). Do you still have the original sde? Is it completely dead or clonable?
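
        If the original sde still spins up, it can sometimes be cloned onto a healthy disk of equal or larger size with GNU ddrescue before any reassembly attempt. A rough sketch with hypothetical device names (double-check them; ddrescue overwrites the target, and cloning is best done on a separate Linux machine, not on the NAS):

        ```shell
        # /dev/sdX = failing source, /dev/sdY = blank target (VERIFY FIRST).
        # The map file lets the copy resume and records unreadable areas.
        ddrescue -f -n  /dev/sdX /dev/sdY rescue.map   # fast pass, skip bad areas
        ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map   # retry bad sectors 3 times
        ```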

        Do you have a backup?

         

        This is a Data Recovery situation.

        There are usually three ways to attempt fixing the array: clone the old drive you replaced and try to reassemble the array with it, force the array together, or recreate the array in place. All of these come with the risk of permanently corrupting the array.

        It's always safer to contact a specialist. NETGEAR does provide this type of service as a contract.

        You can contact other companies to compare prices, but this type of service is usually expensive. Data recovery is by nature not guaranteed to succeed; the cost usually covers the expertise and the labor.
        I think what happened is:
        - sde failed.
        - sde was replaced.
        - Resync started.
        - sdf died.