
Forum Discussion

schtroumpfmoi
Sep 22, 2023
Solved

RN104 Remove inactive volumes Disk 3,4

Hi everyone,

One of my four 2TB drives (#2) failed in my RN104. I went ahead and bought a replacement, but after inserting it I get the "remove inactive volumes disk 3,4" message, with some of my drives showing in red even though they had been healthy.

It could be that drive #1 is not in a good state either (many ATA errors), but I guess I first have to try and force a rebuild of sorts. It was set up in RAID 5 (which still shows up in green here).

 

Here are the statuses I found in the logs:

[23/09/17 17:40:21 CEST] warning:volume:LOGMSG_HEALTH_VOLUME Volume data health changed from Degraded to Dead.
[23/09/18 01:00:28 CEST] warning:volume:LOGMSG_HEALTH_VOLUME_WARN Volume data is Dead.
[23/09/19 01:00:56 CEST] warning:volume:LOGMSG_HEALTH_VOLUME_WARN Volume data is Dead.
[23/09/20 00:58:18 CEST] notice:volume:LOGMSG_HEALTH_VOLUME Volume data health changed from Dead to Inactive.

 

Any help appreciated as to how to solve this ... I don't have a backup, and there are a lot of videos/photos I would hate to lose.

Thomas

  • StephenB
    Sep 23, 2023

    schtroumpfmoi wrote:

     

    Sep 22 13:17:49 NAS mdadm[2171]: Fail event detected on md device /dev/md0, component device /dev/sdc1

    [Fri Sep 22 13:50:34 2023] sd 3:0:0:0: [sdd] tag#13 Add. Sense: Unrecovered read error - auto reallocate failed


    So both sdc and sdd are detected as failed.  Unfortunately not a good sign.  It'd have been better if one of the disks was just out of sync.
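
    If you can attach the members to a Linux box, one rough way to tell a truly failed disk from one that is just out of sync is to compare the mdadm event counters on each member. A minimal sketch, assuming the data partitions show up as /dev/sdc3 and /dev/sdd3 (hypothetical names - confirm with lsblk first, and run as root):

    # A member whose "Events" count merely lags the others was only kicked out
    # of sync; an unreadable superblock points at a genuine failure.
    mdadm --examine /dev/sdc3 /dev/sdd3 | grep -E 'Events|Update Time|Device Role|Array State'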

     

    I'm not sure if either of these is disk 1; you might want to double-check that by looking at disk-info.log.
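
    One way to cross-check, assuming disk-info.log lists each bay together with its device name and serial number (I'm not certain of the exact layout), is a quick grep over the extracted log bundle:

    # Pull out the lines that mention sdc/sdd or the per-bay identifiers
    grep -niE 'sd[cd]|serial|channel' disk-info.log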

     


    schtroumpfmoi wrote:

     

    I guess my main question now is: is there still a way I can force a rebuild / should I try to remove/re-add the faulty disk?

     


    With single redundancy, you need three working disks to rebuild the fourth.  You don't have that.

     

    If the disks can be read at all, you could try cloning one or both.  That might help with recovery.
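
    A minimal cloning sketch with GNU ddrescue, assuming the failing member is /dev/sdX and the clone target /dev/sdY is at least as large (hypothetical names - verify both with lsblk before running anything, since the copy direction is destructive):

    # First pass copies the easy sectors and records progress in a mapfile,
    # then a second pass retries the bad areas a few times.
    ddrescue -f -n /dev/sdX /dev/sdY rescue.map
    ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map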

     

    RAID recovery software like ReclaiMe might be able to recover some data from the two remaining disks - not sure.  You can download it and see before you pay for it.  You would need a way to connect the disks to a PC (directly with SATA, or using a USB adapter/dock).

7 Replies

  • StephenB
    Guru - Experienced User

    schtroumpfmoi wrote:

    One of my four 2TB drives (#2) failed in my RN104. I went ahead and bought a replacement, but after inserting it I get the "remove inactive volumes disk 3,4" message, with some of my drives showing in red even though they had been healthy.

     

    It could be that drive #1 is not in a good state either (many ATA errors), but I guess I first have to try and force a rebuild of sorts. It was set up in RAID 5 (which still shows up in green here).

     


    Two drive failures would cause the volume to fail.

     

    Download the full log zip file; there is a lot more information in there than you will see in the web UI.

    • schtroumpfmoi
      Aspirant
      I did download it - what should I be looking for?
      Drive.log confirmed disk 1 as ATA-error-prone, while still deemed alive.
      Disk 2 went from online to failed last week - I replaced it this morning.
      Happy to post/dig further - just let me know what to look for.
      Many thanks
      • StephenB
        Guru - Experienced User

        schtroumpfmoi wrote:
        I did download it - what should I be looking for?
        Drive.log confirmed disk 1 as ATA-error-prone, while still deemed alive.
        Disk 2 went from online to failed last week - I replaced it this morning.
        Happy to post/dig further - just let me know what to look for.
        Many thanks

        You likely would see some disk errors around the time that the volume failed (or when the system was rebooted).  They would be in dmesg.log, system.log, systemd-journal.log and/or kernel.log.
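
        A rough way to scan for them, assuming you've unzipped the log bundle into the current directory (adjust the file names if yours differ):

        # Surface low-level disk trouble around the failure window
        grep -iE 'I/O error|Unrecovered read|medium error|ata[0-9]+' dmesg.log kernel.log system.log systemd-journal.log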

         

        If you rebooted the system, then you'll see mdadm errors during the boot.  It'd be useful to know if mdadm was kicking out a "non-fresh" disk from the array, or whether the NAS thought the disk had actually failed.
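
        If you want to check for that specifically, the kernel logs a line along the lines of "md: kicking non-fresh sdc3 from array!", so a grep over the same log files should surface it:

        # Look for members dropped as out-of-sync during array assembly
        grep -i 'non-fresh' dmesg.log kernel.log systemd-journal.log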

         

        If you had ssh enabled before the volume failed, you'd be able to log in that way and check some things.  But the system won't let you enable ssh if there is no volume - no idea why.
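
        For reference, this is the sort of thing you'd check over ssh if it were available - a sketch only, since the data array name varies (on OS6 units the data volume is often md127, with md0 being the OS partition):

        # List every md array the kernel has assembled and each member's state
        cat /proc/mdstat
        # Then inspect whichever data array /proc/mdstat shows, e.g.:
        mdadm --detail /dev/md127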
