
Forum Discussion

Issam99999
Aspirant
Mar 31, 2019

No volume exists

Hi,

Suddenly, I couldn't access my data on my RN214.

I found the following error message on its web page:

"Volume is inactive or dead"

Note that it is running RAID 5, and now all my data is gone.
Any help?
Please find the attached photos below for more details:

 

7 Replies

  • Hi

    Would you mind downloading the logs and uploading them to a Google Drive link or similar? PM me the link and I can take a look for you.

    Cheers
    • Issam99999
      Aspirant

      Thank you for your response.

      Please find below the shared link to the log files:

       

      <Redacted>

      • StephenB
        Guru - Experienced User

        Issam99999 wrote:

        please find below the shared link for the log files,


        Hopchen said "PM" ->  "Private Message". That's done with the envelope icon in the upper right of the forum page.

         

        Don't post your full logs publicly - there is some private stuff in there.

         

        I redacted your link, but sent it to Hopchen in a PM first on your behalf.

         

         

  •  

    Hi Issam99999 

     

    Thanks for sending over the logs.

     

    The reason for the "Remove Inactive Volumes" error is that the data volume cannot mount. In your case, that is because the data RAID is not running at all.

    Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid6 sdd2[3] sdc2[2] sdb2[1] sda2[0]
    1047424 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU] <<<=== Linux swap space raid
    
    md0 : active raid1 sda1[5] sdb1[1] sdd1[4] sdc1[2]
    4190208 blocks super 1.2 [4/4] [UUUU] <<<=== ReadyNAS OS raid
    
    <<<=== Missing data raid (md127)
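
    Side note for anyone reading this later: the check above can be automated. Here is a rough sketch (not an official ReadyNAS tool) that parses /proc/mdstat text and flags a missing data array. The md127 name is an assumption based on the output above; it may differ on other units.

    ```python
    # Sketch: spot a missing mdadm data array in /proc/mdstat text.
    # On ReadyNAS OS 6, md0 is the OS raid, md1 is swap, and the data
    # volume is typically md127 -- that naming is an assumption.

    def active_arrays(mdstat_text):
        """Return the set of md array names reported as active."""
        arrays = set()
        for line in mdstat_text.splitlines():
            # Status lines look like: "md0 : active raid1 sda1[5] ..."
            if line.startswith("md") and " : active " in line:
                arrays.add(line.split(" ", 1)[0])
        return arrays

    def data_volume_missing(mdstat_text, data_array="md127"):
        """True if the (assumed) data array is absent from mdstat."""
        return data_array not in active_arrays(mdstat_text)
    ```

    On the NAS itself you would feed it `open("/proc/mdstat").read()`; in the output quoted above it would report md0 and md1 active but md127 missing.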

     

    You have two dodgy disks in the NAS, and that is the reason. Both disks were kicked from the RAID, which amounts to a double disk failure in a RAID 5, rendering the array "dead". Disk 4 started to show signs of errors around the 3rd of March, and disk 1 started to fail around the 16th of March. Errors on both disks increased over time, and the disks got worse and worse until the array failed within the space of a day.

    [19/03/29 16:43:08 PDT] err:disk:LOGMSG_ZFS_DISK_STATUS_CHANGED Disk in channel 4 (Internal) changed state from RESYNC to FAILED. <<<=== Disk 4 fails. The raid is now degraded.
    [19/03/30 01:29:23 PDT] err:disk:LOGMSG_ZFS_DISK_STATUS_CHANGED Disk in channel 1 (Internal) changed state from ONLINE to FAILED. <<<=== Disk 1 fails. Second disk failure and the raid stops.

     

    The RAID has been re-syncing multiple times, likely due to the disks dropping in and out of the array, as both are faulty. The additional RAID syncs put extra strain on the already-failing disks, accelerating the degradation.

     

    So, at this point you would need to try to clone either disk 1 or disk 4 (likely disk 1, as that disk dropped out last) and then try to manually assemble the RAID. That is certainly possible to try, provided the disks get no worse. However, unless you are very comfortable with Linux and with mdadm RAID mechanics, I would advise contacting NETGEAR to discuss a data recovery contract. It will cost a few hundred bucks, so it depends on how important the data is.
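
    For the Linux-comfortable reader, the clone-and-assemble route looks roughly like the sketch below. Every device name and the data-partition number here are assumptions -- verify them with lsblk and mdadm --examine before touching anything, and work from a separate Linux PC with the disks attached, never on the live NAS.

    ```shell
    # Sketch only -- /dev/sda (bad disk 1), /dev/sde (fresh clone
    # target) and partition number 3 for the data raid are
    # assumptions; confirm your layout first with lsblk.

    # 1. Clone the last disk to drop out onto a fresh drive of equal
    #    or larger size, skipping unreadable sectors and logging
    #    progress so the copy can be resumed.
    ddrescue -f /dev/sda /dev/sde rescue.log

    # 2. Inspect the raid superblocks; compare the "Events" counters
    #    to see how far out of sync each member is.
    mdadm --examine /dev/sd[abcde]3

    # 3. Attempt assembly with the clone standing in for the bad
    #    disk. --force lets mdadm accept members whose event counts
    #    do not match exactly.
    mdadm --assemble --force /dev/md127 /dev/sde3 /dev/sdb3 /dev/sdc3

    # 4. If it assembles, mount read-only and copy the data off
    #    before anything else.
    mount -o ro /dev/md127 /mnt/recovery
    ```

    These commands can destroy remaining data if pointed at the wrong devices, which is exactly why a recovery contract is the safer route for most people.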


    If you have a backup, then get two fresh disks to replace the bad ones --> factory reset and start over --> then restore from backup.

    The peculiar thing here is that the disks are quite young. It is very uncommon for disks to fail at this age, so I suspect they were faulty from the beginning. Additionally, most of these disk failures are mechanical failures inside the drives, which cannot be caused by the NAS itself. If the disks were bought separately from the NAS, the manufacturer's warranty should certainly cover them, given their young age. If the disks came with the NAS, you should also be covered by NETGEAR, as these are very new disks.

     

    Here is the state of the disks.

    Device: sda
    Controller: 0
    Channel: 0 <<<=== Disk 1
    Model: ST6000NM0115-1YZ110
    Serial: (masked)
    Firmware: SN04
    Class: SATA
    RPM: 7200
    Sectors: 11721045168
    Pool: data-0
    PoolType: RAID 5
    PoolState: 5
    PoolHostId: (masked)
    Health data 
    ATA Error Count: 189
    Reallocated Sectors: 123
    Reallocation Events: 123
    Spin Retry Count: 0
    End-to-End Errors: 3
    Command Timeouts: 6
    Current Pending Sector Count: 0
    Uncorrectable Sector Count: 0
    Temperature: 48
    Start/Stop Count: 71
    Power-On Hours: 421
    Power Cycle Count: 71
    Load Cycle Count: 207
    
    Device: sdd
    Controller: 0
    Channel: 3 <<<=== Disk 4
    Model: ST6000NM0115-1YZ110
    Serial: (masked)
    Firmware: SN04
    Class: SATA
    RPM: 7200
    Sectors: 11721045168
    Pool: data-0
    PoolType: RAID 5
    PoolState: 5
    PoolHostId: (masked)
    Health data 
    ATA Error Count: 8875
    Reallocated Sectors: 7565
    Reallocation Events: 7565
    Spin Retry Count: 0
    End-to-End Errors: 0
    Command Timeouts: 26
    Current Pending Sector Count: 8
    Uncorrectable Sector Count: 8
    Temperature: 49
    Start/Stop Count: 71
    Power-On Hours: 420
    Power Cycle Count: 71
    Load Cycle Count: 207
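
    To make the pass/fail judgement concrete, here is a small sketch that applies rule-of-thumb thresholds to the counters listed above. The thresholds are my own assumptions, not Seagate's or NETGEAR's official limits.

    ```python
    # Sketch: flag a disk as unhealthy from a few SMART-style
    # counters. Thresholds are rule-of-thumb assumptions.

    def disk_failing(health):
        """Return a list of reasons this disk looks unhealthy."""
        reasons = []
        if health.get("Reallocated Sectors", 0) > 0:
            reasons.append("reallocated sectors present")
        if health.get("Current Pending Sector Count", 0) > 0:
            reasons.append("pending (unreadable) sectors")
        if health.get("ATA Error Count", 0) > 100:
            reasons.append("high ATA error count")
        return reasons

    # Counters from the listings above.
    disk1 = {"ATA Error Count": 189, "Reallocated Sectors": 123,
             "Current Pending Sector Count": 0}
    disk4 = {"ATA Error Count": 8875, "Reallocated Sectors": 7565,
             "Current Pending Sector Count": 8}
    ```

    Both disks trip multiple checks, and any non-zero reallocated or pending sector count on a drive with only ~420 power-on hours is a strong warranty case.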

     

    Sorry I could not bring you better news on this one :(

     

    Cheers

     

    • Issam99999
      Aspirant

      Thank you for your response.

      I have replaced the two damaged HDDs, but I am still worried about facing this issue again.

       

      Any advice?

       

      BR,  

      • Danthelf
        Star

        EDIT: I now realize I may have misread your post. I thought you had replaced the drives and were still seeing the same issue... But now I see that you're worried about having the same issue again. In this case two drives failed, and that can happen with any device... The best thing to do is to have a proper backup plan; never store all your data on a single device. In addition, set up email alerts to notify you when drives are failing. This reduces the risk of multiple disk failures, as you can replace each disk as it starts failing.
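
        On the ReadyNAS itself, email alerts are configured through the admin UI, which is the supported route. For drives in a generic Linux box, a smartd configuration fragment along these lines gives the same early warning; the email address is a placeholder.

        ```
        # /etc/smartd.conf -- monitor all attributes, run a short
        # self-test nightly and a long test weekly, and send mail
        # on failures. "admin@example.com" is a placeholder.
        /dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
        /dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
        ```

        Catching the first failing drive early is what keeps a single-disk failure from turning into the double failure seen in this thread.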

         

        Original:

        As per Hopchen's update, the RAID is considered dead due to the two faulty drives, and you say that you have now replaced the two drives... Did you just take them out and put in new ones? Each drive would normally be replaced one by one, giving the volume time to sync, but as the volume is already considered dead this wouldn't work; you would probably need to clone the drives (as Hopchen mentioned)... Did you clone both drives?

         

        Even after cloning, the volume may be out of sync due to varying event counts between the drives in the array and would have to be forced back together... At this stage I think it's best to consider restoring from backup, or, if you don't have one, contacting NETGEAR support for data recovery options.
