
Forum Discussion

StellarTech
Aspirant
Dec 28, 2020
Solved

Help: Drives not accessible; OS not seeing RAID 5

Christmas Day my ReadyNAS was completely inaccessible. It seemed frozen. Power was on. HD lights were on. No display on front. Pressing the power button had no effect. Inaccessible via browser. Ended up pulling power. Waited, then powered up.

 

Now the browser connects. Can access via Windows File Explorer, but all that's available are root folders. Unable to see anything inside the folders.

 

From the browser I can see I'm running firmware 6.10.3. Apparently the NAS does not see any volumes. Selecting Volumes, in the diagram the first two drives look normal. Drives 3 & 4 are dark. A message at the top says:

"Remove inactive volumes to use the Disk #1, 2."

 

I need help to resolve this. While I have some backups, they are older and incomplete.

 

 


4 Replies

  • Hi StellarTech 

     

    When you see this error message, chances are that you have an issue with either your raid or the filesystem on the raid volume. I can take a look at it for you, but I would need some logs.


    You can download the logs as a zip file:
    - Go to the ReadyNAS web admin page and navigate to "System" > "Logs".
    - Here you will see an option to "Download Logs" on the right-hand side.
    - This will download a zip file containing the logs.


    Once downloaded, upload the zip to Google Drive, Dropbox or similar and give me a link to download it. PM me the link; don't share it publicly here on the forums.
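
    If you want a quick look at the relevant files yourself before uploading, a small Python sketch along these lines will do it (the zip and log file names here are assumptions; the exact names can vary between firmware versions):

    # Minimal sketch: peek inside the downloaded ReadyNAS log zip before uploading it.
    # The names "readynas_logs.zip", partitions.log, mdstat.log and dmesg.log are
    # assumptions; adjust them to whatever your download actually contains.
    import zipfile

    LOG_ZIP = "readynas_logs.zip"                   # path to the zip you downloaded
    INTERESTING = ("partitions", "mdstat", "dmesg")

    with zipfile.ZipFile(LOG_ZIP) as z:
        for name in z.namelist():
            if any(key in name.lower() for key in INTERESTING):
                text = z.read(name).decode("utf-8", errors="replace")
                print(f"===== {name} =====")
                # Only show the first 40 lines of each file to keep the output short.
                print("\n".join(text.splitlines()[:40]))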


    Thanks

     

  • Hi StellarTech 

     

    Thanks for providing me with the logs.

     

    Your data raid (md127) is not running and cannot start. As we can see from the partitions log, disks 3 and 4 are not even being recognized properly by the NAS at the moment (which would explain why those disks are "dark" in the web interface). They show up as mere 4GB disks.

     

    8 0 2930266584 sda <== Disk 1 (Normal size)
    8 1 4194304 sda1
    8 2 524288 sda2
    8 3 2925547936 sda3
    8 16 2930266584 sdb <== Disk 2 (Normal size)
    8 17 4194304 sdb1
    8 18 524288 sdb2
    8 19 2925547936 sdb3
    8 32 4044975 sdc <== Disk 3 (very small size - not read properly)
    8 48 4044975 sdd <== Disk 4 (very small size - not read properly)
    9 0 4190208 md0 <== OS raid
    9 1 523264 md1 <== Linux swap raid
    <=== Data raid (md127) not showing
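
    Just to make that reading concrete, here is a rough Python sketch of the same check: it scans a /proc/partitions-style listing and flags whole disks that report far less capacity than a healthy member should. The ~1 TB cut-off is purely my assumption for 3 TB disks, not anything the NAS itself uses.

    import re

    EXPECTED_MIN_KIB = 1_000_000_000   # ~1 TB in 1 KiB blocks; below this is suspicious

    def undersized_disks(partitions_text):
        bad = []
        for line in partitions_text.splitlines():
            # Match lines like "8 32 4044975 sdc" (major, minor, size, whole-disk name).
            m = re.match(r"\s*\d+\s+\d+\s+(\d+)\s+(sd[a-z]+)\b", line)
            if m and int(m.group(1)) < EXPECTED_MIN_KIB:
                bad.append((m.group(2), int(m.group(1))))
        return bad

    sample = ("8 0 2930266584 sda\n"
              "8 32 4044975 sdc\n"
              "8 48 4044975 sdd\n")
    print(undersized_disks(sample))    # -> [('sdc', 4044975), ('sdd', 4044975)]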



    The raid log also shows that the data raid (md127) is missing, and the OS raid is reporting disks 3 and 4 as gone.

     

    Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    
    <=== md127 (data raid) missing as it cannot start
    
    md1 : active raid1 sda2[0] sdb2[1]
    523264 blocks super 1.2 [2/2] [UU] <== Linux swap raid
    
    md0 : active raid1 sda1[4] sdb1[1]
    4190208 blocks super 1.2 [4/2] [UU__] <== OS raid showing two missing disks as well
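
    The same reading of the raid log can be sketched too - again only an illustration of what I'm looking at, not a ReadyNAS tool:

    import re

    def raid_status(mdstat_text):
        """Return which md arrays appear at all, and which show missing members."""
        present, degraded = set(), {}
        current = None
        for line in mdstat_text.splitlines():
            m = re.match(r"\s*(md\d+)\s*:", line)
            if m:
                current = m.group(1)
                present.add(current)
            # A status like [UU__] means two members of that array are missing.
            status = re.search(r"\[([U_]+)\]", line)
            if current and status and "_" in status.group(1):
                degraded[current] = status.group(1)
        return present, degraded

    sample = ("md1 : active raid1 sda2[0] sdb2[1]\n"
              "523264 blocks super 1.2 [2/2] [UU]\n"
              "md0 : active raid1 sda1[4] sdb1[1]\n"
              "4190208 blocks super 1.2 [4/2] [UU__]\n")
    present, degraded = raid_status(sample)
    print("md127" in present)   # False -> the data raid never assembled
    print(degraded)             # {'md0': 'UU__'} -> OS raid is missing two members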



    There are no messages in the status log about any disk failures, but the dmesg log is spewing tons of device errors on disks 3 and 4. The errors below are repeated over and over in the dmesg log.

    == Disk 3 ==
    [Fri Dec 25 23:50:37 2020] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
    [Fri Dec 25 23:50:37 2020] ata3.00: irq_stat 0x40000000
    [Fri Dec 25 23:50:37 2020] ata3.00: failed command: READ DMA
    [Fri Dec 25 23:50:37 2020] ata3.00: cmd c8/00:04:84:70:7b/00:00:00:00:00/e0 tag 24 dma 2048 in
                                        res 51/04:04:84:70:7b/00:00:00:00:00/e0 Emask 0x1 (device error)
    [Fri Dec 25 23:50:37 2020] ata3.00: status: { DRDY ERR }
    [Fri Dec 25 23:50:37 2020] ata3.00: error: { ABRT }
    [Fri Dec 25 23:50:37 2020] ata3.00: configured for UDMA/133 (device error ignored)
    [Fri Dec 25 23:50:37 2020] ata3: EH complete

    == Disk 4 ==
    [Fri Dec 25 23:50:37 2020] ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
    [Fri Dec 25 23:50:37 2020] ata4.00: irq_stat 0x40000000
    [Fri Dec 25 23:50:37 2020] ata4.00: failed command: READ DMA
    [Fri Dec 25 23:50:37 2020] ata4.00: cmd c8/00:08:80:70:7b/00:00:00:00:00/e0 tag 29 dma 4096 in
                                        res 51/04:08:80:70:7b/00:00:00:00:00/e0 Emask 0x1 (device error)
    [Fri Dec 25 23:50:37 2020] ata4.00: status: { DRDY ERR }
    [Fri Dec 25 23:50:37 2020] ata4.00: error: { ABRT }
    [Fri Dec 25 23:50:37 2020] ata4.00: configured for UDMA/133 (device error ignored)
    [Fri Dec 25 23:50:37 2020] ata4: EH complete
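
    If you want a rough feel for how noisy those two disks are, you could tally the dmesg errors per ATA port (ata3 = disk 3, ata4 = disk 4 on this unit) with a quick sketch like this; the sample lines are just the ones quoted above:

    import re
    from collections import Counter

    def ata_error_counts(dmesg_text):
        counts = Counter()
        for line in dmesg_text.splitlines():
            # Count only the "failed command" / "device error" lines per ATA port.
            m = re.search(r"(ata\d+)(?:\.\d+)?: .*(failed command|device error)", line)
            if m:
                counts[m.group(1)] += 1
        return counts

    sample = ("[Fri Dec 25 23:50:37 2020] ata3.00: failed command: READ DMA\n"
              "[Fri Dec 25 23:50:37 2020] ata4.00: failed command: READ DMA\n"
              "[Fri Dec 25 23:50:37 2020] ata4.00: status: { DRDY ERR }\n")
    print(ata_error_counts(sample))    # Counter({'ata3': 1, 'ata4': 1})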


    So, the NAS cannot assemble the data raid because two disks are essentially "dead" in your raid 5. However, the status log contains no prior warnings about those disks being faulty; it looks like they just went bad all of a sudden. That makes me strongly suspect the disk bays - i.e. a faulty chassis. In any case, you absolutely should contact Netgear. Their Level 3 support team will be able to help you further. Even if it is a chassis issue, there is a possibility they will need to manually assemble the raid in a new chassis, as it is broken at this point.

     

    My guess would be the chassis, as I don't see any reason for two disks to drop off like this all of a sudden.


    Hope this helped, cheers

    • StellarTech
      Aspirant

      rn_enthusiast,

       

      Thank you very much for your assistance and recommendation.

       
