Forum Discussion

Halex43's avatar
Halex43
Aspirant
Aug 02, 2023
Solved

ReadyNAS 6 Pro: Remove inactive volumes to use the disk. Disk #1,2,3,4,5,6.

This all started because my disk 2 was having errors. It was an 8 TB drive, and I got a 4 TB to replace it because I didn't need the extra space (looking back, that may have been a mistake). It tried to resync but stayed in a degraded state. I eventually put the 8 TB back in, and it still stayed degraded. At this point I started getting worried and began copying my family pictures off of it. I attempted to set up a backup job without really knowing what I was doing, as you can see in the logs. As the transfer went on there were more and more I/O errors, and eventually it was nothing but I/O errors. I restarted the NAS, and when it came back all the drives were red and it was telling me to remove the dead volume. I moved all the drives to another ReadyNAS (not a Pro) I had, but got the same thing.

 

I have since wiped that 8 TB, which I now regret doing. That may have doomed me. I read in a post with a similar issue to mine that there may be some SSH commands I can run that could possibly get my RAID config fixed. Is there anyone who could help me with that, or just let me know there is no fixing it? I can't afford support, and I would really like to finish getting the family pictures off of there.
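For context, the SSH commands people usually mean for this are read-only inspection commands along the lines below. This is only a sketch: the device and array names are assumptions about a typical ReadyNAS OS 6 layout (data partitions on /dev/sd?3, data array at /dev/md127), and none of these commands write anything to the disks.

    # Read-only inspection of the RAID and volume state (run over SSH as root)
    cat /proc/mdstat                  # which md arrays exist and which members are missing
    mdadm --examine /dev/sd[a-f]3     # per-disk RAID superblock metadata on the assumed data partitions
    mdadm --detail /dev/md127         # status of the assembled data array, if that name matches
    btrfs filesystem show             # Btrfs volumes the kernel can currently see

Any actual repair of the RAID config would be a separate step that depends on what these report.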

 

Thanks in advance!

 

logs:

<redacted>

4 Replies

Replies have been turned off for this discussion
  • StephenB's avatar
    StephenB
    Guru - Experienced User

    I redacted your log link, as there is some privacy loss when posting logs publicly.

     

    First glance at your logs shows that the WD80EFZX (serial VKKBL13Y) wasn't healthy - generating thousands of reallocated sectors.

     

    You have a different 8 TB drive in the NAS now, and that one appears to be OK. But the first disk (a WD30EFRX, serial WD-WCC1T0490492) is flooding the log with errors. Disk 3 (serial WD-WMC1T2512584) is also showing one reallocated sector.
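
    For reference, those reallocated/uncorrectable counts come from each disk's SMART data. If you have SSH access, something along these lines reads them directly (read-only; it assumes smartmontools is present on the NAS, and /dev/sda is just an assumed name for the disk in bay 1):

        smartctl -H /dev/sda                                        # overall SMART health verdict
        smartctl -A /dev/sda | egrep -i 'realloc|pending|uncorrect' # the attribute counters that matter here

    Repeating that for each device (sdb, sdc, ...) gives the same per-disk picture the NAS log summarizes.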

     

    • StephenB's avatar
      StephenB
      Guru - Experienced User

       

       

      [Wed Aug  2 15:43:08 2023] ata1.00: cmd 60/08:60:48:00:00/00:00:00:00:00/40 tag 12 ncq 4096 in
               res 41/40:00:49:00:00/00:00:00:00:00/40 Emask 0x409 (media error) <F>
      [Wed Aug  2 15:43:08 2023] ata1.00: status: { DRDY ERR }
      [Wed Aug  2 15:43:08 2023] ata1.00: error: { UNC }
      [Wed Aug  2 15:43:08 2023] ata1.00: configured for UDMA/133
      [Wed Aug  2 15:43:08 2023] ata1: EH complete
      [Wed Aug  2 15:43:08 2023] do_marvell_9170_recover: ignoring PCI device (8086:2922) at PCI#0
      [Wed Aug  2 15:43:08 2023] ata1.00: exception Emask 0x0 SAct 0x10010000 SErr 0x0 action 0x0
      [Wed Aug  2 15:43:08 2023] ata1.00: irq_stat 0x40000008
      [Wed Aug  2 15:43:08 2023] ata1.00: failed command: READ FPDMA QUEUED
      [Wed Aug  2 15:43:08 2023] ata1.00: cmd 60/08:80:48:00:00/00:00:00:00:00/40 tag 16 ncq 4096 in
               res 41/40:00:49:00:00/00:00:00:00:00/40 Emask 0x409 (media error) <F>
      [Wed Aug  2 15:43:08 2023] ata1.00: status: { DRDY ERR }
      [Wed Aug  2 15:43:08 2023] ata1.00: error: { UNC }
      [Wed Aug  2 15:43:08 2023] ata1.00: configured for UDMA/133
      [Wed Aug  2 15:43:08 2023] Buffer I/O error on dev sda1, logical block 1, async page read
      [Wed Aug  2 15:43:08 2023] ata1: EH complete
      [Wed Aug  2 15:43:08 2023] do_marvell_9170_recover: ignoring PCI device (8086:2922) at PCI#0
      [Wed Aug  2 15:43:08 2023] ata1.00: exception Emask 0x0 SAct 0x8000 SErr 0x0 action 0x0
      [Wed Aug  2 15:43:08 2023] ata1.00: irq_stat 0x40000008
      [Wed Aug  2 15:43:08 2023] ata1.00: failed command: READ FPDMA QUEUED
      [Wed Aug  2 15:43:08 2023] ata1.00: cmd 60/08:78:48:00:00/00:00:00:00:00/40 tag 15 ncq 4096 in
               res 41/40:00:49:00:00/00:00:00:00:00/40 Emask 0x409 (media error) <F>
      [Wed Aug  2 15:43:08 2023] ata1.00: status: { DRDY ERR }
      [Wed Aug  2 15:43:08 2023] ata1.00: error: { UNC }
      [Wed Aug  2 15:43:08 2023] ata1.00: configured for UDMA/133
      [Wed Aug  2 15:43:08 2023] Buffer I/O error on dev sda1, logical block 1, async page read
      [Wed Aug  2 15:43:08 2023] ata1: EH complete
      

      and more recently

      [23/06/06 21:02:13 EDT] crit:disk:LOGMSG_SMART_UNCORR_ERR_WARN Detected high uncorrectable error count: [3058] on disk 2 (Internal) [WDC WD80EFZX-68UW8N0, VKKBL13Y]. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
      [23/06/07 01:00:36 EDT] warning:volume:LOGMSG_HEALTH_VOLUME_WARN Volume MONOLITH is Degraded.
      [23/06/13 13:10:53 EDT] notice:system:LOGMSG_SYSTEM_HALT The system is shutting down.
      [23/06/13 13:40:55 EDT] warning:volume:LOGMSG_HEALTH_VOLUME_WARN Volume MONOLITH is Degraded.
      [23/06/13 13:40:56 EDT] info:system:LOGMSG_START_READYNASD ReadyNASOS background service started.
      [23/06/13 13:42:17 EDT] warning:volume:LOGMSG_VOLUME_READONLY The volume MONOLITH encountered an error and was made read-only. It is recommended to backup your data.
      [23/06/13 14:53:17 EDT] info:volume:LOGMSG_DELETE_VOLUME Volume data-0 was deleted from the system.
      [23/06/13 14:54:36 EDT] notice:system:LOGMSG_SYSTEM_REBOOT The system is rebooting.
      [23/06/13 14:56:17 EDT] warning:volume:LOGMSG_HEALTH_VOLUME_WARN Volume MONOLITH is Degraded.
      [23/06/13 14:56:20 EDT] info:system:LOGMSG_START_READYNASD ReadyNASOS background service started.
      [23/06/13 14:56:43 EDT] notice:volume:LOGMSG_RESILVERSTARTED_VOLUME Resyncing started for Volume MONOLITH.
      [23/06/13 14:57:54 EDT] warning:volume:LOGMSG_VOLUME_READONLY The volume MONOLITH encountered an error and was made read-only. It is recommended to backup your data.
      [23/06/13 23:45:47 EDT] notice:disk:LOGMSG_ZFS_DISK_STATUS_CHANGED Disk in channel 2 (Internal) changed state from RESYNC to ONLINE.
      [23/06/13 23:47:43 EDT] notice:disk:LOGMSG_ZFS_DISK_STATUS_CHANGED Disk in channel 2 (Internal) changed state from ONLINE to RESYNC.
      [23/06/14 00:34:30 EDT] notice:disk:LOGMSG_ZFS_DISK_STATUS_CHANGED Disk in channel 2 (Internal) changed state from RESYNC to ONLINE.
      [23/06/14 00:42:35 EDT] notice:volume:LOGMSG_RESILVERCOMPLETE_DEGRADED_VOLUME The resync operation finished on volume MONOLITH. However, the volume is still degraded.
      [23/06/14 0

       

       

      I suggest powering down, removing disk 1, and rebooting.  Then capture a fresh set of logs, and PM me a link (using the envelope icon in the upper right of the forum page).
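
      One read-only check worth doing over SSH after that reboot, before grabbing the logs, is confirming the right physical disk came out, since Linux device letters can shift between boots. This assumes smartctl is available on the NAS; the loop just prints each remaining drive's serial so you can verify the WD30EFRX (WD-WCC1T0490492) is the one that's gone:

          # Map Linux device names to drive serial numbers (read-only)
          for d in /dev/sd?; do
              echo -n "$d: "
              smartctl -i "$d" | grep -i 'serial'
          done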

       

      • Halex43's avatar
        Halex43
        Aspirant

        StephenB looked through my logs and let me know my data is likely not salvageable. I appreciate his time, and I'm just going to go with what I saved before it died.
