

nsne
Virtuoso
Dec 13, 2022

ReadyNAS 626 Status Is "Degraded" After Successful Resync

Just added a new HDD and am seeing this message after the resync:

 

Dec 13, 2022 06:33:28 AM
Volume: The resync operation finished on volume data. However, the volume is still degraded.

 

Any ideas what the issue could be?

 

I saw on some other threads that this could result from a drive that has a high reallocated sector count or current pending sector count, but I'm not sure how to determine that info. All six HDDs are appearing with a green indicator in the GUI.
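
I'm guessing something like this over SSH would show those counters, assuming SSH access is enabled on the NAS and smartctl is available there; just a sketch I haven't run, with /dev/sdX standing in for each drive:

# Dump the SMART attribute table for one drive and pull out the
# reallocated / pending sector counters (repeat for /dev/sda .. /dev/sdf)
smartctl -A /dev/sdX | grep -E "Reallocated_Sector_Ct|Current_Pending_Sector"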

 

There was at least one hiccup when adding the HDD: The NAS was initially populated with HDDs 1, 2, 3, 4 and 6. Bay 5 was empty. I removed the existing HDD 4 and (perhaps stupidly?) inserted the new HDD in its place. The NAS seemed to think that new disk was the original and then immediately began to resync, which resulted in an error. So I returned the original HDD 4, waited for the resync, then added the "new" HDD to the empty bay 5. It had data-0 and data-1 volumes on it, which I had to destroy.

 

I'm running a disk test now to make sure everything checks out SMART-wise. All my shares and data appear to be still intact.

14 Replies

Replies have been turned off for this discussion
  • StephenB
    Guru - Experienced User

    nsne wrote:

     

    There was at least one hiccup when adding the HDD: The NAS was initially populated with HDDs 1, 2, 3, 4 and 6. Bay 5 was empty. I removed the existing HDD 4 and (perhaps stupidly?) inserted the new HDD in its place. The NAS seemed to think that new disk was the original and then immediately began to resync, which resulted in an error. So I returned the original HDD 4, waited for the resync, then added the "new" HDD to the empty bay 5. It had data-0 and data-1 volumes on it, which I had to destroy.

     


    Can you download the log zip file and post the contents of mdstat.log? (Copy and paste it into a reply.)
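
    If you have SSH access enabled, the same information can also be read live on the NAS; a quick sketch, assuming the usual OS6 tools are present:

    # /proc/mdstat is the live RAID status; mdstat.log in the log zip is a copy of it
    cat /proc/mdstat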

    • nsne
      Virtuoso

      My posts with the log contents keep vanishing! I guess some automated spam system is in place?

       

      The log is attached as a PDF.

       

      • StephenB
        Guru - Experienced User

        nsne wrote:

        I guess some automated spam system is in place?


        Yes.  The quarantine is manually reviewed by mods, so at some point the missing posts will probably be released.

         

        This is what you posted:

         

        md123 : active raid1 sdb7[0] sdf7[1]
         1953364992 blocks super 1.2 [2/2] [UU]
        
        md124 : active raid5 sdb6[0] sdf6[3] sdc6[1]
         7811627008 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
        
        md125 : active raid5 sdd5[6] sde5[7](S) sdf5[5] sdc5[4] sdb5[2] sda5[1]
         7809112576 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
        
        md126 : active raid5 sdb4[4] sdf4[7] sda4[3] sdd4[5] sdc4[6]
         9766874560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
        
        md127 : active raid5 sda3[5] sdf3[8] sdc3[7] sdd3[9] sdb3[6]
         29278353920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]

         

         

        Because you have disks of unequal sizes, your volume has multiple RAID groups concatenated together. If you'd always had the mix of disks in your screenshot, you'd have four RAID groups. You actually have five, which means that at one point in the past you had smaller disks in the array (6 and 8 TB). More importantly, you should actually have six RAID groups.
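
        If you want to verify the membership of each group yourself, mdadm can list it; this is just a sketch over SSH, assuming mdadm is present (it normally is on OS6):

        # Show the member partitions and their state for one RAID group;
        # repeat for md126, md125, md124 and md123
        mdadm --detail /dev/md127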

         

        What you should have is

        • md127 - 6x6TB RAID-5
        • md126 - 6x2TB RAID-5 
        • md125 - 6x2TB RAID-5 
        • md124 - 4x4TB RAID-5 - should include the 14, 16, and the two 18 TB disks
        • md123 - 3x2TB RAID-5 - should include the 16 TB and the two 18 TB disks
        • md122 - 2x2TB RAID-1 - should include the two 18 TB disks.

         

        What you do have is 

        • md127 - 6x6TB RAID-5 (degraded) - includes all disks, but sde detected as missing
        • md126 - 6x2TB RAID-5 (degraded) - includes all disks, but sde detected as missing
        • md125 - 5x2TB RAID-5 - includes all disks, but sde shown as a spare
        • md124 - 3x4TB RAID-5 - missing sde altogether
        • md123 - 2x2TB RAID-1 - missing sde altogether, and should be RAID-5

         

        The problem is clearly the 18 TB disk in slot 5 (sde) - the one you just added.
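
        To see what sde's own partitions think they belong to, mdadm's examine output is useful; a sketch, assuming the data partitions were actually created on sde:

        # Inspect the md superblock (if any) on one of sde's data partitions;
        # repeat for sde4, sde5, sde6 and sde7
        mdadm --examine /dev/sde3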

         

        The first thing to figure out is whether sde is healthy. It might not be tested in the volume disk test, since the NAS is confused about its status.  Can you connect it to a Windows PC (either with SATA or a USB adapter/dock)?
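
        Once it's connected, smartmontools runs on Windows as well, so something like the following would give a full picture; a sketch only, since the device name will differ on Windows (smartctl --scan lists the candidates):

        # List detected drives, dump the full SMART data, then start a long self-test
        smartctl --scan
        smartctl -a /dev/sdX
        smartctl -t long /dev/sdX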