
Forum Discussion

jarness17
Aspirant
Jun 13, 2019

ReadyNAS 4220 - Status Degraded

Hi,

 

We have a ReadyNAS 4220 in which the status is "Degraded". 

I have examined the logs, and mdstat.log reports that a RAID device has been removed. Yesterday I reseated disk 12 and left it resyncing. This morning it is still showing as degraded. I have just started an extensive disk test to see if that finds anything.
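As background on what "degraded" looks like in mdstat output: each array's status map prints one `U` per active device and an underscore for each missing slot. A minimal sketch (the two status strings are copied from the mdstat.log in this thread; the parsing itself is just an illustration, not a ReadyNAS tool):

```shell
# An array that is still degraded after a reseat-and-resync keeps an
# underscore in its mdstat status map; a healthy one is all 'U's.
for status in '[UUUUUUUUUUUU_]' '[UUUUUUUUUUUU]'; do
  case "$status" in
    *_*) echo "$status degraded" ;;   # at least one slot has no device
    *)   echo "$status healthy"  ;;
  esac
done
```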

 

If anyone can offer any advice it would be much appreciated. I have attached an image of mdstat.log, as the forum won't let me upload the zip or text file.

 

If I can provide any further information let me know.


Thanks,

Jacob

4 Replies

  • StephenB
    Guru - Experienced User

    jarness17 wrote:

    I have attached image of the mdstat.log

     


    It is a bit curious.  The two RAID groups that are degraded both claim to have 12 of 13 disks.  And of course the NAS only has 12 bays.
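The "12 of 13" figure comes straight from the `[13/12]` counters in mdstat: slots defined versus devices active. A small sketch of that arithmetic (the counter value is copied from the md127 line in the attached log; the parsing is purely illustrative):

```shell
# mdstat prints [defined/active] for each array; the difference is the
# number of missing slots. Value taken from md127 in the attached log.
status='[13/12]'
defined=$(echo "$status" | tr -d '[]' | cut -d/ -f1)
active=$(echo "$status"  | tr -d '[]' | cut -d/ -f2)
echo "$((defined - active)) slot(s) missing"   # prints: 1 slot(s) missing
```

So the arrays are not reporting a failed disk among the 12 installed ones; they believe a 13th slot should exist and is empty.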

     

    Maybe try rebooting after the disk tests complete?

     


    jarness17 wrote:

    it won't let me upload the zip or text file. 

     


    You shouldn't publicly post links to the full zip file, as there is some privacy leakage when you do that.

     

    You can upload mdstat if you change the extension from .log to .zip.  Though it's usually best to use the </> (code) tool in the forum, and simply paste the relevant log portions into the code box.

    • jarness17
      Aspirant

      Thank you for your reply. I've scheduled a reboot for the morning, so will see how I get on.

      • jarness17
        Aspirant

        I rebooted the device this morning and it updated to the latest firmware, although it is still showing as degraded. The log still reports 13 disks for some reason. I have copied mdstat.log below if anyone can give any advice.

        Thanks in advance!

        Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
        md127 : active raid6 sdi3[0] sde3[11] sdc3[16] sdh3[13] sdf3[17] sdg3[7] sdd3[6] sdb3[15] sda3[12] sdm3[3] sdl3[2] sdj3[1]
              21435290624 blocks super 1.2 level 6, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUUUU_]
        md1 : active raid10 sdj2[0] sdh2[11] sdb2[10] sdf2[9] sdc2[8] sde2[7] sdg2[6] sdd2[5] sda2[4] sdm2[3] sdl2[2] sdi2[1]
              3139584 blocks super 1.2 512K chunks 2 near-copies [12/12] [UUUUUUUUUUUU]
        md0 : active raid1 sdi1[0] sde1[11] sdc1[16] sdh1[13] sdf1[17] sdg1[12] sdd1[6] sdb1[15] sda1[4] sdm1[3] sdl1[2] sdj1[1]
              4190208 blocks super 1.2 [13/12] [UUUUUUUUUUUU_]
        unused devices: <none>

        /dev/md/0:
                Version : 1.2
          Creation Time : Thu Oct 8 17:31:56 2015
             Raid Level : raid1
             Array Size : 4190208 (4.00 GiB 4.29 GB)
          Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
           Raid Devices : 13
          Total Devices : 12
            Persistence : Superblock is persistent
            Update Time : Fri Jun 14 08:58:41 2019
                  State : active, degraded
         Active Devices : 12
        Working Devices : 12
         Failed Devices : 0
          Spare Devices : 0
        Consistency Policy : unknown
                   Name : 43f61440:0  (local to host 43f61440)
                   UUID : a7a7619c:7aaccfde:b80a3b50:6082776a
                 Events : 1840063

            Number   Major   Minor   RaidDevice   State
               0       8      129        0        active sync   /dev/sdi1
               1       8      145        1        active sync   /dev/sdj1
               2       8      177        2        active sync   /dev/sdl1
               3       8      193        3        active sync   /dev/sdm1
               4       8        1        4        active sync   /dev/sda1
              15       8       17        5        active sync   /dev/sdb1
               6       8       49        6        active sync   /dev/sdd1
              12       8       97        7        active sync   /dev/sdg1
              17       8       81        8        active sync   /dev/sdf1
              13       8      113        9        active sync   /dev/sdh1
              16       8       33       10        active sync   /dev/sdc1
              11       8       65       11        active sync   /dev/sde1
               -       0        0       12        removed

        /dev/md/1:
                Version : 1.2
          Creation Time : Wed Jun 12 14:49:29 2019
             Raid Level : raid10
             Array Size : 3139584 (2.99 GiB 3.21 GB)
          Used Dev Size : 523264 (511.00 MiB 535.82 MB)
           Raid Devices : 12
          Total Devices : 12
            Persistence : Superblock is persistent
            Update Time : Fri Jun 14 05:22:10 2019
                  State : clean
         Active Devices : 12
        Working Devices : 12
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2
             Chunk Size : 512K
        Consistency Policy : unknown
                   Name : 43f61440:1  (local to host 43f61440)
                   UUID : ef626106:1d36ec1d:dea584cc:2134df70
                 Events : 19

            Number   Major   Minor   RaidDevice   State
               0       8      146        0        active sync set-A   /dev/sdj2
               1       8      130        1        active sync set-B   /dev/sdi2
               2       8      178        2        active sync set-A   /dev/sdl2
               3       8      194        3        active sync set-B   /dev/sdm2
               4       8        2        4        active sync set-A   /dev/sda2
               5       8       50        5        active sync set-B   /dev/sdd2
               6       8       98        6        active sync set-A   /dev/sdg2
               7       8       66        7        active sync set-B   /dev/sde2
               8       8       34        8        active sync set-A   /dev/sdc2
               9       8       82        9        active sync set-B   /dev/sdf2
              10       8       18       10        active sync set-A   /dev/sdb2
              11       8      114       11        active sync set-B   /dev/sdh2

        /dev/md/data-0:
                Version : 1.2
          Creation Time : Thu Oct 8 17:31:57 2015
             Raid Level : raid6
             Array Size : 21435290624 (20442.29 GiB 21949.74 GB)
          Used Dev Size : 1948662784 (1858.39 GiB 1995.43 GB)
           Raid Devices : 13
          Total Devices : 12
            Persistence : Superblock is persistent
            Update Time : Fri Jun 14 08:57:17 2019
                  State : clean, degraded
         Active Devices : 12
        Working Devices : 12
         Failed Devices : 0
          Spare Devices : 0
                 Layout : left-symmetric
             Chunk Size : 64K
        Consistency Policy : unknown
                   Name : 43f61440:data-0  (local to host 43f61440)
                   UUID : 5158a2f3:ff19d5b2:90822650:9b53ac56
                 Events : 10717368

            Number   Major   Minor   RaidDevice   State
               0       8      131        0        active sync   /dev/sdi3
               1       8      147        1        active sync   /dev/sdj3
               2       8      179        2        active sync   /dev/sdl3
               3       8      195        3        active sync   /dev/sdm3
              12       8        3        4        active sync   /dev/sda3
              15       8       19        5        active sync   /dev/sdb3
               6       8       51        6        active sync   /dev/sdd3
               7       8       99        7        active sync   /dev/sdg3
              17       8       83        8        active sync   /dev/sdf3
              13       8      115        9        active sync   /dev/sdh3
              16       8       35       10        active sync   /dev/sdc3
              11       8       67       11        active sync   /dev/sde3
               -       0        0       12        removed
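Reading that output: the device tables for md0 and md/data-0 each end with a 13th slot (RaidDevice 12) marked `removed`, while all 12 physical disks are `active sync`. A sketch of spotting that in `mdadm --detail` output (the excerpt line is copied from the log above; the awk one-liner is only an illustration):

```shell
# In `mdadm --detail` output the phantom slot appears as a "removed" row;
# the second-to-last field is its RaidDevice number.
detail='-       0        0       12      removed'
echo "$detail" | awk '$NF == "removed" { print "slot " $(NF-1) " is missing" }'
# prints: slot 12 is missing
```

If that 13th slot was added by mistake, on a plain Linux mdadm setup `mdadm --grow /dev/md0 --raid-devices=12` would shrink the RAID1 back to 12 slots; whether that is safe on ReadyNAS OS, and how to handle the RAID6 data volume (which would need a riskier reshape), is something NETGEAR support should confirm before anything is run.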
