
Forum Discussion

Fletch360
Aug 25, 2020
Solved

Volume Degraded after replacing 2 of 4 disks

Hi, I am in the process of upgrading my 4× 1TB disks to 4TB each.

I replaced disks 4 and 3 without any issue, but when I came to replace number 2 the hard drive was faulty and the resync would not complete.

In the end I returned the 1TB disk to the unit and have sent for a replacement of the faulty disk.

The resync has now completed and all disks are up and showing healthy with 2× 1TB and 2× 4TB, but the volume is saying it is degraded.

I have a feeling that something has gone wrong with the resync of disk 2, but I am not sure where to start. I am a novice when it comes to this device, as it has never caused me any issues in the past. It "just worked".

I do not want to simply try replacing disk 2 again without checking here.

I have had a quick look at other posts here and have managed to find something in the mdstat.log file that seems suspect. In fact, this is the only log that looks like there is an issue:
/dev/md/data-1 has only 2 disks, with 1 removed.

I only have one volume on this NAS, and I don't really know why there are two data devices. If I were a betting man, I would say that X-RAID has set up two RAID groups: data-0 using 1TB from each of the 4 disks, and data-1 using 3TB from each of the 2× 4TB disks. Which would make sense, I suppose.

The question then becomes: am I fine to simply replace disk number 2, and will the data integrity remain for data-0 (which is clean and synced)?

 

Ah, it has just occurred to me that data-1 may be degraded because it started that new group with the 3rd disk that I have since returned as faulty.

 

Any thoughts?

Thanks

Fletch

 


mdstat.log

 

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md126 : active raid5 sda3[4] sdd3[3] sdc3[6] sdb3[5]
      2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md127 : active raid5 sdb4[0] sda4[1]
      5860253824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      
md0 : active raid1 sda1[4] sdd1[3] sdc1[6] sdb1[5]
      4190208 blocks super 1.2 [4/4] [UUUU]
      
md1 : active raid1 sda2[0] sdd2[2] sdb2[1]
      523712 blocks super 1.2 [3/3] [UUU]
      
unused devices: <none>
/dev/md/0:
           Version : 1.2
     Creation Time : Tue Dec 29 22:37:24 2015
        Raid Level : raid1
        Array Size : 4190208 (4.00 GiB 4.29 GB)
     Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Aug 24 12:54:21 2020
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : 2fe64d36:0  (local to host 2fe64d36)
              UUID : xxxxxxxxxxxxxxxxxx
            Events : 1739

    Number   Major   Minor   RaidDevice State
       4       8        1        0      active sync   /dev/sda1
       5       8       17        1      active sync   /dev/sdb1
       6       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
/dev/md/1:
           Version : 1.2
     Creation Time : Sat Aug 22 18:56:57 2020
        Raid Level : raid1
        Array Size : 523712 (511.44 MiB 536.28 MB)
     Used Dev Size : 523712 (511.44 MiB 536.28 MB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Mon Aug 24 12:54:22 2020
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : 2fe64d36:1  (local to host 2fe64d36)
              UUID : xxxredactedxxx
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       50        2      active sync   /dev/sdd2
/dev/md/data-0:
           Version : 1.2
     Creation Time : Tue Dec 29 22:37:24 2015
        Raid Level : raid5
        Array Size : 2915732352 (2780.66 GiB 2985.71 GB)
     Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Mon Aug 24 11:44:05 2020
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

              Name : 2fe64d36:data-0  (local to host 2fe64d36)
              UUID : xxxredactedxxx
            Events : 5990

    Number   Major   Minor   RaidDevice State
       4       8        3        0      active sync   /dev/sda3
       5       8       19        1      active sync   /dev/sdb3
       6       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
/dev/md/data-1:
           Version : 1.2
     Creation Time : Wed Aug 19 14:27:03 2020
        Raid Level : raid5
        Array Size : 5860253824 (5588.77 GiB 6000.90 GB)
     Used Dev Size : 2930126912 (2794.39 GiB 3000.45 GB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Mon Aug 24 11:44:05 2020
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

              Name : 2fe64d36:data-1  (local to host 2fe64d36)
              UUID : xxxredactedxxx
            Events : 14054

    Number   Major   Minor   RaidDevice State
       0       8       20        0      active sync   /dev/sdb4
       1       8        4        1      active sync   /dev/sda4
       -       0        0        2      removed

 


2 Replies

Replies have been turned off for this discussion
  • StephenB
    Guru - Experienced User

    Fletch360 wrote:
     

     

    I do not want to simply try replacing disk2 again without checking here.

    That is exactly what you need to do - replace disk 2 with a 4 TB drive.

     

    When you vertically expand (by upgrading to larger disks), the NAS creates a new RAID group that uses the extra space on the larger disks.  That RAID group is concatenated with the original group(s) to create the larger volume.

     

    In your case, the original volume is data-0 (also known as md126).  You can see at the top that md126 is RAID-5 and has all 4 disks in the array.

    md126 : active raid5 sda3[4] sdd3[3] sdc3[6] sdb3[5]
          2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    This RAID group is 4×1TB.
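As a sanity check on the RAID-5 rule (usable space is n−1 devices' worth), the block counts in the log do line up. A quick sketch, with the numbers copied from the mdadm output above (units: 1 KiB blocks):

```shell
# RAID-5 keeps (n-1) devices' worth of data; verify both array sizes.
dev_size_data0=971910784               # "Used Dev Size" of data-0 (~1TB slice)
data0=$((3 * dev_size_data0))          # 4-disk RAID-5 -> 3 slices of data
echo "data-0: $data0"                  # -> data-0: 2915732352 (matches log)

dev_size_data1=2930126912              # "Used Dev Size" of data-1 (~3TB slice)
data1=$((2 * dev_size_data1))          # 3-device RAID-5 -> 2 slices of data
echo "data-1: $data1"                  # -> data-1: 5860253824 (matches log)
```

Both results match the "Array Size" fields reported for data-0 and data-1 exactly.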

     

    The new raid group is data-1 (also known as md127).  This is also RAID-5, but has only two disks in the array.  That new RAID group is therefore degraded.

    md127 : active raid5 sdb4[0] sda4[1]
          5860253824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]

    After the first two disks were upgraded, md127 was RAID-1, with only two disks.  When you inserted the third disk, the system started to expand it, converting the group to RAID-5.  But that failed, so you are left with a degraded RAID group.
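You can spot this directly in the [configured/active] counter that mdstat prints. A small sketch, using the two status lines from the log above (compressed to one line per array for brevity):

```shell
# Flag md arrays whose active-device count is below the configured count,
# using the [configured/active] field from /proc/mdstat.
mdstat='md126 raid5 [4/4] [UUUU]
md127 raid5 [3/2] [UU_]'

degraded=$(printf '%s\n' "$mdstat" | awk '
  {
    for (i = 1; i <= NF; i++)
      if ($i ~ /^\[[0-9]+\/[0-9]+\]$/) {           # e.g. [3/2]
        split(substr($i, 2, length($i) - 2), a, "/")
        if (a[1] != a[2]) print $1                  # fewer active than configured
      }
  }')
echo "$degraded"    # -> md127
```

The [UU_] map says the same thing: the third slot of md127 has no working member.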

     

    If you know how, you can fix this over ssh, but it would be simpler to just try again with a working disk.  When you are done, md127 will be 4×3TB.

     

    FWIW, md0 is the OS partition (which is what the NAS boots from).  md1 is the swap partition - and there is something off with that: it is missing sdc2.  I don't think that is critical (since it is only swap), but it is odd.

     

     

     

     

     

    • Fletch360
      Guide

      That's great, thanks Stephen. I will do that now and keep my fingers crossed that this disk is better than the last one.
