
Forum Discussion

powellandy1
Virtuoso
Jan 12, 2017
Solved

Volume is degraded (but all disks green)

Hi

 

I have a Pro6 on OS6.6.1

It did have 4x4TB and 2x6TB.

I upgraded disk 1 to 6TB - all went fine, it resynced in about 24h, and the free space correctly shows 4TB of 22TB.

I then upgraded disk 2 to 6TB - the resync has just finished after about 10 days, but the volume still shows as degraded even though each of the six disks has a green 'light' next to it on the Volumes page. The free/total space has not changed to reflect the additional 2TB.

The SMART data shows no reallocated sectors and no command timeouts for any disks.
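For anyone wanting to double-check the same attributes from a shell, something along these lines works (smartctl is part of smartmontools; repeat for each of sda through sdf):

smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Command_Timeout'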

I know that when I added the 2x6TB to the 4x4TB the volume didn't expand properly, and Skywalker kindly remoted in and issued an mdadm command to grow the array (which he said was exactly the same one the firmware should have issued itself; it wasn't clear why it hadn't).
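For reference, growing an md array onto newly added members generally looks something like the sketch below - the md and partition names are only illustrative (taken from the later logs), not a record of what was actually run in that session:

# add the new members to the data array, then reshape across all six
mdadm --add /dev/md127 /dev/sde3 /dev/sdf3
mdadm --grow /dev/md127 --raid-devices=6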

I'm not sure what else to do now. I've restarted a few times with no success.

I've emailed logs in.

I do have a backup - the Pro6 backs up to the Ultra6 (but that doesn't have redundancy, so I don't want to stress it too much if not needed).

 

I feel some things aren't right in mdstat.log - 7/6 devices?? Shouldn't /dev/md/data-0 be raid5 and not raid6??

 

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md126 : active raid5 sde4[0] sdb4[3] sda4[2] sdf4[1]
      5860124736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md127 : active raid6 sdd3[0] sde3[4] sdf3[5] sda3[6] sdb3[7] sdc3[1]
      19510833920 blocks super 1.2 level 6, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
      
md1 : active raid6 sda2[0] sdb2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      
md0 : active raid1 sdd1[0] sde1[4] sdf1[5] sda1[6] sdb1[7] sdc1[1]
      4190208 blocks super 1.2 [7/6] [UUUUUU_]
      
unused devices: <none>
/dev/md/0:
        Version : 1.2
  Creation Time : Thu Feb 13 19:50:28 2014
     Raid Level : raid1
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 7
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu Jan 12 22:41:43 2017
          State : clean, degraded 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

           Name : 33ea2557:0  (local to host 33ea2557)
           UUID : a425c11d:4b9480fd:8fa2e4ff:906c719b
         Events : 1960417

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
       7       8       17        2      active sync   /dev/sdb1
       6       8        1        3      active sync   /dev/sda1
       5       8       81        4      active sync   /dev/sdf1
       4       8       65        5      active sync   /dev/sde1
      12       0        0       12      removed
/dev/md/1:
        Version : 1.2
  Creation Time : Fri Dec 30 11:05:58 2016
     Raid Level : raid6
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 523264 (511.00 MiB 535.82 MB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu Jan 12 22:25:26 2017
          State : clean 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : 33ea2557:1  (local to host 33ea2557)
           UUID : 4faaf45f:f1b1b987:a9604d89:34ab73dc
         Events : 35

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       3       8       66        3      active sync   /dev/sde2
       4       8       82        4      active sync   /dev/sdf2
       5       8       18        5      active sync   /dev/sdb2
/dev/md/data-0:
        Version : 1.2
  Creation Time : Thu Feb 13 19:50:29 2014
     Raid Level : raid6
     Array Size : 19510833920 (18606.98 GiB 19979.09 GB)
  Used Dev Size : 3902166784 (3721.40 GiB 3995.82 GB)
   Raid Devices : 7
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu Jan 12 22:26:43 2017
          State : clean, degraded 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 33ea2557:data-0  (local to host 33ea2557)
           UUID : 92b2bc5f:cf375e4b:5dcb468f:89d2bd81
         Events : 6660532

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       8       35        1      active sync   /dev/sdc3
       7       8       19        2      active sync   /dev/sdb3
       6       8        3        3      active sync   /dev/sda3
       5       8       83        4      active sync   /dev/sdf3
       4       8       67        5      active sync   /dev/sde3
      12       0        0       12      removed
/dev/md/data-1:
        Version : 1.2
  Creation Time : Tue Nov 24 12:23:49 2015
     Raid Level : raid5
     Array Size : 5860124736 (5588.65 GiB 6000.77 GB)
  Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Jan 12 22:26:43 2017
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 33ea2557:data-1  (local to host 33ea2557)
           UUID : 4010d079:27f07b46:33b86d75:a3989442
         Events : 6475

    Number   Major   Minor   RaidDevice State
       0       8       68        0      active sync   /dev/sde4
       1       8       84        1      active sync   /dev/sdf4
       2       8        4        2      active sync   /dev/sda4
       3       8       20        3      active sync   /dev/sdb4

 

 

Thanks for looking,

Kind regards,

Andy


4 Replies

Replies have been turned off for this discussion
  • mdgm-ntgr
    NETGEAR Employee Retired

    Wow, you have a really long firmware update history on this unit.

    Is your backup up to date?

    • powellandy1
      Virtuoso

      Update:

       

      mdgm and Skywalker kindly PM'd. Skywalker accessed remotely and issued an mdadm command to force the array down to 6 disks and convert it back to RAID5. It looks like the issue was that OS6 hadn't marked the old disk as failed when it was removed, before I swapped the new one in. That has now reshaped, BUT... it looks like btrfs hasn't resized correctly and I'm left 2TB short.
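      For reference, I don't know the exact command used, but forcing a degraded 7-member RAID6 back to a 6-member RAID5 is normally an mdadm reshape along these lines (array name taken from the logs below; the flags are my best guess, not a record of the session):

      mdadm --grow /dev/md127 --level=raid5 --raid-devices=6
      (a --backup-file=... argument may also be needed, depending on the reshape)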

       

      I've done a

       

      btrfs fi resize max /data

      as suggested by mdgm. I've also balanced the metadata and tried the resize again, and thrown in a few reboots as well!
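      For completeness, the balance and resize were along these lines (the -m filter restricts the balance to metadata block groups; /data is the mount point shown in the df output below):

      btrfs balance start -m /data
      btrfs fi resize max /data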

       

       

      Various outputs below.

       

      Any advice gratefully received

      Thanks

      Andy

       

       

      root@Pro6:~# cat /proc/mdstat
      Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md126 : active raid5 sde4[0] sdb4[3] sda4[2] sdf4[1]
            5860124736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
      md127 : active raid5 sdd3[0] sde3[4] sdf3[5] sda3[6] sdb3[7] sdc3[1]
            19510833920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      
      md1 : active raid6 sda2[0] sdb2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
            2093056 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      
      md0 : active raid1 sdd1[0] sde1[4] sdf1[5] sda1[6] sdb1[7] sdc1[1]
            4190208 blocks super 1.2 [6/6] [UUUUUU]
      
      unused devices: <none>
      root@Pro6:~# mdadm --detail /dev/md127
      /dev/md127:
              Version : 1.2
        Creation Time : Thu Feb 13 19:50:29 2014
           Raid Level : raid5
           Array Size : 19510833920 (18606.98 GiB 19979.09 GB)
        Used Dev Size : 3902166784 (3721.40 GiB 3995.82 GB)
         Raid Devices : 6
        Total Devices : 6
          Persistence : Superblock is persistent
      
          Update Time : Sat Jan 28 10:53:58 2017
                State : clean
       Active Devices : 6
      Working Devices : 6
       Failed Devices : 0
        Spare Devices : 0
      
               Layout : left-symmetric
           Chunk Size : 64K
      
                 Name : 33ea2557:data-0  (local to host 33ea2557)
                 UUID : 92b2bc5f:cf375e4b:5dcb468f:89d2bd81
               Events : 13329989
      
          Number   Major   Minor   RaidDevice State
             0       8       51        0      active sync   /dev/sdd3
             1       8       35        1      active sync   /dev/sdc3
             7       8       19        2      active sync   /dev/sdb3
             6       8        3        3      active sync   /dev/sda3
             5       8       83        4      active sync   /dev/sdf3
             4       8       67        5      active sync   /dev/sde3
      root@Pro6:~# mdadm --detail /dev/md126
      /dev/md126:
              Version : 1.2
        Creation Time : Tue Nov 24 12:23:49 2015
           Raid Level : raid5
           Array Size : 5860124736 (5588.65 GiB 6000.77 GB)
        Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
      
          Update Time : Sat Jan 28 10:53:57 2017
                State : clean
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0
      
               Layout : left-symmetric
           Chunk Size : 64K
      
                 Name : 33ea2557:data-1  (local to host 33ea2557)
                 UUID : 4010d079:27f07b46:33b86d75:a3989442
               Events : 6475
      
          Number   Major   Minor   RaidDevice State
             0       8       68        0      active sync   /dev/sde4
             1       8       84        1      active sync   /dev/sdf4
             2       8        4        2      active sync   /dev/sda4
             3       8       20        3      active sync   /dev/sdb4
      root@Pro6:~# btrfs fi show
      Label: '33ea2557:data'  uuid: 71e6eb17-3915-4aa1-bf47-ee05fec2bcd2
              Total devices 2 FS bytes used 18.09TiB
              devid    1 size 18.17TiB used 17.16TiB path /dev/md127
              devid    2 size 3.64TiB used 1.33TiB path /dev/md126
      
      root@Pro6:~# btrfs fi df /data
      Data, single: total=18.47TiB, used=18.08TiB
      System, RAID1: total=32.00MiB, used=2.03MiB
      Metadata, RAID1: total=13.00GiB, used=11.57GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B
      root@Pro6:~# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      udev             10M  4.0K   10M   1% /dev
      /dev/md0        4.0G  721M  3.0G  20% /
      tmpfs           998M     0  998M   0% /dev/shm
      tmpfs           998M  6.0M  992M   1% /run
      tmpfs           499M 1004K  498M   1% /run/lock
      tmpfs           998M     0  998M   0% /sys/fs/cgroup
      /dev/md127       22T   19T  3.8T  84% /data
      /dev/md127       22T   19T  3.8T  84% /home
      /dev/md127       22T   19T  3.8T  84% /apps
      • StephenB
        Guru - Experienced User

        mdstat shows 6 TB in md126 and 20 TB in md127 - btw that is backwards from what I'd expect to see.  That totals 26 TB (23.6 TiB), which is what it should be.
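        (Working that through from the mdstat block counts: md126 is 5,860,124,736 KiB ≈ 5.46 TiB and md127 is 19,510,833,920 KiB ≈ 18.17 TiB, so together they come to ≈ 23.6 TiB, i.e. roughly 26 TB in decimal units.)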

         

        Your btrfs fi show output includes 

         devid    2 size 3.64TiB used 1.33TiB path /dev/md126

         which is 2 TB short (as you say).

         

         

        Try btrfs fi resize 2:max /data - there are some posts out there saying that you sometimes do need to specify the device id that you want to resize.
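        A minimal sketch of that, assuming the volume is mounted at /data (as in the df output above) and devid 2 is the under-sized device:

        btrfs fi resize 2:max /data
        btrfs fi show /data        # devid 2 should now report the full ~5.46TiB of md126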

  • StephenB, you are a scholar and a gentleman! Many thanks to you, mdgm and Skywalker.

    Should the firmware loop through the device ids and try to expand each one in turn?
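    Something like this would do it, I suppose - purely a hypothetical sketch of the idea, not what the firmware actually runs:

    # expand every btrfs device in the /data filesystem to its maximum size
    for devid in $(btrfs fi show /data | awk '/devid/ {print $2}'); do
        btrfs fi resize "$devid":max /data
    done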

    Just to spite me, the unit has started making a whining noise... so it's time to take it apart and check the fans!

    Cheers
    A
