Forum Discussion
powellandy1 - Jan 12, 2017 - Virtuoso
Volume is degraded (but all disks green)
Hi, I have a Pro6 on OS 6.6.1. It had 4x4TB and 2x6TB drives. I upgraded disk 1 to 6TB - all went fine, it resynced in about 24h, and free space correctly showed 4TB of 22TB. I then upgraded disk 2 ...
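(Not from the original post - just for anyone following along: resync progress on these units can be watched over SSH with the standard md tools:

cat /proc/mdstat
mdadm --detail /dev/md127 | grep -iE 'state|rebuild'

/proc/mdstat prints a progress bar and ETA while a resync runs, and mdadm --detail shows a "Rebuild Status" percentage during one.)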
mdgm-ntgr - Jan 13, 2017 - NETGEAR Employee Retired
Wow, you have a really long firmware update history on this unit.
Is your backup up to date?
powellandy1 - Jan 28, 2017 - Virtuoso
Update:
mdgm and Skywalker kindly PM'd. Skywalker accessed the unit remotely and issued an mdadm command to force the array down to 6 disks and convert it back to RAID5. It looks like the issue was that OS6 hadn't marked the old disk as failed when it was removed, before I swapped the new one in. That has now reshaped, BUT... it looks like btrfs hasn't resized correctly and I'm left 2TB short.
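(The exact command wasn't posted. For reference only, a level/device-count reshape with mdadm generally takes this shape - the device name matches the outputs below, the backup file path is hypothetical, and nothing like this should be run without a current backup:

mdadm --grow /dev/md127 --level=raid5 --raid-devices=6 --backup-file=/root/md127-reshape.bak

--backup-file gives mdadm scratch space for the critical section of the reshape.)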
I've done a
btrfs fi resize max /data
as suggested by mdgm. I've also balanced the metadata and tried the resize again. Thrown in a few reboots as well!
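(The balance invocation isn't shown here; a metadata-only balance would typically be something like:

btrfs balance start -m /data

The -m filter restricts the balance to metadata block groups, so it finishes far quicker than a full data balance.)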
Various outputs below.
Any advice gratefully received
Thanks
Andy
root@Pro6:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid5 sde4[0] sdb4[3] sda4[2] sdf4[1]
      5860124736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md127 : active raid5 sdd3[0] sde3[4] sdf3[5] sda3[6] sdb3[7] sdc3[1]
      19510833920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid6 sda2[0] sdb2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
md0 : active raid1 sdd1[0] sde1[4] sdf1[5] sda1[6] sdb1[7] sdc1[1]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>

root@Pro6:~# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Thu Feb 13 19:50:29 2014
     Raid Level : raid5
     Array Size : 19510833920 (18606.98 GiB 19979.09 GB)
  Used Dev Size : 3902166784 (3721.40 GiB 3995.82 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Sat Jan 28 10:53:58 2017
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : 33ea2557:data-0  (local to host 33ea2557)
           UUID : 92b2bc5f:cf375e4b:5dcb468f:89d2bd81
         Events : 13329989

    Number   Major   Minor   RaidDevice State
       0       8       51        0      active sync   /dev/sdd3
       1       8       35        1      active sync   /dev/sdc3
       7       8       19        2      active sync   /dev/sdb3
       6       8        3        3      active sync   /dev/sda3
       5       8       83        4      active sync   /dev/sdf3
       4       8       67        5      active sync   /dev/sde3

root@Pro6:~# mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Tue Nov 24 12:23:49 2015
     Raid Level : raid5
     Array Size : 5860124736 (5588.65 GiB 6000.77 GB)
  Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sat Jan 28 10:53:57 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : 33ea2557:data-1  (local to host 33ea2557)
           UUID : 4010d079:27f07b46:33b86d75:a3989442
         Events : 6475

    Number   Major   Minor   RaidDevice State
       0       8       68        0      active sync   /dev/sde4
       1       8       84        1      active sync   /dev/sdf4
       2       8        4        2      active sync   /dev/sda4
       3       8       20        3      active sync   /dev/sdb4

root@Pro6:~# btrfs fi show
Label: '33ea2557:data'  uuid: 71e6eb17-3915-4aa1-bf47-ee05fec2bcd2
        Total devices 2 FS bytes used 18.09TiB
        devid    1 size 18.17TiB used 17.16TiB path /dev/md127
        devid    2 size 3.64TiB used 1.33TiB path /dev/md126

root@Pro6:~# btrfs fi df /data
Data, single: total=18.47TiB, used=18.08TiB
System, RAID1: total=32.00MiB, used=2.03MiB
Metadata, RAID1: total=13.00GiB, used=11.57GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

root@Pro6:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             10M  4.0K   10M   1% /dev
/dev/md0        4.0G  721M  3.0G  20% /
tmpfs           998M     0  998M   0% /dev/shm
tmpfs           998M  6.0M  992M   1% /run
tmpfs           499M 1004K  498M   1% /run/lock
tmpfs           998M     0  998M   0% /sys/fs/cgroup
/dev/md127       22T   19T  3.8T  84% /data
/dev/md127       22T   19T  3.8T  84% /home
/dev/md127       22T   19T  3.8T  84% /apps

StephenB - Jan 28, 2017 - Guru (Experienced User)
mdstat shows 6 TB in md126 and 20 TB in md127 - btw that is backwards from what I'd expect to see. That totals 26 TB (23.6 TiB), which is what it should be.
Your btrfs fi show output includes
devid 2 size 3.64TiB used 1.33TiB path /dev/md126
which is 2 TB short (as you say).
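Working through the numbers: md126 is 5860124736 1K blocks, i.e. roughly 5.46 TiB (6.0 TB), while btrfs reports devid 2 as only 3.64 TiB (4.0 TB) - the filesystem was evidently never grown after the md layer expanded, hence the roughly 2 TB gap.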
Try btrfs fi resize 2:max /data - there are posts out there saying that you sometimes do need to specify the device id you want to resize.
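(Illustrative only, not from the thread: the devid-specific resize and a quick check would look like:

btrfs fi resize 2:max /data
btrfs fi show

where 2 is the btrfs devid of /dev/md126 from the fi show output above; devid 2 should then report about 5.46TiB.)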