Forum Discussion
fastfwd
Oct 16, 2023 · Virtuoso
Pro 6 running OS6: Array doesn't automatically expand
StephenB
Oct 17, 2023 · Guru - Experienced User
fastfwd wrote:
I'm trying to expand one of my old 6-bay NAS boxes by replacing four of its drives with larger ones:
- Started with a Pro 6 running OS 6.10.9, with six 4TB drives in XRAID RAID6 for a "16TB" (really 14.54TB) array.
- Replaced one 4TB drive with an 8TB drive. Waited for the resync to complete.
- Repeated that replacement three more times, so I ended up with four 8TB drives and two 4TB drives.
- I expected that the array would automatically resize to something near 24TB after the fourth drive had been replaced, but it didn't.
- I restarted the NAS a couple of times, hoping that that would trigger an automatic resize, but it didn't.
Here's the current status. As you can see, it's as though it still has six 4TB drives:
# btrfs filesystem show
Label: '33ea3f13:root' uuid: dad60fbb-7971-46be-8e32-f2063391a033
Total devices 1 FS bytes used 1.76GiB
devid 1 size 4.00GiB used 4.00GiB path /dev/md0
Label: '33ea3f13:data' uuid: 159d011a-173f-4597-b054-715f06650ab3
Total devices 1 FS bytes used 11.53TiB
devid 1 size 14.54TiB used 13.54TiB path /dev/mapper/data-0

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid6 sda3[9] sdf3[8] sde3[7] sdd3[11] sdc3[6] sdb3[10]
15608675328 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid10 sde2[0] sdd2[5] sdc2[4] sdb2[3] sda2[2] sdf2[1]
1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sda1[9] sdf1[8] sde1[7] sdd1[11] sdc1[6] sdb1[10]
4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>

# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Tue Jan 3 20:54:48 2017
Raid Level : raid6
Array Size : 15608675328 (14885.59 GiB 15983.28 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Mon Oct 16 13:14:39 2023
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 33ea3f13:data-0 (local to host 33ea3f13)
UUID : 4d5f86c3:c41a1bc9:0efddce7:58f1c455
Events : 24063
Number Major Minor RaidDevice State
9 8 3 0 active sync /dev/sda3
10 8 19 1 active sync /dev/sdb3
6 8 35 2 active sync /dev/sdc3
11 8 51 3 active sync /dev/sdd3
7 8 67 4 active sync /dev/sde3
8 8 83 5 active sync /dev/sdf3
What's the best way to expand the array to its maximum size?
If you haven't rebooted the NAS, then maybe try doing that.
If that doesn't help, then check whether a fourth partition (sdX4) has been created on each of the 8 TB drives, and let us know if they are all there.
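One hedged way to run that check over SSH (generic shell; this assumes the six data disks are sda through sdf, as in the output above, and is not ReadyNAS-specific tooling):

```shell
# Hedged sketch: look for the 4th partition that vertical expansion
# should have created on each 8TB disk. After a successful expansion
# the larger disks would each show an sdX4 device node; if none exist,
# the expansion never started.
ls -l /dev/sd[a-f]4 2>/dev/null || echo "no sdX4 partitions found"
```

If the glob matches nothing, `ls` fails and the fallback message prints, which is exactly the "expansion never happened" case.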
Sandshark
Oct 17, 2023 · Sensei
It sounds like XRAID isn't enabled. To configure the NAS as RAID6, you likely disabled XRAID at some point. Did you re-enable it? Since the NAS hasn't yet recognized the volume as "expanded", you should still be able to turn it back on.
If it is enabled, then there are some commands you can issue via SSH that may kick-start the process. But don't try them until you've verified XRAID is on: once a second RAID layer is created and the volume shows as "expanded", you'll no longer be able to turn XRAID on. I can give you those commands once you've confirmed XRAID is enabled.
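For readers following along: the "second layer" Sandshark mentions would appear in /proc/mdstat as an additional data array (typically md126) built from sdX4 partitions, alongside the existing md127. A hedged, self-contained way to check for it (this uses a captured sample line; on the NAS you would pipe the real `cat /proc/mdstat` through the same grep):

```shell
# Hedged sketch: detect a second RAID layer by looking for any sdX4
# member in the mdstat output. The sample below mirrors the md127 line
# posted above, which has only sdX3 members, so no second layer exists.
mdstat='md127 : active raid6 sda3[9] sdf3[8] sde3[7] sdd3[11] sdc3[6] sdb3[10]'
if printf '%s\n' "$mdstat" | grep -q 'sd[a-f]4'; then
  echo "second RAID layer present"
else
  echo "no second RAID layer"
fi
```

"No second RAID layer" is the safe state here: it means XRAID can still be re-enabled before any expansion has been committed.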
- StephenB · Oct 17, 2023 · Guru - Experienced User
Sandshark wrote:
It sounds like XRAID isn't enabled.
Yeah, makes sense to check that first.
It's easy to do - if you see a green stripe on the XRAID control on the volumes page, then it is enabled.
- fastfwd · Oct 17, 2023 · Virtuoso
Thanks for the help so far.
I've rebooted the NAS a couple of times, but that has had no effect.
XRAID is definitely enabled: [screenshot of the Volumes page with the XRAID control switched on]
- StephenB · Oct 17, 2023 · Guru - Experienced User
First see if the sdX4 partitions have been created on all four of the 8 TB drives.
- fastfwd · Oct 17, 2023 · Virtuoso
There is no fourth partition:
# ll /dev/sd*
brw-rw---- 1 root disk 8, 0 Oct 16 08:03 /dev/sda
brw-rw---- 1 root disk 8, 1 Oct 16 08:03 /dev/sda1
brw-rw---- 1 root disk 8, 2 Oct 16 08:03 /dev/sda2
brw-rw---- 1 root disk 8, 3 Oct 16 08:03 /dev/sda3
brw-rw---- 1 root disk 8, 16 Oct 16 08:03 /dev/sdb
brw-rw---- 1 root disk 8, 17 Oct 16 08:03 /dev/sdb1
brw-rw---- 1 root disk 8, 18 Oct 16 08:03 /dev/sdb2
brw-rw---- 1 root disk 8, 19 Oct 16 08:03 /dev/sdb3
brw-rw---- 1 root disk 8, 32 Oct 16 08:03 /dev/sdc
brw-rw---- 1 root disk 8, 33 Oct 16 08:03 /dev/sdc1
brw-rw---- 1 root disk 8, 34 Oct 16 08:03 /dev/sdc2
brw-rw---- 1 root disk 8, 35 Oct 16 08:03 /dev/sdc3
brw-rw---- 1 root disk 8, 48 Oct 16 08:03 /dev/sdd
brw-rw---- 1 root disk 8, 49 Oct 16 08:03 /dev/sdd1
brw-rw---- 1 root disk 8, 50 Oct 16 08:03 /dev/sdd2
brw-rw---- 1 root disk 8, 51 Oct 16 08:03 /dev/sdd3
brw-rw---- 1 root disk 8, 64 Oct 16 08:03 /dev/sde
brw-rw---- 1 root disk 8, 65 Oct 16 08:03 /dev/sde1
brw-rw---- 1 root disk 8, 66 Oct 16 08:03 /dev/sde2
brw-rw---- 1 root disk 8, 67 Oct 16 08:03 /dev/sde3
brw-rw---- 1 root disk 8, 80 Oct 16 08:03 /dev/sdf
brw-rw---- 1 root disk 8, 81 Oct 16 08:03 /dev/sdf1
brw-rw---- 1 root disk 8, 82 Oct 16 08:03 /dev/sdf2
brw-rw---- 1 root disk 8, 83 Oct 16 08:03 /dev/sdf3
brw-rw---- 1 root disk 8, 112 Oct 16 08:03 /dev/sdh
brw-rw---- 1 root disk 8, 113 Oct 16 08:03 /dev/sdh1
brw-rw---- 1 root disk 8, 128 Oct 16 08:03 /dev/sdi
brw-rw---- 1 root disk 8, 129 Oct 16 08:03 /dev/sdi1
brw-rw---- 1 root disk 8, 144 Oct 16 08:03 /dev/sdj
brw-rw---- 1 root disk 8, 145 Oct 16 08:03 /dev/sdj1
brw-rw---- 1 root disk 8, 146 Oct 16 08:04 /dev/sdj2
brw-rw---- 1 root disk 8, 160 Oct 16 08:03 /dev/sdk
brw-rw---- 1 root disk 8, 161 Oct 16 08:03 /dev/sdk1