Forum Discussion
be9000
Feb 02, 2021, Aspirant
After changing disks X-RAID only shows status "spares" but not "raid"
Hi, I replaced one 3 TB disk with a 6 TB disk and let it resync. When that finished, I replaced the other 3 TB disk with a 6 TB disk. The Web GUI says it is finished, but the icons are still "green" ...
- Feb 04, 2021
The "LED" is not normally blue -- I've never seen one blue. The LED will be yellow when the volume is re-syncing, which is the change I suspect you saw.. Your NAS volume is as it should be, there is no need to do anything. When a drive is a spare, the part of your drives that is blue is green.
Sandshark
Feb 02, 2021, Sensei - Experienced User
md0 is the OS partition, which is a RAID1 (or RAID10 on larger NAS) with a partition on all drives in the system. While its showing a removed drive is unusual, at least if you've rebooted the system, it's not your main issue.
First, verify XRAID is enabled (green bar on the XRAID button on the volumes page). If you are not in XRAID, expansion isn't automatic.
If XRAID is enabled and you didn't try rebooting the NAS, try that first. It has been known to kick off an expansion.
Your problem is that while you have an md127, which is your data partition and I assume consists of the original 3TB partitions of the drives, it didn't create an md126 from the added 3TB partitions (and likely didn't even create the partitions) and add it to the BTRFS file system. Please post a similar listing for md127, but don't remove the name. I have recently seen a similar issue of non-expansion, and a wrong name was a partial clue. If there is really something you don't want people to know about the name of your volume, then just confirm that the name is hostid:volumename-0. For example, mine is 7fc780f2:data-0 for volume data.
Also post a listing of fdisk -l /dev/sda /dev/sdb and btrfs filesystem show.
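(For reference, all of those listings come from standard Linux tools run over SSH as root; the device names sda/sdb are assumed to match your two bays:)
# detail of the data RAID group
mdadm --detail /dev/md127
# summary of all md arrays and their state
cat /proc/mdstat
# partition tables of both drives
fdisk -l /dev/sda /dev/sdb
# devices backing the BTRFS volume
btrfs filesystem show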
be9000
Feb 03, 2021, Aspirant
Thank you very much for your detailed reply!
I rebooted again (I had already done that once from the console and once from the GUI).
X-RAID is activated (green bar).
Let's see md127:
/dev/md127:
           Version : 1.2
     Creation Time : Sun May  1 19:48:46 2016
        Raid Level : raid1
        Array Size : 5855672800 (5584.40 GiB 5996.21 GB)
     Used Dev Size : 5855672800 (5584.40 GiB 5996.21 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Feb  3 08:19:51 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : 117ad524:data-0  (local to host 117ad524)
              UUID : (a UUID)
            Events : 24229

    Number   Major   Minor   RaidDevice State
       3       8       19        0      active sync   /dev/sdb3
       2       8        3        1      active sync   /dev/sda3
fdisk -l /dev/sda /dev/sdb:
# fdisk -l /dev/sda /dev/sdb
Disk /dev/sda: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: (a UUID)

Device        Start          End      Sectors  Size  Type
/dev/sda1        64      8388671      8388608    4G  Linux RAID
/dev/sda2   8388672      9437247      1048576  512M  Linux RAID
/dev/sda3   9437248  11721045103  11711607856  5.5T  Linux RAID

Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: (another UUID)

Device        Start          End      Sectors  Size  Type
/dev/sdb1        64      8388671      8388608    4G  Linux RAID
/dev/sdb2   8388672      9437247      1048576  512M  Linux RAID
/dev/sdb3   9437248  11721045103  11711607856  5.5T  Linux RAID
btrfs filesystem show:
Label: '117ad524:data'  uuid: (and another UUID)
        Total devices 1 FS bytes used 1.50TiB
        devid    1 size 5.45TiB used 1.51TiB path /dev/md127
- StephenB, Feb 03, 2021, Guru - Experienced User
Is mdstat showing you an md126 yet?
- be9000, Feb 03, 2021, Aspirant
No:
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid1 sdb3[3] sda3[2]
      5855672800 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      523264 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[4]
      4190208 blocks super 1.2 [3/2] [UU_]

unused devices: <none>
- StephenB, Feb 03, 2021, Guru - Experienced User
What drive models did you purchase? I'm wondering if they are SMR (for instance, the WD60EFAX).
Doing this manually would require a number of steps - creating the needed partitions on the new drives, creating a RAID group using those partitions, concatenating that with the existing RAID group, and expanding the BTRFS volume to use the new space. Then there's the challenge with md0 wanting a third drive that doesn't exist on your platform. I don't think that's related to your main expansion issue - but I can't rule it out.
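(Very roughly, and only to illustrate those steps - this is a sketch, not a tested procedure for this NAS; the partition numbers, the placeholder start/end sectors, and the /data mount point are assumptions, and a mistake here can destroy the volume:)
# 1. add a fourth partition in the previously unused space on each new drive
#    (<start> and <end> are placeholders for the actual sector boundaries)
parted /dev/sda mkpart data4 <start>s <end>s
parted /dev/sdb mkpart data4 <start>s <end>s
# 2. build a new RAID1 group from the added partitions
mdadm --create /dev/md126 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
# 3. concatenate it into the existing BTRFS volume (this also makes the new space available)
btrfs device add /dev/md126 /data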
You could download the full log zip file and see if there are any clues in rn_expand.log.
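(If you're comfortable on the command line, a quick scan of that file might surface the obvious failures; the search terms here are just guesses at what a stalled expansion would log:)
grep -iE 'error|fail|cannot' rn_expand.log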
But personally I'd rebuild the NAS from scratch, and restore the data from backup. You'd end up with a completely clean system, so there wouldn't be any more surprises under the surface. Alternatively you could use paid Netgear support - but they might have the same recommendation.