Forum Discussion
be9000
Feb 02, 2021 · Aspirant
After changing disks X-RAID only shows status "spares" but not "raid"
Hi,
I replaced one 3 TB disk with a 6 TB disk and let it resync. When it finished, I replaced the other 3 TB disk with a 6 TB disk.
The web GUI says it is finished, but the disk icons are still green (which means "spare") and not blue (which means RAID). There is no error message in the GUI event log.
I activated SSH and investigated the MD-Raid status via /proc/mdstat:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid1 sdb3[3] sda3[2]
      5855672800 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      523264 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[4]
      4190208 blocks super 1.2 [3/2] [UU_]
Why does it want 3 disks instead of 2 in md0 and show a degraded state ([UU_])?
Further info on md0:
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun May  1 19:48:46 2016
     Raid Level : raid1
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Feb  2 13:23:29 2021
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

Consistency Policy : unknown

           Name : (removed by me)
           UUID : (removed by me)
         Events : 19508967

    Number   Major   Minor   RaidDevice State
       3       8       17        0      active sync   /dev/sdb1
       4       8        1        1      active sync   /dev/sda1
       -       0        0        2      removed
So what's this strange "RaidDevice 2" in state "removed"?
How can I get the fuzzy warm feeling of having a working Raid1 again?
Thanks for any hints
Bernd
The "LED" is not normally blue -- I've never seen one blue. The LED will be yellow when the volume is re-syncing, which is the change I suspect you saw.. Your NAS volume is as it should be, there is no need to do anything. When a drive is a spare, the part of your drives that is blue is green.
23 Replies
- Sandshark · Sensei
md0 is the OS partition, which is a RAID1 (or RAID10 on larger NAS) with a partition on every drive in the system. While its showing a removed drive is unusual, at least if you've rebooted the system, it's not your main issue.
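(Not something you need to do right away, and purely a sketch from my side: if that extra slot on md0 still bothers you once the main issue is sorted, telling md the OS mirror should only expect two members would look something like

# mdadm --grow /dev/md0 --raid-devices=2    # shrink the expected member count of the OS mirror back to two

but the firmware manages md0 itself, so treat that as illustrative only.)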
First, verify XRAID is enabled (green bar on the XRAID button on the volumes page). If you are not in XRAID, expansion isn't automatic.
If XRAID is enabled and you didn't try rebooting the NAS, try that first. It has been known to kick off an expansion.
Your problem is that while you have an md127, which is your data partition and I assume consists of the original 3TB partitions of the drives, it didn't create an md126 from the added 3TB partitions (and likely didn't even create the partitions) and add them to the BTRFS file system. Please post a similar listing for md127, but don't remove the name. I have recently seen a similar issue of non-expansion, and a wrong name was a partial clue. If there is really something you don't want people to know about the name of your volume, then just confirm that the name is the hostid:volumename-0. For example, mine is 7fc780f2:data-0 for volume data.
Also post a listing of fdisk -l /dev/sda /dev/sdb and btrfs filesystem show.
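For background, when X-RAID does pick up the new space on both disks, what it does under the hood is roughly the equivalent of the following (a simplified sketch only -- don't run any of it before we've seen your listings; the sda4/sdb4 partition numbers and the /data mount point are assumptions on my part):

# 1. create a 4th "Linux RAID" partition in the new free space of each disk (sda4, sdb4)
# 2. mirror the two new partitions into a new array:
mdadm --create /dev/md126 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
# 3. add that array to the existing BTRFS volume so the file system can use the space:
btrfs device add /dev/md126 /data
btrfs balance start /data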
- be9000 · Aspirant
Thank you very much for your detailed reply!
I rebooted again (I had already done that, once from the console and once from the GUI).
X-RAID is activated (green bar).
Let's see md127:
/dev/md127:
        Version : 1.2
  Creation Time : Sun May  1 19:48:46 2016
     Raid Level : raid1
     Array Size : 5855672800 (5584.40 GiB 5996.21 GB)
  Used Dev Size : 5855672800 (5584.40 GiB 5996.21 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Feb  3 08:19:51 2021
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

Consistency Policy : unknown

           Name : 117ad524:data-0  (local to host 117ad524)
           UUID : (a UUID)
         Events : 24229

    Number   Major   Minor   RaidDevice State
       3       8       19        0      active sync   /dev/sdb3
       2       8        3        1      active sync   /dev/sda3
fdisk -l /dev/sda /dev/sdb:
# fdisk -l /dev/sda /dev/sdb
Disk /dev/sda: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: (a UUID)

Device       Start          End      Sectors  Size Type
/dev/sda1       64      8388671      8388608    4G Linux RAID
/dev/sda2  8388672      9437247      1048576  512M Linux RAID
/dev/sda3  9437248  11721045103  11711607856  5.5T Linux RAID

Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: (another UUID)

Device       Start          End      Sectors  Size Type
/dev/sdb1       64      8388671      8388608    4G Linux RAID
/dev/sdb2  8388672      9437247      1048576  512M Linux RAID
/dev/sdb3  9437248  11721045103  11711607856  5.5T Linux RAID
btrfs filesystem show:
Label: '117ad524:data'  uuid: (and another UUID)
        Total devices 1 FS bytes used 1.50TiB
        devid    1 size 5.45TiB used 1.51TiB path /dev/md127
Is mdstat showing you an md126 yet?
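A quick way to check over SSH (standard commands, nothing NAS-specific):

cat /proc/mdstat          # look for an md126 line
mdadm --detail --scan     # lists every array mdadm currently sees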