Forum Discussion
Retired_Member
Sep 07, 2016
X-RAID vertical expansion incomplete
ReadyNAS 314 running OS 6.4.0
Was previously running with 4x1TB drives. As we came close to filling that up, ordered 4x2TB drives for vertical expansion.
Replaced the disk in bay 1, waited for resync to complete, no problems.
Replaced the disk in bay 2, waited for resync to complete, no problems. After that it began a second lengthy operation (I can't remember the exact verbiage used, but I'm fairly confident that this was expanding the capacity to make use of the two newer disks). No problems.
Replaced the disk in bay 3, waited for resync to complete, no problems. Waited for expansion to complete, no problems.
Replaced the disk in bay 4, waited for resync to complete, no problems. I did not immediately notice that the final expansion did not take place.
This unit has SSH enabled, so when I noticed that the capacity wasn't what I expected (it reports 4.45 TB capacity instead of the expected 5.24 TB), I poked around a bit and found this:
root@netgear:~# btrfs filesystem show /data
Label: '5e276904:data'  uuid: 7e35ca18-0d2e-45f7-aa31-a82747bea137
	Total devices 2 FS bytes used 2.61TiB
	devid    1 size 2.71TiB used 2.63TiB path /dev/md127
	devid    2 size 1.82TiB used 171.00GiB path /dev/md126

btrfs-progs v4.1.2
root@netgear:~# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Tue Jul  9 03:32:40 2002
     Raid Level : raid5
     Array Size : 2915731968 (2780.66 GiB 2985.71 GB)
  Used Dev Size : 971910656 (926.89 GiB 995.24 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Tue Sep  6 17:50:49 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : 5e276904:data-0  (local to host 5e276904)
           UUID : e33eecc1:f9ec98cf:0bae86e1:28333094
         Events : 3594
    Number   Major   Minor   RaidDevice State
       4       8        3        0      active sync   /dev/sda3
       5       8       19        1      active sync   /dev/sdb3
       6       8       35        2      active sync   /dev/sdc3
       7       8       51        3      active sync   /dev/sdd3
root@netgear:~# mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Tue Aug  9 17:01:44 2016
     Raid Level : raid5
     Array Size : 1953245824 (1862.76 GiB 2000.12 GB)
  Used Dev Size : 976622912 (931.38 GiB 1000.06 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Tue Sep  6 17:50:49 2016
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           Name : 5e276904:data-1  (local to host 5e276904)
           UUID : df0953ba:a63f8097:563d7345:af6846f6
         Events : 3031
    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        -      spare         /dev/sdd4
root@netgear:~# mdadm --examine /dev/sdd4
/dev/sdd4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : df0953ba:a63f8097:563d7345:af6846f6
           Name : 5e276904:data-1  (local to host 5e276904)
  Creation Time : Tue Aug  9 17:01:44 2016
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 1953245896 (931.38 GiB 1000.06 GB)
     Array Size : 1953245824 (1862.76 GiB 2000.12 GB)
  Used Dev Size : 1953245824 (931.38 GiB 1000.06 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=72 sectors
          State : clean
    Device UUID : 3de39043:50f0af6e:cbcdc324:337612b0
    Update Time : Fri Aug 12 01:47:27 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 1e90ec50 - correct
         Events : 3031
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : spare
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
It looks like the device kept the original RAID array as-is, with the ~3TB storage capacity in the first 1TB of each drive, built a second RAID 5 array using the next 1TB of each drive, and then used BTRFS features to join them into a single filesystem (which is fine). What I don't understand is why the second RAID array (/dev/md126) is treating the top 1TB of the fourth disk (/dev/sdd4) as a 'spare' rather than an active disk. All four new disks are identical Western Digital Reds.
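For anyone hitting the same thing, the quickest read-only checks that surface this state (a minimal sketch, using the device names from the output above) are:
cat /proc/mdstat             # a spare member shows up with an (S) suffix, e.g. sdd4[3](S)
mdadm --detail /dev/md126    # confirms the role: 'spare /dev/sdd4' instead of 'active sync'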
We have good backups, so in principle I could just wipe the array and start over; however, this array is actively used in a business and the associated downtime is not particularly appealing, especially since I'm not often physically on-site with the unit.
8 Replies
- mdgm-ntgr (NETGEAR Employee Retired)
That's easily fixed.
Btw 6.4.0 is old firmware. Once the problem is fixed you may wish to update the firmware.
- omicron_persei8 (Luminary)
You could try to simply reboot the NAS. Otherwise, I would fail, remove, then add sdd4 from/to md126. If you're not comfortable with fixing the RAID yourself, you could try to contact NETGEAR Support.
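Roughly, that sequence would look like this (a sketch only, at your own risk; since sdd4 is currently a spare rather than a failed member, the --fail step may be unnecessary and --remove alone may suffice):
mdadm /dev/md126 --fail /dev/sdd4      # mark the member failed (likely a no-op for a spare)
mdadm /dev/md126 --remove /dev/sdd4    # detach it from the array
mdadm /dev/md126 --add /dev/sdd4       # re-add it
Note that with Raid Devices still at 3, the re-added partition would most likely come back as a spare again.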
- omicron_persei8 (Luminary)
Actually, you might just need to grow the number of raid devices of md126 from 3 to 4.
- mdgm-ntgr (NETGEAR Employee Retired)
omicron_persei8 wrote:
Actually, you might just need to grow the number of raid devices of md126 from 3 to 4.
Assuming everything is fine, yes. Once the RAID is expanded the volume should automatically expand.
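You can watch the reshape and confirm the result over SSH (read-only commands; device names as above):
cat /proc/mdstat                # reshape progress appears here while md126 grows
mdadm --detail /dev/md126       # afterwards, Raid Devices should read 4 and Array Size roughly 50% larger
btrfs filesystem show /data     # devid 2 should grow from 1.82TiB to roughly 2.7TiB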
- Retired_Member
mdgm wrote:
omicron_persei8 wrote:
Actually, you might just need to grow the number of raid devices of md126 from 3 to 4.
Assuming everything is fine, yes. Once the RAID is expanded the volume should automatically expand.
Am I correct that this would be accomplished simply by running
mdadm --grow /dev/md126 --raid-devices=4
from SSH (assuming the system is in all other ways healthy)? Or are there any special considerations introduced by ReadyNAS or BTRFS?
--- EDIT ---
Nevermind, just saw your PM, mdgm. Thanks for the info, I'll give it a shot after some additional checks.
- mdgm-ntgr (NETGEAR Employee Retired)
Yes, but you do want to check the disks are healthy, partitions are the correct size etc. before doing that.
Doing it yourself is at your own risk.
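For those checks, something along these lines works from SSH (read-only; assuming smartctl and parted are present on the unit, as they normally are on OS 6):
smartctl -H /dev/sda            # overall SMART health; repeat for sdb, sdc and sdd
parted /dev/sdd unit s print    # partition table; sdd4 should match sda4/sdb4/sdc4 in size
cat /proc/mdstat                # both arrays clean, with no resync or reshape still running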