Forum Discussion
btaroli
Dec 01, 2016 Prodigy
Planning Ahead for Capacity Upgrade
Well, the ol' 4TB-based volume in my 516 is down below 3TB free space. FWIW, this somehow got created as a RAID-5 under X-RAID2, but with 4TB disks it wasn't so bad. After some thought and looking at...
- Dec 01, 2016
The volume will only expand when redundant space can be added. So if you have one 12TB disk, after the RAID-6 volume is rebuilt you will still have dual redundancy.
After you've created the RAID-6 volume you can re-enable X-RAID.
In fact, depending on what disks are installed, with a RAID-5 volume of three or more disks you could disable X-RAID and designate the next empty slot so that when it is filled, the new disk is used to add parity (i.e. convert to RAID-6). This conversion does take a long time though.
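For reference, on ReadyNAS OS this conversion is driven by X-RAID/the volume manager, but underneath it is an mdadm level change. A minimal sketch of the equivalent manual steps, assuming the data array is /dev/md127 and the new disk's data partition is /dev/sde3 (both device names are assumptions; check /proc/mdstat for yours). This is only to illustrate what happens, not something to run by hand on a ReadyNAS:

    # Add the new disk's data partition, then reshape RAID-5 -> RAID-6
    mdadm /dev/md127 --add /dev/sde3
    mdadm --grow /dev/md127 --level=6 --raid-devices=4 --backup-file=/root/md127-reshape.bak

    # Watch the (long) reshape
    cat /proc/mdstat

The --raid-devices=4 assumes a three-disk RAID-5 growing into a four-disk RAID-6; adjust for the actual disk count.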
btaroli
Dec 04, 2016 Prodigy
Replaced by ticket 27750891, since I goofed and entered the wrong email in the first one. Fun.
mdgm-ntgr
Dec 04, 2016 NETGEAR Employee Retired
This appears to be a minor issue that may be run into in some rare cases when migrating disks from one chassis to another.
Edit: Your system looks fixed now.
- btaroli Dec 04, 2016 Prodigy
Yes, it seems so. :) A bit of a scare! Hadn't seen this sort of thing, as I've never moved disks between boxes before. Appreciate your help!
- FramerV Dec 04, 2016 NETGEAR Employee Retired
Hi btaroli,
If your issue is now resolved, we encourage you to mark the appropriate reply as the solution using "Accept as Solution", or to post what resolved it and mark that as the solution, so others can benefit from it.
The Netgear community looks forward to hearing from you and being a helpful resource in the future!
Regards,
- btaroli Dec 06, 2016 Prodigy
Well, this thread already has a "solution," since it was meant to be informational anyway. :) But in the case of the volume that was resyncing but not mounting, the issue was that the disks had been moved between NASes. Apparently, the configuration managed by ROS behind the scenes applies a label to the /data volume made up of the hostid (not the host name, but the output of the hostid shell command) and the volume name, usually "data" (for X-RAID), so "hostid:data". ROS uses that volume label to find and mount the /data volume.
But when disks are moved between NASes, that label won't match. Apparently, there is logic to detect this, relabel the volume, and adjust the configuration so that things "just work." Only in this case that didn't happen for some reason. And eventually, my ROS install decided that it couldn't mount the /data volume. But mdadm *did* see the device array and was busy resyncing it as before. So this gave the very odd presentation of a volume that wasn't available and yet was. :)
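If you're curious, the mismatch is easy to see from a support shell. A quick sketch, assuming the data array is /dev/md127 (an assumption; check /proc/mdstat or blkid for the real device):

    hostid                               # id of the current chassis, e.g. 0a1b2c3d
    btrfs filesystem label /dev/md127    # the filesystem's current label, e.g. <old-hostid>:data
    blkid -s LABEL -o value /dev/md127   # same label as blkid reports it

If the part before the colon doesn't match the hostid output, ROS won't find the volume by its label.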
The solution was to manually update the volume label and adjust associated ROS configuration so that the expected label was present and mounted at boot time. Of course, this was conducted by Netgear support over an enabled Support Access shell. :) But just sharing the basic details here for completeness, since you asked.
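In essence the fix boils down to relabelling the filesystem to match the new chassis and pointing the mount at that label. Roughly, as a sketch rather than the exact commands support ran (the device name is again an assumption, and ROS keeps its own config references that also need to line up):

    # Relabel the data filesystem to "<current hostid>:data"
    btrfs filesystem label /dev/md127 "$(hostid):data"

    # Mount it where ROS expects it
    mount LABEL="$(hostid):data" /data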
Oh, and for the record, the rebuild (for second parity stripe) is now at 38%. Holy cow... never ever run these big drives with less than RAID6, folks. I can't imagine what rebuild times are going to be like once I get enough 8TB disks in here to make the second 4TB stripe in the array active. LOL
Question... does the mdadm rebuild benefit at all from additional RAM? I know this particular NAS is under some memory pressure: it arrived with only 4GB, and my installed add-ons plus normal usage keep used memory (excluding cache) at just over 2GB.
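For what it's worth, my understanding is that the md resync itself isn't especially RAM-hungry; the knobs that affect RAID-5/6 rebuild speed are the kernel's resync speed limits and the md stripe cache, and the stripe cache does trade RAM for speed. A sketch of where to look, with /dev/md127 again as an assumed device name:

    # Rebuild progress and current speed
    cat /proc/mdstat

    # Kernel resync speed limits (KB/s)
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

    # RAID-5/6 stripe cache, in pages per device; larger values use more RAM
    cat /sys/block/md127/md/stripe_cache_size
    echo 4096 > /sys/block/md127/md/stripe_cache_size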