Forum Discussion
powellandy1
Aug 29, 2020 · Virtuoso
Possible Failed RAID5 to RAID6 Conversion
Hi, I have an RN528X. I moved a 6-drive RAID5 (XRAID2) array from an RN516. It consisted of 4x14TB and 2x10TB drives. They had been vertically expanded before (6x6TB -> 6x10TB -> replace 4 with 14TB). ...
StephenB
Aug 29, 2020 · Guru - Experienced User
powellandy1 wrote:
It would appear md127 is still RAID5 - this has already done its reshaping. It looks like free space did go up by 4TB - which wasn't the intention.
Am I right in saying at the end of this I won't have true RAID6 dual redundancy and md127 would be vulnerable to >1 disk failure??
It looks like it only converted the RAID groups with 6 disks (now 7) to RAID-6, and left md127 - which had 4 disks (now 5) - as RAID-5.
That is weird, and I would consider it a bug. It is likely possible to resolve it with ssh - Sandshark has done some experimentation along these lines. It's risky, though, so make sure your backup is up to date first.
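If you do go down that route, it's worth confirming over ssh what each RAID group currently looks like before changing anything. A read-only check along these lines should show the level and member count of each group:
# list all md arrays and any resync/reshape in progress
cat /proc/mdstat
# detailed state of a data RAID group (repeat for each mdXXX listed above)
mdadm --detail /dev/md127
# how the groups are concatenated into the BTRFS data volume
btrfs filesystem show /data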
Sandshark
Aug 30, 2020 · Sensei - Experienced User
I've not done any experiments changing the RAID type of one group of a multiple RAID group configuration. While I'm sure it could be done, the first thing you have to do is turn off XRAID. Otherwise, the OS is going to try to "reclaim" any partitions made free by the SSH commands. And once switched to FlexRAID, this expanded RAID configuration will refuse to go back to XRAID. While it is a lot of work, the only way to have that configuration and XRAID is to back up, factory default, and restore.
While many think XRAID is a proprietary format, it's not. It's "just" a set of logic that determines how to use standard Linux tools (MDADM and BTRFS for OS6, or MDADM, LVM, and EXT for earlier OSes) to expand a RAID volume without operator involvement. I quote "just", because that's not a simple thing. There are a huge number of possible old and new configurations. So, they must have established rules that are intended to handle similar configurations in a similar way. That you ran into a configuration where the logic failed to do what it should is pretty clear, probably because your configuration is a rare one -- maybe even one the programmer(s) didn't anticipate at all.
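As a rough illustration (not the actual ReadyNAS code, just the standard tools it drives), a single-group vertical expansion boils down to something like:
# grow the RAID group onto the enlarged partitions once all members have been replaced and resynced
mdadm --grow /dev/md127 --size=max
# then let BTRFS use the extra space on that device (devid 1 is just an example)
btrfs filesystem resize 1:max /data
XRAID is the logic that works out when and how to run that kind of sequence (and when to create whole new RAID groups instead) without you having to.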
- powellandy1 · Aug 30, 2020 · Virtuoso
Thanks.
So what exactly does the XRAID/FlexRAID toggle button in the GUI actually change??
If I were able to convert md127 and all the prerequisites of XRAID still existed (which I assume is to do with consistent arrays/partitions etc. such that it can automatically expand), can it not be toggled back??
Interestingly, I had a different problem before (https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/Adding-a-4th-10TB-drive-to-a-XRAID-array-on-Pro6-6-9-3-please/m-p/1614423#M147974) when it somehow toggled out of XRAID into FlexRAID, but issuing commands via SSH to correct the issue (vertical expansion not triggering automatically) fixed it, and upon rebooting it was back in XRAID!
Cheers
A
- StephenB · Aug 30, 2020 · Guru - Experienced User
powellandy1 wrote:
So what exactly does the XRAID/FlexRAID toggle button in the GUI actually change??
In XRAID mode, the system automatically expands when new disks are added (or when smaller disks are replaced with larger ones).
In FlexRAID mode, you need to do all the steps manually - creating RAID groups, adding new disks to the appropriate RAID groups, and concatenating the RAID groups together.
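As a rough sketch (not the exact commands the firmware runs), the manual FlexRAID equivalent of one expansion step with the standard tools would be something along the lines of:
# create a new RAID group from spare partitions on the disks (names here are placeholders)
mdadm --create /dev/md124 --level=6 --raid-devices=6 /dev/sd[a-f]6
# concatenate the new group onto the existing BTRFS data volume
btrfs device add /dev/md124 /data
XRAID does the equivalent of this for you when it detects the new space.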
powellandy1 wrote:
If I were able to convert md127 and all the prerequisites of XRAID still existed (which I assume is to do with consistent arrays/partitions etc. such that it can automatically expand), can it not be toggled back??
That should be possible. There are some scenarios when the system won't let you switch from FlexRAID to XRAID - I don't think they are all well characterized.
- powellandy1 · Aug 31, 2020 · Virtuoso
StephenB wrote:
powellandy1 wrote:
So what exactly does the XRAID/FlexRAID toggle button in the GUI actually change??
In XRAID mode, the system automatically expands when new disks are added (or when smaller disks are replaced with larger ones).
In FlexRAID mode, you need to do all the steps manually - creating RAID groups, adding new disks to the appropriate RAID groups, and concatenating the RAID groups together.
I understand that - is XRAID literally a boolean flag?? Is it automatically determined (by a set of rules) when the RAID system is in a state where it can be automatically expanded??
Sandshark - I found this post of yours - https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/Reducing-RAID-size-removing-drives-WITHOUT-DATA-LOSS-is-possible/td-p/1736125
It seems to be what I want - although my circumstance is more complicated (multiple arrays).
root@MediaMaster:~# btrfs filesystem show /data
Label: '0ed5d010:data'  uuid: f1374f27-204a-4692-96d6-554d85e68f77
        Total devices 3 FS bytes used 44.47TiB
        devid    1 size 27.27TiB used 25.81TiB path /dev/md127
        devid    2 size 18.19TiB used 16.73TiB path /dev/md126
        devid    3 size 14.55TiB used 2.08TiB path /dev/md125
root@MediaMaster:~# mdadm --detail /dev/md125
/dev/md125:
           Version : 1.2
     Creation Time : Thu Jun  4 05:01:37 2020
        Raid Level : raid5
        Array Size : 15623254016 (14899.50 GiB 15998.21 GB)
     Used Dev Size : 3905813504 (3724.87 GiB 3999.55 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent
       Update Time : Mon Aug 31 06:52:38 2020
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 64K
Consistency Policy : unknown
              Name : 0a436c4c:data-2  (local to host 0a436c4c)
              UUID : 9b299c45:6236c52f:26afdb09:a5615851
            Events : 18795

    Number   Major   Minor   RaidDevice State
       0       8       37        0      active sync   /dev/sdc5
       1       8       53        1      active sync   /dev/sdd5
       2       8       21        2      active sync   /dev/sdb5
       3       8        5        3      active sync   /dev/sda5
       4       8      101        4      active sync   /dev/sdg5
Do I need to specifically shrink devid 3 (it looks like after a reboot the array in question is now md125)??
btrfs filesystem resize 3:-5t /data
I've picked 5TB to be on the safe side - the array in question is made up of the 4TB differences between the 10TB and 14TB disks across 5 drives.
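Before going any further I'd presumably want to confirm the filesystem actually shrank on devid 3, e.g.:
# check devid 3 (md125) now reports the reduced size
btrfs filesystem show /data
btrfs filesystem usage /data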
I would then proceed with:
mdadm /dev/md125 --fail --verbose /dev/sdg5
mdadm /dev/md125 --remove --verbose /dev/sdg5
mdadm --zero-superblock --verbose /dev/sdg5
mdadm --grow /dev/md125 --array-size 1171743012
mdadm --grow /dev/md125 --raid-devices=4 --backup-file=/backupfile
mdadm --grow /dev/md125 --raid-devices=5 --level=6 --add /dev/sdg5
Can the last two commands be combined (i.e. can you skip the --raid-devices=4 line and just run the last line)??
Does this sound about right??
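Assuming that's broadly right, my plan would be to watch the reshape and then grow the filesystem back once the conversion finishes, along the lines of:
# monitor removal/reshape/conversion progress
cat /proc/mdstat
# once done, confirm md125 reports raid6 with 5 devices
mdadm --detail /dev/md125
# and reclaim whatever is left of the 5TB safety margin
btrfs filesystem resize 3:max /data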
Thanks
Andy