How to "fight" XRAID and win
In some previous posts, I described how to reduce and partially expand a RAID while in FlexRAID mode via SSH. Back then, you could easily switch between XRAID and FlexRAID and back. At some point (possibly unintentionally, but they've never addressed it here), Netgear made it impossible to switch back to XRAID with an "expanded volume", which in this case means one with a second RAID "layer". I recently found myself needing to do something similar again and didn't want to be stuck in FlexRAID mode. It turns out you can simply stop the readynasd process (systemctl stop readynasd), do what you need to do, and restart readynasd (systemctl start readynasd). Everything has to be done via SSH because stopping readynasd also takes down the GUI, but you'd be working over SSH anyway.
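In other words, the whole trick is just this (run as root over SSH; the comments are mine):

# stop the ReadyNAS management daemon -- this also takes the web GUI down
systemctl stop readynasd
# ...do the manual mdadm work without XRAID interfering...
# bring the daemon (and the GUI) back when you're done
systemctl start readynasd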
In my case, I found out that even if you've forced (via switching in/out of FlexRAID) a greater than 6 drive XRAID volume to be RAID5 instead of RAID6, it will convert the second (and I assume third and on, if applicable) layer to RAID6 when you insert a 7th larger drive to vertically expand the volume. The first layer stays RAID5, so this makes no sense and is clearly an oversight by the XRAID programmers. But, it's not going to get changed now.
For those still interested, here is what worked (RAID and drive designators are obviously specific to my NAS).
The last drive upgraded was sdm, and XRAID created partition sdm4 and added it to mdadm RAID md126 while also converting it from RAID5 to RAID6. So, I tried to remove that partition (mdadm /dev/md126 --fail /dev/sdm4 --remove /dev/sdm4). XRAID immediately saw the "available" space and started adding it back to md126 before I could do anything else. So, after waiting for it to sync again, I stopped readynasd and tried it again. That worked -- cat /proc/mdstat showed md126 missing a drive and not re-syncing. I then re-added the partition and converted to RAID5: mdadm --grow /dev/md126 --level=5 --raid-devices=7 --add /dev/sdm4 --backup-file=/temp/mdadm-backupfile. I restarted readynasd, the GUI came back up, and it shows the volume re-syncing. cat /proc/mdstat shows it "reshaping", so I believe it's converting to RAID5 as I wanted, since mdadm didn't complain about the command. FYI, I first tried just converting to RAID5 in place (mdadm --grow /dev/md126 --level=raid5 --raid-devices=7 --backup-file=/temp/mdadm-backupfile), but mdadm complained that I was shrinking the volume (which I wasn't), so I went with the remove-then-add/convert approach, which won't take any longer anyway. I obviously could have removed any of the partitions in md126 instead of the last one added, had I desired.
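Condensed, here is the sequence that worked for me (md126, sdm4, and the backup-file path are specific to my box -- substitute your own):

# keep XRAID from grabbing the partition right back
systemctl stop readynasd
# pull the partition XRAID just added
mdadm /dev/md126 --fail /dev/sdm4 --remove /dev/sdm4
# md126 should now show a missing device and no re-sync
cat /proc/mdstat
# re-add the partition while converting back to a 7-device RAID5
mdadm --grow /dev/md126 --level=5 --raid-devices=7 --add /dev/sdm4 --backup-file=/temp/mdadm-backupfile
# GUI comes back; /proc/mdstat shows the reshape in progress
systemctl start readynasd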
Of course, I may need to do this every time I add a drive from now on. I don't know if it's just the transition from 6 to 7 drives that triggers the conversion to RAID6, or if it'll happen on every expansion once the drive count is above 6.
Re: How to "fight" XRAID and win
So, a follow-up.
Once you are past 6 drives, XRAID will always (well, at least on both the 7th and 8th) convert to RAID6, so you'll have to do the above every time. When I swapped the 8th drive, mdadm at least didn't complain when I tried to convert directly to RAID5. mdadm --grow /dev/md126 --level=5 --raid-devices=8 --backup-file=/temp/mdadm-backupfile almost worked as expected. Almost, because it actually converted to a 9-drive RAID6 with a missing drive (or so cat /proc/mdstat said). But when the sync completed, I just gave it the same command again and it instantly changed to RAID5.
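So the 8-drive pass, condensed (I'm assuming you still stop readynasd first, as above, so XRAID doesn't try to redo the RAID6 conversion underneath you):

# assumption: same precaution as before
systemctl stop readynasd
# first pass: ends up as a 9-device RAID6 with one device missing, reshaping
mdadm --grow /dev/md126 --level=5 --raid-devices=8 --backup-file=/temp/mdadm-backupfile
cat /proc/mdstat
# ...wait for the reshape/sync to finish...
# second pass with the same command flipped it to RAID5 instantly
mdadm --grow /dev/md126 --level=5 --raid-devices=8 --backup-file=/temp/mdadm-backupfile
systemctl start readynasd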
In addition, this was the second drive I added (after the 7th) that is larger than all the others. XRAID did not see the extra space on those two drives and create a 2-drive RAID1 from it, as it would under "RAID5 rules". So it clearly uses "RAID6 rules" once you exceed 6 drives and wanted 4 larger drives before it would create a RAID6 from the extra space.
But, after converting to RAID5, I issued volume_util -e auto and it did the expansion with the extra space, creating yet another RAID "layer".
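That last step, condensed (volume_util is the ReadyNAS expansion utility; the mdstat check is just my sanity check -- the new layer should show up as another md device):

# kick off the expansion using the larger drives' extra space
volume_util -e auto
# confirm the additional RAID layer appeared
cat /proc/mdstat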