

Sandshark
Sensei

How to do incremental vertical expansion in FlexRAID mode using SSH commands

So, Netgear only lets you do vertical expansion in FlexRAID if you replace all of the drives in the volume. I'm here to tell you that, as long as you are comfortable with the command line via SSH, you can do much more. In fact, this takes some of the mystique out of XRAID. XRAID doesn't do the "magic"; it just automates it for the masses. It's really BTRFS that does the magic by allowing a volume to span multiple devices, including growing to include new ones. And, of course, MDADM makes multiple drives or partitions into one RAID "device".

 

I have successfully incrementally vertically expanded a volume on an OS6.9.5 based Ultra4+ NAS in FlexRAID mode, and the steps I followed are below. I chose the Ultra4+ as my "sandbox" because I think it has similar power to the lower tier current NASes (of which I have none), though it is Intel rather than ARM.

 

As is the case with all volume expansions, this puts a strain on your older drives, so having a complete backup before you start is important. This information is provided by me, just a user like yourself, with no warranty of any kind. It worked for me, and I copied the commands given here directly, but any time you drop to SSH, you risk screwing up the ReadyNAS OS or database. Doing this particular set of commands wrong has a high probability of that happening. Read it through before starting. Maybe twice. If you have the luxury, try it on a non-production device first (with nothing, or at least much less, to restore if you end up having to factory default to recover). If there is insufficient information here for you to figure it out, that likely means you shouldn't try. But Google can be your friend, as it was mine, if your confusion is minor or your configuration is different enough from mine. I'd never done anything like this before, and I'm not really a Linux guru, either, so Google was my primary reference. (I'm a hardware guy.) Make sure you are on a UPS, as a power failure during this can make things go sideways.

 

Don't even try any of this with the NAS still in XRAID mode, as you and the OS will fight each other, and the OS is a lot faster than you and will win. If you switch from XRAID to FlexRAID to try it, be aware it's a one-way trip. Contrary to a Netgear KB article on the subject, you cannot switch from FlexRAID back to XRAID once you have a vertically expanded volume (even if it was XRAID that did the expansion before you switched).

 

You start by inserting the new, larger drives one at a time and allowing them to re-sync the existing data and OS volumes. You are best off if you re-boot afterward so the drive order gets re-assigned in natural order (sda is drive 1, sdb drive 2, sdc drive 3, etc.) just to reduce confusion. Then, use the following steps via SSH to create another RAID group and expand the file system to include it:

First, find out the current device configuration with:

 

 

# cat /proc/mdstat

 

You should see the md0 boot partition as RAID1 including the first partition on each of your internal drives (sda1, sdb1, etc.); the md1 swap partition as RAID10 using the second partition of all drives excluding any in external chassis like the EDA500 or EDA4000 (sda2, sdb2, etc., except that, for reasons unknown, one of my 4200V2's only uses 11 of the 12); and md127 (md126, md125, ...) as the appropriate RAID type(s) for the drives and partitions making up your data and/or other volume(s). My example is for a 4-drive array and just the one default /data volume on RAID5 array md127, which encompasses partitions sda3, sdb3, sdc3, and sdd3. I'm increasing the size of two drives, as would likely be done to incrementally expand.
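
For illustration, the output on a 4-drive array like mine looks something like this (the block counts and the personalities list will differ on your NAS, so treat this as a sketch):

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sda3[0] sdb3[1] sdc3[2] sdd3[3]
      2895741184 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid10 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      4190208 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>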

 

 

Next, create partitions in the unused space with fdisk and change their type to Linux RAID. Rather than typing out a blow-by-blow, here is a reference: How-to-use-fdisk-to-manage-partitions-on-linux. Note that the partition type list in that example is not correct for Debian on the NAS, so make sure you get the proper list and set the partition to the correct type. If the drives are not all the same model, be sure to check that the number of free sectors is the same on all of them before accepting the defaults for partition creation, as the partitions should be the same size on all drives. cfdisk can also be used if you are more comfortable with something more interactive (cfdisk tutorial), but I got a segmentation fault when I selected "help", so I wasn't sure I should trust it.
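
A rough sketch of the fdisk dialog, assuming the new partition goes on drive 3 (sdc) as partition 4 (the exact prompts and the Linux RAID type number vary with fdisk version, so use L at the type prompt to list the choices and confirm the number on your NAS):

# fdisk /dev/sdc
Command (m for help): n      (new partition; accept the defaults if the free space matches the other drives)
Command (m for help): t      (change the type of the new partition)
Partition number: 4
Partition type or alias: L   (list the types, find "Linux RAID", and enter its number)
Command (m for help): w      (write the changes and exit)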

 

Reboot again so that the new partitions get "registered" in /dev (maybe not the proper term, but you get the idea). Confirm all is as you think with

 

# fdisk -l

 

 

Create a new RAID1 group with the new partitions and let it sync. If you have more drives, you can start the expansion with two drives in RAID1 and then add more later, just as XRAID does, but you should also be able to go right to RAID5 with more drives, though I have not tested that. I called the new RAID group md126, being consistent with XRAID's numbering starting at 127 and working down. If you already have more than just md127, then use the next lower number than what you currently have. I started by replacing drives 3 and 4, so my new partitions were sdc4 and sdd4. You can leave out the verbose option, but I like the confirmation it provides.

 

 

# mdadm --create --verbose /dev/md126 --level=mirror --raid-devices=2 /dev/sdc4 /dev/sdd4

 

 

Or, optionally but untested, for three drives simultaneously:

 

 

# mdadm --create --verbose /dev/md126 --level=5 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4

 

Let the new RAID group sync. You can use the UI to see sync status progress, use cat /proc/mdstat for current status, watch cat /proc/mdstat for constant updates, and/or use top to see the processes in action. During this time, the NAS will also complain about unused volume(s), but you'll fix that later. Once the sync is complete, it should be safe to reboot, but I didn't try it since there was the complaint about unused volumes, and I don't know if that could affect booting. The next step doesn't take long, anyway.
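
For reference, those monitoring commands from SSH (watch refreshes its display every two seconds by default; Ctrl-C exits):

# cat /proc/mdstat
# watch cat /proc/mdstat
# top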

 

Next, add the new RAID group to the /data (or whatever) BTRFS volume:

 

 

# btrfs device add /dev/md126 /data
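
If you want to confirm the volume now spans both RAID groups, this should list both md127 and md126 as devices:

# btrfs filesystem show /data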

 

 

I'm not sure if this is the order in which XRAID does these things or not. Another way that works is to create the RAID1 with one drive missing, then expand the volume, then add the second drive and let it re-sync. But I think by creating and syncing the RAID first, you can delete the new RAID and re-try with no impact to your data if something goes awry. Plus, I think that allows you to avoid additional syncs if you are adding more than two drives. Just change the RAID level, number of partitions, and the list of partitions, as shown in the second example of RAID creation (again, untested by me).

 

At this point, a reboot will allow the NAS to recognize that you now have a two-layer volume with the added space, displaying in the UI exactly as it would had you been in XRAID when you went through the drive swaps and then switched to FlexRAID. And so, I'm satisfied these steps have duplicated everything XRAID does, except perhaps still needing a balance and not in the same order.

 

If you have a third larger drive and did not do all the initial insertion, syncing, and partitioning with the other drives, or if you later add a third, here is that process:

 

Insert it, let it re-sync the md0, md1, and md127 arrays, and create a partition in the unused space just like with the first two. Be sure to do the re-boots as listed above, too. Then, switch to RAID5 and add a partition:

 

 

# mdadm /dev/md126 --grow --verbose --level=5 --raid-devices=2 
# mdadm /dev/md126 --grow --verbose --raid-devices=3 --add /dev/sdb4

 

This will require another re-sync.

 

Note that I tried this in one step (# mdadm /dev/md126 --grow --level=5 --raid-devices=3 --add /dev/sdb4), and it totally locked up my NAS with re-sync demanding 100% of CPU and yet making no progress, and I had to start over with the 4x1TB volume. Since the first command takes no time or resources, I haven't put my finger on the cause. Since all the drives in my sandbox collection have >30 re-allocated sectors, it could have been a drive issue. But you have been warned. When I separated the commands as above, it worked as expected, using the same drives.

 

I expected to have to expand the volume to include the new space, but it happened by itself. You can see that with df /data. To see it in the NAS UI, you need to refresh the browser page. If it doesn't expand for you, this should do it:

 

 

# btrfs filesystem resize max /data

 

You can add as many devices as you need this way; just increase the number of devices. --level= isn't needed unless you are changing to RAID6. For RAID10 or RAID50, you're on your own. Once you've expanded the volume to include any of the new RAID, adding more than one drive at a time will likely not work, as a RAID5 missing two drives is "dead", not "degraded", and can't retain your data. I'm not sure anything will stop you from trying it and thus killing your volume, so be warned.
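
For example, following the pattern of the grow commands above, taking the new RAID group from three devices to four might look like this (untested by me; the partition name is assumed):

# mdadm /dev/md126 --grow --verbose --raid-devices=4 --add /dev/sda4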

 

Not all sites I reviewed said a BTRFS balance is necessary after an expansion. But even if not required for full protection, it's probably a good idea, as it will validate the volume. Do this in SSH or the UI, your choice.
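
From SSH, that's:

# btrfs balance start /data

Be aware that a full balance can take many hours on a large, well-filled volume.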

 

Now, this process also seems to allow you to expand a RAID6 with just two drives in a RAID1 layer or three in a RAID5. But keep in mind that the entire volume is only as protected as its least-redundant RAID group. So if two of the drives in the lesser RAID group fail, the whole volume is lost. You can see why XRAID didn't implement it, as it could give a false sense of higher security.

 

It should also allow you to expand by adding drives smaller than the largest in the system. Just create the partition(s) and add to an existing RAID group. XRAID won't let you do that.

 

And it should also allow you to create a separate volume on the "leftover" space on larger drives rather than adding to the main RAID, but I also have not tested that. (It'll work, I just don't know for sure if ReadyNASOS will recognize the added volume.)

 

Something akin to this process should also allow you to do something else Netgear doesn't give you the tools to do: reduce the number of drives in an array.

 

You can pretty much see why Netgear has likely not made all of these options available through XRAID. It would be hard for it to figure out the next step, especially making sure you don't end up with a convoluted volume construct with no clear forward expansion path. But it should be possible (though not easy) to make them available in FlexRAID mode via the UI.

I've hinted above at some extensions of my experiments. I may attempt some of them in the future, but others are welcome to pick up the torch and explore ahead in any of the suggested directions. It would be nice to know what is actually possible, and then see if Netgear will consider making more of it available via the UI.

Message 1 of 9
Retired_Member
Not applicable

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

Hi @Sandshark, thanks for posting this valuable information, and kind regards.

Message 2 of 9
StephenB
Guru

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

Thanks for posting.  Have you tried reversing an expansion?  The most common scenario here is someone adding their second disk, and expecting JBOD or RAID-0 instead of RAID-1.

 


@Sandshark wrote:

 

Now, this process also seems to allow you to expand a RAID6 with just two drives in a RAID1 layer or three in a RAID5. 

Another option (which maintains dual redundancy) is three in RAID-1.

 


@Sandshark wrote:

It should also allow you to expand by adding drives smaller than the largest in the system. Just create the partition(s) and add to an existing RAID group. XRAID won't let you do that.

If the partition size matches the size of an existing RAID group, that will work (and although Netgear doesn't say much about it, XRAID will also work in that case).  For example, if you had 2x4TB and later expanded it to 2x6TB, then XRAID in OS 6 will actually accept a third 4TB drive.

 

But if you had 2x6TB to begin with, you can't add a 4TB drive to that RAID group.  You'd have to create a new group (on new drives).

Message 3 of 9
Sandshark
Sensei

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

Yes, there are some additional specifics about expansions I didn't cover.  Thanks for adding that.

 

I started on the experiment of removing a drive, and the whole thing locked up on a re-sync again.  Wish I knew which drive was causing that.  I'm starting over, but with a simpler system: 3x1TB RAID5 changing to 2x1TB RAID1, because I know the troublesome drive is one of the 2TB ones.  From there, I can try going back to a single drive so the second can be added as RAID0 or JBOD.  From what I've read, that second step is actually pretty straightforward, since each drive contains all the data.

 

I've got the RAID sync'ed and am adding some data.  So, unless it all crashes down on me again, I should have some results in a couple days.

Message 4 of 9
Sandshark
Sensei

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

See Reducing-RAID-size-removing-drives-WITHOUT-DATA-LOSS-is-possible for successful reduction experiments.

Message 5 of 9
Sandshark
Sensei

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

I have done some additional testing and have found the following:

It is possible to concatenate all of the mdadm options in one command. The sync problem I had when doing so was definitely a result of a failing drive. So # mdadm /dev/md126 --grow --level=5 --raid-devices=3 --add /dev/sdb4 would have worked with better drives.

It is possible to add more than one drive at a time to an existing RAID volume, so long as it already consists of two drives. You cannot add two drives to a single-drive RAID1 (which is what Netgear's single drive "JBOD" really is) and simultaneously convert it to RAID5.

You can (using --force) add one drive to a single-drive RAID1 and simultaneously convert it to a two-drive RAID5. This goes a lot faster than adding a drive and making it a two-drive RAID1. Of course, like a single-drive RAID1, a two-drive RAID5 is non-redundant. But the NAS will say the volume is healthy. I suspect that's fall-out from it calling a single-drive RAID1 (aka JBOD) volume healthy. But as long as you intend to add at least a third drive, it is a faster way to expand a single drive system to a three-drive RAID5 because you avoid the excruciatingly long sync for a single to two drive RAID1. [I have yet to figure out why that seems to take so much longer than other syncs.]
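
Following the pattern of the earlier grow commands, that conversion should look something like this (a sketch only; the partition name is assumed):

# mdadm /dev/md127 --grow --force --level=5 --raid-devices=2 --add /dev/sdb3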

BTRFS always expands to fill a RAID group that you expand, assuming the BTRFS volume already includes that RAID group. I don't know if that's new in BTRFS and the articles I Googled are out of date, or if this is unique to the ReadyNAS and some process it runs in the background.

Message 6 of 9
Sandshark
Sensei

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

I have verified that it is possible to add multiple drives at once. But you must start with at least a 2-drive RAID1 when you do. So, adding partition three of two drives in slots 3 and 4 to an existing two-drive md127 RAID1 would require:

 

# mdadm /dev/md127 --grow --level=5 --raid-devices=4 --add /dev/sdc3 /dev/sdd3

I have also found that it is possible to grow a single-drive JBOD (really a 1-drive RAID1) to a 2-drive RAID0. But you have to do it in two steps. The following grows a single-drive JBOD by adding partition three of a drive in slot 2:

 

# mdadm /dev/md127 --grow --level=0
# mdadm /dev/md127 --grow --raid-devices=2 --add /dev/sdb3

This will temporarily change it to RAID4, do a sync, then change it to the desired RAID0.

 

If you are adding a drive (like the expansions above, or for any reason) rather than replacing one, it is difficult to set up the proper OS and swap partitions (fdisk doesn't like to start where md0 does on the ReadyNAS) and also to make sure that those partitions get synced with the others that make up the OS and swap RAIDs. The best way I found to do that is to use the GUI to tell the NAS to create a new volume on that drive, then DESTROY the volume (which you can do before it completes, if desired). You can then use cat /proc/mdstat to see the md0 and md1 syncs complete and then use fdisk to create the partition(s) you intend to add.

 

This ends my experiments. I've not touched on RAID6, RAID10, or RAID50, but I'm sure the process is pretty much the same; you just have to ensure you have the correct minimum number of drives. If it refuses to do a conversion, you may need to break it into two steps, like the RAID0 expansion above.

Message 7 of 9
StephenB
Guru

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

Thanks for posting this.  My "spare" NAS are just 2-bay units, so there aren't a lot of experiments I can do.

Message 8 of 9
Sandshark
Sensei

Re: How to do incremental vertical expansion in FlexRAID mode using SSH commands

Glad I could potentially help others.  I learned a lot in the process, myself.  As a bonus, the bad drive finally failed completely, so I know not to use it in any future experiments.

 

This is actually also useful for those who may want to use MDADM and BTRFS on a generic Linux system.  They can get XRAID-like expansion, just not automatically.

Message 9 of 9