Forum Discussion
berillio
Aug 31, 2023 · Aspirant
Best / quickest / least painful way to “vertically” expand a NAS when having a full data backup
Hello Forum This is – somewhat – a follow up to “vertical expansion of the wrong NAS” (https://community.netgear.com/t5/New-ReadyNAS-Users-General/vertical-expansion-of-the-wrong-nas/m-p/2157308#M47...
StephenB
Aug 31, 2023 · Guru - Experienced User
berillio wrote:
But as I am constantly needing more data space, a few days ago I bit the bullet and bought 2x 12TB Seagate Ironwolf, with the intention of fitting them to the RN214a (the “wrong” NAS in that previous post), simply because adding just 2 disks would give me an extra 8TB, which should be ample for now. I could expand again at a later stage.
Has anybody used such a big array in an RN214?
I don't have an RN214, but I have two 14 TB drives in an RN202, and have not had any issues. One is a WD Red Pro, the other is a Seagate Exos. It should be ok.
Any particular reason you are still on 6.10.3 (2020 firmware)?
berillio wrote:
For example, I could:
- backup the existing configuration, then remove all the disks (if I deleted the Volume at that moment, the disks would be wiped and I would lose a 2nd parachute backup);
- insert one 12TB, create the new Volume, and copy all the data;
- insert the 2nd 12TB disk. If I have chosen X-RAID, the NAS should copy over the data making a mirror image (RAID1, I guess).
- insert one (wiped/reformatted) 4TB disk. The NAS should then resync, going into RAID5 (I presume that at this stage, with a dissimilar-size disk, the NAS would create md126 and use different RAID levels for md127 and md126);
- insert the 4th (4TB) disk and wait for another resync.
Don't do this. Trying to add a smaller disk to the array will usually fail outright. There are a few cases where I've seen the NAS add smaller disks when you install the second one, but you would lose capacity.
If this worked with 2x12TB+2x4TB you'd end up with 16TB instead of 20TB.
The flaw in your reasoning here: md127 normally is built from partitions on each disk (sda3, sdb3, sdc3, sdd3). md126 is normally built from partitions on the two largest disks (sda4, sdb4 for example). So for this to work properly, the two biggest disks need 4 partitions (sda1 for the OS, sda2 for swap, sda3 for md127 and sda4 for md126). sda3 is the size of the smallest disk (4 TB in this case), sda4 fills the remaining space (8 TB in this case).
But when you start with 12 TB drives, the NAS will create a 12 TB sda3 partition. It will not attempt to repartition the 12 TB drives when you add the smaller ones. So at best you wouldn't end up with RAID-5. You'd end up with two RAID-1 groups instead - which wastes space.
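The capacity arithmetic behind the 16 TB vs 20 TB figures can be sketched in a few lines of Python. This is a rough model of the X-RAID layering described above (a RAID-5 layer sized by the smallest disk, plus a RAID-1 layer on the leftover space of the two largest); the function names are mine, sizes are in TB, and partition overhead and base-2 vs base-10 differences are ignored:

```python
def xraid_capacity(disks):
    """Approximate X-RAID usable space: md127 as RAID-5 across all disks,
    sized by the smallest member, plus md126 as a RAID-1 mirror on the
    leftover space of the two largest disks."""
    disks = sorted(disks, reverse=True)
    smallest = disks[-1]
    raid5 = smallest * (len(disks) - 1)  # RAID-5: n-1 disks' worth of data
    leftovers = [d - smallest for d in disks if d > smallest]
    raid1 = min(leftovers[:2]) if len(leftovers) >= 2 else 0  # mirrored pair
    return raid5 + raid1

def two_mirrors(disks):
    """The degraded outcome described above: two independent RAID-1 pairs,
    because the 12 TB sda3 partitions cannot accept 4 TB members."""
    disks = sorted(disks, reverse=True)
    return disks[1] + disks[3]  # usable space of each mirrored pair

print(xraid_capacity([12, 12, 4, 4]))  # 20 (TB) when partitioned correctly
print(two_mirrors([12, 12, 4, 4]))    # 16 (TB) as two RAID-1 groups
```

So building the array in the wrong order costs 4 TB of usable space, exactly the difference between the two layouts.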
Also, as far as disk stress goes, it's the repeated syncs that create the disk load (and take the extra time needed when upgrading all four disks). Your process doesn't reduce the number of syncs. If you want to start over, the way to do that is to insert all four disks and do a factory default. That will build the volume in the most efficient way (with the least disk i/o).
I think you are over-thinking this. In this situation I'd just do the normal vertical expansion - hotswapping one disk with the 12 TB one, waiting for the resync to complete, and then hotswapping the second one.
berillio wrote:
Could I just do (1), then add one disk, create the Volume structure, copy half a dozen small text files (one in each share) and then add all the disks as I said earlier? This way resync time would be a matter of seconds. When I then copy the FULL DATA, it would be striped across the array as it is copied. That should be far less demanding on the drives
RAID runs "below" the file system - creating virtual disks (md126 and md127) that the file system uses. The resync time doesn't depend on how full the file system is - RAID neither knows that nor cares. It is exactly the same process for an empty volume as a full one. So no time savings if you do this.
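A back-of-the-envelope estimate makes the point concrete: since a resync walks the whole md partition block by block, the time depends only on partition size and sustained throughput, never on how much data the file system holds. The 150 MB/s figure below is an illustrative assumption, not a measurement (real resync speed varies with the drive and the NAS's background throttling):

```python
def resync_hours(partition_tb, mb_per_s=150):
    """Rough mdadm resync estimate: the whole partition is walked block
    by block, whether the file system on top is empty or full."""
    total_mb = partition_tb * 1_000_000  # TB -> MB (decimal, as drives are sold)
    return total_mb / mb_per_s / 3600

# Same partition size, empty volume or full volume: identical estimate.
print(f"{resync_hours(4):.1f} h per 4 TB member")   # ~7.4 h
print(f"{resync_hours(12):.1f} h per 12 TB member") # ~22.2 h
```

Copying a handful of text files first changes nothing in this calculation; only the partition size (and per-disk throughput) matters.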
berillio wrote:
Just for info, disks age and usage is as follows:
on RN214a, 4x 4TB disks
Bay 1, 26 Jan 2020, 25205 h;
Bay 2, 15 May 2018, 34603 h;
Bay 3, 26 Jan 2020, 25213 h;
Bay 4, 7 Mar 2018, 36918 h.
But I have a fresher 4TB disk with 18456 hours of use. I could use that one, easily done if I am rebuilding the volume from scratch
Generally when I am replacing disks for expansion, I remove the oldest. So I'd replace disk 4 with a 12 TB model, and after the resync completes I'd replace disk 2.
If you want to use the younger spare, you could also replace the disk in either bay 1 or bay 3 with that disk. I'd do that first, just to get it out of the way.
berillio wrote:
Incidentally, just a THERMAL concern: wouldn’t it be better to place the 12TB disks in bays 1 & 4 and fit the 4TB disks in the middle, as the middle drives always tend to run a little hotter? (Easier done if I choose to create the new volume from scratch; with a "conventional" expansion I would need another disk swap & resync, as the disk in Bay 2 is the 2nd oldest disk in the existing array.)
Not sure how much it matters. If you want to do that, you could power down and reorder the disks before you begin (swapping disk 1 and disk 2, so the oldest drives are in slots 1 and 4). Then boot up and proceed with the replacements.
berillio wrote:
Incidentally, I was advised to “test” the new drives before installing them, so I initially ran a “quick” scan of both drives on a USB adaptor; now I am testing the first of the two drives (installed in a PC on a SATA3 socket) using Victoria 5.36 (full-scan sequential test); that should take another ~7h (~14h in total to scan the entire disk).
I use Victoria because I am reasonably familiar with it, it provides easily understandable information and it seems to work; I also remember using the no-nonsense old WDDLGD, while I am not familiar with the latest versions of SeaTools and WD Dashboard.
Is there any other software or test which I should run?
If Victoria has a full erase test, then I'd also run that. SeaTools does have that test (Dashboard does not).
My reasoning here is that I have sometimes had new disks pass the full read scan, but fail the write. (I've also had disks that fail on the read scan, and pass on the write).
- StephenB · Aug 31, 2023 · Guru - Experienced User
Thinking about this a bit more... wanting to use the younger 4 TB drive in addition to the two 12 TB drives might change the calculus some.
One option is to power down the NAS, and replace the three disks - setting the ones you remove aside. Leave the one disk you want to continue to use in the system. Then do a factory default from the boot menu, set up the NAS again, and restore the data from the backup. Personally I'd also upgrade the firmware to 6.10.9.
Advantages are:
1. gives you a completely clean system
2. gives you a fallback if something goes horribly wrong (as the NAS will boot up with the original volume if you power down and reinsert the three disks you remove - leaving the fourth slot empty)
3. rebuilds with minimum disk i/o
Personally I don't think (3) is a major consideration in your case, as you are replacing healthy disks and have a full backup.
- berillio · Sep 01, 2023 · Aspirant
Hello Stephen, TY for coming back.
First of all an apology: the disks were tested using a SATA2 socket – not SATA3 (I always meant to say that, I just got the digit wrong).
“Any particular reason you are still on 6.10.3 (2020 firmware)?”
“if it ain’t broke, don’t fix it”, my NASs are all working fine. But I take the advice to upgrade to 6.10.9 – I presume that the best time to do it is after I do the “Factory Default”, in the procedure you recommend in your 2nd message.
Also... change of plan: I just bought 2x 12 TB Ironwolf Pro – I should have them Tuesday.
So I wonder if it wouldn’t make more sense to have the 4x 12TB array in the RN424, which is a better machine.
I know that the RN424 is Intel-based and the RN214 is ARM-based, but I do not know if I can simply and safely move an array from an Intel-based platform to an ARM-based platform.
If that is possible, I could:
- Save the config, remove the array from the RN424 (4x 8TB) and put it in the RN214a, and check if it reads the array.
- Refit the RN424 with the 12TB drives, do a factory default, upgrade to 6.10.9 if so recommended (it has 6.10.4 Hotfix 1).
- Copy over all the data from the RN214 which has the original array.
- Then on the RN214a, I would save the configuration and delete the existing volume;
- Maybe do a factory default? Would that be necessary/advisable? (That array has always had 8TB drives.) Then upgrade the FW to 6.10.9;
- Copy over the existing backup of the data which is on my PC (currently testing both 12TB IronWolf drives with Victoria v5.37, W-R-V test since yesterday, ~11h to finish).
That is assuming that an ARM platform has no issues with an array created on an Intel platform. If that is NOT the case, I guess I would follow the same procedure, but on the RN214 (save config, remove drives, refit drives, factory default, firmware update, copy over the backup).
Same size drives, no resync necessary, easy.
I simply need to wait for the new drives and test them. Fingers crossed...
- Sandshark · Sep 01, 2023 · Sensei
Yes, you can move an array from an OS6 ARM unit to an OS6 Intel unit (and vice-versa). There is a very slight difference in the way the drives are formatted on the ARM version (the OS partition is EXT instead of BTRFS, and that will remain), but it's not really a factor you need to worry about.