berillio
May 24, 2022 · Aspirant
RN214 goes Offline and NICs may be dead
Hello RN Forum, As you may remember from other posts (still on hiatus, sic), my current setup in Birmingham UK is made of: RN214a (4x WD40EFAX), FW 6.10.3, RAID 5; RN214b (3x WD80EFBX + 1x WD40EFR...
berillio
Aug 03, 2022 · Aspirant
Hello Forum, apologies for the delay, and thank you for your patience.
Too long a story. I spent over two months copying the data that could be seen on Array B when loaded in Unit A. Some data was lost (half a dozen folders), but the bulk (9.6 TB) could be copied over. The snapshots, ~4 TB, were lost: although the Windows switches were ticked on, neither the dashboard (which would freeze after ~20 seconds), nor Explorer, nor Total Commander (used to resolve some “long name” issues) could actually SEE any files. The snapshot folders seemed to be empty (there were two “dated folders”, which then disappeared at some stage during the copying, which took forever).
Unit B was tested with a test disk, swapped bays, etc., and seemed flawless. The power bricks were OK too (I had tested them before receiving the advice).
The “hang” on Unit B was eventually resolved (OS reinstall with Array Y on Unit B; a previous attempt to backdate the FW on Unit A was unsuccessful). Array Y is now back in Unit B and seems perfectly operational on eth1, streaming videos 24/7 to the TV (I also had 4 VLC processes running videos on a loop on two PCs simultaneously for a couple of days, and it did not seem to suffer).
It seems to me that the issue was with the Array more than with Unit B itself.
The fan seems to keep HDD temps below 45°C on “Cool”, below 50°C on “Balanced”, and at ~52-53°C on “Quiet” (I presume it would also respond to CPU temps, but Unit B does not seem to be stressed right now).
Now:
a) Which sort of test should I run to ensure that Unit B is actually “performing as it should”? (I just switched to eth0 to see if I have issues with that NIC.)
b) Array Y was (originally) a 4x4TB array which was vertically expanded to 3x WD80EFBX + the last 4TB. Since the entire dataset has been backed up, it makes sense to me to upgrade the 4th HDD to 8TB, do whatever is advised to get a SIMPLE RAID 5 (a factory reset, maybe), and start afresh without the md126/md127 two-grouping (could that have been responsible for the array corruption?).
c) The WD80EFZZ is not on the compatibility list for either the RN424 or the RN214. Is anybody familiar with this drive yet? (128 MB cache and 5640 rpm..??) The WD80EFBX seems to be “End of Life”, and the WD80EFAX is nowhere in sight either.
Many Thanks in advance, Berillio
StephenB
Aug 03, 2022 · Guru
berillio wrote:
a) Which sort of test should I run to ensure that Unit B is actually “performing as it should”? (I just switched to eth0 to see if I have issues with that NIC.)
I'd run the maintenance tests (putting them on a schedule). Personally, I run one test a month (cycling through them 3x a year).
You could also measure transfer speeds.
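One simple way to measure transfer speeds without extra tools is a timed file copy. Below is a minimal Python sketch, assuming the NAS share is mounted or mapped on the client; the Z: path, file size, and chunk size are placeholders to adjust:

```python
import os
import time

# Hypothetical mapped path to a NAS share; adjust to your own mapping
# (a drive letter on Windows, or an SMB/NFS mount point on Linux).
SHARE_PATH = r"Z:\speedtest.bin"
SIZE_MB = 1024          # write 1 GiB so caching effects stay small
CHUNK = 1024 * 1024     # 1 MiB per write

def write_test():
    data = os.urandom(CHUNK)
    start = time.time()
    with open(SHARE_PATH, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())    # push the data out of the OS write cache
    return SIZE_MB / (time.time() - start)

def read_test():
    # Note: the client may still cache the file it just wrote; if the
    # read figure looks implausibly fast, re-run after a reboot.
    start = time.time()
    with open(SHARE_PATH, "rb") as f:
        while f.read(CHUNK):
            pass
    return SIZE_MB / (time.time() - start)

if __name__ == "__main__":
    print(f"write: {write_test():.1f} MiB/s")
    print(f"read:  {read_test():.1f} MiB/s")
    os.remove(SHARE_PATH)      # clean up the test file
```

Running it once per NIC (switching the cable or the active interface in between) gives a rough eth0-vs-eth1 comparison.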
berillio wrote:
b) Array Y was (originally) a 4x4TB array which was vertically expanded to 3x WD80EFBX + the last 4TB. Since the entire dataset has been backed up, it makes sense to me to upgrade the 4th HDD to 8TB, do whatever is advised to get a SIMPLE RAID 5 (a factory reset, maybe), and start afresh without the md126/md127 two-grouping (could that have been responsible for the array corruption?).
Those two RAID groups will last as long as the volume does. Upgrading the last disk won't change that.
But after the upgrade, you could destroy the volume, create a new one, and switch back to XRAID. Or alternatively do a factory default with all disks in place. That will result in a single RAID group (until you choose to expand again). Of course you will need to recreate shares, reinstall apps, and restore all the data.
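If you want to verify the result afterwards: ReadyNAS OS is Linux underneath, so with SSH enabled you can read /proc/mdstat on the NAS itself. A minimal sketch (assuming shell access) that lists the md arrays, so you can confirm the data volume is a single RAID group (md127 only) after the rebuild:

```python
# List the md arrays from /proc/mdstat. On a vertically expanded volume
# you would see both md126 and md127; after a clean rebuild with equal
# disks, only md127 (plus the small OS/swap arrays) should remain.
import re

with open("/proc/mdstat") as f:
    for line in f:
        m = re.match(r"^(md\d+)\s*:\s*(\w+)\s+(\w+)\s+(.*)", line)
        if m:
            name, state, level, members = m.groups()
            print(f"{name}: {state}, {level}, members: {members}")
```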
berillio wrote:
c) The WD80EFZZ is not on the compatibility list for either the RN424 or the RN214. Is anybody familiar with this drive yet? (128 MB cache and 5640 rpm..??) The WD80EFBX seems to be “End of Life”, and the WD80EFAX is nowhere in sight either.
Not surprised that it's not on the HCL. Netgear has always been very slow to update it, and all indications are that they are abandoning the NAS products, so there's not much incentive for them to keep testing drives.
It should be compatible. The specs show it is lower-performing than the WD80EFBX and WD80EFAX (but it also uses less power). https://products.wdc.com/library/SpecSheet/ENG/product-brief-wd-red-plus-hdd.pdf
You can of course also look at IronWolf (mix and match is OK).
- berillio · Aug 05, 2022 · Aspirant
Thanks StephenB
Ordered (and got) a Seagate IronWolf ST8000VN004 (the 022 is End of Life; same specs, 256 MB cache, a tad noisier, maybe).
Re the New Volume with 4x 8TB, without “grouping”.
The SM says that I can only destroy a SINGLE volume if I am in Flex-RAID, but although I am in X-RAID, “Destroy” is not greyed out(?). So supposedly I could:
- destroy the volume & power off
- swap the 4TB with the new 8TB & power on again
- Create a new Volume
That should avoid a double syncing operation and save time.
Alternatively, I could do a factory reset. Could I then import a configuration backup from this unit, or would it carry the md126/127 grouping? (If so, I could use the configuration from one of the other units and edit the name and fixed network settings.)
But I am still uncomfortable, as we don’t really know what happened and why.
I wish to be reasonably confident that I can RELY on this piece of hardware before it goes out of Warranty.
I ran an HDD test: it finished without highlighting any issues.
I ran some transfer speed tests to highlight possible differences between eth0 & eth1, but I cannot see anything appreciable (within 2%).
“I'd run the maintenance tests (putting them on a schedule). Personally, I run one test a month (cycling through them 3x a year).”
I think you were referring to the maintenance schedules (scrub, defrag, balance). They should help keep the disks in good health, but I was asking if there was something else I could do to “stress” the hardware to highlight any weaknesses.
With regards to defragging, I was planning to enable “Autodefrag” with the checkbox: I don’t have any iSCSI devices (that I know of), and 90% of the data is video or images in storage; once written, it is simply read out.
There is another thing which has gone wrong: the email alerts stopped working some time ago, and I cannot seem to get them working again.
They use a Gmail address mailing a Yahoo address, so all the server fields are filled in automatically (but even filling them in myself under Advanced does not change the result): I get error 500.801.0001.
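One hedged suggestion on the timing: around mid-2022 Google stopped accepting plain account passwords from “less secure apps”, so Gmail SMTP now generally needs an app password. A minimal Python sketch to test the credentials outside the NAS (the addresses and password below are placeholders):

```python
# Test Gmail SMTP credentials independently of the NAS. If this works
# but the NAS alerts still fail, the problem is on the NAS side; if it
# fails too, generate a Google app password and retry.
import smtplib
from email.message import EmailMessage

GMAIL_USER = "you@gmail.com"            # placeholder sender
APP_PASSWORD = "xxxx xxxx xxxx xxxx"    # 16-char Google app password, not the account password
DEST = "you@yahoo.com"                  # placeholder recipient

msg = EmailMessage()
msg["Subject"] = "NAS alert test"
msg["From"] = GMAIL_USER
msg["To"] = DEST
msg.set_content("If you can read this, the SMTP credentials work.")

with smtplib.SMTP("smtp.gmail.com", 587) as s:
    s.starttls()                        # Gmail requires TLS on port 587
    s.login(GMAIL_USER, APP_PASSWORD)
    s.send_message(msg)
print("sent OK")
```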
Thanks again, Berillio
- StephenB · Aug 05, 2022 · Guru
berillio wrote:
Re the New Volume with 4x 8TB, without “grouping”.
The SM says that I can only destroy a SINGLE volume if I am in Flex-RAID, but although I am in X-RAID, “Destroy” is not greyed out(?). So supposedly I could:
- destroy the volume & power off
- swap the 4TB with the new 8TB & power on again
- Create a new Volume
That should avoid a double syncing operation and save time.
Alternatively, I could do a factory reset. Could I then import a configuration backup from this unit, or would it carry the md126/127 grouping? (If so, I could use the configuration from one of the other units and edit the name and fixed network settings.)
No matter how you do it, if you create a volume with 4x8TB disks installed you will end up with only md127. Reapplying the configuration file won't change that. The various RAID groups correspond to partitions on the disks, and simply applying a configuration file doesn't reformat/repartition the disks.
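To see that correspondence directly, here is a minimal sketch (run on the NAS over SSH, assuming shell access) that prints the disks and their partitions from /proc/partitions; on a vertically expanded volume, each disk carries an extra data partition (e.g. a hypothetical sda4) backing the second RAID group:

```python
# Print physical disks and their partitions with sizes. After a rebuild
# with equal disks, each drive should show one large data partition
# instead of two.
with open("/proc/partitions") as f:
    next(f)          # skip the "major minor #blocks name" header
    next(f)          # skip the blank line after it
    for line in f:
        major, minor, blocks, name = line.split()
        if name.startswith("sd"):            # disks and partitions only
            print(f"{name}: {int(blocks) // (1024 * 1024)} GiB")
```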
As far as double-syncing goes - you can destroy a volume while syncing is in progress. So that is really not a concern.
You should be able to destroy the volume in either XRAID or FlexRAID - though honestly I've only destroyed them in FlexRAID. Of course if you are using XRAID you only have one volume to destroy. If you are in FlexRAID and have multiple volumes, you can certainly destroy them all (one at a time).
Uninstall all your apps before saving the config file, and reinstall them after you reapply the file.
berillio wrote:
But I am still uncomfortable, as we don’t really know what happened and why.
I wish to be reasonably confident that I can RELY on this piece of hardware before it goes out of Warranty.
I ran an HDD test: it finished without highlighting any issues.
FWIW, I'm not seeing a whole lot of evidence that two RAID groups make the system a lot more fragile. My main NAS and one of my backup NASes have two RAID groups, and I've never lost a volume on either one.
But I agree that we really don't know what happened here.
IMO, the best way to minimize any discomfort is to implement a solid backup plan, so your data is safe even if the NAS fails.
berillio wrote:
I think you were referring to the maintenance schedules (scrub, defrag, balance). They should help keep the disks in good health, but I was asking if there was something else I could do to “stress” the hardware to highlight any weaknesses.
Scrub and Disk Test both do a good job of exercising the disks. Stressing the NICs could be done with a transfer test - there are several free programs that will benchmark NAS performance by creating/transferring files over and over. If you just want to test the network, you could install iperf on the NAS and a PC, then use that to test the network performance. Plex transcoding would be a reasonable way to test the CPU, but could be a bit trickier to set up.
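If installing iperf is awkward, a rough stand-in can be scripted with Python sockets, assuming Python is available on both ends (the port and transfer size below are arbitrary choices):

```python
# Crude network throughput test: run "python net_test.py server" on one
# machine, then "python net_test.py client <host>" on the other. This is
# a sketch, not a replacement for iperf's options and accuracy.
import socket
import sys
import time

PORT = 5001
CHUNK = 64 * 1024
TOTAL = 512 * 1024 * 1024            # push 512 MiB through the link

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:             # sender closed the connection
                break
            received += len(data)
        secs = time.time() - start
        print(f"{received / secs / 1e6:.0f} MB/s from {addr[0]}")

def client(host):
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent, start = 0, time.time()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
    print(f"{sent / (time.time() - start) / 1e6:.0f} MB/s to {host}")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

Running the client once against each NIC's IP address would give a direct eth0-vs-eth1 comparison independent of disk speed.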