Forum Discussion
berillio
Mar 23, 2020 · Aspirant
RN 104 failed to start with "Boot from USB" - no FW update
RN104, 4x4TB, 6.2.5 Firmware 2020_03_15 - Sunday morning at 3am, the NAS disappeared from the network, as expected, as I have a weekly "reboot". It should have come back online at 4am. But when...
berillio
Mar 29, 2020 · Aspirant
“Unfortunately, that's a sign that there is likely nothing you can do to fix the situation”
We think the switches are not faulty, and therefore not responsible for this behaviour.
So, what else could cause it?
Did the NAS initiate a FW update by mistake (on my part, maybe? I certainly never intended to set the Automatic update, but I cannot exclude having ticked a box by mistake)? If that was the case, the automatic update “should” have upgraded to 6.5.2. But if it DID update, did it really update to that FW, or maybe to 6.10.3? If it went to 6.10.3, it might have failed to update properly, because the jump should not have been done directly but in stages: is that what could have happened? Maybe I have a “failed” 6.10.3 update, and therefore my recovery to such an early FW is not “legit” (it should be possible to revert from 6.5.2 to 6.2.5). Hence the RN104 asks me to do it again? And it seems to be stuck in a loop because I keep repeating the same “wrong” recovery. Should I do a recovery to 6.10.3 instead?
Can I invoke any procedure with the reset switch on the back? Could I do a “Factory default”? In the Hardware Manual there is a strong warning:
"The factory default reboot process resets the storage system to factory settings, erases all data, resets all defaults, and reformats the disk to X-RAID2."
But what if the disks are removed beforehand? It cannot reformat empty bays. So anyone could try a factory reset without losing their data – and if that were the case, the manual would have suggested, after the warning, to “remove the disks” before doing the Factory default. But it doesn’t. Am I missing something here?
If I could do a Factory reset, that would put me in the same situation as a “brand new” unit, which might boot up normally. I could then update the FW originally installed at the factory to the 6.2.5 I had before, if I need to have the SAME FW as before.
StephenB says: “You can migrate the disks to any OS-6 NAS.”
That would suggest that it does not really matter which FW is used, provided it is an OS-6 version.
I asked (in a previous reply) what would happen if I removed the battery: it seems that on a PC motherboard, removing the battery for at least 20 minutes has the same resetting effect on the BIOS as shorting the jumper – a jumper which is not present on the RN104 motherboard. Could that have any beneficial effect (i.e. reset a “Boot from USB” flag)?
TBH, I find it difficult to believe in a hardware fault. This NAS was operative for ~6 years, and that is not really a long time.
But if that really were the case... well, how could I justify investing in another one – maybe better quality, more expensive, with more bays and more expansion capacity, which should last me a lot longer... but which could also die on me just as suddenly?
StephenB
Mar 29, 2020 · Guru - Experienced User
The NAS boots from an OS partition on the disks, and a factory default just reformats the disks and reinstalls the software from the flash.
So you could attempt that procedure. It requires the boot menu though, which could be a problem. You can overcome that problem by zeroing one of the disks, inserting it by itself into the NAS and powering up. That normally does a fresh factory install.
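For anyone who would rather script the zeroing than use a vendor utility, here is a minimal sketch. It assumes a Linux host and that /dev/sdX really is the spare disk (a hypothetical device node – triple-check with lsblk first, as this is destructive). Zeroing the first and last few MiB clears the partition tables and most RAID metadata; a full pass, like the vendor "write zeros" test, is the thorough option.

```python
# Blank out a spare disk so the NAS treats it as new.
# ASSUMPTIONS (not from this thread): Linux host; /dev/sdX is the spare disk.
# This is destructive -- verify the device node with `lsblk` before running.
import os

DEVICE = "/dev/sdX"      # hypothetical device node
CHUNK = 1024 * 1024      # 1 MiB write size
SPAN = 32 * CHUNK        # zero 32 MiB at each end of the disk

fd = os.open(DEVICE, os.O_WRONLY)
try:
    size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
    zeros = b"\x00" * CHUNK
    # Start of disk: MBR/GPT partition table and primary metadata.
    os.lseek(fd, 0, os.SEEK_SET)
    for _ in range(SPAN // CHUNK):
        os.write(fd, zeros)
    # End of disk: backup GPT and some RAID superblock formats.
    os.lseek(fd, size - SPAN, os.SEEK_SET)
    for _ in range(SPAN // CHUNK):
        os.write(fd, zeros)
    os.fsync(fd)
finally:
    os.close(fd)
print(f"Zeroed 32 MiB at each end of {DEVICE}")
```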
There were a few known issues when going from old firmware to current firmware:
- https://kb.netgear.com/29974/ReadyNAS-OS-6-RN100-RN2120-cannot-update-due-to-invalid-checksum
- https://kb.netgear.com/30682/ReadyNAS-OS-6-ARM-systems-unable-to-install-or-update-apps-following-upgrade-to-ReadyNAS-OS-version-6-5-0
- https://kb.netgear.com/30107/ReadyNAS-OS-6-4-0-FAQs-on-upgrading-firmware
- https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/After-update-firmware-6-4-0-RN10400-doesnt-boot/m-p/990817#M94901
- Sandshark · Mar 29, 2020 · Sensei
I suppose a bad SYSLINUX.CFG in flash could make it always boot from USB, but it's hard to fathom how, unless that's what happens when there is no valid SYSLINUX.CFG. A USB recovery should have fixed that, though, as it replaces SYSLINUX.CFG.
- berillio · Mar 29, 2020 · Aspirant
StephenB & Sandshark, thank you both.
In reverse order: Sandshark, I tried to look into the ISO and GZ files with WinRAR, but it cannot open them (to check for SYSLINUX.CFG). Is there any other archiver/tool which could do so – assuming there is any way of actually reading the contents and/or editing them – not that I think you wanted me to do so?
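For what it's worth, I reckon a few lines of script could at least decompress the GZ and scan the raw bytes for the file name. Only a sketch: I am assuming the image is plain gzip, and the filename below is just an example, not the real one.

```python
# Decompress a .gz firmware image and look for "SYSLINUX.CFG" in the raw bytes.
# Assumption: the file is plain gzip; the inner format is unknown to me, so this
# only shows whether the name appears somewhere inside the image.
import gzip

IMAGE = "ReadyNASOS-6.2.5-arm.img.gz"   # example filename, not the real one

with gzip.open(IMAGE, "rb") as f:
    data = f.read()

offset = data.find(b"SYSLINUX.CFG")
print(f"found at offset {offset}" if offset != -1 else "not found")
```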
StephenB, there were a few points to address in your post, but one is very quick. In the last link,
the user talks about a “few minutes” for recovery; then power-up and “updating”, which takes 15 minutes.
In my case, we are talking of:
~30 seconds with the button pressed before “Booting” begins (or without pressing, same thing)
6-8 seconds of Booting before “Recovery” begins
30 seconds of Recovery, before the message “Recovery done” is displayed
2 seconds later the NAS is depowered.
Powering up presents the “NETGEAR Storage Welcome” and within 1 second “Booting from USB”.
Are those the times you would expect for a (successful) recovery? I mean, is recovery in my case happening (but failing)? Or should it be a much longer procedure, meaning it is not happening at all?
- berillio · Mar 29, 2020 · Aspirant
StephenB, maybe I did not say explicitly that the 11TB of data is the ORIGINAL copy and I do not have a backup, so I am saying it now.
But I presume that you understood that anyway, and your procedure consists of:
a) sacrifice one disk (of an array of four, which is a “redundant” array) by zeroing its contents (I can use the WD utility to do that)
b) power up the NAS and hot-insert the “zeroed” disk – which should cause a fresh install
c) hot-insert the other three disks, which contain what is now a “non-redundant” data set.
The NAS should then rebuild the full data set, recopying the data onto the first disk, which is empty.
On the other hand, I might have misunderstood your post, and you were simply reasoning about the best procedure to install 6.10.3, the latest firmware (which, although recommended, is NOT AT ALL my priority – my priority is to regain access to my data; I simply said that I cannot exclude the possibility that automatic updates were enabled by mistake).
I am currently running a diagnostic on a 4TB WD Red which I could use as a test for the procedure. But that is now ~5% done, and should take in excess of another 31 hours.
- StephenB · Mar 30, 2020 · Guru - Experienced User
berillio wrote:
StephenB, there were a few points to address in your post, but one is very quick. In the last link,
the user talks about a “few minutes” for recovery; then power-up and “updating”, which takes 15 minutes.
In my case, we are talking of:
~30 seconds with the button pressed before “Booting” begins (or without pressing, same thing)
6-8 seconds of Booting before “Recovery” begins
30 seconds of Recovery, before the message “Recovery done” is displayed
2 seconds later the NAS is depowered.
Powering up presents the “NETGEAR Storage Welcome” and within 1 second “Booting from USB”.
Are those the times you would expect for a (successful) recovery? I mean, is recovery in my case happening (but failing)? Or should it be a much longer procedure, meaning it is not happening at all?
I haven't needed to do a USB recovery on my ReadyNAS, but your times are generally about what I would expect. Note in his case the "updating" step was installing the firmware on the hard drives.
It is possible this one step could be too short:
- 30 seconds of Recovery, before the message “Recovery done” is displayed
Perhaps someone who has done the procedure can comment on that.
berillio wrote:
On the other hand, I might have misunderstood your post, and you were simply reasoning about the best procedure to install 6.10.3, the latest firmware (which, although recommended, is NOT AT ALL my priority – my priority is to regain access to my data; I simply said that I cannot exclude the possibility that automatic updates were enabled by mistake).
You had brought up the idea of doing a factory default, and I was just trying to respond to that. The factory default is destructive; there is no avoiding that.
berillio wrote:
But I presume that you understood that anyway, and your procedure consists of:
a) sacrifice one disk (of an array of four, which is a “redundant” array) by zeroing its contents (I can use the WD utility to do that)
b) power up the NAS and hot-insert the “zeroed” disk – which should cause a fresh install
c) hot-insert the other three disks, which contain what is now a “non-redundant” data set.
The NAS should then rebuild the full data set, recopying the data onto the first disk, which is empty.
No, that is not what would happen. Step (c) would resync those disks to your new array, leaving you with a new but empty file system.
berillio wrote:
I am currently running a diagnostic on a 4TB WD Red which I could use as a test for the procedure. But that is now ~5% done, and should take in excess of another 31 hours.
Is this disk part of the array?
Assuming not, it is worth zeroing the disk (perhaps using the quick “write zeros” test), powering down the NAS, inserting that disk by itself, and powering it up. If that fails to install, it will confirm that the problem is in the chassis. If it succeeds, then it will point more towards corruption on the hard disks.
FWIW, 31 hours is far longer than I'd expect for the long non-destructive test. That's closer to what I see with a 10 TB WD Red. Are you using a USB 2.0 adapter?
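A rough sanity check, using my own assumed numbers: a full surface scan has to read the entire disk, and USB 2.0 manages roughly 35 MB/s in practice, so for a 4 TB drive:

```python
# Back-of-envelope: expected duration of a full surface scan over USB 2.0.
# The ~35 MB/s figure is a typical real-world USB 2.0 rate, not a measurement.
disk_bytes = 4e12          # 4 TB, decimal, as drives are sold
usb2_rate = 35e6           # ~35 MB/s effective throughput
hours = disk_bytes / usb2_rate / 3600
print(f"~{hours:.0f} hours")   # ~32 hours, consistent with the ~31 h reported
```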
- berillio · Mar 30, 2020 · Aspirant
StephenB, TY for your patience in putting up with me.
The disk is not part of the array – but it once was: it is a 4TB WD Red which failed ~1 year ago and was replaced with an identical WD Red. It is accessed via a StarTech USB 2.0 SATA/IDE adapter, as you correctly guessed. At this moment it is being checked via Windows Surface Scanner v2.0 (never used this utility before). I tried WD Data Lifeguard Diagnostic: it passed the Quick test but failed the Extended test after 5 minutes with “Error Code 08 - Too many bad sectors”; I was expecting the same result from WSS, but maybe WD DLGDIAG excluded those bad sectors before giving up. WSS has reported no errors so far and is currently at 35% with ~21h to go. Interestingly, it reports a capacity of 3726.02 GB, while WD DLGDIAG reported 4000.79 GB (different units?).
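Checking the “different units” idea: my guess is that WSS reports binary gibibytes while WD DLGDIAG reports decimal gigabytes, and the numbers do line up:

```python
# Same capacity in two unit systems: decimal GB (WD DLGDIAG) vs binary GiB (WSS).
bytes_total = 4000.79e9          # 4000.79 GB as reported by WD DLGDIAG
print(bytes_total / 1024**3)     # ~3726.02, matching the WSS figure
```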
I do not think I have any other unused SATA disks – I would not have kept a bad one (although the WD failure was odd and unexpected). Even assuming that it clears the surface tests and is zeroed, the NAS may read its ID and refuse to use it as a “failed disk”. I will search for another SATA drive; maybe I can find an unused small USB drive which I can use for this test.
- berillio · Mar 30, 2020 · Aspirant
I found a small WD USB drive, almost empty. BUT I also realised that I would need the adaptor for the 2.5” drive, which would be in its box, which I cannot find. It’s most likely in the loft, and it may be impossible to fetch it now. Sorry, we may have to wait. 40%, 19h to go.
- Sandshark · Mar 30, 2020 · Sensei
The SYSLINUX.CFG file it boots from is on the NAS flash, not the USB drive. But unless you can boot to Tech Support mode and FTP in, you have no way to examine it. But, as I said, it should be replaced by the USB recovery, which sounds like it is working properly.
- StephenB · Mar 31, 2020 · Guru - Experienced User
Sandshark wrote:
The SYSLINUX.CFG file it boots from is on the NAS flash, not the USB drive. But unless you can boot to Tech Support mode and FTP in, you have no way to examine it. But, as I said, it should be replaced by the USB recovery, which sounds like it is working properly.
My understanding is that syslinux is only in the x86 flash, not the ARM.
If it is the boot flash (which is possible) then Netgear might be able to rebuild it.
- berillio · Mar 31, 2020 · Aspirant
TY again for your contributions.
Unfortunately, yesterday (GRRR) I closed WSS by mistake (it was behind something else; I clicked on the edge to bring it forward, but it was the TOP right-hand corner) when it had done ~60%. I restarted the WD DLGDIAG Extended Test, which seems to progress normally (last time it stopped after a few minutes): 13h done, 18h to go.
- StephenB · Mar 31, 2020 · Guru - Experienced User
Honestly I'm not sure this disk is solid enough to be very useful in checking the NAS. But we'll see.
- berillio · Mar 31, 2020 · Aspirant
Nor me. 11 hours to go; tomorrow we will find out.
One thing I don’t understand (amongst the thousands) is “migrating” a volume.
“The NAS boots from an OS partition on the disks, and a factory default just reformats the disks and reinstalls the software from the flash.”
If the OS is on the disks (albeit on a different partition), then when I migrate a volume by putting the full array of 4 disks in a different NAS, I am also transferring the OS that was there.
So, if my OS is corrupt (because of a failed FW update or whatever reason), I am transferring the problem to the new NAS. So I may have lost the entire dataset anyway.
“Assuming not, it is worth zeroing the disk (perhaps using the quick “write zeros” test), powering down the NAS, inserting that disk by itself, and powering it up. If that fails to install, it will confirm that the problem is in the chassis. If it succeeds, then it will point more towards corruption on the hard disks.”
So, really, I should hope that the first case (failing to install – bad chassis) is true. Because if it succeeds, migrating the full array (page 75 of the Hardware User Manual) would bring me back to where I am now….
It seems to be “bad news” either way….
- Sandshark · Mar 31, 2020 · Sensei
It is true that the OS is on a partition on the drives and, if corrupt, you'll be moving the problem to another unit. That's why testing the old NAS with a spare drive is a good idea before doing that.
But what you describe sounds more like a problem with the chassis than the volume.
- berillio · Apr 01, 2020 · Aspirant
“It is worth zeroing the disk (perhaps using the quick “write zeros” test), powering down the NAS, inserting that disk by itself, and powering it up. If that fails to install, it will confirm that the problem is in the chassis.”
Nope. “Booting from USB” message. I tried another recovery, just to be sure, no difference.
Could “hot inserting” this zeroed drive make a difference?
I could also try to access the Boot menu with the pin in the back – but that may be impossible: “It requires the boot menu though, which could be a problem.”
Just for info, the HD was zeroed with the “quick write zeros” test, but I don’t think that would change anything. I think I remember reading that the NV+ could not read OS-6 arrays.
I started the NV+, which was offline, just to check it. It has 4x 3TB disks (8.1TB capacity), but disk 3 is DEAD and the volume unprotected. But there is no data, nothing; it was empty. Maybe I should remove disk 3, delete the volume and create a new one with a 3-disk array? That would be 6TB.
- StephenB · Apr 01, 2020 · Guru - Experienced User
berillio wrote:
I started the NV+, which was offline, just to check it. It has 4x 3TB disks (8.1TB capacity), but disk 3 is DEAD and the volume unprotected.
Something is wrong here, because the NV+ can’t handle disks > 2 TB.
But legacy systems (4.1.x, 4.2.x, 5.x firmware) can't read OS 6 disks.
berillio wrote:
Could “hot inserting” this zeroed drive make a difference?
No. But this test result does point to the chassis, not the disks.
- berillio · Apr 01, 2020 · Aspirant
"Something wrong here, because the NV+ can't handle disks > 2 TB."
Probably my fault: it is not an NV+, it is an NV+ v2. Apologies.
- berillio · Apr 02, 2020 · Aspirant
So the RN104 is confirmed dead/kaput? Nothing else we could try?
I wish I knew WHY it happened.
Re the NV+v2, should I “delete” that empty volume and “create” a new one without the failed disk? Or should I use a different procedure?
That would give me some space for all the other PCs/devices at home.
- StephenB · Apr 02, 2020 · Guru - Experienced User
berillio wrote:
So the RN104 is confirmed dead/kaput? Nothing else we could try?
Marc_V or JohnCM_S might have some ideas. I think with current firmware your system would allow access with tech support mode - but I don't think 6.2.5 would do that.
Can you ping the NAS in this mode? If so, do you get a login prompt if you try to telnet into the NAS IP address?
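If you want to script those two checks, a sketch along these lines would do. The IP address is a placeholder for whatever your NAS uses, and the ping flag assumes Windows (Linux/macOS use -c instead of -n):

```python
# Check basic reachability of the NAS, then try the telnet port used by
# tech support mode. NAS_IP is a placeholder -- substitute your NAS address.
import socket
import subprocess

NAS_IP = "192.168.x.x"   # placeholder

# 1. One ping ("-n" is the Windows count flag; use "-c" on Linux/macOS).
ping = subprocess.run(["ping", "-n", "1", NAS_IP], capture_output=True)
print("ping ok" if ping.returncode == 0 else "no ping reply")

# 2. Telnet port: tech support mode should present a login prompt here.
try:
    with socket.create_connection((NAS_IP, 23), timeout=5) as s:
        print("telnet banner:", s.recv(256).decode(errors="replace"))
except OSError as exc:
    print("telnet connect failed:", exc)
```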
- berillio · Apr 02, 2020 · Aspirant
I cannot get into Tech Support mode. I tried to access the Boot menu (to attempt the Factory Default) last night using the reset button, keeping it pressed even before pressing the power button, but nope.
The NAS does not respond to a normal ping of 192.168.2.3 from cmd, and it is not seen by the router.
- berillio · Apr 04, 2020 · Aspirant
I thought of removing the battery cell, just in case it might lose any settings (and make the NAS forget that it had gone bonkers).
Forty minutes later, before putting it back and reassembling the NAS, I thought of checking the battery, possibly fitting a new one if it was tired.
DEAD FLAT (0.24 V)
OK, so before powering it up with a FRESH battery: should I do it with the full array (if I get the “Boot from USB” message, I should have the full array in place to do the FW recovery successfully), or with the sacrificial “zeroed” drive? Or should I just power up with empty bays and report?
- Sandshark · Apr 04, 2020 · Sensei
If you power it up with no drives and you get a status of “No Disks”, that’s a good sign. From there, whether you try the zeroed drive before the full volume is entirely up to you. But since you don’t even need to wait till it finishes creating the volume to know something is different, I think it could be a good idea. When you do power up with the volume, yes, it should be the complete volume, not just one drive from it. But if your zeroed drive was sacrificed from the volume, so the volume is now non-redundant, don’t include that drive, especially if you let the NAS start creating another volume on it. Wait till the volume mounts in degraded mode, then add the zeroed one with power on. You may need to format it (on the NAS) before XRAID will add it to the volume.
- berillio · Apr 04, 2020 · Aspirant
With no disks, I get the “Boot from USB”
Zeroed (spare) disk first?
Or the full (redundant) array, and if I get the message again, doing a “normal” recovery?
- Sandshark · Apr 04, 2020 · Sensei
If you don't see "No Disks" when no drives are installed, then there is still something wrong and you should definitely not risk your volume any further. You can try the spare drive, but I don't think you'll find it successful. I believe you have done all you can to diagnose this NAS and there is a hardware issue with it that dictates replacement. Specifically, the NAS believes the Backup button is permanently pushed, though the button itself is OK. Without a schematic, going further is poking in the dark. And even with one, the chances you could repair it are slim.
- berillio · Apr 04, 2020 · Aspirant
I tried with the zeroed disk as advised. It did not go into Factory default, but “Boot from USB” as always. Factory default could not be invoked by the reset button either. Nor can it be pinged. Basically, there is no change from earlier.
Incidentally, on the motherboard there are two sets of jumpers, J7 & J14; I don’t know if there are instructions for their use anywhere (maybe they could be used like the BIOS jumper on a PC motherboard).
At least we found an oddity/fault, the flat battery, however minimal, unimportant and inconsequential it may appear to be.
Or maybe there was another fault, which drained the battery at a faster rate than expected.
I think I will be checking my NV+ v2 immediately, just in case.