Forum Discussion
RTSwiss
Aug 05, 2012
Aspirant
Fried(?) ReadyNAS NV+ #19179228
I have a ReadyNAS NV+ (3x500GB) that has performed without problem for five years. Yesterday morning I left it unattended while copying a large file to the device from a WinXPPro machine. When I returned the file transfer had been interrupted midstream with a "xxx not available" error message, the NV+ had shut down, and the room smelled of something burning. The NV+ would not restart (surprise!) and, having to leave for a day, I simply disconnected it. This event was not preceded by any sort of warning message.
If I had to guess I would judge from the smell that there's an electrolytic capacitor in the power supply that went bad. So I have four questions. (1) Is there some quick way of verifying if my suspicion is correct, and of replacing the cap if that is what happened? (2) Given that the failure occurred in the middle of i/o activity, what are the chances that the drives are undamaged? (3) Assuming that the drives are undamaged, or that any errors induced by the device failure are recoverable, could I replace just the enclosure (assuming I can track one down) with some reasonable expectation that the data on the drives will show up intact when installed in a new enclosure? (4) If the answer to (3) is yes, should I have the same expectation if I replaced the enclosure with version 2; that is, do the old and new versions use identical (or sufficiently similar) protocols in formatting and communicating with the drives?
I would also be interested in whether others have experienced a failure of this sort, though my primary concern is recovering the data on the array. I should add that the data is not utterly critical, as much of it has been routinely backed up to an external USB/SATA drive.
Any help anyone could offer would be appreciated. Thanks.
Ted
24 Replies
Replies have been turned off for this discussion
- RTSwiss (Aspirant): Support had me remove and test the drives using SeaTools. One of the three appears to be bad. They recommended restarting the NAS and sending them the logs, to try to determine whether the P/S failure was the cause of the drive failure. I assume (and have asked them, to be on the safe side) that they mean restarting it with only the two viable drives installed, in their original positions. It will probably take them a day to get back to me. Can anyone confirm my assumption in the interim? Thanks.
- RTSwiss (Aspirant): On re-installing drives 2 and 3 in the 2nd and 3rd drive bays, the machine rebooted (it took two tries) with data intact. Installing a replacement drive produced a redundant array after about 2 1/2 hours. The logs disclosed an increasing reallocated-sector count on the failed drive over the two weeks preceding the P/S failure, but the drive was not reported as actually failing before the P/S failure, and the logs indicated it remained readable after the P/S was replaced, although the machine never completed booting with the failed drive in place.
It turns out that the failed drive, an ST3500320AS (a model on the NV+ HCL), was still under warranty and is being returned to Seagate for replacement. Before this failure the SMART data for that drive, which showed a small number of reallocated sectors, had been relatively stable. In contrast, the two original ST3500630NS drives to this day show no errors at all. And this is the second 320AS that has failed on me in this machine, the first about 2 1/2 years ago.
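For anyone wanting to watch for the same early warning on a Linux box, the sketch below pulls the raw reallocated-sector count out of smartmontools' attribute table. The parsing assumes smartctl's usual `-A` column layout; on a live system you would pipe in the output of `smartctl -A /dev/sdX` (the device name is yours to substitute), and a canned sample line stands in here so the parsing can be seen end to end:

```shell
#!/bin/sh
# Sketch: extract the raw Reallocated_Sector_Ct value from smartctl's
# attribute table. smartctl itself needs root and a real drive, so this
# function just parses the table text, wherever it came from.
realloc_count() {
  awk '$2 == "Reallocated_Sector_Ct" { print $NF }'
}

# Canned line in the format `smartctl -A` emits (raw value in the last column):
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12'
count=$(printf '%s\n' "$sample" | realloc_count)
echo "Reallocated sectors: $count"
```

A steadily rising raw value here, as in the logs above, is exactly the kind of trend worth acting on before a drive drops out of the array.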
Has anyone else had this sort of experience with this particular model drive?
Thanks for the help.
- jwstephens1 (Aspirant):
RTSwiss wrote: Thanks. I'll give it a try.
I had the quota-hang problem with my drives. I built an Ubuntu system with a 4-port PCI-to-SATA controller. If you do the same, make sure you don't get a 4-port RAID-only SATA controller.
Follow the instructions in "Mounting Sparc-based ReadyNAS Drives in x86-based Linux" (http://home.bott.ca/webserver/?p=306) to mount and access your data. It isn't simple, but I have done it to recover my data.
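For the record, the broad shape of that procedure is sketched below as a dry run: the commands are echoed rather than executed, since the real thing needs root and the actual disks. The partition numbers, device names, and the LVM logical-volume path are assumptions that vary by firmware and layout, so verify each against your own disks (with fdisk and `mdadm --examine`) before running anything for real:

```shell
#!/bin/sh
# Dry-run sketch of recovering a sparc ReadyNAS (NV+) volume on x86 Linux.
# Assumptions to verify on your own hardware:
#   - the data partitions are the 3rd partition on each disk (sd[bcd]3),
#   - an LVM volume group may sit on top of the RAID set (check with lvs;
#     the /dev/c/c path below is an example, not a certainty),
#   - the data filesystem is ext3 with a 16 KiB block size, which a stock
#     4 KiB-page x86 kernel cannot mount directly (hence fuse-ext2).
run() { echo "+ $*"; }    # replace the body with: "$@"  to actually execute

run mdadm --assemble --readonly /dev/md0 /dev/sdb3 /dev/sdc3 /dev/sdd3
run vgscan
run vgchange -ay                          # activate the volume group, if any
run mkdir -p /mnt/nas
run fuse-ext2 -o ro /dev/c/c /mnt/nas     # mount read-only, to be safe
# ...copy your data off /mnt/nas, then:
run fusermount -u /mnt/nas
```

Keeping everything read-only (`--readonly`, `-o ro`) is the important design choice here: until the data is copied off, nothing should be allowed to write to a degraded array.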
I have not gotten an answer to my quota problem from Netgear either, and am in the process of working with a tech on it.
I'm suspicious they don't have a handle on this quota nonsense and how it occurs. I didn't have a power supply failure to induce my problem; I was just rsyncing my 4.5 TB of data to another subsystem when the unit simply stopped. It became unresponsive at startup and shows the same quota nonsense, with random numbers on the display as well.
I'd love to see them get ahead of this and debug it, as I have 5 of these units and am not anxious to go through this again.
If they want to scan for my problems here is my Case #: 19010261
You have a different-sounding way of inducing the problem at bootup, but the problem itself sounds the same. The blue light just pulsates and the LED display shows the message about quota forever.
- RTSwiss (Aspirant): Those were my symptoms. In my case, though it took a few days, tech support advised me to test the drives using SeaTools, downloadable from Seagate at
http://www.seagate.com/support/downloads/seatools/
I did this from an oldish Dell running XP, using a hot-swap external USB device, and though the drives were formatted by the NV+, the utility had no difficulty running at least the basic, generic, non-destructive tests. That identified one drive (a Seagate ST3500320AS, a model that is on the HCL but in my experience had already failed once and been replaced in this device) as bad. Tech support then advised me to reinstall the remaining drives, which I did, returning them to their original slots (2 and 3). On the first restart the NV+ hung, so I powered it down and tried again (see below for why), and it came up with all data intact in a non-redundant array. On adding a fresh drive to slot 1 it resynced in a couple of hours and is more or less back to normal. Tech support wanted to see the logs, which showed that in the two weeks preceding the power supply failure the failed drive had been accumulating reallocated sectors; T/S attributed the drive failure to that. I am not so sure the P/S failure had nothing to do with it, as the drive had not failed before the P/S did. But that is less important to me than recovering the data.
Before any of this happened, I had on my own removed the existing drives and installed a single fresh 500GB Seagate to see if the machine would restart. Much to my dismay it hung again on startup. Having nothing better to try, I powered it down and restarted it again, and it came up fine. So it may be that something in the firmware or O/S lets it recover from certain failure events only after a couple of restart attempts. If you do this you should be able to access the device via FrontView, from which you can look at the logs and see if one of your drives is having a problem. I'm not that conversant with X-RAID, but I infer that if only one drive has failed you stand a good chance of recovering your data by reinserting the still-viable drives in their original slots.
This strikes me as perhaps easier than trying to mount the drives in a different machine with a different O/S, but to each his own.
Hope this is of help.