Forum Discussion
BtrieveBill
Jul 27, 2016 · Aspirant
NAS Slow, Reboot Slow, Drive Light Blinking
My ReadyNAS 516 has been unreasonably slow lately. The system is not sharing files properly, access seems slow for both reading and writing, and even the web interface is slow. The drive a...
StephenB
Jul 27, 2016 · Guru - Experienced User
BtrieveBill wrote:
I do have one spare drive already in-house, so I can replace one of the drives. Do I replace #2? Obviously, I need to wait for it to finish booting, though, right? Or is it better to just power it down again, replace the drive, and let the RAID volume rebuild itself?
I'd wait for the resync to finish, and then re-check the SMART stats.
But based on the stats you've posted so far, I'd replace drive 6 next. Current Pending Sectors happen on failed reads, Reallocated Sectors happen on failed writes. Both are bad, and I generally sum them when I'm assessing disk condition. So you have 14 bad sectors on drive 2, but 258 bad sectors on drive 6.
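For anyone who wants to do that same sum over SSH, here is a minimal sketch, assuming smartmontools is installed and the bays appear as /dev/sda through /dev/sdf (the device names are placeholders and may differ on your unit):

```python
#!/usr/bin/env python3
# Rough sketch: sum Reallocated_Sector_Ct + Current_Pending_Sector per disk,
# i.e. the same "bad sector" total described above. Assumes smartmontools is
# installed; the /dev/sdX names are placeholders and may differ on your NAS.
import re
import subprocess

DISKS = ["/dev/sd" + c for c in "abcdef"]

for disk in DISKS:
    out = subprocess.run(["smartctl", "-A", disk],
                         capture_output=True, text=True, check=False).stdout
    total = 0
    for attr in ("Reallocated_Sector_Ct", "Current_Pending_Sector"):
        # The raw value is the last integer on the attribute's line.
        m = re.search(attr + r"\b.*\s(\d+)\s*$", out, re.MULTILINE)
        if m:
            total += int(m.group(1))
    print("%s: %d bad sectors (reallocated + pending)" % (disk, total))
```

Run it as root; a total of 0 on every disk is what you want to see.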
BtrieveBill
Jul 27, 2016 · Aspirant
I would have also opted to replace Drive 6, if it were an option. However, Drive 2 was the one blinking incessantly, and even though it had fewer errors, it was apparently the squeakiest wheel today. Further, the reboot NEVER finished. It hung at 94% for over 90 minutes.
I finally gave up on the reboot and powered down the ReadyNAS entirely a second time, replaced Drive 2, and rebooted. As advertised, it booted up in about 5 minutes, detected the degraded array, and immediately started the rebuild process. The system is now working substantially better, and even with the RAID rebuild running, it is turning out better performance than I was getting all this week. I can now send that drive back to WD, get the replacement, and then swap out Drive 6 later on. (Strangely, Drive 6 was the only drive that had been replaced once before. When the new Drive 6 was put in, it started spewing errors after about a week. This makes me wonder whether there is a problem with the SATA controller or cabling, and whether Drive 6 is really OK after all.)
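For anyone who prefers to watch a rebuild like this from SSH rather than the admin page, a minimal sketch that only reads the kernel's standard /proc/mdstat (nothing ReadyNAS-specific assumed):

```python
#!/usr/bin/env python3
# Print each md array line plus any resync/recovery progress the kernel
# reports. /proc/mdstat is plain Linux software RAID, so this should work
# on any unit that exposes SSH; run it while the rebuild is in progress.
with open("/proc/mdstat") as f:
    for line in f:
        line = line.rstrip()
        if line.startswith("md") or "recovery" in line or "resync" in line:
            print(line)
```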
Lessons learned:
1) Don't assume that the system is working properly just because the Web console shows all drives are green.
2) Don't assume that the drive with the most errors is the one with the biggest problem.
3) Ignore the data in the logs and just replace the drive that is blinking out of sync with everyone else.
4) Always have at least one spare drive on standby.
- Hopchen · Jul 27, 2016 · Prodigy
1) Don't assume that the system is working properly just because the Web console shows all drives are green.
I believe the green dot is more an indication of whether disks are online or not. Hold your mouse over the disk to see more detailed info.
2) Don't assume that the drive with the most errors is the one with the biggest problem.
Definitely never assume this. A disk with only a few errors can cause big issues.
3) Ignore the data in the logs and just replace the drive that is blinking out of sync with everyone else.
Don't ignore the logs! :) Those are really important. Rather - always trust the logs. Pull logs regularly and inspect them. I suggest you also set up email alerts to warn you about things such as disk failures; a rough script-level sketch of that idea follows at the end of this post.
4) Always have at least one spare drive on standby.
Yup, very good idea. And always have an up-to-date backup.
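A rough, cron-able sketch of that kind of check, for illustration only: the device names, threshold, SMTP host, and address below are all placeholders, and it assumes smartmontools plus a reachable mail relay.

```python
#!/usr/bin/env python3
# Hypothetical cron job: mail a warning if any disk's bad-sector count
# (reallocated + pending) exceeds a threshold. The disk names, threshold,
# SMTP host, and addresses are placeholders to adapt to your setup.
import re
import smtplib
import subprocess
from email.message import EmailMessage

DISKS = ["/dev/sd" + c for c in "abcdef"]   # placeholder device names
THRESHOLD = 0                               # alert on any bad sector
SMTP_HOST = "localhost"                     # placeholder mail relay
TO_ADDR = "admin@example.com"               # placeholder address

def bad_sectors(disk):
    out = subprocess.run(["smartctl", "-A", disk],
                         capture_output=True, text=True, check=False).stdout
    total = 0
    for attr in ("Reallocated_Sector_Ct", "Current_Pending_Sector"):
        m = re.search(attr + r"\b.*\s(\d+)\s*$", out, re.MULTILINE)
        if m:
            total += int(m.group(1))
    return total

counts = {d: bad_sectors(d) for d in DISKS}
if any(v > THRESHOLD for v in counts.values()):
    msg = EmailMessage()
    msg["Subject"] = "NAS SMART warning"
    msg["From"] = TO_ADDR
    msg["To"] = TO_ADDR
    msg.set_content("\n".join("%s: %d bad sectors" % kv for kv in counts.items()))
    with smtplib.SMTP(SMTP_HOST) as s:
        s.send_message(msg)
```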
- StephenB · Jul 28, 2016 · Guru - Experienced User
Reallocated and pending sectors are not something the NAS (hardware or software) can create. The issues with drive 6 aren't due to the SATA controller or cabling; it is a drive with hundreds of sectors that can't be read or written. The drive itself is detecting that.
Though I agree with your conclusion that drive 2 turned out to be the bigger problem in this case, I'd replace drive 6 as soon as the current resync finishes.
BtrieveBill wrote:
3) Ignore the data in the logs and just replace the drive that is blinking out of sync with everyone else.
I disagree with this. A better lesson is to replace drives when the SMART error statistics start to climb, and not to delay until you see performance problems. You could easily have lost the entire data volume with two problem disks in the array.
- BtrieveBill · Aug 01, 2016 · Aspirant
Latest update, in case anyone cares: After replacing Drive 2 and rebuilding the array, everything went peachy -- for about 18 hours. Shortly after the nightly backups (10 PM) and daily snapshots (midnight), the volume crapped out completely. ReadyNAS OS was functional, but the volume was gone, with no hint as to what happened, except for a notice that I had to "Remove inactive volumes to use the disk 1, 2, 3, 4, 5, 6". WTF? All of the drives in the array were red instead of blue, and I had no more data volume. Thank you NetGear, for such a redundant RAID array solution. Luckily, this all occurred AFTER the backups, so I had zero data loss.
I pleaded to Dr. Google for help with this for quite some time, but I was still unable to find any reasonable solution to this error other than "Contact NetGear Support", pay for an incident, and let them tell you to destroy the volume and restore the data. Having no other alternative, I opted to destroy the volume. When I did, Drive 6 went blue, but the other 5 stayed red, and the message now said that disks 1,2,3,4,5 were an issue. Hmm; still no joy. I destroyed the volume again, and got Drive 5 back into a blue state (though I must say that red better matched my mood). Four more times typing DESTROY (I fancied myself a great witch chanting a spell) and the disks were finally usable again. Yippie. Next, I was able to create a new data volume, create all new shares, and start restoring the data.
Fast forward about 4 hours: most of the data is restored -- and Drive 6 goes kaput. Grey. FUBAR. Whatever you call it, I'm now running again with a degraded array. Luckily, though, the unit is working, and it is as fast as expected. Too bad I am now getting emails about Drive 3 spitting errors. Maybe it can hold on for JUST a bit longer???
I am now awaiting my two RMA'ed WD drives -- one to replace Drive 6 immediately, and the other to have ready to replace Drive 3 (and set up yet ANOTHER RMA in the process). Whee!
- omicron_persei8 · Aug 02, 2016 · Luminary
BtrieveBill wrote:
Thank you NetGear, for such a redundant RAID array solution.
I was referring to this. But I understand now that I misunderstood your point :D
StephenB wrote:
I don't know the cause of the "inactive volume" issue, but we do see it here fairly regularly. It's possibly a bug
AFAIK, it's simply that if ReadyNASd can't put the HDD in any existing volume, or if a RAID array can't be started or mounted, the disks are displayed in red. I don't think it's a bug; I think it's just a very user-unfriendly way to do it. If a volume can't be mounted, I believe it should be displayed as such and not as "inactive disks" that the GUI invites you to destroy...
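For what it's worth, when the GUI only shows red disks, the underlying md state is usually still visible over SSH. A rough sketch, assuming mdadm is available (it is on stock ReadyNAS OS 6, as far as I know) and that you run it as root:

```python
#!/usr/bin/env python3
# Dump what the kernel and mdadm think of each array, which usually says
# more about *why* a volume won't mount than the red icons in the GUI do.
import re
import subprocess

with open("/proc/mdstat") as f:
    mdstat = f.read()
print(mdstat)

# Ask mdadm for the detailed state of every array listed in /proc/mdstat.
for md in re.findall(r"^(md\d+)\s*:", mdstat, re.MULTILINE):
    result = subprocess.run(["mdadm", "--detail", "/dev/" + md],
                            capture_output=True, text=True, check=False)
    print(result.stdout or result.stderr)
```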
Maybe one should post an "idea" to suggest improving how the status of a failed volume is shown in the GUI ;)
- StephenB · Aug 02, 2016 · Guru - Experienced User
omicron_persei8 wrote:
AFAIK, it's simply that if ReadyNASd can't put the HDD in any existing volume, or if a RAID array can't be started or mounted, the disks are displayed in red. I don't think it's a bug,
I've seen enough of these reported that I am suspicious that there might be an underlying bug. In this case, it sounds like the volume was mounted and started, but then something went wrong.
omicron_persei8 wrote:
one should post an "idea" to suggest improving how the status of a failed volume is shown in the GUI ;)
Yes. It'd be good if the main log gave more hints on why the volume couldn't be mounted.