Forum Discussion
Smaky
Jul 04, 2019Guide
ReadyNAS Ultra Plus on OS 6.10.1 becoming unresponsive
A few weeks after upgrading my ReadyNAS Ultra 6 Plus to OS6, I am starting to experience random situations where the NAS goes completely unresponsive. It is no longer accessible via the web inter...
Smaky
Jul 19, 2019Guide
Another disk test ran yesterday. No errors were reported and the NAS remains responsive. I will wait until Monday, when I will re-enable Emby and see whether the system remains stable.
Smaky
Jul 19, 2019Guide
Just for the sake of completeness, I shut down my NAS yesterday and started a memory test from the boot menu. The test ran for over 12 hrs with no memory errors (4 GB Patriot non-ECC memory installed), completing over 11 passes successfully. So I think I can rule out any memory issues.
- StephenBJul 19, 2019Guru - Experienced User
I think you have ruled out hardware problems as the cause. It'll be interesting to see if re-enabling Emby causes the problem to recur.
- SmakyJul 24, 2019Guide
StephenB wrote: I think you have ruled out hardware problems as the cause. It'll be interesting to see if re-enabling Emby causes the problem to recur.
It's been a few days since I reactivated the Emby service on my NAS. Apart from updating the software, as a new version was available, and a couple of reboots because I got a new set of RAM to raise memory to 4 GB (vs the 3 GB I previously had), there have been no events to report. The NAS has been stable.
I will continue to monitor for at least a couple of weeks to check whether it goes unresponsive again.
- SmakyAug 01, 2019Guide
Unfortunately something happened. After I enabled Emby and let it run for a few days, the array raised an alert. Upon inspection, one of the WD drives had failed and become unresponsive. I noticed this only because, again, the NAS became unresponsive, and the only way to regain access was to cold-boot it by holding the power button until the NAS shut down.
Upon restart I noticed a loud sound coming from one of the drives, and once it finally booted the array state showed as degraded with the faulty drive offline. I shut down the NAS and replaced the drive with a spare I had, and the rebuild process started. That lasted a bit over a day.
Before shutting the NAS down to replace the failed drive I downloaded the logs and disabled Emby. Once the array was rebuilt I downloaded the logs again. It has been a couple of days since then, and an array scrub task ran with no sign of issues.
The drive that failed had no SATA errors reported, and it is now completely inaccessible even when connected to a PC. It makes loud noises for a couple of minutes, and neither Windows nor WD Data Lifeguard can identify it. So it really looks like a total drive failure.
I cannot say the failure is related to Emby's reactivation; however, this was the only change applied to the NAS config since all the tests and several weeks of monitoring with no issues. I am wondering if there might be something in the logs pointing to some sort of overload caused by Emby that stressed the drive to the point of failure. Not sure what to look for or where.
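For anyone combing through the downloaded log bundle for pre-failure signs, the usual place to look is the drives' SMART data (the log zip typically includes SMART history; on the NAS itself, `smartctl -a /dev/sdX` over SSH shows the same). The attributes that most often precede a mechanical failure are Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable. A minimal Python sketch that flags those attributes in `smartctl -A`-style output (the sample excerpt below is made up for illustration, not from these logs):

```python
# Attributes whose raw value should normally be 0 on a healthy drive.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def failing_attributes(smart_text):
    """Return {attribute: raw_value} for watched attributes with a nonzero raw value.

    Expects lines in the usual `smartctl -A` table format:
      ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    """
    found = {}
    for line in smart_text.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[1] in WATCH:
            raw = int(parts[9])  # RAW_VALUE column
            if raw > 0:
                found[parts[1]] = raw
    return found

# Hypothetical excerpt of `smartctl -A` output from a degrading drive.
sample = """\
  5 Reallocated_Sector_Ct   0x0033 100 100 140 Pre-fail Always - 112
194 Temperature_Celsius     0x0022 113 103 000 Old_age  Always - 37
197 Current_Pending_Sector  0x0032 200 200 000 Old_age  Always - 8
"""

print(failing_attributes(sample))
```

A drive can still die suddenly with clean SMART history, as seems to have happened here, but nonzero reallocated or pending sectors in the earlier log download would at least show the failure was brewing.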
- StephenBAug 02, 2019Guru - Experienced User
Smaky wrote:
I am wondering if there might be something in the logs pointing to some sort of overload caused by Emby that stressed the drive to the point of failure. Not sure what to look for or where.
Very unlikely theory. RAID-5 and XRAID are designed to spread the disk load evenly across all the drives - there is no overload on just one of them.
More likely - the drive just happened to fail, and Emby was not the cause.
- SmakyAug 02, 2019Guide
StephenB wrote: More likely - the drive just happened to fail, and Emby was not the cause.
Well... that's reassuring. I had the same suspicion, since one of the points of RAID technology is to spread I/O between the disks. I guess it was just bad luck then.
Anyway, I have also noticed that during scrubs the drives go up to 47-48 °C. Normally they are around 34-38 °C.