Forum Discussion
Oversteer71
May 01, 2017, Guide
Remove inactive volumes to use the disk. Disk #1,2,3,4.
Firmware 6.6.1. I had 4 x 1TB drives in my system and planned to upgrade one disk a month for four months to achieve a 4 x 4TB system. The initial swap of the first drive seemed to go well but aft...
jak0lantash
May 01, 2017, Mentor
Since a drive failed while you were replacing the first one, you now need to boot with the four original drives first. If the RAID still doesn't start with all the original drives, then you won't be able to rebuild onto a new drive. Your best chance is to contact NETGEAR Support.
Oversteer71
May 02, 2017, Guide
Operationally and statistically this doesn't make any sense. The drives stay active all the time with backups and streaming media, so I'm not sure why doing a disk upgrade would cause abnormal stress. But even if that is the case and drive D suddenly died, I replaced Drive A with the original fully functional drive, which should recover the system. Also, the NAS is on a power conditioning UPS, so power failure was not a cause.
Based on the MANY threads on this same topic, I don't think this is the result of a double drive failure. I think there is a firmware or hardware issue that is making the single most important feature of a RAID 5 NAS unreliable.
Even if I could figure out how to pay Netgear for support on this, I don't have any confidence this same thing won't happen next time so I'm not sure it's worth the investment.
Thank you for your assistance though. It was greatly appreciated.
- StephenB, May 02, 2017, Guru - Experienced User
Oversteer71 wrote:
I'm not sure why doing a disk upgrade would cause abnormal stress.
I just wanted to comment on this aspect. Disk replacement (and also volume expansion) requires every sector on every disk in the data volume to either be read or written. If there are as-yet undetected bad sectors, they certainly can turn up during the resync process.
The disk I/O also is likely higher during resync than normal operation (though that depends on the normal operating load for the NAS).
As far as replacing the original drive A - if there have been any updates to the volume (including automatic updates like snapshots), then that usually won't help. There are event counters on each drive, and if they don't match mdadm won't mount the volume (unless you force it to).
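If you have SSH access, you can see those event counters yourself. A minimal sketch, assuming the data-volume members are the third partition on each disk (the usual ReadyNAS OS6 layout); adjust the device names to match your system:
# Compare event counters across the RAID members; they should all match.
mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 | grep -E '^/dev/sd|Events'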
That said, I have seen too many unexplained "inactive volumes" threads here - so I also am not convinced that double-drive failure is the only cause of this problem.
- dmacleo, May 02, 2017, Guide
A similar issue happened to me on an RN104 about an hour after upgrading to 6.7.1.
I pulled each disk and hooked it to a Linux Mint system; no SMART errors on any drive.
I ended up having to rebuild the volume.
Luckily it was only used to back up a running RN314 system.
The RN104 had 4 x 3TB drives in it and 1.5TB of free space.
About an hour after the upgrade the system became unresponsive and showed btrfs errors on the LCD.
It seems OK now, doing backups and resyncs.
- jak0lantash, May 02, 2017, Mentor
Oversteer71 wrote:
Based on the MANY threads on this same topic, I don't think this is the result of a double drive failure. I think there is a firmware or hardware issue that is making the single most important feature of a RAID 5 NAS unreliable.
I doubt that you will see a lot of people here talking about their dead toaster. It's a storage forum, so yes, there are a lot of people talking about storage issues.
Two of your drives have ATA errors, which is a serious condition. They may work perfectly fine in production, but put strain on them and they may not sustain it. The full SMART logs would tell when those ATA errors were raised.
https://kb.netgear.com/19392/ATA-errors-increasing-on-disk-s-in-ReadyNAS
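If you have SSH access, smartctl can pull the same information straight from the drives. A sketch, assuming smartctl is available on your system; /dev/sda is just an example, repeat for each disk:
smartctl -A /dev/sda        # attribute table: reallocated sectors, pending sectors, etc.
smartctl -l error /dev/sda  # ATA error log, with the power-on hours at which each error occurred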
As I mentioned before and as StephenB explained, resync is a stressful process for all drives.
The "issue" may be elsewhere, but it's certainly not obvious that this isn't it.
From mdadm's point of view, it's a dual disk failure:
[Mon May 1 17:29:22 2017] md/raid:md127: not enough operational devices (2/4 failed)
Also, if nothing happened at the drive level, it is unlikely that sdd1 would get marked as out of sync, as md0 is resynced first. Yet it was:
[Mon May 1 17:29:18 2017] md: kicking non-fresh sdd1 from array!
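For anyone landing here with the same symptom, this is roughly how to see what md thinks of the array from SSH. The device names are assumptions, and the commented-out force-assemble is a last resort only; it can make things worse, so ideally involve Support or image the disks first:
cat /proc/mdstat                # which md arrays are running or degraded
mdadm --detail /dev/md127       # state of the data array, if it assembled
mdadm --examine /dev/sd[abcd]3  # per-member state and event counters
# mdadm --assemble --force /dev/md127 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3   # last resort only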
- Ivo_m, May 03, 2017, Aspirant
Hi,
I have a similar problem with my RN104, which up until tonight had been running without issue with 4 x 6TB drives installed in a RAID5 configuration.
After a reboot I get the message "Remove inactive volumes in order to use the disk. Disk #1,2,3,4".
I have no backup, and I need the data!
- jak0lantash, May 03, 2017, Mentor
Maybe you want to upvote this "idea": https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-message-quot-remove/idi-p/1271658
You probably don't want to hear that right now, but you should always have a backup of your (important) data.
I can't see your screenshot as it hasn't been approved by the moderators yet, so I'm sorry if it already answers one of these questions.
What happened before the volume became red?
Are all your drives physically healthy? You can check under System / Performance: hover the mouse over the colored circle beside each disk and look at the error counters (Reallocated Sectors, Pending Sectors, ATA Errors, etc.).
What does the LCD of your NAS show?
If you download the logs from the GUI and search for "md127" in dmesg.log (there's an example at the end of this post), what does it tell you?
What F/W are you running?
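For the dmesg.log check above: after extracting the downloaded log archive, something like this should show the relevant RAID events (file name as found in the ReadyNAS log bundle):
grep -iE 'md127|raid' dmesg.log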