Forum Discussion
Oversteer71
May 01, 2017 · Guide
Remove inactive volumes to use the disk. Disk #1,2,3,4.
Firmware 6.6.1. I had 4 x 1TB drives in my system and planned to upgrade one disk a month for four months to achieve a 4 x 4TB system. The initial swap of the first drive seemed to go well but aft...
jak0lantash
May 01, 2017 · Mentor
Before starting:
I can't see your screenshot as it hasn't been approved by a moderator yet.
Maybe you would like to upvote this "idea": https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-message-quot-remove/idi-p/1271658
Do you know if any drive is showing errors, like reallocated sectors, pending sectors, or ATA errors? From the GUI, go to System / Performance and hover the cursor over the disk beside the disk number (or look in disk_info.log from the log bundle).
In dmesg.log, do you see any error containing "md127" (start from the end of the file)?
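If SSH is enabled on the NAS, the same health indicators can be read with smartmontools (`smartctl -A /dev/sdX`). A minimal sketch of pulling out the two attributes that matter most here; the sample output and raw values below are made up for illustration, not taken from this NAS:

```shell
# Hypothetical excerpt of `smartctl -A /dev/sdX` output (a real run needs
# SSH access to the NAS and the smartmontools package installed).
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       24
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8'

# The raw value is the last field on each attribute line; nonzero counts
# on either of these attributes are a strong hint the disk is failing.
echo "$sample" | awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}'
```

On a healthy disk both raw values should be 0; the GUI's ATA error count is a separate counter, visible in disk_info.log.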
- StephenB · May 01, 2017 · Guru - Experienced User
I just took care of that.
jak0lantash wrote:
I can't see your screenshot as it wasn't approved by a moderator yet.
- Oversteer71 · May 01, 2017 · Guide
Thanks for the fast replies.
On disks 2, 3 and 4 (the original 1TB drives) I show 10, 0 and 4 ATA errors respectively. Disk 1, the new 4TB, also shows 0.
Here's what I found towards the end of the dmesg.log file:
[Sun Apr 30 20:06:20 2017] md: md127 stopped.
[Sun Apr 30 20:06:21 2017] md: bind<sda3>
[Sun Apr 30 20:06:21 2017] md: bind<sdc3>
[Sun Apr 30 20:06:21 2017] md: bind<sdd3>
[Sun Apr 30 20:06:21 2017] md: bind<sdb3>
[Sun Apr 30 20:06:21 2017] md: kicking non-fresh sda3 from array!
[Sun Apr 30 20:06:21 2017] md: unbind<sda3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sda3)
[Sun Apr 30 20:06:21 2017] md/raid:md127: device sdb3 operational as raid disk 1
[Sun Apr 30 20:06:21 2017] md/raid:md127: device sdc3 operational as raid disk 2
[Sun Apr 30 20:06:21 2017] md/raid:md127: allocated 4280kB
[Sun Apr 30 20:06:21 2017] md/raid:md127: not enough operational devices (2/4 failed)
[Sun Apr 30 20:06:21 2017] RAID conf printout:
[Sun Apr 30 20:06:21 2017] --- level:5 rd:4 wd:2
[Sun Apr 30 20:06:21 2017] disk 1, o:1, dev:sdb3
[Sun Apr 30 20:06:21 2017] disk 2, o:1, dev:sdc3
[Sun Apr 30 20:06:21 2017] md/raid:md127: failed to run raid set.
[Sun Apr 30 20:06:21 2017] md: pers->run() failed ...
[Sun Apr 30 20:06:21 2017] md: md127 stopped.
[Sun Apr 30 20:06:21 2017] md: unbind<sdb3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sdb3)
[Sun Apr 30 20:06:21 2017] md: unbind<sdd3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sdd3)
[Sun Apr 30 20:06:21 2017] md: unbind<sdc3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sdc3)
[Sun Apr 30 20:06:21 2017] systemd[1]: Started udev Kernel Device Manager.
[Sun Apr 30 20:06:21 2017] systemd[1]: Started MD arrays.
[Sun Apr 30 20:06:21 2017] systemd[1]: Reached target Local File Systems (Pre).
[Sun Apr 30 20:06:21 2017] systemd[1]: Found device /dev/md1.
[Sun Apr 30 20:06:21 2017] systemd[1]: Activating swap md1...
[Sun Apr 30 20:06:21 2017] Adding 1046524k swap on /dev/md1. Priority:-1 extents:1 across:1046524k
[Sun Apr 30 20:06:21 2017] systemd[1]: Activated swap md1.
[Sun Apr 30 20:06:21 2017] systemd[1]: Started Journal Service.
[Sun Apr 30 20:06:21 2017] systemd-journald[1020]: Received request to flush runtime journal from PID 1
[Sun Apr 30 20:07:09 2017] md: md1: resync done.
[Sun Apr 30 20:07:09 2017] RAID conf printout:
[Sun Apr 30 20:07:09 2017] --- level:6 rd:4 wd:4
[Sun Apr 30 20:07:09 2017] disk 0, o:1, dev:sda2
[Sun Apr 30 20:07:09 2017] disk 1, o:1, dev:sdb2
[Sun Apr 30 20:07:09 2017] disk 2, o:1, dev:sdc2
[Sun Apr 30 20:07:09 2017] disk 3, o:1, dev:sdd2
[Sun Apr 30 20:07:51 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Sun Apr 30 20:07:56 2017] mvneta d0070000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
[Sun Apr 30 20:07:56 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
- jak0lantash · May 01, 2017 · Mentor
Well, that's not a very good start. sda is not in sync with sdb and sdc, and sdd is not in the RAID array at all. In other words, a dual disk failure: one drive that you removed, and one dead. A disk failed before the RAID array finished rebuilding onto the new one.
You can check the channel numbers, the device names and serial numbers in disk_info.log (channel number starts at zero).
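Via SSH, the "who is in sync" question can also be answered directly from mdadm's per-member event counters (`mdadm --examine /dev/sdX3`): members whose Events value lags the maximum are stale and get kicked as "non-fresh" at assembly, exactly as in the dmesg log above. A sketch with made-up event counts, not values from this NAS:

```shell
# Hypothetical `Events` lines collected from `mdadm --examine` on each
# member partition; the counts below are illustrative only.
examine_output='/dev/sda3: Events : 1024
/dev/sdb3: Events : 2048
/dev/sdc3: Events : 2048
/dev/sdd3: Events : 1980'

# Flag every member whose event counter lags the maximum as stale.
echo "$examine_output" | awk '
  { dev[NR] = $1; ev[NR] = $4; if ($4 > max) max = $4 }
  END { for (i = 1; i <= NR; i++)
          printf "%s %s %s\n", dev[i], ev[i], (ev[i] == max ? "fresh" : "stale") }'
```

A small gap in events can sometimes be forced back together with `mdadm --assemble --force`, but that risks data loss and is best left to support on a degraded array.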
This is a tricky situation, but you can try the following:
1. Gracefully shut down the NAS from the GUI.
2. Remove the new drive you inserted (it's not in sync anyway).
3. Re-insert the old drive.
4. Boot the NAS.
5. If it boots OK and the volume is accessible, make a full backup and/or replace the disk that is not in sync with a brand-new one.
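After the reboot, `cat /proc/mdstat` over SSH shows whether the data array came up, and in what state. A sketch of reading the active/total member counts from a sample snapshot; the snapshot below is hypothetical (a degraded 4-disk RAID 5 with slot 0 missing), not output from this NAS:

```shell
# Hypothetical /proc/mdstat snapshot after reboot. "[4/3]" means 3 of 4
# members are active; "_" in "[_UUU]" marks the missing slot.
mdstat='md127 : active raid5 sdb3[1] sdc3[2] sdd3[3]
      2915760128 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]'

# Extract the member counts to confirm the volume is degraded rather than dead.
echo "$mdstat" | sed -n 's@.*\[\([0-9][0-9]*\)/\([0-9][0-9]*\)\].*@\2 of \1 members active@p'
```

Degraded-but-active ([4/3]) means back up immediately, then replace the failed member; a stopped array (as in the dmesg log above) means the data is not accessible until more members can be brought back.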
You have two disks with ATA errors, which is not a good sign. Resyncing the RAID array puts strain on all the disks, which can push a damaged or aging disk past its limits.
Alternatively, you can contact NETGEAR for a Data Recovery contract. They can assess the situation and assist you with recovering your data.
Thanks StephenB for approving the screenshot.
- Oversteer71 · May 01, 2017 · Guide
Just to confirm what you are saying: I should put the original drive back in the Drive A slot, and then it also seems I should put the new 4TB WD Red drive in Slot D to replace the "dead" one, correct?