Forum Discussion
Oversteer71
May 01, 2017 (Guide)
Remove inactive volumes to use the disk. Disk #1,2,3,4.
Firmware 6.6.1. I had 4 x 1TB drives in my system and planned to upgrade one disk a month for four months to achieve a 4 x 4TB system. The initial swap of the first drive seemed to go well but aft...
Oversteer71
May 01, 2017 (Guide)
Thanks for the fast replies.
On disks 2, 3, and 4 (the original 1TB drives) I show 10, 0, and 4 ATA errors respectively. Disk 1, the new 4TB, also shows 0.
Here's what I found towards the end of the dmesg.log file:
[Sun Apr 30 20:06:20 2017] md: md127 stopped.
[Sun Apr 30 20:06:21 2017] md: bind<sda3>
[Sun Apr 30 20:06:21 2017] md: bind<sdc3>
[Sun Apr 30 20:06:21 2017] md: bind<sdd3>
[Sun Apr 30 20:06:21 2017] md: bind<sdb3>
[Sun Apr 30 20:06:21 2017] md: kicking non-fresh sda3 from array!
[Sun Apr 30 20:06:21 2017] md: unbind<sda3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sda3)
[Sun Apr 30 20:06:21 2017] md/raid:md127: device sdb3 operational as raid disk 1
[Sun Apr 30 20:06:21 2017] md/raid:md127: device sdc3 operational as raid disk 2
[Sun Apr 30 20:06:21 2017] md/raid:md127: allocated 4280kB
[Sun Apr 30 20:06:21 2017] md/raid:md127: not enough operational devices (2/4 failed)
[Sun Apr 30 20:06:21 2017] RAID conf printout:
[Sun Apr 30 20:06:21 2017] --- level:5 rd:4 wd:2
[Sun Apr 30 20:06:21 2017] disk 1, o:1, dev:sdb3
[Sun Apr 30 20:06:21 2017] disk 2, o:1, dev:sdc3
[Sun Apr 30 20:06:21 2017] md/raid:md127: failed to run raid set.
[Sun Apr 30 20:06:21 2017] md: pers->run() failed ...
[Sun Apr 30 20:06:21 2017] md: md127 stopped.
[Sun Apr 30 20:06:21 2017] md: unbind<sdb3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sdb3)
[Sun Apr 30 20:06:21 2017] md: unbind<sdd3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sdd3)
[Sun Apr 30 20:06:21 2017] md: unbind<sdc3>
[Sun Apr 30 20:06:21 2017] md: export_rdev(sdc3)
[Sun Apr 30 20:06:21 2017] systemd[1]: Started udev Kernel Device Manager.
[Sun Apr 30 20:06:21 2017] systemd[1]: Started MD arrays.
[Sun Apr 30 20:06:21 2017] systemd[1]: Reached target Local File Systems (Pre).
[Sun Apr 30 20:06:21 2017] systemd[1]: Found device /dev/md1.
[Sun Apr 30 20:06:21 2017] systemd[1]: Activating swap md1...
[Sun Apr 30 20:06:21 2017] Adding 1046524k swap on /dev/md1. Priority:-1 extents:1 across:1046524k
[Sun Apr 30 20:06:21 2017] systemd[1]: Activated swap md1.
[Sun Apr 30 20:06:21 2017] systemd[1]: Started Journal Service.
[Sun Apr 30 20:06:21 2017] systemd-journald[1020]: Received request to flush runtime journal from PID 1
[Sun Apr 30 20:07:09 2017] md: md1: resync done.
[Sun Apr 30 20:07:09 2017] RAID conf printout:
[Sun Apr 30 20:07:09 2017] --- level:6 rd:4 wd:4
[Sun Apr 30 20:07:09 2017] disk 0, o:1, dev:sda2
[Sun Apr 30 20:07:09 2017] disk 1, o:1, dev:sdb2
[Sun Apr 30 20:07:09 2017] disk 2, o:1, dev:sdc2
[Sun Apr 30 20:07:09 2017] disk 3, o:1, dev:sdd2
[Sun Apr 30 20:07:51 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Sun Apr 30 20:07:56 2017] mvneta d0070000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
[Sun Apr 30 20:07:56 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
jak0lantash
May 01, 2017 (Mentor)
Well, that's not a very good start. sda is not in sync with sdb and sdc, and sdd is not in the RAID array. In other words, a dual-disk failure (one drive that you removed, one dead): one disk failed before the RAID array finished rebuilding onto the new one.
You can check the channel numbers, the device names and serial numbers in disk_info.log (channel number starts at zero).
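(For reference: with SSH access enabled on the NAS, one way to confirm which members the kernel considers "non-fresh" is to compare the mdadm event counters, and to map the kernel device names to physical slots via serial numbers. A minimal sketch only, assuming a 4-bay unit with the data partitions on sda3..sdd3 as in the log above:)
# Compare event counters: the member whose "Events" value lags the others
# is the stale one that md kicked out at boot.
for d in /dev/sd[abcd]3; do
  echo "== $d =="
  mdadm --examine "$d" | grep -E 'Device Role|Events|Update Time|Array State'
done
# Map kernel names (sda..sdd) to physical slots via serial numbers,
# then cross-check against disk_info.log from the log bundle.
for d in /dev/sd[abcd]; do
  echo "== $d =="
  smartctl -i "$d" | grep 'Serial Number'
done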
This is a tricky situation, but you can try the following:
1. Gracefully shut down the NAS from the GUI.
2. Remove the new drive you inserted (it's not in sync anyway).
3. Re-insert the old drive.
4. Boot the NAS.
5. If it boots OK and the volume is accessible, make a full backup and/or replace the disk that is not in sync with a brand-new one (see the quick check below).
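(Once the NAS is back up at step 4, a quick way to confirm that the data array actually assembled is to check md from the command line; a sketch, assuming SSH access and that the data volume is md127 as in the logs above:)
cat /proc/mdstat                  # md127 should be listed and active
mdadm --detail /dev/md127         # check "State" and whether any slot shows "removed" or "faulty"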
You have two disks with ATA errors, which is not very good. Resyncing the RAID array puts strain on all the disks, which can push a damaged or old disk to its limits.
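(The ATA error counters in the GUI roughly correspond to the drives' SMART error logs; with SSH access, a sketch of the attributes that usually matter most before and after a resync:)
for d in /dev/sd[abcd]; do
  echo "== $d =="
  smartctl -A "$d" | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
  smartctl -l error "$d" | head -n 8    # most recent ATA errors, if any
done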
Alternatively, you can contact NETGEAR for a Data Recovery contract. They can assess the situation and assist you with recovering your data.
Thanks StephenB for approving the screenshot.
- Oversteer71, May 01, 2017 (Guide)
Just to confirm what you are saying: I should put the original drive back in the Drive A slot, but then it also seems I should put the new 4TB WD Red drive in Slot D to replace the "dead" one, correct?
- Oversteer71, May 01, 2017 (Guide)
I still had the error message but realized I had never cleared it, so I wasn't sure whether it was still current. I cleared it and rebooted (with the existing 1TB drive in Slot D), and here's what dmesg.log came back with:
[Mon May 1 17:29:18 2017] md: md0 stopped.
[Mon May 1 17:29:18 2017] md: bind<sdb1>
[Mon May 1 17:29:18 2017] md: bind<sdc1>
[Mon May 1 17:29:18 2017] md: bind<sdd1>
[Mon May 1 17:29:18 2017] md: bind<sda1>
[Mon May 1 17:29:18 2017] md: kicking non-fresh sdd1 from array!
[Mon May 1 17:29:18 2017] md: unbind<sdd1>
[Mon May 1 17:29:18 2017] md: export_rdev(sdd1)
[Mon May 1 17:29:18 2017] md/raid1:md0: active with 3 out of 4 mirrors
[Mon May 1 17:29:18 2017] md0: detected capacity change from 0 to 4290772992
[Mon May 1 17:29:18 2017] md: md1 stopped.
[Mon May 1 17:29:18 2017] md: bind<sdb2>
[Mon May 1 17:29:18 2017] md: bind<sdc2>
[Mon May 1 17:29:18 2017] md: bind<sda2>
[Mon May 1 17:29:18 2017] md/raid:md1: device sda2 operational as raid disk 0
[Mon May 1 17:29:18 2017] md/raid:md1: device sdc2 operational as raid disk 2
[Mon May 1 17:29:18 2017] md/raid:md1: device sdb2 operational as raid disk 1
[Mon May 1 17:29:18 2017] md/raid:md1: allocated 4280kB
[Mon May 1 17:29:18 2017] md/raid:md1: raid level 6 active with 3 out of 4 devices, algorithm 2
[Mon May 1 17:29:18 2017] RAID conf printout:
[Mon May 1 17:29:18 2017] --- level:6 rd:4 wd:3
[Mon May 1 17:29:18 2017] disk 0, o:1, dev:sda2
[Mon May 1 17:29:18 2017] disk 1, o:1, dev:sdb2
[Mon May 1 17:29:18 2017] disk 2, o:1, dev:sdc2
[Mon May 1 17:29:18 2017] md1: detected capacity change from 0 to 1071644672
[Mon May 1 17:29:19 2017] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[Mon May 1 17:29:19 2017] systemd[1]: Failed to insert module 'kdbus': Function not implemented
[Mon May 1 17:29:19 2017] systemd[1]: systemd 230 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[Mon May 1 17:29:19 2017] systemd[1]: Detected architecture arm.
[Mon May 1 17:29:19 2017] systemd[1]: Set hostname to <ReadyNAS>.
[Mon May 1 17:29:20 2017] systemd[1]: Listening on Journal Socket.
[Mon May 1 17:29:20 2017] systemd[1]: Created slice User and Session Slice.
[Mon May 1 17:29:20 2017] systemd[1]: Reached target Remote File Systems (Pre).
[Mon May 1 17:29:20 2017] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[Mon May 1 17:29:20 2017] systemd[1]: Reached target Encrypted Volumes.
[Mon May 1 17:29:20 2017] systemd[1]: Created slice System Slice.
[Mon May 1 17:29:20 2017] systemd[1]: Created slice system-getty.slice.
[Mon May 1 17:29:20 2017] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[Mon May 1 17:29:20 2017] systemd[1]: Listening on Journal Socket (/dev/log).
[Mon May 1 17:29:20 2017] systemd[1]: Starting Journal Service...
[Mon May 1 17:29:20 2017] systemd[1]: Listening on udev Control Socket.
[Mon May 1 17:29:20 2017] systemd[1]: Starting MD arrays...
[Mon May 1 17:29:20 2017] systemd[1]: Created slice system-serial\x2dgetty.slice.
[Mon May 1 17:29:20 2017] systemd[1]: Reached target Slices.
[Mon May 1 17:29:20 2017] systemd[1]: Starting hwclock.service...
[Mon May 1 17:29:20 2017] systemd[1]: Mounting POSIX Message Queue File System...
[Mon May 1 17:29:20 2017] systemd[1]: Started ReadyNAS LCD splasher.
[Mon May 1 17:29:20 2017] systemd[1]: Starting ReadyNASOS system prep...
[Mon May 1 17:29:20 2017] systemd[1]: Reached target Remote File Systems.
[Mon May 1 17:29:20 2017] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[Mon May 1 17:29:21 2017] systemd[1]: Reached target Paths.
[Mon May 1 17:29:21 2017] systemd[1]: Listening on udev Kernel Socket.
[Mon May 1 17:29:21 2017] systemd[1]: Starting Load Kernel Modules...
[Mon May 1 17:29:21 2017] systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
[Mon May 1 17:29:21 2017] systemd[1]: Mounted POSIX Message Queue File System.
[Mon May 1 17:29:21 2017] systemd[1]: Started Create list of required static device nodes for the current kernel.
[Mon May 1 17:29:21 2017] systemd[1]: Started ReadyNASOS system prep.
[Mon May 1 17:29:21 2017] systemd[1]: Started Load Kernel Modules.
[Mon May 1 17:29:21 2017] systemd[1]: Mounting Configuration File System...
[Mon May 1 17:29:21 2017] systemd[1]: Starting Apply Kernel Variables...
[Mon May 1 17:29:21 2017] systemd[1]: Mounting FUSE Control File System...
[Mon May 1 17:29:21 2017] systemd[1]: Starting Create Static Device Nodes in /dev...
[Mon May 1 17:29:21 2017] systemd[1]: Mounted Configuration File System.
[Mon May 1 17:29:21 2017] systemd[1]: Mounted FUSE Control File System.
[Mon May 1 17:29:21 2017] systemd[1]: Started Apply Kernel Variables.
[Mon May 1 17:29:21 2017] systemd[1]: Started hwclock.service.
[Mon May 1 17:29:21 2017] systemd[1]: Reached target System Time Synchronized.
[Mon May 1 17:29:21 2017] systemd[1]: Starting Remount Root and Kernel File Systems...
[Mon May 1 17:29:21 2017] systemd[1]: Started Create Static Device Nodes in /dev.
[Mon May 1 17:29:21 2017] systemd[1]: Starting udev Kernel Device Manager...
[Mon May 1 17:29:21 2017] systemd[1]: Started Remount Root and Kernel File Systems.
[Mon May 1 17:29:21 2017] systemd[1]: Starting Rebuild Hardware Database...
[Mon May 1 17:29:21 2017] systemd[1]: Starting Load/Save Random Seed...
[Mon May 1 17:29:21 2017] systemd[1]: Started Load/Save Random Seed.
[Mon May 1 17:29:22 2017] systemd[1]: Started udev Kernel Device Manager.
[Mon May 1 17:29:22 2017] md: md127 stopped.
[Mon May 1 17:29:22 2017] md: bind<sda3>
[Mon May 1 17:29:22 2017] md: bind<sdc3>
[Mon May 1 17:29:22 2017] md: bind<sdd3>
[Mon May 1 17:29:22 2017] md: bind<sdb3>
[Mon May 1 17:29:22 2017] md: kicking non-fresh sdd3 from array!
[Mon May 1 17:29:22 2017] md: unbind<sdd3>
[Mon May 1 17:29:22 2017] md: export_rdev(sdd3)
[Mon May 1 17:29:22 2017] md: kicking non-fresh sda3 from array!
[Mon May 1 17:29:22 2017] md: unbind<sda3>
[Mon May 1 17:29:22 2017] md: export_rdev(sda3)
[Mon May 1 17:29:22 2017] md/raid:md127: device sdb3 operational as raid disk 1
[Mon May 1 17:29:22 2017] md/raid:md127: device sdc3 operational as raid disk 2
[Mon May 1 17:29:22 2017] md/raid:md127: allocated 4280kB
[Mon May 1 17:29:22 2017] md/raid:md127: not enough operational devices (2/4 failed)
[Mon May 1 17:29:22 2017] RAID conf printout:
[Mon May 1 17:29:22 2017] --- level:5 rd:4 wd:2
[Mon May 1 17:29:22 2017] disk 1, o:1, dev:sdb3
[Mon May 1 17:29:22 2017] disk 2, o:1, dev:sdc3
[Mon May 1 17:29:22 2017] md/raid:md127: failed to run raid set.
[Mon May 1 17:29:22 2017] md: pers->run() failed ...
[Mon May 1 17:29:22 2017] md: md127 stopped.
[Mon May 1 17:29:22 2017] md: unbind<sdb3>
[Mon May 1 17:29:22 2017] md: export_rdev(sdb3)
[Mon May 1 17:29:22 2017] md: unbind<sdc3>
[Mon May 1 17:29:22 2017] md: export_rdev(sdc3)
[Mon May 1 17:29:22 2017] systemd[1]: Started MD arrays.
[Mon May 1 17:29:22 2017] systemd[1]: Reached target Local File Systems (Pre).
[Mon May 1 17:29:22 2017] systemd[1]: Started Journal Service.
[Mon May 1 17:29:22 2017] Adding 1046524k swap on /dev/md1. Priority:-1 extents:1 across:1046524k
[Mon May 1 17:29:22 2017] systemd-journald[996]: Received request to flush runtime journal from PID 1
[Mon May 1 17:31:39 2017] md: export_rdev(sdd1)
[Mon May 1 17:31:39 2017] md: bind<sdd1>
[Mon May 1 17:31:39 2017] RAID1 conf printout:
[Mon May 1 17:31:39 2017] --- wd:3 rd:4
[Mon May 1 17:31:39 2017] disk 0, wo:0, o:1, dev:sda1
[Mon May 1 17:31:39 2017] disk 1, wo:0, o:1, dev:sdb1
[Mon May 1 17:31:39 2017] disk 2, wo:0, o:1, dev:sdc1
[Mon May 1 17:31:39 2017] disk 3, wo:1, o:1, dev:sdd1
[Mon May 1 17:31:39 2017] md: recovery of RAID array md0
[Mon May 1 17:31:39 2017] md: minimum _guaranteed_ speed: 30000 KB/sec/disk.
[Mon May 1 17:31:39 2017] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[Mon May 1 17:31:39 2017] md: using 128k window, over a total of 4190208k.
[Mon May 1 17:33:02 2017] md: md0: recovery done.
[Mon May 1 17:33:02 2017] RAID1 conf printout:
[Mon May 1 17:33:02 2017] --- wd:4 rd:4
[Mon May 1 17:33:02 2017] disk 0, wo:0, o:1, dev:sda1
[Mon May 1 17:33:02 2017] disk 1, wo:0, o:1, dev:sdb1
[Mon May 1 17:33:02 2017] disk 2, wo:0, o:1, dev:sdc1
[Mon May 1 17:33:02 2017] disk 3, wo:0, o:1, dev:sdd1
[Mon May 1 17:33:03 2017] md1: detected capacity change from 1071644672 to 0
[Mon May 1 17:33:03 2017] md: md1 stopped.
[Mon May 1 17:33:03 2017] md: unbind<sda2>
[Mon May 1 17:33:03 2017] md: export_rdev(sda2)
[Mon May 1 17:33:03 2017] md: unbind<sdc2>
[Mon May 1 17:33:03 2017] md: export_rdev(sdc2)
[Mon May 1 17:33:03 2017] md: unbind<sdb2>
[Mon May 1 17:33:03 2017] md: export_rdev(sdb2)
[Mon May 1 17:33:04 2017] md: bind<sda2>
[Mon May 1 17:33:04 2017] md: bind<sdb2>
[Mon May 1 17:33:04 2017] md: bind<sdc2>
[Mon May 1 17:33:04 2017] md: bind<sdd2>
[Mon May 1 17:33:04 2017] md/raid:md1: not clean -- starting background reconstruction
[Mon May 1 17:33:04 2017] md/raid:md1: device sdd2 operational as raid disk 3
[Mon May 1 17:33:04 2017] md/raid:md1: device sdc2 operational as raid disk 2
[Mon May 1 17:33:04 2017] md/raid:md1: device sdb2 operational as raid disk 1
[Mon May 1 17:33:04 2017] md/raid:md1: device sda2 operational as raid disk 0
[Mon May 1 17:33:04 2017] md/raid:md1: allocated 4280kB
[Mon May 1 17:33:04 2017] md/raid:md1: raid level 6 active with 4 out of 4 devices, algorithm 2
[Mon May 1 17:33:04 2017] RAID conf printout:
[Mon May 1 17:33:04 2017] --- level:6 rd:4 wd:4
[Mon May 1 17:33:04 2017] disk 0, o:1, dev:sda2
[Mon May 1 17:33:04 2017] disk 1, o:1, dev:sdb2
[Mon May 1 17:33:04 2017] disk 2, o:1, dev:sdc2
[Mon May 1 17:33:04 2017] disk 3, o:1, dev:sdd2
[Mon May 1 17:33:04 2017] md1: detected capacity change from 0 to 1071644672
[Mon May 1 17:33:04 2017] md: resync of RAID array md1
[Mon May 1 17:33:04 2017] md: minimum _guaranteed_ speed: 30000 KB/sec/disk.
[Mon May 1 17:33:04 2017] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[Mon May 1 17:33:04 2017] md: using 128k window, over a total of 523264k.
[Mon May 1 17:33:05 2017] Adding 1046524k swap on /dev/md1. Priority:-1 extents:1 across:1046524k
[Mon May 1 17:34:01 2017] md: md1: resync done.
[Mon May 1 17:34:01 2017] RAID conf printout:
[Mon May 1 17:34:01 2017] --- level:6 rd:4 wd:4
[Mon May 1 17:34:01 2017] disk 0, o:1, dev:sda2
[Mon May 1 17:34:01 2017] disk 1, o:1, dev:sdb2
[Mon May 1 17:34:01 2017] disk 2, o:1, dev:sdc2
[Mon May 1 17:34:01 2017] disk 3, o:1, dev:sdd2
- daelomin, May 04, 2017 (Aspirant)
Hi,
Thanks for the detailed explanation. I had basically exactly the same problem.
I had been upgrading from 2TB drives to 4TB drives progressively, but my 4TB in slot 4 was showing ATA errors (3,175 to be precise...).
I had only slots 1 and 2 left to upgrade, so I removed the 2TB HDD in slot 1 and started resyncing.
It turns out the NAS shut itself down twice while resyncing (!), and each time I powered it back on. After the first time, it kept syncing. After the second time, it wouldn't boot: the front display showed btrfs_search_forward+2ac. I googled it, didn't find much, waited two hours, then powered the NAS down and rebooted it.
Then all my volumes had gone red and I had the "Remove inactive volumes" message.
Anyway, I decided to remove the 4TB disk from slot 1 and reinsert the former 2TB drive (as you suggested).
Upon rebooting, I'm seeing "no volume" in the admin page, but the screen on the NAS shows "Recovery test 2.86%".
So I am crossing fingers and toes that it is actually rebuilding the array...
If it works, I shall save all I can prior to attempting any further upgrade, but given the situation, how would you go about it? Replace the failing 4TB in slot 4 with another 4TB first?
Thanks in advance
Remi
- jak0lantash, May 04, 2017 (Mentor)
Well, that's one of the biggest difficulties with a dead volume. A step can be good in one situation and bad in another, which is why it's always better to assess the issue by looking at the logs before removing drives, reinserting others, etc.
3,000+ ATA errors is very high, bad even, and there's a good chance that's why your NAS shut itself down.
You'll have to wait for the LCD to show that the recovery is complete. Then, if your volume is readable, take a full backup and replace the erroring drive first. If necessary, download your logs, look for md127 in dmesg.log, and post an extract here.
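(For reference, once the log zip from the admin page is downloaded and extracted, the relevant section can be pulled out with something like:)
grep -n 'md127' dmesg.log       # the data-array assembly attempt
grep -n 'md/raid' dmesg.log     # which members were accepted or kicked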
- daelomin, May 04, 2017 (Aspirant)
[Thu May 4 23:58:14 2017] md: md127 stopped.
[Thu May 4 23:58:14 2017] md: bind<sda3>
[Thu May 4 23:58:14 2017] md: bind<sdb3>
[Thu May 4 23:58:14 2017] md: bind<sdc3>
[Thu May 4 23:58:14 2017] md: kicking non-fresh sda3 from array!
[Thu May 4 23:58:14 2017] md: unbind<sda3>
[Thu May 4 23:58:14 2017] md: export_rdev(sda3)
[Thu May 4 23:58:14 2017] md/raid:md127: device sdc3 operational as raid disk 1
[Thu May 4 23:58:14 2017] md/raid:md127: device sdb3 operational as raid disk 3
[Thu May 4 23:58:14 2017] md/raid:md127: allocated 4280kB
[Thu May 4 23:58:14 2017] md/raid:md127: not enough operational devices (2/4 failed)
[Thu May 4 23:58:14 2017] RAID conf printout:
[Thu May 4 23:58:14 2017] --- level:5 rd:4 wd:2
[Thu May 4 23:58:14 2017] disk 1, o:1, dev:sdc3
[Thu May 4 23:58:14 2017] disk 3, o:1, dev:sdb3
[Thu May 4 23:58:14 2017] md/raid:md127: failed to run raid set.
[Thu May 4 23:58:14 2017] md: pers->run() failed ...
[Thu May 4 23:58:14 2017] md: md127 stopped.
[Thu May 4 23:58:14 2017] md: unbind<sdc3>
[Thu May 4 23:58:14 2017] md: export_rdev(sdc3)
[Thu May 4 23:58:14 2017] md: unbind<sdb3>
[Thu May 4 23:58:14 2017] md: export_rdev(sdb3)
[Thu May 4 23:58:14 2017] systemd[1]: Started udev Kernel Device Manager.
[Thu May 4 23:58:14 2017] md: md127 stopped.
[Thu May 4 23:58:14 2017] md: bind<sdb4>
[Thu May 4 23:58:14 2017] md: bind<sda4>
[Thu May 4 23:58:14 2017] md/raid1:md127: active with 2 out of 2 mirrors
[Thu May 4 23:58:14 2017] md127: detected capacity change from 0 to 2000253812736
[Thu May 4 23:58:14 2017] systemd[1]: Found device /dev/md1.
[Thu May 4 23:58:14 2017] systemd[1]: Activating swap md1...
[Thu May 4 23:58:14 2017] BTRFS: device label 5dbf20e2:test devid 2 transid 1018766 /dev/md127
[Thu May 4 23:58:14 2017] systemd[1]: Found device /dev/disk/by-label/5dbf20e2:test.
[Thu May 4 23:58:14 2017] Adding 523708k swap on /dev/md1. Priority:-1 extents:1 across:523708k
[Thu May 4 23:58:14 2017] systemd[1]: Activated swap md1.
[Thu May 4 23:58:14 2017] systemd[1]: Started Journal Service.
[Thu May 4 23:58:15 2017] systemd-journald[1009]: Received request to flush runtime journal from PID 1
[Thu May 4 23:58:15 2017] BTRFS: failed to read the system array on md127
[Thu May 4 23:58:15 2017] BTRFS: open_ctree failed
[Thu May 4 23:58:22 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Thu May 4 23:58:23 2017] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[Thu May 4 23:58:23 2017] NFSD: starting 90-second grace period (net c097fc40)
[Thu May 4 23:58:27 2017] mvneta d0070000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
[Thu May 4 23:58:27 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Thu May 4 23:59:21 2017] nfsd: last server has exited, flushing export cache
[Thu May 4 23:59:21 2017] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[Thu May 4 23:59:21 2017] NFSD: starting 90-second grace period (net c097fc40)
So, the rebuild failed...
Does it look like I could try a btrfs restore at this point? Two operational devices out of four doesn't seem like enough...
That must be why the slot 1 disk appears as out of the array (black dot)...
Sigh..
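(For context on what a btrfs restore attempt would involve here: the data filesystem lives on md127, so the array first has to be assembled, if necessary with --force, before btrfs restore can copy files off to external storage. A rough sketch only, with device names taken from the log extract above and a hypothetical USB disk mounted at /mnt/usb; with members already kicked out, this is the kind of step worth confirming with NETGEAR support or the forum before running:)
# Try to force-assemble the data array read-only from the remaining members:
mdadm --assemble --force --readonly /dev/md127 /dev/sda3 /dev/sdb3 /dev/sdc3
cat /proc/mdstat                          # did md127 come up (degraded)?
# If it assembled, copy the data off without mounting read-write:
btrfs restore -v /dev/md127 /mnt/usb/recovery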