Forum Discussion
voidness
Mar 05, 2017 · Guide
ReadyNAS - Lost volume after forced reboot
Hi Support, my 104 crashed and was not responding, so I unplugged it and restarted it. Now the existing volume is gone, and the error shown is "remove inactive volumes to use the disk, 1,2,3,4". Are you able to...
mdgm-ntgr
Mar 05, 2017 · NETGEAR Employee Retired
This is the community forum; support is at netgear.com/support.
You can send in your logs if you like (see the Sending Logs link in my sig) but you'll likely need to contact support (there would be some costs involved).
voidness
Mar 05, 2017 · Guide
Thanks, do you know if there is anything I could try?
- Sandshark · Mar 05, 2017 · Sensei
I have noted a marked increase in posts about this type of problem since the release of OS 6.6.0. I think somebody at Netgear needs to look into why that is. The system should be more robust than this.
- F_L_ · Mar 13, 2017 · Tutor
I got this myself tonight. Ultra 4 running 6.6.1.
Did you ever find a solution for how to retrieve the volume?
- jak0lantash · Mar 13, 2017 · Mentor
Telling you how to re-assemble a RAID array without knowing the current status of the members would be way too dangerous.
If you're able to retrieve the logs, you could look at systemd-journal.log. There should be lines containing "kernel: md" which would shed some light on the status of the members of your RAID. Look at sda3, sdb3, sdc3, sdd3 and md127.
(I wish there were a log file containing the output of mdadm --examine for all the partitions, but there isn't, so it's not easy to explain where to look.)
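For illustration only, something along these lines would pull the relevant entries out of the log and query the array members directly. The log file name matches the one in the downloaded log bundle; the mdadm command assumes root SSH access to the NAS. Both are read-only and don't touch the array:

    # Filter the journal for md events on the data partitions and the array:
    grep "kernel: md" systemd-journal.log | grep -E "sd[abcd]3|md127"

    # On the NAS itself (as root), dump the RAID superblock of each member
    # partition. --examine only reads metadata; it does not modify anything:
    mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3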
- voidness · Mar 14, 2017 · Guide
I never got a solution for this. I searched these forums, and apparently the paid support can help you get the volume back, but that will cost you money. The unit has a hidden boot menu that allows deeper inspection, but of course that is reserved for support staff.
I also sent the logs to the mod on these forums but never received a reply. I have not had time to contact support, but I will soon. For now I just keep the unit turned off and try not to think about it. I lost about 8TB worth of data.
- mdgm-ntgr · Mar 14, 2017 · NETGEAR Employee Retired
voidness, your situation doesn't look good:
Mar 01 04:12:27 voidness-nas kernel: md/raid:md127: Disk failure on sdd3, disabling device. md/raid:md127: Operation continuing on 3 devices.
Mar 01 04:12:27 voidness-nas kernel: md/raid:md127: Disk failure on sdc3, disabling device. md/raid:md127: Operation continuing on 2 devices.
Mar 01 04:12:27 voidness-nas kernel: md/raid:md127: Disk failure on sdb3, disabling device. md/raid:md127: Operation continuing on 1 devices.
Mar 01 04:12:27 voidness-nas kernel: md/raid:md127: read error not correctable (sector 1027231616 on sdd3).
Mar 01 04:12:27 voidness-nas kernel: md/raid:md127: read error not correctable (sector 1027231624 on sdd3).
Mar 01 04:12:27 voidness-nas kernel: BTRFS: bdev /dev/md127 errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
Mar 01 04:12:27 voidness-nas kernel: BTRFS: bdev /dev/md127 errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
Mar 01 04:12:27 voidness-nas kernel: BTRFS: bdev /dev/md127 errs: wr 0, rd 30229, flush 0, corrupt 0, gen 0
Mar 01 04:12:27 voidness-nas kernel: BTRFS: bdev /dev/md127 errs: wr 0, rd 30230, flush 0, corrupt 0, gen 0
Mar 01 04:12:27 voidness-nas kernel: BTRFS: bdev /dev/md127 errs: wr 0, rd 30231, flush 0, corrupt 0, gen 0
Mar 01 04:10:34 voidness-nas mdadm[2224]: Fail event detected on md device /dev/md127, component device /dev/sdb3
Mar 01 04:12:25 voidness-nas mdadm[2224]: Fail event detected on md device /dev/md127, component device /dev/sdc3
Mar 01 04:12:26 voidness-nas mdadm[2224]: Fail event detected on md device /dev/md127, component device /dev/sdd3
Mar 04 23:15:27 voidness-nas kernel: md: md127 stopped.
Mar 04 23:15:27 voidness-nas kernel: md/raid:md127: device sda3 operational as raid disk 0
Mar 04 23:15:27 voidness-nas kernel: md/raid:md127: allocated 4280kB
Mar 04 23:15:27 voidness-nas kernel: md/raid:md127: not enough operational devices (3/4 failed)
Mar 04 23:15:27 voidness-nas kernel: md/raid:md127: failed to run raid set.
Mar 04 23:15:27 voidness-nas kernel: md: md127 stopped.
Mar 04 23:29:46 voidness-nas kernel: md: md127 stopped.
Mar 04 23:29:46 voidness-nas kernel: md/raid:md127: device sda3 operational as raid disk 0
Mar 04 23:29:46 voidness-nas kernel: md/raid:md127: allocated 4280kB
Mar 04 23:29:46 voidness-nas kernel: md/raid:md127: not enough operational devices (3/4 failed)
Mar 04 23:29:46 voidness-nas kernel: md/raid:md127: failed to run raid set.
Mar 04 23:29:46 voidness-nas kernel: md: md127 stopped.

All of your disks have ATA errors.
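Given those ATA errors, a read-only first step would be to check each disk's SMART data, assuming you can enable root SSH access and smartctl is available on the unit:

    # Print the overall health verdict and raw SMART attributes for each disk.
    # High Reallocated_Sector_Ct, Current_Pending_Sector or Offline_Uncorrectable
    # counts point to failing media:
    for d in sda sdb sdc sdd; do
        smartctl -H -A /dev/$d
    done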