schtroumpfmoi
Sep 22, 2023 · Aspirant
RN104 Remove inactive volumes Disk 3,4
Hi everyone, one of my 4 2TB drives (#2) failed in my RN104. I went ahead and bought a replacement, but after insertion it looks like I get the "remove inactive volumes disk 3,4", with some of my dr...
Sep 23, 2023
schtroumpfmoi wrote:
Sep 22 13:17:49 NAS mdadm[2171]: Fail event detected on md device /dev/md0, component device /dev/sdc1
[Fri Sep 22 13:50:34 2023] sd 3:0:0:0: [sdd] tag#13 Add. Sense: Unrecovered read error - auto reallocate failed
So both sdc and sdd are detected as failed. Unfortunately not a good sign. It'd have been better if one of the disks was just out of sync.
I'm not sure if either of these is disk 1; you might want to double-check by looking at disk-info.log.
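If you can get at the disks from a Linux shell (via ssh on the NAS, or with the drives attached to a PC), something along these lines would tell you which slot sdc and sdd actually are, and whether one of them is merely out of sync rather than unreadable. The device names are just examples:

# map kernel device names (sdc, sdd, ...) to drive serial numbers;
# the serials should match what disk-info.log shows per slot
smartctl -i /dev/sdc | grep -i serial
smartctl -i /dev/sdd | grep -i serial
# compare the RAID superblocks: a member that is only out of sync shows an
# older "Events" count, while a genuinely failing disk often can't be read at all
mdadm --examine /dev/sd[abcd]3 | grep -E "^/dev|Events|Array State"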
schtroumpfmoi wrote:
I guess my main question now is: is there still a way I can force a rebuild, or should I try to remove/re-add the faulty disk?
With single redundancy, you need three working disks to rebuild the fourth. You don't have that.
If the disks can be read at all, you could try cloning one or both. That might help with recovery.
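On a Linux PC, GNU ddrescue is the usual tool for that kind of clone, since it keeps going past unreadable sectors and records them in a map file for later retries. A rough sketch with placeholder device names (double-check them with lsblk first; swapping source and destination would destroy the data):

# first pass: copy everything that reads cleanly, skip the bad areas (-n)
ddrescue -f -n /dev/sdX /dev/sdY rescue.map
# second pass: retry the bad areas a few times (-r3)
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map

Here /dev/sdX would be the failing source disk and /dev/sdY a blank disk of at least the same size.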
RAID recovery software like ReclaiMe might be able to recover some data from the two remaining disks - not sure. You can download it and see before you pay for it. You would need a way to connect the disks to a PC (directly with SATA, or using a USB adapter/dock).
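Before paying for anything, it's worth confirming that the PC actually sees the disks over the SATA or USB connection, for example:

# list the attached disks with size, model and serial number
lsblk -d -o NAME,SIZE,MODEL,SERIAL
# quick SMART health/identity check (some USB adapters need an extra "-d sat")
smartctl -H -i /dev/sdX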
StephenB
Sep 22, 2023 · Guru - Experienced User
schtroumpfmoi wrote:
one of my 4 2TB drives (#2) failed in my RN104. I went ahead and bought a replacement, but after insertion it looks like I get the "remove inactive volumes disk 3,4", with some of my drives showing in red while they were healthy.
It could be that drive #1 is not in a good state either (many ATA errors), but I guess I first have to try and force a rebuild of sorts. It was set up in RAID 5 (which still shows up in green here).
Two drive failures would cause the volume to fail.
Download the full log zip file, there is a lot more information in there than you will see in the web ui.
schtroumpfmoi
Sep 22, 2023 · Aspirant
I did download it - what should I be looking for?
Drive.log confirmed disk1 as ATA error prone - while still deemed alive.
Disk2 went from online to failed last week - I replaced it this morning.
Happy to post/ dig further - just let me know what to look for.
Many thanks
StephenB
Sep 22, 2023 · Guru - Experienced User
schtroumpfmoi wrote:
I did download it - what should I be looking for?
Drive.log confirmed disk1 as ATA error prone - while still deemed alive.
Disk2 went from online to failed last week - I replaced it this morning.
Happy to post/ dig further - just let me know what to look for.
Many thanks
You likely would see some disk errors around the time that the volume failed (or when the system was rebooted). They would be in dmesg.log, system.log, systemd-journal.log and/or kernel.log.
If you rebooted the system, then you'll see mdadm errors during the boot. It'd be useful to know if mdadm was kicking out a "non-fresh" disk from the array, or whether the NAS thought the disk had actually failed.
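If it helps, a quick way to pull those lines out of the extracted log zip is to grep the files mentioned above for the usual keywords in one go (exact file names can vary a little between firmware versions):

# run from the folder where the log zip was extracted
grep -iE "md/raid|mdadm|non-fresh|i/o error|medium error" \
    dmesg.log system.log systemd-journal.log kernel.log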
If you had ssh enabled before the volume failed, you'd be able to log in that way and check some things. But the system won't let you enable ssh if there is no volume - no idea why.
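For reference, the sort of thing worth checking once you do have shell access (or with the disks moved to a Linux box) would be along these lines:

# overall state of the md arrays (md127 looks like the data volume here; md0/md1 are OS and swap)
cat /proc/mdstat
# per-array detail: which members are active, failed or removed
mdadm --detail /dev/md127
# most recent kernel messages about the disks
dmesg | tail -n 100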
schtroumpfmoi
Sep 23, 2023 · Aspirant
Hi again,
Here is the extract of mdadm errors from system.log:
Sep 22 13:17:49 NAS mdadm[2171]: Fail event detected on md device /dev/md0, component device /dev/sdc1
Sep 22 13:17:50 NAS disk_event_handler[19813]: mdadm: set 8:34 faulty in /dev/md1
Sep 22 13:17:50 NAS mdadm[2171]: Fail event detected on md device /dev/md1, component device /dev/sdc2
Sep 22 13:18:00 NAS mdadm[2171]: NewArray event detected on md device /dev/md1
Sep 22 13:18:06 NAS mdadm[2171]: RebuildStarted event detected on md device /dev/md1, component device resync
Sep 22 13:18:58 NAS mdadm[2171]: RebuildFinished event detected on md device /dev/md1, component device resync
Sep 22 13:29:16 NAS disk_event_handler[31784]: mdadm: hot removed 8:33 from /dev/md0
Sep 22 13:29:16 NAS disk_event_handler[31784]: mdadm: hot removed 8:35 from /dev/md127
Sep 22 13:37:50 NAS mdadm[2119]: NewArray event detected on md device /dev/md0
Sep 22 13:37:51 NAS mdadm[2119]: DegradedArray event detected on md device /dev/md0
Sep 22 13:37:51 NAS mdadm[2119]: NewArray event detected on md device /dev/md1
Below are some items I thought would be relevant from the other logs, the main one being:
[Fri Sep 22 13:36:22 2023] md/raid:md127: not enough operational devices (2/4 failed)
I guess my main question now is: is there still a way I can force a rebuild, or should I try to remove/re-add the faulty disk?
Or is it just a matter of trying to reclaim data off the two valid disks and praying something is still readable?
Many thanks for any insights...
DMESG
[Fri Sep 22 13:36:18 2023] Buffer I/O error on dev sdd3, logical block 1, async page read (20 of these)
[Fri Sep 22 13:50:34 2023] sd 3:0:0:0: [sdd] tag#13 Add. Sense: Unrecovered read error - auto reallocate failed
[Fri Sep 22 13:36:18 2023] blk_update_request: I/O error, dev sdd, sector 9437257
[Fri Sep 22 13:36:22 2023] md/raid:md127: device sda3 operational as raid disk 0
[Fri Sep 22 13:36:22 2023] md/raid:md127: device sdb3 operational as raid disk 1
[Fri Sep 22 13:36:22 2023] md/raid:md127: not enough operational devices (2/4 failed)
SYSTEM
Mostly "THROW: open failed path:/data/Pictures errno:2 (No such file or directory)" (and other errors related to my top folders being missing)
KERNEL
Lots of these:
Sep 22 13:44:07 NAS kernel: sd 3:0:0:0: [sdd] tag#15 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Sep 22 13:44:07 NAS kernel: sd 3:0:0:0: [sdd] tag#15 Sense Key : Medium Error [current] [descriptor]
Sep 22 13:44:07 NAS kernel: sd 3:0:0:0: [sdd] tag#15 Add. Sense: Unrecovered read error - auto reallocate failed
Sep 22 13:44:07 NAS kernel: sd 3:0:0:0: [sdd] tag#15 CDB: Read(10) 28 00 00 90 00 48 00 00 08 00
Sep 22 13:44:07 NAS kernel: blk_update_request: I/O error, dev sdd, sector 9437257
Sep 22 13:44:07 NAS kernel: Buffer I/O error on dev sdd3, logical block 1, async page read
SYSTEMD JOURNAL
Mostly "open failed" on the top folders
STATUS:
[23/07/22 20:40:07 CEST] warning:disk:LOGMSG_SMART_ATA_ERR_30DAYS_WARN Detected increasing ATA error count: [62418] on disk 1 (Internal) [WDC WD20EFRX-68EUZN0, WD-WCC4M6EZU2XC] 20733 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data