Forum Discussion
rxmoss
Mar 19, 2023, Aspirant
ReadyNAS RN104 Dead Message
I have had no problems with my ReadyNAS RN104 for 7+ years in RAID 5, when a few days ago I got this message: "Disk in channel 4 (Internal) changed state from ONLINE to FAILED." I had been running wi...
StephenB
Mar 20, 2023, Guru - Experienced User
rxmoss wrote:
I booted my three 6tb cloned drives in read-only mode and I see the following. Am I supposed to do something else or does this indicate bad news?
Not good news, but potentially fixable. You definitely don't want to create a new volume.
Try downloading the full log zip file. There's a lot in there - if you want help analyzing it, you can put the full zip into cloud storage (dropbox, google drive, etc), and send me a private message with a download link. Send the PM using the envelope icon in the upper right of the forum page. Don't post the log zip publicly.
Do you have any experience with the linux command line?
I'm asking because there are two main options to pursue:
- try to force the volume to mount in the NAS
- use RAID recovery software with BTRFS support (ex: ReclaiMe) on a Windows PC.
The first option requires technical skills and use of ssh/linux command line, the second option will incur some cost.
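Either way, a harmless first step once you have a command line (over ssh on the NAS, or on a Linux PC with the disks attached) is to look at what the kernel already sees of the array. This is only a read-only sketch; nothing here writes to the disks:

cat /proc/mdstat                        # lists any md arrays the kernel has found, active or inactive
lsblk -o NAME,SIZE,TYPE,FSTYPE          # confirms all member disks and their partitions are visible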
rxmoss
Mar 20, 2023, Aspirant
StephenB wrote: Do you have any experience with the linux command line?
Thank you--I have PM'd you with a link to the log file.
The NAS was used with a Mac; I have a Mac and have used Terminal to some degree.
I also have access to a Linux PC and have some rudimentary command line experience, but my experience is thin. Considering that I have the original 3 4TB drives (and the 4th) that seemed to have produced error-free copies, I can presumably re-copy them if my linux "experience" does more harm than good to the 3 6TBs I have in the NAS now.
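(For reference, if a re-copy is ever needed: on a Linux PC the usual tool for a sector-for-sector clone is ddrescue, which logs read errors and can resume an interrupted copy. This is only a sketch with placeholder device names /dev/sdX and /dev/sdY; double-check them with lsblk first, because cloning onto the wrong disk destroys data.)

lsblk -o NAME,SIZE,MODEL,SERIAL              # identify the source and destination disks first
ddrescue -f /dev/sdX /dev/sdY rescue.map     # clone source -> destination, keeping a map file so the copy can resume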
At this point, you don't think there is any use in adding back the 4th drive to the NAS, right? Either the 6tb or the 4tb?
- StephenB, Mar 20, 2023, Guru - Experienced User
rxmoss wrote: At this point, you don't think there is any use in adding back the 4th drive to the NAS, right? Either the 6tb or the 4tb?
First, here's what I am seeing in your logs.
[Mon Mar 20 05:13:12 2023] md: md127 stopped.
[Mon Mar 20 05:13:12 2023] md: bind sdb3
[Mon Mar 20 05:13:12 2023] md: bind sdc3
[Mon Mar 20 05:13:12 2023] md: bind sda3
[Mon Mar 20 05:13:12 2023] md: kicking non-fresh sdb3 from array!
[Mon Mar 20 05:13:12 2023] md: unbind sdb3
[Mon Mar 20 05:13:12 2023] md: export_rdev(sdb3)
[Mon Mar 20 05:13:12 2023] md/raid:md127: device sda3 operational as raid disk 1
[Mon Mar 20 05:13:12 2023] md/raid:md127: device sdc3 operational as raid disk 3
[Mon Mar 20 05:13:12 2023] md/raid:md127: allocated 4294kB
[Mon Mar 20 05:13:12 2023] md/raid:md127: not enough operational devices (2/4 failed)
RAID maintains event counters on each disk, and if there is much disagreement in those counters, then the disk isn't included in the array (that is basically detecting cached writes that never were written to the disk). That's happened in your case with disk 2 (sdb).
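If you want to see those counters yourself, mdadm can print the superblock of each member; a large gap in the Events value is what gets a disk kicked. A read-only sketch, assuming the data partitions are the usual sdX3:

for p in /dev/sd[abc]3; do
    echo "== $p =="                                                    # label each member
    mdadm --examine "$p" | grep -E 'Update Time|Events|Array State'    # print its recorded state and event counter
done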
As you'd expect, all disks are healthy (since they were cloned with no errors).
I don't think trying with disk 4 will help, but it wouldn't hurt to try if you boot read-only again. Insert the disk with the NAS powered down. It's not clear which of the four disks is more likely to work.
More likely, you'd need to try forcing disk 2 into the array (with only disks 1-3 inserted). The relevant command would be
mdadm --assemble --really-force /dev/md127 --verbose /dev/sd[abcd]3
You'd do this logging in as root, using the NAS admin password. SSH needs to be enabled from system->settings->services. Enabling ssh might fail because you have no volume, and if it does, we'd need to access the NAS via tech support mode. So post back if that is needed.
If md127 is successfully assembled, then just reboot the NAS, and the volume should mount (degraded). You'd then need to try to add the 6 TB drive again (reformatting it in the NAS to start the process).
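Before rebooting, it's worth confirming the forced assembly actually took; a quick read-only check from the same session would be:

cat /proc/mdstat              # md127 should now show as active, with three of the four members present
mdadm --detail /dev/md127     # State should read clean/active, degraded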
There could be some corruption in the files (or some folders) due to the missed writes. If you see much of this, you could still pursue data recovery using the original disks.
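Since the data volume on these units is BTRFS, its checksums can also flag files damaged by the missed writes once the volume is mounted again. A sketch, assuming the volume is mounted at the usual /data:

btrfs scrub start /data       # begin verifying checksums across the volume
btrfs scrub status /data      # progress and any checksum error counts
dmesg | grep -i csum          # any checksum errors are also reported in the kernel log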
- rxmoss, Mar 21, 2023, Aspirant
StephenB wrote: You'd do this logging in as root, using the NAS admin password. SSH needs to be enabled from system->settings->services. Enabling ssh might fail because you have no volume, and if it does, we'd need to access the NAS via tech support mode. So post back if that is needed.
Unfortunately, when I tried to enable SSH, I got an error: Unable to start or modify service.
Do you happen to have instructions for how I can enable it in tech support mode?
Once I do that, I should be able to ssh into the device and try the command you have suggested in the earlier post.
Thank you!!
- rxmoss, Mar 21, 2023, Aspirant
Update: I'm back up and running with a degraded volume--THANK YOU THANK YOU!
I booted in tech support mode and used telnet to log into the device with the default password found online.
Ran the command you suggested, got the output below, then rebooted. I have the degraded message and my data appears to be intact. THANK YOU!!!
# mdadm --assemble --really-force /dev/md127 --verbose /dev/sd[abcd]3
mdadm: looking for devices for /dev/md127
mdadm: /dev/sda3 is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdb3 is identified as a member of /dev/md127, slot 2.
mdadm: /dev/sdc3 is identified as a member of /dev/md127, slot 3.
mdadm: forcing event count in /dev/sdb3(2) from 42274 upto 43022
mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdb3
mdadm: Marking array /dev/md127 as 'clean'
mdadm: no uptodate device for slot 0 of /dev/md127
mdadm: added /dev/sdb3 to /dev/md127 as 2
mdadm: added /dev/sdc3 to /dev/md127 as 3
mdadm: added /dev/sda3 to /dev/md127 as 1
mdadm: /dev/md127 assembled from 3 drives - not enough to start the array.
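For completeness, once a replacement disk is formatted back into the array from the web UI, the resync can be followed over ssh; a sketch:

cat /proc/mdstat                                      # shows a "recovery = xx.x%" line while the rebuild runs
mdadm --detail /dev/md127 | grep -E 'State|Rebuild'   # done when State returns to clean with all members active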