Forum Discussion
Archdivan
May 04, 2021 Aspirant
rndp6000 inactive volume data
Hello there. Today I suddenly lost access to files over the network, and the NAS was totally frozen. So I force-rebooted the system and opened the admin console. There was a message that volume data is in...
StephenB
May 04, 2021 Guru - Experienced User
Archdivan wrote:
Can i somehow reboot it in read only mode again to save any files?
The RAID array has gotten out of sync. I think the best next step is to test the disks in a Windows PC using vendor tools (Seatools for Seagate; the new WD Digital Dashboard software for Western Digital). You can connect the disk via either SATA or a USB adapter/dock. Run the extended test. Remove the disks with the NAS powered down, and label by slot.
If you can't connect the disks to a PC, then you could try booting up in tech support mode, and then run a similar extended test with smartctl.
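In tech support mode, that pass might look roughly like this (a sketch only; the /dev/sda..sde device names are an assumption, list yours with `smartctl --scan` first):

```shell
#!/bin/sh
# Queue the extended (long) SMART self-test on each disk.
# Device names are assumed; adjust to what smartctl --scan reports.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    [ -b "$dev" ] || { echo "$dev not present, skipping"; continue; }
    smartctl -t long "$dev"
done
# The test runs on the drive itself and can take hours; read results with:
#   smartctl -l selftest /dev/sda
```

The long test reads the entire disk surface, so it catches bad sectors that the quick health check misses.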
There are ways to force the volume to mount, but it is difficult to describe in the forum, as it depends on exactly what failed. Unfortunately tech support isn't an option, since you have a converted NAS.
Archdivan
May 06, 2021 Aspirant
Stephen, great thanks for replying!
At the moment I have tried two ways to check the disks: the smartctl command via SSH, and the disk check from the boot menu. SMART looks normal with no obvious critical errors. If necessary, I can send a report. I also found some errors in dmesg.log:
[Thu May 6 03:47:20 2021] RAID conf printout:
[Thu May 6 03:47:20 2021] --- level:5 rd:5 wd:5
[Thu May 6 03:47:20 2021] disk 0, o:1, dev:sda3
[Thu May 6 03:47:20 2021] disk 1, o:1, dev:sdb3
[Thu May 6 03:47:20 2021] disk 2, o:1, dev:sdc3
[Thu May 6 03:47:20 2021] disk 3, o:1, dev:sdd3
[Thu May 6 03:47:20 2021] disk 4, o:1, dev:sde3
[Thu May 6 03:47:20 2021] md126: detected capacity change from 0 to 1980566863872
[Thu May 6 03:47:20 2021] BTRFS: device label 33ea3ec3:data devid 1 transid 1021063 /dev/md126
[Thu May 6 03:47:21 2021] BTRFS info (device md126): has skinny extents
[Thu May 6 03:47:39 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
[Thu May 6 03:47:39 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
[Thu May 6 03:47:41 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
[Thu May 6 03:47:41 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
[Thu May 6 03:47:41 2021] BTRFS warning (device md126): Skipping commit of aborted transaction.
[Thu May 6 03:47:41 2021] BTRFS: error (device md126) in cleanup_transaction:1864: errno=-5 IO failure
[Thu May 6 03:47:41 2021] BTRFS info (device md126): delayed_refs has NO entry
[Thu May 6 03:47:41 2021] BTRFS: error (device md126) in btrfs_replay_log:2436: errno=-5 IO failure (Failed to recover log tree)
[Thu May 6 03:47:41 2021] BTRFS error (device md126): cleaner transaction attach returned -30
[Thu May 6 03:47:41 2021] BTRFS error (device md126): open_ctree failed
[Thu May 6 03:47:42 2021] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[Thu May 6 03:47:42 2021] NFSD: starting 90-second grace period (net ffffffff88d70240)
[Thu May 6 03:48:01 2021] nfsd: last server has exited, flushing export cache
[Thu May 6 03:48:01 2021] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[Thu May 6 03:48:01 2021] NFSD: starting 90-second grace period (net ffffffff88d70240)
[Thu May 6 05:38:06 2021] RPC: fragment too large: 50331695
[Thu May 6 08:13:35 2021] RPC: fragment too large: 50331695
It looks like corruption in the filesystem.
Some googling tells me it can be repaired with the btrfs tools, but I don't understand exactly how to do this.
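Since the overall SMART status can say PASSED while a disk is still failing, the raw values of a few specific attributes are worth pulling out directly (a sketch, with the same assumed sda..sde device names as in the RAID printout above):

```shell
#!/bin/sh
# Sketch: show only the SMART attributes that most often explain I/O errors.
# Nonzero raw values for reallocated, pending, or uncorrectable sectors are
# a red flag even when the overall health check reports PASSED.
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    [ -b "$dev" ] || continue
    echo "== $dev =="
    smartctl -A "$dev" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|Reported_Uncorrect'
done
```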
- StephenB May 06, 2021 Guru - Experienced User
Archdivan wrote:
[Thu May 6 03:47:41 2021] BTRFS: error (device md126) in cleanup_transaction:1864: errno=-5 IO failure
[Thu May 6 03:47:41 2021] BTRFS: error (device md126) in btrfs_replay_log:2436: errno=-5 IO failure (Failed to recover log tree)
It looks like corruption in filesystem.
It does look like corruption, but I suggest first looking in system.log and kernel.log for more info on these two particular errors. They look disk-related, and those two logs might give you more info on what disk caused them. Once you know the disk, you could attempt to boot the system up w/o it, and see if the volume mounts.
BTW, if it does mount, then the next step would be to off-load data (at least the most critical stuff).
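That log search might look something like this (a sketch; the log paths are the usual ReadyNAS locations and may differ on your system):

```shell
#!/bin/sh
# Scan system.log and kernel.log for lines that name a specific disk:
# ATA link errors, I/O errors, and sector-level complaints usually do.
for log in /var/log/system.log /var/log/kernel.log; do
    [ -f "$log" ] || { echo "$log not found"; continue; }
    echo "== $log =="
    grep -E 'ata[0-9]+|I/O error|UNC|sector|sd[a-e]' "$log" | tail -n 20
done
```

If one device name keeps showing up in those matches, that is the disk to try pulling.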
Archdivan wrote:
Some googling around tells me that it can be repaired with btrfs but i exactly don't understand how to do this.
Though I've done some reading here, it's not something I've needed to do myself. rn_enthusiast has sometimes offered to review logs and offer tailored advice on this. So maybe wait and see if he will chime in.
- rn_enthusiast May 06, 2021 Virtuoso
It looks like a failure to read the filesystem journal.
[Thu May 6 03:47:41 2021] BTRFS: error (device md126) in btrfs_replay_log:2436: errno=-5 IO failure (Failed to recover log tree)
[Thu May 6 03:47:41 2021] BTRFS error (device md126): cleaner transaction attach returned -30
[Thu May 6 03:47:41 2021] BTRFS error (device md126): open_ctree failed
Given the symptoms (the NAS freezing, then the journal (tree-log) preventing the mount on the next boot), I would expect that a btrfs zero-log will likely fix it.
btrfs rescue zero-log /dev/md126
Then reboot the NAS.
But the thing is, this is an educated guess at what the problem is. It LOOKS like a bad journal more than anything, but it could potentially be other things too. Clearing the journal only helps in cases where the journal is actually the issue :). It does look like the journal is the problem here, but there are people out there who know a lot more about BTRFS than me. If you care about the data or don't have a backup, you might seek advice from the BTRFS community first. You can use the BTRFS mailing list:
https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list
They would expect you to know at least some Linux, and please look at the section about which info to gather for them (in the above link).
If you just want to "try something", then zero-log would probably be the way to go, though it would be at your own discretion.
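One less-destructive thing worth trying before zero-log, if the NAS kernel supports it, is a read-only mount that skips tree-log replay entirely (a sketch; /dev/md126 is from the dmesg output above, and the mount point is arbitrary):

```shell
#!/bin/sh
# Try mounting read-only without replaying the journal. If this works,
# copy the important data off before attempting any destructive repair.
# nologreplay needs a reasonably recent kernel; ro,usebackuproot (or
# ro,recovery on older kernels) is another read-only fallback to try.
mkdir -p /mnt/recovery
if mount -o ro,nologreplay /dev/md126 /mnt/recovery; then
    echo "Mounted read-only at /mnt/recovery; off-load data now."
else
    echo "Read-only mount failed; zero-log remains the next step."
fi
```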
Cheers
- StephenB May 06, 2021 Guru - Experienced User
rn_enthusiast wrote:
Given the symptoms of the NAS freezing then seeing the journal (tree-log) preventing the mount on next boot, I would expect that a btrfs zero-log will likely fix it.
I am wondering if the I/O error was simply due to a bad pointer in the journal, or if there was a disk error when reading the data structure.