Forum Discussion
as_
Nov 11, 2021 · Aspirant
The volume data encountered an error and was made read-only. It is recommended to backup your data.
Was copying data onto my NAS from external hard drives for backup purposes (just had an external drive fail.. so.. I'm a bit paranoid on that right now..). All of a sudden, I couldn't transfer files ...
as_
Nov 11, 2021 · Aspirant
Digging a little deeper (figured out how to download the detailed log files).. I see the following in kernel.log (repeated several times for each message):
kernel: BTRFS error (device md127): bad tree block start 14701692634252692971 23322242252800
kernel: BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
kernel: BTRFS error (device md127): parent transid verify failed on 23363584 wanted 3050091 found 3050089
rn_enthusiast
Nov 11, 2021 · Virtuoso
The filesystem is damaged here, as you have found out. That is why the NAS remounted the volume read-only: to prevent any further corruption.
Are you OK to send me the logs? I wouldn't mind poking around in them and seeing where we go from here.
When you access the NAS Admin web page, go to: System > Logs and here you will see a button called "Download Logs" on the right-hand side. Click this and it will download a zip file with all the NAS logs inside.
Take this zip file, upload it to Dropbox, Google Drive or similar, and then make a link where I can download the log zip file. PM me this link please, and I will have a look to see what is going on.
Cheers
- as_ · Nov 21, 2021 · Aspirant
So.. I completed the backup.
Now what?
- rn_enthusiast · Nov 22, 2021 · Virtuoso
Sorry as_
IRL caught up with me, so didn't get a chance to look at this earlier.
The BTRFS filesystem is reporting that you are off by just 2 write transactions: wanted 3050091, found 3050089.
There is an argument that you could clear the filesystem journal, which would likely help with the "parent transid verify failed" errors:
[Thu Nov 11 18:34:14 2021] BTRFS error (device md127): parent transid verify failed on 23363584 wanted 3050091 found 3050089
[Thu Nov 11 18:36:14 2021] BTRFS error (device md127): parent transid verify failed on 23363584 wanted 3050091 found 3050089
[Thu Nov 11 18:36:14 2021] BTRFS error (device md127): parent transid verify failed on 23363584 wanted 3050091 found 3050089
I can't see in the kernel logs what potentially caused this, because the logs have rolled over with the BTRFS error spam.
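Since any useful pre-crash messages are buried under the repeated BTRFS lines, one quick way to sift what remains is to filter that spam out with grep. A minimal sketch (the sample log content below is fabricated for illustration; on the real system you would point the grep at kernel.log from the downloaded log zip):

```shell
# Sketch: hide the repeated "BTRFS error" spam so that other kernel
# messages stand out. The sample file is made up for illustration only;
# run the grep against the real kernel.log from the NAS log zip instead.
cat > /tmp/kernel.log <<'EOF'
[Thu Nov 11 18:34:14 2021] BTRFS error (device md127): parent transid verify failed on 23363584 wanted 3050091 found 3050089
[Thu Nov 11 18:34:15 2021] BTRFS error (device md127): parent transid verify failed on 23363584 wanted 3050091 found 3050089
[Thu Nov 11 18:34:20 2021] md127: example non-BTRFS message
EOF

grep -v 'BTRFS error' /tmp/kernel.log
```

With luck, whatever preceded the corruption (an mdraid event, a power issue, an I/O error) survives in the filtered output.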
I am more concerned about these messages here:
[Thu Nov 11 18:36:34 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:34 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:34 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:34 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:35 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:35 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:35 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:35 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:35 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:36:35 2021] BTRFS error (device md127): bad tree block start 18115770242137751560 23322241466368
[Thu Nov 11 18:37:08 2021] btree_readpage_end_io_hook: 10 callbacks suppressed
From what I can read, most people end up destroying and rebuilding their BTRFS volume from backups. But seeing as you have a backup now, it is probably worth trying to clear the FS journal log (you would do this over SSH as the root user).
btrfs rescue zero-log /dev/md127
However, the results of this are hard to predict, and it may not actually help. One reason for the FS complaining could be that the volume is very full (95%). BTRFS tends to handle low free space poorly.
Total devices 1 FS bytes used 20.77TiB devid 1 size 21.82TiB used 20.78TiB path /dev/md127
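The usage line above is the per-device summary from `btrfs filesystem show`. If you want to re-check how tight space is before deciding anything, something like this over SSH would do it. A sketch only: the /data mount point is an assumption about where ReadyNAS OS6 mounts the data volume, and the `|| echo` fallbacks just keep the commands harmless on a machine without btrfs or that mount:

```shell
# Sketch: check how full the BTRFS volume is.
# /data is an assumed ReadyNAS OS6 mount point; the "|| echo" fallbacks
# make this safe to run on a machine without btrfs or that mount.
btrfs filesystem show /dev/md127 2>/dev/null || echo "btrfs/device not available here"
btrfs filesystem df /data 2>/dev/null || echo "/data is not a btrfs mount here"
```

`btrfs filesystem df` breaks the usage down by data/metadata/system allocation, which is more informative than the raw `used` number when diagnosing low-space behaviour.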
There is of course also the BTRFS mailing list (https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list ) which might help you, but realistically you could be in a position where you need to reset the system and start over. If you feel adventurous, give zero-log a go (followed by a NAS reboot), and you could even run a BTRFS check with the repair flag, but that can make things worse as well and should be used with caution. I am not even sure the NAS will mount the volume after the next boot, so it is good that you have a backup now.
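To keep the zero-log step from being run against a mounted (or absent) device by accident, it could be wrapped in a guard like the one below. The guard is my own defensive addition, not a ReadyNAS requirement; /dev/md127 is the data volume device seen in the logs above:

```shell
# Sketch: only run zero-log if the device exists and is not mounted.
# The guard is a defensive addition; /dev/md127 is the data volume
# device from the kernel logs in this thread.
DEV=/dev/md127
if [ -e "$DEV" ] && ! grep -q "^$DEV " /proc/mounts; then
    btrfs rescue zero-log "$DEV"   # clears the BTRFS log tree
else
    echo "refusing: $DEV is missing or still mounted"
fi
```

Note that `btrfs rescue zero-log` discards the log tree, so the last few seconds of writes before the crash are lost; given the volume is already read-only, that trade-off is usually acceptable.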
- as_ · Nov 22, 2021 · Aspirant
Fiddled with it a bit and figured out how to SSH in as root. Oof madone, I really need to RTFM (but that's another story..)
I think I know what to do here, but I just want to make sure.. when I run the command:
btrfs rescue zero-log /dev/md127
I get an error:
ERROR: /dev/md127 is currently mounted
I'm thinking I should do:
umount /dev/md127
but I'm worried about doing something irreversibly bad.
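Before unmounting, it is worth confirming where md127 is actually mounted and whether anything still has files open on it, since open files would block a clean umount. A hedged sketch: /data is the usual ReadyNAS OS6 mount point for the data volume, but /proc/mounts is the authority, and fuser may or may not be present on the NAS:

```shell
# Sketch: see where md127 is mounted and what is using it before umount.
# /data is the usual ReadyNAS OS6 mount point, but trust /proc/mounts.
grep md127 /proc/mounts || echo "md127 is not mounted here"
fuser -vm /data 2>/dev/null || echo "no users listed (or fuser//data absent)"
```

If fuser lists services (SMB, AFP, apps) holding the volume, stopping those first makes the umount, and therefore the zero-log, much safer.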