
Failed ReadyNas 104, mount BTRFS volume on Linux

Yermak
Guide


Hi community,

I was a happy user of the ReadyNas 104 (I owned two of them at different times) and was loyal to the NETGEAR brand (routers, mesh, etc.).

But my recent experience is not that positive: after about 7 years my ReadyNas finally failed (after switching on, it just powers off and repeats; it seems to be something on the electrical side). I would expect NETGEAR to pin a detailed recovery instruction to the top of this forum, or even put one in the user manual. I read dozens of threads; my problem is not uncommon, but no solution was to be found.

 

After quick research I found 3-4 major players providing recovery programs for Windows. All are quite expensive, as people value their data...

I believe I had the latest OS 6 at the time it failed (2-3 weeks back), with 4 disks of 4TB each in RAID 5.

So, I decided to give Linux a try, using Ubuntu 20.04 LTS.

 

Installing mdadm and checking btrfs

apt-get update

apt-get install mdadm

modinfo btrfs | grep version
srcversion: ACBD2347FFF0DB004CA4F96
vermagic: 5.15.0-43-generic SMP mod_unload modversions
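
Before assembling, it can be worth confirming that each data partition actually carries an md superblock. A quick sketch; the /dev/sd[abde]3 names are from this particular machine and will differ elsewhere:

mdadm --examine /dev/sd[abde]3 | grep -E 'Array UUID|Raid Level|Array State'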

 

Assembling raid

mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 4 drives.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/data-0 has been started with 4 drives.
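
If --scan finds nothing, the data array can also be assembled explicitly from its member partitions (a sketch; the members match the mdstat output below):

mdadm --assemble /dev/md127 /dev/sda3 /dev/sdb3 /dev/sdd3 /dev/sde3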

 

Checking raid

cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sde3[0] sda3[3] sdb3[4] sdd3[1]
11706500352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid6 sda2[0] sde2[3] sdd2[2] sdb2[1]
1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sde1[0] sda1[3] sdb1[2] sdd1[1]
4190208 blocks super 1.2 [4/4] [UUUU]

 

More details

mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Tue Nov 11 02:45:28 2014
Raid Level : raid5
Array Size : 11706500352 (10.90 TiB 11.99 TB)
Used Dev Size : 3902166784 (3.63 TiB 4.00 TB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Wed Nov 30 02:26:47 2022
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Consistency Policy : resync

Name : 0e36db36:data-0
UUID : 93bd4b5a:c6d3c47d:cd03b2e6:0f16a169
Events : 37041

Number Major Minor RaidDevice State
0 8 67 0 active sync /dev/sde3
1 8 51 1 active sync /dev/sdd3
4 8 19 2 active sync /dev/sdb3
3 8 3 3 active sync /dev/sda3

 

Checking btrfs volume

btrfs fi label /dev/md127
0e36db36:data

 

More checks

btrfs filesystem show
Label: '0e36db36:data' uuid: 40a8107f-5114-4c1c-94d6-54bc71f69a7c
Total devices 1 FS bytes used 10.10TiB
devid 1 size 10.90TiB used 10.27TiB path /dev/md127

 

Checking logs

dmesg

[ 3115.651958] md: md0 stopped.
[ 3115.661011] md/raid1:md0: active with 4 out of 4 mirrors
[ 3115.661023] md0: detected capacity change from 0 to 8380416
[ 3117.469235] md: md1 stopped.
[ 3117.474369] async_tx: api initialized (async)
[ 3117.499570] md/raid:md1: device sda2 operational as raid disk 0
[ 3117.499574] md/raid:md1: device sde2 operational as raid disk 3
[ 3117.499575] md/raid:md1: device sdd2 operational as raid disk 2
[ 3117.499577] md/raid:md1: device sdb2 operational as raid disk 1
[ 3117.500103] md/raid:md1: raid level 6 active with 4 out of 4 devices, algorithm 2
[ 3117.500120] md1: detected capacity change from 0 to 2093056
[ 3117.648661] md: md127 stopped.
[ 3117.660642] md/raid:md127: device sde3 operational as raid disk 0
[ 3117.660654] md/raid:md127: device sda3 operational as raid disk 3
[ 3117.660660] md/raid:md127: device sdb3 operational as raid disk 2
[ 3117.660665] md/raid:md127: device sdd3 operational as raid disk 1
[ 3117.662671] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[ 3117.662831] md127: detected capacity change from 0 to 23413000704
[ 3117.913560] BTRFS: device label 0e36db36:data devid 1 transid 3643415 /dev/md127 scanned by systemd-udevd (10428)

 

So, at this point I believe the RAID is OK; I could mount md0, which holds the ReadyNas root filesystem.

So, the btrfs games start now.

 

The command I expected to work, according to all the information I could find

mount -t btrfs -o ro /dev/md127 /mnt

mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.

 

dmesg

[ 4308.157063] BTRFS info (device md127): flagging fs with big metadata feature
[ 4308.157074] BTRFS info (device md127): disk space caching is enabled
[ 4310.614947] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4310.614970] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4310.619823] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4310.619846] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4310.619891] BTRFS warning (device md127): failed to read root (objectid=2): -5
[ 4310.634017] BTRFS error (device md127): open_ctree failed

 

Retrying with recovery

mount -t btrfs -o ro,recovery /dev/md127 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.

 

dmesg

[ 4498.628078] BTRFS info (device md127): flagging fs with big metadata feature
[ 4498.628095] BTRFS warning (device md127): 'recovery' is deprecated, use 'rescue=usebackuproot' instead
[ 4498.628100] BTRFS info (device md127): trying to use backup root at mount time
[ 4498.628104] BTRFS info (device md127): disk space caching is enabled
[ 4498.664705] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.664711] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4498.664877] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.664881] BTRFS error (device md127): block=28693330493440 read time tree block corruption detected
[ 4498.664886] BTRFS warning (device md127): failed to read root (objectid=2): -5
[ 4498.708647] BTRFS critical (device md127): corrupt leaf: root=1 block=28693526478848 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.708657] BTRFS error (device md127): block=28693526478848 read time tree block corruption detected
[ 4498.712357] BTRFS critical (device md127): corrupt leaf: root=1 block=28693526478848 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[ 4498.712367] BTRFS error (device md127): block=28693526478848 read time tree block corruption detected
[ 4498.712392] BTRFS warning (device md127): failed to read root (objectid=2): -5
[ 4498.731972] BTRFS error (device md127): parent transid verify failed on 28693524611072 wanted 3643412 found 3643414
[ 4498.744606] BTRFS error (device md127): parent transid verify failed on 28693524611072 wanted 3643412 found 3643414
[ 4498.744637] BTRFS warning (device md127): couldn't read tree root
[ 4498.771820] BTRFS error (device md127): parent transid verify failed on 28693524971520 wanted 3643413 found 3643415
[ 4498.781483] BTRFS error (device md127): parent transid verify failed on 28693524971520 wanted 3643413 found 3643415
[ 4498.781513] BTRFS warning (device md127): couldn't read tree root
[ 4498.795150] BTRFS error (device md127): open_ctree failed
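
For reference: on kernels 5.11 and newer, btrfs offers stronger read-only rescue options that skip the tree roots it cannot read. A sketch only, not guaranteed to work on this volume:

mount -t btrfs -o ro,rescue=all /dev/md127 /mnt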

 

Let's check the filesystem

btrfs check /dev/md127

Opening filesystem to check...
Checking filesystem on /dev/md127
UUID: 40a8107f-5114-4c1c-94d6-54bc71f69a7c
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 11110486872064 bytes used, no error found
total csum bytes: 436299240
total tree bytes: 2136965120
total fs tree bytes: 840761344
total extent tree bytes: 627933184
btree space waste bytes: 434774783
file data blocks allocated: 11185027407872
referenced 11311939104768

 

So, the filesystem seems to be OK. I suspect the btrfs options differ between OS 6 and Ubuntu 20.04 LTS.
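
As a fallback, btrfs-progs can also copy files off an unmountable volume without mounting it at all. A sketch; the destination directory is an example and should live on a different disk:

mkdir -p /recovery
btrfs restore -v /dev/md127 /recovery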

 

Going deeper into btrfs

btrfs inspect-internal dump-super /dev/md127
superblock: bytenr=65536, device=/dev/md127
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0xd7757ba3 [match]
bytenr 65536
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
fsid 40a8107f-5114-4c1c-94d6-54bc71f69a7c
metadata_uuid 40a8107f-5114-4c1c-94d6-54bc71f69a7c
label 0e36db36:data
generation 3643415
root 28693330231296
sys_array_size 129
chunk_root_generation 3641017
root_level 1
chunk_root 28035093004288
chunk_root_level 1
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 11987456360448
bytes_used 11110486872064
sectorsize 4096
nodesize 32768
leafsize (deprecated) 32768
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x21
( MIXED_BACKREF |
BIG_METADATA )
cache_generation 18446744073709551615
uuid_tree_generation 3643414
dev_item.uuid 45694a58-7723-45b1-be77-2e8bee500448
dev_item.fsid 40a8107f-5114-4c1c-94d6-54bc71f69a7c [match]
dev_item.type 2
dev_item.total_bytes 11987456360448
dev_item.bytes_used 11289397035008
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0

 

So, that's where I got stuck. I suspect it has something to do with the BIG_METADATA flag being marked as incompatible.

Alternatively, I am considering downgrading Ubuntu to 14.04 to potentially get a lower version of BTRFS.
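
The btrfs features supported by the running kernel can be listed from sysfs, which may help confirm whether big_metadata is really the problem (a sketch; the path exists on reasonably recent kernels):

ls /sys/fs/btrfs/features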

 

To be honest, in earlier iterations I tried zero-log, check with recovery options, etc.; nothing worked.

 

P.S. An additional heart-stopping episode: my cat was messing around with an HDD hanging from the PC tower and caused it to disconnect during btrfs check recovery. The RAID went out of sync, assembling 3 drives out of 4. But I was able to restore the RAID with

mdadm /dev/md127 --add /dev/sdb

It took about 10 hours to resync, but it seems to be fine now.
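
Resync progress can be watched live with:

watch -n 10 cat /proc/mdstat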

Message 1 of 5
SamirD
Prodigy

Re: Failed ReadyNas 104, mount BTRFS volume on Linux

I think your idea to downgrade has merit, and I would go one step further and boot up several different versions of Linux, as you may have luck with another version/type.

Message 2 of 5
StephenB
Guru

Re: Failed ReadyNas 104, mount BTRFS volume on Linux


@Yermak wrote:

 

P.S. An additional heart-stopping episode: my cat was messing around with an HDD hanging from the PC tower and caused it to disconnect during btrfs check recovery. The RAID went out of sync, assembling 3 drives out of 4. But I was able to restore the RAID with

mdadm /dev/md127 --add /dev/sdb

It took about 10 hours to resync, but it seems to be fine now.


Are you saying the mdadm array is fine, but btrfs still fails to mount? Or that you were able to remount the volume as btrfs after the resync?

 

I'm guessing that btrfs still fails to mount.

 


@Yermak wrote:

 

dmesg

[ 4498.628078] BTRFS info (device md127): flagging fs with big metadata feature
[ 4498.628095] BTRFS warning (device md127): 'recovery' is deprecated, use 'rescue=usebackuproot' instead
[ 4498.664705] BTRFS critical (device md127): corrupt leaf: root=1 block=28693330493440 slot=49, invalid root flags, have 0x10000 expect mask 0x1000000000001
[…]

 

Alternatively, I am considering downgrading Ubuntu to 14.04 to potentially get a lower version of BTRFS.

 


I think downgrading is a long shot.  But if you want to match the btrfs and mdadm versions running on the NAS, here's what it is running (OS 6.10.8):

root@RN102:~# mdadm --version
mdadm - v4.1 - 2018-10-01
root@RN102:~# btrfs version
btrfs-progs v4.16
root@RN102:~#

  

Unfortunately, the modinfo method doesn't work:

root@RN102:~# modinfo btrfs | grep version
modinfo: ERROR: Module btrfs not found.

The only module listed in /proc/modules is the VPD

root@RN102:~# cat /proc/modules
vpd 9104 0 - Live 0xbf000000 (PO)
root@RN102:~#

 

Have you tried using smartctl to run a long test on all the disks?
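
Something like this, once per disk (/dev/sdX is a placeholder):

smartctl -t long /dev/sdX
# wait for the test to finish (several hours on 4TB disks), then:
smartctl -l selftest /dev/sdX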

Message 3 of 5
Yermak
Guide

Re: Failed ReadyNas 104, mount BTRFS volume on Linux

Here is the root cause and the solution I found:

incompat_flags 0x21
( MIXED_BACKREF |
BIG_METADATA )

 

It seems my ReadyNas migrated to btrfs at some point where those flags were enabled, and this made the filesystem incompatible with the btrfs in Ubuntu (it was a 5.x version).

Whether this is an issue for all ReadyNas units (i.e. ReadyNas OS enables those flags on every NAS) or depends on when the migration to btrfs happened is hard to say. I was running pretty much the latest version of the OS available at that moment.

So, eventually I solved the problem using Ubuntu 14.04: it just worked with btrfs, no incompatibility issues. Could it work on something between Ubuntu 14 and Ubuntu 20? I don't know. But you may want to experiment if you face this issue.
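
If you experiment with different releases, a quick check of what each live environment ships (the same commands used earlier in this thread) saves time:

uname -r
btrfs version
modinfo btrfs | grep -i version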

 

And there is a continuation of the story.

 

Once I mounted btrfs I had to restore several LUNs; some of them were encrypted with VeraCrypt.

 

There are multiple guides to mounting an iSCSI LUN file via a loop device, e.g. here: http://infotinks.com/mount-luns-with-partitions-using-losetup-and-kpartx/

where you would use the command

# losetup /dev/loop0 lunfile -o OFFSET

 

To calculate the OFFSET, run the command below and multiply the partition's start sector by the sector size (normally 512); a worked sketch follows.

# fdisk -lu lunfile
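
For example, assuming fdisk reports the partition starting at sector 2048 (a hypothetical value; use whatever Start sector fdisk actually prints):

losetup -o $((2048 * 512)) /dev/loop0 lunfile
mount -o ro /dev/loop0 /mnt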

 

The problem with Ubuntu 14 is that fdisk returns wrong results and does not show the correct offset.

# sgdisk -p lunfile does not work either, despite fdisk showing the partition in the lunfile as GPT.

So, I had to copy the lunfile to another Linux machine and run fdisk on it there; that worked and gave me the expected offset (I can't remember the exact offset that was used, and I did not keep the LUN files).

 

Then I was able to mount the lunfile via losetup as expected.

 

For the encrypted lunfiles, I tried exposing them from another system via iSCSI, hoping to mount them with VeraCrypt on a Windows machine, but the iSCSI daemon implementation has probably also changed, and the other Linux could not expose the raw devices properly.

 

So, I downloaded a quite old console version of VeraCrypt to match Ubuntu 14, and after mounting the LUN partition via loopback I was able to mount it with the console version of VeraCrypt.

 

So, it was a happy ending for me. Restoring data from a ReadyNas is possible, but a good guide, or better yet a live Linux distro, would be nice.

I think it would not be difficult to build a Linux distro with compatible versions of mdadm, btrfs, fdisk, and iSCSI, so if someone faces a similar issue again, you just plug your disks and the live drive into a PC and follow the instructions...
Message 4 of 5
StephenB
Guru

Re: Failed ReadyNas 104, mount BTRFS volume on Linux


@Yermak wrote:

 

So, it was a happy ending for me.

 


Great news, and thx for sharing the details of what you did.

 

Message 5 of 5