jrover (Guide)
Nov 19, 2019
Corrupted /dev/md0 in OS 6.10.2: ReadyNAS Ultra 6 starts in safe mode
Hi All,
Recently the ReadyNAS Ultra 6 I've been running on 6.10.1 prompted me to upgrade to 6.10.2. Since I was in the middle of other copy operations, I queued the update for the next time I rebooted. Unfortunately the ReadyNAS eventually crawled to a halt and became unresponsive, and I finally had to power it off forcibly. When I rebooted, it started in safe mode and reported that the volume couldn't be mounted. Looking into the logs, I found that it was the /dev/md0 volume that was preventing boot:
Nov 17 05:56:48 readynasos kernel: md/raid10:md1: active with 4 out of 4 devices
Nov 17 05:56:48 readynasos kernel: md1: detected capacity change from 0 to 1069547520
Nov 17 05:56:48 readynasos kernel: BTRFS: device label 33ea9201:root devid 1 transid 362226 /dev/md0
Nov 17 05:56:48 readynasos kernel: BTRFS info (device md0): has skinny extents
Nov 17 05:56:48 readynasos kernel: BTRFS critical (device md0): corrupt node: root=1 block=29540352 slot=121, unaligned pointer, have 4971983878941295603 should be aligned to 4096
Nov 17 05:56:48 readynasos kernel: BTRFS critical (device md0): corrupt node: root=1 block=29540352 slot=121, unaligned pointer, have 4971983878941295603 should be aligned to 4096
Nov 17 05:56:48 readynasos kernel: EXT4-fs (md0): VFS: Can't find ext4 filesystem
I've tried reinstalling the OS, and I also booted into Tech Support mode and was able to mount my data volume and back up everything that matters... to a USB drive (rough commands below, after the outputs). What I'm wondering is whether I can repair /dev/md0, or wipe it so that safe mode or tech support mode rebuilds it. Here are the btrfs outputs:
# start_raids
mdadm: /dev/md/0 has been started with 4 drives.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/data-0 has been started with 4 drives.
mdadm: /dev/md/data-1 has been started with 3 drives.
mount: mounting LABEL=33ea9201:data on /data failed: No such file or directory
Scanned Btrfs device /dev/md/data-0
Scanned Btrfs device /dev/md/data-1
#
# btrfs fi sh
Label: '33ea9201:data' uuid: 84e083b4-00dc-478f-a78d-6e83c45987e9
Total devices 2 FS bytes used 4.56TiB
devid 1 size 5.44TiB used 4.11TiB path /dev/md127
devid 2 size 1.82TiB used 494.03GiB path /dev/md/data-1
Label: '33ea9201:root' uuid: efa30663-0355-48e4-b602-6328153d99ea
Total devices 1 FS bytes used 780.54MiB
devid 1 size 4.00GiB used 1.64GiB path /dev/md0
# btrfs check /dev/md0
Checking filesystem on /dev/md0
UUID: efa30663-0355-48e4-b602-6328153d99ea
checking extents
bad block 29540352
Errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
root 5 root dir 256 not found
found 818450432 bytes used err is 1
total csum bytes: 0
total tree bytes: 1900544
total fs tree bytes: 16384
total extent tree bytes: 1835008
btree space waste bytes: 417761
file data blocks allocated: 0
referenced 0
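For anyone who ends up in the same spot: the backup I mentioned was done from tech support mode, roughly along the lines below. This is a sketch from memory rather than exact steps; /dev/md127 is my data volume (per btrfs fi sh above), and the USB device and source directory are placeholders you would need to adjust for your own box.

start_raids                         # assemble the md arrays, as shown above
mkdir -p /mnt/data /mnt/usb
mount -o ro /dev/md127 /mnt/data    # read-only, so nothing on the volume changes
mount /dev/sdi1 /mnt/usb            # USB partition name is a guess; check dmesg
rsync -a /mnt/data/home /mnt/usb/   # cp -a works too if rsync isn't available
umount /mnt/usb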
I also read in another thread that there was a way to wipe the boot volume and rebuild it, but I'm not sure how. Any help or ideas for repairing this system would be much appreciated.
Thanks!
15 Replies
jrover wrote:
I also read in another thread there was a way to wipe the boot volume and rebuild it, but I'm not sure how. Any help or ideas for patching this system would be much appreciated.
Thanks!
- StephenB
Support could rebuild it, but I've never seen anything specific on the steps. And of course, support won't help in your case, since you're running OS-6 on a legacy NAS.
If it were my system, I would do a factory default, rebuild the NAS and restore the data from the backup.
You could attempt to do a btrfs check --repair instead, but you can only do that on an unmounted file system. If that seems to work, then perhaps follow it up with an over-install of OS 6.10.2.
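In rough terms, from tech support mode that would look something like the lines below. This is a sketch rather than a tested procedure, and --repair can make things worse, so only run it with a backup in hand:

start_raids                      # assemble the arrays; md0 is the 4GB OS volume
umount /dev/md0 2>/dev/null      # the check must run on an unmounted file system
btrfs check /dev/md0             # read-only pass first, to see what's damaged
btrfs check --repair /dev/md0    # destructive repair attempt, last resort

If the check comes back clean afterwards, the over-install should put the OS files back in place.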
- jrover (Guide)
Hi StephenB,
Thanks for your advice. I pulled all my disks, added a single spare drive, and did a factory reset; the NAS built a new volume and a functioning installation, so I know that will work. I'll spend more time backing up absolutely everything, then try btrfs check --repair /dev/md0, and if that doesn't help, I'll do a factory reset with the 4 original drives installed.
Thanks
- eduj7im (Aspirant)
Hi jrover,
I have a similar issue.
I would appreciate it if you could shed some light on how to back up files in tech support mode.
This is my thread.
- eduj7im (Aspirant)
Hi jrover,
I was able to salvage all my important files.
I was wondering: did you manage to fix it, or did you just factory reset with the drives installed?
We have pretty much the same issue, but my md0 appears to have no space left:
"devid 1 size 4.00GiB used 4.00GiB path /dev/md0" <--- this could be the culprit
# btrfs fi show
Label: '37c0ff7c:data' uuid: 46720122-f338-4fb1-a173-3f41ba608888
Total devices 1 FS bytes used 956.71GiB
devid 1 size 5.44TiB used 960.02GiB path /dev/md127
Label: '37c0ff7c:root' uuid: 5d670f7a-6a1c-42fc-9aae-cc277484869b
Total devices 1 FS bytes used 2.84GiB
devid 1 size 4.00GiB used 4.00GiB path /dev/md0
# btrfs check /dev/md0
Checking filesystem on /dev/md0
UUID: 5d670f7a-6a1c-42fc-9aae-cc277484869b
checking extents
bad block 29491200
Errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
root 5 root dir 256 not found
found 3047538688 bytes used err is 1
total csum bytes: 0
total tree bytes: 802816
total fs tree bytes: 16384
total extent tree bytes: 737280
btree space waste bytes: 270638
file data blocks allocated: 0
referenced 0
- jrover (Guide)
I had to factory reset. There was no mounting my md0 partition to see what was wrong, let alone fixing it.
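For completeness, the "wipe the boot volume and rebuild it" idea from my first post would in principle look something like the sketch below from tech support mode. I never ran it, and it's guesswork on my part rather than a documented Netgear procedure, so treat it strictly as a last resort before a factory default:

start_raids
umount /dev/md0 2>/dev/null
# WARNING: this destroys the OS volume (the data volume is untouched, but back up anyway);
# the label is the one the firmware expects, '33ea9201:root' per btrfs fi sh above
mkfs.btrfs -f -L "33ea9201:root" /dev/md0
# then run an OS reinstall from the boot menu so the firmware can repopulate it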