Forum Discussion
ibell63
May 15, 2017 · Aspirant
btrfs_search_forward+204, volume disappeared after a reboot
Came in this morning and found "btrfs_search_forward+204" on the screen. I pulled the power because I couldn't get to the admin panel for the NAS; it started up and everything was working. Started a scrub, s...
jak0lantash
May 15, 2017 · Mentor
For next time: if the GUI is unreachable, you can gracefully shut down the NAS by pressing the power button twice. If that doesn't work, you can forcefully shut down the NAS by holding the power button; that's still better than pulling the power cord, which is a last resort.
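If SSH is enabled, a clean shutdown from the command line is another option (ReadyNAS OS 6 is Debian-based, so the standard commands should apply; shown as a sketch):

# Graceful shutdown: flushes filesystems before powering off.
shutdown -h now
# Equivalent on systemd-based firmware:
systemctl poweroff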
You should also use RAIDar to see if it can give more information about the status of the system.
If the volume is in red in the GUI and marked as "inactive":
Maybe you want to "upvote" this idea: https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-message-quot-remove/idi-p/1271658
Download the logs and look for "BTRFS" or "data" in systemd-journal.log (starting from the bottom).
Then try rebooting the NAS from the GUI. If the volume is still red, download the logs from the GUI again and look for "BTRFS" and "md127" in dmesg.log.
Post extracts here.
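If SSH is enabled, you can also run the same searches directly on the NAS (standard Linux tools, shown as a sketch; nothing ReadyNAS-specific):

# Kernel messages mentioning btrfs or the data RAID device:
dmesg | grep -iE 'btrfs|md127'
# Journal entries from this boot (mirrors systemd-journal.log in the log bundle):
journalctl -b | grep -iE 'btrfs|data' | tail -n 100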
ibell63
May 15, 2017 · Aspirant
Here's the contents of btrfs.log
Label: '117c606a:data'  uuid: fdaa3ed8-54b8-4265-94aa-bc268782b14b
    Total devices 1 FS bytes used 14.16TiB
    devid 1 size 16.36TiB used 14.48TiB path /dev/md127

=== filesystem /data ===
=== subvolume /data ===
=== btrfs dump-super /dev/md127
superblock: bytenr=65536, device=/dev/md127
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0x8c4a9dd7 [match]
bytenr                  65536
flags                   0x1 ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    fdaa3ed8-54b8-4265-94aa-bc268782b14b
label                   117c606a:data
generation              32766
root                    32145408
sys_array_size          258
chunk_root_generation   32766
root_level              1
chunk_root              20983092084736
chunk_root_level        1
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             17988620648448
bytes_used              15566949167104
sectorsize              4096
nodesize                32768
leafsize                32768
stripesize              4096
root_dir                6
num_devices             1
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x161 ( MIXED_BACKREF | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA )
cache_generation        18446744073709551615
uuid_tree_generation    32766
dev_item.uuid           c7d01c1f-3f7a-4392-8b61-0dab0e72f7c4
dev_item.fsid           fdaa3ed8-54b8-4265-94aa-bc268782b14b [match]
dev_item.type           0
dev_item.total_bytes    17988620648448
dev_item.bytes_used     15924748877824
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          1
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
sys_chunk_array[2048]:
    item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
        length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
        io_align 65536 io_width 65536 sector_size 4096
        num_stripes 2 sub_stripes 0
            stripe 0 devid 1 offset 20971520
            dev_uuid c7d01c1f-3f7a-4392-8b61-0dab0e72f7c4
            stripe 1 devid 1 offset 29360128
            dev_uuid c7d01c1f-3f7a-4392-8b61-0dab0e72f7c4
    item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20983092084736)
        length 33554432 owner 2 stripe_len 65536 type SYSTEM|DUP
        io_align 65536 io_width 65536 sector_size 4096
        num_stripes 2 sub_stripes 1
            stripe 0 devid 1 offset 13100761743360
            dev_uuid c7d01c1f-3f7a-4392-8b61-0dab0e72f7c4
            stripe 1 devid 1 offset 13100795297792
            dev_uuid c7d01c1f-3f7a-4392-8b61-0dab0e72f7c4
backup_roots[4]:
    backup 0:
        backup_tree_root:    31195136    gen: 32765    level: 1
        backup_chunk_root:   20971520    gen: 32699    level: 1
        backup_extent_root:  30801920    gen: 32765    level: 2
        backup_fs_root:      29851648    gen: 32764    level: 1
        backup_dev_root:     30146560    gen: 32764    level: 1
        backup_csum_root:    953384960   gen: 32763    level: 2
        backup_total_bytes:  17988620648448
        backup_bytes_used:   15566949167104
        backup_num_devices:  1

    backup 1:
        backup_tree_root:    32145408    gen: 32766    level: 1
        backup_chunk_root:   20983092084736    gen: 32766    level: 1
        backup_extent_root:  31457280    gen: 32766    level: 2
        backup_fs_root:      29851648    gen: 32764    level: 1
        backup_dev_root:     31588352    gen: 32766    level: 1
        backup_csum_root:    953384960   gen: 32763    level: 2
        backup_total_bytes:  17988620648448
        backup_bytes_used:   15566949167104
        backup_num_devices:  1

    backup 2:
        backup_tree_root:    955514880   gen: 32763    level: 1
        backup_chunk_root:   20971520    gen: 32699    level: 1
        backup_extent_root:  953581568   gen: 32763    level: 2
        backup_fs_root:      30179328    gen: 31941    level: 1
        backup_dev_root:     728334336   gen: 32699    level: 1
        backup_csum_root:    955940864   gen: 32764    level: 2
        backup_total_bytes:  17988620648448
        backup_bytes_used:   15566949167104
        backup_num_devices:  1

    backup 3:
        backup_tree_root:    29786112    gen: 32764    level: 1
        backup_chunk_root:   20971520    gen: 32699    level: 1
        backup_extent_root:  29589504    gen: 32764    level: 2
        backup_fs_root:      29851648    gen: 32764    level: 1
        backup_dev_root:     30146560    gen: 32764    level: 1
        backup_csum_root:    953384960   gen: 32763    level: 2
        backup_total_bytes:  17988620648448
        backup_bytes_used:   15566949167104
        backup_num_devices:  1
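(For reference: this dump is btrfs-progs output. On recent versions it can be reproduced with the command below; older releases shipped it as a standalone btrfs-show-super.)

# Print the superblock of the data volume, read-only:
btrfs inspect-internal dump-super /dev/md127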
Also found the following lines in dmesg.log:
[Mon May 15 16:34:31 2017] md: md127 stopped.
[Mon May 15 16:34:31 2017] md: bind<sdb3>
[Mon May 15 16:34:31 2017] md: bind<sdc3>
[Mon May 15 16:34:31 2017] md: bind<sdd3>
[Mon May 15 16:34:31 2017] md: bind<sda3>
[Mon May 15 16:34:31 2017] md/raid:md127: device sda3 operational as raid disk 0
[Mon May 15 16:34:31 2017] md/raid:md127: device sdd3 operational as raid disk 3
[Mon May 15 16:34:31 2017] md/raid:md127: device sdc3 operational as raid disk 2
[Mon May 15 16:34:31 2017] md/raid:md127: device sdb3 operational as raid disk 1
[Mon May 15 16:34:31 2017] md/raid:md127: allocated 4288kB
[Mon May 15 16:34:31 2017] systemd[1]: Started Apply Kernel Variables.
[Mon May 15 16:34:31 2017] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[Mon May 15 16:34:31 2017] RAID conf printout:
[Mon May 15 16:34:31 2017]  --- level:5 rd:4 wd:4
[Mon May 15 16:34:31 2017]  disk 0, o:1, dev:sda3
[Mon May 15 16:34:31 2017]  disk 1, o:1, dev:sdb3
[Mon May 15 16:34:31 2017]  disk 2, o:1, dev:sdc3
[Mon May 15 16:34:31 2017]  disk 3, o:1, dev:sdd3
[Mon May 15 16:34:31 2017] md127: detected capacity change from 0 to 17988620648448
[Mon May 15 16:34:31 2017] systemd[1]: Started Journal Service.
[Mon May 15 16:34:31 2017] systemd-journald[1027]: Received request to flush runtime journal from PID 1
[Mon May 15 16:34:32 2017] BTRFS: device label 117c606a:data devid 1 transid 32766 /dev/md127
[Mon May 15 16:34:32 2017] Adding 1047420k swap on /dev/md1. Priority:-1 extents:1 across:1047420k
[Mon May 15 16:34:32 2017] BTRFS: has skinny extents
[Mon May 15 16:34:33 2017] BTRFS critical (device md127): unable to find logical 3390906064896 len 4096
[Mon May 15 16:34:33 2017] BTRFS: failed to read chunk root on md127
[Mon May 15 16:34:33 2017] BTRFS warning (device md127): page private not zero on page 3390906040320
[Mon May 15 16:34:33 2017] BTRFS: open_ctree failed
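Note the order of events: the md layer assembles cleanly ("raid level 5 active with 4 out of 4 devices") and it is the btrfs mount that fails, so the damage is at the filesystem layer, not the RAID layer. With SSH, that split can be confirmed directly (standard Linux commands, shown as a sketch; /mnt/test is a hypothetical mount point):

# RAID layer: are all members present and in sync?
cat /proc/mdstat
# Filesystem layer: does btrfs-progs still see the device?
btrfs filesystem show /dev/md127
# A read-only mount attempt reproduces the open_ctree error by hand:
mkdir -p /mnt/test && mount -o ro /dev/md127 /mnt/test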
- jak0lantash · May 15, 2017 · Mentor
ibell63 wrote:
[Mon May 15 16:34:33 2017] BTRFS critical (device md127): unable to find logical 3390906064896 len 4096
[Mon May 15 16:34:33 2017] BTRFS: failed to read chunk root on md127
[Mon May 15 16:34:33 2017] BTRFS warning (device md127): page private not zero on page 3390906040320
[Mon May 15 16:34:33 2017] BTRFS: open_ctree failed
These show BTRFS corruption.
ibell63 wrote:
Any advice or anything I can try besides dumping everything and starting over?
So... sorry, but you should dump everything and recreate the volume.
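If the volume won't mount at all, btrfs-progs includes an offline extraction tool, btrfs restore, that can sometimes copy files off a damaged filesystem before you recreate it. A sketch, with /mnt/backup as a hypothetical destination on a separate disk; how much it recovers depends on how deep the corruption goes:

# Dry run first: -D only lists what would be recovered, writes nothing.
btrfs restore -Dv /dev/md127 /mnt/backup
# Real pass: -i ignores errors, -v lists files as they are copied out.
btrfs restore -iv /dev/md127 /mnt/backup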
- ibell63 · May 15, 2017 · Aspirant
Other than this morning, this system has never shut down improperly with this volume.
I find it really disturbing that one improper shutdown can destroy an entire BTRFS volume... If you look through my post history, you will find that a similar situation cost me a volume less than a year ago. Is this a btrfs problem or a ReadyNAS OS problem? These drives have never had a single hardware error. I'm very close to switching to Synology, FreeNAS, or Windows 10's Storage Spaces.
I have several other boxes that I could throw these drives in, several with enough SATA ports to plug them all in at once. I'm familiar with Debian. I also have SSH enabled on this device. Anything I could do with btrfsck? Is it installed by default?

- ibell63 · May 15, 2017 · Aspirant
Ran: btrfs check /dev/md127

Got this:

checksum verify failed on 20983092084736 found 5F3408C6 wanted 2DA5B4C9
checksum verify failed on 20983092084736 found 5F3408C6 wanted 2DA5B4C9
checksum verify failed on 20983092084736 found 5980616F wanted 18E69839
checksum verify failed on 20983092084736 found 5980616F wanted 18E69839
bytenr mismatch, want=20983092084736, have=4008641877751173210
ERROR: cannot read chunk root
ERROR: cannot open file system
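For what it's worth, "cannot read chunk root" is the failure mode that btrfs rescue chunk-recover is aimed at: it scans the whole device and tries to rebuild the chunk tree. On a volume this size it is very slow, it writes to the device, and it is not guaranteed to succeed, so treat it as a last resort, ideally after imaging the array:

# Last-resort attempt to rebuild the chunk tree (slow; modifies the device):
btrfs rescue chunk-recover -v /dev/md127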