Forum Discussion
hdoverobinson
Dec 15, 2017Tutor
RN214 Firmware 6.9.1 - No Volumes Exist
Hello,
I have been running a ReadyNAS RN214 since ~March 2016 with three 4TB WD Red Pro drives (WDC WD4001FFSX-68JNUN0) in X-RAID configuration (RAID 5). I recently updated the firmware from 6.9.0 to 6.9.1. The single data volume recently reached 80% capacity, so I was planning to add a fourth drive this month. I run monthly disk maintenance (defrag, balance, scrub, and disk checks).
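For reference, my monthly maintenance is roughly the following. This is a sketch, assuming the stock ReadyNAS /data mount point; the web GUI scheduler runs the equivalent operations:

```shell
# Sketch of the monthly maintenance tasks as btrfs commands
# (assumes the data volume is mounted at /data, the stock layout).
btrfs filesystem defragment -r /data   # defrag, recursive
btrfs balance start -dusage=50 /data   # balance data chunks under 50% full
btrfs scrub start -B /data             # scrub; -B blocks until it finishes
```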
Sometime today I stopped being able to write to the SMB shares and the AFP Time Machine share, though I could still read from all shares. Thinking this was a permissions issue, I tried resetting the permissions on the shares, which threw the error "Failed to change permission and ownership of the share". I then rebooted through the web GUI. When the unit powered back on, it reported "No volumes exist". This picture shows the Volumes page in the web GUI:
(Before today I was running with a single data volume using the 3 disks.)
I SSH'd into the RN214 and ran short disk tests on all three drives using smartctl (invocation: "smartctl --test=short /dev/sdX"), and all three of them passed. I am running "long" tests now. Per a different forum post on this issue, I tried mounting the volume read-only in recovery mode with the following fstab entry: "LABEL=119c1b84:data /data btrfs defaults,ro,recovery 0 0". The volume still failed to mount after another reboot through the web GUI. There are no blinking LEDs on the unit; the LEDs are lit normally.
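The smartctl invocations, roughly, were as follows. A sketch only; the device names sda through sdc are assumed from how the disks enumerate in dmesg:

```shell
# Queue a short SMART self-test on each array member
# (device names assumed: sda, sdb, sdc).
for d in sda sdb sdc; do
    smartctl --test=short "/dev/$d"   # short test takes ~2 minutes
done

# After the tests finish, read back the self-test logs:
for d in sda sdb sdc; do
    smartctl -l selftest "/dev/$d"    # prints pass/fail per test
done
```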
I can send logs upon request. I've pasted below some relevant lines from dmesg after booting with the fstab entry shown above:
[ 22.120698] md: md127 stopped.
[ 22.121803] md: bind<sdb3>
[ 22.122033] md: bind<sdc3>
[ 22.122267] md: bind<sda3>
[ 22.124225] md/raid:md127: device sda3 operational as raid disk 0
[ 22.124233] md/raid:md127: device sdc3 operational as raid disk 2
[ 22.124239] md/raid:md127: device sdb3 operational as raid disk 1
[ 22.125161] md/raid:md127: allocated 3240kB
[ 22.125221] md/raid:md127: raid level 5 active with 3 out of 3 devices, algorithm 2
[ 22.125226] RAID conf printout:
[ 22.125232] --- level:5 rd:3 wd:3
[ 22.125238] disk 0, o:1, dev:sda3
[ 22.125243] disk 1, o:1, dev:sdb3
[ 22.125248] disk 2, o:1, dev:sdc3
[ 22.125403] md127: detected capacity change from 0 to 7991637573632
[ 22.485486] Adding 523708k swap on /dev/md1. Priority:-1 extents:1 across:523708k
[ 22.506940] BTRFS: device label 119c1b84:data devid 1 transid 789057 /dev/md127
[ 22.669799] BTRFS info (device md127): enabling auto recovery
[ 22.672132] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[ 22.672144] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[ 22.672192] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[ 22.672203] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[ 22.672243] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[ 22.672251] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[ 22.672265] BTRFS error (device md127): failed to read chunk root
[ 22.734127] BTRFS error (device md127): open_ctree failed
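For reference, here is what I'm considering trying next, given the "failed to read chunk root" error above. This is only a sketch, assuming the array keeps assembling cleanly as /dev/md127 as in the dmesg; all of these are read-only and I haven't run them yet:

```shell
# Read-only diagnostics for a btrfs that fails open_ctree on the chunk root.
# Assumes the md array assembled as /dev/md127 (as in the dmesg above).
btrfs check --readonly /dev/md127          # report (do not repair) metadata damage
btrfs restore -D /dev/md127 /tmp/ignored   # -D: dry run, list recoverable files
btrfs-find-root /dev/md127                 # search the device for older tree roots
```

Anything that writes to the disks (e.g. "btrfs rescue chunk-recover") I'd only run on advice from support, or after imaging the disks first.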
Thanks for looking!
3 Replies
Replies have been turned off for this discussion
- mdgm-ntgr (NETGEAR Employee, Retired)
Is your backup up to date?
- hdoverobinson
I have backups of the most critical files, but I'd like to try to recover the volume before restoring from backup. Also, all three disks in the array passed extended offline tests with smartctl.
- mdgm-ntgr (NETGEAR Employee, Retired)
It does sound like you'd need to contact support about this.