
Forum Discussion

Archdivan
Aspirant
May 04, 2021

RNDP6000 inactive volume data

Hello there

Today I suddenly lost access to my files over the network, and the NAS was completely frozen. So I force-rebooted the system and opened the admin console, where there was a message that the data volume was in read-only mode. One more reboot (from the console; stupid, I know) turned it into an inactive volume.

Can I somehow reboot it in read-only mode again so I can save the files?

 

ReadyNAS OS 6.10.4

10 Replies


  • Archdivan wrote:

     

    Can I somehow reboot it in read-only mode again so I can save the files?


    The RAID array has gotten out of sync.  I think the best next step is to test the disks in a Windows PC using vendor tools (SeaTools for Seagate; the Western Digital Dashboard software for WD drives).  Power down the NAS, remove the disks, and label them by slot.  You can connect each disk via either SATA or a USB adapter/dock, and run the extended test.

     

    If you can't connect the disks to a PC, you could try booting the NAS in tech support mode and running a similar extended test with smartctl.
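
    For reference, an extended test with smartctl usually looks something like this (the device name /dev/sda is only an example; repeat it for each disk):

    # start the extended (long) SMART self-test on one disk
    smartctl -t long /dev/sda
    # after the test finishes (it can take several hours), review the results
    smartctl -a /dev/sda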

     

    There are ways to force the volume to mount, but they are difficult to describe in the forum, as the right approach depends on exactly what failed.  Unfortunately tech support isn't an option, since you have a converted NAS.
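
    (Purely as a rough sketch of the kind of command involved, not a recommendation for this specific case: a read-only recovery mount of a BTRFS data volume is sometimes attempted along these lines. The RAID device and mount point below are examples, and on older kernels the mount option is spelled "recovery" rather than "usebackuproot".)

    # check /proc/mdstat to find the md device that holds the data volume
    cat /proc/mdstat
    # create a temporary mount point and attempt a read-only mount using a
    # backup tree root (device name and mount point are examples only)
    mkdir -p /mnt/recovery
    mount -o ro,usebackuproot /dev/md127 /mnt/recovery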

    • Archdivan
      Aspirant

      Stephen, thanks a lot for replying!

      So far I have tried two ways to check the disks: running smartctl via SSH and the disk check from the boot menu. SMART looks normal, with no obvious critical errors. If necessary, I can send a report. I also found some errors in dmesg.log:

       

      [Thu May 6 03:47:20 2021] RAID conf printout:
      [Thu May 6 03:47:20 2021] --- level:5 rd:5 wd:5
      [Thu May 6 03:47:20 2021] disk 0, o:1, dev:sda3
      [Thu May 6 03:47:20 2021] disk 1, o:1, dev:sdb3
      [Thu May 6 03:47:20 2021] disk 2, o:1, dev:sdc3
      [Thu May 6 03:47:20 2021] disk 3, o:1, dev:sdd3
      [Thu May 6 03:47:20 2021] disk 4, o:1, dev:sde3
      [Thu May 6 03:47:20 2021] md126: detected capacity change from 0 to 1980566863872
      [Thu May 6 03:47:20 2021] BTRFS: device label 33ea3ec3:data devid 1 transid 1021063 /dev/md126
      [Thu May 6 03:47:21 2021] BTRFS info (device md126): has skinny extents
      [Thu May 6 03:47:39 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
      [Thu May 6 03:47:39 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
      [Thu May 6 03:47:41 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
      [Thu May 6 03:47:41 2021] BTRFS critical (device md126): corrupt leaf, slot offset bad: block=2977215774720, root=1, slot=131
      [Thu May 6 03:47:41 2021] BTRFS warning (device md126): Skipping commit of aborted transaction.
      [Thu May 6 03:47:41 2021] BTRFS: error (device md126) in cleanup_transaction:1864: errno=-5 IO failure
      [Thu May 6 03:47:41 2021] BTRFS info (device md126): delayed_refs has NO entry
      [Thu May 6 03:47:41 2021] BTRFS: error (device md126) in btrfs_replay_log:2436: errno=-5 IO failure (Failed to recover log tree)
      [Thu May 6 03:47:41 2021] BTRFS error (device md126): cleaner transaction attach returned -30
      [Thu May 6 03:47:41 2021] BTRFS error (device md126): open_ctree failed
      [Thu May 6 03:47:42 2021] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
      [Thu May 6 03:47:42 2021] NFSD: starting 90-second grace period (net ffffffff88d70240)
      [Thu May 6 03:48:01 2021] nfsd: last server has exited, flushing export cache
      [Thu May 6 03:48:01 2021] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
      [Thu May 6 03:48:01 2021] NFSD: starting 90-second grace period (net ffffffff88d70240)
      [Thu May 6 05:38:06 2021] RPC: fragment too large: 50331695
      [Thu May 6 08:13:35 2021] RPC: fragment too large: 50331695

       

      It looks like corruption in the filesystem.

      Some googling tells me that it can be repaired with btrfs tools, but I don't quite understand how to do this.
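
      From what I read, the first steps people mention are read-only, something like the commands below (I have not run them yet, and everything I found warns against using "btrfs check --repair" without expert advice):

      # read-only check that reports problems without changing anything
      btrfs check --readonly /dev/md126
      # dry run of file recovery to another location (-D only lists what would
      # be restored; the target path is just an example)
      btrfs restore -D /dev/md126 /path/to/other/storage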

      • StephenB
        Guru

        Archdivan wrote:


        [Thu May 6 03:47:41 2021] BTRFS: error (device md126) in cleanup_transaction:1864: errno=-5 IO failure

        [Thu May 6 03:47:41 2021] BTRFS: error (device md126) in btrfs_replay_log:2436: errno=-5 IO failure (Failed to recover log tree)

         

        It looks like corruption in the filesystem.

         


        It does look like corruption, but I suggest first looking in system.log and kernel.log for more info on those two particular errors.  They look disk-related, and those logs might show which disk caused them.  Once you know the disk, you could try booting the system up without it, and see if the volume mounts.
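
        If you have SSH access, something like this might help narrow it down (I'm assuming the logs sit in /var/log on the NAS; otherwise use the copies in the downloaded log zip):

        # search the kernel and system logs for disk-level I/O / ATA errors
        grep -iE "i/o error|ata[0-9]+.*error" /var/log/kernel.log /var/log/system.log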

         

        BTW, if it does mount, then the next step would be to off-load data (at least the most critical stuff).
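
        As a sketch, assuming the volume comes back read-only and a USB disk is attached (the share name and USB mount point below are examples only):

        # copy the most important share to the USB disk, preserving attributes
        rsync -a --progress /data/ImportantShare/ /media/USB_HDD_1/rescue/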

         


        Archdivan wrote:

         

        Some googling tells me that it can be repaired with btrfs tools, but I don't quite understand how to do this.


        Though I've done some reading here, it's not something I've needed to do myself.  rn_enthusiast has sometimes offered to review logs and give tailored advice on this, so maybe wait and see if he chimes in.

         

         
