
Forum Discussion

Birillo71
Aspirant
Nov 03, 2017
Solved

Recover files using Linux

Hi All,

 

I had a problem with the NAS and I am doing everything I can to recover some of the files by connecting the disks to a Linux system.

I used the following commands as suggested in another discussion:

# mdadm --assemble --scan
# cat /proc/mdstat
# mount -t btrfs -o ro /dev/md127 /mnt

 

# cat /proc/mdstat

gives me the following output:

Personalities : [raid1] [raid6] [raid5] [raid4]

md127 : active (auto-read-only) raid5 sdd4[6] sdb5[5] sda5[4] sdc4[3]

      1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]

      

md0 : active (auto-read-only) raid1 sdd1[4] sdc1[5] sda1[7]

      4190208 blocks super 1.2 [4/3] [UUU_]

      

unused devices: <none>

 

While 

 

mount -t btrfs -o ro /dev/md127 /mnt

gives:

mount: wrong fs type, bad option, bad superblock on /dev/md127,

       missing codepage or helper program, or other error

 

       In some cases useful info is found in syslog - try

       dmesg | tail or so.

 

I am stuck at this point.

Does anyone have any idea, please?

Can I somehow mount each single disk to access the files?

 

Thank you
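
As a first diagnostic step, the kernel log and the btrfs tools usually show why such a mount fails. On ReadyNAS OS 6 the data volume is typically a multi-device btrfs filesystem spanning several md arrays, so a single md device may refuse to mount on its own. A minimal sketch (standard btrfs-progs commands; /dev/md127 as in the output above):

# The kernel log usually names the exact reason for the failed mount
dmesg | tail -n 20

# Register every block device carrying a btrfs signature, then list the volume
# and check whether any member devices are reported as missing
btrfs device scan
btrfs filesystem show

# Retry read-only; "degraded" lets btrfs mount even with a missing member device
mount -t btrfs -o ro,degraded /dev/md127 /mnt

If btrfs filesystem show lists missing devices, the remaining md arrays have to be assembled before the volume will mount.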

 

  • The next steps depend on how badly you want the data back, and what risks you are willing to take.

    Of course, you can always contact NETGEAR for Data Recovery; they offer this kind of service as a contract.

     

    While there are some pending sectors on sdf, and sdf3 is clearly out of sync (not by much), I don't understand why sdc3 doesn't get included in the RAID array, although it does show bad blocks in the mdadm output.
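
    (To double-check those pending sectors, a quick look with smartmontools works; a minimal sketch, assuming smartctl is installed and sdf is still the suspect disk:)

    # Show the SMART attributes of the suspect disk and pick out the
    # pending / reallocated sector counters (smartmontools package)
    smartctl -A /dev/sdf | grep -i -E 'pending|realloc'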

     

    You could try to back up the superblocks (if not already done), then recreate the RAID array. But this could result in irrevocable data loss.

    (I'm not 100% sure what the best approach is at this stage.)

    Based on the outputs you provided, I think there are two possibilities.

    - Again, this is dangerous territory -

    • Either try to recreate the RAID array with "--assume-clean".

    https://raid.wiki.kernel.org/index.php/RAID_Recovery#Restore_array_by_recreating_.28after_multiple_device_failure.29

    http://man7.org/linux/man-pages/man8/mdadm.8.html

    • Or force the RAID array to assemble (a sketch follows below).

    https://raid.wiki.kernel.org/index.php/RAID_Recovery#Trying_to_assemble_using_--force

     

    For both:

    • Either with sde3 sdd3 sdf3
    • Or sde3 sdd3 sdc3

     

    In theory, since you use "--assume-clean" and only include three members, it shouldn't try to rewrite any blocks of data (but it will overwrite the superblocks), so it shouldn't cause permanent damage. But that is only a "should".
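
    If you go for the forced assembly instead of the recreate, a minimal sketch would look like the following (same member partitions as above, or sdc3 instead of sdf3; the device names are taken from your output and must be re-checked on your own system):

    # Stop any partially assembled remnant of the data array first, if one exists
    mdadm --stop /dev/md/data-0

    # Force-assemble from three members; --force lets mdadm accept the slightly
    # out-of-date member so the array can start degraded
    mdadm --assemble --force --verbose --run /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdf3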

     

     Parameters from the output you provided:

    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
               Name : 0e36f164:data-0
      Creation Time : Sun Feb  1 21:32:44 2015
         Raid Level : raid5
       Raid Devices : 4
     
    Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
         Array Size : 923158272 (880.39 GiB 945.31 GB)
      Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
       Unused Space : before=262064 sectors, after=113 sectors
              State : active
        Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
     
        Update Time : Tue Oct 31 08:32:47 2017
           Checksum : 3518f348 - correct
             Events : 14159
     
             Layout : left-symmetric
         Chunk Size : 64K
     
       Device Role : Active device 1
       Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdc3:
       Device Role : Active device 2
    /dev/sdd3:
       Device Role : Active device 1
    /dev/sde3:
       Device Role : Active device 0
    /dev/sdf3:
       Device Role : Active device 3

     

     

    That's what I would try:

    # Backup superblocks for each partition - if not already done
    for partition in /dev/sd[a-f][0-9]; do echo "Backing up superblocks for $partition"; dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1; done
    ls -lh /root/superblocks_*
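    # (Restore sketch, only if the recreate goes wrong: writing a saved dump back
    #  over the start of a partition restores its old md superblock; the partition
    #  name below is just an example)
    # dd if=/root/superblocks_sdd3.mdsb of=/dev/sdd3 bs=64k count=1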
    
    # Backup of "mdadm --examine" for each partition - new
    for partition in /dev/sd[a-f][0-9]; do echo "Backing up mdadm information for $partition"; mdadm --examine $partition > mdadm_-E_$(basename $partition).txt; done
    ls -lh /root/mdadm_-E_*
    
    # Start all healthy RAID arrays - if not already done
    mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
    mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
    mdadm --assemble --verbose /dev/md127 /dev/sde6 /dev/sdf6
    
    # Recreate the unhealthy RAID array - new
    # (--size is per member: Used Dev Size 615438848 sectors = 307719424 KiB; Data Offset 262144 sectors = 131072 KiB)
    mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 --size=307719424K --chunk=64K --data-offset=131072K /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3
    
    # Check the integrity - do it again
    cat /proc/mdstat
    btrfs device scan
    btrfs filesystem show
    btrfsck --readonly /dev/md127
    mount -o ro /dev/md127 /mnt
    btrfs filesystem usage /mnt
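
    If the read-only mount succeeds, copy everything off before experimenting any further. A minimal sketch (the target path is only a placeholder for a separate backup disk):

    # Copy the recovered data to another disk, preserving attributes;
    # /path/to/backup is a placeholder for your own backup location
    rsync -aHAX --progress /mnt/ /path/to/backup/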

     

25 Replies

  • Check your RN104 with:

    $ cat /etc/fstab

     and:

    $ mount -l | grep md127

     to see if the outputs report btrfs.
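
     A quick way to confirm the filesystem type directly (a minimal sketch; blkid and lsblk are standard tools and work on the external Linux system as well):

    $ blkid /dev/md127
    $ lsblk -f /dev/md127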

    • Birillo71
      Aspirant

      My initial test was installing the 4 disks on an external Linux system.

      In this case, the suggested commands don't report anything related to /dev/md127.

       

      When I install all the disks on the NAS I get the following:

      root@nas-36-F1-64:/# mdadm --assemble --scan

      mdadm: /dev/md/data-0 assembled from 2 drives and 1 rebuilding - not enough to start the array.

      mdadm: No arrays found in config file or automatically

      root@nas-36-F1-64:/# cat /proc/mdstat

      Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]

      md125 : active raid5 sdd4[6] sda5[5] sdb5[4] sdc4[3]

            1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]

            

      md126 : active raid1 sdb6[0] sda6[1]

            800803520 blocks super 1.2 [2/2] [UU]

            

      md127 : active raid1 sdb4[2] sda4[3]

            175686272 blocks super 1.2 [2/2] [UU]

            

      md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]

            1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

            

      md0 : active raid1 sdd1[4] sda1[6] sdc1[5] sdb1[7]

            4190208 blocks super 1.2 [4/4] [UUUU]

            

      unused devices: <none>

      root@nas-36-F1-64:/# cat /etc/fstab

      LABEL=0e36f164:data /data btrfs defaults 0 0

      root@nas-36-F1-64:/# mount -l | grep md127

      root@nas-36-F1-64:/# mount -l | grep md126

      root@nas-36-F1-64:/# mount -l | grep md125

      root@nas-36-F1-64:/#
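
      To see which data-0 member is holding the assembly back, compare the md superblocks of its partitions. A minimal sketch, assuming the data-0 members are the third partition on each disk (adjust the letters to whatever your system shows):

      # Compare event counts and states of the data-0 members; a member with a
      # lower event count is the one mdadm refuses to use
      for p in /dev/sd[a-d]3; do
          echo "== $p =="
          mdadm --examine "$p" | grep -E 'Events|Update Time|Array State|Device Role'
      done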

      • Birillo71
        Aspirant

        Does anyone have an idea, please, how I can solve this problem?
