
Forum Discussion

Birillo71
Aspirant
Nov 03, 2017
Solved

Recover files using Linux

Hi All,   I had a problem with my NAS and I am doing what I can to recover some of the files by connecting the disks to a Linux system. I used the following commands as suggested in another discus...
  • jak0lantash
    Nov 17, 2017

    The next steps depend on how badly you want the data back, and what risks you are willing to take.

    Of course, you can always contact NETGEAR for Data Recovery; they offer this kind of service under contract.

     

    While there are some pending sectors on sdf, and sdf3 is clearly out of sync (though not by much), I don't understand why sdc3 doesn't get included in the RAID array, even though it does show bad blocks (per the mdadm output).

     

    You could try to back up the superblocks (if not already done), then recreate the RAID array. But this could result in irrevocable data loss.

    (I'm not 100% sure of what is the best approach at this stage.)

    Based on the outputs you provided, I think there are two possibilities.

    - Again, this is dangerous territory -

    • Either try to recreate the RAID array with "--assume-clean".

    https://raid.wiki.kernel.org/index.php/RAID_Recovery#Restore_array_by_recreating_.28after_multiple_device_failure.29

    http://man7.org/linux/man-pages/man8/mdadm.8.html

    • Or force the RAID array to assemble.

    https://raid.wiki.kernel.org/index.php/RAID_Recovery#Trying_to_assemble_using_--force

     

    For both:

    • Either with sde3 sdd3 sdf3
    • Or sde3 sdd3 sdc3
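    For the forced-assembly route, a minimal sketch (the md124 array name is an assumption; pick whichever set of three members you decide on):

    ```shell
    # Make sure no partial or auto-started assembly of the broken array is running
    mdadm --stop /dev/md124

    # Option 1: force-assemble with sde3, sdd3, sdf3.
    # --force lets mdadm ignore the small event-count mismatch on sdf3.
    mdadm --assemble --force --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdf3

    # Option 2 (if the above fails): use sdc3 instead of sdf3
    # mdadm --assemble --force --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdc3

    # The array should come up degraded (3 of 4 members active)
    cat /proc/mdstat
    ```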

     

    In theory, since you use "--assume-clean" and only include three members, it shouldn't try to rewrite any blocks of data (though it will overwrite the superblocks), so it shouldn't cause permanent damage. But that is a "should".

     

     Parameters from the output you provided:

    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
               Name : 0e36f164:data-0
      Creation Time : Sun Feb  1 21:32:44 2015
         Raid Level : raid5
       Raid Devices : 4
     
    Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
         Array Size : 923158272 (880.39 GiB 945.31 GB)
      Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
       Unused Space : before=262064 sectors, after=113 sectors
              State : active
        Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
     
        Update Time : Tue Oct 31 08:32:47 2017
           Checksum : 3518f348 - correct
             Events : 14159
     
             Layout : left-symmetric
         Chunk Size : 64K
     
       Device Role : Active device 1
       Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdc3:
       Device Role : Active device 2
    /dev/sdd3:
       Device Role : Active device 1
    /dev/sde3:
       Device Role : Active device 0
    /dev/sdf3:
       Device Role : Active device 3
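    A note on units before recreating: "mdadm --examine" reports sizes in 512-byte sectors, while "mdadm --create" takes --size and --data-offset in KiB, so each sector count is halved. From the output above:

    ```shell
    # Values copied from the "mdadm --examine /dev/sdd3" output above
    used_dev_size_sectors=615438848   # Used Dev Size (per member)
    data_offset_sectors=262144        # Data Offset

    # mdadm --create takes these in KiB (1 KiB = 2 sectors)
    echo "--size=$((used_dev_size_sectors / 2))K"        # --size=307719424K
    echo "--data-offset=$((data_offset_sectors / 2))K"   # --data-offset=131072K
    ```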

     

     

    That's what I would try:

    # Backup superblocks for each partition - if not already done
    for partition in /dev/sd[a-f][0-9]; do echo "Backing up superblocks for $partition"; dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1; done
    ls -lh /root/superblocks_*
    
    # Backup of "mdadm --examine" for each partition - new
    for partition in /dev/sd[a-f][0-9]; do echo "Backing up mdadm information for $partition"; mdadm --examine $partition > /root/mdadm_-E_$(basename $partition).txt; done
    ls -lh /root/mdadm_-E_*
    
    # Start all healthy RAID arrays - if not already done
    mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
    mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
    mdadm --assemble --verbose /dev/md127 /dev/sde6 /dev/sdf6
    
    # Recreate the unhealthy RAID array - new
    # --size is the per-member size in KiB: Used Dev Size 615438848 sectors / 2 = 307719424 KiB
    mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 --size=307719424K --chunk=64K --data-offset=131072K /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3
    
    # Check the integrity - do it again
    cat /proc/mdstat
    btrfs device scan
    btrfs filesystem show
    btrfsck --readonly /dev/md127
    mount -o ro /dev/md127 /mnt
    btrfs filesystem usage /mnt
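
    If the recreate leaves things worse, the superblock backups taken at the start can be written back over the member partitions (a sketch; this restores only the first 64 KiB of each partition, which contains the v1.2 superblock, and assumes the backups live in /root as above):

    ```shell
    # Stop the recreated array before touching the superblocks
    mdadm --stop /dev/md124

    # Write each saved 64 KiB superblock region back to its partition
    for partition in /dev/sd[c-f]3; do
        backup="/root/superblocks_$(basename $partition).mdsb"
        if [ ! -f "$backup" ]; then
            echo "No backup found for $partition, skipping"
            continue
        fi
        echo "Restoring superblock region for $partition"
        dd if="$backup" of="$partition" bs=64k count=1
    done
    ```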

     
