Forum Discussion
Birillo71
Nov 03, 2017 · Aspirant
Recover files using Linux
Hi All, I had a problem with the NAS and I am doing the possible to recover some of the files connecting the disks to a Linux System. I used the following commands as suggested in another discus...
- Nov 17, 2017
The next steps depend on how bad you want the data back, and what risks you want to take.
Of course, you can always contact NETGEAR for Data Recovery, they offer this kind of service as a contract.
There are some pending sectors on sdf, and sdf3 is clearly out of sync (though not by much). What I don't understand is why sdc3 doesn't get included in the RAID array, even though the mdadm output only shows bad blocks for it.
You could try to back up the superblocks (if not already done) and then recreate the RAID array. Be warned that this could result in irreversible data loss.
(I'm not 100% sure what the best approach is at this stage.)
Based on the outputs you provided, I think there are two possibilities.
- Again, this is dangerous territory -
- Either try to recreate the RAID array with "--assume-clean":
http://man7.org/linux/man-pages/man8/mdadm.8.html
- Or force the RAID array to assemble.
https://raid.wiki.kernel.org/index.php/RAID_Recovery#Trying_to_assemble_using_--force
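To make the two options concrete, here is a dry-run sketch. The device names and the /dev/md124 node follow this thread, and the chunk size and data offset come from the --examine output below; the member set shown is the sde3/sdd3/sdf3 variant. The commands are only printed, never executed:

```shell
# Dry-run sketch of the two recovery options. Devices (sde3/sdd3/sdf3),
# /dev/md124, chunk size, and data offset are taken from this thread.
# Nothing is executed; the commands are only printed for review.

# Option 1: force-assemble the existing array from the three best members.
CMD_FORCE="mdadm --assemble --force --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdf3"

# Option 2: recreate the array with --assume-clean, leaving slot 2 missing
# (member order must match the original Device Roles: sde3=0, sdd3=1, sdf3=3).
CMD_CREATE="mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 --chunk=64K --data-offset=131072K /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3"

echo "$CMD_FORCE"
echo "$CMD_CREATE"
# Only when you are certain, run one of them with: eval "$CMD_FORCE"
```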
For both:
- Either with sde3 sdd3 sdf3
- Or sde3 sdd3 sdc3
In theory, because you use "--assume-clean" and only include three members, it shouldn't rewrite any data blocks (though it will overwrite the superblocks), so it shouldn't cause permanent damage. But that's a "should".
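Picking which three members to include comes down to the Events counter in each superblock: the member furthest behind is the one to leave out. A small sketch ranking members by event count — sdd3's 14159 comes from the --examine output below, the other three counts are made up for illustration; on the real system, extract each with `mdadm --examine /dev/sdX3 | grep Events`:

```shell
# Rank RAID members by their superblock Events counter, highest first.
# The member at the bottom is the most out-of-date and the one to omit.
# Only sdd3's 14159 is from this thread; the other counts are illustrative.
RANKED=$(sort -rn -k2 <<'EOF'
/dev/sde3 14159
/dev/sdd3 14159
/dev/sdf3 14096
/dev/sdc3 13870
EOF
)
echo "$RANKED"
```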
Parameters from the output you provided:
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
           Name : 0e36f164:data-0
  Creation Time : Sun Feb 1 21:32:44 2015
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
     Array Size : 923158272 (880.39 GiB 945.31 GB)
  Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=113 sectors
          State : active
    Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
    Update Time : Tue Oct 31 08:32:47 2017
       Checksum : 3518f348 - correct
         Events : 14159
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 1
    Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3: Device Role : Active device 2
/dev/sdd3: Device Role : Active device 1
/dev/sde3: Device Role : Active device 0
/dev/sdf3: Device Role : Active device 3
That's what I would try:
# Backup superblocks for each partition - if not already done
for partition in /dev/sd[a-f][0-9]; do
  echo "Backing up superblocks for $partition"
  dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1
done
ls -lh /root/superblocks_*

# Backup of "mdadm --examine" for each partition - new
for partition in /dev/sd[a-f][0-9]; do
  echo "Backing up mdadm information for $partition"
  mdadm --examine $partition > /root/mdadm_-E_$(basename $partition).txt
done
ls -lh /root/mdadm_-E_*

# Start all healthy RAID arrays - if not already done
mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
mdadm --assemble --verbose /dev/md127 /dev/sde6 /dev/sdf6

# Recreate the unhealthy RAID array - new
mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 \
  --size=461579136K --chunk=64K --data-offset=131072K \
  /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3

# Check the integrity - do it again
cat /proc/mdstat
btrfs device scan
btrfs filesystem show
btrfsck --readonly /dev/md127
mount -o ro /dev/md127 /mnt
btrfs filesystem usage /mnt
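If the recreate attempt makes things worse, the superblock backups taken with dd above can be written back. A hedged sketch (the backup path and naming follow the dd loop above; the dd commands are only printed, since writing them overwrites the first 64 KiB of each partition — a last resort only):

```shell
# Print (do not run) the dd commands that would restore the 64 KiB
# superblock backups taken earlier. Remove the echo only as a last resort:
# this overwrites the start of each partition.
restore_superblocks() {
    dir=${1:-/root}                           # directory with superblocks_*.mdsb
    for backup in "$dir"/superblocks_*.mdsb; do
        [ -e "$backup" ] || continue          # glob matched nothing
        name=$(basename "$backup" .mdsb)      # e.g. superblocks_sdd3
        part="/dev/${name#superblocks_}"      # e.g. /dev/sdd3
        echo dd if="$backup" of="$part" bs=64k count=1
    done
}
restore_superblocks /root
```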
Birillo71
Nov 03, 2017 · Aspirant
My initial test was installing the 4 disks in an external Linux system.
In this case the suggested commands don't report anything related to /dev/md127
When I install all the disks in the NAS I get the following:
root@nas-36-F1-64:/# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 2 drives and 1 rebuilding - not enough to start the array.
mdadm: No arrays found in config file or automatically
root@nas-36-F1-64:/# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid5 sdd4[6] sda5[5] sdb5[4] sdc4[3]
1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md126 : active raid1 sdb6[0] sda6[1]
800803520 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdb4[2] sda4[3]
175686272 blocks super 1.2 [2/2] [UU]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sdd1[4] sda1[6] sdc1[5] sdb1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
root@nas-36-F1-64:/# cat /etc/fstab
LABEL=0e36f164:data /data btrfs defaults 0 0
root@nas-36-F1-64:/# mount -l | grep md127
root@nas-36-F1-64:/# mount -l | grep md126
root@nas-36-F1-64:/# mount -l | grep md125
root@nas-36-F1-64:/#
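The mdstat output above can be condensed per array to see at a glance what assembled and with which members — and that the data-0 array is the one conspicuously absent. A small sketch over the quoted text (embedded here as sample data; on the NAS, feed `cat /proc/mdstat` to the same awk instead):

```shell
# Summarize each md array from /proc/mdstat: name, level, and member list.
# The heredoc holds the mdstat lines quoted above as sample input.
SUMMARY=$(
awk '/^md/ {printf "%s: %s, members:", $1, $4; for (i = 5; i <= NF; i++) printf " %s", $i; print ""}' <<'EOF'
md125 : active raid5 sdd4[6] sda5[5] sdb5[4] sdc4[3]
md126 : active raid1 sdb6[0] sda6[1]
md127 : active raid1 sdb4[2] sda4[3]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
md0 : active raid1 sdd1[4] sda1[6] sdc1[5] sdb1[7]
EOF
)
echo "$SUMMARY"
```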