Forum Discussion
Birillo71
Nov 03, 2017 (Aspirant)
Recover files using Linux
Hi All, I had a problem with the NAS and I am doing what I can to recover some of the files by connecting the disks to a Linux system. I used the following commands as suggested in another discus...
- Nov 17, 2017
The next steps depend on how badly you want the data back, and what risks you are willing to take.
Of course, you can always contact NETGEAR for data recovery; they offer this kind of service under contract.
While there are some pending sectors on sdf, and sdf3 is clearly out of sync (though not by much), I don't understand why sdc3 doesn't get included in the RAID array, even though it does show bad blocks in the mdadm output.
You could try to back up the superblocks (if not already done) and then recreate the RAID array, but this could result in irrevocable data loss.
(I'm not 100% sure what the best approach is at this stage.)
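If you want to double-check those pending sectors and the recorded bad blocks before deciding, something like this should do it (a minimal sketch; I'm assuming smartmontools is installed and that sdf/sdc3 are still the right device names on your system):
# Re-read the SMART attributes of the suspect disk (pending / reallocated sectors)
smartctl -A /dev/sdf | grep -i -E "pending|reallocated"
# List the bad blocks recorded in the md superblock of sdc3
mdadm --examine-badblocks /dev/sdc3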
Based on the outputs you provided, I think there are two possibilities.
- Again, this is dangerous territory -
- Either try to recreate the RAID array with "--assume-clean".
http://man7.org/linux/man-pages/man8/mdadm.8.html
- Or force the RAID array to assemble (see the sketch after the next paragraph).
https://raid.wiki.kernel.org/index.php/RAID_Recovery#Trying_to_assemble_using_--force
For both:
- Either with sde3 sdd3 sdf3
- Or sde3 sdd3 sdc3
In theory, because you use "--assume-clean" and only include three members, it shouldn't try to rewrite any block of data (though it will overwrite the superblocks), so it shouldn't cause permanent damage. But it's a "should".
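For reference, the forced assembly route would look roughly like this (a sketch only, based on the wiki page above; double-check the device list, and swap sdf3 for sdc3 if you prefer the second set of members):
# Stop any half-assembled remnant of the array first (harmless if it doesn't exist)
mdadm --stop /dev/md127
# Force-assemble with three members; --force ignores the stale event count, --run starts the array degraded
mdadm --assemble --verbose --force --run /dev/md127 /dev/sde3 /dev/sdd3 /dev/sdf3
cat /proc/mdstat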
Parameters from the output you provided:
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Name : 0e36f164:data-0
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
Array Size : 923158272 (880.39 GiB 945.31 GB)
Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=113 sectors
State : active
Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
Update Time : Tue Oct 31 08:32:47 2017
Checksum : 3518f348 - correct
Events : 14159
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3: Device Role : Active device 2
/dev/sdd3: Device Role : Active device 1
/dev/sde3: Device Role : Active device 0
/dev/sdf3: Device Role : Active device 3
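For what it's worth, this is how I read those figures as mdadm --create options (my interpretation; verify against the man page before relying on it):
# Raid Level : raid5, Raid Devices : 4        -> --level=5 --raid-devices=4
# Chunk Size : 64K                            -> --chunk=64K
# Data Offset : 262144 sectors x 512 B        -> 131072 KiB -> --data-offset=131072K
# Device Roles: sde3=0, sdd3=1, sdc3=2, sdf3=3 -> member order on the command line,
#               with "missing" in the slot of whichever member you leave out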
That's what I would try:
# Backup superblocks for each partition - if not already done
for partition in /dev/sd[a-f][0-9]; do echo "Backing up superblocks for $partition"; dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1; done
ls -lh /root/superblocks_*
# Backup of "mdadm --examine" for each partition - new
for partition in /dev/sd[a-f][0-9]; do echo "Backing up mdadm information for $partition"; mdadm --examine $partition > /root/mdadm_-E_$(basename $partition).txt; done
ls -lh /root/mdadm_-E_*
# Start all healthy RAID arrays - if not already done
mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
mdadm --assemble --verbose /dev/md127 /dev/sde6 /dev/sdf6
# Recreate the unhealthy RAID array - new
mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 --size=461579136K --chunk=64K --data-offset=131072K /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3
# Check the integrity - do it again
cat /proc/mdstat
btrfs device scan
btrfs filesystem show
btrfsck --readonly /dev/md127
mount -o ro /dev/md127 /mnt
btrfs filesystem usage /mnt
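If the recreate clearly fails, the superblock backups from the first step are what would let you put the old metadata back. Restoring one would look roughly like this (hypothetical example for sdd3 only; only do this to undo a failed recreate, as it overwrites the first 64 KiB of the partition with the saved copy):
dd if=/root/superblocks_sdd3.mdsb of=/dev/sdd3 bs=64k count=1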
Birillo71
Nov 13, 2017 (Aspirant)
Hi again,
Since I moved to a different server (more up to date than the previous one),
I also ran the following commands again, in case this can help:
root@ubuntu:/mnt# modinfo btrfs | grep -i version
srcversion: E0B44ABE728FFF50C821A52
root@ubuntu:/mnt# btrfs device scan
Scanning for Btrfs filesystems
root@ubuntu:/mnt# btrfs fi show
warning, device 1 is missing
warning, device 1 is missing
Label: '0e36f164:data' uuid: 96549eaf-20c0-497d-b93c-9ebd91951afc
Total devices 4 FS bytes used 3.27TiB
devid 2 size 1.85TiB used 1.61TiB path /dev/md126
devid 3 size 167.55GiB used 167.55GiB path /dev/md122
devid 4 size 763.71GiB used 705.00GiB path /dev/md123
*** Some devices missing
root@ubuntu:/mnt# modinfo btrfs
filename: /lib/modules/4.13.0-16-generic/kernel/fs/btrfs/btrfs.ko
license: GPL
alias: devname:btrfs-control
alias: char-major-10-234
alias: fs-btrfs
srcversion: E0B44ABE728FFF50C821A52
depends: raid6_pq,xor
intree: Y
name: btrfs
vermagic: 4.13.0-16-generic SMP mod_unload
signat: PKCS#7
signer:
sig_key:
sig_hashalgo: md4
root@ubuntu:/mnt# btrfs --version
btrfs-progs v4.12
root@ubuntu:/mnt# mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=66cc8e5d:b103a0ef:bde69e91:e73c4f2b name=0e36f164:0
ARRAY /dev/md/1 metadata=1.2 UUID=5dbf97f0:611ada7b:b80708a3:4c6fee18 name=0e36f164:1
ARRAY /dev/md/data-0 metadata=1.2 UUID=9c92d78d:3fa2e084:a32cf226:37d5c3c2 name=0e36f164:data-0
ARRAY /dev/md/data-2 metadata=1.2 UUID=61df4e1a:d5eb4e15:ced7a143:0fc0057a name=0e36f164:data-2
ARRAY /dev/md/data-1 metadata=1.2 UUID=c7cd34aa:e7c54ab0:ae005385:957067e5 name=0e36f164:data-1
ARRAY /dev/md/data-3 metadata=1.2 UUID=83b7f81a:0cb02a00:b953b83f:c4092a39 name=0e36f164:data-3
Thank you
Gabriele
jak0lantash
Nov 14, 2017 (Mentor)
I don't understand how you got a RAID array made of sde4+sdf4, and another array made of sdc4+sdd4+sde5+sdf5. I don't see how X-RAID logic would create that.
Grouping the devices by Array UUID, these are the correct groups.
- Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
/dev/sdc3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:30:26 2017
Events : 13612
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
sdf3 is out of sync (its Events counter is 13612, while the other three members are at 14159).
- Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
/dev/sdc4:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd4:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde5:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf5:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
- Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
/dev/sde4:
Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
Creation Time : Tue Mar 3 19:40:30 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 11:30:21 2017
Events : 3183
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf4:
Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
Creation Time : Tue Mar 3 19:40:30 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 11:30:21 2017
Events : 3183
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
- Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
/dev/sde6:
Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
Creation Time : Thu Sep 17 18:31:39 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 16:22:31 2017
Events : 1102
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf6:
Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
Creation Time : Thu Sep 17 18:31:39 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 16:22:31 2017
Events : 1102
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
If you want to double check what the 'Name' of each array is, you can re-run the command with the extra pattern:
for partition in /dev/sd[a-f][0-9]; do echo "$partition: "; mdadm --examine $partition | grep -E "Time|Events|Role|Array UUID|Array State|Raid|Name"; done
This is what I would do:
I guessed the names of the RAID arrays. I don't think it makes any difference, but you can check and amend as per the command above if you want.
# Backup superblocks for each partition
for partition in /dev/sd[a-f][0-9]; do echo "Backing up superblocks for $partition"; dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1; done
ls -lh /root/superblocks_*
# Start all RAID arrays
# sdf3 is out of sync
mdadm --assemble --verbose --run /dev/md127 /dev/sde3 /dev/sdd3 /dev/sdc3
mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
mdadm --assemble --verbose /dev/md124 /dev/sde6 /dev/sdf6
# Check the health
cat /proc/mdstat
btrfs device scan
btrfs filesystem show
btrfsck --readonly /dev/md127
mount -o ro /dev/md127 /mnt
btrfs filesystem usage /mnt
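If the read-only mount succeeds and the data looks sane, I would copy everything off to a separate disk before doing anything else with the array, e.g. (sketch only; /media/recovery is just a placeholder for wherever your destination disk is mounted):
# Copy the recovered data off the degraded array, preserving attributes
rsync -avh --progress /mnt/ /media/recovery/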
I want to stress that I haven't tested these commands in your situation. I'm giving you honest help, to the best of my knowledge, but I'm not responsible for anything that could go wrong.
Feel free to research the topic to confirm how you want to proceed.
If you notice any error or worrying message at any step, abort and post here so I can review.