Recover files using Linux
Hi All,
I had a problem with my NAS and I am doing everything I can to recover some of the files by connecting the disks to a Linux system.
I used the following commands as suggested in another discussion:
# mdadm --assemble --scan
# cat /proc/mdstat
# mount -t btrfs -o ro /dev/md127 /mnt
# cat /proc/mdstat
gives me the following output:
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active (auto-read-only) raid5 sdd4[6] sdb5[5] sda5[4] sdc4[3]
1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md0 : active (auto-read-only) raid1 sdd1[4] sdc1[5] sda1[7]
4190208 blocks super 1.2 [4/3] [UUU_]
unused devices: <none>
While
mount -t btrfs -o ro /dev/md127 /mnt
gives:
mount: wrong fs type, bad option, bad superblock on /dev/md127,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
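A minimal way to act on that hint (a generic sketch, nothing beyond standard Linux tooling):
dmesg | tail -n 30    # look for btrfs/md messages logged by the failed mount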
I am stuck on this.
Does anyone have any ideas, please?
Can I somehow mount each single disk to access the files?
Thank you
Accepted Solutions
The next steps depend on how badly you want the data back, and what risks you want to take.
Of course, you can always contact NETGEAR for Data Recovery; they offer this kind of service as a contract.
While there are some pending sectors on sdf, and sdf3 is clearly out of sync (though not by much), I don't understand why sdc3 doesn't get included in the RAID array, although it does show bad blocks in the mdadm output.
You could try to back up the superblocks (if not already done), then recreate the RAID array. But this could result in irrevocable data loss.
(I'm not 100% sure of what is the best approach at this stage.)
Based on the outputs you provided, I think there are two possibilities.
- Again, this is dangerous territory -
- Either try to recreate the RAID as "--assume-clean".
http://man7.org/linux/man-pages/man8/mdadm.8.html
- Or force the RAID array to assemble.
https://raid.wiki.kernel.org/index.php/RAID_Recovery#Trying_to_assemble_using_--force
For both:
- Either with sde3 sdd3 sdf3
- Or sde3 sdd3 sdc3
In theory, as you use "--assume-clean" and only include three members, it shouldn't try to rewrite any blocks of data (but it will overwrite the superblocks). So it shouldn't cause permanent damage. But it's a "should".
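For reference, a minimal sketch of the forced-assembly route (the second option), assuming the same device names as below and an array node named /dev/md124; untested in this exact situation:
mdadm --stop /dev/md124    # stop any half-assembled array first
mdadm --assemble --force --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdf3    # force-assemble with sde3 sdd3 sdf3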
Parameters from the output you provided:
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Name : 0e36f164:data-0
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
Array Size : 923158272 (880.39 GiB 945.31 GB)
Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=113 sectors
State : active
Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
Update Time : Tue Oct 31 08:32:47 2017
Checksum : 3518f348 - correct
Events : 14159
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3: Device Role : Active device 2
/dev/sdd3: Device Role : Active device 1
/dev/sde3: Device Role : Active device 0
/dev/sdf3: Device Role : Active device 3
That's what I would try:
# Backup superblocks for each partition - if not already done
for partition in /dev/sd[a-f][0-9]; do echo "Backing up superblocks for $partition"; dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1; done
ls -lh /root/superblocks_*
# Backup of "mdadm --examine" for each partition - new
for partition in /dev/sd[a-f][0-9]; do echo "Backing up mdadm information for $partition"; mdadm --examine $partition > /root/mdadm_-E_$(basename $partition).txt; done
ls -lh /root/mdadm_-E_*
# Start all healthy RAID arrays - if not already done
mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
mdadm --assemble --verbose /dev/md127 /dev/sde6 /dev/sdf6
# Recreate the unhealthy RAID array - new
# (--size is per device, in KiB: Used Dev Size 615438848 sectors / 2 = 307719424K)
mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 --size=307719424K --chunk=64K --data-offset=131072K /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3
# Check the integrity - do it again
cat /proc/mdstat
btrfs device scan
btrfs filesystem show
btrfsck --readonly /dev/md127
mount -o ro /dev/md127 /mnt
btrfs filesystem usage /mnt
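Should the recreate make things worse, the superblock backups taken in the first step could, in principle, be written back; a hedged sketch for one member (repeat per partition, and only as a last resort):
dd if=/root/superblocks_sde3.mdsb of=/dev/sde3 bs=64k count=1    # restores the first 64 KiB, which contains the 1.2 superblock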
All Replies
Re: Recover files using Linux
Check your RN104 with:
$ cat /etc/fstab
and:
$ mount -l | grep md127
to see if outputs report btrfs.
Re: Recover files using Linux
My initial test was installing the 4 disks on an external Linux system.
In this case the suggested commands don't report anything related to /dev/md127.
When I install all the disks on the NAS I get the following:
root@nas-36-F1-64:/# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 2 drives and 1 rebuilding - not enough to start the array.
mdadm: No arrays found in config file or automatically
root@nas-36-F1-64:/# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid5 sdd4[6] sda5[5] sdb5[4] sdc4[3]
1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md126 : active raid1 sdb6[0] sda6[1]
800803520 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdb4[2] sda4[3]
175686272 blocks super 1.2 [2/2] [UU]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sdd1[4] sda1[6] sdc1[5] sdb1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
root@nas-36-F1-64:/# cat /etc/fstab
LABEL=0e36f164:data /data btrfs defaults 0 0
root@nas-36-F1-64:/# mount -l | grep md127
root@nas-36-F1-64:/# mount -l | grep md126
root@nas-36-F1-64:/# mount -l | grep md125
root@nas-36-F1-64:/#
Re: Recover files using Linux
Does anyone have any idea, please, how I can solve this problem?
Re: Recover files using Linux
Is BTRFS installed on your Linux system? That would seem to be the issue with the "wrong fs type".
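A quick way to check, assuming a modular kernel (a generic sketch):
modprobe btrfs       # load the btrfs module if it isn't already loaded
lsmod | grep btrfs   # confirm the module is present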
Re: Recover files using Linux
This is what I get:
# modinfo btrfs | grep -i version
srcversion: C5815B9AF2D7FBF8F18061F
vermagic: 4.9.35-v7+ SMP mod_unload modversions ARMv7 p2v8
Re: Recover files using Linux
Maybe try these commands before the mount:
# btrfs device scan
# btrfs fi show
Re: Recover files using Linux
I think @StephenB and @Sandshark told you to test those commands above on your Linux testing host not on your NAS...
As an example, if I look at the modinfo related to BTRFS (modinfo btrfs) on one of my Linux boxes, this is the current result (your system will be different):
# modinfo btrfs
filename: /lib/modules/4.13.10-200.fc26.x86_64/kernel/fs/btrfs/btrfs.ko.xz
license: GPL
alias: devname:btrfs-control
alias: char-major-10-234
alias: fs-btrfs
depends: raid6_pq,xor
intree: Y
name: btrfs
vermagic: 4.13.10-200.fc26.x86_64 SMP mod_unload
signat: PKCS#7
signer:
sig_key:
sig_hashalgo: md4
btrfs-progs can be queried via btrfs --version:
# btrfs --version
btrfs-progs v4.9.1
Aren't you trying to manage your NAS's disks on a Linux testing box?
Re: Recover files using Linux
@Kimera wrote:
I think @StephenB and @Sandshark told you to test those commands above on your Linux testing host not on your NAS...
Yes, though they should also work if the NAS is booted in tech support mode.
Re: Recover files using Linux
Hi All,
I used another Linux system and now this is what I get:
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md122 : active (auto-read-only) raid1 sdd4[2] sdc4[3]
175686272 blocks super 1.2 [2/2] [UU]
md123 : active (auto-read-only) raid1 sdd6[0] sdc6[1]
800803520 blocks super 1.2 [2/2] [UU]
md124 : inactive sdd1[7](S) sdc1[6](S) sdb1[5](S)
12570624 blocks super 1.2
md125 : inactive sdd3[5](S) sdc3[6](S) sdb3[4](S)
923158441 blocks super 1.2
md126 : inactive sdd5[4](S) sdc5[5](S) sdb4[3](S)
1992186741 blocks super 1.2
md127 : inactive sdd2[1](S) sdc2[0](S) sdb2[2](S)
1569792 blocks super 1.2
unused devices: <none>
root@ubuntu:~# btrfs device scan
Scanning for Btrfs filesystems
root@ubuntu:~# btrfs fi show
warning, device 1 is missing
warning, device 2 is missing
warning, device 1 is missing
bytenr mismatch, want=2404137402368, have=0
ERROR: cannot read chunk root
Label: '0e36f164:data' uuid: 96549eaf-20c0-497d-b93c-9ebd91951afc
Total devices 4 FS bytes used 3.27TiB
devid 3 size 167.55GiB used 167.55GiB path /dev/md122
devid 4 size 763.71GiB used 705.00GiB path /dev/md123
*** Some devices missing
Any idea of how to proceed, please?
Thank you in advance for your support
Re: Recover files using Linux
Apparently you're missing two disks (sda and sdb), aren't you? Initially you posted that you used 4 disks; now it seems you connected only 2 (sdc and sdd). Where are the other 2? Are you testing with a live Linux distribution on a totally empty box that supports 4 SATA disks (the disks of your NAS), or what?
Re: Recover files using Linux
I have 4 disks connected.
I am really confused.
I also tried the following command, in case it can help:
mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=66cc8e5d:b103a0ef:bde69e91:e73c4f2b name=0e36f164:0
ARRAY /dev/md/1 metadata=1.2 UUID=5dbf97f0:611ada7b:b80708a3:4c6fee18 name=0e36f164:1
ARRAY /dev/md/data-0 metadata=1.2 UUID=9c92d78d:3fa2e084:a32cf226:37d5c3c2 name=0e36f164:data-0
ARRAY /dev/md/data-2 metadata=1.2 UUID=61df4e1a:d5eb4e15:ced7a143:0fc0057a name=0e36f164:data-2
ARRAY /dev/md/data-1 metadata=1.2 UUID=c7cd34aa:e7c54ab0:ae005385:957067e5 name=0e36f164:data-1
ARRAY /dev/md/data-3 metadata=1.2 UUID=83b7f81a:0cb02a00:b953b83f:c4092a39 name=0e36f164:data-3
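For reference, mdadm can also assemble a specific array by its UUID, ignoring partitions that belong to other arrays; a sketch for the data-0 array above (untested here):
mdadm --assemble --verbose /dev/md/data-0 --uuid=9c92d78d:3fa2e084:a32cf226:37d5c3c2 /dev/sd[b-f][0-9]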
Re: Recover files using Linux
You can't mount the BTRFS volume (upper layer) because the RAID arrays (lower layer) are not assembled properly.
Please post the output of this command:
for partition in /dev/sd[a-f][0-9]; do echo "$partition: "; mdadm --examine $partition | grep -E "Time|Events|Role|Array UUID|Array State|Raid"; done
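As a side note, a quick way to visualize those two layers on any Linux box (a generic sketch): partitions should show as "part", the md arrays on top of them as raid5/raid1, and btrfs as the filesystem on the arrays.
lsblk -o NAME,TYPE,FSTYPE,SIZE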
Re: Recover files using Linux
Hi,
Thanks a lot for your help!
This is the output:
root@ubuntu:/mnt# for partition in /dev/sd[a-f][0-9]; do echo "$partition: "; mdadm --examine $partition | grep -E "Time|Events|Role|Array UUID|Array State|Raid"; done
/dev/sda1:
mdadm: No md superblock detected on /dev/sda1.
/dev/sdc1:
Array UUID : 66cc8e5d:b103a0ef:bde69e91:e73c4f2b
Creation Time : Sun Feb 1 21:32:41 2015
Raid Level : raid1
Raid Devices : 4
Update Time : Sun Nov 12 17:11:26 2017
Events : 92223
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc2:
Array UUID : 5dbf97f0:611ada7b:b80708a3:4c6fee18
Creation Time : Wed Nov 1 15:52:00 2017
Raid Level : raid10
Raid Devices : 4
Update Time : Sun Nov 12 14:09:02 2017
Events : 19
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc4:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Array UUID : 66cc8e5d:b103a0ef:bde69e91:e73c4f2b
Creation Time : Sun Feb 1 21:32:41 2015
Raid Level : raid1
Raid Devices : 4
Update Time : Sun Nov 12 17:11:26 2017
Events : 92223
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd2:
Array UUID : 5dbf97f0:611ada7b:b80708a3:4c6fee18
Creation Time : Wed Nov 1 15:52:00 2017
Raid Level : raid10
Raid Devices : 4
Update Time : Sun Nov 12 14:09:02 2017
Events : 19
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd4:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
Array UUID : 66cc8e5d:b103a0ef:bde69e91:e73c4f2b
Creation Time : Sun Feb 1 21:32:41 2015
Raid Level : raid1
Raid Devices : 4
Update Time : Sun Nov 12 17:11:26 2017
Events : 92223
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde2:
Array UUID : 5dbf97f0:611ada7b:b80708a3:4c6fee18
Creation Time : Wed Nov 1 15:52:00 2017
Raid Level : raid10
Raid Devices : 4
Update Time : Sun Nov 12 14:09:02 2017
Events : 19
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde4:
Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
Creation Time : Tue Mar 3 19:40:30 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 11:30:21 2017
Events : 3183
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde5:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde6:
Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
Creation Time : Thu Sep 17 18:31:39 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 16:22:31 2017
Events : 1102
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
Array UUID : 66cc8e5d:b103a0ef:bde69e91:e73c4f2b
Creation Time : Sun Feb 1 21:32:41 2015
Raid Level : raid1
Raid Devices : 4
Update Time : Sun Nov 12 17:11:26 2017
Events : 92223
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf2:
Array UUID : 5dbf97f0:611ada7b:b80708a3:4c6fee18
Creation Time : Wed Nov 1 15:52:00 2017
Raid Level : raid10
Raid Devices : 4
Update Time : Sun Nov 12 14:09:02 2017
Events : 19
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:30:26 2017
Events : 13612
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf4:
Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
Creation Time : Tue Mar 3 19:40:30 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 11:30:21 2017
Events : 3183
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf5:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf6:
Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
Creation Time : Thu Sep 17 18:31:39 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 16:22:31 2017
Events : 1102
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@ubuntu:/mnt#
I hope the above will help to debug the problem.
Thank you very much
Gabriele
Re: Recover files using Linux
Hi again,
Since I moved to a different server (more up to date than the previous one),
I also ran the following commands again, in case this can help:
root@ubuntu:/mnt# modinfo btrfs | grep -i version
srcversion: E0B44ABE728FFF50C821A52
root@ubuntu:/mnt# btrfs device scan
Scanning for Btrfs filesystems
root@ubuntu:/mnt# btrfs fi show
warning, device 1 is missing
warning, device 1 is missing
Label: '0e36f164:data' uuid: 96549eaf-20c0-497d-b93c-9ebd91951afc
Total devices 4 FS bytes used 3.27TiB
devid 2 size 1.85TiB used 1.61TiB path /dev/md126
devid 3 size 167.55GiB used 167.55GiB path /dev/md122
devid 4 size 763.71GiB used 705.00GiB path /dev/md123
*** Some devices missing
root@ubuntu:/mnt# modinfo btrfs
filename: /lib/modules/4.13.0-16-generic/kernel/fs/btrfs/btrfs.ko
license: GPL
alias: devname:btrfs-control
alias: char-major-10-234
alias: fs-btrfs
srcversion: E0B44ABE728FFF50C821A52
depends: raid6_pq,xor
intree: Y
name: btrfs
vermagic: 4.13.0-16-generic SMP mod_unload
signat: PKCS#7
signer:
sig_key:
sig_hashalgo: md4
root@ubuntu:/mnt# btrfs --version
btrfs-progs v4.12
root@ubuntu:/mnt# mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=66cc8e5d:b103a0ef:bde69e91:e73c4f2b name=0e36f164:0
ARRAY /dev/md/1 metadata=1.2 UUID=5dbf97f0:611ada7b:b80708a3:4c6fee18 name=0e36f164:1
ARRAY /dev/md/data-0 metadata=1.2 UUID=9c92d78d:3fa2e084:a32cf226:37d5c3c2 name=0e36f164:data-0
ARRAY /dev/md/data-2 metadata=1.2 UUID=61df4e1a:d5eb4e15:ced7a143:0fc0057a name=0e36f164:data-2
ARRAY /dev/md/data-1 metadata=1.2 UUID=c7cd34aa:e7c54ab0:ae005385:957067e5 name=0e36f164:data-1
ARRAY /dev/md/data-3 metadata=1.2 UUID=83b7f81a:0cb02a00:b953b83f:c4092a39 name=0e36f164:data-3
Thank you
Gabriele
Re: Recover files using Linux
I don't understand how you got a RAID array made of sde4+sdf4, and another array made of sdc4+sdd4+sde5+sdf5. I don't see how X-RAID logic would create that.
Grouping the devices by Array UUID, these are the correct groups:
- Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
/dev/sdc3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:32:47 2017
Events : 14159
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Tue Oct 31 08:30:26 2017
Events : 13612
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
sdf3 is out of sync.
- Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
/dev/sdc4:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd4:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde5:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf5:
Array UUID : c7cd34aa:e7c54ab0:ae005385:957067e5
Creation Time : Sun Feb 1 13:35:17 2015
Raid Level : raid5
Raid Devices : 4
Update Time : Fri Nov 3 18:35:57 2017
Events : 13894
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
- Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
/dev/sde4:
Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
Creation Time : Tue Mar 3 19:40:30 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 11:30:21 2017
Events : 3183
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf4:
Array UUID : 61df4e1a:d5eb4e15:ced7a143:0fc0057a
Creation Time : Tue Mar 3 19:40:30 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 11:30:21 2017
Events : 3183
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
- Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
/dev/sde6:
Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
Creation Time : Thu Sep 17 18:31:39 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 16:22:31 2017
Events : 1102
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf6:
Array UUID : 83b7f81a:0cb02a00:b953b83f:c4092a39
Creation Time : Thu Sep 17 18:31:39 2015
Raid Level : raid1
Raid Devices : 2
Update Time : Tue Oct 31 16:22:31 2017
Events : 1102
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
If you want to double check what the 'Name' of each array is, you can re-run the command with the extra pattern:
for partition in /dev/sd[a-f][0-9]; do echo "$partition: "; mdadm --examine $partition | grep -E "Time|Events|Role|Array UUID|Array State|Raid|Name"; done
This is what I would do:
I guessed the names of the RAID arrays. I don't think it makes any difference, but you can check and amend as per the command above if you want.
# Backup superblocks for each partition
for partition in /dev/sd[a-f][0-9]; do echo "Backing up superblocks for $partition"; dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1; done
ls -lh /root/superblocks_*
# Start all RAID arrays
# sdf3 is out of sync
mdadm --assemble --verbose --run /dev/md127 /dev/sde3 /dev/sdd3 /dev/sdc3
mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
mdadm --assemble --verbose /dev/md124 /dev/sde6 /dev/sdf6
# Check the health
cat /proc/mdstat
btrfs device scan
btrfs filesystem show
btrfsck --readonly /dev/md127
mount -o ro /dev/md127 /mnt
btrfs filesystem usage /mnt
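If the first assemble fails and leaves an array half-assembled (inactive), it may need to be stopped before retrying; a standard mdadm step, shown here for the device name used above:
mdadm --stop /dev/md127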
I want to stress that I haven't tested these commands in your situation. I'm giving you honest help, to the best of my knowledge, but I'm not responsible for anything that could go wrong.
Feel free to research the topic to confirm how you want to proceed.
If you notice any error or worrying message at any step, abort and post here so I can review.
Re: Recover files using Linux
Hi Jak0lantash,
Unfortunately, not all the steps you recommended completed successfully.
Please have a look below:
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sde6[0] sdf6[1]
800803520 blocks super 1.2 [2/2] [UU]
md125 : active raid1 sde4[2] sdf4[3]
175686272 blocks super 1.2 [2/2] [UU]
md126 : active raid5 sdc4[6] sdf5[5] sde5[4] sdd4[3]
1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md122 : active raid1 sdc1[4] sdf1[6] sdd1[5] sde1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
md123 : active raid10 sdf2[0] sdc2[3] sdd2[2] sde2[1]
1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
root@ubuntu:~# mdadm --assemble --verbose --run /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdc3
mdadm: looking for devices for /dev/md124
mdadm: /dev/sde3 is identified as a member of /dev/md124, slot 0.
mdadm: /dev/sdd3 is identified as a member of /dev/md124, slot 1.
mdadm: /dev/sdc3 is identified as a member of /dev/md124, slot 2.
mdadm: added /dev/sdd3 to /dev/md124 as 1
mdadm: added /dev/sdc3 to /dev/md124 as 2
mdadm: no uptodate device for slot 3 of /dev/md124
mdadm: added /dev/sde3 to /dev/md124 as 0
mdadm: failed to RUN_ARRAY /dev/md124: Input/output error
mdadm: Not enough devices to start the array.
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : inactive sdd3[4] sdc3[7] sde3[5]
923158441 blocks super 1.2
md127 : active raid1 sde6[0] sdf6[1]
800803520 blocks super 1.2 [2/2] [UU]
md125 : active raid1 sde4[2] sdf4[3]
175686272 blocks super 1.2 [2/2] [UU]
md126 : active raid5 sdc4[6] sdf5[5] sde5[4] sdd4[3]
1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md122 : active raid1 sdc1[4] sdf1[6] sdd1[5] sde1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
md123 : active raid10 sdf2[0] sdc2[3] sdd2[2] sde2[1]
1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
root@ubuntu:~# btrfs device scan
Scanning for Btrfs filesystems
root@ubuntu:~# btrfs filesystem show
warning, device 1 is missing
warning, device 1 is missing
Label: '0e36f164:data' uuid: 96549eaf-20c0-497d-b93c-9ebd91951afc
Total devices 4 FS bytes used 3.27TiB
devid 2 size 1.85TiB used 1.61TiB path /dev/md126
devid 3 size 167.55GiB used 167.55GiB path /dev/md125
devid 4 size 763.71GiB used 705.00GiB path /dev/md127
*** Some devices missing
root@ubuntu:~# btrfsck --readonly /dev/md127
warning, device 1 is missing
warning, device 1 is missing
Checking filesystem on /dev/md127
UUID: 96549eaf-20c0-497d-b93c-9ebd91951afc
checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
checking quota groups
Counts for qgroup id: 0/257 are different
our: referenced 402350080 referenced compressed 402350080
disk: referenced 415600640 referenced compressed 415600640
diff: referenced -13250560 referenced compressed -13250560
our: exclusive 402350080 exclusive compressed 402350080
disk: exclusive 415600640 exclusive compressed 415600640
diff: exclusive -13250560 exclusive compressed -13250560
Counts for qgroup id: 0/259 are different
our: referenced 83044048896 referenced compressed 83044048896
disk: referenced 83037233152 referenced compressed 83037233152
diff: referenced 6815744 referenced compressed 6815744
our: exclusive 98304 exclusive compressed 98304
disk: exclusive 83063369728 exclusive compressed 83063369728
diff: exclusive -83063271424 exclusive compressed -83063271424
Counts for qgroup id: 0/260 are different
our: referenced 1244251586560 referenced compressed 1244251586560
disk: referenced 1244256391168 referenced compressed 1244256391168
diff: referenced -4804608 referenced compressed -4804608
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 1244277325824 exclusive compressed 1244277325824
diff: exclusive -1244277293056 exclusive compressed -1244277293056
Counts for qgroup id: 0/261 are different
our: referenced 32129851392 referenced compressed 32129851392
disk: referenced 32129884160 referenced compressed 32129884160
diff: referenced -32768 referenced compressed -32768
our: exclusive 98304 exclusive compressed 98304
disk: exclusive 32150831104 exclusive compressed 32150831104
diff: exclusive -32150732800 exclusive compressed -32150732800
Counts for qgroup id: 0/262 are different
our: referenced 579981201408 referenced compressed 579981201408
disk: referenced 579766308864 referenced compressed 579766308864
diff: referenced 214892544 referenced compressed 214892544
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 579775311872 exclusive compressed 579775311872
diff: exclusive -579775279104 exclusive compressed -579775279104
Counts for qgroup id: 0/263 are different
our: referenced 827849297920 referenced compressed 827849297920
disk: referenced 827485573120 referenced compressed 827485573120
diff: referenced 363724800 referenced compressed 363724800
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 829908611072 exclusive compressed 829908611072
diff: exclusive -829908578304 exclusive compressed -829908578304
Counts for qgroup id: 0/264 are different
our: referenced 1482752 referenced compressed 1482752
disk: referenced 1286144 referenced compressed 1286144
diff: referenced 196608 referenced compressed 196608
our: exclusive 1482752 exclusive compressed 1482752
disk: exclusive 1286144 exclusive compressed 1286144
diff: exclusive 196608 exclusive compressed 196608
Counts for qgroup id: 0/266 are different
our: referenced 10128928768 referenced compressed 10128928768
disk: referenced 10165301248 referenced compressed 10165301248
diff: referenced -36372480 referenced compressed -36372480
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 10186215424 exclusive compressed 10186215424
diff: exclusive -10186182656 exclusive compressed -10186182656
Counts for qgroup id: 0/268 are different
our: referenced 204127109120 referenced compressed 204127109120
disk: referenced 204126322688 referenced compressed 204126322688
diff: referenced 786432 referenced compressed 786432
our: exclusive 204127109120 exclusive compressed 204127109120
disk: exclusive 204126322688 exclusive compressed 204126322688
diff: exclusive 786432 exclusive compressed 786432
Counts for qgroup id: 0/335 are different
our: referenced 194561572864 referenced compressed 194561572864
disk: referenced 194667118592 referenced compressed 194667118592
diff: referenced -105545728 referenced compressed -105545728
our: exclusive 22695936 exclusive compressed 22695936
disk: exclusive 201282125824 exclusive compressed 201282125824
diff: exclusive -201259429888 exclusive compressed -201259429888
Counts for qgroup id: 0/635 are different
our: referenced 40960 referenced compressed 40960
disk: referenced 40960 referenced compressed 40960
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 40960 exclusive compressed 40960
diff: exclusive -8192 exclusive compressed -8192
Counts for qgroup id: 0/2754 are different
our: referenced 10559488 referenced compressed 10559488
disk: referenced 10526720 referenced compressed 10526720
diff: referenced 32768 referenced compressed 32768
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 40960 exclusive compressed 40960
diff: exclusive -8192 exclusive compressed -8192
Counts for qgroup id: 0/2755 are different
our: referenced 98304 referenced compressed 98304
disk: referenced 65536 referenced compressed 65536
diff: referenced 32768 referenced compressed 32768
our: exclusive 98304 exclusive compressed 98304
disk: exclusive 65536 exclusive compressed 65536
diff: exclusive 32768 exclusive compressed 32768
Counts for qgroup id: 0/2858 are different
our: referenced 32360566784 referenced compressed 32360566784
disk: referenced 32359649280 referenced compressed 32359649280
diff: referenced 917504 referenced compressed 917504
our: exclusive 98304 exclusive compressed 98304
disk: exclusive 32359645184 exclusive compressed 32359645184
diff: exclusive -32359546880 exclusive compressed -32359546880
Counts for qgroup id: 0/2859 are different
our: referenced 32768 referenced compressed 32768
disk: referenced 0 referenced compressed 0
diff: referenced 32768 referenced compressed 32768
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 0 exclusive compressed 0
diff: exclusive 32768 exclusive compressed 32768
Counts for qgroup id: 0/3842 are different
our: referenced 5056966656 referenced compressed 5056966656
disk: referenced 5056737280 referenced compressed 5056737280
diff: referenced 229376 referenced compressed 229376
our: exclusive 5056966656 exclusive compressed 5056966656
disk: exclusive 5056737280 exclusive compressed 5056737280
diff: exclusive 229376 exclusive compressed 229376
Counts for qgroup id: 0/3956 are different
our: referenced 32768 referenced compressed 32768
disk: referenced 0 referenced compressed 0
diff: referenced 32768 referenced compressed 32768
our: exclusive 32768 exclusive compressed 32768
disk: exclusive 0 exclusive compressed 0
diff: exclusive 32768 exclusive compressed 32768
found 3591169544192 bytes used, no error found
total csum bytes: 2780591748
total tree bytes: 5103910912
total fs tree bytes: 1800110080
total extent tree bytes: 169246720
btree space waste bytes: 599227477
file data blocks allocated: 4113154846720
referenced 4001220476928
root@ubuntu:~# mount -o ro /dev/md127 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.
root@ubuntu:~# btrfs filesystem usage /mnt
ERROR: not a btrfs filesystem: /mnt
root@ubuntu:~# btrfsck --readonly /dev/md124
ERROR: superblock bytenr 65536 is larger than device size 0
ERROR: cannot open file system
root@ubuntu:~#
Thanks a lot for all the effort you are putting into helping me with this!
Gabriele
Re: Recover files using Linux
If one of the arrays doesn't start, there's no need to continue with the btrfs steps; they will fail.
md124 failed to start. This is the array with an out-of-sync member.
root@ubuntu:~# mdadm --assemble --verbose --run /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdc3
mdadm: looking for devices for /dev/md124
mdadm: /dev/sde3 is identified as a member of /dev/md124, slot 0.
mdadm: /dev/sdd3 is identified as a member of /dev/md124, slot 1.
mdadm: /dev/sdc3 is identified as a member of /dev/md124, slot 2.
mdadm: added /dev/sdd3 to /dev/md124 as 1
mdadm: added /dev/sdc3 to /dev/md124 as 2
mdadm: no uptodate device for slot 3 of /dev/md124
mdadm: added /dev/sde3 to /dev/md124 as 0
mdadm: failed to RUN_ARRAY /dev/md124: Input/output error
mdadm: Not enough devices to start the array.
Does it work if you assemble the RAID including the out-of-sync member (as out of sync)?
mdadm --assemble --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdc3 /dev/sdf3
If not, please give the output of this:
for partition in /dev/sde3 /dev/sdd3 /dev/sdc3 /dev/sdf3; do echo "$partition: "; mdadm --examine $partition; done
Re: Recover files using Linux
Hi,
this is what I get:
root@ubuntu:~# mdadm --assemble --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdc3 /dev/sdf3
mdadm: looking for devices for /dev/md124
mdadm: /dev/sde3 is identified as a member of /dev/md124, slot 0.
mdadm: /dev/sdd3 is identified as a member of /dev/md124, slot 1.
mdadm: /dev/sdc3 is identified as a member of /dev/md124, slot 2.
mdadm: /dev/sdf3 is identified as a member of /dev/md124, slot 3.
mdadm: added /dev/sdd3 to /dev/md124 as 1
mdadm: added /dev/sdc3 to /dev/md124 as 2
mdadm: added /dev/sdf3 to /dev/md124 as 3 (possibly out of date)
mdadm: added /dev/sde3 to /dev/md124 as 0
mdadm: /dev/md124 assembled from 2 drives and 1 rebuilding - not enough to start the array.
root@ubuntu:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md124 : inactive sdd3[4] sdc3[7] sde3[5]
923158441 blocks super 1.2
md127 : active raid1 sde6[0] sdf6[1]
800803520 blocks super 1.2 [2/2] [UU]
md125 : active raid1 sde4[2] sdf4[3]
175686272 blocks super 1.2 [2/2] [UU]
md126 : active raid5 sdc4[6] sdf5[5] sde5[4] sdd4[3]
1992186528 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md122 : active raid1 sdc1[4] sdf1[6] sdd1[5] sde1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
md123 : active raid10 sdf2[0] sdc2[3] sdd2[2] sde2[1]
1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
root@ubuntu:~# for partition in /dev/sde3 /dev/sdd3 /dev/sdc3 /dev/sdf3; do echo "$partition: "; mdadm --examine $partition; done
/dev/sde3:
/dev/sde3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Name : 0e36f164:data-0
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
Array Size : 923158272 (880.39 GiB 945.31 GB)
Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=113 sectors
State : clean
Device UUID : 21f265bc:50b038ed:bb5eb494:a72e710c
Update Time : Tue Oct 31 08:32:47 2017
Checksum : 3547b691 - correct
Events : 14159
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Name : 0e36f164:data-0
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
Array Size : 923158272 (880.39 GiB 945.31 GB)
Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=113 sectors
State : active
Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
Update Time : Tue Oct 31 08:32:47 2017
Checksum : 3518f348 - correct
Events : 14159
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0xa
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Name : 0e36f164:data-0
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
Array Size : 923158272 (880.39 GiB 945.31 GB)
Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Recovery Offset : 72641176 sectors
Unused Space : before=261864 sectors, after=113 sectors
State : active
Device UUID : d8e3f421:779c63e1:5be53254:abe5b67e
Update Time : Tue Oct 31 08:32:47 2017
Bad Block Log : 512 entries available at offset 264 sectors - bad blocks present.
Checksum : c5223dbe - correct
Events : 14159
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
/dev/sdf3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
Name : 0e36f164:data-0
Creation Time : Sun Feb 1 21:32:44 2015
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
Array Size : 923158272 (880.39 GiB 945.31 GB)
Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=113 sectors
State : active
Device UUID : 8cc93aeb:3055c6d5:fbcafd76:cc170f57
Update Time : Tue Oct 31 08:30:26 2017
Checksum : 79908597 - correct
Events : 13612
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@ubuntu:~#
Thank you!!!!
Gabriele
Re: Recover files using Linux
mdadm: added /dev/sdd3 to /dev/md124 as 1
mdadm: added /dev/sdc3 to /dev/md124 as 2
mdadm: added /dev/sdf3 to /dev/md124 as 3 (possibly out of date)
mdadm: added /dev/sde3 to /dev/md124 as 0
mdadm: /dev/md124 assembled from 2 drives and 1 rebuilding - not enough to start the array.
Raid Devices : 4
There are 4 RAID devices. You started the array with 4, of which 1 is known to be out of sync. Yet it says it assembled from only 2 drives.
/dev/sde3: Events : 14159
/dev/sdd3: Events : 14159
/dev/sdc3: Events : 14159
They're in sync... So it should work. But it doesn't.
Maybe there would be some more clues in dmesg.
Can you please give the output of this (the dmesg output from the last attempt at assembling the array to the end):
dmesg | tac | sed '/md: md124 stopped./q' | tac
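On a systemd-based distribution, an equivalent view of the kernel log (a hedged alternative, same idea):
journalctl -k | grep -A 10 "md124 stopped"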
Re: Recover files using Linux
Hi,
this is the output:
root@ubuntu:~# dmesg | tac | sed '/md: md124 stopped./q' | tac
[205216.925353] md: md124 stopped.
[205217.585713] md: kicking non-fresh sdf3 from array!
[205218.283791] md/raid:md124: not clean -- starting background reconstruction
[205218.283857] md/raid:md124: device sdd3 operational as raid disk 1
[205218.283861] md/raid:md124: device sde3 operational as raid disk 0
[205218.285514] md/raid:md124: not enough operational devices (2/4 failed)
[205218.288051] md/raid:md124: failed to run raid set.
[205218.288057] md: pers->run() failed ...
root@ubuntu:~#
Thank you
Gabriele
Re: Recover files using Linux
Well, it's not picking up sdc3, but it doesn't say why...
In the output of the mdadm --examine for sdc3, there is an attribute not present on the other ones, but I don't know what it means.
/dev/sdc3:
Bad Block Log : 512 entries available at offset 264 sectors - bad blocks present.
Can you check with this command if sdf and sdc show any errors (if you post the output, remove the serial numbers):
smartctl -a /dev/sdc
smartctl -a /dev/sdf
At this stage, we're getting to desperate measures. There are a few things we can try next, but they're all intrusive and may be destructive. For example, if sdc shows errors, we can try recreating the array as "--assume-clean sde3 sdd3 missing sdf3". Because the event count and time differences on sdf3 are small compared to the other members, it might work. But there is always a possibility that the data ends up irreversibly corrupted.
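Another possibility, not tried in this thread and offered only as a hedged sketch: mdadm can be asked to discard a member's non-empty bad-block log at assembly time, which could matter here given the "bad blocks present" flag on sdc3:
mdadm --assemble --verbose --update=force-no-bbl /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdc3    # drops the bad-block log; use with caution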
Re: Recover files using Linux
Hi Jak0lantash,
this is the output of the commands above:
root@ubuntu:~# smartctl -a /dev/sdc
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.0-16-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar Green
Device Model: WDC WD10EAVS-00D7B0
Serial Number: WD-WCAU42030792
LU WWN Device Id: 5 0014ee 2ac99d809
Firmware Version: 01.01A01
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 2.5, 3.0 Gb/s
Local Time is: Thu Nov 16 07:31:00 2017 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (22200) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 159 154 021 Pre-fail Always - 7025
4 Start_Stop_Count 0x0032 094 094 000 Old_age Always - 6155
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 100 253 051 Old_age Always - 0
9 Power_On_Hours 0x0032 029 029 000 Old_age Always - 52539
10 Spin_Retry_Count 0x0032 100 100 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 051 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 197
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 6155
194 Temperature_Celsius 0x0022 107 102 000 Old_age Always - 43
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
root@ubuntu:~# smartctl -a /dev/sdf
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.0-16-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD30EFRX-68EUZN0
Serial Number: WD-WCC4N1FP4JZH
LU WWN Device Id: 5 0014ee 2b6c133f8
Firmware Version: 82.00A82
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu Nov 16 07:31:12 2017 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (40680) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 408) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x703d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 10
3 Spin_Up_Time 0x0027 182 181 021 Pre-fail Always - 5883
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 57
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12376
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 51
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 41
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1656
194 Temperature_Celsius 0x0022 113 108 000 Old_age Always - 37
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 5
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
root@ubuntu:~#
When you say there's a risk of it being destructive, is this a risk for the latest changes to the disks or for all the data?
If it is just for the latest changes, that is not a problem.
Thank you again
Gabriele
Re: Recover files using Linux
Hi jak0lantash,
I tried to assemble using --force and I am now able to access the RAID!!!
Apparently all my files are safe. I am now making a backup.
I really want to thank you for all the support provided in the previous weeks!!!!!!!!
You really saved me, not only the files!
Gabriele