Forum Discussion
Birillo71
Nov 03, 2017 · Aspirant
Recover files using Linux
Hi All, I had a problem with the NAS and I am doing everything possible to recover some of the files by connecting the disks to a Linux system. I used the following commands as suggested in another discus...
Birillo71
Nov 15, 2017 · Aspirant
Hi,
this is the output:
root@ubuntu:~# dmesg | tac | sed '/md: md124 stopped./q' | tac
[205216.925353] md: md124 stopped.
[205217.585713] md: kicking non-fresh sdf3 from array!
[205218.283791] md/raid:md124: not clean -- starting background reconstruction
[205218.283857] md/raid:md124: device sdd3 operational as raid disk 1
[205218.283861] md/raid:md124: device sde3 operational as raid disk 0
[205218.285514] md/raid:md124: not enough operational devices (2/4 failed)
[205218.288051] md/raid:md124: failed to run raid set.
[205218.288057] md: pers->run() failed ...
root@ubuntu:~#
Thank you
Gabriele
jak0lantash
Nov 15, 2017 · Mentor
Well, it's not picking up sdc3, but it doesn't say why...
In the output of mdadm --examine for sdc3, there is an attribute that isn't present on the other ones, but I don't know what it means.
/dev/sdc3:
    Bad Block Log : 512 entries available at offset 264 sectors - bad blocks present.
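If you want to see exactly which sectors that log records, mdadm can dump it. A minimal sketch, assuming your mdadm build is recent enough to support --examine-badblocks; this is a read-only query:
# Dump the bad block log stored in sdc3's md superblock.
# Read-only: this does not modify the disk.
mdadm --examine-badblocks /dev/sdc3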
Can you check with these commands whether sdf and sdc show any errors (if you post the output, remove the serial numbers):
smartctl -a /dev/sdc
smartctl -a /dev/sdf
At this stage, we're getting into desperate measures. There are a few things we can try next, but they're all intrusive and may be destructive. For example, if sdc shows errors, we can try recreating the array as "--assume-clean sde3 sdd3 missing sdf3". Because the event count and time differences on sdf3 are small compared to the other members, it might work. But there is always a possibility that the data ends up irreversibly corrupted.
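To see how far the members have drifted apart, here is a minimal sketch that pulls the event count and update time from each member's superblock (device names follow this thread's layout; verify yours first):
# Compare event counts and update times across the four RAID members.
# Read-only: mdadm --examine only reads the superblocks.
for p in /dev/sd[c-f]3; do
    echo "== $p =="
    mdadm --examine $p | grep -E 'Events|Update Time'
done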
- Birillo71 · Nov 16, 2017 · Aspirant
Hi jak0lantash,
this is the output of the commands above:
root@ubuntu:~# smartctl -a /dev/sdc
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.0-16-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar Green
Device Model: WDC WD10EAVS-00D7B0
Serial Number: WD-WCAU42030792
LU WWN Device Id: 5 0014ee 2ac99d809
Firmware Version: 01.01A01
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 2.5, 3.0 Gb/s
Local Time is: Thu Nov 16 07:31:00 2017 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (22200) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 159 154 021 Pre-fail Always - 7025
4 Start_Stop_Count 0x0032 094 094 000 Old_age Always - 6155
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 100 253 051 Old_age Always - 0
9 Power_On_Hours 0x0032 029 029 000 Old_age Always - 52539
10 Spin_Retry_Count 0x0032 100 100 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 051 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 197
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 6155
194 Temperature_Celsius 0x0022 107 102 000 Old_age Always - 43
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 051 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
root@ubuntu:~# smartctl -a /dev/sdf
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.0-16-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD30EFRX-68EUZN0
Serial Number: WD-WCC4N1FP4JZH
LU WWN Device Id: 5 0014ee 2b6c133f8
Firmware Version: 82.00A82
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu Nov 16 07:31:12 2017 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (40680) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 408) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x703d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 10
3 Spin_Up_Time 0x0027 182 181 021 Pre-fail Always - 5883
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 57
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12376
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 51
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 41
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1656
194 Temperature_Celsius 0x0022 113 108 000 Old_age Always - 37
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 5
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
root@ubuntu:~#
When you say there is a risk of it being destructive, is this a risk only to the latest changes on the disks, or to all the data?
If it is just the latest changes, that is not a problem.
Thank you again
Gabriele
- jak0lantash · Nov 17, 2017 · Mentor
The next steps depend on how badly you want the data back, and what risks you are willing to take.
Of course, you can always contact NETGEAR for Data Recovery; they offer this kind of service as a contract.
While there are some pending sectors on sdf, and sdf3 is clearly out of sync (though not by much), I don't understand why sdc3 doesn't get included in the RAID array, even though it does show bad blocks in the mdadm output.
You could try to back up the superblocks (if not already done), then recreate the RAID array. But this could result in irrevocable data loss.
(I'm not 100% sure what the best approach is at this stage.)
Based on the outputs you provided, I think there are two possibilities.
- Again, this is dangerous territory -
- Either try to recreate the RAID as "--assume-clean".
http://man7.org/linux/man-pages/man8/mdadm.8.html
- Or force the RAID array to assemble (see the sketch below).
https://raid.wiki.kernel.org/index.php/RAID_Recovery#Trying_to_assemble_using_--force
For both:
- Either with sde3 sdd3 sdf3
- Or sde3 sdd3 sdc3
In theory, as you use "--assume-clean" and only include three members, it shouldn't try to rewrite any block of data (though it will overwrite the superblocks), so it shouldn't cause permanent damage. But that's only a "should".
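For the forced-assembly option, a minimal sketch, assuming the device names and md124 from the outputs above; verify them with mdadm --examine before running anything:
# Stop the partially assembled array, then force-assemble it from
# three of the four members. --force tells mdadm to accept the small
# event-count mismatch on the stale member instead of rejecting it.
mdadm --stop /dev/md124
mdadm --assemble --force --verbose /dev/md124 /dev/sde3 /dev/sdd3 /dev/sdf3
If that brings the array up (check cat /proc/mdstat), mount it read-only before anything else.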
Parameters from the output you provided:
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9c92d78d:3fa2e084:a32cf226:37d5c3c2
           Name : 0e36f164:data-0
  Creation Time : Sun Feb 1 21:32:44 2015
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 615438961 (293.46 GiB 315.10 GB)
     Array Size : 923158272 (880.39 GiB 945.31 GB)
  Used Dev Size : 615438848 (293.46 GiB 315.10 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=113 sectors
          State : active
    Device UUID : a51d6699:b35d008c:d3e1f115:610f3d0f
    Update Time : Tue Oct 31 08:32:47 2017
       Checksum : 3518f348 - correct
         Events : 14159
         Layout : left-symmetric
     Chunk Size : 64K
    Device Role : Active device 1
    Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3: Device Role : Active device 2
/dev/sdd3: Device Role : Active device 1
/dev/sde3: Device Role : Active device 0
/dev/sdf3: Device Role : Active device 3
Here's what I would try:
# Backup superblocks for each partition - if not already done
for partition in /dev/sd[a-f][0-9]; do
    echo "Backing up superblocks for $partition"
    dd if=$partition of=/root/superblocks_$(basename $partition).mdsb bs=64k count=1
done
ls -lh /root/superblocks_*

# Backup of "mdadm --examine" for each partition - new
for partition in /dev/sd[a-f][0-9]; do
    echo "Backing up mdadm information for $partition"
    mdadm --examine $partition > /root/mdadm_-E_$(basename $partition).txt
done
ls -lh /root/mdadm_-E_*

# Start all healthy RAID arrays - if not already done
mdadm --assemble --verbose /dev/md126 /dev/sdc4 /dev/sdd4 /dev/sde5 /dev/sdf5
mdadm --assemble --verbose /dev/md125 /dev/sde4 /dev/sdf4
mdadm --assemble --verbose /dev/md127 /dev/sde6 /dev/sdf6

# Recreate the unhealthy RAID array - new
# --size is per device in KiB: Used Dev Size 615438848 sectors / 2 = 307719424K
mdadm --create --verbose --assume-clean --level=5 --raid-devices=4 --size=307719424K --chunk=64K --data-offset=131072K /dev/md124 /dev/sde3 /dev/sdd3 missing /dev/sdf3

# Check the integrity - do it again
cat /proc/mdstat
btrfs device scan
btrfs filesystem show
btrfsck --readonly /dev/md127
mount -o ro /dev/md127 /mnt
btrfs filesystem usage /mnt
- Birillo71 · Nov 18, 2017 · Aspirant
Hi jak0lantash,
I tried to assemble using --force and I am now able to access the RAID!!!
Apparently all my files are safe. I am now making a backup.
I really want to thank you for all the support provided in the previous weeks!!!
You really saved me, not only the files!
Gabriele
- jak0lantash · Nov 18, 2017 · Mentor
Great news!!! I'm glad I was able to help and that you got your data back!
First things first, transfer your data to new storage AND BACK IT UP! ;)
If you want to read about backups, you can find some information here (discard the bits about ReadyCLOUD): https://community.netgear.com/t5/Using-your-ReadyNAS/My-recommendation-Don-t-use-ReadyCloud-user-home-shares/m-p/1258463/highlight/true#M127461
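When you copy the data off the read-only mount, something like this works; a minimal sketch, where /media/backup is a hypothetical destination (any fresh disk or network share will do):
# Copy everything from the recovered array (mounted read-only at /mnt)
# to new storage, preserving permissions, hard links and timestamps.
rsync -aHv --progress /mnt/ /media/backup/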
As a note for other people who stumble across this thread: The steps described here are specific to this exact situation and based on the output of some commands. DO NOT ATTEMPT to just run those commands on your system unless you know what you're doing!