RN104 data recovery with single RAID1 volume
Hello all,
I have a ReadyNAS 104 with firmware version 6.5.0.
It has 2 x WD Red NAS drives WD30EFRX 3TB.
One of the drives started showing degraded errors, and by the time I tried to copy my data it had failed completely. Now I am left with one (almost) good disk 1 and another certainly bad disk 2. The quick SMART scans of both drives with WD Dashboard show them as healthy, while the extended scans fail and do not complete. The CrystalDiskInfo scan of disk 1 is attached; based on the info I found in a number of blogs, it seems recoverable.
Before pulling the disks, the NAS would get stuck on the "Powering Off" message and a flashing power button for days, and I had to pull the power cable a couple of times. It seems that one of the disks is now also out of sync. So, the first thing I did was to clone the "good" disk 1 with GParted and a live CD onto a new WD Red 4TB drive. However, the cloned data is not accessible, and the 4TB drive cannot be mounted (it was formatted NTFS before the clone - likely a newbie error on my part).
Before attempting to recover data from disk 1 I want to make sure the clone/image on the 4TB disk is good. I have a Spiral Linux with btrfs support installed and intend to follow this guide to try to rescue my data again to the 4TB disk: https://forum.manjaro.org/t/how-to-rescue-data-from-a-damaged-btrfs-volume/79414. I am also nervous that I may destroy the only data backup in the process.
I would greatly appreciate any guidance and help to accomplish data rescue safely!!
Re: RN104 data recovery with single RAID1 volume
@CircuitBreaker wrote:
However, the cloned data is not accessible, and the 4TB drive cannot be mounted (it was formatted NTFS before the clone...)
Not sure what you did here. You needed to do sector-by-sector clone, so the format shouldn't have mattered.
Did you put the 4 TB drive into the NAS by itself, and then power it up? If not, you should try that.
If that fails, then I'd boot up the system with the original "good" disk in its normal slot, with the other slot empty. That should boot with a degraded status. Then copy off as much data as you can to other storage (perhaps using the 4 TB drive for that).
@CircuitBreaker wrote:
I have a Spiral Linux with btrfs support installed and intend to follow this guide to try to rescue my data again to the 4TB disk: https://forum.manjaro.org/t/how-to-rescue-data-from-a-damaged-btrfs-volume/79414. I am also
Why are you thinking you have a damaged volume? It's clear you had a failed disk, and that can create the issues you describe. So the RAID array is degraded. But that doesn't necessarily mean that the volume is damaged.
Re: RN104 data recovery with single RAID1 volume
@StephenB thank you so much for the fast reply!!
I started the NAS with the cloned 4TB disk in place of the original disk 1 and slot 2 empty. I got the message "ERR: Used disks. Check RAIDar". My (newbie) guess is that the original clone was not successful and I may have an image created from GParted, so I was thinking of creating a real clone on the 4TB with CloneZilla or similar. Is this advisable before I continue with recovery?
I also started the NAS with only the original disk 1 in place. It reached 93% after boot and showed the message "Fail to startup". Not sure what to do next here...
Since the NAS is not booting with disk 1, I would rather mount disk 1 on the Linux box and copy the data, as this is my plan B. I could not figure out how to mount it successfully after a lot of searching and trying. I would appreciate help if the data recovery from the degraded NAS fails.
Thank you again!!
Re: RN104 data recovery with single RAID1 volume
@CircuitBreaker wrote:
My (newbie) guess is that the original clone was not successful
Mine too. CloneZilla should work (as will other tools that do a sector-by-sector clone).
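(For reference, a hedged sketch of a sector-by-sector clone with GNU ddrescue from a live Linux environment; /dev/sdX and /dev/sdY are placeholders for the source and target disks - verify them with lsblk before running anything:)
sudo ddrescue -f -n /dev/sdX /dev/sdY clone.log    # first pass: copy everything, skip the bad areas
sudo ddrescue -f -r3 /dev/sdX /dev/sdY clone.log   # second pass: retry the bad areas up to 3 times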
@CircuitBreaker wrote:
Since the NAS is not booting with disk 1, I would rather mount disk 1 on the Linux box and copy the data, as this is my plan B. I could not figure out how to mount it successfully after a lot of searching and trying.
You need to have both mdadm and btrfs installed on the Linux box.
The first step is to assemble the mdadm array. If you always used 3 TB disks, then the volume is on disk partition 3. Partition 1 is the OS; Partition 2 is swap.
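(A hedged way to double-check which device name the disk got and see its partitions before touching mdadm:)
lsblk -o NAME,SIZE,TYPE,FSTYPE    # on a 3 TB ReadyNAS disk, the ~2.7 TiB partition 3 is the data partition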
The array is assembled using
mdadm --assemble /dev/md127 /dev/sdX3
where sdX is the disk device.
Then you mount md127 (I suggest read-only):
mount -o ro /dev/md127 /data
You can of course use any mount point; it doesn't need to be /data.
If you see more than 3 partitions, then the volume was expanded. In that case you need some extra steps - let us know if that is your situation.
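(The extra steps are not spelled out in this thread; a hedged sketch, assuming one additional data partition sdX4: each data partition is its own mdadm array, and the btrfs volume spans them, so assemble all of them before mounting.)
sudo mdadm --assemble /dev/md127 /dev/sdX3
sudo mdadm --assemble /dev/md126 /dev/sdX4
sudo btrfs device scan               # let btrfs discover all members of the multi-device volume
sudo mount -o ro /dev/md127 /data    # mounting any member device brings in the whole volume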
Re: RN104 data recovery with single RAID1 volume
I cloned the "good" disk 1 to the 4TB new disk. However, when plugged into the NAS it still shows "ERR: Used Disks. Check RAIDar".
I then tried to assemble and mount disk 1 in the Linux box and ran into errors:
$cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdc3[1](S)
2925414840 blocks super 1.2
md0 : active (auto-read-only) raid1 sdc1[1]
4190208 blocks super 1.2 [2/1] [_U]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
$ sudo mdadm --assemble /dev/md127 /dev/sdc3
mdadm: /dev/sdc3 is busy - skipping
$ sudo mdadm --stop md127
mdadm: stopped md127
$ sudo mdadm --assemble /dev/md127 /dev/sdc3
mdadm: /dev/md127 assembled from 0 drives and 1 spare - not enough to start the array.
$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdc3[1](S)
2925414840 blocks super 1.2
md0 : active (auto-read-only) raid1 sdc1[1]
4190208 blocks super 1.2 [2/1] [_U]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
$ mount -r ro /dev/md127 /data
mount: bad usage
Re: RN104 data recovery with single RAID1 volume
md127 : inactive sdc3[1](S)
2925414840 blocks super 1.2
It looks like the NAS was attempting to sync the good drive from the bad one when you powered down the NAS. That is a problem because mdadm marks the drive as a spare when it is resyncing it.
I don't have a good way to clear the spare status - you might need to use RAID recovery software (like ReclaiMe) to recover the data.
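(One read-only way to confirm that diagnosis, a sketch assuming the data partition is /dev/sdc3 as in the output above:)
sudo mdadm --examine /dev/sdc3    # inspect the md superblock without changing anything
# a "Device Role : spare" line confirms the interrupted resync left this member marked as a spare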
Re: RN104 data recovery with single RAID1 volume
OK, I understand. Is ReclaiMe the best option for a btrfs drive? I have also seen DiskDrill recommended in several places.
Is there any chance of recovering data from either of the two volumes using the process I linked in my original post: [how to] rescue data from a damaged btrfs volume?
Thank you for looking into my problem!
Re: RN104 data recovery with single RAID1 volume
@CircuitBreaker wrote:
I have also seen DiskDrill recommended in several places.
No idea whether DiskDrill will do the job or not (or which version supports BTRFS). It could work.
Re: RN104 data recovery with single RAID1 volume
The Disk Drill web site does not indicate that it supports any kind of NAS volume.
Re: RN104 data recovery with single RAID1 volume
First of all, big thanks for the help and generosity of this community! I also wanted to share an update on my data recovery process, in the hope that it may help someone. Most steps below took from one to several days...
So I had to swallow the cost and buy ReclaiMe for ~$200 with tax. Just a warning that it may take up to a day or two after you pay: they wanted to "verify" that I was the real buyer and asked for a copy of a photo ID, bank statement and/or transaction number before sending me my license key. After a few emails and screenshots of the pending purchase from my bank card app, I finally got it.
I decided to keep at least two copies of the (important) data during the recovery, so I signed up for Backblaze since they have no limit on file and backup size. I had a copy on my computer of most of the data on the ReadyNAS (pictures, videos, and documents) but had not synced it for some time, so I made a cloud backup of each copy of the data, i.e. the local one and the one from the NAS:
1. Backup all local PC data to BackBlaze
2. Recover the most important folders from the NAS "good" drive to a local drive. (I did not have enough space for everything)
3. Backup to BackBlaze
4. Set up a new DS220+ NAS with my new 4TB WD drive
5. Move the first batch of recovered files from my PC to the new NAS.
6. Repeat 2, 3, and 5 for the remaining files from the original RN drive
Once all data was on the new 4TB NAS drive, I wanted to verify that I could recover data from it. However, it is Btrfs and I still could not mount it under Linux. Below is the dmesg output after running this (or a similar) command:
sudo mount UUID=f7477c77-f134-468f-86cb-dfd01af93e8c -o subvolid=5 /mnt/
mount: /mnt: can't read superblock on /dev/mapper/vg1-volume_1.
dmesg(1) may have more information after failed mount system call
[ 2072.479819] BTRFS info (device dm-1): using crc32c (crc32c-intel) checksum algorithm
[ 2072.479832] BTRFS info (device dm-1): using free space tree
[ 2072.493260] BTRFS critical (device dm-1): corrupt leaf: root=1 block=163692544 slot=17, invalid root flags, have 0x400000000 expect mask 0x1000000000001
[ 2072.493272] BTRFS error (device dm-1): read time tree block corruption detected on logical 163692544 mirror 1
[ 2072.497822] BTRFS critical (device dm-1): corrupt leaf: root=1 block=163692544 slot=17, invalid root flags, have 0x400000000 expect mask 0x1000000000001
[ 2072.497831] BTRFS error (device dm-1): read time tree block corruption detected on logical 163692544 mirror 2
[ 2072.497854] BTRFS warning (device dm-1): couldn't read tree root
[ 2072.498374] BTRFS error (device dm-1): open_ctree failed
My guess is that there are some bad sectors on the new drive from when it was mirrored with a corrupt drive in the RN, since this is a brand-new drive. I can see two options here and would appreciate advice from the experts:
A. Try to repair the 4TB drive to ensure mounting and data recovery work, and continue to use it in the NAS (see the read-only check sketched below)
B. Wipe and repair the "good" original RN 3TB drive, set it up as an Ext4 drive in the new NAS, and use ReclaiMe to copy all recovered data from the 4TB drive
The goal in the end is to have a rebuilt RAID1 in the new NAS with the 3TB and 4TB drives until I get a couple of new bigger drives to replace them.
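(For option A, a hedged first step that does not write to the disk, assuming the volume device is /dev/mapper/vg1-volume_1 as in the dmesg output above:)
sudo btrfs check --readonly /dev/mapper/vg1-volume_1    # read-only metadata check; shows whether the tree errors look repairable
# only consider 'btrfs check --repair' as a last resort, and then only once the data is safely copied elsewhere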
Re: RN104 data recovery with single RAID1 volume
I am confused on your pathway.
The whole point of purchasing ReclaiMe was to use it to recover files from the damaged file system. That doesn't require mounting it.
Re: RN104 data recovery with single RAID1 volume
@StephenB wrote:
I am confused on your pathway.
The whole point of purchasing ReclaiMe was to use it to recover files from the damaged file system. That doesn't require mounting it.
Sorry for my long-winded post. I have recovered the files from the original 3TB drive from my RN to the new 4TB drive in the DS220+. However, testing my new 4TB drive still shows bad sectors and it cannot be mounted as Btrfs. I am not comfortable ignoring these errors in case I need to recover data again from the new 4TB drive, and I also want to be able to transfer data from it on the Linux box for faster backups in the future.
In short, my question is: should I try to repair these bad sectors on the new 4TB drive?
My other option (B) is to copy everything again to a drive I have checked for errors, which I hope can then be mounted on the Linux box.
Re: RN104 data recovery with single RAID1 volume
@CircuitBreaker wrote:
I have recovered the files from the original 3TB drive from my RN to the new 4TB drive in the DS220+.
Great, I somehow missed that.
@CircuitBreaker wrote:
However, testing my new 4TB drive still shows bad sectors and it cannot be mounted as Btrfs. I am not comfortable...
I am not seeing any evidence of bad sectors in your posts. Can you give more info on that?
Generally if you are suspecting bad sectors (perhaps seeing them in SMART stats or WD's Dashboard software, or disk errors in the logs), then the best strategy is to exchange the disk with the seller if you are still within the return period. Otherwise pursue a warranty replacement with WD.
If I am reading your post correctly, the drive was formatted/installed on the Synology, correct? Then you copied the data to the Synology data volume with ReclaiMe. If that is correct, then you should see these errors in the Synology logs. Have you looked there?
ReadyNAS volumes (even JBOD) always need to be assembled with mdadm first, and then you mount the mdadm array. I don't own Synology, so I don't know how they handle this. You should make sure you are using the correct mounting procedure in the Synology forum.
I am hoping you didn't purchase a WD Red. Current WD Red drives use SMR (shingled magnetic recording), which often misbehaves in RAID configurations. WD Red Plus and WD Red Pro are CMR, so don't have this issue.
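(A quick, hedged way to look for evidence of bad sectors from Linux, assuming the drive appears as /dev/sdX; these are the standard SMART attributes, nothing specific to this thread:)
sudo smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'
# non-zero raw values here are the usual sign of bad or remapped sectors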
Re: RN104 data recovery with single RAID1 volume
@StephenB wrote:
@CircuitBreaker wrote: However, testing my new 4TB drive still shows bad sectors and it cannot be mounted as Btrfs. I am not comfortable...
I am not seeing any evidence of bad sectors in your posts. Can you give more info on that?
I followed the process as described in this thread with mdadm but did not post all commands before the `mount` attempt. The dmesg output was in one of my previous posts, but I am also adding it here for clarity. It seems that there are some corrupt blocks (leaves) and root flag errors - I am not sure whether they can be repaired:
[ 2072.479819] BTRFS info (device dm-1): using crc32c (crc32c-intel) checksum algorithm
[ 2072.479832] BTRFS info (device dm-1): using free space tree
[ 2072.493260] BTRFS critical (device dm-1): corrupt leaf: root=1 block=163692544 slot=17, invalid root flags, have 0x400000000 expect mask 0x1000000000001
[ 2072.493272] BTRFS error (device dm-1): read time tree block corruption detected on logical 163692544 mirror 1
[ 2072.497822] BTRFS critical (device dm-1): corrupt leaf: root=1 block=163692544 slot=17, invalid root flags, have 0x400000000 expect mask 0x1000000000001
[ 2072.497831] BTRFS error (device dm-1): read time tree block corruption detected on logical 163692544 mirror 2
[ 2072.497854] BTRFS warning (device dm-1): couldn't read tree root
[ 2072.498374] BTRFS error (device dm-1): open_ctree failed
The 4TB drive was installed/formatted in the Synology, but I did not do any error checks. I will check the Synology logs for errors.
It is a WD Red WD40EFRX and I bought it a couple of years ago as a spare for emergencies like this one. I will check it for errors with the WD Dashboard and see if it is still under warranty. Thanks!
Re: RN104 data recovery with single RAID1 volume
@CircuitBreaker wrote:
I followed the process as described in this thread with mdadm but did not post all commands before the `mount` attempt. The dmesg output was in one of my previous posts, but I am also adding it here for clarity. It seems that there are some corrupt blocks (leaves) and root flag errors - I am not sure whether they can be repaired:
Clearly there is a problem here. But unless you have more data you haven't posted, you are leaping to a conclusion when you attribute this to bad sectors on the disk. You aren't reporting any read errors, and you haven't checked for write errors on the Synology. A bad disk is one possibility, but there are others.
FWIW, I always test my own disks using vendor tools before installing them in the NAS. That includes the full long non-destructive diag and, when possible, the full write-zeros test. (I prefer the older WD Lifeguard software over their Dashboard, as WD removed the write test from Dashboard.) I have found some out-of-the-box disks that pass one of these tests but fail the others.
As I said, I don't know the steps needed to mount a Synology drive. I am thinking the superblock issue you posted before this snippet perhaps needs to be explored first. But first I'd put it back in the Synology and see if it mounts there when you boot the NAS. If it does, then the problem is almost certainly due to the mounting steps you are using.
While the disk is installed, you could also use smartctl on the Synology to run the full non-destructive disk test, if that is more convenient than doing that test with WD software on a PC.
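(A hedged sketch of that smartctl approach, assuming the drive shows up as /dev/sda on the Synology - confirm the device name first:)
sudo smartctl -t long /dev/sda    # start the extended (long) self-test; it runs in the background
sudo smartctl -a /dev/sda         # check back later: the self-test log and attributes show the result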
Re: RN104 data recovery with single RAID1 volume
Since I installed it, the 4TB drive has mounted without problems on the Synology NAS. I only see issues when trying to mount it on the Linux machine.
I also could not find any problems with the 4TB drive in the Synology logs:
Info System 2024/06/10 00:51:16 xxxxx System starts to optimize [Storage Pool 1].
Info System 2024/06/10 00:51:16 xxxxx System successfully created [Volume 1] on [Storage Pool 1].
Info System 2024/06/10 00:51:16 xxxxx System successfully created [Storage Pool 1](Device Type is [SHR]) with drive [SynologyNAS: 1].
Info System 2024/06/10 00:50:50 xxxxx System starts to create [Volume 1] on [Storage Pool 1].
Info System 2024/06/10 00:50:48 xxxxx System starts to create [Storage Pool 1](Device Type is [SHR]) with drive [SynologyNAS: 1].
Now I am running an extended SMART scan and will report back.
Re: RN104 data recovery with single RAID1 volume
@CircuitBreaker wrote:
Since I installed it, the 4TB drive has mounted without problems on the Synology NAS. I only see issues when trying to mount it on the Linux machine.
Then I think you just need to do something different in your commands.
There is a guide here: