Forum Discussion
CircuitBreaker
May 31, 2024Aspirant
RN104 data recovery with single RAID1 volume
Hello all,
I have a ReadyNAS 104 with firmware version 6.5.0
It has 2 x WD Red NAS drives WD30EFRX 3TB.
One of the drives started showing degraded errors, and by the time I tried to copy my data it had failed completely. Now I am left with one (almost) good disk 1 and one certainly bad disk 2. Quick SMART scans of both drives in WD Dashboard show them as healthy, while the extended scans fail and do not complete. The CrystalDiskInfo scan of disk 1 is attached; based on information I found in a number of blogs, the disk seems recoverable.
Before pulling the disks, the NAS would get stuck on the "Powering Off" message with a flashing power button for days, and I had to pull the power cable a couple of times. It seems one of the disks is now also out of sync. So the first thing I did was clone the "good" disk 1 onto a new WD Red 4TB drive using GParted and a live CD. However, the cloned data is not accessible, and the 4TB drive cannot be mounted (it was formatted NTFS before the clone - likely a newbie error on my part).
Before attempting to recover data from disk 1 I want to make sure the clone/image on the 4TB disk is good. I have a Spiral Linux with btrfs support installed and intend to follow this guide to try to rescue my data again to the 4TB disk: https://forum.manjaro.org/t/how-to-rescue-data-from-a-damaged-btrfs-volume/79414. I am also nervous that I may destroy the only data backup in the process.
I would greatly appreciate any guidance and help to accomplish data rescue safely!!
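For reference, guides like the one linked above generally rely on `btrfs restore` from btrfs-progs, which only reads the damaged volume and copies files out to a separate destination, so it cannot destroy the source data. A minimal sketch, assuming the RAID array has already been assembled as /dev/md127 and the 4 TB disk is mounted at /mnt/rescue (both names are hypothetical placeholders):

```shell
# Hypothetical names -- adjust to match your system.
SRC=/dev/md127     # the assembled RAID device holding the btrfs volume
OUT=/mnt/rescue    # directory on the 4 TB disk where files are copied to

# btrfs restore never writes to SRC, so the source volume stays untouched.
# Built as a string here for review; run it once the names are confirmed.
RESTORE="btrfs restore -v $SRC $OUT"
echo "$RESTORE"
```

The `-v` flag just prints each file as it is recovered, which helps track progress on a large volume.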
17 Replies
CircuitBreaker wrote:
However, the cloned data is not accessible, and the 4TB drive cannot be mounted (it was formatted NTFS before the clone...)
Not sure what you did here. You needed to do a sector-by-sector clone, so the format shouldn't have mattered.
Did you put the 4 TB drive into the NAS by itself, and then power it up? If not, you should try that.
If that fails, then I'd boot up the system with the original "good" disk in its normal slot, with the other slot empty. That should boot with a degraded status. Then copy off as much data as you can to other storage (perhaps using the 4 TB drive for that).
CircuitBreaker wrote:
I have a Spiral Linux with btrfs support installed and intend to follow this guide to try to rescue my data again to the 4TB disk: https://forum.manjaro.org/t/how-to-rescue-data-from-a-damaged-btrfs-volume/79414. I am also
Why are you thinking you have a damaged volume? It's clear you had a failed disk, and that can create the issues you describe. So the RAID array is degraded. But that doesn't necessarily mean that the volume is damaged.
CircuitBreaker (Aspirant)
@StephenB thank you so much for the fast reply!!
I started the NAS with the cloned 4TB disk in place of the original disk 1 and slot 2 empty. I got the message "ERR: Used disks. Check RAIDar". My (newbie) guess is that the original clone was not successful and I may have created an image with GParted instead, so I was thinking of making a real clone onto the 4TB with CloneZilla or similar. Is this advisable before I continue with recovery?
I also started the NAS with only the original disk 1 in place. It reached 93% after boot and showed the message "Fail to startup". Not sure what to do next here...
Since the NAS is not booting with disk 1, I would rather mount disk 1 on the Linux box and copy the data, as this is my plan B. I could not figure out how to mount it successfully after a lot of searching and trying. I would appreciate help if the data recovery from the degraded NAS fails.
Thank you again!!
CircuitBreaker wrote:
My (newbie) guess is that the original clone was not successful
Mine too. CloneZilla should work (as will other tools that do a sector-by-sector clone).
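GNU ddrescue is another option worth knowing about here: it clones sector by sector, skips bad areas on a first pass, and keeps a map file so an interrupted run on a failing disk can resume. A sketch with hypothetical device names (double-check them with lsblk first, since the target disk is completely overwritten):

```shell
# Hypothetical device names -- verify with lsblk before running anything.
SRC=/dev/sdb    # failing 3 TB source disk
DST=/dev/sdc    # 4 TB target disk (its current contents will be destroyed)
MAP=rescue.map  # map of bad sectors; lets ddrescue resume after interruption

# Built as strings for review; run them once the devices are confirmed.
PASS1="ddrescue -f -n $SRC $DST $MAP"   # first pass: copy good areas, skip bad ones
PASS2="ddrescue -f -r3 $SRC $DST $MAP"  # second pass: retry bad sectors up to 3 times
echo "$PASS1"
echo "$PASS2"
```

The two-pass approach gets the healthy data off the failing disk quickly before stressing it with retries on the bad sectors.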
CircuitBreaker wrote:
Since the NAS is not booting with disk 1, I would rather mount disk 1 on the Linux box and copy the data as this my plan B. I could not figure out how to mount it successfully after a lot searching and trying.
You need to have both mdadm and btrfs installed on the linux box.
The first step is to assemble the mdadm array. If you always used 3 TB disks, then the volume is on disk partition 3. Partition 1 is the OS; Partition 2 is swap.
The array is assembled using
mdadm --assemble /dev/md127 /dev/sdX3
where sdX is the disk device.
Then you mount md127 (I suggest read-only):
mount -o ro /dev/md127 /data
You can of course use any mount point, it doesn't need to be /data.
If you see more than 3 partitions, then the volume was expanded. In that case you need some extra steps - let us know if that is your situation.
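Putting the steps above together, here is a sketch of the full sequence. The device name is a hypothetical placeholder, and since only one RAID1 member is present, mdadm may need --run to start the degraded array:

```shell
# Hypothetical device name -- find the real one with: lsblk -o NAME,SIZE,TYPE
DISK=/dev/sdb
PART=${DISK}3          # partition 3 holds the data volume (1 = OS, 2 = swap)
MNT=/mnt/nas           # any empty directory works as the mount point

# Built as strings for review; run them once the names are confirmed.
ASSEMBLE="mdadm --assemble --run /dev/md127 $PART"  # --run starts the array even though it is degraded
MOUNT="mount -o ro /dev/md127 $MNT"                 # read-only, so nothing on the disk changes
echo "$ASSEMBLE"
echo "$MOUNT"
```

Mounting read-only is the safe default while this disk is the only copy of the data; you can always remount read-write later if needed.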