Forum Discussion
CircuitBreaker
May 31, 2024, Aspirant
RN104 data recovery with single RAID1 volume
Hello all, I have a ReadyNAS 104 with firmware version 6.5.0. It has 2 x WD Red NAS drives (WD30EFRX, 3TB). One of the drives started showing degraded errors and by the time I tried to copy my dat...
StephenB
Jun 01, 2024, Guru - Experienced User
CircuitBreaker wrote:
However, the cloned data is not accessible, and the 4TB drive cannot be mounted (it was formatted NTFS before the clone...)
Not sure what you did here. You needed to do a sector-by-sector clone, so the format shouldn't have mattered.
Did you put the 4 TB drive into the NAS by itself, and then power it up? If not, you should try that.
If that fails, then I'd boot up the system with the original "good" disk in its normal slot, with the other slot empty. That should boot with a degraded status. Then copy off as much data as you can to other storage (perhaps using the 4 TB drive for that).
CircuitBreaker wrote:
I have a Spiral Linux with btrfs support installed and intend to follow this guide to try to rescue my data again to the 4TB disk: https://forum.manjaro.org/t/how-to-rescue-data-from-a-damaged-btrfs-volume/79414. I am also
Why are you thinking you have a damaged volume? It's clear you had a failed disk, and that can create the issues you describe. So the RAID array is degraded. But that doesn't necessarily mean that the volume is damaged.
- CircuitBreaker, Jun 01, 2024, Aspirant
@StephenB thank you so much for the fast reply!!
I started the NAS with the cloned 4TB disk in place of the original disk 1 and slot 2 empty. I got the message "ERR: Used disks. Check RAIDar". My (newbie) guess is that the original clone was not successful and I may have an image created by GParted instead, so I was thinking of creating a real clone on the 4TB disk with CloneZilla or similar. Is this advisable before I continue with recovery?
I also started the NAS with only the original disk 1 in place. It reached 93% after boot and showed the message "Fail to startup". Not sure what to do next here...
Since the NAS is not booting with disk 1, I would rather mount disk 1 on the Linux box and copy the data, as this is my plan B. I could not figure out how to mount it successfully after a lot of searching and trying. I would appreciate help if the data recovery from the degraded NAS fails.
Thank you again!!
- StephenB, Jun 01, 2024, Guru - Experienced User
CircuitBreaker wrote:
My (newbie) guess is that the original clone was not successful
Mine too. CloneZilla should work (as will other tools that do a sector-by-sector clone).
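If it helps, a sector-by-sector clone can also be made from a Linux box with dd or ddrescue. This is only a sketch; /dev/sdX (source 3 TB disk) and /dev/sdY (target 4 TB disk) are placeholders, so confirm the real device names with lsblk before running anything:
# list disks so you can identify the source and target (names below are examples only)
lsblk -o NAME,SIZE,MODEL
# plain dd, copying every sector and continuing past read errors
sudo dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress
# or GNU ddrescue, which handles a failing source disk more gracefully
sudo ddrescue -f /dev/sdX /dev/sdY rescue.map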
CircuitBreaker wrote:
Since the NAS is not booting with disk 1, I would rather mount disk 1 on the Linux box and copy the data as this my plan B. I could not figure out how to mount it successfully after a lot searching and trying.
You need to have both mdadm and btrfs installed on the Linux box.
The first step is to assemble the mdadm array. If you always used 3 TB disks, then the volume is on disk partition 3. Partition 1 is the OS; Partition 2 is swap.
The array is assembled using
mdadm --assemble /dev/md127 /dev/sdX3
where sdX is the disk device.
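To find which device name the ReadyNAS disk got (the sdX above), you can list the disks first. A rough sketch:
# list all disks and their partitions; the ReadyNAS data disk should show
# 3 partitions, with the volume on the third one (e.g. sdc3)
lsblk -o NAME,SIZE,TYPE,FSTYPE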
Then you mount md127 (I suggest read-only):
mount -o ro /dev/md127 /data
You can of course use any mount point, it doesn't need to be /data.
If you see more than 3 partitions, then the volume was expanded. In that case you need some extra steps - let us know if that is your situation.
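Once the volume is mounted read-only, you can copy the data off to other storage, e.g. the 4 TB disk. A rough sketch, assuming that disk is mounted at /mnt/backup (adjust the paths to your setup):
# copy everything from the read-only volume, preserving permissions and links
sudo rsync -aHv --progress /data/ /mnt/backup/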
- CircuitBreaker, Jun 01, 2024, Aspirant
I cloned the "good" disk 1 to the 4TB new disk. However, when plugged into the NAS it still shows "ERR: Used Disks. Check RAIDar".
I then tried to assemble and mount disk 1 in the Linux box:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdc3[1](S)
      2925414840 blocks super 1.2
md0 : active (auto-read-only) raid1 sdc1[1]
      4190208 blocks super 1.2 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>

$ sudo mdadm --assemble /dev/md127 /dev/sdc3
mdadm: /dev/sdc3 is busy - skipping

$ sudo mdadm --stop md127
mdadm: stopped md127

$ sudo mdadm --assemble /dev/md127 /dev/sdc3
mdadm: /dev/md127 assembled from 0 drives and 1 spare - not enough to start the array.

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdc3[1](S)
      2925414840 blocks super 1.2
md0 : active (auto-read-only) raid1 sdc1[1]
      4190208 blocks super 1.2 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>

$ mount -r ro /dev/md127 /data
mount: bad usage