Forum Discussion
RN716x
Jan 04, 2023 · Aspirant
ReadyNAS 716X data- Two volumes after Resync
System: ReadyNAS 716X | F/W: 6.10.8 | Old HDDs: 6TB x6 (WD Red) | New HDD: 8TB x1 (WD Red) | RAID 5 (X-RAID)
I have recently experienced some issues with my ReadyNAS system. I found out that I had to r...
StephenB
Jan 04, 2023 · Guru - Experienced User
RN716x wrote:
If the original sdd (6TB) is still working in the pre-sync state (i.e., I removed it and replaced it with the new 8TB), would I still need to clone it to a new drive, or can I just connect it and try to assemble?
The two paths are to start with the original sdd, or start with the original sda. Which is best depends on the amount of bad sectors on the two drives.
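One quick way to compare bad-sector counts is to look at the SMART attributes for each disk. A sketch, assuming smartmontools is available (it normally is on ReadyNAS OS6) and that the drive of interest is currently /dev/sda - adjust the device name to match:
smartctl -A /dev/sda | grep -Ei 'reallocat|pending|uncorrect'
Non-zero and growing reallocated/pending/uncorrectable counts mean the drive is losing sectors.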
Cloning just eliminates the chance that the drive will fail during resync. SDA already did, so we know that one would be challenging. If the drive is cloned with errors, then there will probably be corrupt files/folders, but the RAID won't be able to detect that (since the sectors would all be readable).
SDD isn't so clear - you had issues with it before. But you could try putting the drive back into the NAS (with SDA removed), force the array to assemble, and then try to sync with a new 8 TB drive in slot 1. Assuming it does assemble, you could try making a backup first.
RN716x wrote:
Would you elaborate further on this part please, just to make sure I'm following correctly:
"Then you could forcibly assemble the array,"
Basically there are write counters on each drive in the RAID array. Those are used by the RAID software to make sure all writes to all drives completed. When you reinsert a drive, in almost all cases the write counters will not be the same (even if you don't think you wrote anything to the array). If the counts aren't very close, the array won't mount - that is one cause of the "inactive volume" problem.
In your case, the count for sda3 is already off, and it was kicked out of the array. Likely that is partly due to the burst of errors at the end of the sync. Since it is off, the original sdd3 is almost certainly also off.
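If you want to see those counters yourself, mdadm can show them per member partition - a sketch (device names here are just examples; use the data partitions actually in your array):
mdadm --examine /dev/sda3 | grep -i events
mdadm --examine /dev/sdb3 | grep -i events
Members whose event counts are close together can usually be force-assembled safely; a big gap means that member is missing a lot of writes.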
Overall, the process would be to power down the NAS, remove sda, and put back the original sdd. Then power up (perhaps read-only for safety). If that works, then you should back up the data if you can, and hot-insert one of the 8 TB drives.
If it's the one that you already tried, then remove the partitions with Windows Disk Management before reinserting (or alternatively select it in the NAS web UI and manually format it).
If that resync completes, then remove sdd, and insert the new 8 TB drive in its place (NAS running).
However, I think the inactive volume error will persist when you power up the NAS w/o sda and with the original sdd. In that case, you'd need to use some linux commands to force the array to mount. Normally I've done this in tech support mode (not in the normal boot). But the commands should work in a normal boot as well.
Something like
mdadm --force --assemble /dev/md127
btrfs scan device
mount /dev/md127 /data
If you prefer mounting the array read-only then put a "-o ro" after "mount" in the last command.
If you try this, make sure you log in as root (using the NAS admin password) when accessing the NAS with ssh.
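Written out, the read-only variant of that last command would be:
mount -o ro /dev/md127 /data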
RN716x
Jan 05, 2023 · Aspirant
After connecting to the readynas, and using the mdadm command:
StephenB wrote:
Something like
mdadm --force --assemble /dev/md127
btrfs scan device
mount /dev/md127 /data
If you prefer mounting the array read-only then put a "-o ro" after "mount" in the last command.
If you try this, make sure you log in as root (using the NAS admin password) when accessing the NAS with ssh.
I get the below:
root@MaQBaLiNAS:~# mdadm /dev/md127
mdadm: cannot open /dev/md127: No such file or directory
root@MaQBaLiNAS:~# btrfs scan device
btrfs: unknown token 'scan'
btrfs subvolume create [-i <qgroupid>] [<dest>/]<name>
then when trying to mount:
root@MaQBaLiNAS:~# mount /dev/md127 /data
mount: special device /dev/md127 does not exist
Not sure if I'm missing something here.
- StephenB · Jan 05, 2023 · Guru - Experienced User
RN716x wrote:
root@MaQBaLiNAS:~# mdadm /dev/md127
mdadm: cannot open /dev/md127: No such file or directory
Not sure if I'm missing something here.
Well, you didn't try the mdadm command I posted.
One puzzle here is the missing files in your log zip. Can you post /etc/mdadm/mdadm.conf ???
I accidentally got the btrfs command backwards - it should have been
btrfs device scan
But you need to get the mdadm part to work before you try the btrfs command.
- RN716x · Jan 06, 2023 · Aspirant
StephenB wrote:
Well, you didn't try the mdadm command I posted.
root@MaQBaLiNAS:~# mdadm --force --assemble /dev/md127
mdadm: --force does not set the mode, and so cannot be the first option.
StephenB wrote:
One puzzle here is the missing files in your log zip. Can you post /etc/mdadm/mdadm.conf ???
root@MaQBaLiNAS:~# /etc/mdadm/mdadm.conf
-bash: /etc/mdadm/mdadm.conf: Permission denied
StephenB wrote:
btrfs device scan
But you need to get the mdadm part to work before you try the btrfs command.
============
Still tried it anyway:
root@MaQBaLiNAS:~# btrfs device scan
Scanning for Btrfs filesystems
Then tried this command:
root@MaQBaLiNAS:~# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 4 drives - not enough to start the array.
mdadm: No arrays found in config file or automatically
5 drives are connected (inc. original 6TB sdd4). The only drive not connected is the sdda. Should I try connecting sdda which failed during sync but is still working, or perhaps the sdd4 8TB which apparently didn't successfully sync?
The 'mount' command throws this output:
root@MaQBaLiNAS:~# mount /dev/md127 /data
mount: special device /dev/md127 does not exist
- StephenB · Jan 06, 2023 · Guru - Experienced User
RN716x wrote:
root@MaQBaLiNAS:~# /etc/mdadm/mdadm.conf
-bash: /etc/mdadm/mdadm.conf: Permission denied
One challenge here is sorting out how much Linux people know.
You tried to execute the conf file - which can't be done. If you want to list it, you type
cat /etc/mdadm/mdadm.conf
But there is a hint that the RAID array isn't in the config file (the error message you got when you tried to do mdadm --assemble --scan).
RN716x wrote:
root@MaQBaLiNAS:~# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 4 drives - not enough to start the array.
mdadm: No arrays found in config file or automatically
Again, this is the step you HAVE to get resolved first. There is no point in continuing until you can force the mdadm array to assemble.
Try adding --force to this command.
mdadm --assemble --scan --force
FWIW, I am worried that even if you get past this you will end up needing to do a factory default - it's pretty clear from the log zip that you have more issues - not just an inactive volume. So if we can get it mounted, it will be important to offload the data to other storage.
- RN716x · Jan 06, 2023 · Aspirant
Is it asking for an input here?
root@MaQBaLiNAS:~# cat /etc/mdadm/mdadm.conf
CREATE owner=root group=disk mode=0660 auto=yes
Then
root@MaQBaLiNAS:~# mdadm --assemble --scan --force
mdadm: NOT forcing event count in /dev/sdc3(3) from 11031 up to 19055
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: /dev/md/data-0 assembled from 4 drives - not enough to start the array.
mdadm: No arrays found in config file or automatically
- StephenB · Jan 06, 2023 · Guru - Experienced User
RN716x wrote:
Is it asking for an input here?
root@MaQBaLiNAS:~# cat /etc/mdadm/mdadm.conf
CREATE owner=root group=disk mode=0660 auto=yes
This is the same as the file on my system, so nothing unusual here.
RN716x wrote:
root@MaQBaLiNAS:~# mdadm --assemble --scan --force
mdadm: NOT forcing event count in /dev/sdc3(3) from 11031 up to 19055
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: /dev/md/data-0 assembled from 4 drives - not enough to start the array.
mdadm: No arrays found in config file or automatically
This is telling you that /dev/sdc3 is missing a lot of writes - so it is seriously behind the other drives in the array.
1. You could proceed with --really-force instead of --force. You could end up with a lot of data loss/corruption if you do that.
2. The other option is to go back to SDA. Power down, remove SDC, insert SDA. Power up, try the same mdadm command again, and see if the event count gap is smaller. If you go this route, you should definitely mount the volume as read-only - as we know that SDA is failing, and writes will likely only accelerate that. They would also increase the event gap between SDC and the rest of the array (making going back to SDC even more risky).
This isn't an obvious decision. SDA likely will have a smaller event gap, but we already know it has unreadable sectors. Plus it might fail when you try to offload data. Still, I think it is likely the best path (with read-only mounting).
A variant of (2) is to clone SDA, and insert the clone. The cloning process will skip over any unreadable sectors on the original. The benefit of the variant is that there will be no bad sectors detected on the clone (which is a mixed blessing, as bad sectors do give you some information on file corruption). The risk is that the original will completely fail during the cloning process.
Either way (1, 2, or 2 variant), you'd copy off as much data as you can, do a factory default with the two new drives installed in place of SDA and SDC. You'd reconfigure the NAS at that point, and then restore the files from the backup.
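If you do go the cloning route (the variant of option 2), GNU ddrescue is the usual tool, since it keeps going past unreadable sectors and can retry them later. A rough sketch, assuming the failing drive appears as /dev/sdX and the clone target as /dev/sdY on a separate Linux machine (placeholder names - triple-check them before running, since the target is overwritten):
ddrescue -f -n /dev/sdX /dev/sdY rescue.map    # first pass: copy everything readable, skip scraping
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map   # second pass: retry the bad areas a few times
The map file lets you stop and resume without re-reading what already copied.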
- RN716x · Jan 06, 2023 · Aspirant
Following option 2, I removed sdd and connected sda (read-only). The event count gap is clearly narrower.
root@MaQBaLiNAS:~# mdadm --assemble --scan --force
mdadm: NOT forcing event count in /dev/sda3(0) from 18988 up to 19055
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: /dev/md/data-0 assembled from 4 drives - not enough to start the array.
mdadm: No arrays found in config file or automatically
root@MaQBaLiNAS:~# mdadm --assemble --scan --really-force
mdadm: forcing event count in /dev/sda3(0) from 18988 upto 19055
mdadm: /dev/md/data-0 assembled from 5 drives - not enough to start the array.
mdadm: No arrays found in config file or automatically
- RN716x · Jan 06, 2023 · Aspirant
Quick update after restarting the NAS. Not sure if trying to reconnect sdd to resync would be a good option - I'm hoping to have better redundancy until I offload the data.
- StephenB · Jan 06, 2023 · Guru - Experienced User
RN716x wrote:
Quick update after restarting the NAS. Not sure if trying to reconnect sdd to resync would be a good option - I'm hoping to have better redundancy until I offload the data.
I'm glad it finally mounted.
Don't add any more drives - that will start a resync process that will stress the failing one. If you push it over the edge trying to get redundancy, then you will lose all your data. Remember this whole mess started when disk errors on sda caused the resync to fail.
Just offload everything you can as quickly as you can - using what space you have now to get things started.
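For the copy itself, rsync works well because it can be re-run and will pick up where it left off. A sketch, assuming a backup disk is mounted at /mnt/backup (a placeholder path - point it at whatever storage you're offloading to):
rsync -a --progress /data/ /mnt/backup/
You could equally just copy the shares over the network (SMB) from another machine - whatever is fastest for you.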
- StephenB · Jan 09, 2023 · Guru - Experienced User
I'm glad you got the data off. It's very nerve-wracking when multiple disks are having problems.