mharring54
Jun 30, 2023, Aspirant
ReadyNAS 214 "Remove inactive volumes to use the disk. Disk #1,2,3,4."
Hello, I recently updated my firmware on a ReadyNAS 214. Next time I looked I got a degraded volume error on Drive bay #1. I have 4 WD Red WD30EFRX drives but when I searched for a replacement I wa...
StephenB
Jul 01, 2023, Guru - Experienced User
mharring54 wrote:
I then tried different swaps and now have ended up with all the drives redded out under volume status but with a green indicator and the "Remove inactive volumes" message above.
Did you try putting the original drives back in their original slots?
If that fails, remove the drive you first removed (to insert the replacement) and try booting again.
You need to be careful when experimenting with different swaps, as the RAID array can easily get out of sync (making the issue more difficult to fix).
mharring54 wrote:
I was under the impression that I had complete data redundancy across 4 backup drives but my reading of these forums suggests that this NAS backup is not a complete safeguard.
RAID redundancy (though helpful) is not enough to keep data safe. I'm guessing you don't have a backup of the files on the NAS - is that the case?
mharring54
Jul 01, 2023, Aspirant
Stephen,
Thanks. Yes, I put the drives back as originally configured and I get all drives redded out with a red status indicator on the original 'degraded' drive. On my performance status I get a black ball on Disk 1 and 3 green balls on Disks 2-4.
My overview shows status as "Healthy," whereas before it was "Volume Degraded." I suspect it's not reading Disk 1, so just the 3 additional drives.
I don't have a backup since I had less than 5TB of data and 12TB total (4x3TB over 4 drives), and I thought I was mirroring the data on the other drives. The Volume page now shows I have 8.17 TB of data, but I have no idea how it is arrayed or how to recover it. It still says to remove inactive volumes 1-4.
I kept reading that the drives were hot-swappable but now I'm not so sure, so I shut down each time I swap the drives. The replacement drive I bought is a WD30EFZX Red Plus, and when I put it in and select Format, nothing happens(?)
I have lots of photographs stored as hi-res original files, so I hope I can recover them all. Are there any repair routines to rebuild the RAID array?
Thanks.
- mharring54, Jul 01, 2023, Aspirant
Another screen shot of the Performance status...
- Michael
- StephenB, Jul 02, 2023, Guru - Experienced User
mharring54 wrote:
I don't have a backup since I had less than 5TB of data and 12TB total (4x3TB over 4 drives), and I thought I was mirroring the data on the other drives.
It's not mirrored with RAID-5 (which is the RAID mode X-RAID is using in your case). It's more complicated than that.
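For a rough sense of the numbers (just back-of-the-envelope arithmetic here, not anything the NAS itself runs): RAID-5 reserves one disk's worth of space for parity, so four 3 TB disks give about 3 x 3 TB = 9 TB of usable space rather than the 6 TB you'd get from mirroring. In Python:

    # Back-of-the-envelope RAID capacity arithmetic (illustration only; the
    # 4-disk / 3 TB figures come from this thread, everything else is generic).
    def raid5_usable_tib(num_disks: int, disk_tb: float) -> float:
        """RAID-5 usable space in TiB: (n - 1) disks of data, one disk of parity."""
        usable_bytes = (num_disks - 1) * disk_tb * 10**12   # drives are sold in decimal TB
        return usable_bytes / 2**40                         # the NAS GUI reports binary units

    print(f"4 x 3 TB as RAID-5 : {raid5_usable_tib(4, 3):.2f} TiB usable")   # ~8.19
    print(f"4 x 3 TB mirrored  : {2 * 3 * 10**12 / 2**40:.2f} TiB usable")   # ~5.46

That ~8.19 TiB is within overhead/rounding of the 8.17 TB your Volume page reports, which is consistent with RAID-5 rather than mirroring.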
mharring54 wrote:
I kept reading that the drives were hot-swappable but now I'm not so sure, so I shut down each time I swap the drives. The replacement drive I bought is a WD30EFZX Red Plus, and when I put it in and select Format, nothing happens(?)
At the moment, you really don't want to be swapping drives around.
mharring54 wrote:
Are there any repair routines to rebuild the RAID array?
Mounting the volume would require use of ssh (or tech support mode if ssh isn't already enabled).
Do you have any experience using the linux command line interface?
- Sandshark, Jul 02, 2023, Sensei - Experienced User
That you are seeing volume "data" and "data-0" means the OS can't assemble all the drives in one volume, but does recognize that there is something that needs to be assembled and has labeled them separately. That usually means they are out of sync (something changed on some, but not all, drives in the RAID). Both contain parts of your actual "data" volume, so don't delete either of them. But because file contents are spread among all drives, you can't just go into either and get data. StephenB can help you try to manually assemble them into one via the command line. There could end up being a few corrupted files, but most will likely be recoverable.
I think you are prevented from formatting when you have an invalid volume. It's a way the OS keeps you from doing something stupid that makes recovery impossible. Just adding another drive will not fix your problem, anyway. It's too late for that. If the old drive can be read at all, cloning it to the new one may help with the recovery.
If you can assemble the volume, you are going to want to save off all the files and re-format the whole volume, so you are going to need somewhere to put them. After you've restored everything, that somewhere can become your backup, so size it appropriately.
- mharring54, Jul 03, 2023, Aspirant
Thanks Sandshark,
You wrote:
"If you can assemble the volume, you are going to want to save off all the files and re-format the whole volume, so you are going to need somewhere to put them. After you've restored everything, that somewhere can become your backup, so size it appropriately."
Yes, I guess that is best practice. I have data spread across several USB external HDs, but I never trust them because WD drives often fail. So I bought 12TB of these WD Red drives to secure my data on a NAS, and now I need to duplicate it with another large hard drive? Are backups possible by just plugging a drive into the USB port on the ReadyNAS?
I'm wondering whether, when all is said and done, the new WD 3TB Red Plus drive I bought will be needed to replace the degraded drive?
Thanks
Michael
- mharring54, Jul 03, 2023, Aspirant
Thanks Stephen...replies below.
@mharring54 wrote:
I don't have a backup since I had less than 5TB of data and 12TB total (4x3TB over 4 drives), and I thought I was mirroring the data on the other drives.
It's not mirrored with RAID-5 (which is the RAID mode X-RAID is using in your case). It's more complicated than that.
Okay - understanding this is what's tripping me up.
@mharring54 wrote:
I kept reading that the drives were hot-swappable but now I'm not so sure, so I shut down each time I swap the drives. The replacement drive I bought is a WD30EFZX Red Plus, and when I put it in and select Format, nothing happens(?)
At the moment, you really don't want to be swapping drives around.
I have only swapped out the drive flagged with the "degraded volume" error for a newly purchased WD Red Plus. I'm wondering if this new drive is going to be compatible with my WD Red drives, or if I should return it before the return window expires?
@mharring54 wrote:
Are there any repair routines to rebuild the RAID array?
Mounting the volume would require use of ssh (or tech support mode if ssh isn't already enabled).
Do you have any experience using the linux command line interface?
I've done some command-line work - mostly in the OS X Terminal - but I'm not really up to speed without instruction. I haven't used Linux since 1998.
- StephenB, Jul 03, 2023, Guru - Experienced User
mharring54 wrote:
Okay - understanding this is what's tripping me up.
Here's a brief (and simplified) explanation.
Sector X on disks a, b, and c holds data. The corresponding sector X on disk d is a parity block, constructed using an exclusive-or (xor) of the three data blocks. You can think of the xor as addition for our purposes here.
Every time Xa, Xb, or Xc is modified, the RAID also updates Xd.
So Xa + Xb + Xc = Xd.
If disk b is replaced, then Xb can be reconstructed by
Xb = Xd - Xa - Xc
Similarly, the contents of any of the other disks can be reconstructed from the remaining 3. That is what is happening when the RAID volume resyncs.
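If it helps to see that arithmetic concretely, here is a tiny Python sketch of the xor parity idea (a toy model only, not the actual Linux md code the NAS uses):

    def xor_blocks(*blocks: bytes) -> bytes:
        """Byte-wise exclusive-or of equal-length blocks."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    xa, xb, xc = b"AAAA", b"BBBB", b"CCCC"   # sector X on data disks a, b, c
    xd = xor_blocks(xa, xb, xc)              # parity block on disk d

    # Disk b is replaced: rebuild its copy of sector X from the surviving three.
    # (xor is its own inverse, so the "subtraction" above is also just xor.)
    rebuilt_xb = xor_blocks(xd, xa, xc)
    assert rebuilt_xb == xb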
The reconstruction fails if
- the system crashed after Xa, Xb, or Xc was modified, but before Xd was updated.
- two or more disks fail (including a second disk failure during reconstruction).
- a disk read gives a wrong answer (instead of failing). This is sometimes called "bit rot".
- the system can't tell which disk was replaced.
The RAID system counts up the writes to each disk (maintaining an event counter for each disk). So it can detect the first failure mode (because the event counters won't match). When it sees that error, it will refuse to mount the volume. That is a fairly common cause of the inactive volume issue.
Often it is a result of a power failure, someone pulling the plug on the NAS instead of properly shutting it down, or a crash. The RAID array can usually be forcibly assembled (telling the system to ignore the event count mismatch). There can be some data loss, since there were writes that never made it to some of the disks.
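Purely as an illustration of that event-counter bookkeeping (the counter values and names below are made up, and the real decision is made by the Linux md driver, not code like this):

    # Hypothetical per-member event counters, as read from each disk's RAID superblock.
    events = {"disk1": 5123, "disk2": 5123, "disk3": 5123, "disk4": 5098}

    def counters_match(events: dict) -> bool:
        """Auto-assembly is only treated as safe when every member agrees."""
        return len(set(events.values())) == 1

    if counters_match(events):
        print("event counters match: volume assembles and mounts normally")
    else:
        newest = max(events.values())
        stale = [d for d, e in events.items() if e < newest]
        print("event counter mismatch, stale member(s):", stale)
        print("volume stays inactive; forcing assembly ignores the mismatch,")
        print("but writes that never reached the stale disk(s) are lost")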
Two or more disk failures sounds unlikely, but in fact it does happen. Recovery in that case is far more difficult (often impossible, or cost-prohibitive).
Figuring out what happened in your case requires analysis of the NAS logs. If you want me to take a look at them, you need to download the full log zip file from the NAS log page. Then put it into cloud storage (dropbox, icloud, etc), and send me a private message (PM) using the envelope icon in the upper right of the forum page. Put a link to the zip file in the PM (and set the permissions so anyone with the link can view/download the zip file).