Forum Discussion
mharring54
Jun 30, 2023 · Aspirant
ReadyNAS 214 "Remove inactive volumes to use the disk. Disk #1,2,3,4."
Hello, I recently updated my firmware on a ReadyNAS 214. Next time I looked I got a degraded volume error on Drive bay #1. I have 4 WD Red WD30EFRX drives but when I searched for a replacement I wa...
StephenB
Jul 02, 2023 · Guru - Experienced User
mharring54 wrote:
I don't have a backup, since I had less than 5TB of data and 12TB of capacity (4x3TB over 4 drives), and thought I was mirroring the data on the other drives.
It's not mirrored with RAID-5 (which is the RAID mode X-RAID is using in your case). It's more complicated than that.
mharring54 wrote:
I kept reading that the drives were hot swappable, but now I'm not so sure, so I shut down each time I swap the drives. The replacement drive I bought is a WD30EFZX Red Plus drive, and when I put it in and select Format, nothing happens(?)
At the moment, you really don't want to be swapping drives around.
mharring54 wrote:
Are there any repair routines to rebuild the RAID array?
Mounting the volume would require use of ssh (or tech support mode if ssh isn't already enabled).
Do you have any experience using the Linux command line interface?
mharring54
Jul 03, 2023 · Aspirant
Thanks Stephen...replies below.
@mharring54 wrote:
I don't have a backup, since I had less than 5TB of data and 12TB of capacity (4x3TB over 4 drives), and thought I was mirroring the data on the other drives.
It's not mirrored with RAID-5 (which is the RAID mode X-RAID is using in your case). It's more complicated than that.
Okay - understanding this is what's tripping me up.
@mharring54 wrote:
I kept reading that the drives were hot swappable, but now I'm not so sure, so I shut down each time I swap the drives. The replacement drive I bought is a WD30EFZX Red Plus drive, and when I put it in and select Format, nothing happens(?)
At the moment, you really don't want to be swapping drives around.
I have only swapped the "degraded volume" for a newly purchased WD Red Plus. I'm wondering if this new drive is going to be compatible with my WD Red drives or if I should return it before the return window expires?
@mharring54 wrote:
Are there any repair routines to rebuild the RAID array?
Mounting the volume would require use of ssh (or tech support mode if ssh isn't already enabled).
Do you have any experience using the Linux command line interface?
I've done some command-line work, mostly in the OS X Terminal, but I'm not really up to speed without instruction. I haven't used Linux since 1998.
- StephenB · Jul 03, 2023 · Guru - Experienced User
mharring54 wrote:
Okay - understanding this is what's tripping me up.
Here's a brief (and simplified) explanation.
Sector X on disks a, b, and c holds data. The corresponding sector X on disk d holds a parity block, constructed using an exclusive-or (xor) of the three data blocks. You can think of this as addition for our purposes here.
Every time Xa, Xb, or Xc is modified, the RAID also updates Xd.
So Xa + Xb + Xc = Xd.
If disk b is replaced, then Xb can be reconstructed by
Xb = Xd - Xa - Xc
Similarly, the contents of any of the other disks can be reconstructed from the remaining 3. That is what is happening when the RAID volume resyncs.
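To make that concrete, here is a minimal Python sketch of the same arithmetic, using toy 4-byte "sectors" instead of real disk blocks (the xor here is the "addition" described above):

    from functools import reduce

    def xor_blocks(*blocks):
        # Xor the corresponding bytes of several equal-length blocks.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    xa, xb, xc = b"AAAA", b"BBBB", b"CCCC"   # sector X on disks a, b, c
    xd = xor_blocks(xa, xb, xc)              # parity sector written to disk d

    # Disk b is replaced; rebuild its sector from the three survivors.
    # Since xor is its own inverse, "Xd - Xa - Xc" is just another xor.
    assert xor_blocks(xd, xa, xc) == xb

A resync is doing exactly this, sector by sector, across the whole disk.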
The reconstruction fails if
- the system crashed after Xa, Xb, or Xc was modified, but before Xd was updated.
- two or more disks fail (including a second disk failure during reconstruction).
- a disk read gives a wrong answer (instead of failing). This is sometimes called "bit rot".
- the system can't tell which disk was replaced.
The RAID system maintains an event counter for each disk, which it updates as writes happen. So it can detect the first failure mode (because the event counters won't match). When it sees that error, it will refuse to mount the volume. That is a fairly common cause of the inactive volume issue.
Often it is a result of a power failure, someone pulling the plug on the NAS instead of properly shutting it down, or a crash. The RAID array can usually be forcibly assembled (telling the system to ignore the event count mismatch). There can be some data loss, since there were writes that never made it to some of the disks.
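If you're comfortable with ssh, you can inspect those event counters yourself. Here is a minimal sketch; it assumes the usual OS6 layout where the data array lives on partition 3 of each disk, so confirm the real member names with cat /proc/mdstat before trusting the output:

    # Print the md event counter of each data-array member.
    # The /dev/sd?3 partition names are an assumption - check /proc/mdstat.
    import re
    import subprocess

    for dev in ["/dev/sda3", "/dev/sdb3", "/dev/sdc3", "/dev/sdd3"]:
        out = subprocess.run(["mdadm", "--examine", dev],
                             capture_output=True, text=True).stdout
        m = re.search(r"Events\s*:\s*(\S+)", out)
        print(dev, "events:", m.group(1) if m else "no md superblock found")

If the counters disagree, a normal assemble is refused; forcing the assembly (mdadm --assemble --force) tells mdadm to ignore the mismatch, at the cost of whatever writes never reached the stale disk.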
Two or more disk failures sounds unlikely, but in fact it does happen. Recovery in that case is far more difficult (often impossible, or cost-prohibitive).
Figuring out what happened in your case requires analysis of the NAS logs. If you want me to take a look at them, you need to download the full log zip file from the NAS log page. Then put it into cloud storage (Dropbox, iCloud, etc.), and send me a private message (PM) using the envelope icon in the upper right of the forum page. Put a link to the zip file in the PM (and set the permissions so anyone with the link can view/download the zip file).
- mharring54 · Jul 03, 2023 · Aspirant
StephenB - will do. Thanks.
- kodanda · Jul 06, 2023 · Aspirant
Dear StephenB,
I had the exact same problem with my ReadyNAS 214. It was all working fine, but suddenly all three disks (it has 3) started to show up as red, though the health indicator shows green. There was a power failure in between. We have some important data. I am comfortable with the command line; I have been a Linux user for some time. Could you please help?
The log file is also attached.
Cheers,
Kodanda
- StephenB · Jul 06, 2023 · Guru - Experienced User
Likely it was the power problem.
Can you download the full log zip file, and then put it into cloud storage (Dropbox, Google Drive, etc.)?
Then send a download link in a private message (PM) to me. You can use the envelope link in the upper right of the forum page.
- RichardStuart · Jul 13, 2023 · Aspirant
Firmware 6.10.8
I suspect that StephenB has already answered this query (almost the same as the original), but here goes:
My NAS was approaching 5% free space and I did not notice this. Then DISK 2 showed as FAILED. I remounted it and all shares and files appeared okay, so I commenced a clean-up and ordered a new HDD to replace DISK 2. But 2 days later DISK 3 showed as FAILED as well (DISK 2 did not show a FAIL status again), at which point the shares were still accessible but the files could not be opened... We had a scheduled loss of power coming before I expected to receive the new HDD, so I gracefully powered the NAS down, and when the new HDD arrived the next day, powered it up again. At that point I had no shares and no volumes. Replacing the failed disk with the new HDD didn't help.
I tried to use Wondershare's Recoverit, but it could not connect to the NAS. The NAS is showing as healthy, but disks 1, 2, and 4 are red, disk 3 is grey, and all show as SPARE, not RAID5.
My question is whether there is any tool or process I can use to access the data on disks 1, 2, and 4, resync disk 3, and get my shares and files back?
Thanks in advance
- StephenB · Jul 13, 2023 · Guru - Experienced User
Recoverit certainly wouldn't work, since it is for Windows only.
One option is to connect the original disks to a Windows PC using USB adapters/docks and then use ReclaiMe (or some other RAID recovery software that supports both BTRFS and software RAID). You'd also need enough storage to offload your data.
It might be possible to forcibly mount the volume from tech support mode or using ssh (if it is enabled). Have you ever used the Linux command line interface?
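For reference, the forced mount boils down to something like the sketch below. The md device name, member partitions, and mount point are assumptions (verify them against /proc/mdstat and mdadm --examine), and it's worth having the logs reviewed before trying it, since forcing the wrong set of disks can make things worse:

    # Sketch of a forced assemble plus read-only mount, run over ssh.
    # All device names here are assumptions - verify against /proc/mdstat
    # and the NAS logs before running anything.
    import subprocess

    members = ["/dev/sda3", "/dev/sdb3", "/dev/sdc3", "/dev/sdd3"]

    subprocess.run(["mdadm", "--stop", "/dev/md127"], check=False)
    subprocess.run(["mdadm", "--assemble", "--force", "/dev/md127", *members],
                   check=True)
    # The OS6 data volume is BTRFS; mount it read-only first to inspect it.
    subprocess.run(["mount", "-o", "ro", "/dev/md127", "/mnt"], check=True)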