
Forum Discussion

mharring54
Aspirant
Jun 30, 2023
Solved

ReadyNAS 214 "Remove inactive volumes to use the disk. Disk #1,2,3,4."

Hello, I recently updated my firmware on a ReadyNAS 214. Next time I looked I got a degraded volume error on Drive bay #1. I have 4 WD Red WD30EFRX drives but when I searched for a replacement I wa...
  • StephenB
    Jul 03, 2023

    mharring54 wrote:

     

    Okay - understanding this is what's tripping me up.

     


    Here's a brief (and simplified) explanation.  

     

    Sector X on each of disks a, b, and c holds data.  The corresponding sector X on disk d is a parity block.  This is constructed using an exclusive-or (xor) of the three data blocks.  You can think of the xor as addition for our purposes here.  

     

    Every time Xa, Xb, or Xc is modified, the RAID also updates Xd.

     

    So Xa + Xb + Xc = Xd.
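
    In Python terms it looks like this (a toy illustration with made-up byte values, not what the RAID driver literally does):

    Xa, Xb, Xc = 0x10, 0x0F, 0xA0   # data bytes on disks a, b, c (made-up values)
    Xd = Xa ^ Xb ^ Xc               # parity byte on disk d; ^ is xor, our "addition"
    print(hex(Xd))                  # 0xbf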

     

    If disk b is replaced, then Xb can be reconstructed by 

    Xb = Xd - Xa - Xc

     

    Similarly, the contents of any of the other disks can be reconstructed from the remaining 3. That is what is happening when the RAID volume resyncs.
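
    If you want to see the rebuild itself, here is the same toy example carried through in Python.  The blocks and the xor_blocks helper are made up for illustration (the real md driver works on much larger stripes), but the arithmetic is the same:

    from functools import reduce

    def xor_blocks(*blocks):
        """xor equal-length blocks together, byte by byte."""
        return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

    Xa = bytes([0x10, 0x20, 0x30, 0x40])   # data block on disk a (made-up)
    Xb = bytes([0x0F, 0x0E, 0x0D, 0x0C])   # data block on disk b (made-up)
    Xc = bytes([0xA0, 0xB0, 0xC0, 0xD0])   # data block on disk c (made-up)
    Xd = xor_blocks(Xa, Xb, Xc)            # parity block on disk d

    # disk b is replaced: rebuild its block from the three survivors
    rebuilt_Xb = xor_blocks(Xd, Xa, Xc)
    assert rebuilt_Xb == Xb                # identical to the "lost" block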

     

    The reconstruction fails if

    1. the system crashed after Xa, Xb, or Xc was modified, but before Xd was updated.
    2. two or more disks fail (including a second disk failure during reconstruction).
    3. a disk read gives a wrong answer (instead of failing).  This is sometimes called "bit rot".
    4. the system can't tell which disk was replaced.

    The RAID system maintains an event counter for each disk, counting the updates written to it.  So it can detect the first failure mode (because the event counters won't match).  When it sees that mismatch, it will refuse to mount the volume.  That is a fairly common cause of the inactive volume issue. 

     

    Often it is a result of a power failure, someone pulling the plug on the NAS instead of properly shutting it down, or a crash.  The RAID array can usually be forcibly assembled (telling the system to ignore the event count mismatch).  There can be some data loss, since there were writes that never made it to some of the disks.
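
    As a rough sketch of the idea (just the concept in Python; the real counters live in the md superblock on each disk, and the names and numbers here are invented):

    def can_assemble(event_counts, force=False):
        """Assemble only if every disk agrees on the event count, unless forced."""
        if len(set(event_counts.values())) == 1:
            return True                     # counters match: clean assemble
        print("event count mismatch:", event_counts)
        return force                        # forcing ignores the mismatch

    disks = {"disk 1": 48210, "disk 2": 48210, "disk 3": 48210, "disk 4": 48197}
    print(can_assemble(disks))              # False -> volume stays inactive
    print(can_assemble(disks, force=True))  # True  -> assembled, possible data loss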

     

    A failure of two or more disks sounds unlikely, but in fact it does happen. Recovery in that case is far more difficult (often impossible, or cost-prohibitive).

     

    Figuring out what happened in your case requires analysis of the NAS logs.  If you want me to take a look at them, you need to download the full log zip file from the NAS log page.  Then put it into cloud storage (Dropbox, iCloud, etc.), and send me a private message (PM) using the envelope icon in the upper right of the forum page.  Put a link to the zip file in the PM (and set the permissions so anyone with the link can view/download the zip file).