
RNDP-6000: The array is still in degraded mode

Art_Toyboy
Follower

RNDP-6000: The array is still in degraded mode

I have a problem that I hope someone here can help with.

 

I have a ReadyNAS Pro Pioneer with six 2TB (WD20EARS) drives in X-RAID2 mode. Disk 1 failed, and I replaced it with a WD20EFRX.

 

After the synchronization completed in 2 hours (I was surprised it finished so quickly; I expected it to take much longer), the NAS reported:

"The RAID sync has completed on volume C. However, the array is still in degraded mode. This could be due to a failed disk sync or a disk failure in a multi-parity disk array."

 

After that, I saw that disk 5 had accumulated a lot of errors, so I got ready to replace it as well. But once disk 1 had finished synchronizing, the FrontView page reported that the volume was unavailable, disk 1 had spare status, and disk 3 had failed status (even though it shows no SMART errors).

 

Fortunately, I have a backup of this data, so if the volume ultimately isn't recoverable, I won't lose much (if anything). But restoring an array of this size (more than 8 TB) will take several days; I know because I have had to do it before, and I would like to avoid that.

 

I am prepared to replace disk 5 since it is having problems, but I want to save the data if at all possible. I think that if I replace disk 5 in this situation, I could well lose the data, because I have no confidence that disk 1 (the one I replaced) synchronized correctly.

 

Is there anything I can try besides attempting the usual recovery a few more times? And at what point should I decide that it is hopeless and just do a factory reset and restore?

If necessary, I can provide logs from the NAS.

Model: RNDP600E|ReadyNAS Pro Pioneer Chassis only
Message 1 of 2
StephenB
Guru

Re: RNDP-6000: The array is still in degraded mode

Rebuilding the array after replacing a disk requires reading every sector of all the other disks. Unfortunately that often uncovers issues with other disks in the array, and that appears to be your situation. I think the best way forward is to recover from backup (painful though it is). Even if you get the file system to mount, you are likely to have some file system corruption.
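If you want to see exactly which md members dropped out, and you have SSH access to the NAS (X-RAID2 sits on top of ordinary Linux md RAID), something like the rough Python sketch below will flag any md array that reports a missing member in /proc/mdstat. The parsing here is just the standard mdstat layout, nothing ReadyNAS-specific, so treat it as illustration rather than a supported tool.

    import re

    def degraded_arrays(path="/proc/mdstat"):
        """Return (array, status) pairs where an '_' marks a missing member."""
        results = []
        current = None
        with open(path) as f:
            for line in f:
                m = re.match(r"(md\d+)\s*:", line)
                if m:
                    current = m.group(1)
                    continue
                # Status lines end with something like "[6/5] [UUUU_U]".
                m = re.search(r"\[([U_]+)\]\s*$", line)
                if current and m:
                    if "_" in m.group(1):
                        results.append((current, m.group(1)))
                    current = None
        return results

    if __name__ == "__main__":
        bad = degraded_arrays()
        if not bad:
            print("all md arrays report every member present")
        for name, status in bad:
            print("%s is degraded: [%s]" % (name, status))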

 

I suggest that you test all the remaining disks (at least disk 3) using WD's Data Lifeguard Diagnostics on a Windows PC. If you are prepared to restore from backup, then I recommend running both the long read test and a full write-zeros test on each drive. I've found that the write test will uncover errors that the non-destructive tests miss.

 

You could also download the full log zip file and look for disk-related errors in system.log and kernel.log. disk_smart.log is worth a look too; it includes a cache of disk errors in addition to the SMART stats, which is often useful when evaluating disk health.
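If you'd rather not read through the whole zip by hand, a quick Python sketch along these lines can pull out the suspicious lines. The file names are the ones mentioned above, and the keyword list is just a handful of common Linux disk-error strings I've picked for illustration, not an official or exhaustive set.

    import os
    import re

    LOG_FILES = ["system.log", "kernel.log", "disk_smart.log"]
    # Typical strings the kernel/md layer logs when a disk is misbehaving.
    ERROR_PATTERNS = re.compile(
        r"I/O error|ata\d+.*(?:error|failed)|medium error|"
        r"end_request|\bUNC\b|raid.*degraded",
        re.IGNORECASE)

    def scan(log_dir):
        for name in LOG_FILES:
            path = os.path.join(log_dir, name)
            if not os.path.exists(path):
                continue
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if ERROR_PATTERNS.search(line):
                        print("%s:%d: %s" % (name, lineno, line.rstrip()))

    if __name__ == "__main__":
        scan(".")  # run it from the directory where you extracted the log zip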

 

Another option is just to replace all the disks. If you go that route, you'll find that fewer (but larger) disks are more cost-effective. Four WD40EFRX cost $400 (current US Amazon pricing) and give you a 12 TB volume; six WD20EFRX cost $420 and only give you 10 TB.
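For anyone comparing, the arithmetic is straightforward: with single redundancy, X-RAID2 gives you roughly (number of disks - 1) times the disk size in usable space. A quick sketch using the prices quoted above (which will obviously drift over time):

    # Usable space under single-redundancy X-RAID2 is (disks - 1) * disk size.
    # Prices are the rough US Amazon figures quoted above and will change.
    options = [
        # (label, disk count, TB per disk, total price in USD)
        ("4 x WD40EFRX (4TB)", 4, 4, 400),
        ("6 x WD20EFRX (2TB)", 6, 2, 420),
    ]

    for label, count, tb, price in options:
        usable = (count - 1) * tb  # one disk's worth of space goes to redundancy
        print("%-20s usable %2d TB  $%d total  $%.0f per usable TB"
              % (label, usable, price, price / float(usable)))

    # Prints roughly:
    #   4 x WD40EFRX (4TB)   usable 12 TB  $400 total  $33 per usable TB
    #   6 x WD20EFRX (2TB)   usable 10 TB  $420 total  $42 per usable TB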

 

BTW, it is possible to convert your NAS to run OS-6. The process does require reformatting the disks, but if you need to start over anyway, it is a natural time to convert.

 

 

Message 2 of 2