How to recover from "Remove inactive volumes..." error -- a last-chance guide

Sandshark
Sensei

This information is provided as-is, with no warranty, by myself, just another ReadyNAS user not affiliated with Netgear, for educational purposes. While this procedure worked for me, it may not work for you, as there are many variables that may differ. If it doesn't work, or even makes things worse, then you probably shouldn't have been attempting the process in the first place, and I accept no responsibility. This is intended to be a last-chance process to recover data that you have already accepted may be lost forever. If you cannot accept that, contact Netgear for paid assistance.

 

This process was performed on an OS6 system, and I do not believe it to be entirely applicable to older OS versions, though some of the principles likely apply.

 

If you can get into the system via SSH or FTP, then there is a chance you can overcome the dreaded "Remove inactive volumes to use disk..." error. The following commands can be useful in determining the state of everything:

 

# lsblk  Shows a "picture" of the drives and partitions, including the mdadm devices they are a part of.

# cat /proc/mdstat  Shows the status of each MDADM array.

# mdadm --detail /dev/mdXXX  Shows details about MDADM array XXX (data volumes typically start at md127 and work down; md0 is the OS partition and md1 is the swap partition). Do this for all arrays.
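
If an array refuses to assemble, it can also help to inspect the superblock on an individual member partition. A hedged example (on OS6, the data portion of each disk is typically the third partition, but verify with lsblk; sda3 here is just an illustration):

# mdadm --examine /dev/sda3  Shows the superblock on one member partition, including the array name (such as data-0) and that partition's role in the array.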

 

Once you have the details of your current condition, you can attempt to re-assemble the array. I recommend you do it in read-only mode first so as not to kick off a re-sync (if one is required) until you are sure you want/need to. Assuming the default of a single-layer volume called "data", the command would be:

 

# mdadm --assemble /dev/md127 --name=data-0 --readonly

Note that this assembles all partitions containing superblocks labeled "data-0", the first (or only) layer of volume data. If you have a second layer, you have to assemble it as md126 looking for "data-1" elements, and so on for other layers; see the sketch below. After you do that, use the commands at the beginning of this post to re-assess the status of your drives and arrays.
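
For example, assembling the second layer of an expanded volume would look like this (a sketch, assuming the default volume name "data"):

# mdadm --assemble /dev/md126 --name=data-1 --readonly

To re-assess from the shell after assembling, the array state can also be checked directly (again a sketch, assuming md127; the State line should say clean or degraded, plus read-only if you assembled it that way):

# mdadm --detail /dev/md127 | grep -i state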

 

If all is well, your NAS's Volumes display should show your volume changed from Inactive to either Redundant or Degraded, and also that it is read-only. If things are bad, it will either not mount or show as dead, and you're done -- your only hope is data recovery. Once you've successfully re-assembled the array, the BTRFS file system should mount by itself. If it doesn't, use something like this:

 

# mount /dev/md127 /data
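
If the mount is refused because the array is still read-only, mounting read-only should work instead, and you can first confirm BTRFS recognizes the assembled device (a sketch using standard btrfs-progs and mount options):

# btrfs filesystem show /dev/md127

# mount -o ro /dev/md127 /data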

You should now be able to see your files using the command line. The problem is, the GUI and Samba (and, I assume, other protocol drivers) don't see it. At this point, my volume was Redundant, and I just re-booted and all was well. If there are commands that can be used instead of a re-boot, I have no idea what they would be.
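
One untested guess at avoiding the re-boot (hypothetical; readynasd is the ReadyNAS OS6 management daemon, and the re-boot above is the only approach verified here) would be restarting the management and Samba services:

# systemctl restart readynasd smbd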

 

If your volume is Degraded, you may want to let it re-sync before you re-boot. To do that, make the array (or arrays, for a multi-layer volume) read-write:

 

# mdadm --read-write /dev/md127
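
For a hypothetical two-layer volume (md127 and md126, as above), both layers would go in one command (a sketch):

# mdadm --read-write /dev/md127 /dev/md126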

Note that you can (and should) list all layers of a single volume one after the other in this command, as in the sketch above. If your array was degraded, making it read-write should kick off a re-sync, whose progress you can watch in the GUI or with:

 

# watch cat /proc/mdstat

Once the re-sync completes, you can re-boot. If you know there is something wrong with your array and you don't have a current back-up, you might want to back up before you make the volume read-write or re-boot. Since the GUI doesn't see the volume, you're going to have to use the command prompt to do that, and I'm not going to cover it here.

 

I see a lot of folks affected by this error, and it finally affected me, so I researched the required commands. I hope my research is a help to others. But if this all looks like Greek to you, and you aren't actually from Greece, you probably shouldn't try it. Note that this does nothing to help with the root cause of the volume not mounting. So if you have a full OS partition or something else going on, you could end up in the same place after re-boot unless you fix that root cause while you are doing the volume re-assembly.
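
As one example of root-cause checking, a full OS partition (md0, as noted above) can be spotted from the shell with a standard df invocation:

# df -h /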

 

My issue was on an EDA500 and appeared to have been a hiccup on the eSATA bus during boot that froze the boot process and made me have to forcibly re-boot again. So fixing the volume was all I needed to do. Why the NAS can't do this on its own, I don't know.

Retired_Member
Not applicable

Re: How to recover from "Remove inactive volumes..." error -- a last-chance guide

Hi @Sandshark, thanks for posting and kind regards.
