

BMuuN
Aspirant
Jan 27, 2021

Remove inactive volumes to use the disk. Disk #1,2,3,4.

Wondering if someone can help...

 

I can't access any of the 4 hard drives in my ReadyNAS 104. As a result, all network shares are inaccessible. I'm running firmware v6.10.4 and the admin web interface is accessible; however, it states "Remove inactive volumes to use the disk. Disk #1,2,3,4."

 

Things were working fine until the NAS rebooted today. I've not received any prior warning about issues with the drives; however, after looking through the forum and reading some articles, I think the amount of free space remaining may have played a part: there are 4 x 4TB Western Digital drives in the NAS and, as you can see from the screenshot above, there's 10.90TB of data used, so it's over 80% consumption.

 

I began following this post, which talks about running `btrfs check --repair`, mounting the drives, and deleting some data to take usage below the 80% mark; however, as you can see from the screenshot below, I've not had much luck.
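For reference, the procedure in that post boils down to roughly the following (the device path is an assumption on my part, and as the replies below point out, `--repair` can be destructive):

```
btrfs check --repair /dev/md127   # attempt to repair the btrfs filesystem
mount /dev/md127 /data            # remount the data volume
# ...then delete enough data to bring usage back below ~80%
```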

 

 

This post suggests performing a factory default.  Can anyone clarify if this will format the drives?

 

Any help or advice would be greatly appreciated.

I can upload logs if they are required.

8 Replies

Replies have been turned off for this discussion
  • StephenB
    Guru - Experienced User

    BMuuN wrote:

     

    This post suggests performing a factory default.  Can anyone clarify if this will format the drives?

    Yes, it will.

     


    BMuuN wrote:

    there are 4 x 4TB Western Digital drives in the NAS and as you can see from the above screenshot there's 10.90TB of data used, so it's over 80% consumption.

     


    The information in your screenshot tells you nothing about free space.

    1. The free space indication is meaningless when the volume is not mounted.
    2. The 10.9 TiB indication is the total volume size, not the amount of space used.  The NAS reports space in TiB (1024*1024*1024*1024 bytes) - even though it uses a TB label.  10.9 TiB is the same as 12 TB - which is the size of a 4x4TB RAID-5 volume.
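
    As a quick sanity check on that arithmetic (10.9 is the figure from your screenshot):

    ```
    # TiB -> decimal TB: 1 TiB = 1024^4 bytes, 1 TB = 1000^4 bytes
    awk 'BEGIN { printf "%.1f TB\n", 10.9 * 1024^4 / 1000^4 }'   # prints 12.0 TB
    ```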

     


    BMuuN wrote:

     

     

    You are trying to repair the wrong volume.  md0 is the OS partition, and md1 is the swap partition. Neither uses btrfs.  Your data volume should be md127, so maybe try that.
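
    If you want to confirm which array is which before touching anything, something like this shows it (the device names are the typical OS6 layout, so treat them as assumptions):

    ```
    cat /proc/mdstat             # lists every assembled md array
    mdadm --detail /dev/md0      # OS partition
    mdadm --detail /dev/md1      # swap partition
    mdadm --detail /dev/md127    # data volume (btrfs), if it has been assembled
    ```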

     

    But you are risking making things worse.  Another option is to use Netgear paid support, which would greatly reduce the chance of data loss.

     

    • BMuuN
      Aspirant

      Thanks for getting back to me, StephenB.

       

      A factory reset is out of the question as I'd like to recover as much of the data as possible.

       


      StephenB wrote:

      You are trying to repair the wrong volume.  md0 is the OS partition, and md1 is the swap partition. Neither uses btrfs.  Your data volume should be md127, so maybe try that.

       

      But you are risking making things worse.  Another option is to use Netgear paid support, which would greatly reduce the chance of data loss.

       


      When I run `ls -la /dev` I don't see `md127`. Is this because the drive is not mounted?

       

      By the sounds of it running `btrfs check --repair /dev/md127` will do more harm than good?  Is my only option here Netgear support?

       

      • StephenB
        Guru - Experienced User

        BMuuN wrote:

        When I run `ls -la /dev` I don't see `md127`. Is this because the drive is not mounted?

         


        Likely yes.  You might need to do a btrfs device scan before you try the repair.
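
        A minimal sketch of that, assuming the array assembles cleanly and shows up as md127:

        ```
        mdadm --assemble --scan   # try to assemble all arrays from their superblocks
        btrfs device scan         # let btrfs re-discover its member devices
        ls -la /dev/md*           # md127 should appear here if assembly succeeded
        ```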

         


        BMuuN wrote:

         

        By the sounds of it running `btrfs check --repair /dev/md127` will do more harm than good?  Is my only option here Netgear support?

         


        Your data is definitely at risk, and if you don't know what you are doing (which honestly is the case) you can certainly do more damage.

         

        You could use paid Netgear support.  There are other recovery services out there (I can't recommend any, since I've never used them).

         

        If you can connect all the disks to a Windows PC, you could also use RAID recovery software. ReclaiMe is one package that folks here have used with success. If you use something else, you need to make sure that it supports btrfs.

  • Sandshark
    Sensei - Experienced User

    Yes, a factory default will format the drives.

     

    What NAS do you have?  If it's an ARM model, md0 is not BTRFS.  But that's not the problem, anyway.  md0 is the OS partition.  Your /data partition is most likely md127, which isn't mounted properly and is, thus, your problem.

     

    One thing you failed to note in the post you referenced is that the user is doing that in another Linux system, not the NAS, and he has just RAID1, so he can do it with only one drive.

     

    You can try the --repair option, but I don't think it's going to work without at least unmounting /data and /data-0.  Normally, I believe it has to be done via the support mode so it's already unmounted and you and the OS are not fighting over it.
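
    In rough terms, and assuming the data array really is md127, the safer ordering would be something like:

    ```
    umount /data /data-0 2>/dev/null   # make sure nothing has the volume mounted
    btrfs check /dev/md127             # read-only check first; review the output
    # only if the read-only check reports repairable damage:
    # btrfs check --repair /dev/md127
    ```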

     

    One other thing you can try is to boot up with just three drives, trying each combination.  It is possible that one drive is causing the problem and the volume will mount degraded (without redundancy) without that problem drive.

    • StephenB
      Guru - Experienced User

      Sandshark wrote:

       

      You can try the --repair option, but I don't think it's going to work without at least unmounting /data and /data-0. 


      His screenshot (mdstat) shows they aren't actually mounted.
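
      You can confirm that from the shell too (paths assumed):

      ```
      cat /proc/mdstat                # shows which arrays are assembled
      grep /data /proc/mounts || echo "data volume is not mounted"
      ```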

    • BMuuN
      Aspirant

      Thanks for getting back to me, Sandshark.

       


      Sandshark wrote:

      Yes, a factory default will format the drives.


      As mentioned above, a factory default is out of the question as I'd like to retrieve as much of the data as possible.

       


      Sandshark wrote:

      What NAS do you have?  If it's an ARM model, md0 is not BTRFS.  But that's not the problem, anyway.  md0 is the OS partition.  Your /data partition is most likely md127, which isn't mounted properly and is, thus, your problem.


      I have a ReadyNAS 104 with an ARM processor. As mentioned above, I don't see `md127` in `/dev`. Could this be the reason for the issues, and do you have any tips on trying to remount this partition?

       


      Sandshark wrote:

      One other thing you can try is to boot up with just three drives, trying each combination.  It is possible that one drive is causing the problem and the volume will mount degraded (without redundancy) without that problem drive.


      I've attempted this several times and had no luck: I'm always faced with the same message telling me to remove all disks. :smileysad: