
Forum Discussion

Ozxpat
Aspirant
Apr 20, 2025

Balancing does not help with "no space left" despite capacity available.

Hi all,

I've got an RN516 with a six-disk XRAID volume that reports no space left, despite having 1.6TB free. It is 6 x 12TB, for about 55TB usable.

 

problem:

root@ReadyNas:/nas/Data# touch test.txt
touch: cannot touch 'test.txt': No space left on device

 

usage:

df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             10M  4.0K   10M   1% /dev
/dev/md0        4.0G  1.6G  2.1G  43% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           1.9G  5.7M  1.9G   1% /run
tmpfs           960M  2.5M  958M   1% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/md127       55T   54T  1.6T  98% /nas
/dev/md127       55T   54T  1.6T  98% /home
/dev/md127       55T   54T  1.6T  98% /apps
/dev/md127       55T   54T  1.6T  98% /run/nfs4/nas/Data

 

This command shows that the metadata is full:

btrfs filesystem df /nas
Data, single: total=54.07TiB, used=52.55TiB
System, DUP: total=8.00MiB, used=6.09MiB
Metadata, DUP: total=243.50GiB, used=243.00GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
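
The "Metadata, DUP" line is the key one: 243.50GiB allocated, 243.00GiB used, so only about half a gigabyte of metadata headroom remains even though df shows 1.6TB free. A minimal sketch (not from the thread; the line is copied from the output above) that parses such a line to make the gap explicit:

```shell
#!/bin/sh
# Parse a "btrfs filesystem df" metadata line and print the remaining
# headroom. The line below is copied from the output above.
line="Metadata, DUP: total=243.50GiB, used=243.00GiB"
total=$(echo "$line" | sed 's/.*total=\([0-9.]*\)GiB.*/\1/')
used=$(echo "$line" | sed 's/.*used=\([0-9.]*\)GiB.*/\1/')
headroom=$(echo "$total $used" | awk '{printf "%.2f", $1 - $2}')
echo "metadata headroom: ${headroom}GiB"
```

With essentially no room left in the allocated metadata chunks, any write that needs new metadata fails with ENOSPC regardless of free data space.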

 

I ran a balance from the GUI, but it didn't help. I'm no expert on these commands, but from SSH I tried this:

btrfs balance start -m -v -dusage=0 -musage=0 /nas
Dumping filters: flags 0x7, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=0
  METADATA (flags 0x2): balancing, usage=0
  SYSTEM (flags 0x2): balancing, usage=0
Done, had to relocate 0 out of 55859 chunks

 

and this:

btrfs balance start -m -v -musage=90 /nas
Dumping filters: flags 0x6, state 0x0, force is off
  METADATA (flags 0x2): balancing, usage=90
  SYSTEM (flags 0x2): balancing, usage=90
ERROR: error during balancing '/nas': No space left on device

 

Not sure what to try next and would greatly appreciate advice. Currently I'm running a scrub to see if that helps. 

I looked closely at this thread and indeed stole the title from it. But my situation seems different because it's my nas volume that reports no space left and the metadata seems full for /nas.

20 Replies

  • StephenB
    Guru - Experienced User

    Ozxpat wrote:

     

    I've got an RN516 with a six disk xraid volume that reports no space left, despite having 1.6TB free. It is 6 x 12TB for about 55TB usable.

     

    Not sure what to try next and would greatly appreciate advice. Currently I'm running a scrub to see if that helps. I looked closely at this thread and indeed stole the title from it. But my situation seems different because it's my nas volume that reports no space left and the metadata seems full for /nas.


    FWIW, BTRFS doesn't handle free space exhaustion very well, so I always recommend keeping at least 15% free space (even for large volumes like yours).

     

    In terms of next steps, I would begin by offloading some data, and if you have snapshots then delete some or all of them.  That will give you some more free space.  
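
    For the snapshot part, a minimal dry-run sketch of the commands involved (the snapshot path is a made-up example; real names come from the listing, and on a ReadyNAS the GUI can also delete snapshots per share):

```shell
#!/bin/sh
# "btrfs subvolume list -s" lists only snapshots; delete the ones you can
# spare to release their space. Commands are echoed as a dry run -- run
# them manually on the NAS with real snapshot names.
snapshot_cleanup_cmds() {
    echo "btrfs subvolume list -s /nas"
    echo "btrfs subvolume delete /nas/Data/snapshot/example-snap"
}
snapshot_cleanup_cmds
```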

     

    Then you can run multiple balances, starting with the 0 threshold you used before.  This should consolidate the free space you created by offloading:

    btrfs balance start -m -v -dusage=0 -musage=0 /nas

    then up the percentage to 10, 20, etc.
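
    The stepped approach can be sketched as a loop (dry run, not from the thread; it mirrors the command form above and raises the usage filter so each pass only relocates chunks emptier than the threshold, returning whole chunks to the free-space pool):

```shell
#!/bin/sh
# Build the incremental balance commands at rising usage thresholds.
# Printed rather than executed -- run them one at a time on the NAS,
# checking "btrfs filesystem df /nas" between passes.
balance_cmd() {
    echo "btrfs balance start -m -v -dusage=$1 -musage=$1 /nas"
}
for pct in 0 10 20 30 40 50; do
    balance_cmd "$pct"
done
```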

     

    Note the first time you ran with this threshold, it didn't really do anything (as it didn't relocate any blocks).


    • Ozxpat
      Aspirant

      Thanks for such a quick reply. I am copying off about 10TB; I'll then delete it, check the metadata usage, and run the incrementally larger balance commands. It will take a while, but hopefully it works đŸ€ž

    • Ozxpat
      Aspirant

      No luck. I copied off my 10TB and am ready to delete, but after a failed balance the volume is now stuck in read-only mode. Even after a reboot it goes read-only as soon as I try to delete the first additional file. This started after I tried thresholds of 0, 10, 20, and 30 (0 chunks each); 40 finally found 3 chunks, but that balance failed. Before that I was able to delete files OK (notice I have an extra 1TB free now, and a couple of gigs more metadata).

      Seems like I have a brief window of rw after a reboot, and that ro is triggered by the first file delete.

       

      root@ReadyNas:~# btrfs filesystem df /nas
      Data, single: total=54.07TiB, used=51.73TiB
      System, DUP: total=8.00MiB, used=6.09MiB
      Metadata, RAID1: total=449.88MiB, used=447.72MiB
      Metadata, DUP: total=244.50GiB, used=242.56GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B
      
      root@ReadyNas:~# mount | grep /nas
      /dev/md127 on /nas type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=5,subvol=/)
      /dev/md127 on /run/nfs4/nas/Data type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=263,subvol=/Data)

      the logs:

      root@ReadyNas:~# dmesg | tail -n 50
      [ 298.049409] BTRFS: error (device md126) in __btrfs_free_extent:7004: errno=-28 No space left
      [ 298.049411] BTRFS info (device md126): forced readonly
      [ 298.049412] BTRFS: error (device md126) in btrfs_run_delayed_refs:2995: errno=-28 No space left
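
      A quick way to confirm the forced-readonly state is to read the mount options from /proc/mounts; a sketch (the "/" mount point is a stand-in so it runs anywhere -- on the NAS, check /nas):

```shell
#!/bin/sh
# Report whether a mount point is currently mounted read-only, based on
# its options field in /proc/mounts. MNT is an example; use /nas on the NAS.
MNT="/"
opts=$(awk -v m="$MNT" '$2 == m {print $4}' /proc/mounts | head -n 1)
case ",$opts," in
    *,ro,*) echo "$MNT is mounted read-only" ;;
    *)      echo "$MNT is mounted read-write" ;;
esac
```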

      • StephenB
        Guru - Experienced User

        Ozxpat wrote:


        Seems like I have a brief window of rw after a reboot, and that ro is triggered by the first file delete.

         


        Try rebooting to regain rw, and then see if you can truncate the files instead of deleting them.  
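
        The idea is that zeroing a file's length releases its data extents in a smaller transaction than a full unlink, which may squeeze through where a plain delete flips the volume read-only. A self-contained sketch (the file here is created just for illustration; on the NAS you would truncate existing large files):

```shell
#!/bin/sh
# Truncate-before-delete: free the data extents first, then remove the
# now-empty inode. TARGET is a throwaway example file.
TARGET="./example-large-file.bin"
dd if=/dev/zero of="$TARGET" bs=1024 count=4 2>/dev/null  # stand-in for a big file
truncate -s 0 "$TARGET"   # release the data extents
rm "$TARGET"              # then remove the empty inode
```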
