Forum Discussion
Ozxpat
Apr 20, 2025 · Aspirant
Balancing does not help with "no space left" despite capacity available.
Hi all, I've got an RN516 with a six-disk X-RAID volume that reports no space left, despite having 1.6TB free. It is 6 x 12TB, for about 55TB usable. Problem: root@ReadyNas:/nas/Data# touch test...
Ozxpat
Apr 22, 2025 · Aspirant
StephenB wrote: I guess you could try expanding the volume by upgrading two disks to a larger size.
A much higher risk path would be to use mdadm to convert your RAID arrays to RAID-0 and manually expand the file system to use the extra space. Then do the deletions and balance, and when that finishes shrink the file system and reverse the conversion to get you back to RAID-5. Not something I've ever tried myself - Sandshark has some experimentation in this area, so I am tagging him.
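For anyone following this later, here is the rough shape of that idea. It's untested, it assumes the single data array is /dev/md127 and the volume is mounted at /data (not the layout on every unit), and the exact reshape steps vary with the mdadm/kernel version, so treat it as a sketch only and only with a full backup:

mdadm --detail /dev/md127                  # confirm which md array holds the data volume and its current level
mdadm --grow /dev/md127 --level=0          # convert RAID-5 to RAID-0; all redundancy is gone from this point on
mdadm --grow /dev/md127 --raid-devices=6   # if the old parity disk was left as a spare, fold it back into the stripe
btrfs filesystem resize max /data          # let the file system claim the extra capacity
# ...do the deletions and balance here, then shrink the file system back below
# the RAID-5 capacity and reverse the level change the same way.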
I now have more free space in both data and metadata than when I started, but I'm unable to delete even a single file. I assumed it would get more and more stable in rw as I freed space. Curious whether this is consistent with other situations you've observed (I'd never tried configuring btrfs before this).
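In case it matters, this is how I've been watching it (mount point /data assumed, and the usage filters are just there to keep each balance pass small):

btrfs filesystem usage /data                     # unallocated space plus what is allocated to data/metadata/system
btrfs filesystem df /data                        # used vs. total inside each chunk type
btrfs balance start -dusage=5 -musage=5 /data    # repack only chunks that are at most 5% used, to free whole chunks cheaply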
Sandshark
Apr 22, 2025 · Sensei
I have never converted to RAID-0 and back to RAID-5, but I have no reason to believe the method I used for other conversions won't work since it's just standard mdadm. But I also have no idea if doing so would help in this situation. I don't know if a ro volume will expand to any added space.
But more concerning is whether the ro nature is still due to the volume being too full, or whether the too-full state led to some other corruption. If it's the latter, making more space won't fix it. Moreover, I've never had any success making a ro volume caused by BTRFS errors go back to rw and stay that way.

I've tried it twice: first when I had a real need, because my EDA500 volume became corrupt when the eSATA cable came loose, and once when I was doing some experiments and something went wrong. In the first case, I was in a hurry to get the volume back up and had a backup of everything that wasn't a backup itself, so I didn't spend a lot of time (and didn't know about btrfs check). Since the second instance was during experiments on a "sandbox" NAS, I had time (and more experience) to try more, and still never succeeded. But I don't know exactly what caused the second incident.
I'm wondering if booting to tech support mode and then running btrfs check against the unmounted volume might help, or at least point the way to a solution. You can run btrfs check with the --force option from SSH while the volume is mounted, but you can't fix anything that way.
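Something along these lines (the device name is an example; check cat /proc/mdstat for the real data array, and in tech support mode you may need to assemble the arrays first):

# from normal SSH, with the volume still mounted: report-only, nothing gets changed
btrfs check --force /dev/md127

# from tech support mode, with the volume not mounted:
mdadm --assemble --scan             # bring up the md arrays
btrfs check /dev/md127              # read-only check first
btrfs check --repair /dev/md127     # last resort only, and only with a backup you trust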