Forum Discussion
Platypus69
Dec 24, 2020 · Luminary
Should I ever run a Defrag operation if I only ever insert/append to the NAS (with no Snapshots enabled)?
I'm back... :) Just making sure my "theory" is correct, as I am not sure of the nuances of BTRFS. Do I ever need to run a Defrag operation for the use case below? All I had scheduled wa...
StephenB
Dec 24, 2020 · Guru - Experienced User
rn_enthusiast wrote:
If your NAS runs fine and you have no performance issues, then don't defrag. That is my opinion. It is worth keeping in mind that defragging can increase data usage on the volume, because snapshot reflinks might have to be broken as part of the defrag process.
Personally I do defrag my volumes every three months via the volume maintenance schedule, and haven't had any issues with space usage doing that.
But overall, I agree that it's fine not to run them (and disable the auto defrag option for the share), and only do defrags when you run into performance issues due to fragmented files.
Defragmenting a file that is in a snapshot (or otherwise reflinked) can increase the on-disk usage. But this only happens if the file was modified in place on the NAS. Personally I rarely do that (though re-tagging media files is one exception).
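If you want to see that effect for yourself over SSH, here is a rough sketch. The share name "media" and the /data mount point are only examples of a typical ReadyNAS OS 6 layout, and "btrfs filesystem du" needs a reasonably recent btrfs-progs build, so adjust for your own system:

# total vs. exclusive usage for one share, before defragmenting
btrfs filesystem du -s /data/media
# defragment just that share (recursive)
btrfs filesystem defragment -r /data/media
# same check again - extents that were shared with snapshots and got rewritten now count twice
btrfs filesystem du -s /data/media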
Platypus69 wrote:
... as I learnt this week I should also run Balance operations periodically to optimize meta-data / allocation.
Just to expand on this a bit. BTRFS allocates space in chunks - and the usual chunk size is 1 Gigabyte. It's not that good at keeping the chunks full, so you often end up with quite a few partially filled chunks. This results in free space that the file system can't actually use. Even in your particular usage, you will end up with these partially filled chunks.
One thing that the balance does is that it consolidates these partially filled chunks - which creates fully empty ones that the file system can use.
Also, defragging a file can increase the number of partially filled chunks, since it is basically rewriting the file into unallocated space - leaving some of the original chunks partially filled.
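To put rough commands to that (a sketch only - /data is the usual ReadyNAS OS 6 mount point, and the 50% threshold is just an example, not a NETGEAR default):

# show how much space is allocated to chunks vs. still unallocated
btrfs filesystem usage /data
# repack only data chunks that are less than half full, returning the emptied chunks to unallocated space
btrfs balance start -dusage=50 /data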
rn_enthusiast
Dec 24, 2020 · Virtuoso
Back in the day, in support we did come across some extreme cases where people actually ran out of storage mid- or post-defrag. So it is certainly something that can happen, depending on the snapshot/reflink situation - which is not easy to see or control from a user level.
For me, it is the same as with balance. I run it only if I need to but then again, I would know if/when I need to. So it is fine that Netgear lets people run these tasks on a schedule to make sure they get done - I'm not against that :)
- StephenB · Dec 24, 2020 · Guru - Experienced User
rn_enthusiast wrote:
For me, it is the same as with balance. I run it only if I need to but then again, I would know if/when I need to.
Part of the puzzle here is that with BTRFS there is a big difference between free space and unallocated space. BTRFS needs enough unallocated space in order to work correctly. If it runs totally out of unallocated space, a balance might not be able to fix it.
Unfortunately, the admin UI doesn't report the amount of unallocated space - so you either need to monitor it with SSH commands, or you need to use free space as a rough proxy for unallocated space. And that is rough - what started this discussion was a volume that had ~700 GiB of free space, but no unallocated space.
I think for almost all users here, the best approach is to be generous on the amount of free space you have, and run periodic balances to make sure the system maintains enough unallocated space. Most posters don't have the linux skills and BTRFS knowledge needed to assess the health of the file system with ssh (and resolve any issues), and many of the rest simply aren't going to monitor the file system that closely (until something goes wrong).
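For anyone comfortable with SSH, here is a minimal sketch of that kind of check. The 10 GiB floor, the /data mount point, and the exact text that btrfs filesystem usage prints are assumptions - verify them against your own system before relying on it:

#!/bin/sh
# warn when the data volume is running low on unallocated space
FLOOR=$((10 * 1024 * 1024 * 1024))   # 10 GiB - pick your own comfort level
UNALLOC=$(btrfs filesystem usage -b /data | awk '/Device unallocated:/ {print $3; exit}')
if [ "$UNALLOC" -lt "$FLOOR" ]; then
    echo "WARNING: only $UNALLOC bytes unallocated on /data - consider running a balance"
fi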
- rn_enthusiast · Dec 24, 2020 · Virtuoso
StephenB wrote:
Part of the puzzle here is that with BTRFS there is a big difference between free space and unallocated space. ... I think for almost all users here, the best approach is to be generous on the amount of free space you have, and run periodic balances to make sure the system maintains enough unallocated space.
I totally agree with everything you said. It is a good thing that the NAS allows you to "maintain" the filesystem without having to acquire deeper knowledge of it. I have a little self-made script that pulls filesystem stats every couple of weeks and dumps them into a file. Admittedly, I probably only look at the file every 3 months because mostly I just forget about it :) But then again, if an issue does arise I am likely able to fix it anyway. For others, a more "proactive" approach does seem reasonable, indeed.
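Something in that spirit can be as small as a cron entry plus a two-line script. The schedule, script path, and log location below are placeholders, not rn_enthusiast's actual setup:

# /etc/cron.d/btrfs-stats - run on the 1st and 15th at 03:00 (example schedule)
0 3 1,15 * * root /usr/local/bin/btrfs-stats.sh

# /usr/local/bin/btrfs-stats.sh
#!/bin/sh
{
    date
    btrfs filesystem usage /data
    btrfs filesystem df /data
} >> /data/your-share/btrfs-stats.log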
- Sandshark · Dec 24, 2020 · Sensei - Experienced User
rn_enthusiast wrote:
I have a little self-made script that pulls filesystem stats every couple of weeks and dumps them into a file. ...
I do something similar every day so it's fresh when I look at it. It over-writes the old file, and it takes insignificant CPU time, so it does no harm to run more often. On a system with drive spin-down enabled it would wake the drives up, so that is a consideration; scheduling it in the same time frame as backup jobs would ensure the drives are already awake. My backup systems, which only power on for backups, even rsync the report to my main NAS so I don't have to power them on to check their status.
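A minimal sketch of that daily report-and-push, with the hostname, paths, and filenames all placeholders:

#!/bin/sh
# write a fresh report each day, overwriting yesterday's
REPORT=/data/your-share/fs-report.txt
{
    date
    btrfs filesystem usage /data
} > "$REPORT"
# copy the report to the main NAS so its status can be read while this unit is powered off
rsync -t "$REPORT" admin@main-nas:/data/reports/backup-nas-report.txt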