Forum Discussion
Platypus69
Feb 19, 2021 · Luminary
Cannot copy files to RN316 although I have 22TB free...
Hi all. I have the following RN316: Firmware 6.10.4, running 6 x 10TB IronWolf HDDs, X-RAID, 21.9TB free / 23.4TB used. History: Last year I replaced all the 8TB IronWolf HDDs (from memory) one by...
StephenB
Feb 21, 2021 · Guru - Experienced User
Sandshark wrote:
I've seen some posts on general Linux forums recommending a balance after a BTRFS expansion, and it does not appear Netgear does that automatically. Apparently, that helps properly allocate data and metadata across the volume. The MAN page says "The primary purpose of the balance feature is to spread block groups across all devices so they match constraints defined by the respective profiles". I've not found a good list of those constraints, but you may have arrived at one of them.
I'm wondering that also. Looking at the first post, I see this.
Label: 'data' uuid: ...
Total devices 2 FS bytes used 23.43TiB
devid 1 size 18.17TiB used 18.17TiB path /dev/md127
devid 2 size 27.28TiB used 5.29TiB path /dev/md126
Note all the unallocated space is on md126. md127 is completely full.
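For reference, the per-RAID-group allocation can be checked directly over SSH; a minimal sketch, assuming root SSH access is enabled and /data is the data volume mount point (output will of course differ per system):

btrfs filesystem show /data    # per-device "size" vs "used" (i.e. allocated) for each RAID group
btrfs device usage /data       # allocated vs unallocated space, broken down per member device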
Platypus69
Mar 05, 2021 · Luminary
Hi all....
So I have not been able to resolve the issue yet, I believe.
BTW is 6.10.4 (Hotfix 1) buggy??? Should I be downgrading to 6.10.3?
I've been offline as the RN316 was horrendously unavailable during my scrub, and even after it failed (or completed) after 7 days it still feels sluggish to me...
Anyway... I have moved about 500GB off the RN316. But is that enough? I have moved most of the files I copied across this year, plus some older stuff from late last year, but probably from AFTER I replaced all the 4TB HDDs with 10TB HDDs, so I am again wondering if everything is going to the old md127 and not the new md126???
So right now here are my stats / logs / telemetry:
BTRFS.LOG
Label: 'blah:root' uuid: blah-blah
Total devices 1 FS bytes used 1.43GiB
devid 1 size 4.00GiB used 3.61GiB path /dev/md0
Label: 'blah:data' uuid: blah-blah
Total devices 2 FS bytes used 22.95TiB
devid 1 size 18.17TiB used 18.14TiB path /dev/md127
devid 2 size 27.28TiB used 4.84TiB path /dev/md126
=== filesystem /data ===
Data, single: total=22.95TiB, used=22.93TiB
System, RAID1: total=32.00MiB, used=2.95MiB
Metadata, RAID1: total=5.85GiB, used=5.32GiB
Metadata, DUP: total=10.50GiB, used=10.03GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
=== subvolume /data ===
Why is Data, single: showing 22.95TiB, which suspiciously seems to be the limit of the amount of data I can store? Recall the UI is showing data 22.96TB and Free space: 22.49TB. Is this the "smoking gun"???
VOLUME.LOG
data disk test 2020-09-01 01:00:01 2020-09-01 15:20:27 pass
data resilver 2020-09-13 15:54:14 2020-09-14 20:35:46 completed
data balance 2021-02-18 21:17:16 2021-02-18 21:18:42 completed ERROR: error during balancing '/data': No space left on device T
data scrub 2021-02-18 21:29:29
data disk test 2021-03-01 08:15:53
data balance 2021-03-01 21:03:16 2021-03-01 21:04:48 completed ERROR: error during balancing '/data': No space left on device T
data balance 2021-03-03 15:34:37 2021-03-04 03:44:36 completed ERROR: error during balancing '/data': No space left on device T
data balance 2021-03-05 09:34:32 2021-03-05 10:29:27 completed ERROR: error during balancing '/data': No space left on device T
data balance 2021-03-05 19:39:44 2021-03-05 19:49:07 completed ERROR: error during balancing '/data': No space left on device T
data balance 2021-03-05 21:09:45 2021-03-05 21:27:23 completed ERROR: error during balancing '/data': No space left on device T
data balance 2021-03-05 21:28:15 2021-03-05 21:28:19 completed Done, had to relocate 1 out of 23557 chunks
data balance 2021-03-05 21:45:20 2021-03-05 21:46:05 completed Done, had to relocate 29 out of 23557 chunks
data balance 2021-03-05 21:57:26 2021-03-05 21:57:31 completed Done, had to relocate 1 out of 23529 chunks
data balance 2021-03-05 21:59:22 2021-03-05 21:59:27 completed Done, had to relocate 1 out of 23529 chunks
data balance 2021-03-05 21:59:48 2021-03-05 21:59:53 completed Done, had to relocate 1 out of 23529 chunks
data balance 2021-03-05 22:25:13 2021-03-05 22:25:18 completed Done, had to relocate 1 out of 23529 chunks
Why does it keep relocating only 1 out of 23529 chunks? The chunk count does not go down. I have no idea. Do I keep doing balances?
Should I Defrag now?
I also have SMB Plus installed and have enabled Preallocate (FYI: Preallocate disk space before writing data. This can slow down write speed slightly, but should result in the file being nicely laid out on the disk, with minimal fragmentation.)
I have removed a lot of snapshots but would like to keep the ones that I have set up for OneDrive and DropBox apps. Both report 19 snapshots with 2 years protection.
I am happy to turn off the snapshots, i.e. set them to manual. But I can drop all snapshots if people think that's good. Just being a bit nervous...
I'm pulling my hair out... What do I do?
Has the problem been solved? Can you tell? Or should I try to find much older data and remove another 500GB or 1TB of older data before I try balancing again?
Any help appreciated!
For what it's worth:
KERNEL.LOG
Mar 05 22:11:19 RN316 systemd[1]: Set hostname to <RN316>.
Mar 05 22:11:19 RN316 systemd[1]: systemd-journald-audit.socket: Cannot add dependency job, ignoring: Unit systemd-journald-audit.socket is masked.
Mar 05 22:11:19 RN316 systemd[1]: systemd-journald-audit.socket: Cannot add dependency job, ignoring: Unit systemd-journald-audit.socket is masked.
Mar 05 22:11:19 RN316 systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Mar 05 22:11:19 RN316 systemd[1]: Listening on Journal Socket (/dev/log).
Mar 05 22:11:19 RN316 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Mar 05 22:11:19 RN316 systemd[1]: Created slice System Slice.
Mar 05 22:11:19 RN316 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 05 22:11:19 RN316 systemd[1]: Created slice system-getty.slice.
Mar 05 22:11:19 RN316 systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
Mar 05 22:11:19 RN316 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Mar 05 22:11:19 RN316 systemd[1]: Reached target Encrypted Volumes.
Mar 05 22:11:19 RN316 systemd[1]: Listening on udev Control Socket.
Mar 05 22:11:19 RN316 systemd[1]: Reached target Paths.
Mar 05 22:11:19 RN316 systemd[1]: Reached target Remote File Systems (Pre).
Mar 05 22:11:19 RN316 systemd[1]: Reached target Remote File Systems.
Mar 05 22:11:19 RN316 systemd[1]: Listening on udev Kernel Socket.
Mar 05 22:11:19 RN316 systemd[1]: Listening on Journal Socket.
Mar 05 22:11:19 RN316 systemd[1]: Starting Remount Root and Kernel File Systems...
Mar 05 22:11:19 RN316 systemd[1]: Mounting POSIX Message Queue File System...
Mar 05 22:11:19 RN316 systemd[1]: Starting Create Static Device Nodes in /dev...
Mar 05 22:11:19 RN316 systemd[1]: Mounting Debug File System...
Mar 05 22:11:19 RN316 systemd[1]: Created slice User and Session Slice.
Mar 05 22:11:19 RN316 systemd[1]: Reached target Slices.
Mar 05 22:11:19 RN316 systemd[1]: Listening on Syslog Socket.
Mar 05 22:11:19 RN316 systemd[1]: Starting Journal Service...
Mar 05 22:11:19 RN316 systemd[1]: Starting Load Kernel Modules...
Mar 05 22:11:19 RN316 systemd[1]: Started ReadyNAS LCD splasher.
Mar 05 22:11:19 RN316 systemd[1]: Starting ReadyNASOS system prep...
Mar 05 22:11:19 RN316 systemd[1]: Mounted POSIX Message Queue File System.
Mar 05 22:11:19 RN316 systemd[1]: Mounted Debug File System.
Mar 05 22:11:19 RN316 systemd[1]: Started Remount Root and Kernel File Systems.
Mar 05 22:11:19 RN316 systemd[1]: Started Create Static Device Nodes in /dev.
Mar 05 22:11:19 RN316 systemd[1]: Started Load Kernel Modules.
Mar 05 22:11:19 RN316 systemd[1]: Starting Apply Kernel Variables...
Mar 05 22:11:19 RN316 systemd[1]: Mounting FUSE Control File System...
Mar 05 22:11:19 RN316 systemd[1]: Mounting Configuration File System...
Mar 05 22:11:19 RN316 systemd[1]: Starting udev Kernel Device Manager...
Mar 05 22:11:19 RN316 systemd[1]: Starting Load/Save Random Seed...
Mar 05 22:11:19 RN316 systemd[1]: Starting Rebuild Hardware Database...
Mar 05 22:11:19 RN316 systemd[1]: Mounted Configuration File System.
Mar 05 22:11:19 RN316 systemd[1]: Mounted FUSE Control File System.
Mar 05 22:11:19 RN316 systemd[1]: Started Apply Kernel Variables.
Mar 05 22:11:19 RN316 systemd[1]: Started ReadyNASOS system prep.
Mar 05 22:11:19 RN316 systemd[1]: Started Load/Save Random Seed.
Mar 05 22:11:19 RN316 systemd[1]: Started udev Kernel Device Manager.
Mar 05 22:11:19 RN316 systemd[1]: Started Journal Service.
Mar 05 22:11:19 RN316 kernel: md: md127 stopped.
Mar 05 22:11:19 RN316 kernel: md: bind<sdb3>
Mar 05 22:11:19 RN316 kernel: md: bind<sdc3>
Mar 05 22:11:19 RN316 kernel: md: bind<sdd3>
Mar 05 22:11:19 RN316 kernel: md: bind<sde3>
Mar 05 22:11:19 RN316 kernel: md: bind<sdf3>
Mar 05 22:11:19 RN316 kernel: md: bind<sda3>
Mar 05 22:11:19 RN316 kernel: md/raid:md127: device sda3 operational as raid disk 0
Mar 05 22:11:19 RN316 kernel: md/raid:md127: device sdf3 operational as raid disk 5
Mar 05 22:11:19 RN316 kernel: md/raid:md127: device sde3 operational as raid disk 4
Mar 05 22:11:19 RN316 kernel: md/raid:md127: device sdd3 operational as raid disk 3
Mar 05 22:11:19 RN316 kernel: md/raid:md127: device sdc3 operational as raid disk 2
Mar 05 22:11:19 RN316 kernel: md/raid:md127: device sdb3 operational as raid disk 1
Mar 05 22:11:19 RN316 kernel: md/raid:md127: allocated 6474kB
Mar 05 22:11:19 RN316 kernel: md/raid:md127: raid level 5 active with 6 out of 6 devices, algorithm 2
Mar 05 22:11:19 RN316 kernel: RAID conf printout:
Mar 05 22:11:19 RN316 kernel: --- level:5 rd:6 wd:6
Mar 05 22:11:19 RN316 kernel: disk 0, o:1, dev:sda3
Mar 05 22:11:19 RN316 kernel: disk 1, o:1, dev:sdb3
Mar 05 22:11:19 RN316 kernel: disk 2, o:1, dev:sdc3
Mar 05 22:11:19 RN316 kernel: disk 3, o:1, dev:sdd3
Mar 05 22:11:19 RN316 kernel: disk 4, o:1, dev:sde3
Mar 05 22:11:19 RN316 kernel: disk 5, o:1, dev:sdf3
Mar 05 22:11:19 RN316 kernel: created bitmap (30 pages) for device md127
Mar 05 22:11:19 RN316 kernel: md127: bitmap initialized from disk: read 2 pages, set 0 of 59543 bits
Mar 05 22:11:19 RN316 kernel: md127: detected capacity change from 0 to 19979093934080
Mar 05 22:11:19 RN316 kernel: Adding 1566716k swap on /dev/md1. Priority:-1 extents:1 across:1566716k
Mar 05 22:11:20 RN316 kernel: BTRFS: device label 43f5fa04:data devid 1 transid 1895561 /dev/md127
Mar 05 22:11:20 RN316 kernel: md: md126 stopped.
Mar 05 22:11:20 RN316 kernel: md: bind<sdb4>
Mar 05 22:11:20 RN316 kernel: md: bind<sdc4>
Mar 05 22:11:20 RN316 kernel: md: bind<sdd4>
Mar 05 22:11:20 RN316 kernel: md: bind<sde4>
Mar 05 22:11:20 RN316 kernel: md: bind<sdf4>
Mar 05 22:11:20 RN316 kernel: md: bind<sda4>
Mar 05 22:11:20 RN316 kernel: md/raid:md126: device sda4 operational as raid disk 0
Mar 05 22:11:20 RN316 kernel: md/raid:md126: device sdf4 operational as raid disk 5
Mar 05 22:11:20 RN316 kernel: md/raid:md126: device sde4 operational as raid disk 4
Mar 05 22:11:20 RN316 kernel: md/raid:md126: device sdd4 operational as raid disk 3
Mar 05 22:11:20 RN316 kernel: md/raid:md126: device sdc4 operational as raid disk 2
Mar 05 22:11:20 RN316 kernel: md/raid:md126: device sdb4 operational as raid disk 1
Mar 05 22:11:20 RN316 kernel: md/raid:md126: allocated 6474kB
Mar 05 22:11:20 RN316 kernel: md/raid:md126: raid level 5 active with 6 out of 6 devices, algorithm 2
Mar 05 22:11:20 RN316 kernel: RAID conf printout:
Mar 05 22:11:20 RN316 kernel: --- level:5 rd:6 wd:6
Mar 05 22:11:20 RN316 kernel: disk 0, o:1, dev:sda4
Mar 05 22:11:20 RN316 kernel: disk 1, o:1, dev:sdb4
Mar 05 22:11:20 RN316 kernel: disk 2, o:1, dev:sdc4
Mar 05 22:11:20 RN316 kernel: disk 3, o:1, dev:sdd4
Mar 05 22:11:20 RN316 kernel: disk 4, o:1, dev:sde4
Mar 05 22:11:20 RN316 kernel: disk 5, o:1, dev:sdf4
Mar 05 22:11:20 RN316 kernel: md126: detected capacity change from 0 to 29999560785920
Mar 05 22:11:20 RN316 kernel: BTRFS: device label 43f5fa04:data devid 2 transid 1895561 /dev/md126
Mar 05 22:13:08 RN316 kernel: e1000e: eth1 NIC Link is Down
Mar 05 22:13:09 RN316 kernel: IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
Mar 05 22:13:09 RN316 kernel: 8021q: adding VLAN 0 to HW filter on device eth1
Mar 05 22:13:12 RN316 kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Mar 05 22:13:12 RN316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Mar 05 22:13:15 RN316 kernel: Adjusting tsc more than 11% (6835455 vs 8751273)
Mar 05 22:16:07 RN316 kernel: nr_pdflush_threads exported in /proc is scheduled for removal
Mar 05 22:25:11 RN316 kernel: BTRFS info (device md126): relocating block group 36377627721728 flags system|raid1
Not sure if these snapperd errors are relevant:
SYSTEM.LOG
Mar 05 22:21:54 RN316 dbus[2986]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Mar 05 22:21:54 RN316 dbus[2986]: [system] Successfully activated service 'org.opensuse.Snapper'
Mar 05 22:21:54 RN316 snapperd[6838]: loading 13409 failed
Mar 05 22:21:54 RN316 snapperd[6838]: loading 19029 failed
Mar 05 22:21:54 RN316 snapperd[6838]: loading 19504 failed
Mar 05 22:21:54 RN316 snapperd[6838]: loading 19543 failed
Mar 05 22:21:54 RN316 snapperd[6838]: loading 19557 failed
Mar 05 22:21:54 RN316 snapperd[6838]: loading 19608 failed
Mar 05 22:21:54 RN316 snapperd[6838]: loading 19614 failed
...
Mar 05 22:25:25 RN316 clamd[4134]: SelfCheck: Database status OK.
Mar 05 22:25:32 RN316 snapperd[6838]: loading 13409 failed
Mar 05 22:25:32 RN316 snapperd[6838]: loading 19029 failed
Mar 05 22:25:32 RN316 snapperd[6838]: loading 19504 failed
...
Mar 05 22:25:32 RN316 snapperd[6838]: loading 12924 failed
Mar 05 22:25:32 RN316 snapperd[6838]: loading 12925 failed
Mar 05 22:25:32 RN316 snapperd[6838]: loading 1036 failed
- StephenB · Mar 05, 2021 · Guru - Experienced User
Platypus69 wrote:
BTRFS.LOG
Label: 'blah:root' uuid: blah-blah
Total devices 1 FS bytes used 1.43GiB
devid 1 size 4.00GiB used 3.61GiB path /dev/md0
Label: 'blah:data' uuid: blah-blah
Total devices 2 FS bytes used 22.95TiB
devid 1 size 18.17TiB used 18.14TiB path /dev/md127
devid 2 size 27.28TiB used 4.84TiB path /dev/md126
=== filesystem /data ===
Data, single: total=22.95TiB, used=22.93TiB
System, RAID1: total=32.00MiB, used=2.95MiB
Metadata, RAID1: total=5.85GiB, used=5.32GiB
Metadata, DUP: total=10.50GiB, used=10.03GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
=== subvolume /data ===
Why is Data, single: showing 22.95TiB, which suspiciously seems to be the limit of the amount of data I can store? Recall the UI is showing data 22.96TB and Free space: 22.49TB. Is this the "smoking gun"???
The "total=22.95TB" doesn't mean what you think it means. It is the total of the allocated space (and is essentially the same as the 22.96 TB you are seeing in the UI). The two sizes further up are the size of your storage - md127 has size 18.17, md126 has size 27.28. The total size is therefore 45.45TiB, which is the correct size for 6x10TB single-redundancy RAID.
The problem here is that for some reason your system has completely filled md127. You can see that by subtracting the "18.14 used" from the "18.17 size" for md127.
There is a brute-force solution - which is to do a factory default, set up the NAS again, and restore your data from backup. That would give you a single RAID group, and you'd have plenty of free space. In addition to being time-consuming, you would lose all your snapshots. Though painful, if it were my own system I'd do the reset and start over.
The other option I see is to delete all your existing snapshots, and see if that frees up space on md127. You'd wait for a while after deletion - then download the log zip again, and look at the "used" space for that RAID group. Hopefully it will drop substantially.
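A rough sketch of what that re-check could look like from an SSH session, assuming root access (note that the BTRFS cleaner can take a while to return space after snapshots are deleted, so it is worth re-running the check later):

btrfs subvolume list -s /data | wc -l    # how many snapshot subvolumes are still present
btrfs filesystem show /data              # watch the "used" figure for /dev/md127 drop as space is reclaimed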
- Platypus69 · Mar 05, 2021 · Luminary
Firstly, thanks a million as always.
Of course I don't know, but I would be surprised if snapshots are the root cause....
I only have set up snapshots for my OneDrive and Dropbox shares. Which represent a fraction of the photos and movies that are stored on the RN316.
Any other snapshots, which likewise were not large anyway, are now gone. I only use the free versions of these services, which are limited in size: Dropbox = 16GB, OneDrive I can't remember, but probably around 16GB as well. So I thought I would use the snapshot feature of the RN316 for these shares, since the free tiers of OneDrive and Dropbox do not have this functionality.
Do you really think it will make a difference if I remove these underlying snapshots? They are small, no? But perhaps they take up a lot of metadata? I don't know...
OneDrive share UI says:
- 7149 files, 98 folders, 13GB
- 20 snapshots (2year(s) protection)
DropBox share UI says:
- 15365 files, 571 folders, 13.9GB
- 19 snapshots (2year(s) protection)
I too have concluded/decided that I will at some point, as soon as I can, buy 8 x 16TB HDDs for my new DS1819+, and do as you suggest: move all the data off the RN316, reformat it and move it back. But I cannot afford the 8 x 16TB HDDs right now, in one hit.
So the frustrating thing is I have run out of space on all my ReadyNASes. I have this 20TB free but I cannot use it!!!! ArggghHh.... :)
So would you suggest an action plan of trying to remove 1TB of old data from md127, then doing a balance, then doing a defrag, then doing a balance, and then trying to copy the data back?
Of course I am very curious as to what the problem is and how to avoid it in the future. It sounds to me like a strategy of going from (6 x 4TB HDDs) to (6 x 10TB HDDs) to (6 x 16TB HDDs) in the future is not viable for these BTRFS-based RAID NASes.
Unless of course I should have been running monthly balances/defrags, which I never did. Netgear never recommended it. I had assumed (incorrectly, it seems) that you never needed to run these operations, as I predominantly only add my family photos and videos.
So I want to learn the lesson here, but am struggling to learn what I did wrong and how to avoid this in the future, other than your "brute force" technique.
So I was planning to fill out my new DS1819+
- Buy 1 x 16TB HDD in the first month (yes, I know there is no RAID)
- Add 1 x 16TB HDD every month after that, so as to stagger the HDDs' lifetimes, reducing the chance of them all failing simultaneously, and also staggering the cost
But given all the dramas I am having with BTRFS, I'm wondering whether this is a horrendous idea, and whether I would be better off buying 8 x 16TB HDDs and setting up one massive pool. So take the hit on the wallet! :(
Or can I get away with perhaps buying 4 x 16TB HDDs and setting up one pool this year, and then in 12-24 months buying another 4 x 16TB HDDs and setting up a second pool?
I am beginning to suspect that buying the 8 x 16TB HDDs in one hit is the best way to go... Ouch!
- StephenB · Mar 05, 2021 · Guru - Experienced User
Platypus69 wrote:
So would you suggest an action plan of trying to remove 1TB of old data from md127, then doing a balance, then doing a defrag, then doing a balance, and then trying to copy the data back?
Well, we can't see what is actually on md127 (as opposed to md126). But you could try copying off some older shares, and then delete them. After the space is reclaimed (hopefully from md127), you can try a balance (which should succeed if there's enough space on md127). A scrub might also reallocate some space. After that, you could recreate the shares and copy the data back.
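If it helps, the progress of those operations can also be watched from the command line; a minimal sketch, assuming SSH access (the copy-off and delete steps themselves would still happen over SMB or the Web UI):

btrfs balance status /data      # progress of a balance that is already running (e.g. one started from the UI)
btrfs scrub status /data        # the same for a scrub
btrfs filesystem show /data     # afterwards, check how the allocated ("used") space is now split between md127 and md126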
A defrag won't help - and it can reduce free space in the shares that have snapshots enabled.
Platypus69 wrote:
Unless of course I should have been running monthly balances/defrags, which I never did. Netgear never recommended it. I had assumed (incorrectly, it seems) that you never needed to run these operations, as I predominantly only add my family photos and videos.
So I want to learn the lesson here, but am struggling to learn what I did wrong and how to avoid this in the future
Netgear doesn't offer any guidance on volume maintenance. My current practice is to schedule each of the four tasks (scrub, disk test, balance, and defrag). I cycle through one each month, so over a year each runs 3 times. Defrag probably isn't necessary - but I have enough free space to avoid the downside, so I just run it anyway.
Opinions here differ on balance - mdgm for instance only runs it rarely (if at all). But I have seen posts here where it has reclaimed unallocated space. In general, if a balance isn't needed then it runs very quickly and I've never had any problems running them. So I continue to run them on this schedule.
I don't know how your system ended up this way. FWIW I also have multiple RAID groups on my NAS.
Label: '2fe72582:data' uuid: a665beff-2a06-4b88-b538-f9fa4fb2dfef
Total devices 2 FS bytes used 13.54TiB
devid 1 size 16.36TiB used 12.72TiB path /dev/md127
devid 2 size 10.91TiB used 1.27TiB path /dev/md126
Unallocated space isn't evenly split across the two RAID groups, but fortunately I do have reasonable space on the original md127 RAID group.
It seems to me that btrfs balance should handle this better - not sure if there are options that would spread the unallocated space more evenly. I'll try to research it if I can find the time.
- Platypus69 · Mar 06, 2021 · Luminary
Ha! Ha! Ha!
Clearly you are 1,000,000% correct in saying "The 'total=22.95TB' doesn't mean what you think it means." Yes, I understand that md127 + md126 should equal 45.45TiB, but everything else is confusing... Does the "Data" label in the UI denote the amount of storage space consumed, or is it just referring to the Data pool, or something else?
Why do I ask? Well...
So this is what I did yesterday:
- Moved some files off the new share I created in Feb 2021 and an older one (May 2020).
- Turned off the remaining 2 snapshots that I had for OneDrive and DropBox
- Deleted all the recent daily snapshots created for these two shares in 2021.
- I decided to keep the monthly snapshots for these shares. As discussed before, there are only about 2 x 19 of them, going back 2 years. (Now, I am not sure how the RN316 implements snapshots. If I have only ever added files and hardly ever modified or deleted them, does the snapshot make a copy of the 16GB worth of files, or does it just maintain metadata in the file system, and only when you modify or delete a file does it "move" it into the snapshot, if you know what I mean?)
- Before I went to sleep last night I kicked off a Defrag.
- Today I kicked off a Balance through the UI.
Operations performed yesterday and kicked off today
But right now 3.5 hours later here is what the UI is showing:
Why has Data dropped from 23.40TB to 13.26TB???
So obviously the confusion/concern is that Data dropped from 23.40TB to 13.26TB!
Is this being recalculated dynamically as the Balance performs its "dark magic"? Do I have to wait for it to finish to see where these values end up? Or is it accurate right now?
Is this expected? Obviously something has happened, and is happening. How exciting :(
My concern of course is that I have "lost data", as I am pretty sure I have NOT moved off 10TB - I don't have that much spare storage. :)
Recall that I:
- Turned off Smart Snapshots on about 5 shares which had very little data. I never had snapshots on my main shares used for Videos, Photos, Software and Backups, which probably account for 90% of my files. I only had a number of snapshots for the OneDrive and DropBox shares, but remember they are both limited to 16GB as I am only using the free tiers.
- Moved 500GB of the files that I copied across to the RN316 in February 2021, and some older files from May 2020 (so before I swapped out the last 10TB HDD).
So this clearly is well under 10TB. Not even 1TB.
So why the big decrease in size reported for Data?
Apologies if the answer is really simple and I am being dumb...
- StephenB · Mar 06, 2021 · Guru - Experienced User
Platypus69 wrote:
Yes, I understand that md127 + md126 should equal 45.45TiB, but everything else is confusing... Does the "Data" label in the UI denote the amount of storage space consumed, or is it just referring to the Data pool, or something else?
Going back to where you started, I'm going to slightly revise the original report, which hopefully will help provide clarity.
Label: 'data' uuid: ...
Total devices 2 FS bytes used 23.43TiB
devid 1 size 18.17TiB allocated 18.17TiB path /dev/md127
devid 2 size 27.28TiB allocated 5.29TiB path /dev/md126
=== filesystem /data ===
Data, single: allocated=23.43TiB, used=23.42TiB
System, RAID1: allocated=32.00MiB, used=2.99MiB
Metadata, RAID1: allocated=5.85GiB, used=5.84GiB
Metadata, DUP: allocated=10.50GiB, used=10.01GiB
GlobalReserve, single: allocated=512.00MiB, used=33.05MiB
=== subvolume /data ===
Looking at line 4, md126 has a size of 27.28 TiB, but only 5.29 TiB was allocated. Per line 3, md127 has a size of 18.17 TiB, but all of it was allocated. This totals 23.46 allocated.
Looking at line 6, the volume had 23.43 TiB allocated for data. Since we have 23.46 total allocation, that means .03 TiB (~30 GiB) was allocated to system and metadata. Of course there are rounding errors, since the reports aren't exact - so if you add up the system and metadata stuff on lines 7-10, it is a bit less than that.
Now, looking again at line 6, you had 23.43 TiB of allocated space, but 23.42 TiB of used space. That means you had .01 TiB of space that is allocated but not used. Although it is natural to label this "allocated but not used" bucket as "unused", personally I think it is more appropriate to label it "unusable". This unusable space is there because BTRFS allocates space in blocks. There will be some lost (unusable) space in many blocks.
I've been careful not to use the word "free", because that concept is a bit slippery with btrfs. The Web UI is labeling unallocated space as "free" - which is reasonable, but sometimes misleading. What you really have is allocated (used and "unusable") and unallocated.
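As a rough cross-check of the arithmetic above (a sketch only - the report rounds its figures, and the exact output format of btrfs filesystem usage varies between btrfs-progs versions):

# allocated   = 18.17 TiB (md127) + 5.29 TiB (md126) = 23.46 TiB
# of which    ~ 23.43 TiB is data block groups, leaving ~0.03 TiB for system + metadata
# unallocated = (18.17 - 18.17) + (27.28 - 5.29) = 21.99 TiB, all of it on md126
btrfs filesystem usage /data    # reports Device allocated / Device unallocated / Used / Free (estimated)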
As rn_enthusiast points out, the balance was failing because the file system
- is set up to duplicate metadata (so there is a copy on both md126 and md127)
- and there was no unallocated space at all on md127
Your deletions apparently did reclaim some unallocated space, and it looks like the balance is now doing its job. But what exactly does "doing its job" mean?
As Sandshark said (quoting the man page), "The primary purpose of the balance feature is to spread block groups across all devices..." There is a useful side effect though. A balance will also consolidate the allocated space, so there is less "unusable" space. So even if you only have one device (your original md127), it can be useful to run balances from time to time.
So when your balance completes, you should expect to see more unallocated space on md127, and more allocated space on md126. You should look at the unallocated space you end up with on both volumes when it's done.
But as rn_enthusiast says, "Running balance from GUI isn't really a full balance. The GUI will use parameters during the balance so it only balances parts of the volume." So what's that about? Well, mostly it's about how long the balance takes. A full balance (with no filter parameters) will take several days on your system. Lots of users complained about the run time, so Netgear added in some filters - which speed it up, but at the cost of not balancing completely. What these parameters do is focus the balance on chunks that have unusable space.
For instance, rn_enthusiast also suggested running the balance from the command line:
btrfs balance start -dusage=10 /data
The -d is a filter for data blocks (not metadata or system). The usage=10 tells the balance to only process blocks that have 10% (or less) used space - in other words, only process blocks that are 90% or more unusable space. That will run more quickly, and it will be easier for the system to consolidate the unusable space - converting it back to the unallocated space you needed. The system needs some working space in order for the consolidation to happen, and setting the dusage low reduces that space. FWIW, I'd have suggested starting with -dusage=0, as there often are some allocated blocks that end up completely empty, and the system can convert them back to unallocated without needing any working space.
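Put together, one cautious way to apply that advice is to step the usage filter up gradually and check the result between passes; a sketch, assuming root SSH access (the exact percentage steps are just an example):

btrfs balance start -dusage=0 /data     # reclaim completely empty data block groups first (needs no working space)
btrfs balance start -dusage=10 /data    # then block groups that are at most 10% used
btrfs balance start -dusage=25 /data    # and so on, stepping up as unallocated space grows
btrfs filesystem show /data             # after each pass, see how allocation is spread across md127/md126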
Platypus69 wrote:
So why the big decrease in size reported for Data?
Good question, and I'm not sure I have a fully satisfactory answer. But I believe the issue is that you had more unusable space than the system was reporting (that the used fraction in the reports is an estimate). Then once the balance was able to really get going, it found a lot more unusable space that it could shift to unallocated.
This could be related to the snapshots you deleted - the system perhaps wasn't able to reclaim the space at the time, but now that you have some unallocated space to work with, the system is getting that space back.
Platypus69 wrote:
- Before I went to sleep last night I kicked off a Defrag.
You got away with this, but it was a bad idea.
A defrag is basically rewriting a fragmented file, so it is unfragmented. Doing that requires unallocated space that you didn't have.
Even with older file systems like FAT32, defragging the files results in fragmented free space, and defragging the free space results in fragmented files. It's similar with BTRFS - defragging files will end up reducing the unallocated space.
Also, defragging a share with snapshots can sharply increase the amount of disk space used by the share. If you want to defrag regularly, you really do want to limit snapshot retention (as I suggested earlier).
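For completeness, defrag can also be run per share from the command line rather than volume-wide; a sketch, where "Photos" is only an illustrative share name. Defragmenting files that are referenced by snapshots breaks the shared extents, which is exactly why space usage can balloon:

btrfs filesystem defragment -r /data/Photos    # recursive defrag of a single share; avoid shares with long snapshot retention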