
ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

mdj_
Aspirant

ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

Our client has a ReadyNAS 314 with 3 x 3 TB drives in X-RAID, which gives them a 5.45 TB volume. From that volume we created a single 4.9 TB iSCSI LUN. There are no snapshots (and they are disabled). Somehow, with no data other than the 4.9 TB iSCSI LUN, there is only 64 KB free on the 5.45 TB volume, which is making the LUN unusable.
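As a quick sanity check on those numbers (assuming single-redundancy X-RAID across 3 x 3 TB disks, which leaves roughly two disks' worth of usable space), the 5.45 figure is simply 6 decimal TB expressed in binary TiB:

# Rough capacity check, assuming ~2 x 3 TB usable after single-redundancy X-RAID.
# Converts decimal terabytes to binary tebibytes (requires bc).
echo "scale=2; 2 * 3 * 10^12 / 2^40" | bc
# prints 5.45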

 

du output showing nothing more than the single iSCSI LUN.

 

 

root@nas1:/data# du -hc /data/
0 /data/home
0 /data/.apps/DO_NOT_DELETE
76K /data/.apps/.readydlna
0 /data/.apps/.forked-daapd
0 /data/.apps/.xdg/config/tracker
0 /data/.apps/.xdg/config
0 /data/.apps/.xdg
80K /data/.apps
0 /data/.vault
16K /data/._share/veeam
20K /data/._share
0 /data/.timemachine
5.0T /data/veeam/.iscsi
5.0T /data/veeam
0 /data/.purge
5.0T /data/
5.0T total
root@nas1:/data# du -hc /apps
0 /apps/DO_NOT_DELETE
76K /apps/.readydlna
0 /apps/.forked-daapd
0 /apps/.xdg/config/tracker
0 /apps/.xdg/config
0 /apps/.xdg
80K /apps
80K total
root@nas1:/data# du -hc /home
0 /home
0 total
df/btrfs showing that somehow 5.5 TB is being used...
root@nas1:/data# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             10M  4.0K   10M   1% /dev
/dev/md0        4.0G  553M  3.2G  15% /
tmpfs           992M     0  992M   0% /dev/shm
tmpfs           992M  3.6M  989M   1% /run
tmpfs           496M   16K  496M   1% /run/lock
tmpfs           992M     0  992M   0% /sys/fs/cgroup
/dev/md127      5.5T  5.5T   64K 100% /data
/dev/md127      5.5T  5.5T   64K 100% /home
/dev/md127      5.5T  5.5T   64K 100% /apps

root@nas1:/data# btrfs filesystem show
Label: '2fe703ce:root'  uuid: acff10fc-78dc-4cc6-8521-d31b2bb6f1be
        Total devices 1 FS bytes used 525.65MiB
        devid    1 size 4.00GiB used 1.12GiB path /dev/md0

Label: '2fe703ce:data'  uuid: bf034193-dc2d-41a9-95ff-8b5bfc3da759
        Total devices 1 FS bytes used 5.44TiB
        devid    1 size 5.45TiB used 5.45TiB path /dev/md127
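To see where that 5.45 TiB is actually going, and in particular how it splits between data and metadata chunks, btrfs has its own space reports. These are generic btrfs commands rather than anything ReadyNAS-specific, so treat the sketch below as a suggestion of what would be useful to run here:

# Break the volume down into data / metadata / system chunks.
# "total" is space allocated to chunks; "used" is what is actually written into them.
btrfs filesystem df /data

# On newer btrfs-progs, a more detailed view including unallocated device space:
btrfs filesystem usage /data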

 

A balance does not work

 

 

root@nas1:/data/.apps# btrfs balance start /data
WARNING:
Full balance without filters requested. This operation is very
intense and takes potentially very long. It is recommended to
use the balance filters to narrow down the scope of balance.
Use 'btrfs balance start --full-balance' option to skip this
warning. The operation will start in 10 seconds.
Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting balance without any filters.
ERROR: error during balancing '/data': No space left on device
There may be more info in syslog - try dmesg | tail
root@nas1:/data# dmesg | tail
[ 432.573621] BTRFS info (device md127): relocating block group 8836924571648 flags metadata|dup
[ 480.442848] BTRFS info (device md127): relocating block group 8610901917696 flags metadata|dup
[ 522.897445] BTRFS info (device md127): relocating block group 8140066127872 flags metadata|dup
[ 589.994902] BTRFS info (device md127): relocating block group 7703590076416 flags metadata|dup
[ 635.633801] BTRFS info (device md127): relocating block group 7471124971520 flags metadata|dup
[ 679.401111] BTRFS info (device md127): relocating block group 7161350455296 flags metadata|dup
[ 720.849443] BTRFS info (device md127): relocating block group 6864460840960 flags metadata|dup
[ 795.208829] BTRFS info (device md127): relocating block group 6497778008064 flags metadata|dup
[ 841.574578] BTRFS info (device md127): relocating block group 6006541123584 flags metadata|dup
[ 887.605289] BTRFS info (device md127): 5581 enospc errors during balance
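When a full balance dies with ENOSPC like this, a common generic btrfs workaround (not ReadyNAS-specific advice) is a filtered balance that only relocates nearly empty chunks, which needs far less free working space:

# Relocate only data and metadata chunks that are at most 5% full; if this
# succeeds and frees space, re-run with gradually higher usage values (10, 20, ...).
btrfs balance start -dusage=5 -musage=5 /data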

How the hell does a 4.9 TB iSCSI share take up 5.45 TB when there are no snapshots, and how can it be fixed?

 

Message 1 of 8
Marty_M
NETGEAR Employee Retired

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

Hello mdj_,

 

It does appear that the iSCSI LUN has used all of the capacity of the volume. What type of provisioning did you set when you created the LUN? For a guide to provisioning iSCSI LUNs, please go here.

 
Welcome to the community!
 
Regards,
Marty_M 
NETGEAR Community Team

Message 2 of 8
mdj_
Aspirant

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

It was a thick provisioned LUN. I managed to get the LUN to mount and copy our data off.

 

I have since re-created the entire X-RAID array and the iSCSI LUN (thick). The NAS showed ~557 GB free after the initial creation and initialization, and now, after copying our data back on, it's showing 551 GB free.

 

I'm scared that it will fill up the NAS again, and I'm at a loss as to how this is happening and how to explain it to our customer...
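One thing that might be worth checking (a generic sketch; the /data/veeam/.iscsi path is taken from the du listing above, and this won't show space btrfs may be pinning internally, but it rules out the simple cases) is whether the LUN backing file itself has grown past its nominal 4.9 TB:

# Logical (apparent) size of the LUN backing file vs. the space du accounts to it.
du -h --apparent-size /data/veeam/.iscsi
du -h /data/veeam/.iscsi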

Message 3 of 8
gn00347026
Guide

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

 
Model: RN51600|ReadyNAS 516 6-Bay
Message 4 of 8
gn00347026
Guide

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

I ran into a similar situation before, and the root cause was fragmentation.

The solution is to perform a firmware update and run a defrag on a regular basis; it would also be good to leave at least 20% free space.
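For reference, the shell-level equivalent of that scheduled defrag is btrfs' recursive defragment (a sketch only, using the /data/veeam path from the original du listing; on a ReadyNAS the GUI's scheduled defrag is the supported route). Be aware that on btrfs a defragment can temporarily increase space usage when extents are shared, because rewritten extents are no longer shared with their snapshots or reflinked copies:

# Recursively defragment the folder holding the LUN backing file; -v lists each file.
# Caution: shared extents (snapshots/reflinks) are unshared by defragment, which can
# increase, not decrease, reported usage.
btrfs filesystem defragment -r -v /data/veeam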

 


 

Message 5 of 8
mdj_
Aspirant

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

So I thought the defrag was the solution: I kicked off the defrag and free space went from ~470 GB to close to 500 GB. Unfortunately, as the defrag went on, the free capacity started dropping, and after it finished we're now at 440 GB free, so we've 'lost' an additional 30 GB.

 

Ran a balance afterwards and got 1 GB back.

 

We're now at 100 GB mysteriously missing, and I have a feeling that, given enough time, it'll drop down to 64 KB again and the iSCSI LUN will be unmountable.

Message 6 of 8
mdgm-ntgr
NETGEAR Employee Retired

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

Were you using bit-rot protection?

 

What was the metadata usage like on the data volume?
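For anyone checking the same things: if I understand the ReadyNAS OS 6 implementation correctly, bit-rot protection corresponds to copy-on-write on the share, so a rough way to verify it from the shell is to look for the 'C' (No_COW) attribute on the share folder; metadata usage shows up in the btrfs filesystem df output sketched earlier. The /data/veeam path is again the one from the original du listing:

# If bit-rot protection (CoW) is disabled for this share, lsattr shows a 'C' flag.
lsattr -d /data/veeam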

Message 7 of 8
gn00347026
Guide

Re: ReadyNAS 314 with 5.45TB volume, and a 4.9TB iSCSI share somehow full

Hi mdj

 

Here are the existing settings; so far, so good:

LUN: thick provisioning

Defrag + balance running every week
CoW and snapshots disabled, sync writes disabled

 

Hope it can help.
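For completeness, the shell-level way to disable CoW is the No_COW attribute; it only applies to files created after the flag is set, so it has to go on an empty folder before the LUN is (re)created. This is a generic btrfs sketch rather than the ReadyNAS-supported path (the GUI's bit-rot protection toggle is):

# Set No_COW on the (still empty) folder that will hold the LUN backing file;
# files created in it afterwards inherit the attribute.
chattr +C /data/veeam
lsattr -d /data/veeam   # should now show the 'C' flag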

 

 

Message 8 of 8