InterClaw
May 31, 2015 · Aspirant
NAS not starting any services #25220444
I had a hard time coming up with a good subject for this problem, since I don't really know what's wrong and how to describe it. Here's the story: Suddenly I noticed that CIFS was not behaving norm...
InterClaw
Jun 30, 2015 · Aspirant
Thanks for following up!
I've reduced my selection for backup to 4.6TB and I've also changed the retention for deleted files to 6 months, like you have now. I tried compacting, but it still shows 3.9TB on the server side. Maybe that's because the backup is still trying to complete its initial pass for a large portion of the selected files; I only have 2.7TB actually backed up so far. Yeah, speed towards the US datacenter is not very good for me. :) But a lot of what it's backing up now is very static, so once that's done it should be better.
One thing I don't understand, though, is whether it's the amount of data on the server or the amount selected on the client that causes the out-of-memory issues. Or maybe a combination? For you the solution was to reduce the server archive size, in which case both de-duplication and compression would be beneficial. However, you theorize that de-duplication and compression might be detrimental to memory consumption on the client side. So do you think there's a trade-off to be made here to strike the right balance between the two? My guess would be that these are features you want to use. I have both set to automatic.
About 64-bit: that doesn't help us on our systems, since Embedded Java only exists in 32-bit, right? :/ So until that changes there's no point in being able to allocate more than ~3700MB, right? Still, if you run with ~3700MB it might be a good idea to have 8GB for the system as a whole.
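For what it's worth, here's a quick sketch (not tied to any particular backup product, just assuming a plain `java` on the box) that shows whether the installed runtime is 32-bit or 64-bit and how much heap it will actually grant:

```java
// JvmCheck.java - quick check of the JVM the backup service would run on.
// Nothing here is specific to any backup client; it only reads standard JVM properties.
public class JvmCheck {
    public static void main(String[] args) {
        // "32" or "64" on HotSpot/OpenJDK-style runtimes; other JVMs may not set this property.
        String dataModel = System.getProperty("sun.arch.data.model", "unknown");
        System.out.println("JVM data model (bits): " + dataModel);
        System.out.println("os.arch: " + System.getProperty("os.arch"));

        // Maximum heap the JVM will attempt to use with the current -Xmx setting.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap (MB): " + maxHeapMb);
    }
}
```

Running it with the same -Xmx value you give the backup service (for example `java -Xmx3700m JvmCheck`) shows whether the runtime will actually accept that heap; a 32-bit JVM will typically refuse to start or report much less, which is the ~3700MB ceiling we're talking about.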