Forum Discussion
InterClaw (Aspirant)
May 31, 2015
NAS not starting any services #25220444
I had a hard time coming up with a good subject for this problem, since I don't really know what's wrong or how to describe it. Here's the story:
Suddenly I noticed that CIFS was not behaving normally. I could browse shares, but not add any files.
This can of course be because of a million things, so I figured I'd try disabling and enabling that service in RAIDiator (4.2.27). So I disabled it and tried enabling it again, but selecting the service and clicking Apply does absolutely nothing. The checkbox clears and nothing more happens. No error message. Weird...? FTP still worked though, at least for browsing.
So I restarted the NAS. After that the CIFS service was still off and couldn't be started, and the shares (naturally) disappeared from the network. Uh?! Not only that, but FTP had disabled itself, and it won't start either. SSH had stopped working. Add-ons like Transmission won't start either. I have CrashPlan running on it, which I can't access either, so I don't really know if that starts or not.
Browsing via HTTPS still works, so the volume and the shares are still there. Everything looks green. Disks are ok etc. Status display on the front of the NAS seems normal.
So I turn to the logs. "No logs exist." Ehh? I try to download all logs anyway and it gives me some XML error message, which I guess is normal when no logs exist. I sure haven't deleted them.
So I try the memory test on the boot menu. No problems there. I also try the disk test on the boot menu. It finished fine and the NAS booted up again.
So I figure that the OS Reinstall on the boot menu should be able to solve this. Something is very corrupt/weird with how the NAS behaves! I think I managed to download a copy of the settings before I proceeded, since I wasn't sure what would actually be reset, and then I did the OS Reinstall.
The only thing that has changed is that the admin password has been reset to the default - and I can't even change that. I fill in the fields and try to apply it, and, yes, nothing happens.
Then it struck me: maybe it's the browser (Chrome) or something?? So I tried IE as well, but no luck there. Also tried Safari on my phone.
I then tried getting SSH to work again by uploading the SSH addon file, but I just got "Update file is not valid for this architecture" as an error message.
So now I'm pretty much stuck here. I'm out of ideas! The NAS says everything is A-OK, but refuses to run anything I want to run on it and it doesn't tell me why. I can't even start rsync, which does not bode well either...
Please help! :(
47 Replies
- InterClaw (Aspirant): Yeah, the compact button does not seem to do anything at the moment. Did you get some sort of progress bar when you did this? I think the backup needs to be stopped for something to happen. Or maybe the compact command is queued to run after the backup completes. Just 4.7 months left! :D My speeds are 1-2 Mbit/s... If I ever get to complete this backup I'll take another look at compacting. For now it seems to be under control memory-wise though, and I should be able to increase the allocation from 2048 towards ~3700 if things get hairy. I just need to check /tmp once in a while for accumulating Java files.
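Something like this is what I have in mind for that /tmp check - just a sketch, and the jna* pattern is a guess based on the file names I've been seeing:

    # Count the stray JNA temp files in /tmp and total their size
    # (the jna* pattern is a guess - adjust to whatever CrashPlan leaves behind)
    ls /tmp/jna* 2>/dev/null | wc -l
    du -ch /tmp/jna* 2>/dev/null | tail -1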
I used to have my PC backups on the NAS selected for backup as well, but deselected those since there are too many changes and it takes too long to upload to the server. 5.1TB -> 4.6TB. I feel the live version on the computers plus the one backup on the NAS is enough. The data unique to the NAS is what is being backed up now. And being without all those versions related to the PC backups might be what I need to fly below the radar here in terms of memory consumption.
It's a bit weird, my cache now is only 41MB in /usr/local/crashplan/cache so it doesn't seem to be filling up my OS partition very much. If things change I might take another stab at moving the cache to C instead though, like you have.
It seems to me as well that CrashPlan can handle a lot more than the memory requirements state. Perhaps they include really large margins of error. Good news for us if that's the case. :)
- StephenB (Guru, Experienced User): I reset the daily resync to 14 days while I was compacting. That eliminated the endless restarts, but it still took quite a while. I also needed to do it twice, not sure why. After the "deep compacting" step it should also do a version pruning step. I started with CrashPlan in 2012, and have been backing up weekly image backups for all our PCs on the Pro. The churn from those consumed a lot of space in the server archive (even with de-duplication).
My backup speed is usually in the 30-50 Mb/s range (not including de-duplication savings) - though I'm in the US.
There's not a lot of information on the memory usage, but my impression is that the answer is a combination. There's a hash value stored for each block on the cloud server, and every new block on the local NAS needs to be hashed (and that hash looked up in the cloud server's table). It isn't very clear though, especially since there is also an on-disk cache (with no info on what is stored in it). My cache is about 1.5 GB right now (and it is set up to be on the data volume, not the OS partition).
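Just to get a feel for the scale, a pure back-of-envelope sketch - the block size and per-entry overhead here are assumptions on my part, not anything CrashPlan documents:

    # ~19 TB server archive at an assumed ~4 MB average block size is
    # ~5 million blocks; at an assumed ~100 bytes per in-memory hash-table
    # entry, that's roughly 475 MB of heap for the block table alone.
    echo $(( 19 * 1024 * 1024 / 4 ))                      # blocks: ~5 million
    echo $(( 19 * 1024 * 1024 / 4 * 100 / 1024 / 1024 ))  # MB of heap: ~475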
On 64-bit Java: I thought it was available for the Pro, but I must admit I haven't checked. I did order an 8 GB memory upgrade, on the theory that even with ~3700 MB dedicated to CrashPlan, I'd want at least 6 GB for the system. Plus DDR2 memory is getting expensive, and I only wanted to buy it once.
- InterClaw (Aspirant): Thanks for following up!
I've reduced my selected files for backup to 4.6TB and I also changed the retention for deleted files to 6 months, like you have now. I tried compacting, but it still shows as 3.9TB server side. Maybe that's because the backup is still trying to complete the initial pass for a large portion of the selected files; I still only have 2.7TB actually backed up. Yeah, my speed towards the US datacenter is not very good. :) But a lot of what it is backing up now is very static, so once done it should be better.
One thing I don't understand though is whether it's the amount of data on the server or the selected amount on the client that causes the out-of-memory issues. Or a combination, maybe? For you the solution was to reduce the server archive size, and for that both de-duplication and compression are beneficial. However, you theorize that de-duplication and compression might be detrimental to memory consumption on the client side. So do you think there's a trade-off one has to make here to strike the right balance between the two? My guess would be that these are features you want to use. I have both on automatic.
About 64-bit: that doesn't help us on our systems, since Embedded Java only exists in 32-bit, right? :/ So until that changes there's no point in being able to allocate more than the ~3700MB, right? Still, if you run with ~3700MB it might be a good idea to have 8GB for the system as a whole.
- StephenB (Guru, Experienced User): Just to follow up on this...

StephenB wrote: "I've just run into this also (seeing a bunch of restart.2015*.log files but am not seeing jna files) - and opened a ticket with crashplan support. Even with 8 GB of ram you will be limited to ~3700M heap size. The Crashplan JVM is 32 bit, so the address space is 4 GB. Some of the address space is reserved. You can probe it with java -XmxAAAAm -showversion. If AAAA is small enough, the command will display some java info. If it gets much over 3715 on my pro6 it will crash. FWIW my volume size is ~8.5TB and ~550K files. The size (and crashplan) have been stable for quite a while. Crashplan began to fail a couple of weeks ago. There was about 100 GB of churn (17K files) in one folder that may have triggered it."
I was able to get backups going again with help from CrashPlan support. The fix required adjusting the retention for deleted files from the default "never" down to 6 months, and then compacting the archive. That reduced the size of the server archive from ~19 TB down to ~10 TB. (De-duplication is against the server archive, not the local storage - so reducing that size helps.) Then I rebuilt the cache and rebooted the NAS.
I am getting more memory for the Pro, since this will likely happen again at some point in the future.
One small mystery - my backups are working with much less memory than CrashPlan says is needed. One reason might be that compression is off and de-duplication is set to "minimal".
Crashplan support says yes.

StephenB wrote: "One of my questions is whether crashplan can run with a 64bit mode jvm on linux."
- InterClaw (Aspirant): Looking forward to it, thanks. :)
- StephenB (Guru, Experienced User):
They are actively working the case; I'll post an update when I know the resolution.

InterClaw wrote: "Hmm, well that sucks. How can they advertise unlimited cloud storage and then their software won't handle more than whatever fits in 32 bits of memory space? Is it how CrashPlan is implemented to use Java that's the problem? I guess Java can run in 64 bit mode as well? Even on our NASes?"
One of my questions is whether CrashPlan can run with a 64-bit mode JVM on Linux.
- InterClaw (Aspirant): Hmm, well that sucks. How can they advertise unlimited cloud storage and then their software won't handle more than whatever fits in 32 bits of memory space? Is it how CrashPlan is implemented to use Java that's the problem? I guess Java can run in 64-bit mode as well? Even on our NASes?
It's good to have your numbers as a reference point at least!
- StephenB (Guru, Experienced User): I've just run into this also (seeing a bunch of restart.2015*.log files but am not seeing jna files) - and opened a ticket with CrashPlan support.
Even with 8 GB of RAM you will be limited to ~3700M heap size. The CrashPlan JVM is 32-bit, so the address space is 4 GB, and some of that space is reserved. You can probe it with java -XmxAAAAm -showversion: if AAAA is small enough, the command will display some Java info; if it gets much over 3715 on my Pro 6, it will crash.
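For example, a loop along these lines will find the ceiling by trial (just a sketch - the exact limit varies with the kernel and the JVM build):

    # Probe the largest -Xmx the 32-bit JVM will accept
    # (if the heap can't be reserved, no version string is printed)
    for mb in 3072 3328 3584 3648 3712 3776; do
        if java -Xmx${mb}m -showversion 2>&1 | grep -q "java version"; then
            echo "-Xmx${mb}m: OK"
        else
            echo "-Xmx${mb}m: too big"
        fi
    done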
FWIW my volume size is ~8.5TB and ~550K files. The size (and CrashPlan) have been stable for quite a while, but CrashPlan began to fail a couple of weeks ago. There was about 100 GB of churn (17K files) in one folder that may have triggered it.
- InterClaw (Aspirant): I found the underlying problem behind these jna files being created and eventually filling the partition. After just a few days of use there was a large amount of them in /tmp again. After some more searching I found about 20,000 log files of CrashPlan restarting (hence the creation of the jna files). There's not much info in those log files, but I figured out it's because CrashPlan is running out of RAM due to my backup size.
One symptom of this happening could be that the GUI disconnects, which it has been doing actually, but not for very long at a time, so the partition must have filled pretty rapidly over a series of days, maybe weeks, but probably not months. It's not like I'm in the CrashPlan GUI often enough to react to the disconnections.
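If anyone wants to check for the same thing, this is roughly what I did - note that the log directory here is an assumption on my part, so adjust it to wherever your CrashPlan install keeps its logs:

    # Count the CrashPlan restart logs and see how fast they accumulate
    # (/usr/local/crashplan/log is an assumption - check your install)
    ls /usr/local/crashplan/log/restart.*.log 2>/dev/null | wc -l
    ls -lt /usr/local/crashplan/log/restart.*.log 2>/dev/null | head -5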
I found these recommendations on memory allocation. I had no idea about this either. :)
http://support.code42.com/CrashPlan/Lat ... expectedly
I have about 100,000 files in 5.1 TB selected atm. According to these recommendations I should allocate 5 GB of memory, but I have 4 GB + the 2 GB swap. I tried settings of both 5120 and 4096, but CrashPlan wouldn't start for some reason. I then tried a more moderate setting of 2048 and that has been running stable for a while now. No crashes. For now...
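For reference, this is the kind of change I mean. On Linux installs the heap setting usually lives in run.conf - the exact path and init script are assumptions here, so check where the addon actually put them:

    # Raise the CrashPlan engine heap to 2048 MB and restart the engine
    # (path and init script are assumptions based on a standard Linux install)
    sed -i 's/-Xmx[0-9]*m/-Xmx2048m/' /usr/local/crashplan/bin/run.conf
    grep Xmx /usr/local/crashplan/bin/run.conf   # verify the change
    /etc/init.d/crashplan restart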
free -m returns this:

                         total       used       free     shared    buffers     cached
    Mem:                  3956       3827        128          0         67       1766
    -/+ buffers/cache:    1993       1962
    Swap:                 2047          0       2047
I noticed someone running 2 x 4 GB in the Pro 6, but I don't think that's really an option for me. DDR2 is getting ridiculously expensive. :) This also pushes me towards the RN516 when it's time to upgrade. Now I have another reason to upgrade besides running out of space on C again. 2 x 8 GB on the RN516 vs. 1 x 4 GB on the RN316.
- InterClaw (Aspirant): OK, thanks for all your input. :)