Forum Discussion
brando56894 (Aspirant)
Dec 17, 2013
ReadyNAS 104 Crashing Constantly
I know there's already a thread open about this, but I tried those workarounds and they don't really help me. I bought the 104 about five days ago to store all of my drives so I can access my media while my motherboard is out for RMA, but transferring my data is taking far longer than expected because the thing keeps crashing! :evil: This is my first experience with a NAS, and I don't have half a grand to spend on one of the x86 models, so I opted for the ARM version since it's low-power (one of the reasons I bought it was so I wouldn't have to leave my gaming/server PC on 24/7). But I'm seriously considering returning it because it has become such a headache!
On the first day I figured I could just throw in my drives and access them separately until I wanted to put them in RAID, but that's obviously not how it works. So I copied all the data I could off my 3 TB WD Green onto the other 3 TB WD Green (and other disks) still in my PC, put the empty, partitionless drive into the NAS, and started the formatting/installation process. Once that finished I upgraded the firmware and started copying about 2.5 TB of data over the network.
First I tried NFS, since both the NAS and my PC are Linux-based. The transfer would start out at around 100 MB/s, quickly drop to a few hundred KB/s, stop completely after a few minutes, then slowly climb back up and drop again. SMB had the same problem, and so did rsync over SSH. FTP gave me about 7-8 MB/s with a single transfer at a time, even though the router/switch, my PC, and the NAS all have gigabit Ethernet jacks connected with Cat5e or Cat6 cables.
I let it run overnight expecting it to be finished by morning, only to find it had crashed a few hours into the transfer with only about 180 GB of 2+ TB copied! :x With the web page, SSH, and even the power button unresponsive, I was forced to pull the plug and wait for it to boot again. Once it was back up I restarted the transfer and headed off to work; when I checked on it remotely later, it was inaccessible because it had crashed yet again! This went on for about another day until I got really pissed off and just deleted the contents of the 3 TB WD Green in my PC so I could finally put it in the NAS.
Once I wiped the drive and put it in the NAS, it told me the array was degraded and that rebuilding it would take about 15 hours. I let it rebuild overnight and it completed successfully, but in RAID1 rather than the RAID0 I expected (I now realize this is by design). I just wanted the extra space until I get two more of the same drives for Christmas, oh well.
So now I finally have two drives in a RAID1 array and my data copied over. I have SickBeard, SABnzbd, CouchPotato, and a few other apps installed (MySQL server, VPN server, HTML5 SSH client, PHP and phpMyAdmin, Python [obviously], PHP Info, Log Analyzer, Syslog server), but SickBeard and SABnzbd are the only ones actually being used (the SQL database has no tables in it). SAB now crashes the NAS on a daily basis, and when it doesn't crash it, it slows it down like molasses in winter. SAB is only using 12 of my 30 connections, and even limited to 2 MB/s of my 6.8 MB/s total bandwidth the NAS still slows to a crawl with CPU usage at 100%. I have noticed, though, that when I kill all the connections in SAB, CPU usage drops to about 5%.
I would really like to know what's going on here and whether this thing can handle the load I'm throwing at it (which I don't think is all that much); if it can't, I'm going to return it. The only errors I've seen in the logs say it can't connect to my router via UDP or something like that (I'm at work, so I can't check the actual logs, because surprise surprise, the stupid thing crashed again!!), nothing catastrophic like a kernel panic. One odd thing I've noticed: when SSH and the web interface (Apache?) are unavailable, it still responds to pings, so it isn't completely locked up. I'll see if I can SSH into my box and grab some old logs I downloaded before for you guys to look at. I may have to set up remote syslogging and the SMTP server so I can actually get some info out of it when the stupid thing dies.
Edit: Here are two sets of logs I was able to grab.
https://drive.google.com/file/d/0B5ma-aNkwQ_ROVIzd2Q3RWZFVHM/edit?usp=sharing
https://drive.google.com/file/d/0B5ma-aNkwQ_RVmdqRjRCUzQyVDA/edit?usp=sharing
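Since the box tends to die before its logs can be pulled, forwarding them to another machine as they are written is the usual way to act on the remote-syslogging idea above. A minimal config sketch, assuming the NAS runs a stock rsyslog/syslog daemon; the collector address 192.168.1.50 and port 514 are placeholders, not values from this setup:

```
# /etc/rsyslog.conf on the NAS -- add one line to forward everything
# to a remote log collector, then restart the syslog daemon (root required):
*.*    @192.168.1.50:514    # single @ = UDP; use @@ for TCP (rsyslog syntax)
```

The collector just needs any syslog daemon listening on UDP 514; that way the last messages before a freeze survive even when the NAS itself is unreachable.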
9 Replies
- JMehring (Apprentice): While I do not have a 104, I see you are running a ton of apps.
I believe the 104 only has 512 MB of RAM, so you could be running out of memory. You may want to look at the thread about increasing swappiness to 60 to keep apps from being killed, and run fewer apps. Increasing swappiness has really helped on my system since I also run tons of apps, though I also have 4 GB of RAM.
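The swappiness tweak mentioned above can be applied like this; note that writing the value needs root, and 60 is just the figure from that thread, not a magic number:

```shell
# Read the current swappiness (0-100; higher = the kernel swaps idle
# pages out earlier, which leaves more RAM headroom on a 512 MB box
# and gives the OOM killer fewer reasons to fire).
cat /proc/sys/vm/swappiness

# Apply 60 for the running session (root required):
#   echo 60 > /proc/sys/vm/swappiness
# Persist it across reboots by adding this line to /etc/sysctl.conf:
#   vm.swappiness = 60
```

Reading the value is harmless and needs no privileges, so it's an easy way to confirm whether a tweak actually stuck after a reboot.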
For instance, here are my top RAM-using apps (from top):
  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+    COMMAND
 3243 admin 20  0  926m 282m 6144 S 13.8  7.1  108:10.90 Bitcasa
28672 admin 20  0  473m 129m 3688 S  0.0  3.3   15:03.76 python  <-- SickBeard
25942 admin 20  0  724m 127m 4256 S  3.6  3.2   38:29.19 sabnzbd
12955 root  20  0  551m  90m 5196 S  0.0  2.3    3:56.13 Plex Media Serv
16279 admin 20  0  242m  58m 4224 S  0.0  1.5    0:29.76 python  <-- CouchPotato
12963 root  35 15  373m  53m 2828 S  0.0  1.4    2:22.53 python  <-- Plex Media Server
EDIT: On a side note, I have found rsync (without SSH) has always been the best way for me to transfer data to/from the NAS for backup/restore, especially since you can resume nicely. I also use rsync (via the NAS web interface) to sync my files to the Bitcasa cloud.
- brando56894 (Aspirant): Thanks for the reply! I read through that thread yesterday and hoped it would fix my issues, but it didn't. I did increase swappiness to 60 and it seemed to help a little, but the NAS still crashes. I kept an eye on CPU and RAM usage with htop, and RAM was always around 350-400 MB (you're correct that it only has 512 MB). When I checked this morning it had swapped about 150 MB to disk, so swap is working as it's supposed to, but the box still seems to either freeze or kill off essential services via the OOM killer, as another user mentioned in that thread.
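One way to confirm the OOM-killer suspicion after a freeze is to look for its signature in the kernel log. A sketch of the grep, demonstrated on a scratch file so it can run anywhere; the log lines below are illustrative samples, not output from this NAS (on the real box you would grep `dmesg` or `/var/log/kern.log` instead):

```shell
# Write sample kernel-log lines to a scratch file so the pattern can be
# shown without a real crash.
log=$(mktemp)
cat > "$log" <<'EOF'
kernel: Out of memory: Kill process 1234 (python) score 512 or sacrifice child
kernel: Killed process 1234 (python) total-vm:473000kB
kernel: usb 1-1: new high-speed USB device
EOF

# On the NAS itself this would be:
#   dmesg | grep -Ei 'out of memory|killed process'
grep -Ei 'out of memory|killed process' "$log"

rm -f "$log"
```

If the kernel killed a process, lines like the first two appear with the victim's name and PID; their absence points away from the OOM killer and toward a different hang.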
- JMehring (Apprentice): Does your NAS lock up even if you turn off all apps?
If you haven't tried this, turn off everything, especially the custom apps you installed, antivirus, and DLNA. If that works, you can always run SickBeard etc. on another machine and use the NAS as storage only.
These are the steps I use when debugging; I'll eventually add one service back at a time.
- JMehring (Apprentice): SABnzbd takes a lot of resources (especially when unpacking), so you can also try adding these parameters under Settings -> Switches. They help SABnzbd play more nicely with the system:
Nice Parameters: -n10
IONice Parameters: -c2 -n4
- brando56894 (Aspirant): I was going to try that, but it was locking up before I could even turn things off! I may have to reinstall and start from scratch :-/ I'd rather not run it on another PC, since that was the point of getting the NAS, although I do have a small ARM-based PC with a Samsung Exynos 4412 quad-core clocked at 1.8 GHz and 2 GB of RAM, which could probably take over running SAB and MySQL if need be, since it's the more powerful device. Right now it runs Android with XBMC, but I think I'm going to switch to Ubuntu or something else so I can run a stable version of XBMC.
I was actually looking into that yesterday, since I had enabled par2 multicore after (wrongly) assuming the processor was multi-core rather than single-core. I also disabled some other things, such as the extra SFV checks and the Windows and Mac options. I did set ionice to that, but I set nice to 10 or 12, I believe. Would I want to decrease its nice number so it takes up fewer resources, rather than increase it?
- JMehring (Apprentice): The higher the nice number, the nicer the process is to the system, so 12 is nicer than 10, hehe. So your setting was higher and easier on resources. But look at the values: ionice doesn't take a plain number like nice does. See my examples above.
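The nice semantics described above are easy to check directly: with GNU coreutils, running `nice` with no command prints the current niceness, so you can see the offset being applied. (ionice's `-c2 -n4` means best-effort scheduling class, priority 4; ionice isn't guaranteed to be present on every firmware build, so it's only shown as a comment here.)

```shell
# Print the current niceness of this shell (typically 0):
nice

# Launch a child at niceness 12 -- higher number, lower CPU priority,
# i.e. "nicer" to everything else on the box. Non-root users may only
# raise niceness, never lower it.
nice -n 12 nice    # prints 12 when the parent shell's niceness is 0

# I/O priority is set separately, e.g. for a running SABnzbd process:
#   ionice -c2 -n4 -p <pid>    # best-effort class (-c2), priority 4 (-n4)
```

So for a background downloader on a 512 MB single-core box, a high nice value plus a best-effort ionice class keeps it from starving the web interface and SSH.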
My cousin got a ReadyNAS and was looking at the 104, but I recommended she get the 314 since it has an Intel processor and more memory. It can run all the apps you have, so if you're able to return the 104 and can afford a 314, that's the route I'd go; I really don't think the lower-end units are designed to do much more than be a NAS.
If you do end up getting a 314, be aware of the threads stating you'll have to do a factory reset when 6.1.5 comes out, since there's a bug in the OS that freezes the system (not sure if you're already affected by it). Just saying that so you keep good backups until everything is sorted out.
Good luck!!!
- brando56894 (Aspirant): Just for the hell of it I set one of the SAB processes to -19 (or maybe all four that were running), but it didn't really seem to change much. Since I don't have $300 more to shell out for the x86 version, I think I'll offload SAB to the Odroid (the quad-core Exynos device), since it's really underutilized at the moment and I keep it on 24/7 anyway so that SickBeard will update the SQL database for my XBMC library, and it only consumes about 10 watts of power.
I think I'm going to reinstall the firmware tonight with only SAB and SickBeard running and see how everything fares.
- rebby (Aspirant):
JMehring wrote: [...] My cousin got a ReadyNAS and was looking at the 104, but I recommended her to get the 314 since it has Intel processor and more memory. It is able to run all the apps you have [...]
I have the EXACT SAME issues with the 314 as I do with my two 104s. The 314 is NOT a solution.
- brando56894 (Aspirant): I think the problem is that I just have too much running, and/or SABnzbd takes up all the system resources. I have pretty much everything disabled except SABnzbd and SickBeard, and so far it seems fine most of the time; it ran all night and I think it may have only crashed once, and it restarted itself. Like a dumbass, I deleted all my TV shows (I wanted to rename the share and thought deleting it from the ReadyNAS web interface would just remove the share, not the entire folder), so I'm offloading SAB and SickBeard duties to my PC (Core i7 950 @ 3.2 GHz, 128 GB SSD, 8 GB DDR3). Let's see if it can handle transferring about a terabyte over NFS (in small increments, a few gigabytes at a time).
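Copying in small increments, as described above, is easy to script so that a crash only costs the batch in flight. A local sketch using temp directories in place of the real PC tree and the NFS-mounted share (all paths are placeholders); rsync's -a preserves attributes and skips files already transferred, so an interrupted batch can simply be re-run:

```shell
# Stand-ins for the real source tree and the NFS-mounted NAS share:
SRC=$(mktemp -d)    # e.g. /mnt/media on the PC
DST=$(mktemp -d)    # e.g. /mnt/nas/media over NFS

# Fake a few top-level folders with a file each.
for d in Movies Music TV; do
    mkdir -p "$SRC/$d"
    echo "sample" > "$SRC/$d/file.txt"
done

# Copy one top-level directory per pass; re-running after a crash
# skips anything already transferred instead of starting over.
for d in "$SRC"/*/; do
    rsync -a "$d" "$DST/$(basename "$d")/"
done

ls "$DST"
rm -rf "$SRC" "$DST"
```

Batching per directory also keeps each rsync run short, which matters on a box that may only stay responsive for a few hours at a time.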