Forum Discussion
bedlam1
Feb 04, 2017 · Prodigy
Couchpotato Errors on systemd-journal.log
RN Pro 4, OS 6.6.1. Looking through my systemd-journal.log I am seeing a CONTINUAL loop of the following entries: Feb 04 17:54:45 NAS_IAN systemd[1]: Started CouchPotato application instance. Feb ...
Mhynlo
Feb 05, 2017 · Luminary
I think I had a bad systemd configuration that may have caused this behavior.
I am going to try to run this config to see if that helps the behavior:
[Unit]
Description=CouchPotato application instance
After=network.target

[Service]
ExecStart=/usr/bin/python /apps/couchpotato/CouchPotato.py --data_dir=/apps/couchpotato/app-config
Type=simple
Nice=18
User=admin
Group=admin

[Install]
WantedBy=multi-user.target
That should be closer to how CouchPotato has their own systemd configuration set up.
- bedlam1 · Feb 05, 2017 · Prodigy
OK thanks. Sadly I am clueless on this stuff, so do I need to do anything?
- Mhynlo · Feb 05, 2017 · Luminary
That would have been too good to be true:
https://github.com/CouchPotato/CouchPotatoServer/pull/6065
There was a fix to the CouchPotato config, but someone was running into that same thing I ran into before...
When you restart the CouchPotato app through its own UI/software (for example, when you do an update through the CouchPotato software), systemd does not bring that service back online. It seems neither case is very clean. I did notice that the current config I have for systemd does not monitor the service as closely as it needs to: when you update CouchPotato through their software, the service restarts itself and systemd stops monitoring it. When the systemd (CP) service then tries to start, it sees a conflict with the pid file. (Or at least that is what I am suspecting at this time.)
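For reference, and this is only my guess at the mechanism rather than the actual unit file the app installs, a forking-style unit that tracks a pid file would behave exactly like that: systemd remembers the PID it read at start-up, the in-app restart replaces that process, and the next start attempt trips over the stale pid file. Roughly:

# Hypothetical sketch only; the pid file path and daemon options are guesses,
# not taken from the app's real unit file.
[Unit]
Description=CouchPotato application instance
After=network.target

[Service]
Type=forking
# systemd tracks the PID written here at start-up; if CouchPotato restarts
# itself, this file goes stale and the next start sees a conflict.
PIDFile=/apps/couchpotato/app-config/couchpotato.pid
# Same launcher as in the config above, plus whatever daemonize/pid-file
# options the app actually takes (I have not checked the exact flags).
ExecStart=/usr/bin/python /apps/couchpotato/CouchPotato.py --data_dir=/apps/couchpotato/app-config
Restart=on-failure
User=admin
Group=admin

[Install]
WantedBy=multi-user.target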
TL;DR I will need to look into it, if someone finds a solution before me; feel free to post!
- bedlam1 · Feb 06, 2017 · Prodigy
So do I need to do anything, or will a CouchPotato update (from you Mhynlo, or who?) cure this?
- Mhynlo · Feb 07, 2017 · Luminary
I am not seeing a very good answer to this issue as of yet. There are two things that could "work".
1) The current method where you get the errors from systemd.
The errors come from systemd when the process has forked and the service attempts to start again; systemd will try to start the service 5 times before giving up (the start-limit settings behind that are sketched below, after option 2). You can work around it by using "Shutdown" from the CouchPotato UI, which ends the process completely and prevents the conflicts in systemd. The errors will not come back until there is another restart of CouchPotato, triggered either from the UI or by an (auto)update.
I will reach out to the CouchPotato dev to see if he has looked into the restart signalling to help prevent this issue.
2) The alternative configuration that systemd and CouchPotato recommend.
The configuration is very clean and I would like to use it, but once you run "Restart" from the CouchPotato UI or run an update from CouchPotato, the service ends up in the shut-off state. You would then just go into the ReadyNAS Admin page to turn the CouchPotato app back on.
Here is what the configuration would look like with the 2nd option:
[Unit]
Description=CouchPotato application instance
After=network.target

[Service]
ExecStart=/usr/bin/python /apps/couchpotato/CouchPotato.py --data_dir=/apps/couchpotato/app-config
Type=simple
Nice=18
User=admin
Group=admin

[Install]
WantedBy=multi-user.target
The auto updates are nice, but since development on that project is slower than some others, option two might be a good option to look at.
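One more note on the 5 attempts mentioned in option 1: that number lines up with systemd's default start limit (5 start attempts inside a 10 second window), so nothing CouchPotato-specific is going on there. If anyone wants to make those limits explicit or loosen them, the directives look roughly like this; where they go depends on the systemd version (newer versions take StartLimitIntervalSec=/StartLimitBurst= in [Unit], older ones take StartLimitInterval=/StartLimitBurst= in [Service]):

# Sketch only: the start-limit directives behind the repeated start attempts.
# 5 starts within 10 seconds matches systemd's defaults.
[Service]
Restart=on-failure
StartLimitInterval=10
StartLimitBurst=5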
bedlam1 did you want to try that systemd config out?
Did you need instructions or a quick test build using that config?