Killerjerick
Jan 17, 2021 · Aspirant
Readynas 424 SSH service not starting (no error)
Hi all, I updated from 6.10.3 to 6.10.4 today, against my better judgement (I've had issues with firmware updates in the past), and again I'm having issues: the firmware update screwed with mono, so...
Killerjerick
Feb 27, 2021 · Aspirant
Hey StephenB, it's been over a month with no response from either mod; I assume they're both too busy right now. What's my next course of action? Ideally I'd love to not shell out the $$ for NETGEAR support, but is there anything else I can do from my end? I've checked both of the logs you suggested and nothing jumps out at me, though my Linux experience in particular is lacking. I can still install apps through the NETGEAR "upload" option, but if my problem stems from a bad install of a particular app, I'm not able to uninstall it. The only thing I did around the time the issues started was attempt to install a full version of FFmpeg, which is why I assumed I had accidentally installed it to the OS partition, but judging from the log files that doesn't seem to be the case.
I'm hesitant to attempt to install ffmpeg again using the upload function as I'm positive that'll mess something up.
Thanks in advance.
rn_enthusiast
Feb 27, 2021 · Virtuoso
I am thinking browser cache, to be honest. Did you try clearing all browser history and cache and then enabling SSH from the Web UI again after that? The browser cache often messes with me after updates to newer firmware.
This issue sounds like something I have seen a bunch of times, TBH.
You can go in via Tech-Support mode and re-enable SSH if it comes to that, but start by attacking the browser cache and let us know (try different browsers as well). If you hit a dead end, let me know and I'll throw you some commands for Tech-Support mode.
- Killerjerick · Feb 28, 2021 · Aspirant
Hey rn_enthusiast, I did indeed try that, including Edge, Chrome, Firefox and my phone, with no luck. Thanks anyway.
Those commands would be wonderful. I successfully telnetted into the NAS with the NETGEAR tech-support mode, but I didn't know where to go from there and stopped short of causing irreversible damage. My limited knowledge has gotten me into a pickle once before, but I managed to solve that myself (just Linux permissions stuff).
- rn_enthusiast · Feb 28, 2021 · Virtuoso
Hi again
The below is at your own risk. Making mistakes can have consequences, as you know yourself :) If you see errors or unexpected behaviour, stop and think before proceeding. These should be the steps to enable SSH, if my memory serves me well.
Once you are in over tech support mode, you can start the OS raid manually:
mdadm --assemble /dev/md0 /dev/sd[a-z]1
You should see md0 running now:
cat /proc/mdstat
Hereafter, mount the OS volume:
mount /dev/md0 /sysroot
You should see it mounted to /sysroot. You can check with:
mount
Now, use "vi" to change the default services file in order to enable SSH.
vi /sysroot/etc/default/services
Find these 5 lines in the file and change them to the same values as per below.
SSH=1
SSHPORT=22
SSH_UI_ENABLE=1
REMOTE_ACCESS_SSH_PORT=0
SSH_PASSWORD_AUTHENTICATION=1
Save the changes and exit the "vi" editor.
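If vi is unfamiliar, the handful of keystrokes you need are:
i (start editing in insert mode)
Esc (leave insert mode)
:wq (write the file and quit; :q! quits without saving)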
Un-mount the OS partition
umount /sysroot
Stop the OS raid:
mdadm --stop /dev/md*
Check that the raid is stopped:
cat /proc/mdstat
Flush memory to disk and reboot the NAS:
sync
reboot -fn
Cheers
- Killerjerick · Mar 02, 2021 · Aspirant
Hi rn_enthusiast, thanks again for all your help. Unfortunately it looks like my problem is worse than originally thought; your first command fails every time with the error
"mdadm: No super block found on /dev/sdc1 (expected magic a92b4efc, got 07020701)
mdadm: no RAID superblock on /dev/sdc1
mdadm: /dev/sdc1 has no superblock - assembly aborted"
Is there something else I should be doing? Sorry for the late reply; I can only really do this after college between 9-11pm AEST (time zones suck).
- mdgm · Mar 02, 2021 · Virtuoso
Killerjerick wrote:
Hi rn_enthusiast thanks again for all your help, unfortunately it looks like my problem is worse than originally thought, your first command fails every time
This is why giving instructions on how to deal with problems like this can be problematic. There are a bunch of other commands we would nearly always run when checking a system, and we're used to seeing problems like this.
A missing superblock is bad, but the 4GB root volume uses RAID-1, so one problem disk isn't the end of the world. It does warrant further investigation to get an idea of whether there's a problem with that disk. For starters,
smartctl --all /dev/sdc
would tell you the SMART stats for that disk.
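If the full output is a lot to wade through, something like this (assuming egrep is available in tech-support mode, which it normally is) narrows it down to the attributes that usually flag a failing disk:
smartctl --all /dev/sdc | egrep -i 'result|reallocat|pending|uncorrect'
Non-zero reallocated or pending sector counts would be worth worrying about.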
You can also examine the raid device underneath the 4GB root volume using e.g.
for i in /dev/sd[a-z]1; do echo $i; mdadm -E $i | egrep "Creat|Update|State|Raid"; done;
or
mdadm --examine /dev/sd[a-z]1 | egrep -i 'sd|event|uuid|recovery|super|role|update'
Killerjerick wrote:
Sorry for the late reply, I can only really do this after college between 9-11pm AEST (Timezones suck)
Different time zones are expected with an international community.
Killerjerick wrote:
rnutil chroot seems to throw me into "root@<serialnumber>:/#"
That's what you'd expect to see when you chroot in. If you are in the chroot, then instead of
vi /sysroot/etc/default/services
You would do e.g.
vi /etc/default/services
You should still look for the lines rn_enthusiast mentioned, bearing in mind that as above some of the commands are slightly different when you are inside the chroot.
- Killerjerick · Mar 02, 2021 · Aspirant
smartctl --all /dev/sdc
throws
/dev/sdc: Unknown USB bridge [0x13fe:0x5200 (0x100)]
Please specify device type with the -d option.
Use smartctl -h to get a usage summary
while
for i in /dev/sd[a-z]1; do echo $i; mdadm -E $i | egrep "Creat|Update|State|Raid"; done;
throws
Creation Time : Mon Sep 23 18:19:13 2019
Raid Level : raid1
Raid Devices : 2
State : clean
Update Time : Tue Mar 2 20:56:40 2021
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1
Creation Time : Mon Sep 23 18:19:13 2019
Raid Level : raid1
Raid Devices : 2
State : clean
Update Time : Tue Mar 2 20:56:40 2021
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
Thank you for bearing with me thus far :)
- mdgm · Mar 02, 2021 · Virtuoso
So it looks like /dev/sdc isn't a hard disk at all, but rather something else e.g. internal flash or a USB key, so the error message about sdc can be ignored.
You can do e.g.
# ls -la /dev/disk/internal
to list the internal disks (note the path to the disk may change so you shouldn't rely on the list obtained from a previous boot).
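If you just want the plain device names, something along these lines should resolve the symlinks (assuming the directory exists on your firmware and is populated with links, which is the usual layout):
for d in /dev/disk/internal/*; do echo "$d -> $(readlink -f "$d")"; done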
For future reference (not needed now as the RAID is already started) you can modify a command like
mdadm --assemble /dev/md0 /dev/sd[a-z]1
to
mdadm --assemble /dev/md0 /dev/sd[a-b]1
if you want, provided you know the disks are just a and b. However, if the disks were a, b and d, for example, you'd still end up with a command that looks at something that isn't a hard disk if you want to keep the command as short as possible to type.
When you are looking at problems multiple times a day, you get into the habit of copying and pasting regularly used commands, which is why we write them in a way that will hopefully work on every system regardless of the disk configuration.
You should be able to proceed to check that the services file has the correct entries and that the symlink exists.
- Killerjerick · Mar 02, 2021 · Aspirant
/Facepalm
I can't believe I missed that I still had an old USB drive plugged into the back that I had completely forgotten about. Thanks!
Now for mounting the sysroot. Trying rn_enthusiast's commands (after exiting out of chroot using ^D):
# mdadm --assemble /dev/md0 /dev/sd[a-b]1
results in
mdadm: /dev/sda1 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
#cat /proc/mdstat
shows
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[0] sdb2[1]
      523264 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb1[0] sda1[1]
      4190208 blocks super 1.2 [2/2] [UU]
# mount /dev/md0 /sysroot
results in
mount: mounting /dev/md0 on /sysroot failed: Device or resource busy
plain old #mount
shows
rootfs on / type rootfs (rw,size=999996k,nr_inodes=249999)
proc on /proc type proc (rw,noatime,nodiratime)
sysfs on /sys type sysfs (rw,noatime,nodiratime)
udev on /dev type devtmpfs (rw,noatime,nodiratime,size=10240k,nr_inodes=250004,mode=755)
devpts on /dev/pts type devpts (rw,noatime,nodiratime,mode=600,ptmxmode=000)
/dev/md0 on /sysroot type btrfs (rw,noatime,nodiratime,nospace_cache,subvolid=5,subvol=/)
- mdgm · Mar 02, 2021 · Virtuoso
One would have expected that the system would have been smart enough to realise that it should ignore sdc and start the RAID anyway, but it didn't in this case. Doing the rnutil chroot started the RAID.
What's more, the rnutil chroot command also mounts /dev/md0 to /sysroot, as you have observed.
So it's a handy command to quickly start all the RAID arrays, mount the 4GB root volume and chroot onto it. This is nice, but if there's a problem with the RAID or a disk, it can be advisable to proceed more cautiously, entering the commands for the different steps manually.
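Roughly speaking, the manual equivalent of what rnutil chroot appears to do (pieced together from the steps earlier in this thread, so treat it as a sketch) is:
mdadm --assemble /dev/md0 /dev/sd[a-z]1
mount /dev/md0 /sysroot
chroot /sysroot
Doing it by hand like that is what lets you stop and inspect things between each step.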
- rn_enthusiast · Mar 02, 2021 · Virtuoso
Yeah, /dev/sdc isn't a RAID disk, but mdadm tried to use it in the RAID assembly, which is why it complained. It's fine, though mdadm should be cleverer here and I expected it would be. Anyway...
By running rnutil chroot, the NAS started the RAIDs, mounted the OS and chroot'ed you into it. This changes things slightly, because you are now inside the chroot.
Steps to take now:
Now, use "vi" editor to change the default services file in order to enable SSH.
vi /etc/default/services
Find these 5 lines in the file and change them to the same values as per below.
SSH=1
SSHPORT=22
SSH_UI_ENABLE=1
REMOTE_ACCESS_SSH_PORT=0
SSH_PASSWORD_AUTHENTICATION=1
Save the changes and exit the "vi" editor.
Exit out of chroot.
exit
Un-mount the OS partition
umount /sysroot
Stop all running raids:
mdadm --stop /dev/md*
Check that the raids are stopped:
cat /proc/mdstat
Flush memory to disk and reboot the NAS:
sync
reboot -fn
- Killerjerick · Mar 02, 2021 · Aspirant
Alright, thanks a tonne, I've learned a fair bit here.
As for the issue: when I went into the default services file, I found that the values rn_enthusiast suggested were already what was listed. I exited vi, unmounted, stopped mdadm, rebooted, and tried to SSH. No luck; the ReadyNAS web page still shows SSH as disabled.
- mdgm · Mar 02, 2021 · Virtuoso
Checking that the symlink I mentioned in an earlier post is there is probably what I would do next.
ln -s /lib/systemd/system/ssh.service /sysroot/etc/systemd/system/multi-user.target.wants/ssh.service
or inside the chroot
ln -s /lib/systemd/system/ssh.service /etc/systemd/system/multi-user.target.wants/ssh.service
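Before (re)creating it, it may be worth checking whether the link is already there and points at the right unit; from inside the chroot, for example:
ls -l /etc/systemd/system/multi-user.target.wants/ssh.service
should show it pointing at /lib/systemd/system/ssh.service.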
After that there are things like checking logs, checking the management service database, comparing SSH configuration files with a working ReadyNAS system or an OS6 virtual machine, comparing Debian package versions with a working machine running the same firmware, and probably other things that escape my memory at the moment.
- Killerjerick · Mar 02, 2021 · Aspirant
Thanks again. Looks like, after mounting the system again using mdadm (it all went smoothly this time),
your first command
ln -s /lib/systemd/system/ssh.service /sysroot/etc/systemd/system/multi-user.target.wants/ssh.service
results in
ln: /sysroot/etc/systemd/system/multi-user.target.wants/ssh.service: File exists
- StephenB · Mar 02, 2021 · Guru - Experienced User
Does it look like this?
root@NAS:/etc/systemd/system/multi-user.target.wants# cat ssh.service
[Unit]
Description=SSH Server
Wants=ssh-avahi.service
After=network.target
[Service]
ExecStart=/usr/sbin/sshd -D
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
Also, if you have a log zip file, can you check systemd-journal.log for errors related to ssh and sshd?
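For example, something along these lines (assuming that file name inside the downloaded log zip) should pull out the relevant entries:
grep -i ssh systemd-journal.log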
If you've done the chroot, I think you could try manually starting the service, and see what happens.
# systemctl start ssh.service
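If the service doesn't come up, the status output and the unit's journal usually say why (hedging a little here, since systemctl can behave oddly inside a chroot):
# systemctl status ssh.service
# journalctl -u ssh.service --no-pager | tail -n 50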
- Killerjerick · Mar 19, 2021 · Aspirant
Sorry for the long wait, life got in the way as usual. Last night my router stopped connecting to Ethernet devices, and after trial and error I discovered that the NAS was causing the issue: whenever the NAS is connected to my router, all wired devices lose their connection. Is the NAS flooding DHCP requests, or is something else happening? I'm close to giving up on this device; Synology looks nice.
- Killerjerick · Mar 19, 2021 · Aspirant
I just noticed something that may or may not be a coincidence, but this took place exactly two months after I began to have issues, to the day. Are there any services on the NAS that like to reboot themselves after two months?
- mdgm · Mar 19, 2021 · Virtuoso
One wonders whether the unit has been hacked (did you forward ports to the device?) or whether something you did using SSH is causing problems. One can only guess without looking at the unit.
- Killerjerick · Mar 19, 2021 · Aspirant
I highly doubt it's been hacked; there's nothing of value on there. There's a Plex media server with 2 ports forwarded to it, not a single password is default, and none of them are duplicated. It's far more likely I've screwed something up, but since SSH stopped working I haven't done anything except maintain the Plex server, as I have no access to the backend, and starting the service manually doesn't work. Why is there no way to just re-install the root partition without losing data in separate folders? I understand this is most likely my fault, but by the same token, not having a sanitized data partition that is completely separate from the OS seems like an oversight.
Edit: p.s. restarting the NAS and plugging it into a switch fixed the DHCP flood (I assume) issue for now. If it reappears I'm switching to Synology; at least I'll have access to SSH.
- StephenB · Mar 20, 2021 · Guru - Experienced User
Killerjerick wrote:
plugging it into a switch fixed the dhcp flood (I assume) issue for now,
That's a bit odd.
Are you only using one of the NIC ports of the NAS?
Did you see specific evidence of a DHCP flood, or are you just guessing that was the root cause?
Also, was this a managed switch with storm control enabled, or an unmanaged switch (which likely wouldn't have it)?
If you move the ethernet cable back to the router does the problem start up again? Or does the router remain working?
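If you want to confirm rather than guess about the DHCP flood, a capture from any Linux machine on the same LAN would show whether the NAS is really hammering the network (tcpdump is assumed to be installed, and replace eth0 with your interface; Wireshark with a bootp filter does the same job on Windows):
tcpdump -n -i eth0 port 67 or port 68
A healthy client only sends a handful of DHCP packets around lease renewal, not a continuous stream.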