Forum Discussion
scottonaharley
Apr 18, 2022 · Aspirant
ReadyNAS 4220 memory segmentation fault
This unit had been humming along for several years when this error started to pop up following the 6.10.7 upgrade. I am able to access it via SSH, but the management service is not running. The web s...
- Apr 20, 2022
Upgrading to 6.10.6 brought the problem back in the same library, along with the failure of the management interface after a short period of time.
Diagnostic output:
- 2022-04-20 06:02:31: snapshot_monito[6361]: segfault at 30 ip 00007f1ffa430950 sp 00007f1fc7ffeae8 error 4 in libapr-1.so.0.5.1[7f1ffa413000+32000]
Downgrading to 6.10.5 eliminated the problem. I wonder if ClamAV can be updated independently of the OS. This is from the 6.10.6 release notes:
- Antivirus ClamAV is upgraded to version 0.103.2+dfsg.
It looks like stability is restored at the 6.10.5 level.
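For anyone who wants to confirm which ClamAV build a given firmware is actually running: ReadyNAS OS 6 is Debian-based, so the usual package tools should work from SSH (a quick sketch, not an official procedure):
# clamscan --version
# dpkg -l | grep -i clamav
The first prints the running scanner version; the second shows the installed package versions.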
StephenB
Apr 18, 2022 · Guru - Experienced User
scottonaharley wrote:
The results of diagnostics via RAIDar are cryptic. It lists these two system errors:
- Disk 1 has 8 Current Pending Sectors
- Volume root is degraded
The second error suggests something is wrong with the RAID array for the OS (and the first says that you are seeing disk errors on disk 1).
I'd try powering down, removing disk 1, and then rebooting the NAS as read-only. If that results in normal access to the NAS admin UI, then I'd test the disk with the vendor's tools. Personally, I run both the full read test and a full write-zeros/erase test.
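Since you have SSH access, you can also look at the array and disk state directly. A minimal sketch (the device names /dev/md127 and /dev/sda are examples; match them to your own system):
# cat /proc/mdstat
# mdadm --detail /dev/md127
# smartctl -a /dev/sda
/proc/mdstat shows which arrays are degraded, mdadm --detail lists which member has dropped out, and smartctl reports the SMART counters, including Current_Pending_Sector.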
scottonaharley wrote:
This leads me to believe that perhaps something in Frontview is broken?
Is there a way to restart the management service via the cmd line?
You can restart apache2 with
# systemctl restart apache2
readynasd is also a service, so you could try
# systemctl status readynasd
# systemctl restart readynasd
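If readynasd keeps dying, the systemd journal usually shows why (assuming the journal still holds entries from the failure, which rotation can prevent):
# journalctl -u readynasd --no-pager | tail -50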
scottonaharley
Apr 18, 2022 · Aspirant
Which disk is "Disk 1"? Is it the disk labeled "sda", or is it "channel 1" when viewing the disk status drop-down in the management interface?
I'm going to shut down, remove the disk tagged as "channel 1", and reboot.
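One way to be sure before pulling anything (assuming the admin UI shows each disk's serial number, as ReadyNAS normally does):
# smartctl -i /dev/sda
Match the "Serial Number" line against the serial shown for channel 1 in the UI before removing the disk.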
Executing the restart on the readynasd service brought the interface back up.
The error on the last line does not concern me: errno 2 is ENOENT ("no such file or directory"), and the "/data/video" share was old and has been removed. That error should no longer occur.
Command execution and output:
root@poseidon:~# systemctl status readynasd
● readynasd.service - ReadyNAS System Daemon
Loaded: loaded (/lib/systemd/system/readynasd.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Sat 2022-04-16 15:45:24 EDT; 1 day 18h ago
Main PID: 5144 (code=killed, signal=SEGV)
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
root@poseidon:~# systemctl restart readynasd
root@poseidon:~# systemctl status readynasd
● readynasd.service - ReadyNAS System Daemon
Loaded: loaded (/lib/systemd/system/readynasd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-04-18 10:31:45 EDT; 12s ago
Main PID: 21377 (readynasd)
Status: "Start Main process"
CGroup: /system.slice/readynasd.service
└─21377 /usr/sbin/readynasd -v 3 -t
Apr 18 10:31:45 poseidon rn-expand[21377]: Checking if RAID disk sdb is expandable...
Apr 18 10:31:45 poseidon rn-expand[21377]: Checking if RAID disk sdg is expandable...
Apr 18 10:31:45 poseidon rn-expand[21377]: Checking if RAID disk sdf is expandable...
Apr 18 10:31:45 poseidon rn-expand[21377]: Checking if RAID disk sdi is expandable...
Apr 18 10:31:45 poseidon rn-expand[21377]: No enough disks for data-0 to expand [need 4, have 0]
Apr 18 10:31:45 poseidon rn-expand[21377]: 0 disks expandable in data
Apr 18 10:31:45 poseidon systemd[1]: Started ReadyNAS System Daemon.
Apr 18 10:31:45 poseidon readynasd[21377]: ReadyNASOS background service started.
Apr 18 10:31:45 poseidon readynasd[21377]: Snapper SetConfig successfully.
Apr 18 10:31:45 poseidon readynasd[21377]: Failed to chmod snap_path /data/video/.snapshots, errno = 2
root@poseidon:~#
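A side note on the "start-limit-hit" status above: systemd stops retrying a unit after too many rapid failures, and the manual restart here only worked because enough time had passed. If a unit refuses to start at all, the failure counter can be cleared first (standard systemd behavior, nothing ReadyNAS-specific):
# systemctl reset-failed readynasd
# systemctl start readynasd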
- scottonaharley · Apr 18, 2022 · Aspirant
Replaced the disk located in channel 1, and the "8 pending sectors" message is now gone. While the messages "Volume root is degraded" (existing) and "Volume data is degraded" (new) do still appear, I am fairly confident they will clear once the array completes resyncing. That process will take more than 24 hours, so I will post the results here upon completion.
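Resync progress can be watched from the shell (a quick check, using the standard md status file):
# cat /proc/mdstat
The rebuilding array shows a progress percentage and an estimated time to completion.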
- scottonaharley · Apr 18, 2022 · Aspirant
The problem is still here. It only affects the management interface portion of the system; all other services seem unaffected.
Here are the diagnostics after the readynasd failure:
Successfully completed diagnostics
System
- Volume root is degraded
- Volume data is degraded
Logs
- 2022-04-18 13:57:42: enclosure_monit[5053]: segfault at 30 ip 00007f594f448950 sp 00007f5941586ae8 error 4 in libapr-1.so.0.5.1[7f594f42b000+32000]
- 2022-04-18 12:13:11: md/raid:md127: raid level 6 active with 11 out of 12 devices, algorithm 2
System Management
- 2022-04-18 13:59:13: readynasd.service: Failed to fork: Cannot allocate memory
- 2022-04-18 13:59:13: Failed to start ReadyNAS System Daemon.
- 2022-04-18 13:59:13: Failed to start ReadyNAS System Daemon.
- 2022-04-18 13:59:12: readynasd.service: Failed to fork: Cannot allocate memory
- 2022-04-18 13:59:12: Failed to start ReadyNAS System Daemon.
- 2022-04-18 13:59:12: readynasd.service: Failed to fork: Cannot allocate memory
- 2022-04-18 13:59:12: Failed to start ReadyNAS System Daemon.
- 2022-04-18 13:59:12: readynasd.service: Failed to fork: Cannot allocate memory
- 2022-04-18 13:59:12: Failed to start ReadyNAS System Daemon.
- 2022-04-18 13:59:12: readynasd.service: Failed to fork: Cannot allocate memory
- 2022-04-18 13:59:12: Failed to start ReadyNAS System Daemon.
- 2022-04-18 12:13:49: NetworkStats eth0 failed: ERROR: mmaping file '/run/readynasd/stats/network_eth0_pkts.rrd': Invalid argument
- 2022-04-18 12:13:43: DB (main) schema version: 24 ==> 24
- 2022-04-18 12:13:43: DB (queue) schema version: new ==> 0
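"Failed to fork: Cannot allocate memory" points at memory exhaustion rather than a disk problem. It is worth checking from SSH while the failure is occurring (standard Linux tools; a sketch, since the exact cause here is unconfirmed):
# free -m
# dmesg | grep -i -e oom -e "out of memory"
A process leaking memory, which the repeated segfaults in libapr-1 could indicate, can eat RAM until forks start failing.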
- scottonaharley · Apr 19, 2022 · Aspirant
I have downgraded to v6.10.5, and the management interface appears to be more stable: it has been up continuously for 24 hours, where it usually failed after a few hours. However, I still have these messages in the diagnostic output. Any thoughts would be appreciated.
Diagnostic output:
System
- Volume root is degraded
Logs
- 2022-04-18 17:10:32: md/raid:md127: raid level 6 active with 11 out of 12 devices, algorithm 2
- 2022-04-18 13:57:42: enclosure_monit[5053]: segfault at 30 ip 00007f594f448950 sp 00007f5941586ae8 error 4 in libapr-1.so.0.5.1[7f594f42b000+32000]
- 2022-04-18 12:13:11: md/raid:md127: raid level 6 active with 11 out of 12 devices, algorithm 2
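For what it's worth, the "11 out of 12 devices" line says md127 is still running one member short. To see exactly which arrays are degraded and which slot is missing (md127 is taken from the log lines above; on ReadyNAS OS 6 the root volume is usually a separate, smaller array, so check /proc/mdstat for its name):
# cat /proc/mdstat
# mdadm --detail /dev/md127
The detail output shows a "removed" slot if the replacement disk was never re-added to an array, which would explain a degraded warning that persists after the data resync.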