Forum Discussion
hawke84
Oct 09, 2016, Tutor
ReadyNAS OS 6.6 drives won't spin down
Hi, I recently upgraded to OS 6.6 on my RN104. I never had a problem with disk spindown before, but as soon as I upgraded the disks don't really spin down anymore; they spin down for about 10 seconds, then...
mdgm-ntgr
Oct 09, 2016, NETGEAR Employee Retired
If you are comfortable with SSH, an easy way to enable the block_dump option for noflushd is to add it to the TIMEOUT variable in /etc/default/noflushd. For example, if that file contains "TIMEOUT=5", change it to "TIMEOUT=5 -b".
Then `systemctl restart noflushd`.
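If it helps, here is a minimal sketch of those two steps over SSH. It assumes the file currently contains exactly "TIMEOUT=5" (adjust the pattern to your own timeout value) and that the standard dmesg/sed tools are on the box:

# Append the -b (block_dump) option to the noflushd timeout setting.
sed -i 's/^TIMEOUT=5$/TIMEOUT=5 -b/' /etc/default/noflushd
systemctl restart noflushd
# Follow the kernel log to see which processes touch the disks:
dmesg -w | grep -E 'WRITE|READ|dirtied'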
The logs you get after enabling the block_dump option would be useful.
dacwe
Oct 10, 2016, Tutor
Hi, okay, I think I found it. Running block_dump during spindowns shows rnutil writing to disk each time the disks spin down (and back up); there is a ~300 second delay between each block of writes, the same interval I set the disks to spin down after:
[152085.230218] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152085.230302] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152087.341075] rnutil(18266): dirtied inode 7065 (event.sq3-journal) on md0
[152090.020301] jbd2/md0-8(887): WRITE block 3699200 on md0 (8 sectors)
[152090.020382] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152090.020450] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152092.220612] jbd2/md0-8(887): WRITE block 3699208 on md0 (8 sectors)
[152092.220645] jbd2/md0-8(887): WRITE block 3699216 on md0 (8 sectors)
[152092.220663] jbd2/md0-8(887): WRITE block 3699224 on md0 (8 sectors)
[152092.220680] jbd2/md0-8(887): WRITE block 3699232 on md0 (8 sectors)
[152092.220696] jbd2/md0-8(887): WRITE block 3699240 on md0 (8 sectors)
[152092.220712] jbd2/md0-8(887): WRITE block 3699248 on md0 (8 sectors)
[152092.220729] jbd2/md0-8(887): WRITE block 3699256 on md0 (8 sectors)
[152092.223595] jbd2/md0-8(887): WRITE block 3699264 on md0 (8 sectors)
[152092.440233] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152092.440310] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152094.636059] rnutil(18269): dirtied inode 7065 (event.sq3-journal) on md0
[152094.700631] rnutil(18272): dirtied inode 7065 (event.sq3-journal) on md0
[152395.630210] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152395.630294] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152397.740973] rnutil(18279): dirtied inode 7065 (event.sq3-journal) on md0
[152401.030310] jbd2/md0-8(887): WRITE block 3699400 on md0 (8 sectors)
[152401.030393] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152401.030466] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152403.415237] jbd2/md0-8(887): WRITE block 3699408 on md0 (8 sectors)
[152403.415272] jbd2/md0-8(887): WRITE block 3699416 on md0 (8 sectors)
[152403.415290] jbd2/md0-8(887): WRITE block 3699424 on md0 (8 sectors)
[152403.415306] jbd2/md0-8(887): WRITE block 3699432 on md0 (8 sectors)
[152403.415322] jbd2/md0-8(887): WRITE block 3699440 on md0 (8 sectors)
[152403.415340] jbd2/md0-8(887): WRITE block 3699448 on md0 (8 sectors)
[152403.415357] jbd2/md0-8(887): WRITE block 3699456 on md0 (8 sectors)
[152403.418191] jbd2/md0-8(887): WRITE block 3699464 on md0 (8 sectors)
[152403.640207] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152403.640256] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152405.035925] rnutil(18282): dirtied inode 7065 (event.sq3-journal) on md0
[152405.100149] rnutil(18285): dirtied inode 7065 (event.sq3-journal) on md0
[152706.030227] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152706.030309] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152708.131392] rnutil(18300): dirtied inode 7065 (event.sq3-journal) on md0
[152711.030303] jbd2/md0-8(887): WRITE block 3699592 on md0 (8 sectors)
[152711.030384] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152711.030451] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152713.299433] jbd2/md0-8(887): WRITE block 3699600 on md0 (8 sectors)
[152713.299467] jbd2/md0-8(887): WRITE block 3699608 on md0 (8 sectors)
[152713.299485] jbd2/md0-8(887): WRITE block 3699616 on md0 (8 sectors)
[152713.299502] jbd2/md0-8(887): WRITE block 3699624 on md0 (8 sectors)
[152713.299518] jbd2/md0-8(887): WRITE block 3699632 on md0 (8 sectors)
[152713.299534] jbd2/md0-8(887): WRITE block 3699640 on md0 (8 sectors)
[152713.299552] jbd2/md0-8(887): WRITE block 3699648 on md0 (8 sectors)
[152713.302469] jbd2/md0-8(887): WRITE block 3699656 on md0 (8 sectors)
[152713.520204] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[152713.520263] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[152715.425875] rnutil(18303): dirtied inode 7065 (event.sq3-journal) on md0
[152715.490575] rnutil(18306): dirtied inode 7065 (event.sq3-journal) on md0
[153016.420218] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[153016.420303] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[153018.531622] rnutil(18314): dirtied inode 7065 (event.sq3-journal) on md0
[153019.125033] readynasd(2303): dirtied inode 69756 (blkid.tab) on tmpfs
[153022.030313] jbd2/md0-8(887): WRITE block 3699784 on md0 (8 sectors)
[153022.030396] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[153022.030468] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[153024.473808] jbd2/md0-8(887): WRITE block 3699792 on md0 (8 sectors)
[153024.473841] jbd2/md0-8(887): WRITE block 3699800 on md0 (8 sectors)
[153024.473859] jbd2/md0-8(887): WRITE block 3699808 on md0 (8 sectors)
[153024.473875] jbd2/md0-8(887): WRITE block 3699816 on md0 (8 sectors)
[153024.473890] jbd2/md0-8(887): WRITE block 3699824 on md0 (8 sectors)
[153024.473906] jbd2/md0-8(887): WRITE block 3699832 on md0 (8 sectors)
[153024.473924] jbd2/md0-8(887): WRITE block 3699840 on md0 (8 sectors)
[153024.476758] jbd2/md0-8(887): WRITE block 3699848 on md0 (8 sectors)
[153024.700232] md0_raid1(872): WRITE block 8 on sda1 (1 sectors)
[153024.700310] md0_raid1(872): WRITE block 8 on sdb1 (1 sectors)
[153025.826268] rnutil(18317): dirtied inode 7065 (event.sq3-journal) on md0
[153025.890786] rnutil(18320): dirtied inode 7065 (event.sq3-journal) on md0
[153025.895052] rnutil(18320): READ block 414560 on md0 (8 sectors)
Looking at /etc/noflushd/spindown.sh (spinup.sh is the same):
#!/bin/sh

MSG='Disks sleeping..'
DURATION=60 # Time in seconds.

# Show message on LCD.
/usr/bin/rnutil rn_lcd -s "$MSG" -p 1 -e $DURATION -k 478
rnutil shows a message on the LCD. I just bluntly commented out that line and now my disks spin down perfectly (at least they spin down and stay down for 10+ hours).
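For reference, after the edit the script looks roughly like this (the same single change applies to spinup.sh; the only difference is the leading # on the rnutil line):

#!/bin/sh

MSG='Disks sleeping..'
DURATION=60 # Time in seconds.

# Show message on LCD.
# Commented out: this rnutil call dirties event.sq3-journal on md0 and wakes the disks.
#/usr/bin/rnutil rn_lcd -s "$MSG" -p 1 -e $DURATION -k 478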
Do I need that command? I'm on an RN102.
- mdgm-ntgr, Oct 10, 2016, NETGEAR Employee Retired
That command is not essential, but it would be nice to get to the bottom of this. Writing to the LCD should not spin up the disks. We will check if we can reproduce this.
It would be helpful to confirm whether others with spin-down problems on 6.6.0 are running into the same cause.
Also, with problems like this, the verbose logging from the block_dump option should only be used to identify the problem and then be switched off again.
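Switching it off is just the mirror of enabling it; a sketch, assuming you added " -b" to a "TIMEOUT=5" line as described above:

# Remove the -b option again and restart noflushd:
sed -i 's/^TIMEOUT=5 -b$/TIMEOUT=5/' /etc/default/noflushd
systemctl restart noflushd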
- mdgm-ntgr, Oct 11, 2016, NETGEAR Employee Retired
I can see similar messages in hawke84's logs.
- mdgm-ntgr, Oct 11, 2016, NETGEAR Employee Retired
hawke84, I have disabled the LCD messages for spindown on your system too. Please see if this helps.
- dacwe, Oct 11, 2016, Tutor
> Writing to the LCD should not spin up the disks.
Just to comment on this. Writing to the LCD using:
root@backup:~# rnutil rn_lcd -s "Testing" -p 1 -e 10 -k 478
writes to the event_push.log file:
root@backup:~# grep Testing /var/log/readynasd/event_push.log
<lcd expiration="1476192972" priority="1" key="478" string="Testing"/></xs:add-s>
...and as far as I can see that file is placed on disk and not in memory.
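A quick way to double-check that, assuming the standard GNU df/stat tools are on the box:

# Show which filesystem holds the event log; tmpfs would mean RAM-backed, md0 means on disk.
df -h /var/log/readynasd/
stat -f -c 'filesystem type: %T' /var/log/readynasd/event_push.log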
- Ildefonse, Oct 29, 2016, Aspirant
I tried dacwe's solution and it bloody works!!!
I commented out the last lines (the LCD call) of /etc/noflushd/spindown.sh and spinup.sh.
Thanks a lot!
- yuan_1202, Oct 29, 2016, Tutor
I am trying it now and will report back.
- yuan_1202, Oct 30, 2016, Tutor
-- Logs begin at Sat 2016-10-29 21:10:46 WEST, end at Sun 2016-10-30 07:54:31 WET. --
Oct 29 21:10:49 YUAN-NAS noflushd[1826]: Enabling spindown for disk 1 [sda,0:0:TOSHIBA_HDWE140:Z5M8KE82F58D:FP2A:7200]
Oct 29 21:22:07 YUAN-NAS noflushd[1826]: Error: Readahead /etc/resolv.conf failed.
Oct 29 21:22:07 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 29 21:26:15 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 0:04:06.
Oct 29 21:37:38 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 29 21:52:25 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 0:14:44.
Oct 29 22:01:37 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 29 23:32:22 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 1:30:43.
Oct 29 23:38:53 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 30 02:58:10 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 4:19:14.
Oct 30 03:03:11 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 30 04:23:02 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 1:19:48.
Oct 30 04:28:03 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 30 06:06:53 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 1:38:48.
Oct 30 06:11:54 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 30 07:27:06 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 1:15:10.
Oct 30 07:32:06 YUAN-NAS noflushd[1826]: Spinning down disk 1 (/dev/sda).
Oct 30 07:54:19 YUAN-NAS noflushd[1826]: Spinning up disk 1 (/dev/sda) after 0:22:10.
Doesn't seem to work. The wake-up frequency is reduced, from every few minutes to every hour or so, but the disk still wakes up for no reason.
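To see what is still touching the disk, the block_dump tracing from earlier in the thread can also be toggled via the kernel directly; a sketch, assuming root access over SSH (remember to switch it off again afterwards):

# Turn on block_dump tracing, watch for the culprit process, then turn it off.
echo 1 > /proc/sys/vm/block_dump
dmesg -w | grep -E 'WRITE|READ|dirtied'
echo 0 > /proc/sys/vm/block_dump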
- jojax14, Nov 02, 2016, Aspirant
Hi all,
I thought I would share my experience.
Back in October I upgraded my RN102 from 6.5.1 to 6.6.0 (with an intermediate step through 6.5.2, as required). Since moving to 6.6.0, the spindown messages show the disk (a single disk; I am running a 1-disk configuration with no RAID) spinning up after 10 seconds of spindown (occasionally 5 seconds, occasionally 20, but mostly 10). I confirmed this because the unit is in a relatively quiet room and I could hear the HD spinning down and then spinning up again before the platters had completely stopped.
I took the advice of this thread and commented out the LCD line from the script. After restarting the service, this issue now appears to be resolved and I am no longer seeing the constant spin down/up sequence.
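To verify how long the disk now stays down, the noflushd journal shows each spin-down/spin-up with its duration, as in the log posted above; a sketch:

# List today's noflushd spin-down/spin-up events with durations:
journalctl -u noflushd --since today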
However, I shudder at the thought that my poor hard disk has gone through a spin down/up cycle 2000-odd times since the upgrade to 6.6.0. I sure hope this gets resolved in a future firmware update!