
Forum Discussion

Michael_Oz
Luminary
Jun 03, 2019

ReadyNAS OS Vi.j.k - Discs don't spindown - still

I decided to use generic placeholders for the version in the title, so I can reuse this post for the next release...

 

Right now it applies to 6.10.1, which I upgraded to yesterday.

tl;dr disk spindown is still broken.

 

The last time (6.9.5), I said:

 

>This is turning into a ritual: with each of the last few releases I again check whether disc spindown is fixed.

>Last was for 6.9.3 here.

>My last post there was "So has this been recognised as a bug?", which remains unanswered by Netgear; I think the answer is obvious.

>As per the flow of the previous post, I cut to the chase.

 

This time I just did the minimal test: enabled spindown at 10 minutes, unplugged the network for a while, and downloaded the logs.

Same preconditions as before (except no htop).
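
(If anyone wants to replay this: the drive power state can also be checked directly with hdparm, independent of the noflushd log. A minimal check; the hostname and device are from my box, substitute your own:

root@ME-NAS-316A:~# hdparm -C /dev/sda

/dev/sda:
 drive state is:  active/idle

A spun-down drive reports "standby" instead.)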

 

Jun 03 10:02:46 ME-NAS-316A noflushd[26631]: Spinning down disk 2 (/dev/sdb).
Jun 03 10:02:53 ME-NAS-316A noflushd[26631]: Spinning up disk 2 (/dev/sdb) after 0:00:05.
Jun 03 10:09:30 ME-NAS-316A noflushd[26631]: Spinning down disk 1 (/dev/sda).
Jun 03 10:09:37 ME-NAS-316A noflushd[26631]: Spinning up disk 1 (/dev/sda) after 0:00:05.
Jun 03 10:11:44 ME-NAS-316A noflushd[26631]: Spinning down disk 3 (/dev/sdc).
Jun 03 10:11:47 ME-NAS-316A noflushd[26631]: Spinning down disk 4 (/dev/sdd).
Jun 03 10:11:49 ME-NAS-316A noflushd[26631]: Spinning down disk 5 (/dev/sde).
Jun 03 10:11:51 ME-NAS-316A noflushd[26631]: Spinning down disk 6 (/dev/sdf).
Jun 03 10:11:59 ME-NAS-316A noflushd[26631]: Spinning up disk 3 (/dev/sdc) after 0:00:12.
Jun 03 10:11:59 ME-NAS-316A noflushd[26631]: Spinning up disk 4 (/dev/sdd) after 0:00:10.
Jun 03 10:11:59 ME-NAS-316A noflushd[26631]: Spinning up disk 5 (/dev/sde) after 0:00:08.
Jun 03 10:11:59 ME-NAS-316A noflushd[26631]: Spinning up disk 6 (/dev/sdf) after 0:00:05.
Jun 03 10:12:58 ME-NAS-316A noflushd[26631]: Spinning down disk 2 (/dev/sdb).
Jun 03 10:13:06 ME-NAS-316A noflushd[26631]: Spinning up disk 2 (/dev/sdb) after 0:00:05.
Jun 03 10:19:42 ME-NAS-316A noflushd[26631]: Spinning down disk 1 (/dev/sda).
Jun 03 10:19:50 ME-NAS-316A noflushd[26631]: Spinning up disk 1 (/dev/sda) after 0:00:05.
Jun 03 10:22:11 ME-NAS-316A noflushd[26631]: Spinning down disk 3 (/dev/sdc).
Jun 03 10:22:14 ME-NAS-316A noflushd[26631]: Spinning down disk 4 (/dev/sdd).
Jun 03 10:22:16 ME-NAS-316A noflushd[26631]: Spinning down disk 5 (/dev/sde).
Jun 03 10:22:19 ME-NAS-316A noflushd[26631]: Spinning down disk 6 (/dev/sdf).
Jun 03 10:22:26 ME-NAS-316A noflushd[26631]: Spinning up disk 3 (/dev/sdc) after 0:00:12.
Jun 03 10:22:26 ME-NAS-316A noflushd[26631]: Spinning up disk 4 (/dev/sdd) after 0:00:10.
Jun 03 10:22:26 ME-NAS-316A noflushd[26631]: Spinning up disk 5 (/dev/sde) after 0:00:07.
Jun 03 10:22:26 ME-NAS-316A noflushd[26631]: Spinning up disk 6 (/dev/sdf) after 0:00:05.
Jun 03 10:23:10 ME-NAS-316A noflushd[26631]: Spinning down disk 2 (/dev/sdb).
Jun 03 10:23:18 ME-NAS-316A noflushd[26631]: Spinning up disk 2 (/dev/sdb) after 0:00:05.
Jun 03 10:29:54 ME-NAS-316A noflushd[26631]: Spinning down disk 1 (/dev/sda).
Jun 03 10:30:02 ME-NAS-316A noflushd[26631]: Spinning up disk 1 (/dev/sda) after 0:00:05.
Jun 03 10:32:38 ME-NAS-316A noflushd[26631]: Spinning down disk 3 (/dev/sdc).
Jun 03 10:32:41 ME-NAS-316A noflushd[26631]: Spinning down disk 4 (/dev/sdd).
Jun 03 10:32:43 ME-NAS-316A noflushd[26631]: Spinning down disk 5 (/dev/sde).
Jun 03 10:32:46 ME-NAS-316A noflushd[26631]: Spinning down disk 6 (/dev/sdf).
Jun 03 10:32:53 ME-NAS-316A noflushd[26631]: Spinning up disk 3 (/dev/sdc) after 0:00:12.
Jun 03 10:32:53 ME-NAS-316A noflushd[26631]: Spinning up disk 4 (/dev/sdd) after 0:00:10.
Jun 03 10:32:53 ME-NAS-316A noflushd[26631]: Spinning up disk 5 (/dev/sde) after 0:00:07.
Jun 03 10:32:53 ME-NAS-316A noflushd[26631]: Spinning up disk 6 (/dev/sdf) after 0:00:05.
Jun 03 10:33:22 ME-NAS-316A noflushd[26631]: Spinning down disk 2 (/dev/sdb).
Jun 03 10:33:30 ME-NAS-316A noflushd[26631]: Spinning up disk 2 (/dev/sdb) after 0:00:05.
Jun 03 10:40:06 ME-NAS-316A noflushd[26631]: Spinning down disk 1 (/dev/sda).
Jun 03 10:40:14 ME-NAS-316A noflushd[26631]: Spinning up disk 1 (/dev/sda) after 0:00:05.
Jun 03 10:43:05 ME-NAS-316A noflushd[26631]: Spinning down disk 3 (/dev/sdc).
Jun 03 10:43:08 ME-NAS-316A noflushd[26631]: Spinning down disk 4 (/dev/sdd).
Jun 03 10:43:10 ME-NAS-316A noflushd[26631]: Spinning down disk 5 (/dev/sde).
Jun 03 10:43:13 ME-NAS-316A noflushd[26631]: Spinning down disk 6 (/dev/sdf).
Jun 03 10:43:23 ME-NAS-316A noflushd[26631]: Spinning up disk 3 (/dev/sdc) after 0:00:15.
Jun 03 10:43:23 ME-NAS-316A noflushd[26631]: Spinning up disk 4 (/dev/sdd) after 0:00:13.
Jun 03 10:43:23 ME-NAS-316A noflushd[26631]: Spinning up disk 5 (/dev/sde) after 0:00:10.
Jun 03 10:43:23 ME-NAS-316A noflushd[26631]: Spinning up disk 6 (/dev/sdf) after 0:00:08.
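
To make the pattern explicit, here is a quick tally of the spin-up delays from the excerpt above (a sketch; assumes the lines are saved as noflushd.log):

root@ME-NAS-316A:~# grep 'Spinning up' noflushd.log | awk '{print $10, $12}' | sort | uniq -c

Every disk spins back up 5-15 seconds after spinning down, on every cycle, across the whole 40-minute window.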

This is beyond a joke.

Spindown is broken; or rather, the disks spin down and then spin straight back up.

It has been broken for a very long time.

 

You are falsely advertising a feature.

 

Let me know if anyone wants noflushd -b or block_dump info.

 


18 Replies

Replies have been turned off for this discussion
  • schumaku
    Guru - Experienced User

    And what processes are dirtying the inodes on e.g. the md device(s)?

    This example is not intended as a "does not spin down" example, just to illustrate what you could have collected as part of your test, and what I would like to see (taken here from the kernel log ring buffer):

     

    root@RN628X:~# /bin/echo 1 > /proc/sys/vm/block_dump

     

    root@RN628X:~# dmesg | grep dirt | grep md
    [382243.814302] kworker/u16:1(28029): dirtied inode 1 (?) on md127
    [382267.797468] systemd-journal(2550): dirtied inode 492340 (system.journal) on md0
    [382392.192136] aurndb-write-db(4298): dirtied inode 7428 (audit_log.sq3) on md127
    [382392.222247] sshd(28139): dirtied inode 11380 (wtmp) on md0
    [382392.222272] sshd(28139): dirtied inode 11377 (lastlog) on md0
    [382453.937746] btrfs-transacti(2646): dirtied inode 1 (?) on md127
    [382568.295864] systemd-journal(2550): dirtied inode 492340 (system.journal) on md0
    [382608.754085] loadavg(5756): dirtied inode 11834 (loadavg.dat) on md0

     

    root@RN628X:~# /bin/echo 0 > /proc/sys/vm/block_dump

     

    Please note that a system disconnected from the network can and will generate activity because of the disconnected interfaces, because of the lack of Internet connectivity, ... so I would consider your ritual *****.
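
    To actually correlate writers with spin-ups, leave the dump running while noflushd is armed (a sketch; assumes your dmesg supports follow mode):

    root@RN628X:~# /bin/echo 1 > /proc/sys/vm/block_dump
    root@RN628X:~# dmesg -w | grep dirtied

    Any process that shows up here between a "Spinning down" and the matching "Spinning up" is a candidate for the premature spin-up.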

    • Michael_Oz
      Luminary

      schumaku wrote:

      And what processes are dirtying the inodes on e.g. the md device(s)? [...] so I would consider your ritual *****.


      I've been dealing with this bug for years. Thank you for the compliment; ***** to you too.

      • schumaku
        Guru - Experienced User

        While you might be perfectly right about the problem, your test is potentially wrong. Other NAS vendors' systems start writing error information into logs, try to send error status information, and much more - and this often dirties the write buffers and makes the HDDs spin up again.

         

        Instead of nagging - answer the question.
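
        One way to answer it (a sketch; assumes your dmesg supports -T for wall-clock timestamps, so the dirtied-inode lines can be matched against the noflushd spin-up times):

        root@RN628X:~# /bin/echo 1 > /proc/sys/vm/block_dump
        root@RN628X:~# # ... wait for one spin-down/spin-up cycle, then:
        root@RN628X:~# dmesg -T | grep dirtied | tail -n 20
        root@RN628X:~# /bin/echo 0 > /proc/sys/vm/block_dump

        The process names stamped just before each spin-up are the ones to chase.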
