
Forum Discussion

jedas's avatar
jedas
Aspirant
Dec 02, 2014

Disk spin-down problem, RN104, fw 6.2.0

Hello,

I'm using an RN104 with 2 x 4TB WD Red drives in JBOD/Flex mode (I don't need RAID). No apps are installed, and the Samba and DLNA services are stopped to debug this issue; HTTP/HTTPS/SSH are enabled. I've enabled disk spin-down after 5 minutes from the GUI menu. The system log shows that the disks stop, but then spin up again 4-8 seconds later.


Disk [0] going to standby...
Disk [0] spinning up...
Disk [0] going to standby...
Disk [0] spinning up...


After enabling the write log (echo 1 > /proc/sys/vm/block_dump) I see this in dmesg:
md0_raid1(694): WRITE block 8 on sda1 (1 sectors)
md0_raid1(694): WRITE block 8 on sdb1 (1 sectors)
jbd2/md0-8(719): WRITE block 3684640 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684648 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684656 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684664 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684672 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684680 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684688 on md0 (8 sectors)
jbd2/md0-8(719): WRITE block 3684696 on md0 (8 sectors)
leafp2p(1282): WRITE block 775816 on md0 (8 sectors)
md0_raid1(694): WRITE block 8 on sda1 (1 sectors)
md0_raid1(694): WRITE block 8 on sdb1 (1 sectors)


I believe these writes wake up the disks, because they happen every 4-8 seconds. I've tried killing the leafp2p process; it doesn't seem to be the source of the problem. I suspect these writes come from the kernel, but I don't know much about the RAID internals. The same thing happens when I run "hdparm -y /dev/sda": the disk spins down, then wakes up a second later. Please advise.
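For anyone debugging the same thing: once block_dump is on, the dmesg lines can be tallied to see which process is writing most often. A minimal sketch, working from a saved capture ("writes.log" is a hypothetical filename; the sample lines are copied from the output above — on the NAS you would capture the real thing with dmesg instead):

```shell
# Sample of the logged lines, saved to a file (hypothetical name "writes.log");
# on the NAS you would instead run: dmesg > writes.log
cat > writes.log <<'EOF'
md0_raid1(694): WRITE block 8 on sda1 (1 sectors)
jbd2/md0-8(719): WRITE block 3684640 on md0 (8 sectors)
leafp2p(1282): WRITE block 775816 on md0 (8 sectors)
md0_raid1(694): WRITE block 8 on sda1 (1 sectors)
EOF

# Tally writes per process name (everything before the "(pid)"),
# most frequent writer first.
awk -F'(' '{print $1}' writes.log | sort | uniq -c | sort -rn
```

On this sample, md0_raid1 comes out on top, which matches the suspicion that the writes are the md layer's own and not an app.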

23 Replies

  • mdgm wrote:
    Disk 2 is a WD Green disk. Have you tried disabling the WDIDLE3 timer on that?


    Yes, I did that the first day I had it. Before upgrading to 6.2.1, this drive did spin down.
  • After some research and some lost time, I've gotten disk spin-down working for me.

    I'll explain what my problems were and how I resolved them; maybe it will help someone.

    I apologize for my English; I'm using a translator plus what little I know.

    After checking the logs created with
     echo 1 > /proc/sys/vm/block_dump


    I saw that mdadm was constantly working on the sda2 and sdc1 partitions, which was preventing spin-down.

    Reviewing "mdstat.log" I saw that "md0" was missing the partition "sdc1" and "md1" was missing "sda2". I don't know why, but it was clear this was not right for how the NAS is set up.

    unused devices: <none>
    /dev/md/0:
    Version : 1.2
    Creation Time : Fri Dec 12 23:01:27 2014
    Raid Level : raid1
    Array Size : 4190208 (4.00 GiB 4.29 GB)
    Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
    Raid Devices : 3
    Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Dec 26 21:23:12 2014
    State : clean
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0

    Name : 0e361548:0 (local to host 0e361548)
    UUID : 49a8e5ae:bd76be6f:4afd44b6:12b6b985
    Events : 993

    Number Major Minor RaidDevice State
    0 8 49 0 active sync /dev/sdd1
    1 8 17 1 active sync /dev/sdb1
    3 8 1 2 active sync /dev/sda1
    /dev/md/1:
    Version : 1.2
    Creation Time : Wed Dec 17 00:11:37 2014
    Raid Level : raid1
    Array Size : 523712 (511.52 MiB 536.28 MB)
    Used Dev Size : 523712 (511.52 MiB 536.28 MB)
    Raid Devices : 3
    Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Dec 26 20:34:35 2014
    State : clean
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0

    Name : 0e361548:1 (local to host 0e361548)
    UUID : 32018819:4936abb9:daf4b9cd:1dfe33a6
    Events : 20

    Number Major Minor RaidDevice State
    0 8 18 0 active sync /dev/sdb2
    1 8 34 1 active sync /dev/sdc2
    2 8 50 2 active sync /dev/sdd2


    What I did was add each partition back to its corresponding RAID array. This left those partitions as "spare", but spin-down now works well and I don't want to experiment further. I know the disk could be promoted with the --grow option, but I'll leave it as it is.

    /dev/md/0:
    Version : 1.2
    Creation Time : Fri Dec 12 23:01:27 2014
    Raid Level : raid1
    Array Size : 4190208 (4.00 GiB 4.29 GB)
    Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
    Raid Devices : 3
    Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Dec 26 22:46:11 2014
    State : clean
    Active Devices : 3
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 1

    Name : 0e361548:0 (local to host 0e361548)
    UUID : 49a8e5ae:bd76be6f:4afd44b6:12b6b985
    Events : 996

    Number Major Minor RaidDevice State
    0 8 49 0 active sync /dev/sdd1
    1 8 17 1 active sync /dev/sdb1
    3 8 1 2 active sync /dev/sda1

    4 8 33 - spare /dev/sdc1
    /dev/md/1:
    Version : 1.2
    Creation Time : Wed Dec 17 00:11:37 2014
    Raid Level : raid1
    Array Size : 523712 (511.52 MiB 536.28 MB)
    Used Dev Size : 523712 (511.52 MiB 536.28 MB)
    Raid Devices : 3
    Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Dec 26 22:46:06 2014
    State : clean
    Active Devices : 3
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 1

    Name : 0e361548:1 (local to host 0e361548)
    UUID : 32018819:4936abb9:daf4b9cd:1dfe33a6
    Events : 22

    Number Major Minor RaidDevice State
    0 8 18 0 active sync /dev/sdb2
    1 8 34 1 active sync /dev/sdc2
    2 8 50 2 active sync /dev/sdd2

    3 8 2 - spare /dev/sda2


    I did all this without a data backup (at my own risk, knowing the danger of losing data), because I have no space for one. :-) My data are not vital, but I'd rather not lose them, so I won't attempt to remove the "spare".

    Power consumption has gone from 40W down to 13W when the NAS is idle.
    Now I'm fully satisfied with this product; it's exactly what I needed.
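    For reference, the check and the re-add described above can be sketched like this. The filename "md0-detail.txt" is hypothetical and its member table is abridged from the output in this post; the mdadm commands themselves are shown only as comments, because they modify the array and should be run as root on the NAS itself, not copied blindly:

```shell
# Abridged member table from "mdadm --detail /dev/md0" above,
# saved to a hypothetical file "md0-detail.txt".
cat > md0-detail.txt <<'EOF'
0 8 49 0 active sync /dev/sdd1
1 8 17 1 active sync /dev/sdb1
3 8 1 2 active sync /dev/sda1
4 8 33 - spare /dev/sdc1
EOF

# Count members that are only "spare" instead of "active sync".
grep -c 'spare' md0-detail.txt

# On the NAS, re-adding a missing member is something like (run as root;
# this is the step that left sdc1 as a spare in the post above):
#   mdadm /dev/md0 --add /dev/sdc1
# Promoting a spare to an active mirror would then use --grow, e.g.:
#   mdadm --grow /dev/md0 --raid-devices=4
```

    A non-zero spare count here is the symptom the poster found: a member that md keeps touching without it being part of the active mirror.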
  • Wow, you are very observant to have found this. I think Netgear should be interested in understanding the sequence of events that led up to it. I'm not sure I'd call your configuration a good one, though. I don't believe any of the Netgear configurations would set partitions as spares, and you might find that this causes a big problem later when you try to add a disk or replace a bad one. I suspect Netgear support would either manually add those partitions as active, or they would want you to do a full reinit of the system.

    I checked my system and don't see this problem, but I wanted to mention that my md1 is RAID6 while yours is RAID1. I'm running OS 6.2.2 on an older Pro6, but I don't see why this would be intentionally different. I understand you know this, but if I were you I would get a backup solution, then use it to reinit and reload this system.

    steve
