jedas
Dec 02, 2014Aspirant
Disk spin-down problem, RN104, fw 6.2.0
Hello, I'm using an RN104 with 2 x 4TB WD RED drives in JBOD/Flex mode (I don't need RAID). No apps are installed, and the samba and dlna services are stopped to debug this issue. HTTP/HTTPS/SSH are enabled. I've enabled disk spin down...
algiam
Dec 26, 2014Aspirant
After some research and lost time, I've gotten disk spin-down working for me.
I'll explain my problems and how I resolved them; maybe it will help someone. I apologize for my English: I'm using a translator, plus what little I know.
After checking the logs created with
echo 1 > /proc/sys/vm/block_dump
I saw that mdadm was constantly working on the sda2 and sdc1 partitions, which prevented spin-down.
Reviewing "mdstat.log" I saw that "md0" was missing the partition "sdc1" and that "md1" was missing the partition "sda2". I don't know the reason, but it was clear this was not right for how the NAS should be set up.
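For anyone wanting to reproduce this check, a rough sketch of the commands involved (run as root on the NAS; the device names sda2/sdc1 are from my case and will differ on yours):

```shell
# Log every block I/O to the kernel log (this sysctl exists on the 6.x
# firmware's kernel; it was removed in much newer kernels)
echo 1 > /proc/sys/vm/block_dump

# After a while, look for the processes still touching the disks
dmesg | grep -E 'sda2|sdc1'

# Compare actual array membership against what the NAS expects
cat /proc/mdstat
mdadm --detail /dev/md0 /dev/md1

# Turn the logging back off when done
echo 0 > /proc/sys/vm/block_dump
```

The `mdadm --detail` output is what is shown below.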
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Fri Dec 12 23:01:27 2014
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Dec 26 21:23:12 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Name : 0e361548:0 (local to host 0e361548)
UUID : 49a8e5ae:bd76be6f:4afd44b6:12b6b985
Events : 993
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 17 1 active sync /dev/sdb1
3 8 1 2 active sync /dev/sda1
/dev/md/1:
Version : 1.2
Creation Time : Wed Dec 17 00:11:37 2014
Raid Level : raid1
Array Size : 523712 (511.52 MiB 536.28 MB)
Used Dev Size : 523712 (511.52 MiB 536.28 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Dec 26 20:34:35 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Name : 0e361548:1 (local to host 0e361548)
UUID : 32018819:4936abb9:daf4b9cd:1dfe33a6
Events : 20
Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
2 8 50 2 active sync /dev/sdd2
What I did was add each partition back to its corresponding RAID array. This process left those partitions as "spare", but spin-down now works well and I don't want to experiment further. You could promote the spare with the --grow option, but I'll leave it like that.
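The re-add step above, sketched as commands (again as root, with my device names — check your own `mdadm --detail` output first; the --grow line is the optional promotion I chose not to do):

```shell
# Re-attach the missing partitions to their arrays; with the array
# already at its full raid-devices count, they join as spares
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sda2

# Optional: raise the mirror count so the spare becomes an active member
# mdadm --grow /dev/md0 --raid-devices=4
```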
/dev/md/0:
Version : 1.2
Creation Time : Fri Dec 12 23:01:27 2014
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Dec 26 22:46:11 2014
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Name : 0e361548:0 (local to host 0e361548)
UUID : 49a8e5ae:bd76be6f:4afd44b6:12b6b985
Events : 996
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 17 1 active sync /dev/sdb1
3 8 1 2 active sync /dev/sda1
4 8 33 - spare /dev/sdc1
/dev/md/1:
Version : 1.2
Creation Time : Wed Dec 17 00:11:37 2014
Raid Level : raid1
Array Size : 523712 (511.52 MiB 536.28 MB)
Used Dev Size : 523712 (511.52 MiB 536.28 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Dec 26 22:46:06 2014
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Name : 0e361548:1 (local to host 0e361548)
UUID : 32018819:4936abb9:daf4b9cd:1dfe33a6
Events : 22
Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
2 8 50 2 active sync /dev/sdd2
3 8 2 - spare /dev/sda2
I did all this with no data backup (at my own risk, knowing the danger of losing data), but I have no space for a backup. :-) My data is not vital, but I'd rather not lose it, so I will not attempt to remove the "spare".
I've gone from a consumption of 40W down to 13W when the NAS is not in use.
Now I'm fully satisfied with this product; it's exactly what I needed.