
ReadyNAS 104 with 4 Caviar RED 3TB / High Load_Cycle_Count

c6p0
Aspirant


After 2 weeks of usage, I was very worried by the high Load_Cycle_Count on my drives:

Welcome to ReadyNASOS 6.1.5

root@RN104:~# smartctl -a /dev/sda | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8813
root@RN104:~# smartctl -a /dev/sdb | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8643
root@RN104:~# smartctl -a /dev/sdc | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8517
root@RN104:~# smartctl -a /dev/sdd | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8751
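
By the way, you can run the same check on all four disks with one loop instead of four commands; a quick sketch of the same smartctl call as above:

for d in /dev/sd[abcd]; do echo "$d:"; smartctl -a "$d" | grep Load_Cycle_Count; done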


After hours of reading on the web, I found the solution.
Nothing to install; everything needed already ships with the firmware.
Just log in to the ReadyNAS via SSH.
I checked the disk parameters with hdparm (please read http://man7.org/linux/man-pages/man8/hdparm.8.html).

root@RN104:~# hdparm

hdparm - get/set hard disk parameters - version v9.39, by Mark Lord.

Usage: hdparm [options] [device ...]

-J Get/set the Western Digital (WD) Green Drive's "idle3" timeout value. This timeout controls how often the drive parks its
heads and enters a low power consumption state. The factory default is eight (8) seconds, which is a very poor choice for
use with Linux. Leaving it at the default will result in hundreds of thousands of head load/unload cycles in a very
short period of time. The drive mechanism is only rated for 300,000 to 1,000,000 cycles, so leaving it at the default
could result in premature failure, not to mention the performance impact of the drive often having to wake-up before
doing routine I/O.

WD supply a WDIDLE3.EXE DOS utility for tweaking this setting, and you should use that program instead of hdparm if at all
possible. The reverse-engineered implementation in hdparm is not as complete as the original official program, even though
it does seem to work on at least a few drives. A full power cycle is required for any change in setting to take effect,
regardless of which program is used to tweak things.

A setting of 30 seconds is recommended for Linux use. Permitted values are from 8 to 12 seconds, and from 30 to 300
seconds in 30-second increments. Specify a value of zero (0) to disable the WD idle3 timer completely (NOT RECOMMENDED!).


root@RN104:~# hdparm -J /dev/sda
/dev/sda: wdidle3 = 8.0 secs

root@RN104:~# hdparm -J /dev/sdb
/dev/sdb: wdidle3 = 8.0 secs

root@RN104:~# hdparm -J /dev/sdc
/dev/sdc: wdidle3 = 8.0 secs

root@RN104:~# hdparm -J /dev/sdd
/dev/sdd: wdidle3 = 8.0 secs


I set it to 30 seconds instead of the default 8 seconds, as recommended:

root@RN104:~# hdparm -J 30 --please-destroy-my-drive /dev/sda
/dev/sda:
setting wdidle3 to 30 secs (or 12.9 secs for older drives)
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# hdparm -J 30 --please-destroy-my-drive /dev/sdb
/dev/sdb:
setting wdidle3 to 30 secs (or 12.9 secs for older drives)
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# hdparm -J 30 --please-destroy-my-drive /dev/sdc
/dev/sdc:
setting wdidle3 to 30 secs (or 12.9 secs for older drives)
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# hdparm -J 30 --please-destroy-my-drive /dev/sdd
/dev/sdd:
setting wdidle3 to 30 secs (or 12.9 secs for older drives)
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# reboot
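
For the record, the four hdparm set commands above can be collapsed into one loop (a sketch, using the exact same flags):

for d in /dev/sd[abcd]; do hdparm -J 30 --please-destroy-my-drive "$d"; done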


I checked that the new settings are retained after reboot:

Welcome to ReadyNASOS 6.1.5

root@RN104:~# hdparm -J /dev/sda
/dev/sda:
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# hdparm -J /dev/sdb
/dev/sdb:
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# hdparm -J /dev/sdc
/dev/sdc:
wdidle3 = 30 secs (or 12.9 secs for older drives)

root@RN104:~# hdparm -J /dev/sdd
/dev/sdd:
wdidle3 = 30 secs (or 12.9 secs for older drives)


Here are the same counters 2 days after running with the new settings:

root@RN104:~# smartctl -a /dev/sda | grep -e "Device Model" -e "overall-health" -e "Raw_Read_Error_Rate" -e "Reallocated_Sector_Ct" -e "Temperature_Celsius" -e "Load_Cycle_Count"
Device Model: WDC WD30EFRX-68EUZN0
SMART overall-health self-assessment test result: PASSED
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8814
194 Temperature_Celsius 0x0022 110 108 000 Old_age Always - 40

root@RN104:~# smartctl -a /dev/sdb | grep -e "Device Model" -e "overall-health" -e "Raw_Read_Error_Rate" -e "Reallocated_Sector_Ct" -e "Temperature_Celsius" -e "Load_Cycle_Count"
Device Model: WDC WD30EFRX-68EUZN0
SMART overall-health self-assessment test result: PASSED
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8644
194 Temperature_Celsius 0x0022 108 106 000 Old_age Always - 42

root@RN104:~# smartctl -a /dev/sdc | grep -e "Device Model" -e "overall-health" -e "Raw_Read_Error_Rate" -e "Reallocated_Sector_Ct" -e "Temperature_Celsius" -e "Load_Cycle_Count"
Device Model: WDC WD30EFRX-68EUZN0
SMART overall-health self-assessment test result: PASSED
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8518
194 Temperature_Celsius 0x0022 108 105 000 Old_age Always - 42

root@RN104:~# smartctl -a /dev/sdd | grep -e "Device Model" -e "overall-health" -e "Raw_Read_Error_Rate" -e "Reallocated_Sector_Ct" -e "Temperature_Celsius" -e "Load_Cycle_Count"
Device Model: WDC WD30EFRX-68EUZN0
SMART overall-health self-assessment test result: PASSED
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8752
194 Temperature_Celsius 0x0022 111 108 000 Old_age Always - 39
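
If you want to keep an eye on the rate going forward, you can compute the average load cycles per day from the SMART raw values; a rough awk sketch (it assumes Power_On_Hours, the standard SMART attribute 9, and the last-column raw values shown above):

smartctl -A /dev/sda | awk '/Load_Cycle_Count/ {lcc=$NF} /Power_On_Hours/ {poh=$NF} END {if (poh) print lcc / (poh / 24), "load cycles per day on average"}'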


I feel happier now 😄

I hope this will help some of you who may have the same issue.
Message 1 of 6
portucale
Aspirant

Re: ReadyNAS 104 with 4 Caviar RED 3TB / High Load_Cycle_Count

Looks like WD has changed the default values.

Check my post here:
https://www.readynas.com/forum/viewtopic.php?f=24&t=73417&p=421094#p421094

PS: thanks for your Linux commands on the ReadyNAS 😄
Message 2 of 6
InteXX
Luminary

Re: ReadyNAS 104 with 4 Caviar RED 3TB / High Load_Cycle_Count

Hi c6p0

I'm having trouble spotting the improvement between your first output and your output 2 weeks later. I hate to seem dull, but could you point it out to me?

For example, first I see this:

root@RN104:~# smartctl -a /dev/sda | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8813


And then I see this:

root@RN104:~# smartctl -a /dev/sda | grep -e "Device Model" -e "overall-health" -e "Raw_Read_Error_Rate" -e "Reallocated_Sector_Ct" -e "Temperature_Celsius" -e "Load_Cycle_Count"
193 Load_Cycle_Count 0x0032 198 198 000 Old_age Always - 8814


I'm not sure what Load_Cycle_Count is, but it seems to be about the same between the two; in fact it has incremented by only 1 (as have the other three).

Am I looking at it wrong? What does it mean?

Thanks,
Jeff Bowman
Fairbanks, Alaska
Message 3 of 6
StephenB
Guru

Re: ReadyNAS 104 with 4 Caviar RED 3TB / High Load_Cycle_Count

The two results are 2 days apart, not 2 weeks apart. He posted the second result to show the rate of load cycles had dropped to near zero (i.e., only 1 more over the 2-day period).

Note these posts are old. My WDC Red drives came with the head-parking timer set to 300 sec., so I didn't need to reset them. This includes drives I purchased this fall, as well as some I purchased well before these posts were made. There was a short period when WDC apparently shipped drives with inappropriate settings, but that was quickly corrected.

If you are concerned about this, perhaps post the power-on hours and the load cycle count for your disks. If you download the logs, you'll find the information in disk_info.log.

Load cycle count is explained in the thread - the disk heads are parked/unparked by the drive firmware, and every time that happens the load cycle count is incremented. That saves power, but shouldn't be done too frequently. It certainly happens every time the disks spin down/spin up. So with the default settings, the load cycle count should be nearly identical to the start/stop count.
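
Something like this should pull both numbers at once (replace /dev/sdX with each of your disks; Power_On_Hours is the standard SMART attribute 9, with raw values in the last column):

smartctl -A /dev/sdX | grep -E "Power_On_Hours|Load_Cycle_Count"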
Message 4 of 6
InteXX
Luminary

Re: ReadyNAS 104 with 4 Caviar RED 3TB / High Load_Cycle_Count

StephenB wrote:
The two results are 2 days apart, not 2 weeks apart.

Right. How'd I mix that up? Must've been thinking of his opening statement.


StephenB wrote:
He posted the second result to show the rate of load cycles had dropped to near zero (e.g., only 1 more in the 2 day period).

OK, got it. In only two weeks his initial count was already high: 8813, to be exact, or roughly 630 load cycles per day. After the change it rose by only one in 2 days. Quite a significant drop!


StephenB wrote:
If you are concerned about this, perhaps post the power-on hours and the load cycle count for your disks. If you download the logs, you'll find the information in disk_info.log.

At first I thought I was going to be, but I suppose not, after your comments in the other thread (viewtopic.php?f=66&t=79261&p=446172). All the same, it wouldn't hurt to have a look one of these days.


StephenB wrote:
Load cycle count is explained in the thread - the disk heads are parked/unparked by the drive firmware, and every time that happens the load cycle count is incremented.

Had to reread that to get it, but it makes sense now.

Thanks,
Jeff Bowman
Fairbanks, Alaska
Message 5 of 6
mads0100
Guide

Re: ReadyNAS 104 with 4 Caviar RED 3TB / High Load_Cycle_Count

Thank you for posting this thread.
Message 6 of 6