Forum Discussion
atz6975
Sep 20, 2008 (Guide)
WD10EACS: Huge Load Cycle Count [NOT SOLVED] in 4.2.9
Hi,
Could ze jedis give their opinion on the following situation:
While checking my SMART values after a fourth drive expansion (4x1TB), I found that my three other drives, which are 7 months old, have a Load Cycle Count of over 200,000:
Raw Read Error Rate 0
Spin Up Time 7725
Start Stop Count 834
Reallocated Sector Count 0
Seek Error Rate 0
Power On Hours 4995
Spin Retry Count 0
Calibration Retry Count 0
Power Cycle Count 9
Power-Off Retract Count 7
Load Cycle Count 218854
Temperature Celsius 34
Reallocated Event Count 0
Current Pending Sector 0
Offline Uncorrectable 0
UDMA CRC Error Count 0
Multi Zone Error Rate 0
ATA Error Count 0
While checking some forums, I found that this situation is somewhat disturbing, or at least confusing...
It is stated that:
1) This kind of drive (WD GP) is designed for 300k-500k load cycles.
2) Many Linux environments generate a very high Load Cycle Count, and users are worried about the rate of growth (76+ per hour); see the sketch after this list.
3) WD has issued WDIDLE3, a DOS-only, SATA-only tool that can enable, change, or disable the current default 8-second Intellipark timer.
4) This could also somehow be achieved with hdparm.
5) Some Unix HD flushing mechanism must also be considered.
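For anyone who wants to measure their own growth rate, here is a minimal sketch. It assumes smartmontools (smartctl) is installed, that it runs as root, and that the drive appears as /dev/sda (the device path and the 30-minute interval are placeholders to adjust); it reads SMART attribute 193 (Load_Cycle_Count) twice and estimates cycles per hour.

import re
import subprocess
import time

DEVICE = "/dev/sda"    # assumption: change to your WD10EACS device node
INTERVAL_S = 30 * 60   # sample 30 minutes apart

def read_lcc(device):
    # Return the raw Load_Cycle_Count (SMART attribute 193) for the device.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if re.match(r"\s*193\s+Load_Cycle_Count\b", line):
            return int(line.split()[-1])   # RAW_VALUE is the last column
    raise RuntimeError("Load_Cycle_Count attribute not found")

first = read_lcc(DEVICE)
time.sleep(INTERVAL_S)
second = read_lcc(DEVICE)
rate = (second - first) * 3600 / INTERVAL_S
print(f"LCC went from {first} to {second}: ~{rate:.0f} cycles/hour")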
Please, please, please: as you know, many of us are happy users of these disks (even with spin-down off). Could you create an add-on, fix, or statement on how to avoid this situation?
I'm more than ready to disable "Intellipark" and run WDIDLE3 on many, many, many drives.
...or to run away from these drives (although the $$$$ losses will hurt)...
To all WD10EACS and other WD GP drive users: please post your Load Cycle Counts (LCC) here to help assess the situation.
Thank you.
29 Replies
Replies have been turned off for this discussion
- Lichon (Aspirant)
A quick guess: you don't use the BT client, right?
sander wrote: I'm wondering if it's a ReadyNAS issue. I have 2 ATA WDC WD10EACS-00Ds and the numbers are much lower:
- sander11 (Aspirant)
Sorry I wasn't clear. Those numbers are from another server I have set up. They're not from the ReadyNAS.
When I fire up my WD Mybook World Edition again, I'll check the numbers on that, as it has seen more use and has WD10EACS drives in it.
I've invested in 6 of these drives, with the goal of swapping out the 4 Seagate 500GB drives in my NV+ with 4 of them and using the others as backups in other units.
I'm a little disturbed by this thread as well, but given that I have these drives approaching 1500 hours of use and more than 50 reloads, it has to be a firmware or ReadyNAS (or both) issue, not just the drives.
- pho (Aspirant)
Jedis? Your take on this, please?
- atz6975 (Guide)
1) These drives (for a totally different reason) are not supported.
2) This occurs with RAIDiator + WD drives and Linux + WD, as opposed to Windows + WD (I wonder about OS X + WD?).
3) The newer firmware seems to fix this (over two advanced RMAs I only got one drive with the new firmware, but I'll be testing the RMA drives with the older firmware).
4) Other Unix-based NAS products (Syn...) faced the same situation and pretty much fixed it.
5) After all, it is only a wear factor, but the drive may fail for a totally different reason.
My point being: if I were to carry a light saber or fancy furry partners, I would probably watch this thread from a distance but certainly not get involved...
PS: Just helped a customer with a Tera..II from Buff..., what a joke :rofl: :neener: :rofl:
EDIT: not my advice!
- pho (Aspirant)
atz6975 wrote:
4)Other ux-based NAS products(Syn..) faced the same situation and pretty much fixed it
So they fixed it in their firmware?
- atz6975 (Guide)
pho wrote:
so they fixed it in their firmware?
Nope, the NAS managed the WD differently.
Anyway, I received two RMA WD drives... alas, they came with the old firmware... same results.
- sander11 (Aspirant)
According to threads like this:
http://fixunix.com/kernel/379138-re-wes ... linux.html
The problem has to do with the Intellipark feature and how Linux interacts with it. If you have the 3rd-generation models (WD10EACS-00D), the problem appears to be fixed (at least on my Linux server, posted earlier in this thread, it isn't an issue).
Apparently the workaround is to use the wdidle3.exe utility from Western Digital to increase the Intellipark window to the maximum of 25 seconds. That appears to solve the problem.
The downside is that you need to run the utility from a DOS boot disk; the upside is that the utility is apparently non-disruptive. So you can power down your ReadyNAS, run the utility on all your drives on another machine, put the drives back in before you boot up again, and you should be good.
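For a feel for what the timer change could do, here is a tiny back-of-the-envelope sketch (pure arithmetic, assuming the hypothetical worst case where the drive is idle for essentially the whole hour and parks once per idle window; real rates depend entirely on how often the OS touches the disk):

# Theoretical upper bound on load cycles per hour for two Intellipark timers.
for timer_s in (8, 25.5):
    print(f"{timer_s:>4}s idle timer: at most {3600 / timer_s:.0f} parks/hour")
# Roughly 450 parks/hour at the default 8 s versus about 141 at 25.5 s.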
I haven't tried this, but I'm doing a factory reset soon to enable online expansion, so I'm going to do it then.
- JimTheKiwi (Aspirant)
To help us understand the consequences (in the absence of official help from WD), here is my data point:
I seem to be an extreme case because I bought early and have hardly ever turned the drives off, even though my workload is very low. My load cycle count is now 540,000-610,000 (around twice WD's claimed design limit of 300,000), but everything else appears to be OK. I am a little surprised that the drives have very different "Power On Hours" figures for not-so-different load cycle counts: all four drives have been in the same NV+ system, powered on for just over a year (about 8500-9500 hours elapsed time), yet they report a wide range from 5100 to 8500 power-on hours; perhaps the firmware spreads I/O unevenly. The lowest load cycle count is on the drive with the highest hours, so it seems to be a problem with idle time that gets better with increased activity.
I have not seen any errors, but I don't know what symptoms to expect other than some unknown probability of total drive failure. Would an empty load cycle (i.e. parking the heads when they are already parked) actually cause much less wear and tear than a normal load cycle?
I'm hoping that the best approach may be to wait until one fails, or until I want to upgrade the drives for other reasons (e.g. in another year or two it may be economical to swap in four 2TB or 3TB drives, or buy a new NAS). The low-probability worst case is that one drive totally fails suddenly without SMART warning, and then the RAID recovery from that causes total failure on one or more of the other three, losing the entire contents.
If something happens to my drives I'll post here, and I'll stay subscribed to this thread in case someone else hits a wall first.
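As a rough sanity check on the numbers, here is a quick sketch (the load cycle counts and power-on hours are copied straight from the SMART table below; the 300,000 figure is the design limit quoted earlier in this thread):

# Load cycles per power-on hour for the four drives reported below, and how
# far past the quoted 300,000-cycle design limit each one is.
DESIGN_LIMIT = 300_000
drives = {
    "Disk 1": (539_897, 8495),
    "Disk 2": (567_165, 7302),
    "Disk 3": (610_837, 7403),
    "Disk 4": (596_694, 5118),
}
for name, (lcc, hours) in drives.items():
    print(f"{name}: {lcc / hours:.0f} cycles/hour, "
          f"{lcc / DESIGN_LIMIT:.1f}x the design limit")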
SMART Information          Disk 1               Disk 2               Disk 3               Disk 4
Model:                     WDC WD10EACS-00ZJB0  WDC WD10EACS-00ZJB0  WDC WD10EACS-00ZJB0  WDC WD10EACS-00ZJB0
Serial:                    WD-WCASJ0159784      WD-WCASJ0443907      WD-WCASJ0346720      WD-WCASJ0443775
Firmware:                  01.01B01             01.01B01             01.01B01             01.01B01
SMART Attribute
Raw Read Error Rate        0        0        0        0
Spin Up Time               0        0        0        0
Start Stop Count           5        5        5        5
Reallocated Sector Count   0        0        0        0
Seek Error Rate            0        0        0        0
Power On Hours             8495     7302     7403     5118
Spin Retry Count           0        0        0        0
Calibration Retry Count    0        0        0        0
Power Cycle Count          5        5        5        5
Power-Off Retract Count    4        4        4        4
Load Cycle Count           539897   567165   610837   596694
Temperature Celsius        33       35       35       33
Reallocated Event Count    0        0        0        0
Current Pending Sector     0        0        0        0
Offline Uncorrectable      0        0        0        0
UDMA CRC Error Count       0        0        0        0
Multi Zone Error Rate      0        0        0        0
ATA Error Count            0        0        0        0
Extended Attribute
Hot-add events             0        0        0        0
Hot-remove events          0        0        0        0
Lp stat events             0        0        0        0
Power glitches             0        0        0        0
Hard disk resets           0        0        0        0
Retries                    0        0        0        0
Repaired sectors           0        0        0        0
- snipes (Aspirant)
My LCC counts were all approaching 300,000 so I decided it was time to take action.
I tried using the max timer value of 25.5 seconds with wdidle3, but I still found my LCC increasing, so I disabled the timer on all my drives. No further increases in this value since I did that a few days ago.
As mentioned earlier, this is a non-destructive change. It just required pulling each drive out one at a time, inserting it into a PC, and then booting into DOS to run the utility. The entire process took a bit of time, but at least it didn't require a backup/restore.
- sander11 (Aspirant)
I was going to do this once I could borrow a machine that has SATA. How much was it going up when you raised the limit to 25 seconds? Because I was under the impression it would slow the reloads to an acceptable level.