Forum Discussion
Nantuc
Jun 09, 2017 · Star
ReadyNAS 204 - Seagate 8TB Archive Drives Keep Failing
I have had nothing but problems using my ReadyNAS 204 and 4x Seagate 8TB Archive Drives. I would have no problem with a drive failure taking two weeks to re-initialize as long as it was truly a dri...
- Jun 21, 2017
Thanks for the feedback everyone. The original issue was the failing drives. My answer was the firmware update to v6.7.4. I was just about to spend a ton of money only because I could not keep the drives from failing over and over. I use the NAS simply for storing videos and rarely, if ever, need to delete any files, just add to them. The RAID stripe is all the redundancy that I require, and it has proven reliable despite over 10 drive failures since last Xmas.
As long as the drives remain as stable as they have for the last week, even after moving new videos to the NAS each day, I am perfectly happy with the performance of the 8TB Seagate Archive drives in my ReadyNAS 204. Hopefully subsequent firmware updates won't undo whatever fixed this problem (fingers crossed).
I consider this thread closed. If the drives become unstable in the future, or I need to upgrade, I will definitely get NAS-ready drives.
Thanks again,
StephenB
Jun 10, 2017 · Guru - Experienced User
A lot of others had similar issues when these drives first came onto the market. Seagate clearly states that the drives are not recommended for surveillance or NAS, but of course people often don't check out the data sheets.
JBDragon1 wrote:
Have you ever had to replace a shingle on your house? It's not as simple as laying new shingles in the first place.
That's a perfect visualization.
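To put numbers on the shingle analogy: on an SMR drive, tracks overlap like roof shingles, so rewriting a block in place forces a read-modify-write of everything from that block to the end of its zone. A minimal sketch, with illustrative zone and block sizes (not the Archive drive's actual geometry):

```python
# Rough model of why in-place rewrites hurt on SMR (shingled) drives.
# Zone and block sizes are illustrative, not the Archive drive's real geometry.

ZONE_MB = 256    # assumed size of one shingled zone
BLOCK_MB = 4     # assumed size of the block we want to rewrite

def rewrite_cost_mb(offset_mb: int) -> int:
    """Rewriting a block inside a shingled zone forces a read-modify-write
    of everything from that offset to the end of the zone, because later
    tracks overlap earlier ones like roof shingles."""
    return ZONE_MB - offset_mb

# A 4 MB change near the start of a zone rewrites the whole 256 MB zone
# (64x amplification); a conventional CMR drive would just write 4 MB.
print(rewrite_cost_mb(0))      # 256
print(rewrite_cost_mb(252))    # 4 -- only cheap at the zone's tail
```

Drive-managed SMR hides this behind a small persistent cache, but sustained write loads like a RAID resync keep that cache full, which is a plausible reason rebuilds on these drives took weeks.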
Nantuc
Jun 10, 2017 · Star
OK, thanks for the feedback. I did look around on the internet, and the only drawback I saw mentioned was how long a RAID rebuild would take after a drive failure. I had no problem with that. I did not, however, see anything about the drives consistently failing over and over. I will replace the lot with 10TB Seagate IronWolf Pros.
By the way, the rebuild time after the update to 6.7.4 has been significantly reduced, from about two weeks to just over two days!
- JBDragon1 · Jun 11, 2017 · Virtuoso
So wait, you tried to go the cheap route with the 8TB Archive drives, and now you want to go with 10TB Seagate IronWolf Pros? So the complete opposite on price, and even larger. Those should work, but they really are complete overkill. Your bottleneck is your 1-gigabit connection. The slower regular WD Red NAS drives or the regular Seagate NAS drives will save you money. The WD drives are 5400 RPM and the Seagates are 5900 RPM. I'm using four of the Reds and two of the Seagates, and I can easily max out my gigabit network on file transfers.
Unless you have 10-gigabit Ethernet on your NAS with a number of users, it really is overkill; unless, that is, you plan to upgrade your NAS in six months to a year and move the HDDs over to a much faster NAS that could make use of them. Though I have to say, the price difference between a regular 10TB WD Red and a 10TB Seagate IronWolf Pro isn't all that much, and a WD Red Pro is even more costly. Seagate drives are cheaper, but I think WD has better reliability.
The slower NAS HDDs have a few benefits: they generate less heat, which is good in a cramped NAS unit; they run quieter; and they use less power. Running slower and cooler, they should also last longer. That's my opinion. Three of my WD Red drives are over four years old now and showing zero errors.
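A quick back-of-the-envelope check of that gigabit ceiling, using round numbers and ballpark drive speeds:

```python
# Rough check of the gigabit bottleneck (round numbers, assumptions noted).
link_bits_per_sec = 1_000_000_000                 # 1 GbE line rate
theoretical_mb_s = link_bits_per_sec / 8 / 1e6    # 125 MB/s
practical_mb_s = theoretical_mb_s * 0.9           # ~112 MB/s after protocol overhead (rough)

# A single modern 5400-5900 RPM NAS drive sustains very roughly 150-180 MB/s
# on sequential transfers, so even one drive can fill the link; a 4-disk
# array certainly can, and faster 7200 RPM drives buy nothing over 1 GbE.
print(f"~{theoretical_mb_s:.0f} MB/s theoretical, ~{practical_mb_s:.0f} MB/s practical")
```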
- Nantuc · Jun 14, 2017 · Star
It isn't the speed I am after so much as the extra 2TB of capacity and, even more so, the quality of the disk platters. I read that enterprise-class drives are made from much higher-grade material. That's also why the warranty is 5 years for the Pro, instead of 3 for the non-Pro version.
- JBDragon1 · Jun 14, 2017 · Virtuoso
I'm all for Pro if you have the money to get them! I'm over four years now on two of my four WD Red drives, with one at almost four years and the other at three. I didn't want to buy all the drives at once, since you could end up with a bad batch, have more than one HDD fail at the same time, and lose all your data. Plus there's wear and tear on an HDD whose space you didn't need at the time. So I started with two, then three, and finally four, which was all my old NAS could hold until I got the 516. Those four drives moved over to it, then I added a Seagate NAS drive, and when that got close to full, a second Seagate NAS drive. Six HDDs total, and now full. All show zero errors.
I've heard of too many people having several HDDs that they bought at the same time fail at once, so I think this is the safer way to go. Don't pack your NAS full all at once; buy and pop in a new HDD as you need it. If you want a spare, buy one and hold onto it, ready to pop in, but again, buy it at a later date or from a different vendor so you're not getting one from the same batch. That's my recommendation.
- StephenB · Jun 11, 2017 · Guru - Experienced User
Nantuc wrote:
I will replace the lot with 10TB Seagate IronWolf Pros.
Those are of course enterprise-class drives. Though I use Western Digital myself, folks here who use IronWolf and IronWolf Pros have been pretty happy with them. Some users have found them to be very noisy, though; if you experience that, contact Seagate support right away, as there might be drive firmware that would fix it.
Seagate specs the operating power at 6.8 watts and the idle power at 4.2 watts. WDC 10TB Red specs say the operating power is 6.2 watts, which isn't really that big a difference. But the idle power is only 2.8 watts, which would give you lower temperatures.
As far as performance goes, large file transfers are limited by your network and perhaps the ARM CPU. You might see a performance boost for directory searches and other functions that require a lot of seeking.
Though they might be overkill, they are suitable for your NAS (unlike the SMR drives).
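For scale, the idle-power gap across a four-bay unit, using those spec-sheet numbers (my arithmetic, not a measurement):

```python
# Idle-power gap for a 4-bay NAS, using the spec-sheet numbers quoted above.
DRIVES = 4
ironwolf_pro_idle_w = 4.2   # Seagate 10TB IronWolf Pro, idle
wd_red_idle_w = 2.8         # WDC 10TB Red, idle

delta_w = DRIVES * (ironwolf_pro_idle_w - wd_red_idle_w)  # 5.6 W extra at idle
kwh_per_year = delta_w * 24 * 365 / 1000                  # ~49 kWh/year if mostly idle

print(f"{delta_w:.1f} W extra at idle, ~{kwh_per_year:.0f} kWh/year")
```

Not a huge cost either way, but in a small enclosure those extra watts show up mostly as heat.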
Nantuc wrote:
By the way, the rebuild time after the update to 6.7.4 has been significantly reduced, from about two weeks to just over two days!
I'm not seeing anything in the release notes about that.
This is with the new drives? They would rebuild a lot faster than the SMR ones.
- Nantuc · Jun 14, 2017 · Star
No, not the new drives; the Archive drives. Typically it had been taking between one and two weeks per rebuild, but the rebuild after the firmware update took only 50 hours.
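Those times imply a large jump in effective rebuild throughput. A rough calculation, assuming the resync walks the full 8TB (an assumption; the actual amount depends on the volume layout):

```python
# Implied rebuild throughput before and after v6.7.4, assuming the resync
# walks roughly the full 8 TB (an assumption about the volume layout).
capacity_bytes = 8e12
before_hours = 14 * 24   # ~two weeks, the pre-update worst case
after_hours = 50         # reported post-update rebuild time

def mb_per_sec(hours: float) -> float:
    return capacity_bytes / (hours * 3600) / 1e6

print(f"before: ~{mb_per_sec(before_hours):.0f} MB/s")  # ~7 MB/s
print(f"after:  ~{mb_per_sec(after_hours):.0f} MB/s")   # ~44 MB/s
```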