Forum Discussion
NASguru
Apr 24, 2020 · Apprentice
shingled magnetic recording (SMR) hard drive fiasco - inquiring on recommendations
It's been a while since I jumped on the forum but what brings me here is my NAS volume utilization is hovering around 65%. I believe it's good until 80% and then starts to bark at you about storage ...
Sandshark
Apr 27, 2020 · Sensei
NASguru wrote:I personally wouldn't mix the two drive technologies as there are numerous reports of raids failing. If at all possible I'd return that drive and go 10TB or larger even though as a single drive you won't see the benefits to an expanded volume. I'm assuming Xraid and all existing drives are 6TBs here.
Correct assumption, though the total situation is more complicated (see below). Unfortunately, I only learned of all this well after I purchased the drive, so I think a return is not possible. I think it was the last scrub, in January, that pushed the old drive over the brink, just a couple of months past warranty. I do have a very comprehensive backup system that does not have the same issue, but this volume is one of the more active ones I have. I have plenty of space across all my volumes (some 20TB remaining before I even go past 80% on any volume), so swapping for larger drives won't benefit me space-wise, as far as I know right now. And I'm not sure if swapping it for a 7200RPM drive would be better or worse.
They are in a 12-bay rack-mount system (I moved the volumes from an RN516 and EDA500 into it), and WD only recommends them for systems of up to 8 bays due to the lack of vibration compensation, but the NAS is in a very solid rack and I don't move it with anything powered on. The other 6 drives are older 3TB Reds, and I have spares for them because the 6TB drives were originally also all 3TB. To start moving them all to drives recommended for a 12-bay NAS or server would be quite expensive. And if done one or two at a time, I'll have a mix of 7200RPM and 5400RPM drives during the process.
In normal use, I don't notice anything terrible about the speed. And the NAS has enough processor power that the scrub really doesn't slow it down much (unlike when one volume was in an EDA500). So, for now, I'm going to wait and see.
NASguru
Apr 28, 2020 · Apprentice
Sandshark wrote:
Correct assumption, though the total situation is more complicated (see below). ... In normal use, I don't notice anything terrible about the speed. ... So, for now, I'm going to wait and see.
Got it, and there's nothing wrong with taking a wait-and-see approach given your volumes are backed up. I also agree backups are the key to preventing any type of unrecoverable situation. Is it possible to move that SMR drive to one of your less-active (or non-active) backup volumes instead? The reason I ask is that SMR drives were originally advertised as archival drives, so they are fine from a storage standpoint and don't run into issues until there is heavy write activity. I don't believe there would be any issues mixing 7200RPM drives with 5400RPM ones, but it may make more sense to place the faster-spinning drives at the physical outside of the drive line-up in your bays? Yeah, I heard in a few reviews that those add-on EDA500s run less than optimally compared to the NAS itself, which is why I actually have two physical NAS units (primary/secondary).
- StephenB · Apr 28, 2020 · Guru - Experienced User
NASguru wrote:
Got it and there's nothing wrong with taking a wait and see approach given your volumes are backed up.

There would be value in keeping it as-is, and probing for performance issues.
I'm a little surprised that raid sync took longer, since simply reconstructing the drive contents should use sequential writes. Though any other writing would disrupt that, and would cause it to take longer.
I'd expect that expansion would take longer, since I don't think reshaping is done with purely sequential writes.
Random writes might be the killer (database updates, torrents, etc) - even if the workload is within the 180 TB/year (500 GB/day?) range in the WD spec.
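For reference, the 180 TB/year rating in the WD spec does work out to roughly the 500 GB/day parenthetical, converting with decimal units as drive spec sheets use:

```python
# Convert WD's 180 TB/year workload rating into a daily write budget.
workload_tb_per_year = 180
gb_per_day = workload_tb_per_year * 1000 / 365  # 1 TB = 1000 GB (decimal)
print(round(gb_per_day))  # 493
```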
FWIW, if the SMR zones are fairly small, then the drives might actually give reasonable performance for most people. The goal wasn't to maximize the space on the platter - instead they were just aiming at getting to 2 TB. So they might have ended up with fairly small SMR zones (or a generous CMR zone).
Though of course they should have disclosed this, and ideally provided a white paper detailing the performance.
- NASguru · Apr 28, 2020 · Apprentice
StephenB wrote:
There would be value in keeping it as-is, and probing for performance issues.
That would depend on how much you want to be a guinea pig with your NAS. Personally, I'm a set-it-and-forget-it kind of guy, so I prefer to stick with what works and then move on to something else. Again, there's nothing wrong with wanting to test it out, though.
I'm a little surprised that raid sync took longer, since simply reconstructing the drive contents should use sequential writes. Though any other writing would disrupt that, and would cause it to take longer.
I'd expect that expansion would take longer, since I don't think reshaping is done with purely sequential writes.
Random writes might be the killer (database updates, torrents, etc) - even if the workload is within the 180 TB/year (500 GB/day?) range in the WD spec.
I've mostly seen the issue with SMRs come up during the resilvering (replacing a drive) process. Depending on which Google results you trust or which NAS you're referencing, the resilvering process is very similar to running a scrub, so it makes sense it would take a LOT longer.
Though of course they should have disclosed this, and ideally provided a white paper detailing the performance.
I concur, which is really the crux of the issue more than anything else. The only other thing I would add is that it may be wise to mark the SMR drive(s) with a Sharpie. My memory isn't what it used to be, so it may be helpful if you ever find yourself in a situation where you need to play musical chairs with hard drives between volumes, or even if you repurpose the drives down the road for standalone applications.
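Beyond the Sharpie, the model number alone is usually enough to tell the drives apart, going by the model list WD published in April 2020 (the 2TB-6TB EFAX Reds are SMR; the EFRX Reds and the 8TB-and-up EFAX Reds are CMR). A minimal lookup sketch; `recording_tech` is a made-up helper name and the tables only cover the Reds mentioned in this thread:

```python
# WD Red model prefixes by recording technology, per WD's April 2020 list.
SMR_MODELS = {"WD20EFAX", "WD30EFAX", "WD40EFAX", "WD60EFAX"}   # DM-SMR Reds
CMR_MODELS = {"WD20EFRX", "WD30EFRX", "WD40EFRX", "WD60EFRX",
              "WD80EFAX", "WD100EFAX", "WD101EFAX"}             # CMR Reds

def recording_tech(model: str) -> str:
    """Return 'SMR', 'CMR', or 'unknown' for a WD Red model number."""
    # Model numbers often carry a suffix like '-68SHWN0'; drop it.
    base = model.strip().upper().split("-")[0]
    if base in SMR_MODELS:
        return "SMR"
    if base in CMR_MODELS:
        return "CMR"
    return "unknown"

print(recording_tech("WD60EFAX-68SHWN0"))  # SMR
print(recording_tech("WD60EFRX"))          # CMR
```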
- StephenB · Apr 28, 2020 · Guru - Experienced User
NASguru wrote:
StephenB wrote:
There would be value in keeping it as-is, and probing for performance issues.
That would depend on how much you want to be a guinea pig with your NAS.
Sandshark has done quite a number of experiments on various NAS, and the info gathered has been helpful here.
Obviously doing them on your main NAS is a different matter. Though he does seem to be inclined to "wait and see" - and if he does that, I suspect he'd be keeping a close eye on performance anyway.
NASguru wrote:
I've mostly seen the issue with SMRs come up during the resilvering (replacing a drive) process.
And I guess I'm saying that I find that curious. If you are writing sequentially from the beginning to the end of the disk, then there shouldn't be any slowdown with SMR. And that is what resilvering should be doing.
Of course if you add or update files in the middle of the resilvering, that is another matter. That would trigger the background process of rewriting the tracks between the ones you updated and the end of their zone(s). If btrfs does some file system maintenance (or creates snapshots) during the process, then that could similarly trigger the background process.
This is one of several scenarios where I am interested in seeing some well-controlled performance tests with RAID.
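The zone-tail effect described above can be sketched with a toy model (purely illustrative; real DM-SMR firmware buffers random writes in a media cache and is far more complex). Each zone keeps a write pointer; appends at the pointer cost one track, while an overwrite below it forces rewriting every track from there up to the pointer:

```python
ZONE_TRACKS = 100  # assumed shingled-zone size, in tracks

def track_writes(updates, zone_tracks=ZONE_TRACKS):
    """Total physical track writes for a sequence of logical track updates."""
    wp = {}     # zone index -> write pointer (tracks written so far)
    total = 0
    for t in updates:
        zone, off = divmod(t, zone_tracks)
        p = wp.get(zone, 0)
        if off >= p:
            total += 1            # sequential append: one track written
            wp[zone] = off + 1
        else:
            total += p - off      # overwrite: rewrite track t up to the pointer
    return total

print(track_writes(range(200)))                    # 200: pure sequential resilver, no penalty
print(track_writes(list(range(200)) + [10, 110]))  # 380: two mid-zone updates rewrite zone tails
```

In this model a clean resilver writes each track exactly once, but just two mid-zone updates nearly double the physical writes, which is the kind of amplification that could stretch a rebuild.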
- NASguru · Apr 28, 2020 · Apprentice
StephenB wrote:
Sandshark has done quite a number of experiments on various NAS, and the info gathered has been helpful here. ... Though he does seem to be inclined to "wait and see" - and if he does that, I suspect he'd be keeping a close eye on performance anyway.
Sounds good and hopefully he comes back with something worthy.
NASguru wrote: I've mostly seen the issue with SMRs come up during the resilvering (replacing a drive) process.
And I guess I'm saying that I find that curious. ... This is one of several scenarios where I am interested in seeing some well-controlled performance tests with RAID.
Agreed, the devil is always in the details, and even more so when it comes to technology. If you have the time, then all the power to those who want to lab it up in a controlled environment and narrow it down to specific occurrences (I suspect WD already has this knowledge but ran with SMR drives anyway). Unfortunately, in real-world environments things are rarely those one-off situations, so the value or use case may not be there. SMR drives should probably not be in a line-up of NAS drives, and at least Seagate realizes that, although they are probably more about seizing the moment to steal market share from WD than anything else.
- NASguru · Apr 28, 2020 · Apprentice
FYI, may save some time on testing SMR drives? Western Digital Blog
Of course they recommended spending more for their Pro or Gold. :smileyfrustrated:
WD Red HDDs have for many years reliably powered home and small business NAS systems around the world and have been consistently validated by major NAS manufacturers. Having built this reputation, we understand that, at times, our drives may be used in system workloads far exceeding their intended uses. Additionally, some of you have recently shared that in certain, more data intensive, continuous read/write use cases, the WD Red HDD-powered NAS systems are not performing as you would expect.
If you are encountering performance that is not what you expected, please consider our products designed for intensive workloads. These may include our WD Red Pro or WD Gold drives, or perhaps an Ultrastar drive. Our customer care team is ready to help and can also determine which product might be best for you.
- StephenB · Apr 28, 2020 · Guru - Experienced User
NASguru wrote:
FYI, may save some time on testing SMR drives?
I don't think so.
Basically they are saying that their SMR drives should be perfectly ok in NAS if the workload is <= 180TB per year.
Then they essentially allege the converse - that people seeing performance problems must be exceeding that workload limit. So it's not WD's fault. But they are very careful to say "some people", likely on advice from their lawyers.
I'm not interested in the CYA part that you highlighted. I am interested in understanding more about how these drives actually do perform in ReadyNAS, and get some understanding on when it might be ok to use them.
- NASguru · Apr 28, 2020 · Apprentice
StephenB wrote:
I don't think so. ... I'm not interested in the CYA part that you highlighted. I am interested in understanding more about how these drives actually do perform in ReadyNAS, and get some understanding on when it might be ok to use them.
:smileylol: I just found the response comical/entertaining which is why I said 'may' save some time.
- StephenB · Apr 28, 2020 · Guru - Experienced User
NASguru wrote: I just found the response comical/entertaining which is why I said 'may' save some time
FWIW, I have a WD60EFRX that is showing some UNCs. I'm replacing it with a WD100EFAX (just ordered). Of course the size is nice, but another factor here is that I've seen more issues with my WD60EFRX drives than my other Red models. So pre-SMR I decided not to purchase any more WD60EFRX drives (and was shifting to 10 TB drives in this particular NAS).
As an aside, I looked at the specs of the WD101EFAX (a bit newer than the WD100). Everything's identical, except the power use on the WD101 is noticeably higher. I saw some speculation that the WD101 might be air-filled instead of helium-filled, and that might account for it. Personally I like the lower power of the WD Reds, so I went with the older model.
- NASguru · Apr 28, 2020 · Apprentice
StephenB wrote:
... I saw some speculation that the WD101 might be air-filled instead of helium-filled, and that might account for it. Personally I like the lower power of the WD Reds, so I went with the older model.
Ah, good tip, and given no performance advantage I'd go with the helium-filled drive as well. Unfortunately, I have 8TB drives currently, so I'll need two 12TB or larger drives for it to make sense. It's just a hard pill to swallow in terms of cost. Although, if I really wanted to roll the dice, there's the option to shuck some external drives and see what you get. :smileywink:
- Sandshark · Apr 28, 2020 · Sensei
The typical workload on my volume containing the 6TB drives, with the one EFAX variety, is lots of small files (music, pictures, general documents) plus videos. I frankly don't care how long it takes to write the videos, so long as they play right (and they do). So my normal usage may not really tickle the area where those drives become a factor, save the scrubs. So, I may not be affected by it so much.
If I ever re-arrange what's where on my volumes, I may move the more static things to the volume with the SMR drives. I'd do it now, but it's my main volume, and moving that is a PITA.
My PC backups are on a completely separate volume. The new file started every two weeks is massive, but I still wouldn't worry a lot if it was a tad slow. It just has to complete during the night. I have a volume of SSDs for times when I want to write to the NAS rather than a local hard drive on the computer.
I don't have any other SMR drives to play with in one of my sandbox NAS units. It would be interesting to try some things with two CMR drives versus one CMR and one SMR, and compare the results.