Forum Discussion
hp532
May 28, 2017 · Aspirant
What protection does XRaid offer for large disk arrays?
I’ve been running my 314 for 3 years now with a mixture of disks:
Disk 1 – Seagate 2TB (ST32000542AS, 24k+ hours)
Disk 2 – Seagate 3TB (ST3000DM001, 18k+ hours; the model with the high failure rates)
Disk 3 – Seagate 2TB (ST32000542AS, 45k+ hours; old disk from my previous ReadyNAS Duo)
Disk 4 – WD Red 3TB (WD30EFRX-68EUZN0, 11k+ hours; newer disk bought Jan 2016)
The NAS reported that the volume was degraded and that disk 3 (the oldest) had gone offline. Surprisingly, a few days later the disk suddenly appeared online again (but no longer part of the array). I'm not 100% sure, but I think it reappeared when I plugged in an external USB drive. I've since checked the drive on a Windows machine, and CrystalDiskInfo reports the drive health as good. It also passed the SeaTools short generic test and a chkdsk with bad-sector scan. Is it possible that there is a problem in the NAS itself?
Anyway, as most of the drives are old and the Seagate 3TB has a bad rep, I've decided to buy new drives and start again with a factory reset. I thought I'd buy two 8TB drives and reuse the WD Red 3TB to get the following configuration:
3TB, 8TB, 8TB to get a capacity of 11TB
I’ve been reading up on RAID 5 and wonder what protection X-RAID (which I understand to be RAID 5 + volume expansion) offers when used with large-capacity disks. Most places suggest that, given disks have an unrecoverable read error rate of 1 in 10^14 bits, you’d expect an 11TB array rebuild after a disk failure to fail with roughly 88% probability (11TB read / ~12.5TB between errors). If the rebuild fails because it cannot read a single sector, is the whole array lost? The default RAID configuration seems to be X-RAID (RAID 5) regardless of disk size. Does this mean that the ReadyNAS does something clever at the OS level to handle such failures during a rebuild and continue?
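For what it's worth, the arithmetic behind that figure is easy to reproduce. The sketch below (my own illustration, not anything from the ReadyNAS OS) shows both the naive linear estimate those sites quote and the slightly more careful Poisson version, assuming the spec'd worst-case rate of 1 unrecoverable read error per 10^14 bits and that a single URE aborts the whole rebuild:

```python
import math

URE_RATE = 1e-14     # spec'd worst case: 1 unrecoverable error per 10^14 bits
TB_BITS = 1e12 * 8   # bits per decimal terabyte, as drive vendors count them

def rebuild_failure_probability(array_tb: float) -> tuple[float, float]:
    """Return (linear estimate, Poisson estimate) of hitting at least one
    URE while reading `array_tb` terabytes during a rebuild."""
    expected_errors = array_tb * TB_BITS * URE_RATE  # 0.88 for 11 TB
    linear = min(expected_errors, 1.0)               # the "11/12.5" figure
    poisson = 1 - math.exp(-expected_errors)         # P(>= 1 error)
    return linear, poisson

linear, poisson = rebuild_failure_probability(11)
print(f"linear: {linear:.0%}, Poisson: {poisson:.0%}")  # linear: 88%, Poisson: 59%
```

Note that even the "correct" Poisson math overstates real-world risk, since (as discussed below) actual drives perform far better than the spec'd worst-case error rate.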
10 Replies
Replies have been turned off for this discussion
- Retired_Member
RAID 5 needs at least 3 drives to provide the redundancy that protects you against one drive failing.
For consistency reasons, I would not group one 3TB drive with two 8TB drives.
Instead, in your situation I would either go with
1) 4 WD Red 3TB or
2) 3 8TB (WD or HGST preferred)
Case 1)
- You would only get about 9TB (3x3TB data and 1x3TB for redundancy), with 4 drives of which 1 may fail
+ More economical solution, as the 3 additional drives are significantly cheaper than 3 8TB drives of whatever brand
Case 2)
- More expensive solution
+ You would get about 16TB (2x8TB data and 1x8TB for redundancy), with 3 drives of which 1 may fail
+ You could later add another 8TB drive to end up with about 24TB (3x8TB data and 1x8TB for redundancy), with 4 drives of which 1 may fail
+ This option simply has more flexibility depending on your future data needs
My preferred option would be 2), if cost is not a critical issue. Otherwise I would go with 1).
- StephenB · Guru · Experienced User
Retired_Member wrote:
Instead, in your situation I would either go with
1) 4 WD Red 3TB or
2) 3 8TB (WD or HGST preferred)
As has been pointed out, XRAID will let you use 2x8TB+3TB with no issue.
Just to comment on the economics, using current Amazon US pricing:
WD30EFRX: $110.00
WD80EFZX: $265.00
4x3TB costs $330 (since he is reusing the WD30EFRX he already has), which gives him 6 TB more storage than the existing WD30EFRX provides by itself. So that is $55 per TB gained.
2x8TB+3TB costs $530, and gives 8 TB more storage than the existing 3 TB drive. That is $66.25 per TB gained.
So it is the case that 4x3TB is more economical in the short run. But later on, you will pay a much higher price to expand it again.
For example,
- adding 5 TB to the 4x3TB system will cost $530 (upgrading two drives to WD80EFZX). That is $106 per TB gained.
- adding 8 TB to the 2x8TB+3TB system costs only $265 (adding a single WD80EFZX). That's only $33 per TB gained.
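Those per-TB figures are straightforward to verify. A quick sketch (my own helper, using the Amazon US prices quoted above and the usable-capacity gains from each configuration):

```python
# Prices as quoted: WD30EFRX $110, WD80EFZX $265.
def cost_per_tb(drive_cost: float, drives_bought: int, tb_gained: float) -> float:
    """Dollars spent per usable terabyte gained."""
    return drive_cost * drives_bought / tb_gained

print(cost_per_tb(110, 3, 6))   # 4x3TB build:            55.0  $/TB
print(cost_per_tb(265, 2, 8))   # 2x8TB+3TB build:        66.25 $/TB
print(cost_per_tb(265, 2, 5))   # expanding 4x3TB later:  106.0 $/TB
print(cost_per_tb(265, 1, 8))   # expanding the 8TB build: 33.1 $/TB
```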
hp532 wrote:
I’ve been reading up on RAID 5 and wonder what protection X-RAID (which I understand to be RAID 5 + volume expansion) offers when used with large-capacity disks. Most places suggest that, given disks have an unrecoverable read error rate of 1 in 10^14 bits, you’d expect an 11TB array rebuild after a disk failure to fail with roughly 88% probability (11TB read / ~12.5TB between errors).
Those are the "death of RAID" sites I think.
But lots of people here have single-redundancy arrays that are bigger than 12 TB, and they have resynced them successfully (often many times). Every time you do a scrub you are reading all the sectors, and I do that every three months on my 15 TB, 16 TB, and 18 TB RAID-5 volumes with no problems. The math says that just won't work.
So it's clear that you can't just blindly apply the math. The error rate is spec'd at < 1 in 10^14, and the "less than" clearly is important. Drives are much more reliable than that spec suggests.
I am a firm believer in dual redundancy, especially as ever-larger disks come out.
Consider these factors:
- When you rebuild/resync a RAID 5 device, you are putting a heavy load on all of your drives.
- The larger the array is, the longer a rebuild takes to finish.
- So the bigger your array, the longer you are putting that heavy load on it.
- If your array is rebuilding because of a failed disk, and your other disks have similar hours on them, you are now putting a heavy load, for a long time, on disks that already have a lot of hours on them.
With the 6- and 8-bay devices, RAID 6/dual redundancy is the default, but if you started out with only a few disks you might still be in RAID 5/single-redundancy mode.
Either way, with only single redundancy, if anything bad happens during a rebuild you are likely screwed, plain and simple. This forum and every other NAS forum are littered with the tears of those who assumed RAID 5 would protect them and did not have a separate backup of their data.
Neither RAID nor a NAS is a backup by itself. Neither will protect you from theft, fire, flood, or malicious users.
A backup means a completely separate copy of your data, ideally multiple copies in multiple different places.
RAID 6/dual redundancy will better keep your data safe during rebuilds/resyncs: if a drive fails during a RAID 6 rebuild, you will still have your data intact.
Of course, if you have really bad luck and multiple drives fail, you are again likely screwed without a separate backup.
In summary, neither RAID nor a NAS is a replacement for backups, backups, and more backups.
- jak0lantash · Mentor
hp532 wrote:
ST32000542AS 45k+ hours
A Barracuda LP with 45k+ hours? Yeah, it's time to replace it.
You can read about X-RAID and its RAID layers here: https://community.netgear.com/t5/Using-your-ReadyNAS/XRAID-turned-RAID5-into-RAID6-when-adding-a-drive/m-p/1120084#M113353
If you have 1x 3TB + 2x 8TB, your volume will be composed of a RAID5 of 3 partitions of 3TB and a RAID1 of 2 partitions of 5TB.
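That layered layout generalizes: each "step" in drive size becomes its own RAID layer across every drive at least that big. A simplified model of single-redundancy X-RAID capacity (my own sketch, not NETGEAR's actual implementation; it ignores partition and filesystem overhead):

```python
def xraid_capacity(sizes_tb: list[float]) -> float:
    """Usable TB for single-redundancy X-RAID: each size step becomes a
    RAID layer across all drives at least that big (RAID1 for 2 drives,
    RAID5 for 3+); a step present on only one drive is unusable."""
    usable, prev = 0.0, 0.0
    for step in sorted(set(sizes_tb)):
        n = sum(1 for s in sizes_tb if s >= step)  # drives spanning this layer
        if n >= 2:
            usable += (step - prev) * (n - 1)  # RAID1 when n==2, RAID5 when n>=3
        prev = step
    return usable

# 1x 3TB + 2x 8TB: RAID5 of 3x3TB (6TB) + RAID1 of 2x5TB (5TB) = 11 TB
print(xraid_capacity([3, 8, 8]))  # 11.0
```

The same function reproduces the other configurations discussed above, e.g. `xraid_capacity([3, 3, 3, 3])` gives 9TB and `xraid_capacity([3, 8, 8, 8])` gives 19TB.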
You could have used this tool to calculate the usable capacity, but it doesn't include 8TB HDDs: http://rdconfigurator.netgear.com/raid/index.html