Forum Discussion
offbyone
Dec 01, 2014Aspirant
NAS104 Raid decision? Xraid vs Raid 10 etc #24296869
I just got a NAS 104 and installed 4 3TB Western Digital Red disks. Since I bought all my drives I don't really plan to upgrade or mess with the drives in the near future.
When I powered up and connected it said it was using XRaid and that it was configured for Raid 5. This surprised me, I thought XRaid would automatically pick Raid 10 in this configuration.
I have limited experience, but I was under the impression that if you had an even number of disks that Raid 10 was the way to go in terms of a combination of performance and data protection.
What am I missing?
Considering I already have all my drives, would the flexibility of xraid really benefit me?
thanks
29 Replies
- mdgm-ntgr (NETGEAR Employee, Retired)
For expandability X-RAID is best. But if you don't intend to expand your volume in the future with 4TB disks (the maximum capacity on the compatibility list for the 104), then you don't need X-RAID.
You can disable X-RAID, delete the existing volume, and create a new RAID-10 one if you like.
I would recommend that you still call the volume "data" (no quotes, and note the case).
- StephenB (Guru, Experienced User)
With 4x3TB drives and RAID-10, you'd have a 6 TB volume (~5.4 TiB). With RAID-5/X-RAID you'd have a 9 TB volume (~8.1 TiB).
offbyone wrote: I have limited experience, but I was under the impression that if you had an even number of disks that Raid 10 was the way to go in terms of a combination of performance and data protection.
The choice is yours of course, the RN104 supports RAID-10. But in my view you are better off with RAID-5, RAID-6, or two volumes of RAID-1.
-RAID-5 offers the most space with protection against a single drive failure.
-RAID-6 offers protection against any combination of two drive failures, but 2/3 the space of RAID-5 (in your situation).
-RAID-1 offers protection against most (not all) combinations of two drive failures, and the simplest recovery. It also provides 6 TB of total storage (in two volumes in your situation).
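Those capacity figures can be sanity-checked with a short Python sketch. Drive sizes below are in decimal TB (as marketed); the NAS UI reports binary TiB, hence the conversion. This is illustrative arithmetic only, not NETGEAR's actual volume-sizing logic:

```python
def usable_tb(n_disks, disk_tb, level):
    """Usable capacity in decimal TB for n identical disks."""
    if level == "raid5":
        return (n_disks - 1) * disk_tb        # one disk's worth of parity
    if level == "raid6":
        return (n_disks - 2) * disk_tb        # two disks' worth of parity
    if level in ("raid1", "raid10"):
        return n_disks * disk_tb / 2          # everything mirrored
    raise ValueError(level)

def tb_to_tib(tb):
    """Convert decimal terabytes to binary tebibytes."""
    return tb * 1e12 / 2**40

for level in ("raid5", "raid6", "raid10"):
    tb = usable_tb(4, 3, level)
    print(f"{level}: {tb:.0f} TB (~{tb_to_tib(tb):.2f} TiB)")
```

Run with four 3TB disks, this reproduces the 9 TB / 6 TB split quoted above.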
Most users here would prefer the larger volume of RAID-5/X-RAID. If your goal is to improve performance, buying an RN300 or RN500 is a better way to go. An RN500 will max out a gigabit ethernet network even in RAID-5 (and an RN300 is very close to doing the same). RAID-10 makes more sense if you have 10 gig ethernet and a higher-end NAS than the one you purchased (for instance the RN700 series).
- offbyone (Aspirant)
StephenB wrote:
With 4x3TB drives and RAID-10, you'd have a 6 TB volume (~5.4 TiB). With RAID-5/X-RAID you'd have a 9 TB volume (~8.1 TiB).
offbyone wrote: I have limited experience, but I was under the impression that if you had an even number of disks that Raid 10 was the way to go in terms of a combination of performance and data protection.
The choice is yours of course, the RN104 supports RAID-10. But in my view you are better off with RAID-5, RAID-6, or two volumes of RAID-1.
-RAID-5 offers the most space with protection against a single drive failure.
-RAID-6 offers protection against any combination of two drive failures, but 2/3 the space of RAID-5 (in your situation).
-RAID-1 offers protection against most (not all) combinations of two drive failures, and the simplest recovery. It also provides 6 TB of total storage (in two volumes in your situation).
Most users here would prefer the larger volume of RAID-5/X-RAID. If your goal is to improve performance, buying an RN300 or RN500 is a better way to go. An RN500 will max out a gigabit ethernet network even in RAID-5 (and an RN300 is very close to doing the same). RAID-10 makes more sense if you have 10 gig ethernet and a higher-end NAS than the one you purchased (for instance the RN700 series).
Do the performance benefits of RAID 10 not translate on this device? RAID 10 and RAID 6 have the same disk-space result, but I think RAID 6 has a huge write penalty. Do those performance characteristics of RAID carry over to the ReadyNAS?
Are there any performance benchmarks of the different RAID levels on the ReadyNAS?
- StephenB (Guru, Experienced User)
RAID-10 has the same disk space result as RAID-6, but does not offer the same protection benefits. It is similar to dual RAID-1, in that it protects against some combinations of 2 disk failures but not all.
There's a performance review here: http://www.smallnetbuilder.com/nas/nas-reviews/32104-netgear-readynas-rn104-reviewed?showall=&start=1 RAID-1, RAID-5, RAID-10, and JBOD all came in at ~80 MB/sec read. RAID-1, RAID-10, and JBOD all came in at ~50 MB/sec write. RAID-5 was slightly slower (~40 MB/sec write). They didn't test RAID-6, and this was done on very old firmware (6.0.5), so results are probably different now.
If you have the time/interest, you could try some or all of these combinations and benchmark them with NAStester (http://www.808.dk/?code-csharp-nas-performance). If you do this, I'd suggest turning off antivirus protection and snapshots. I'm sure there are folks here who'd be interested in seeing your results.
In any event, I'd rather have a 9 TB volume and 40 MB/s write speeds than a 6 TB volume and 50 MB/s write speeds. So with this NAS I don't think RAID-10 is a good balance.
If you want to focus on ease of recovery, then dual RAID-1 is better than RAID-10, since you can read the data using a Linux boot CD (as long as the btrfs file system is supported by the boot CD) without needing RAID recovery software.
- offbyone (Aspirant)
StephenB wrote: RAID-10 has the same disk space result as RAID-6, but does not offer the same protection benefits. It is similar to dual raid-1, in that it protects against some combinations of 2 disk failures but not all.
There's a performance review here: http://www.smallnetbuilder.com/nas/nas-reviews/32104-netgear-readynas-rn104-reviewed?showall=&start=1 RAID-1, RAID-5, RAID-10, and JBOD all came in at ~80 MB/sec read. RAID-1, RAID-10, and JBOD all came in at ~50 MB/sec write. RAID-5 was slightly slower (~40 MB/sec write). They didn't test RAID-6, and this was done on very old firmware (6.0.5), so results are probably different now.
If you have the time/interest, you could try some or all of these combinations and benchmark them with NAStester (http://www.808.dk/?code-csharp-nas-performance). If you do this, I'd suggest turning off antivirus protection and snapshots. I'm sure there are folks here who'd be interested in seeing your results.
In any event, I'd rather have a 9 TB volume and 40 MB/s write speeds than a 6 TB volume and 50 MB/s write speeds. So with this NAS I don't think RAID-10 is a good balance.
If you want to focus on ease of recovery, then dual RAID-1 is better than RAID-10, since you can read the data using a Linux boot CD (as long as the btrfs file system is supported by the boot CD) without needing RAID recovery software.
Interesting points.
So let's take a closer look. I have four 3TB disks in the RN104.
Storage results:
RAID 5: 9 TB
RAID 6: 6 TB
RAID 10: 6 TB
Fault tolerance:
RAID 5: any 1 disk
RAID 6: any 2 disks
RAID 10: any 1 disk, and the right combination of 2 disks
Write penalty (I/O operations per write):
RAID 5: 4x
RAID 6: 6x
RAID 10: 2x
Read speed benefits:
RAID 5: 3x
RAID 6: 2x
RAID 10: 4x
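The write-penalty numbers in that list are the classic textbook figures for small random writes (each write must also read and update parity, or hit the mirror), ignoring caching. A minimal sketch; the 100 IOPS per disk is an illustrative assumption, not a measured WD Red figure:

```python
WRITE_PENALTY = {
    "raid5": 4,   # read data, read parity, write data, write parity
    "raid6": 6,   # as RAID-5, but with two parity blocks to update
    "raid10": 2,  # write the block, then write its mirror
}

def random_write_iops(iops_per_disk, n_disks, level):
    """Rough sustained random-write IOPS for the whole array."""
    return iops_per_disk * n_disks / WRITE_PENALTY[level]

for level in WRITE_PENALTY:
    print(level, random_write_iops(100, 4, level))
```

By this model, four disks in RAID 10 sustain three times the random-write IOPS of the same disks in RAID 6, which is where the "huge write penalty" intuition comes from.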
But these are just the theoretical numbers, not the situational or implementation-specific ones. The question is how the 104 implements the writes and reads. Does the performance cost really make a difference?
You point out that the 104 isn't really that fast. So does that normalize the results, meaning that even though RAID 10 has great read/write speed benefits, you won't really see them?
Also, your point about network speeds is not one I had considered. The benchmark you linked to showed read speeds maxing out at 80 MB/s and write speeds at 50 MB/s, so does gigabit even matter as long as you are getting 100 Mbit/s?
My previous experience with RAID was based as much on the speed benefits as on the fault tolerance, but those were all environments where the RAID array was connected directly or over gigabit ethernet. Let's be honest: I will mostly use this device over my wireless network, where speeds are closer to 100 Mbit/s. I am not sure of the effect, though. Maybe it marginalizes the read benefits, but does it make any write penalties inconsequential?
If the performance benefits of RAID 10 aren't really realized, or the performance hits of RAID 6 aren't really felt, then that makes me lean towards RAID 6 or RAID 5. The extra space sure would be nice, but I really want fault protection. I have seen a RAID 5 array fail and become unrecoverable when a second drive failed during recovery.
This is a tough decision. I wish I had more data. I may do some benchmarks; I wish NETGEAR had a more complete set published.
- StephenB (Guru, Experienced User)
offbyone wrote:
Write penalty (I/O operations per write):
Raid 5: 4x
Raid 6: 6x
Raid 10: 2x
I'm not sure how meaningful this analysis is, since it doesn't take caching into account.
Disk caching (including deferred writes) is a very important piece of the puzzle: if you are writing a sequential file, you are writing all the data blocks in the stripe anyway, and that enables optimizations. With those optimizations, the combined I/O for those blocks is 1.33x in the RAID-5 case (you write 1 parity block for each 3 data blocks) and 2x in the RAID-6 case (you write 2 parity blocks for each 2 data blocks). For RAID-10, you are also writing 2x (one mirrored block for each data block). Also, write queuing speeds things up in many practical use cases.
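That full-stripe arithmetic can be written out as a small sketch. It assumes the array buffers a whole stripe and writes it in one go, which is a model of the reasoning above rather than the md driver's actual behaviour:

```python
def full_stripe_write_amplification(n_disks, level):
    """Blocks physically written per block of user data (full-stripe writes)."""
    if level == "raid5":
        return n_disks / (n_disks - 1)   # 4 writes for 3 data blocks ~ 1.33x
    if level == "raid6":
        return n_disks / (n_disks - 2)   # 4 writes for 2 data blocks = 2x
    if level == "raid10":
        return 2.0                       # every block plus its mirror copy
    raise ValueError(level)

for level in ("raid5", "raid6", "raid10"):
    print(level, round(full_stripe_write_amplification(4, level), 2))
```

With caching in play, RAID-6 and RAID-10 end up with the same 2x sequential write amplification, which is why the textbook 6x-vs-2x gap largely disappears for large files.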
I am guessing here, but I think your benefits on the reads are not about the number of I/O operations; they are about the ability to read the disks in parallel.
offbyone wrote: Read speed benefits:
Raid 5: 3x
Raid 6: 2x
Raid 10: 4x
If so, the RAID 10 numbers are high. You can read both of the underlying RAID-0 volumes in parallel, so I am thinking 2x.
If you are talking about I/O efficiency generally: data blocks are spread across all four disks in all cases. RAID-5 requires you either to read parity blocks you don't need or to seek over them. If you read them, you are making 4 I/O requests for 3 blocks you care about, so like the write case the efficiency is 1.33x. Similarly, RAID-6 has efficiency 2.0x, since you are making 4 I/O requests and only getting 2 data blocks. RAID-10 does not match the write case; the efficiency there is 1.0x, since every block you read is a block you care about. Of course, if you seek instead, what happens depends on the seek performance of the drives.
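The matching read-side arithmetic, in the same sketch style (blocks transferred per useful data block when parity is read rather than seeked over; again a model of the reasoning, not measured behaviour):

```python
def sequential_read_overhead(n_disks, level):
    """I/O requests issued per block of user data actually wanted."""
    if level == "raid5":
        return n_disks / (n_disks - 1)   # 4 reads for 3 data blocks ~ 1.33x
    if level == "raid6":
        return n_disks / (n_disks - 2)   # 4 reads for 2 data blocks = 2x
    if level == "raid10":
        return 1.0                       # every block read is wanted data
    raise ValueError(level)

for level in ("raid5", "raid6", "raid10"):
    print(level, round(sequential_read_overhead(4, level), 2))
```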
Again, disk caching (including read-ahead) is a very important aspect of the puzzle for sequential file access.
I think we can make this a bit simpler (excluding RAID-6) for large-file sequential I/O.
offbyone wrote: You point out that the 104 isn't really that fast. So does that normalize the results meaning that even though Raid 10 has great read/write speed benefits, you won't really see them?
your point about the network speeds is not one I considered. My previous experience with raid was based as much around the speed benefits as the fault tolerance. But those were all environments where the raid array was connected directly or using a gigabit ethernet. Let's be honest, I surely plan to use the device via my wireless network where i live with speeds closer to 100mbs. I am not sure the effect though. Maybe it marginalizes the reads, but does make any write penalties inconsequential?
The RN104's sustained write speeds (not including RAID-6) are measured at 320 to 400 Mbit/s (40-50 MB/s). Sustained reads are ~640 Mbit/s (~80 MB/s).
On a wireless or fast ethernet network, the network is the bottleneck. So you will see close to 100 mbit performance on fast ethernet, and close to the real-world wifi speed for wireless. The NAS is fast enough that it doesn't get in the way.
When you have a faster network, then the RN104's processor performance becomes the bottleneck, and you will see 320-400 mbits of write performance or ~640 mbits per second of read performance no matter what RAID mode you pick (and no matter how fast the network is).
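The bottleneck reasoning above boils down to taking the minimum of the link speed and what the NAS itself can sustain. A quick sketch using the figures quoted in this thread (1 MB/s = 8 Mbit/s):

```python
def mbit_to_mbyte(mbit):
    """Convert Mbit/s to MB/s."""
    return mbit / 8

NAS_WRITE = mbit_to_mbyte(400)   # ~50 MB/s best-case RN104 write
NAS_READ = mbit_to_mbyte(640)    # ~80 MB/s RN104 read

for name, link_mbit in [("fast ethernet", 100),
                        ("typical wifi", 100),
                        ("gigabit", 1000)]:
    link = mbit_to_mbyte(link_mbit)
    # Throughput is limited by the slower of the network link and the NAS.
    print(f"{name}: read ~{min(link, NAS_READ):.0f} MB/s, "
          f"write ~{min(link, NAS_WRITE):.0f} MB/s")
```

On a 100 Mbit link the network caps everything at ~12.5 MB/s regardless of RAID mode; only on gigabit does the NAS itself become the limit.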
If you are running an application on the RN104 or doing random i/o, then of course it gets more complicated - and disk performance can become the bottleneck. And with a higher-end NAS, the disk i/o performance can become the bottleneck even with large sequential file access.
offbyone wrote: that makes me lean towards raid 6 or raid 5.
I'd benchmark RAID-6 before I used it. Dual RAID-1 isn't quite as good protection, but I think it will be quite a bit faster: doing all the parity block computations in software will slow the RN104 down. The simplicity of dual RAID-1's storage layout is attractive as well.
offbyone wrote: The extra space sure would be nice, but I really want fault protection. I have seen a raid 5 array fail and become unrecoverable when a second drive failed during recovery.
Yes, that can happen. Running the scheduled maintenance can help, especially if you have a lot of files that aren't accessed much (which I have). Often the drive fails long before it is detected, because the bad patch simply hasn't been accessed in a while. I suggest running scrubs and disk tests quarterly, with defrags also quarterly. That amounts to one diagnostic test per month.
Of course this also points out the need for backup on another device.
offbyone wrote: This is a tough decision. I wish I had more data. I may do some benchmarks. I wish netgear had a more complete one done.
Benchmarks on 6.2 would be good information to share if you have the time.
- mdgm-ntgr (NETGEAR Employee, Retired)
We have made some changes to improve performance in 6.2.0.
Testing on that, preferably on a clean setup, would be good.
- offbyone (Aspirant)
StephenB wrote:
I'd benchmark RAID-6 before I used it. Dual RAID-1 isn't quite as good protection, but I think it will be quite a bit faster. Doing all the parity block computations in software will slow
Thanks for all the comments. I am benchmarking. I just did RAID 10; pretty disappointing. I will share results when I get more done. The problem is that when I destroyed the volume and then created a RAID 6 one, it takes like 20 hours to rebuild. Can't believe it takes so long; hurry up and wait!
mdgm wrote: We have made some changes to improve performance in 6.2.0.
Testing on that, preferably on a clean setup would be good.
I am!
Edit: Actually, after 30 minutes the RAID 6 build is saying it is going to take 50 hours still. What on earth takes so long?
- mdgm-ntgr (NETGEAR Employee, Retired)
Are you running much on the NAS? That can slow this process down. It should be noted that the volume is redundant throughout the process of doing the initial sync of the volume.
Also, the early estimates for how long it will take can be much longer than what turns out to be the case.
- offbyone (Aspirant)
mdgm wrote: Are you running much on the NAS? That can slow this process down. It should be noted that the volume is redundant throughout the process of doing the initial sync of the volume.
Also the early estimates for how long it will take can be much longer than what turns out to be the case.
I literally plugged it in for the first time yesterday. After physically installing the drives and connecting through the web interface, I immediately updated the firmware. Then I built the RAID 10 array, which took 20 hours in real time. Then I just destroyed it and started the RAID 6 build.
I have played with the user/groups menu a bit, but I haven't enabled any services or apps other than the defaults. There is no data on it.