Forum Discussion
MaxxMark
Aug 12, 2016Luminary
Pro Pioneer - Poor performance X-RAID Raid-6 with 6x WD Red 3TB
For a really long time I thought the poor performance was because I was running old firmware and had never done a factory reset since 2009 (it was recommended in the past on the forum...
- Aug 15, 2016
For future readers:
The performance impact boiled down to the following things:
- The current RAID implementation works differently from older versions of the NAS firmware, which impacts performance but delivers more reliability
- The implementation of NFS (and/or NFSv4) also works differently and by default favours reliability. Using the "async" option greatly improves transfer speed, but also greatly increases the risk of corrupted transfers in case of a power failure and the like (see the export sketch below)
- Performance is (obviously) impacted while background operations are running (i.e. (initial) (re)syncing of volumes, balancing, scrubbing, defragmentation, simultaneous transfers, etc.)
I tested disk speeds within my system to evaluate the performance impact. My conclusion for now is that in a RAID-5 or RAID-6 setup the array won't perform better than an individual disk (which it did in older versions of ReadyNAS, but that no longer seems to be true, probably due to the first point). With the async option the performance is equal to the individual disks. Note: I have *not* systematically compared speeds using CIFS/Samba; I tested once and the speeds seemed comparable to NFS with the async option turned on.
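For reference, the sync/async behaviour is controlled by the NFS export options. A minimal sketch of what the two variants look like in /etc/exports (the share path and client subnet here are made-up examples, not taken from this thread):
# Default "sync": the server only acknowledges a write once it is committed to disk (safer, slower)
/data/media  192.168.1.0/24(rw,sync,no_subtree_check)
# "async": the server acknowledges before the write hits disk (faster, risks loss on power failure)
# /data/media  192.168.1.0/24(rw,async,no_subtree_check)
After editing the file, "exportfs -ra" makes the NFS server re-read it.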
MaxxMark
Aug 12, 2016Luminary
StephenB wrote:Even in OS4 you need to sync the full volume since RAID runs underneath the filesystem. Perhaps I am confused on what you mean???
If you are asking about setting up one volume per disk (jbod with no spanning), then the answer is that OS 6 lets you do this also. You can delete the volume you have now, and create new ones for each drive - so it is actually easier than OS 4.
I thought (but reading your reply I guess I remembered wrong) that when you initialize a RAID set from zero (i.e. all disks are totally blank) it only needs to initialize the filesystem, as it assumes the disks are empty. But I may have mixed things up (when setting up a mirror (RAID 1) with the Intel Storage Engine, it is just a matter of clicking create and doing a quick format).
What I wanted to do is go through all the steps:
first create a 1-disk volume; check performance
create a 2-disk array using RAID 1 (mirror); check performance
create a 3-disk array using RAID 5 (2 data, 1 parity); check performance
create a 4-disk array using RAID 6 (2 data, 2 parity); check performance
create a 4-disk array using RAID 5 (3 data, 1 parity); check performance
And when the performance dips, try the same setup with another disk to rule out disk issues.
It'll probably be (very) time consuming if the volume needs to resync every time, but my gut feeling is that something is wrong somewhere.
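As a shortcut for the per-disk baseline, a raw sequential read test over SSH avoids building any array at all; hdparm may need to be installed first, and the device names below are just assumptions:
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    echo "== $disk =="
    hdparm -t "$disk"    # buffered sequential read speed; repeated runs give a more stable number
done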
StephenB
Aug 12, 2016Guru - Experienced User
You certainly can do that with OS6 (just switch to flexraid and create whatever volumes you want).
Are you suspicious of all the disks, particular disks, or the RAID performance?
JBOD creation is pretty fast, since there is no sync for that - so you could set up a 1-disk volume for each disk and measure performance on each as part of your first step.
Also, do RAID-6 last, since you can simply expand the 3-disk RAID-5 array to 4 disks, which might save some time.
You haven't mentioned SMART stats - have you looked at them?
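If SSH is enabled, the SMART data can also be pulled directly with smartctl (part of smartmontools; /dev/sda is just an example device name):
smartctl -H /dev/sda    # overall health self-assessment
smartctl -A /dev/sda    # attribute table: reallocated/pending sectors, CRC errors, etc.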
- MaxxMark Aug 12, 2016 Luminary
I'm not suspicious of any disk in particular, especially because the quick tests of the disks didn't show anything worth panicking about.
However, the performance of the RAID set as a whole is what worries me. I just can't believe that disks which perform better individually (the WD Red at 120-150 MB/s versus a Spinpoint HD103 at 100-120 MB/s, one on one outside of a RAID array) end up performing so much worse in a RAID set (a RAID-6 of 6x WD Red 3TB at 30-60 MB/s versus a RAID-6 array of 6x Samsung Spinpoint 1TB at 130+ MB/s).
The SMART info doesn't show anything noteworthy, if you ask me. To be complete, here is the volume.log after the most recent factory reset.
For now I have tested the following:
Disk 1 without other disks: speed was 100+ MB/s
Disk 2 without other disks: 100+ MB/s
At that moment I got curious and reverted to OS4 to check how that performed (noteworthy: during the factory reset you could at that point choose RAID-6 directly, which isn't possible in OS6 anymore).
Performance of the resync was the same as in OS6 (30 to 60 MB/s), so the OS difference is not the cause. (Just as a side note: in the past, when replacing a disk and doing an online expansion, the resync was almost always above 80 MB/s.)
After this I upgraded to OS6 again, which initializes as RAID-5, and that's where I got the above volume log containing the SMART info.
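For anyone repeating this: the resync progress and its current speed can be read straight from the md layer over SSH, regardless of OS4 or OS6 (the md device name varies per system):
cat /proc/mdstat                # shows rebuild progress and current speed for each md device
watch -n 5 cat /proc/mdstat     # refresh every 5 seconds to follow the rebuild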
- MaxxMark Aug 12, 2016 Luminary
I've done some more quick tests to compare performance. I know I should not compare speeds while a RAID (5/6/10) is rebuilding, but I did so anyway to at least create a baseline.
First I tested every disk individually as a simple volume. All performed equally: around 110 MB/s.
Then I systematically made volumes of 2 disks in RAID-1, which performed equally (even during syncing, which isn't that strange): around 110 MB/s.
Then I created a RAID-5 array with 3 disks; performance immediately plummeted to 50-60 MB/s*.
I removed the RAID-5 and tested with some RAID-0 sets. Again, I systematically selected 2 disks and made them part of a RAID-0 set. All performed equally, and at high speeds: between 210 and 240 MB/s.
Just for fun I created a RAID-0 set on all 6 disks. This wasn't as fast as I'd expected; it already topped out at 340 MB/s.
Lastly I created a RAID-10 array over all 6 disks; this resulted in 170 MB/s, which isn't bad, but not what I expected of it.
Finally I toyed with RAID-5 on 3, 4, 5 and 6 disks as well as RAID-6 on 4, 5 and 6 disks, all of which performed equally, as described above, at 50-60 MB/s.
* I noticed that during the test actions the resync speed dropped (as it did in OS4), so the OS gives priority to the foreground process. I know this shouldn't be considered '100% performance', but degradation to 50% seems a bit steep. Also, compared with the finished RAID-6 set (which I mentioned in the first/second post) the performance was comparable (so speed during syncing was virtually identical to post-syncing).
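That throttling is the md resync speed limit at work; these are the knobs involved, readable over SSH (the echo value is purely illustrative, not a recommendation):
cat /proc/sys/dev/raid/speed_limit_min   # guaranteed resync rate in KB/s per device (default 1000)
cat /proc/sys/dev/raid/speed_limit_max   # resync ceiling when the array is otherwise idle
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # hypothetical value: raise the floor so a rebuild is not starved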
To summarize (averaged):
Non-RAID: 110 MB/s
RAID-0, 2 disks: 220 MB/s
RAID-0, 6 disks: 340 MB/s
RAID-1, 2 disks: 110 MB/s
RAID-5, 3 to 6 disks: ~50 MB/s
RAID-6, 4 to 6 disks: ~50 MB/s
What's strange to me is that although the RAID-1 disks were syncing, it had virtually no effect on speed. It therefore seems odd that in RAID-5 and RAID-6 the impact of rebuilding parity information (or writing new data including parity) is almost 50%. I get that computing parity is more CPU intensive than 'just duplicating to another disk', but when the CPU isn't doing much else I'd expect the speed to go up, since computing parity can be given top priority.
I don't know what to think anymore. Am I chasing ghosts here? Am I missing something obvious or is my logic flawed somewhere?
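One way to check whether the parity math or the spindles are the limit is to watch CPU and per-disk load while the RAID-5/6 volume is busy (iostat comes from the sysstat package and may need to be installed; this is just a generic sketch):
top             # the mdX_raid5 / mdX_raid6 kernel threads show how much CPU the parity work costs
iostat -x 5     # per-disk %util and await show whether the drives themselves are saturated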
- StephenB Aug 12, 2016 Guru - Experienced User
This will probably confuse things more, but this is what I am getting with my Pro-6. OS is 4.2.28, RAID-5 with 6x WD Red 3TB, filesystem ext4:
PRO:/c# dd if=/dev/zero of=/c/testfile bs=1M count=1000 oflag=sync
1048576000 bytes (1.0 GB) copied, 102.882 s, 10.2 MB/s
PRO:/c# dd if=/dev/zero of=/c/testfile bs=10M count=100 oflag=sync
1048576000 bytes (1.0 GB) copied, 17.7785 s, 59.0 MB/s
PRO:/c# dd if=/dev/zero of=/c/testfile bs=50M count=20 oflag=sync
1048576000 bytes (1.0 GB) copied, 15.3094 s, 68.5 MB/s
PRO:/c# dd if=/dev/zero of=/c/testfile bs=100M count=10 oflag=sync
1048576000 bytes (1.0 GB) copied, 8.612 s, 122 MB/s
PRO:/c# dd if=/dev/zero of=/c/testfile bs=1000M count=1 oflag=sync
1048576000 bytes (1.0 GB) copied, 7.2489 s, 145 MB/s
PRO:/c# dd if=/dev/zero of=/c/testfile bs=1000M count=10 oflag=sync
10485760000 bytes (10 GB) copied, 80.2636 s, 131 MB/s