Forum Discussion
dirkdigs
Feb 19, 2014 - Aspirant
Re: stripe size
What is the stripe size of an X-RAID2 array? Is it a fixed setting, or can it be changed?
10 Replies
- fastfwd (Virtuoso)
dirkdigs wrote: What is the stripe size of an X-RAID2 array? Is it a fixed setting, or can it be changed?
64K on my Pro 6 running OS 4.2.26. You could change the chunk size by manually rebuilding your array from the command line, I guess... but why would you want to?
- killerus2 (Aspirant)
Can you tell me if there is any simple method to change this stripe size?
I want to test whether it gets me better I/O results.
And second, if I do that from the command line, is there a chance of losing anything with the next FW update?
- StephenB (Guru - Experienced User)
I think Netgear's view would likely be that this is unsupported territory, and that there are no guarantees there won't be issues later on with FW updates, OS reinstalls, etc.
If you want to try it anyway, I suggest testing this with a different set of disks.
- killerus2 (Aspirant)
I know that Netgear doesn't want to cause trouble for its own users - that's obvious, but...
ReadyNAS Ultra 6 (Intel Atom D510 - not the Plus with the Intel Pentium), 6x 2TB WD20EURX, 4GB RAM, jumbo frames (also on the switch, the PC, and everything in between), cables no longer than 10m on each side of the switch, and NIC teaming in adaptive load-balance mode (also tested with LACP - no difference at all in file transfers between the two).
Guess what speed I get when I write a single file, e.g. 10GB, via drag & drop from Win 7 x64 or 8.1 x64?
You guessed it - it's not 128MB/s, not even 100MB/s, and sometimes it's below 70MB/s. Surprised?
Turning on jumbo frames did make a difference in writes, e.g. 50MB/s vs 70MB/s. But it's still not 128MB/s.
Reads are mostly 100MB/s+, so I can't say the cables are wrong or anything like that.
Writes to the disks are, hmm... SLOW, and I don't think it's the fault of the HDDs themselves.
Can an Intel Atom D510 + 4GB RAM + 6x 2TB really not do any better? Or is it the chunk size, RAID 5, or who knows what?
One HDD (WD20EURX) can read and write 135MB/s+ at the beginning of the disk (tested on a PC) - and they are all in RAID 5.
Maybe RAID 5 is the problem? But I need some way to get my data back after a drive failure.
- StephenB (Guru - Experienced User)
I suspect that changing the stripe size will not improve your performance, though I've never tried it.
Generally you won't see any performance gain from teaming if you are testing with a single user. Teaming protocols are designed for trunking, and usually limit a single data flow to 1 gigabit. That prevents packet loss when the connection is switched to a normal gigabit client.
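If you're curious, you can see the negotiated mode and transmit hash policy on any Linux box that uses bonding (the NAS included) with something like this - a sketch, assuming the bond interface is named bond0; the output will look roughly like the comment lines:
grep -iE 'bonding mode|hash policy' /proc/net/bonding/bond0
# Bonding Mode: IEEE 802.3ad Dynamic link aggregation
# Transmit Hash Policy: layer2 (0)
With the default layer2 policy the source/destination MAC pair picks the physical port, so a single client-to-NAS flow always rides one wire.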
If you have SSH enabled, you can measure the performance of the raw RAID array. That lets you separate network and Samba performance from disk performance.
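For example, something like this over SSH (a sketch - /dev/md2 is typically the main data volume on these units, but confirm with /proc/mdstat first):
cat /proc/mdstat                                 # find the md device backing the data volume
hdparm -t /dev/md2                               # quick buffered sequential-read benchmark
dd if=/dev/md2 of=/dev/null bs=1M count=2048     # longer ~2GB sequential read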
You might also test with NASTester (http://www.808.dk/?code-csharp-nas-performance). The PC's hard drive speed also affects network transfer results, of course. Are you using an SSD?
Another benchmark would be to set up Windows file sharing on another PC and compare that performance with the NAS (from the same client PC).
In any event, smallnetbuilder measured about 95 MB/s read and about 65 MB/s write with RAID-5 when they reviewed the Ultra 4 in 2010. You won't get 125 MB/s (1 gigabit / 8 = 125 MB/s, not 128), even with jumbo frames. Ethernet has overhead, and CIFS requires responses, which create dead time when the network isn't maxed out.
- killerus2 (Aspirant)
StephenB wrote: I suspect that changing the stripe size will not improve your performance, though I've never tried it.
If you didn't try changing the stripe size, then what are you talking about at all?
I've been setting up RAIDs since the days of 4 IDE ports on a Socket A AMD motherboard (Duron/Athlon), in 2001 or even earlier - testing with HDDs like a 40GB Maxtor, a 20GB Seagate, a 40GB SCSI drive (on an Adaptec controller), and so on up to the present day (2-3TB HDDs on SATA 3, even SSD RAIDs).
CHUNK SIZE IS THE BIGGEST PERFORMANCE FACTOR on EVERY RAID level that is designed to gain performance over a single drive (so not JBOD/RAID 1). Smaller chunk sizes (or stripe sizes, whatever anyone wants to call them) are best for small files (large numbers of files), and bigger chunk sizes are better for big files (a big file being anything over 1MB).
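To put rough numbers on that claim, for a 6-disk RAID 5 (5 data chunks + 1 parity chunk per stripe):
64KB chunk:  full stripe = 5 x 64KB  = 320KB of data
256KB chunk: full stripe = 5 x 256KB = 1280KB of data
In theory, a big sequential write that fills whole stripes lets md compute parity without first reading old data back, so fatter stripes mean fewer partial-stripe read-modify-write cycles on files like mine.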
It's like using a bucket versus a spoon to move gallons of water from one place to another. Don't tell anyone that chunk size doesn't matter and then admit you didn't even try changing it - just try it and tell me I'm wrong.
Tell me one more thing - why is the default chunk size 512KB when you set up a RAID on Linux, yet when the NAS (Ultra 6) detects empty HDDs it creates a 64KB chunk? A strange, self-inflicted performance hit?
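For reference, on a plain Linux box you pick the chunk size at array creation, something like this (device names are only an example):
mdadm --create /dev/md2 --level=5 --raid-devices=6 --chunk=256 /dev/sd[a-f]3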
One more option at volume creation would be very nice - the RAID chunk size - or at least set the default to 256KB.
StephenB wrote: Generally you won't see any performance gain from teaming if you are testing with a single user. Teaming protocols are designed for trunking, and usually limit a single data flow to 1 gigabit. That prevents packet loss when the connection is switched to a normal gigabit client.
And what if I have 2 NICs in my PC, teamed up with LACP on the switch? I do get better performance - that's why you choose a teaming mode, like picking a RAID level for disks. Theoretically I have 2 gigabit on both sides, PC and NAS.
StephenB wrote: If you have SSH enabled, you can measure the performance of the raw RAID array. That lets you separate network and Samba performance from disk performance.
If I hadn't done this, how would I know the HDDs CAN GO FASTER? Yes, I tested /dev/md2 on the NAS (6x 1.8T) - 180MB/s+ raw read. That isn't brilliant - two HDDs (not SIX in RAID 5) could do it without breaking a sweat - but I can live with it.
StephenB wrote: You might also test with NASTester (http://www.808.dk/?code-csharp-nas-performance). The PC's hard drive speed also affects network transfer results, of course. Are you using an SSD?
The page is offline or gone - please check links before you paste them. Second - yes, I'm sure my PC can handle 400MB/s+ raw read/write, if only the network could deliver that much (3x 128GB ADATA S511 SATA 3 in RAID 0).
StephenB wrote: Another benchmark would be to set up Windows file sharing on another PC and compare that performance with the NAS (from the same client PC).
I have 2 PCs (one: 8350, 32GB RAM, 4x SSD, and 2x 1GbE teamed, 9k jumbo frames ON; the second: Phenom II 965, 8GB RAM, 1x 60GB SATA 3 SSD (300MB/s+ R/W) and a 1GbE NIC; plus a laptop with an SSD...).
Between the PCs I get 125 MB/s without any compromise; to and from the NAS... well, as you can read.
More or less all machines get the same read results - only with jumbo frames on do I see any gain over the rest (and only when writing to the NAS).
I don't have zillions of files - mostly 200MB+ files - I need fast reads of LARGE FILES, not fast access to small Word files like in an office.
StephenB wrote: In any event, smallnetbuilder measured about 95 MB/s read and about 65 MB/s write with RAID-5 when they reviewed the Ultra 4 in 2010. You won't get 125 MB/s (1 gigabit / 8 = 125 MB/s, not 128), even with jumbo frames. Ethernet has overhead, and CIFS requires responses, which create dead time when the network isn't maxed out.
First - I wrote 128, you corrected me to 125 - really, 3MB? Does it make any difference? Agreed, 125MB/s - my mistake.
Second (but first in your post) - check this out: http://www.readynas.com/?page_id=3962 - look under ULTRA 6 (not the 4, and not the Plus). They clearly used 6x 2TB HDDs and got better performance than you say - with CIFS drag & drop they got:
read 107MB/s, write 87MB/s - and WITH JUMBO FRAMES EVEN BETTER RESULTS: read 115MB/s, write 100MB/s.
Netgear got those results and shows them to customers for this product - and then you tell me 65MB/s writes are normal and I should be glad of it? There is one little difference - they used RAID 0, I have RAID 5. For reads I'd say there is no difference; for writes, yes, there can be a performance hit between these two RAID levels - but not once we scale everything down to a single 1GbE NIC. 125MB/s really isn't that challenging; I'd even say 2 HDDs in RAID 0 can do 125MB/s - and we're talking about 6 HDDs here.
That's all just talk, and we could go back and forth like this forever - try changing the chunk size and add some RAM to the NAS (I have an Ultra 6 with 4GB and 2 NICs in a team) - just try it instead of talking about it.
I'm almost done changing the chunk size - but I still have to set up LVM, check that everything is correct, and reboot. I'll post my speeds after the change.
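For anyone attempting the same thing: newer mdadm versions can supposedly also reshape the chunk size in place - a sketch only, I have not verified it on the stock firmware, and it needs a backup file and a lot of time:
mdadm --grow /dev/md2 --chunk=256 --backup-file=/root/md2-chunk.bak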
Above all - I didn't write this post to argue, but to get help from anyone who has already done this before.
Has really no one ever even thought about changing this lousy chunk size to something sane, e.g. 256KB? I'm stunned.
The HDDs will finish syncing soon... now I can go to sleep.
- mdgm-ntgr (NETGEAR Employee Retired)
You can change the stripe cache size, but you do need to be careful: if you increase it too much you will use too much RAM, which would be counter-productive.
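For example, something like this over SSH (the value counts 4KB pages per member disk, so on a 6-disk array 4096 works out to roughly 4096 x 4KB x 6 ≈ 96MB of RAM; the default is 256, and the setting does not survive a reboot):
cat /sys/block/md2/md/stripe_cache_size
echo 4096 > /sys/block/md2/md/stripe_cache_size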
- StephenB (Guru - Experienced User)
I think you will find that the Ultra is limited by CPU performance, not by raw RAID performance. The same RAID structure is used in the Pro 6, and the performance there is much better. So my suspicion is that it won't help. I didn't say you shouldn't try it.
killerus2 wrote: StephenB wrote: I suspect that changing the stripe size will not improve your performance, though I've never tried it.
If you didn't try changing the stripe size, then what are you talking about at all?
LACP is definitely constrained to prevent dataflows faster than 1 gigabit. This is layer 2, so dataflows are between two MAC addresses; teaming on both ends will not change that. Again, the original purpose was to improve aggregate performance between switches. Some of the other teaming modes in the NAS might work differently, but LACP limits each dataflow to 1 gigabit in order to ensure in-order delivery of packets, without loss, to the sink. The simplest way to prevent out-of-order delivery is to keep all the traffic between a specific pair of endpoints on the same physical wire.
killerus2 wrote: StephenB wrote: Generally you won't see any performance gain from teaming if you are testing with a single user. Teaming protocols are designed for trunking, and usually limit a single data flow to 1 gigabit. That prevents packet loss when the connection is switched to a normal gigabit client.
And what if I have 2 NICs in my PC, teamed up with LACP on the switch? I do get better performance - that's why you choose a teaming mode, like picking a RAID level for disks. Theoretically I have 2 gigabit on both sides, PC and NAS.
Then you already know that the RAID array is 2.5x faster than your measured network performance, and already faster than gigabit Ethernet. So I guess I'm confused about why you think changing the stripe size will matter, since the RAID array itself is not the bottleneck. Though I'd be interested in hearing your results.
killerus2 wrote: StephenB wrote: If you have SSH enabled, you can measure the performance of the raw RAID array. That lets you separate network and Samba performance from disk performance.
If I hadn't done this, how would I know the HDDs CAN GO FASTER? Yes, I tested /dev/md2 on the NAS (6x 1.8T) - 180MB/s+ raw read. That isn't brilliant - two HDDs (not SIX in RAID 5) could do it without breaking a sweat - but I can live with it.
The link works for me, and I did test it before I posted.
killerus2 wrote: StephenB wrote: You might also test with NASTester (http://www.808.dk/?code-csharp-nas-performance). The PC's hard drive speed also affects network transfer results, of course. Are you using an SSD?
The page is offline or gone - please check links before you paste them. Second - yes, I'm sure my PC can handle 400MB/s+ raw read/write, if only the network could deliver that much (3x 128GB ADATA S511 SATA 3 in RAID 0).
- killerus2 (Aspirant)
Hmm, the link works now - but a few hours ago, when I was writing my post, it was down...
Sync is over...
Nas1:/etc# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[0] sdf3[6] sde3[4] sdd3[3] sdc3[2] sdb3[1]
9743962880 blocks super 1.2 level 5, 256k chunk, algorithm 2 [6/6] [UUUUUU]
hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 838 MB in 3.01 seconds = 278.74 MB/sec
I'll admit - the previous result was 180MB/s.
It is more than 125MB/s, you're right; but as you obviously know, this command measures reads at the beginning of the disks.
At the end of the disks this speed would be at most half of that.
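You can spot-check the tail end with dd by skipping most of the device first, e.g. (the skip value is picked from the roughly 9.5 million 1MB blocks that mdstat reports for md2):
dd if=/dev/md2 of=/dev/null bs=1M count=1024 skip=9400000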
Second, as I mentioned before, I have lots of large files, and sometimes add-ons have to move or copy them inside the NAS from one place to another - Transmission, for example.
180MB/s vs 280MB/s would be a nice improvement...
Now the last test - make sure it all still works after a reboot, and check over Samba...
Last round FIGHT.
NAS performance tester 1.7 http://www.808.dk/?nastester
Running warmup...
Running a 800MB file write on Z: 5 times...
Iteration 1: 115,44 MB/sec
Iteration 2: 116,14 MB/sec
Iteration 3: 117,20 MB/sec
Iteration 4: 117,09 MB/sec
Iteration 5: 114,32 MB/sec
-----------------------------
Average (W): 116,04 MB/sec
-----------------------------
Running a 800MB file read on Z: 5 times...
Iteration 1: 118,10 MB/sec
Iteration 2: 116,96 MB/sec
Iteration 3: 116,85 MB/sec
Iteration 4: 117,64 MB/sec
Iteration 5: 117,58 MB/sec
-----------------------------
Average (R): 117,43 MB/sec
-----------------------------
The link that was down earlier works now - tested as above.
- killerus2 (Aspirant)
mdgm wrote: You can change the stripe cache size, but you do need to be careful: if you increase it too much you will use too much RAM, which would be counter-productive.
That's why I have 4GB of RAM in the NAS (1GB is stock).
Thanks for the advice.