Forum Discussion
dirkdigs
Feb 19, 2014Aspirant
Re: stripe size
What is the stripe size of an X-RAID2 array? Is it a fixed setting, or can it be changed?
killerus2
Nov 11, 2014Aspirant
StephenB wrote: I suspect that changing the stripe size will not improve your performance. Though I've never tried it.
If you never even tried changing the stripe size, then what are you basing that on?
I've been setting up RAID arrays since the days of four IDE ports on Socket A AMD boards (Duron/Athlon, 2001 or even earlier), testing with drives like 40GB Maxtors, 20GB Seagates, and 40GB SCSI disks on an Adaptec controller, right up to today's 2-3TB SATA 3 drives and even SSD arrays.
CHUNK SIZE IS THE BIGGEST PERFORMANCE FACTOR on every RAID level that is designed to get a performance advantage over a single drive (so not JBOD or RAID 1). Smaller chunk sizes (or stripe sizes, whichever you want to call them) give the best performance with large numbers of small files, and bigger chunk sizes are better for big files (a big file being anything over about 1MB).
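For anyone following along, the chunk size is set at array creation time with mdadm's `--chunk` flag. A minimal sketch, assuming six data partitions named `/dev/sd[abcdef]3` and a 256 KiB chunk, both illustrative values rather than anything the ReadyNAS firmware actually uses:

```shell
# Create a 6-disk RAID-5 array with an explicit 256 KiB chunk size.
# Device names and the chunk value are illustrative assumptions.
mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=256 /dev/sd[abcdef]3
```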
It's like moving gallons of water from one place to another with a bucket versus a spoon. Don't tell everyone that chunk size doesn't matter and then admit you never even tried changing it - just try it and tell me I'm wrong.
Tell me one more thing: why is the default chunk size 512KB when you set up RAID on Linux, but when the NAS (Ultra 6) detects empty drives it creates the array with a 64KB chunk? A strange, self-inflicted performance hit?
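Anyone who wants to verify what chunk their own unit created can check over SSH. A short sketch, assuming `/dev/md2` is the data volume (that is how this thread refers to it; confirm against `/proc/mdstat`, which also prints the chunk inline):

```shell
# Show all md arrays; the chunk appears inline, e.g. "... level 5, 64k chunk"
cat /proc/mdstat

# Or query the data array directly (assumes /dev/md2 is the data volume)
mdadm --detail /dev/md2 | grep -i 'chunk size'
```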
One more option next to volume creation would be very nice: the chunk size of the RAID - or at least set the default to 256KB.
StephenB wrote: Generally you won't see any performance gain with teaming if you are testing with a single user. Teaming protocols are designed for trunking, and usually are designed to limit a single data flow to 1 gbit. That prevents packet loss when the connection is switched to a normal gigabit client.
And what if I have 2 NICs in my PC, teamed up with LACP on the switch? I get better performance, and that's exactly why you get to choose a teaming mode, the same way you choose a RAID level for disks. Theoretically I have 2 gigabits on both sides, PC and NAS.
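For reference, an LACP bond on the Linux side looks roughly like this. A sketch with placeholder interface names (`eth0`/`eth1` are assumptions), and it only works if the switch ports are configured as an LACP trunk. Note that 802.3ad hashes traffic per flow, so a single TCP stream still tops out at 1 gigabit - which is the point StephenB was making:

```shell
# Create an 802.3ad (LACP) bond; eth0/eth1 are placeholder interface names
ip link add bond0 type bond mode 802.3ad miimon 100

# Slave interfaces must be down before they can be enslaved
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0

ip link set bond0 up
```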
StephenB wrote: If you have SSH enabled you can measure the performance of the raw raid array. That lets you separate the network and SAMBA performance.
If I hadn't done this, how would I know the drives CAN GO FASTER? Yes, I tested /dev/md2 on the NAS (6x1.8T): 180MB/s+ raw read. That isn't brilliant - two drives (not SIX in RAID 5) can do that without breaking a sweat - but I can live with it.
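For anyone who wants to reproduce that raw number: a sequential read of the md device over SSH bypasses Samba and the network entirely. A sketch, again assuming `/dev/md2` is the data volume; `iflag=direct` bypasses the page cache so the figure reflects the disks rather than RAM:

```shell
# Raw sequential-read benchmark of the array itself (run over SSH).
# Reads 4 GiB from the assumed data volume /dev/md2 and reports throughput.
dd if=/dev/md2 of=/dev/null bs=1M count=4096 iflag=direct
```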
StephenB wrote: You might also test with NasTester (http://www.808.dk/?code-csharp-nas-performance). The PC hard drive speed of course also impacts the network performance. Are you using an SSD?
That page is offline or gone - please check links before you paste them. Second: yes, I'm sure my PC can handle 400MB/s+ raw read/write if only the network could deliver that much (3x128GB ADATA S511 SATA 3 in RAID 0).
StephenB wrote: Another benchmark would be to set up windows file sharing on another PC, and compare that performance with the NAS (on the same client PC).
I have two PCs (one with an AMD 8350, 32GB RAM, 4 SSDs, and 2x1Gb LAN teamed with 9k jumbo frames on; the second a Phenom II 965 with 8GB RAM, one 60GB SATA 3 SSD (300MB/s+ read/write), and a 1Gb NIC), plus a laptop with an SSD.
Between the PCs I get 125 MB/s without any compromise - to and from the NAS, the numbers are as you can read above.
More or less all the machines get the same read results; only with jumbo frames on do I see any gain over the rest (and only when writing to the NAS).
I don't have zillions of files - I have mostly 200MB+ files. I need fast reads of LARGE FILES, not fast access to small Word documents like in an office.
StephenB wrote: In any event, smallnetbuilder measured about 95 MB/s read and about 65 MB/s write with RAID-5 when they reviewed the Ultra 4 in 2010. You won't get 125 MB/s (1 gigabit / 8 -> 125 MB/s, not 128), even with jumbo frames. Ethernet has overhead, and CIFS requires responses, which create dead times when the network isn't maxed.
First: I wrote 128 and you correct me to 125 - really, over 3MB? Does it make any difference? Fine, 125MB/s, my mistake.
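For anyone following the numbers, the 125 falls straight out of the line rate: gigabit Ethernet carries 1000 Mbit/s, and dividing by 8 bits per byte gives the theoretical ceiling before any Ethernet, IP, or CIFS overhead:

```shell
# 1000 Mbit/s line rate / 8 bits per byte = theoretical ceiling in MB/s
echo "$((1000 / 8)) MB/s"   # prints "125 MB/s"
```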
Second (but first in your post): check this out: http://www.readynas.com/?page_id=3962 - look under the Ultra 6 (not the 4 or the Plus). They are clearly using 6x2TB drives and getting better performance than you claim. With CIFS drag & drop they got 107MB/s read and 87MB/s write, and WITH JUMBO FRAMES even better results: 115MB/s read and 100MB/s write.
Netgear got those results and shows them to customers with this product - and then you tell me 65MB/s write is normal and I should be glad of it? There is one small difference: they ran RAID 0 and I run RAID 5. For reads I'd say there is no difference; for writes, yes, the RAID level can hurt performance - but not when everything is throttled down to a single 1Gb NIC anyway. 125MB/s really isn't that challenging. I'd even say two drives in RAID 0 can do 125MB/s - and we are talking about six.
That's all just talk about talk... we can go back and forth like this forever. Try changing the chunk size and add some RAM to the NAS (I have an Ultra 6 with 4GB and 2 NICs in a team) - just try it instead of talking about it.
I'm almost done changing the chunk size, but I still have to set up LVM, check that everything is correct, and reboot. I'll post my speeds after the change.
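For anyone attempting the same thing: a sufficiently recent mdadm can reshape the chunk of an existing array in place rather than recreating it. A sketch, assuming `/dev/md2` is the data array and a 256 KiB target; a reshape across six large disks runs for many hours and should only be attempted with a verified backup:

```shell
# Reshape an existing array to a 256 KiB chunk (recent mdadm required).
# Long-running and risky on live data: have a verified backup first.
mdadm --grow /dev/md2 --chunk=256

# Follow the reshape progress
watch cat /proc/mdstat
```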
Most of all: I didn't write this post to argue about all that - I wrote it to get help, in case someone has already done this before.
Has really no one ever even thought about changing this crappy chunk size to something sane, e.g. 256KB? I'm stunned.
The drives will finish syncing soon... now I can go to sleep.