Daryl_RL
Aspirant

Re: Post your performance results

Original set-up:

ReadyNAS NV
256MB memory
3 x Seagate ST3400832AS 400GB drives
X-RAID
Jumbo frames off
Journaling disabled
Fast CIFS writes enabled

Dell 2.4GHz P4 PC
Windows XP Home
512MB memory
Belkin Gigabit F5D5005 NIC
Jumbo frames off
Flow control enabled, not sure what the TCP offload option is (mentioned in Infrant's test set-up post - I don't see any option like that anywhere)

D-Link DGS-2205 switch (GigE, no jumbo frame support)

READ: 29.2
WRITE: 19.3


Made the following change to the system (ONLY change):

Replaced stock 256MB memory with a Patriot 1GB memory module (PEP1G2700SLL) from NewEgg

READ: 32.2
WRITE: 19.8


Replaced D-Link DGS-2205 switch with SMCGS8 switch (from Infrant compat. list). NO change to jumbo frame settings (yet):

READ: 31.3
WRITE: 19.2


OK - now with jumbo frames enabled on the NIC (MTU=9014) and the ReadyNAS NV (MTU=whatever the standard is, 7936??):

READ: 28.4
WRITE: 23.9


So I gained in write performance but lost read performance.

NOTE: I had posted previously about some volume errors I was getting when re-booting the NV. It appears they were caused by or related to the USB drive I had attached to the NV, NOT the jumbo frames.
Message 26 of 309
jching
Aspirant

Re: Post your performance results

michelkenny wrote:

Here's my info:

Stock NV
4 x Seagate ST3250823AS 250gb Hard Disk in X-RAID
All journaling disabled
Fast writes on


IO Meter Write: 19.321793 MBps
IO Meter Read: 26.803979 MBps


Do these performance numbers make sense? Correct me if I'm wrong, but aren't these numbers similar to regular IDE drives connected directly to an IDE interface? A SATA drive directly connected to the SATA interface should at least get twice this performance. A decent drive should get 3 times this performance.

Now take 4 of these drives and create a RAID5; it should be screaming along above 100MB/s. I don't think gigabit Ethernet is the bottleneck here, because I've done benchmarks over gigabit Ethernet and it's way higher than 30MB/s.

So, can someone explain why these numbers are so low? Where's the bottleneck?

--jc
Message 27 of 309
Helevitia
Aspirant

Re: Post your performance results

jching wrote:
michelkenny wrote:

Here's my info:

Stock NV
4 x Seagate ST3250823AS 250gb Hard Disk in X-RAID
All journaling disabled
Fast writes on


IO Meter Write: 19.321793 MBps
IO Meter Read: 26.803979 MBps


Do these performance numbers make sense? Correct me if I'm wrong, but aren't these numbers similar to regular IDE drives connected directly to an IDE interface? A SATA drive directly connected to the SATA interface should at least get twice this performance. A decent drive should get 3 times this performance.

Now take 4 of these drives and create a RAID5; it should be screaming along above 100MB/s. I don't think gigabit Ethernet is the bottleneck here, because I've done benchmarks over gigabit Ethernet and it's way higher than 30MB/s.

So, can someone explain why these numbers are so low? Where's the bottleneck?

--jc


The NV is definitely the bottleneck here. But if you compare the NV to the competition, you will see that not many are faster (plus these support forums are worlds better than the competition's, which is why I bought an NV). In time, as NAS devices become more popular, speed will become a bigger factor, but for now it isn't one for most people.
Message 28 of 309
jching
Aspirant

Re: Post your performance results

Helevitia wrote:

The NV is definitely the bottleneck here. But if you compare the NV to the competition, you will see that not many are faster (plus these support forums are worlds better than the competition's, which is why I bought an NV). In time, as NAS devices become more popular, speed will become a bigger factor, but for now it isn't one for most people.


If you're referring to Buffalo or similar, then yes, I agree. But how about FC RAIDs, like Medea, Infortrend, Xyratec? Granted, these are fiber channel, so they get 280+MB/s with 6 disks. That's about 50MB/s per SATA drive, which is what I would expect from a SATA RAID system.

But even if we're limited to gigabit Ethernet, I'd expect greater than 30MB/s. So why are the Buffalo/NV/Thecus/etc. so slow in comparison? Why aren't they getting similar performance from the SATA drives? Exactly what in the NV is the bottleneck? Are the RAID operations done in software? Is the parity done by the CPU?

Aside from fiber channel vs. gigabit Ethernet, what is different between the Medea/Infortrend and the Infrant/Buffalo?

--jc
Message 29 of 309
gfbarros
Aspirant

Re: Post your performance results

Please list what protocol you are using when gathering these performance numbers. It is my understanding that there are some significant performance differences amongst the different supported protocols...
Message 30 of 309
yoh-dah
Guide

Re: Post your performance results

jching wrote:
Helevitia wrote:

The NV is definitely the bottleneck here. But if you compare the NV to the competition, you will see that not many are faster (plus these support forums are worlds better than the competition's, which is why I bought an NV). In time, as NAS devices become more popular, speed will become a bigger factor, but for now it isn't one for most people.


If you're referring to Buffalo or similar, then yes, I agree. But how about FC RAIDs, like Medea, Infortrend, Xyratec? Granted, these are fiber channel, so they get 280+MB/s with 6 disks. That's about 50MB/s per SATA drive, which is what I would expect from a SATA RAID system.

But even if we're limited to gigabit Ethernet, I'd expect greater than 30MB/s. So why are the Buffalo/NV/Thecus/etc. so slow in comparison? Why aren't they getting similar performance from the SATA drives? Exactly what in the NV is the bottleneck? Are the RAID operations done in software? Is the parity done by the CPU?

Aside from fiber channel vs. gigabit Ethernet, what is different between the Medea/Infortrend and the Infrant/Buffalo?

--jc

A more apples-to-apples comparison would be to measure the performance of those PCI RAID cards when accessed over the network. Processing TCP packets is a huge part of the overhead, as is copying the packets in and out of the network file protocol, local file system, and RAID layers. Also, make sure when you're comparing write performance to use RAID 5, as the parity generation adds another level of overhead. And make sure you measure real performance and not "cached" performance, by using data several times larger than the cache.
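As a rough guide (assuming IOMeter's "Maximum Disk Size" is specified in 512-byte sectors, as in the results posted later in this thread): 2,048,000 sectors x 512 bytes is roughly a 1GB test file, and 8,192,000 sectors x 512 bytes is roughly 4.2GB - several times larger than even a 1GB ReadyNAS cache.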

Another big factor in this market space is power consumption. Leave your PC on with a RAID card and 4 drives and it'll eat up more than 200W of power. The ReadyNAS will use about 55W at the highest load and will settle down to 35W when the disks are in sleep mode -- about half of what your typical light bulb uses -- or close to nothing when in scheduled power-down mode. Consider this a growing requirement, as we all need to do our part in reducing environmental impact.
Message 31 of 309
svtmike
Aspirant

My results...

Brand new ReadyNAS NV, running two 400GB WD "YR" drives in X-Raid. CIFS access, Share security, fast CIFS writes on.

Journaling enabled: 11.42 MB/s write, 23.29 MB/s read
Journaling disabled: 16.68 MB/s write, 24.20 MB/s read

Computer: home built with Asus P4P800 motherboard, 2G memory, Pentium IV 3.2G, on board 3Com 3C940 GigE NIC. I did not disable any services because I want to see how it runs under my normal conditions.

Network: D-Link DGL-4300 Router, newly installed Cat5e cables and connectors, all Cat5e patch cables, Leviton Cat5e structured wiring cabinet. PC and ReadyNAS autonegotiated to 1Gb/S.
Message 32 of 309
jching
Aspirant

Re: Post your performance results

yoh-dah wrote:

Processing TCP packets is a huge part of the overhead as well as copying the packets in and out of network file protocol, local file system, and RAID layers.


Can someone test this out for me? The NV+ supports an FTP server. Let's see what FTP can do (both get and put). If the problem is NFS overhead, then FTP shouldn't have the same problem. FTP has overhead too, but it's small compared to NFS.

Also, you can eliminate the local file system overhead by copying to /dev/null:

dd if=/mnt/file.bin of=/dev/null bs=<rsize> count=<large>

Where *large* means something that will exceed the cache. I'm assuming the NV+ is mounted on /mnt. To do a write test, use /dev/zero. Of course, this is all done under Linux.
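For anyone who wants to try it, here's a minimal sketch of what I mean (assuming NFS is enabled on the NV; the mount point, export name, and sizes are only placeholders - pick a count large enough that the file exceeds the NV's cache several times over):

mount -t nfs <readynas-ip>:/<export> /mnt
# read test: pull a large existing file off the NAS and discard it locally
dd if=/mnt/file.bin of=/dev/null bs=256k count=32768   # ~8GB
# write test: stream zeros onto the NAS
dd if=/dev/zero of=/mnt/zeros.bin bs=256k count=32768   # ~8GB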

yoh-dah wrote:

Also, make sure when you're comparing write performance to use RAID 5, as the parity generation will add another level of overhead.


Ugh, shouldn't parity generation be done by hardware? NV+ is a hardware RAID system, isn't it?

It also depends on what you want to test. If you want to see how fast the controller can access the drives, you should use RAID0. RAID0 is your theoretical max performance. This will tell you how much overhead you have in RAID5 by comparing the results of the two tests. Using RAID0 will also tell you how much of the physical SATA disk bandwidth you're using.

To eliminate the network throughput as a potential bottleneck, use one SATA drive with RAID0, and ftp a large file into the system. This should theoretically flood the bandwidth of that single disk drive.
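Something like this would do it (host name and file names are placeholders; log in when prompted, and the file just needs to be a few GB of anything). Time the put, then divide the file size by the elapsed time to get MB/s:

dd if=/dev/zero of=/tmp/big.bin bs=1M count=4096   # make a ~4GB test file
ftp <readynas-ip>
ftp> binary
ftp> put /tmp/big.bin big.bin
ftp> quit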

My goal is to determine where the bottleneck is. Doing the above tests should be able to tell us...

--jc
Message 33 of 309
simonb681
Aspirant

Re: Post your performance results

ReadyNAS NV Rev B

2 x HGST 7K250 250GB HDDs

Firmware: RAIDiator™ v3.00c1-p2 [1.00a025]
Memory: 256 MB [2.5-3-3-7]

Online, 95% of 227 GB used - X-RAID (Expandable RAID), 2 disks

Access: CIFS, FTP, HTTPS
Performance: Enable Write Cache, Disable Full Journaling, Enable Fast Writes


PC

Intel D875PBX
Pentium 4 Northwood 3.0 GHz HT
2 x 1GB DDR RAM Dual Channel 2-3-3-6

NIC

Intel PRO/1000 CT
Auto Neg 1000 Mbps
Default Settings
Intel PROSet 11.1.0.19

Switch

NetGear GS105
Cat5e cables

Results

NV Jumbo On, Direct, NIC @16k

256K_Write 20.589032
256K_Read 20.875454

NV Jumbo On, Direct, NIC @9k

256K_Write 20.501461
256K_Read 20.939234

NV Jumbo On, Direct, NIC @4k

256K_Write 15.570781
256K_Read 20.919339

NV Jumbo On, Direct, NIC JF Disabled

256K_Write 7.799292
256K_Read 21.535115

NV Jumbo Off, Direct, NIC JF Disabled

256K_Write 14.291266
256K_Read 24.301739

NV Jumbo Off, via Switch

256K_Write 15.708299
256K_Read 24.55822

NV Jumbo On, via Switch, NIC @9k

256K_Write 20.244404
256K_Read 20.87902


These seem to be consistent with others' results, in that enabling JF helps writes but slows reads. The interesting result is 'NV Jumbo On, via Switch, NIC @9k': considering the switch is not supposed to support JF, it gives the same results as the direct connection. Furthermore, it seems I can still surf the net just as quickly via my 100BaseT Draytek ADSL router. Since I have already done the majority of the copying to the NV, I think I will stick with JF off for the improved reads, but it might be worthwhile enabling JF as and when I need to do any more bulk copying.
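One quick way to check whether jumbo frames are really making it through the switch is a don't-fragment ping from the Windows client to the NAS (the IP address is a placeholder; the payload needs to be a bit under the MTU to leave room for the IP/ICMP headers):

ping -f -l 8000 <readynas-ip>

If jumbo frames aren't supported end to end, the ping either reports that the packet needs to be fragmented or simply times out, rather than returning normal replies.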

I have two HGST T7K250 250GB HDDs on order to bring the RAID up to 1TB, as well as a 1GB stick of Patriot RAM, so I'll post back with updated performance tests once these are installed.
Message 34 of 309
luc1
Aspirant

Re: Post your performance results

Here's a few measurements I made some time ago:

ReadyNAS X6 rev.B w/ four 300GB HDs; journaling off, jumbo off

100Mbit network:

Internet === Linksys WAG354G modem+router+switch (100Mbit)
^ ^ ^
| | |__ ReadyNAS X6 rev.B
| |
(cable ~20m=65ft)-> | |_____ PC#1 (W2K 100Mbit)
|
|__ Netgear FS108 switch (100Mbit)
^
|__ PC#2 (W2K 1Gbit)

PC#1: write: 8.9 MBps; read: 8.4 MBps
PC#2: write: 8.6 MBps; read: 6.9 MBps


Got a 1Gbit network working for a while:

Internet === Linksys WAG354G modem+router+switch (100Mbit)
^
|__ DLink DGS-1005D switch (1Gbit)
^ ^ ^
| | |__ ReadyNAS X6 rev.B
| |
(cable ~20m=65ft)-> | |_____ PC#1 (W2K 100Mbit)
|
|__ Dlink DGS-1005D switch (1Gbit)
^
|__ PC#2 (W2K 1Gbit)

... but then I experienced a few network problems (switch incompatibilities after upgrading to RAIDiator 3.0?; see Network errors - b6 issue? for details) and I had to go back to my old 100Mbit network :cry:; here are the performance tests I made while everything was working:

PC#1: write: 9.7 MBps; read: 9.0 MBps
PC#2: write: 14.8 MBps; read: 28.4 MBps


Going to a 1Gbit network was definitely cool! Now waiting for Xmas to buy two *new* switches ;)…

Luc.
Message 35 of 309
edmebba
Aspirant

Results from a new user.

PC:
Dell XPS 400
CPU Pentium D 2.80Ghz
Memory 3.25G

Network Equipment:
Dlink DGS-1008D* ( 10/100/1000 No JF support? The D-Link user manual says otherwise but it's on the Infrant HDW Incompatibility list. )
* Will post results after getting a D-Link DGS-1005D that supports JF.

Network Topology:
Dell ------ D-Link------Infrant
Intranet-----|

NAS Configuration:
Infrant ReadyNAS NV+
Version: RAIDiator v3.01c1-p2 [1.00a032]
Memory: 256MB [2.5-3-3-7]

RAID
X-RAID Redundant
Disk Write Cache enabled
Full Data Journaling Disabled
Fast CIFS writes Enabled

Disks
4-500GB Hitachi "Deathstars" ( HDS725050KLA360 )


Results:

1000Mbit JF Off
Sectors: 2048000
Write MBps: 15.670103
Write Iops: 62.680412
Read MBps: 28.604001
Read Iops: 114.416004


Sectors: 8192000
Write MBps: 15.180526
Write Iops: 60.722104
Read MBps: 28.140746
Read Iops: 112.562983


1000Mbit JF On
Sectors: 2048000
Write MBps: 7.444973
Write Iops: 29.779891
Read MBps: 23.555285
Read Iops: 94.22114


Sectors: 8192000
Write MBps: 7.320481
Write Iops: 29.281924
Read MBps: 24.238073
Read Iops: 96.952293

Thanks
EDM
Message 36 of 309
edmebba
Aspirant

Re: Results from a new user.

edmebba wrote:
PC:
Dell XPS 400
CPU Pentium D 2.80Ghz
Memory 3.25G

Network Equipment:
Dlink DGS-1008D* ( 10/100/1000 No JF support? The D-Link user manual says otherwise but it's on the Infrant HDW Incompatibility list. )
* Will post results after getting a D-Link DGS-1005D that supports JF.

Network Topology:
Dell ------ D-Link------Infrant
Intranet-----|

NAS Configuration:
Infrant ReadyNAS NV+
Version: RAIDiator v3.01c1-p2 [1.00a032]
Memory: 256MB [2.5-3-3-7]

RAID
X-RAID Redundant
Disk Write Cache enabled
Full Data Journaling Disabled
Fast CIFS writes Enabled

Disks
4-500GB Hitachi "Deathstars" ( HDS725050KLA360 )


Results:

1000Mbit JF Off
Sectors: 2048000
Write MBps: 15.670103
Write Iops: 62.680412
Read MBps: 28.604001
Read Iops: 114.416004


Sectors: 8192000
Write MBps: 15.180526
Write Iops: 60.722104
Read MBps: 28.140746
Read Iops: 112.562983


1000Mbit JF On
Sectors: 2048000
Write MBps: 7.444973
Write Iops: 29.779891
Read MBps: 23.555285
Read Iops: 94.22114


Sectors: 8192000
Write MBps: 7.320481
Write Iops: 29.281924
Read MBps: 24.238073
Read Iops: 96.952293

Thanks
EDM


Got the D-Link DGS-1005D and got the same numbers - any suggestions? I don't see a direct way to set the host system to use jumbo frames; according to the user's guide I just need to make sure the Intel PROSet utility is being used, which it is. The host system card is an Intel PRO/1000 PL. The host system that is running IOMeter and the ReadyNAS are plugged directly into the D-Link DGS-1005D.

EDM
Message 37 of 309
yoh-dah
Guide

Re: Results from a new user.

edmebba wrote:
edmebba wrote:
PC:
Dell XPS 400
CPU Pentium D 2.80Ghz
Memory 3.25G

Network Equipment:
Dlink DGS-1008D* ( 10/100/1000 No JF support? The D-Link user manual says otherwise but it's on the Infrant HDW Incompatibility list. )
* Will post results after getting a D-Link DGS-1005D that supports JF.

Network Topology:
Dell ------ D-Link------Infrant
Intranet-----|

NAS Configuration:
Infrant ReadyNAS NV+
Version: RAIDiator v3.01c1-p2 [1.00a032]
Memory: 256MB [2.5-3-3-7]

RAID
X-RAID Redundant
Disk Write Cache enabled
Full Data Journaling Disabled
Fast CIFS writes Enabled

Disks
4-500GB Hitachi "Deathstars" ( HDS725050KLA360 )


Results:

1000Mbit JF Off
Sectors: 2048000
Write MBps: 15.670103
Write Iops: 62.680412
Read MBps: 28.604001
Read Iops: 114.416004


Sectors: 8192000
Write MBps: 15.180526
Write Iops: 60.722104
Read MBps: 28.140746
Read Iops: 112.562983


1000Mbit JF On
Sectors: 2048000
Write MBps: 7.444973
Write Iops: 29.779891
Read MBps: 23.555285
Read Iops: 94.22114


Sectors: 8192000
Write MBps: 7.320481
Write Iops: 29.281924
Read MBps: 24.238073
Read Iops: 96.952293

Thanks
EDM


Got the D-Link DGS-1005D and got the same numbers - any suggestions? I don't see a direct way to set the host system to use jumbo frames; according to the user's guide I just need to make sure the Intel PROSet utility is being used, which it is. The host system card is an Intel PRO/1000 PL. The host system that is running IOMeter and the ReadyNAS are plugged directly into the D-Link DGS-1005D.

EDM

EDM, please open a new thread and we can discuss there.
Message 38 of 309
simonb681
Aspirant

Re: Post your performance results

Minor update

Changes:

ReadyNAS NV Rev B

2 x HGST 7K250 250GB HDDs
2 x HGST T7K250 250GB HDDs

Firmware: RAIDiator™ v3.01c1-p3 [1.00a032]

Online, 37% of 681 GB used, X-RAID (Expandable RAID), 4 disks

All streaming services disabled (was HSMS, uPnP)

Results:

Immediately after expansion

Jumbo OFF, via Switch

256K_Write 11.754537
256K_Read 21.188849

Which I found disappointing, since I had assumed that a stripe set would be faster than a mirror, at least for reads. Thinking that perhaps the expansion had left the test file poorly optimised, I deleted it and allowed IOMeter to recreate it.

256K_Write 15.476495
256K_Read 25.818728

Much better, and a slight improvement over 2 disks

NOTE: there was also a reboot of NAS and PC after deleting the test file, so this might have had some influence too.

NV Jumbo On, via Switch, NIC @9k

Run 1

256K_Write 18.467976
256K_Read 20.927655

Run 2

256K_Write 20.036544
256K_Read 21.021096

I did two runs as I saw some quite large access times for the first run. Run 2 is better but no real change from 2 disks.

I also discovered that my switch does support Jumbo Frames, which is 😄

As before, since reads far outnumber writes in both quantity and priority, I am running with JFs disabled.
Message 39 of 309
GoFaster
Aspirant

went to GigE yesterday

...and ran some informal tests.

Asus P5W DH motherboard with onboard Marvell Yukon Ethernet
Core 2 Duo E6600 CPU
2 GB DDR-800 RAM
XP Pro/SP2 latest patches
Grisoft AVG active

Infrant ReadyNAS NV
RAIDiator™ v3.00c1-p2 [1.00a025]
Patriot 1024 MB [2.0-2-2-6]
Volume C: Online, 26% of 1088 GB used
X-RAID (Expandable RAID), 4 disks Seagate ST3400620AS
FTP, NFS,
Journaling disabled
CIFS Fast writes enabled
Disk write cache enabled
CIFS,HTTP, uPNP on, other protocols, streaming and discovery services off
UPS online

IOMeter test parameters per the sticky.

Baseline with existing Netgear 100Mbit switches (2) between test machine and NAS

256K_Read
Total I/Os per Second 37.34
Total MBs per Second 9.34
Average I/O Response Time (ms) 26.7719
Maximum I/O Response Time (ms) 187.9842
% CPU Utilization (total) 3.66%
Total Error Count 0

256K_Write
Total I/Os per Second 42.72
Total MBs per Second 10.68
Average I/O Response Time (ms) 23.4049
Maximum I/O Response Time (ms) 226.314
% CPU Utilization (total) 4.80%
Total Error Count 0


Replaced with 2 Netgear GS608 1000Mbit switches (no Jumbo frames)

256K_Read
Total I/Os per Second 137.18
Total MBs per Second 34.29
Average I/O Response Time (ms) 7.2889
Maximum I/O Response Time (ms) 192.7529
% CPU Utilization (total) 7.76%
Total Error Count 0

256K_Write
Total I/Os per Second 76.99
Total MBs per Second 19.25
Average I/O Response Time (ms) 12.9875
Maximum I/O Response Time (ms) 493.9347
% CPU Utilization (total) 6.14%
Total Error Count 0


Enabled Jumbo frames on ReadyNAS and test system (9014 byte)

256K_Read
Total I/Os per Second 102.92
Total MBs per Second 25.73
Average I/O Response Time (ms) 9.7157
Maximum I/O Response Time (ms) 209.1619
% CPU Utilization (total) 6.55
Total Error Count 0

256K_Write
Total I/Os per Second 102.47
Total MBs per Second 25.62
Average I/O Response Time (ms) 9.7576
Maximum I/O Response Time (ms) 160.2585
% CPU Utilization (total) 3.59%
Total Error Count 0

Checked ReadyNAS to verify network error counts were zero after each run.

I disabled jumbo frames for everyday use. Read performance is more important for me.
Message 40 of 309
eric_carroll
Aspirant

Re: Post your performance results

Just to ensure my post is preserved in the sticky, see my testing in another thread: http://www.infrant.com/forum/viewtopic.php?t=7520

Summary:

18 MB/s IOTEST write performance with journaling off, no jumbo frames, and GE switch (Netgear GS108)
The ReadyNAS+ compares very favourably to nForce4 Ultra RAID performance in RAID1 or RAID0+1 configuration for SATA
ReadyNAS beats the nForce4 Ultra PATA RAID controller.
ReadyNAS beats my desktop WinXP system as CIFS fileserver
ReadyNAS compares very favourably to offline storage systems like DLT tape drives

An additional observation is that with Retrospect up and running, I have never had such a good backup solution as the NV+ and Retrospect.
Message 41 of 309
michelkenny
Aspirant

Re: Post your performance results

Hey guys,

Last week I installed Windows Server 2003 as my main workstation OS on the same computer I used for testing in post #1. I had also doubled the transmit/receive buffers on the onboard Intel Gigabit NIC since it said I might get better performance. To do that I went to the NIC properties -> Advanced -> Performance Options and doubled the Transmit/Receive Descriptors.

Today I copied a 3 gig file from another computer with identical hardware and I was getting network usage of 40% of my 1 gigabit connection... that's almost 400 megabits! Before the switch to Server 2003 and doubling the buffers I would never get above 25-29%, which is around 29 MB/s (the results I had on my first speed test). So I decided to try a speed test on my ReadyNAS NV again... It started off reading at 40 MB/s but it leveled off to what was in my 1st post (29ish MB/s).

I'm wondering if there would be any difference if I increased the memory in my ReadyNAS. I guess I was also wondering, would it be possible to increase any kind of buffer on the unit itself that may lead to better network transfers? It looks like by increasing the NIC buffers on my workstation I get better transfer speeds, but only from another Windows computer.
Message 42 of 309
yoh-dah
Guide

Re: Post your performance results

michelkenny wrote:
Hey guys,

Last week I installed Windows Server 2003 as my main workstation OS on the same computer I used for testing in post #1. I had also doubled the transmit/receive buffers on the onboard Intel Gigabit NIC since it said I might get better performance. To do that I went to the NIC properties -> Advanced -> Performance Options and doubled the Transmit/Receive Descriptors.

Today I copied a 3 gig file from another computer with identical hardware and I was getting network usage of 40% of my 1 gigabit connection... that's almost 400 megabits! Before the switch to Server 2003 and doubling the buffers I would never get above 25-29%, which is around 29 MB/s (the results I had on my first speed test). So I decided to try a speed test on my ReadyNAS NV again... It started off reading at 40 MB/s but it leveled off to what was in my 1st post (29ish MB/s).

I'm wondering if there would be any difference if I increased the memory in my ReadyNAS. I guess I was also wondering, would it be possible to increase any kind of buffer on the unit itself that may lead to better network transfers? It looks like by increasing the NIC buffers on my workstation I get better transfer speeds, but only from another Windows computer.

You'll get a 12% boost by increasing memory on the ReadyNAS to 1GB. If you have further questions, please open a new topic, and we can take it off this sticky.
Message 43 of 309
TonyBerry
Aspirant

Re: Post your performance results



Client System: P4 2.0Ghz, 2GB RAM, Gb LAN (direct connection to X6), Windows XP Pro, SMB Mount to X6

X6 Array Setup: "Enable Disk Write Cache", "Disable Full Data Journaling", and "Enable Fast CIFS Writes", UPS Connected

IOMeter Setup: As specified by Infrant

XRAID without Jumbo Frames: Write - 43.952064MBps, Read - 43.885146MBps

XRAID with Jumbo Frames: Write - 35.096848MBps, Read - 36.381126MBps

RAID5 without Jumbo Frames: Write - 46.887815MBps, Read - 41.450465MBps

RAID5 with Jumbo Frames: Write - 51.639043MBps, Read - 43.862759MBps

First battery of tests were with 1GB file, second battery with 2GB file. Results were near identical.

Third battery of tests were with 256MB RAM in X6, fourth battery with 1GB RAM in X6. Results were near identical.

Still trying to get Linux IOMeter working.
Message 44 of 309
bhoar
Aspirant

Re: Post your performance results

Tony - that's odd that your RAID-5 read/write results are better with Jumbo, but your X-RAID results are worse with Jumbo. Something freaky is going on!

My suggestion: for each unit you test, create four separate, very large (e.g. 5GB or so) test files, then reboot both the ReadyNAS and your machine to clear any data caching that may occur. Then run the four tests again.
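If it helps, here's a minimal sketch of one way to pre-create those files on the Windows client (drive letter and file names are placeholders; fsutil allocates the size instantly, and the contents read back as zeros, so use it only where the data pattern doesn't matter):

fsutil file createnew D:\iotest1.bin 5368709120
fsutil file createnew D:\iotest2.bin 5368709120
fsutil file createnew D:\iotest3.bin 5368709120
fsutil file createnew D:\iotest4.bin 5368709120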

-brendan
Message 45 of 309
jcollins
Aspirant

Re: Post your performance results

Man, I wish I could get those stats... 🙂

RAID5 with Jumbo Frames: Write - 51.639043MBps, Read - 43.862759MBps
Message 46 of 309
Ender
Aspirant

Re: Post your performance results

Can someone assist me with running iometer or provide an alternate icf file? I've downloaded the one according to the guide, however I get no read speeds displayed when I run the test?

Here is the results csv file...

http://upload2.net/page/download/csOTQ3 ... s.zip.html
Message 47 of 309
yoh-dah
Guide

Re: Post your performance results

Ender wrote:
Can someone assist me with running iometer or provide an alternate icf file? I've downloaded the one according to the guide, however I get no read speeds displayed when I run the test?

Here is the results csv file...

http://upload2.net/page/download/csOTQ3 ... s.zip.html

Please create a new post.
Message 48 of 309
iposner
Aspirant

Re: Post your performance results

jching wrote:
Helevitia wrote:

The NV is definitely the bottleneck here. But if you compare the NV to the competition, you will see that not many are faster (plus these support forums are worlds better than the competition's, which is why I bought an NV). In time, as NAS devices become more popular, speed will become a bigger factor, but for now it isn't one for most people.


If you're referring to Buffalo or similar, then yes, I agree. But how about FC RAIDs, like Medea, Infortrend, Xyratec? Granted, these are fiber channel, so they get 280+MB/s with 6 disks. That's about 50MB/s per SATA drive, which is what I would expect from a SATA RAID system.

But even if we're limited to gigabit Ethernet, I'd expect greater than 30MB/s. So why are the Buffalo/NV/Thecus/etc. so slow in comparison? Why aren't they getting similar performance from the SATA drives? Exactly what in the NV is the bottleneck? Are the RAID operations done in software? Is the parity done by the CPU?

Aside from fiber channel vs. gigabit Ethernet, what is different between the Medea/Infortrend and the Infrant/Buffalo?

--jc


Fibre Channel RAID is controlled by the host bus adapter of the machine to which it is attached. This is a SAN (storage area network) which is completely different to a NAS (network attached storage). Because the disks in the NAS are controlled by the NAS (a computer itself), there is an extra overhead in communicating with this separate computer, caused by a) ethernet latency; b) cross server latency; c) inter-process latency, all of which delay the NAS from responding that it has written each packet of data.

If you really want performance, even SANs don't hack it -- the maximum throughput for a single HP fibre channel host bus adapter is currently 4Gbps (that's gigaBITS with a small 'b', i.e. roughly 512MB/s). However most of the SANs out there only support 2Gbps. Compare that with a four-channel HP 6404 RAID controller, which has 4 x 320MB/s (that's megaBYTES with a large 'B') = 1280MB/s, or about 10.24Gbps. Quite a difference. Of course you'd need a server with PCI-X slots or the like and enough disks to soak up that much IO, but that's what's possible.
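For reference, the conversions (8 bits per byte, treating 4Gb as 4096Mb and ignoring encoding and protocol overhead): 4096 Mbit/s / 8 = 512 MB/s, and 4 x 320 MB/s = 1280 MB/s, which is 1280 x 8 = 10,240 Mbit/s, i.e. about 10.24 Gbit/s.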
Message 49 of 309
victorhortalive
Aspirant

Performance data of Ready NAS NV

Here's my two penn'orth :

System : AMD 3200+; 2GB RAM; Seagate ATA ST3160812A
NAS : 256MB RAM; 4x Seagate ST3250620NS in XRaid

Direct LAN connection, Gigabit NIC and Gigabit Switch

Write: 26.538 IOPS, 6.635 MB/s
Read: 50.146 IOPS, 12.975 MB/s

Direct LAN connection, Gigabit NIC and Gigabit Switch + JUMBO FRAMES

Write: 29.672 IOPS, 7.418 MB/s
Read: 45.002 IOPS, 11.251 MB/s

Wireless Connection 802.11g USR 5450 - USR 5450 (Bridge Mode) (Replacing 20m of Cable with Wireless Link)

Write: 6.171 IOPS, 1.543 MB/s
Read: 4.904 IOPS, 1.226 MB/s

If the NAS was quiet, then I could use cable !!! 😞

Next week I'll replace the USR 5450s with USR 5461s. Let's hope for 2x 🙂

UPDATE

Results improved with better power cable management and changing Wireless Channel from 8 to 13 :

Write: 9.134 IOPS, 2.283 MB/s
Read: 8.679 IOPS, 2.170 MB/s
Message 50 of 309