

x0m0
Aspirant

RN716X performance

Hello all

We're looking at the ReadyNAS 716X, and read the AnandTech review linked from the readynas.com resources tab:
http://www.netgear.com/business/products/storage/readynas/readynas-desktop.aspx#tab-resources

The published single-client CIFS and iSCSI numbers are approx. 116 MB/s over a 10GbE connection, with the 716X fully loaded with SSDs. That seems suspiciously close to the throughput of a single 1GbE link. The figures are here:
http://www.anandtech.com/show/7608/netgear-readynas-716-review-10gbaset-in-a-desktop-nas/3

Can someone else comment on this? Is there a problem with Anandtech's article? I've seen better read performance from weaker NAS hardware.

Cheers
Message 1 of 8
ahpsi1
Tutor

Re: RN716X performance

Interesting. I'm shooting in the dark here, but their test setup uses quad NICs and 802.3ad dynamic link aggregation with the "Src/Dest MAC, VLAN, EType, Incoming Port" hashing mode. If what I read here -> http://www.readynas.com/wp-content/uploads/2008/09/ReadyNAS-Teaming.pdf and here -> http://www.readynas.com/forum/viewtopic.php?f=7&t=52156 is correct (specifically the line
“Src/Dest MAC, VLAN, EType, Incoming Port” – this corresponds to Layer 2 xmit hash policy
and in the second link the lines reading
layer2

Uses XOR of hardware MAC addresses to generate the hash. The formula is

(source MAC XOR destination MAC) modulo slave count

This algorithm will place all traffic to a particular network peer on the same slave.
) wouldn't that mean that, testing from a PC to the NAS with their setup, you would never see a single operation use more than one of the aggregate members (meaning never more than 1 Gbit of throughput)?
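
To make that concrete, here is a minimal sketch (plain Python, just illustrating the formula quoted above - not the actual bonding driver code, and the MAC addresses are made up):

# Layer2 xmit hash policy as quoted above:
# (source MAC XOR destination MAC) modulo slave count
def layer2_slave(src_mac, dst_mac, slave_count):
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % slave_count

pc_mac = "00:11:22:33:44:55"    # made-up client MAC
nas_mac = "aa:bb:cc:dd:ee:ff"   # made-up NAS MAC

# The PC/NAS MAC pair never changes, so every frame of every transfer
# hashes to the same slave - at most one 1 Gbit link is ever used.
for transfer in range(4):
    print("transfer", transfer, "-> slave", layer2_slave(pc_mac, nas_mac, 4))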
Message 2 of 8
x0m0
Aspirant

Re: RN716X performance

Wow. I skimmed the article originally and assumed that the client was also connected to the 10GbE switch. But it's not. The client is connected with multiple link-bonded 1GbE connections to switch A, which connects via 10 Gb/s SFP+ to switch B, which then connects via 10GBASE-T to the ReadyNAS RN716X.

@ahpsi - Interesting. There should be many I/O operations in each iSCSI test, so I had hoped that multiple 1 Gb/s operations could be in flight, yielding higher aggregate performance. But I don't know enough about the link-bonding hash modes. Perhaps the RR or L3 modes would be better for single-client tests.

In any event, I hope Netgear publishes some performance numbers for 10GbE Client <--> 10GbE RN716X test setups, or at least solicits someone to publish a review of such a test setup.
Message 3 of 8
StephenB
Guru

Re: RN716X performance

x0m0 wrote:
...I haven't thought enough about the link bonding hash mode. Perhaps a hash mode of RR or L3 would be more favorable to the single-client test numbers...
With 802.3ad, all methods of frame distribution are required to send all the frames for each conversation out the same NIC. That ensures that the frames all reach the receiver in the correct order.

One consequence is that, with gigabit links, 802.3ad always limits single-client performance to 1 Gb/s, no matter what hash method is picked.
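
As a rough illustration (reusing the layer2-style hash quoted earlier in the thread, with made-up MAC addresses), several clients can spread across the slaves, but any one client's conversation always lands on a single slave:

def layer2_slave(src_mac, dst_mac, slave_count):
    # XOR of the MAC addresses, modulo slave count (as described in the bonding docs quote)
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % slave_count

nas_mac = "aa:bb:cc:dd:ee:01"                               # made-up NAS MAC
clients = ["00:16:17:00:00:%02x" % i for i in range(4)]     # made-up client MACs

for mac in clients:
    print(mac, "-> slave", layer2_slave(mac, nas_mac, 4))
# Different clients may land on different slaves, so their combined traffic can
# use more than one link - but each individual client still uses exactly one.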
Message 4 of 8
ahpsi1
Tutor

Re: RN716X performance

Yeah, I'd much rather see either the PC with a 10Gb NIC connected directly to the RN716X, or at least through a 10Gb switch. How they tested has value, considering many shops won't have 10Gb available, but I would still want 'best case scenario' numbers. I have a never-used dual-port 10Gb NIC just sitting here waiting for something worthy of its capabilities... Now if only I had the RN716X...
Message 5 of 8
mdgm-ntgr
NETGEAR Employee Retired

Re: RN716X performance

If I had a 716X I'd be very interested to see what numbers I could get.

When I tested with 6x 2TB enterprise disks in my 516 (a local test, not over my network) I could get speeds far in excess of gigabit, so with its faster CPU I would expect the 716X to deliver very, very fast speeds, especially with SSDs installed.
Message 6 of 8
kevinfor1
Aspirant

Re: RN716X performance

Here is our setup and the results of our testing:

RN716X ---> XS712T ---> Win2012R2

The server is an i7-960 on a Gigabyte X58A-UD7 motherboard; the drives are a 250GB Samsung 830 and an Intel 510 SSD.
The NIC is an Intel X540-T2 dual-port 10Gb card, bonded using 802.3ad and connected directly to the XS712T. The RN716X uses its 10Gb ports,
also bonded, and nothing else is on the switch. The SSDs are plugged into the older SATA 3Gb/s ports on this first-generation i7 board.
The NAS has mixed drives - 3x 4TB ST4000VN, 2x ST32000644NS, 1x HDS723030ALA640 - about 12TB in X-RAID/RAID5, 80% full.

Copying about 200GB (right-click a bunch of folders, select copy, then paste to \\nasip\share) -
files ranging from documents to videos to Windows files - the result is about 215 MB/s.

We thought this was slow, so we opened a tech support case with Netgear (Level 2 SE). They reviewed the switch/NAS config and tried adaptive load balancing and 802.3ad
with various hash types - basically the same result. The SE said he did a "consult" with Level 3 support, who basically said a "variety of factors affect speed" but couldn't
give us any numbers for what we should expect... They also didn't really do a good job explaining the advantages/disadvantages of the different types of load balancing.
You would honestly think the marketing/presales engineering department would be pushing their 10Gb switches like the XS712T together with the 10Gb ReadyNAS, with
some type of whitepaper covering this kind of client/server setup in more detail... it's a "Netgear solution" except for the client NIC cards.

Let me know your thoughts on how to increase performance... We are using the NAS for bulk storage,
so unless someone can build a 4TB SSD for under $500 each, we're sticking with HDDs in the NAS 🙂
Message 7 of 8
StephenB
Guru

Re: RN716X performance

My understanding is that 802.3ad improves throughput for multiple connections, but won't go over 10 Gb/s for a single connection (or over 1 Gb/s if you were bonding gigabit links). So it might not help in your test case anyway, unless you were running multiple tests in parallel. Other bonding modes might not have that particular restriction - in 802.3ad it is intended to make sure that bonding on a trunk doesn't create an out-of-order packet stream for the ultimate receiver, and also to make sure that a receiver on an unbonded connection doesn't see packet loss at the physical layer (because the trunk is carrying more data than the client's connection can handle).

I'd try a test with bonding off, since the performance is well below a single 10 Gb/s connection anyway. That will tell you whether bonding itself is hurting performance. You might also try varying the MTU size.

But your hard drives won't go faster than about 165 MB/s (per http://www.storagereview.com/seagate_nas_hdd_review) - and you are not doing large sequential reads if you are moving a lot of documents around.

So my guess is you are limited by the drives or possibly by btrfs. You could use ssh on the NAS to do some internal tests to benchmark raw performance, taking the network out of the equation. Copying files to /dev/null is a good basic test.
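
Something along these lines would give a rough number (a Python sketch, assuming Python is available over ssh on the NAS; the file path is just a placeholder for one of your own large files):

# Rough local sequential-read check: stream one large file and report MB/s.
# Use a file bigger than the NAS's RAM so the page cache doesn't skew the result.
import time

path = "/data/yourshare/bigfile.bin"   # placeholder path - point it at a real large file
chunk_size = 1024 * 1024               # 1 MiB reads
total_bytes = 0

start = time.time()
with open(path, "rb") as f:
    while True:
        buf = f.read(chunk_size)
        if not buf:
            break
        total_bytes += len(buf)
elapsed = time.time() - start

print("read %.0f MB in %.1f s -> %.0f MB/s" % (total_bytes / 1e6, elapsed, total_bytes / 1e6 / elapsed))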

Or (as a test) try removing the hard drives, setting up the NAS with a single SSD, and seeing how that performs. I'm not suggesting that you deploy with SSDs, just that you should test with one to determine whether the hard drives are the bottleneck.

Also, it might be useful to set up a multi-user test, and see how the aggregated throughput compares to your single-threaded test.
Message 8 of 8