
roveer1
Guide

Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

I have a ReadyNAS Pro 6 with two 1 Gb ports, running firmware 6.9.4.

 

I wanted to see if, by setting up a NIC bond (LACP), I could get more than 1 Gb/s of throughput.

 

First I attached both NAS NICs to the network, SSH'd into the NAS, and ran iperf.

 

I then ran iperf against both network cards simultaneously from two different workstations on my LAN. Each came back at 900+ Mb/s (just about saturating both adapters). Good start.
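
Roughly what that looked like (the IP addresses here are just examples, and the exact binary depends on what's installed - iperf vs iperf3):

# on the NAS, over SSH
iperf -s

# on workstation 1, pointed at the first NAS NIC
iperf -c 192.168.1.10 -t 30

# on workstation 2, pointed at the second NAS NIC
iperf -c 192.168.1.11 -t 30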

 

I then bonded the NAS NICs (LACP, layer 2+3 hash).

 

I set up a LAG (LACP) on my managed switch across both NAS ports.

 

I then re-ran the simultaneous iperf tests and still only got 900+ Mb/s in total.  I was expecting something faster.

I tried all the different bonding modes and, other than getting poorer performance, I never saw anything above 900 Mb/s.

Everything was rebooted between changes, including the switch.

 

Why can't I seem to get better performance?

 

Thanks.

 

Model: ReadyNAS RNDP6000|ReadyNAS Pro 6 Chassis only
Message 1 of 10
StephenB
Guru

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

When you use LACP, each data flow is assigned to one of the output NICs - that's done by the hash choice (layer 2, layer 2+3, layer 3+4).  Whichever option you pick, you end up with a coin-flip for each flow.  The outcome isn't random, though - for a given flow between two devices, you'll always end up with the same assignment.

 

In your case, the two iperf flows happened to hash to the same answer for those two PCs, so both iperfs ended up running over the same NIC.   One option is to try changing the IP addresses of the PCs (with a hash that includes layer 3), and see if you can find addresses that give better performance.  That likely won't help in the PC->NAS direction, as most switches hash on layer 2 (and don't let you change that).
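
To make that concrete (addresses made up, and ignoring the MAC contribution for simplicity): with a layer 2+3 policy the Linux bonding driver computes something like (src MAC XOR dst MAC XOR src IP XOR dst IP) mod (number of links).  A PC at 192.168.1.20 and a PC at 192.168.1.22 both talking to the NAS at 192.168.1.10 can easily hash to the same link, and then both flows share one NIC no matter how much data they push.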

 

Another option is to try a static LAG on the switch, and use "round robin" on the NAS.   There is a disadvantage though - the NAS will try to send more than 1 Gb/s to a single PC, which will create packet loss in the switch.  TCP will back off the data rate to compensate, but it might be unstable.   If you see that, you can also try enabling Ethernet flow control in the switch.
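
If you try that, you can confirm over SSH which mode the bond actually ended up in (this assumes the standard Linux bonding driver state is exposed and that bond0 is the bond's name):

cat /proc/net/bonding/bond0
# the "Bonding Mode:" line should read "load balancing (round-robin)",
# and both NICs should be listed as slaves with "MII Status: up"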

Message 2 of 10
Retired_Member
Not applicable

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

Hi @roveer1, you also might want to try "adaptive load balancing" as the teaming mode.

Message 3 of 10
StephenB
Guru

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?


@Retired_Member wrote:

you also might want to try "adaptive load balancing" as the teaming mode.


Worth a try, but I'd try the static LAG/round-robin approach first.

 

On the NAS->switch path, ALB (and TLB) will dynamically adjust the traffic balance between NICs.  Round-Robin will give the same performance when all the ethernet links in the LAG are the same speed.

 

On the switch->NAS path, ALB selects a NIC at the beginning of the traffic flow (based on NIC loading at the time).  That NIC is used for the life of the traffic flow.  There are still some scenarios where the receive flows aren't well balanced, since the NAS can't shift an ongoing flow to another NIC.  That shouldn't happen with static LAG (though I haven't seen any information from Netgear on how the switches load-balance in that direction).

 

So the net here is that ALB should be about the same as round-robin/static LAG for downloads from the NAS, but it won't be quite as good for uploads to the NAS.  The simple iperf test might not show the difference in upload performance, but it will still show up in real-world use.
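
One way to see which case you are in is to watch the per-NIC counters while a transfer is running (interface names are whatever the NAS actually uses for the bonded ports):

ip -s link show eth0
ip -s link show eth1
# or: cat /proc/net/dev
# if the RX/TX byte counts only climb on one interface, both flows have
# been parked on the same NIC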

 

LACP is generally used to serve as a "trunk" that carries lots of connections.  It works well in those situations, and has the benefit that it won't overrun a single gigabit client connection (though it will overrun WiFi or Fast Ethernet client connections).  But with just a couple of client connections it will often under-utilize some of the links. 

 

Message 4 of 10
roveer1
Guide

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

Thanks to everyone who replied.

 

I think I'm going to drop bonding altogether.

 

I've only got a few workstations that access the NAS (mostly for backup storage).  I was trying to set it up to be a target for my NIC Teaming testing on another machine.  

 

The whole concept of teaming/bonding is somewhat mysterious, in that most vendors don't document exactly what is happening when bonding is configured.

 

The simple idea of taking (2) 1 Gb NICs, bonding them, and getting twice the speed is kind of a myth once you consider that it doesn't apply to a single stream and only starts to matter when you're talking about multiple users, etc.  I'm a single-stream environment.  I was hoping to bond my ReadyNAS, NIC-team a Windows Server 2016 box, and get ~2 Gb/s uploads (~150-200+ MB/s).  It seems that's not going to happen even with multi-threaded SMB 3.1 (although I have seen it on YouTube).

 

There are too many variables in my old hardware to expect that I can get that to work.  My Netgear switch, while offering LACP, gives no options; the ReadyNAS might be tunable, but the GUI is very limited; and my workstation hardware is older.

 

I'm happy enough with the fact that, after upgrading my ReadyNAS Pro 6 to the latest OS 6 and using Win10 on the workstation, I am getting full gigabit transfers.

 

I'm looking at 10 Gb for my next step.  I already have it between a server and a FreeNAS box: 400+ MB/s on that link.

 

Thanks for your posts.

Message 5 of 10
StephenB
Guru

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

Well, using a static LAG in the switch and round-robin in the NAS will give you the 2 Gb/s performance you are looking for on the NAS->switch connection.  How to get that performance on the switch->Windows PC side is another matter.

 


@roveer1 wrote:

 

I'm looking at 10 Gb for my next step.  I already have it between a server and a FreeNAS box: 400+ MB/s on that link.


Personally I use 10 gigabit Ethernet in my main NAS (RN526x), my backup NAS (RN524x), and my Windows server.  Definitely a good approach, especially now that multi-gigabit switch prices are coming down.

 

LACP is still configured on my pro-6 (secondary backup), but that is only because I haven't gotten around to reconfiguring it.

Message 6 of 10
roveer1
Guide

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

By the way: applying round-robin on my ReadyNAS Pro 6 dropped my transfer speed from 900 Mb/s into the 300 Mb/s range from one workstation.  Definitely headed in the wrong direction.  I'll probably keep fiddling with it, as I don't give up too easily despite my previous post.  Oh, how nice it would be to just pull a piece of new equipment (switch, NIC, motherboard) off the shelf like Linus (LTT) does.  Tinkering would be so much more fun.

 

Roveer

 

Message 7 of 10
Retired_Member
Not applicable

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?

Well, @roveer1, if you just want traffic between the NAS and a Windows Server 2016 box to be as quick as possible, why not consider the following:

1) On the NAS side disable bonding

2) Assign two different static IP addresses to the two NICs in the NAS

3) On the Windows Server 2016 side, introduce a second NIC if one is not already available

4) Assign two different static IP addresses to the two NICs in the Windows Server 2016 box

5) Set up two static routes: a) NIC1 in the NAS talking to NIC1 in the server (connection 1), and b) NIC2 in the NAS talking to NIC2 in the server (connection 2)

6) Use the server to manage which traffic goes through which connection.

 

I did not test this, as I do not need that kind of scenario for my purposes, but I just wanted to share the idea with you.
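
On the Windows side, step 6 might look roughly like this in PowerShell (purely a sketch with made-up addresses and adapter names - pin each NAS IP to a specific server NIC with an on-link host route, then point one copy job at each address):

New-NetRoute -DestinationPrefix "192.168.1.10/32" -InterfaceAlias "Ethernet 1" -NextHop 0.0.0.0
New-NetRoute -DestinationPrefix "192.168.1.11/32" -InterfaceAlias "Ethernet 2" -NextHop 0.0.0.0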

Happy tinkering and kind regards

 

Message 8 of 10
roveer1
Guide

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?


@Retired_Member wrote:

Well, @roveer1, if you just want traffic between the NAS and a Windows Server 2016 box to be as quick as possible, why not consider the following:

1) On the NAS side disable bonding

2) Assign two different static IP addresses to the two NICs in the NAS

3) On the Windows Server 2016 side, introduce a second NIC if one is not already available

4) Assign two different static IP addresses to the two NICs in the Windows Server 2016 box

5) Set up two static routes: a) NIC1 in the NAS talking to NIC1 in the server (connection 1), and b) NIC2 in the NAS talking to NIC2 in the server (connection 2)

6) Use the server to manage which traffic goes through which connection.

 

I did not test this, as I do not need that kind of scenario for my purposes, but I just wanted to share the idea with you.

Happy tinkering and kind regards

 


I've actually done a little testing with this.  That is how I currently have my NAS sitting: both ports connected to Ethernet with their own IP addresses.  The other night I mapped two workstations, each to one of the IPs, and then sent large files to the NAS from both workstations.  I believe the throughput was pretty high on both links.

 

Of course that doesn't help my attempts to get a single stream running faster, but it did show that the NAS is capable of taking data on both ports.  I'm sure it starts to bottleneck at the disk subsystem, but 100 MB/s per link (200 MB/s aggregated) should be sustainable on the X-RAID array that I'm using.

 

The only way to really see blazing speeds (the kind I'm looking for) is to have current-gen hardware, super-fast NVMe SSDs, motherboards and CPUs with plenty of PCIe lanes, and 10 Gb cards.  This is Toolman stuff (grr rr rr) and big bucks.  I'm more in Dell R510/R710 territory, which is great, but nowhere near cutting edge.  I will say that my 19-dollar Mellanox cards are giving me 9.48 Gb/s in iperf.  My only reason for trying to bond things is that these really great, cheap Mellanox cards are x8 cards, and most of my older hardware doesn't have x8 PCIe slots, or I'm left deciding whether to give my x8 slot to the HBA or the NIC...

Message 9 of 10
StephenB
Guru

Re: Why can't I get better than 1gb performance bonding NIC's on my ReadyNAS Pro 6?


@roveer1 wrote:

Of course that doesn't help my attempts to get a single stream running faster, but it did show that the NAS is capable of taking data on both ports.  I'm sure it starts to bottleneck at the disk subsystem, but 100 MB/s per link (200 MB/s aggregated) should be sustainable on the X-RAID array that I'm using.

 


You can get some info on the raw RAID performance for large file transfers over SSH (using dd to transfer to/from /dev/null).  I agree the Pro can go faster than 100 MB/s.
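
For example (the share and file names below are just placeholders):

# raw read speed of an existing large file, bypassing the network
dd if=/data/yourshare/bigfile.bin of=/dev/null bs=1M

# raw write speed (creates an 8 GB test file - delete it afterwards)
dd if=/dev/zero of=/data/yourshare/ddtest.bin bs=1M count=8192 conv=fdatasync

For the read test, pick a file bigger than the NAS's RAM (or drop the page cache first) so you measure the disks rather than the cache.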

 

Converting the Pro to OS 6 would allow you to use SSD tiering (though you'd need to dedicate some bays to SSDs of course).  Right now that is metadata only, but they are adding data tiering in the 6.10 beta.  You are still limited by the SATA interface though.

 

I don't know of any way to upgrade the Pro's network card. 

 

If you are considering a ReadyNAS upgrade, the RN528 and RN628 might be worth thinking about.  They both have 10GBase-T, and 8 bays should be enough for SSD tiering.  12-bay rackmounts are another option, but they would be quite a bit more expensive. 

Message 10 of 10