roveer1
Oct 29, 2018 · Guide
Why can't I get better than 1Gb performance bonding NICs on my ReadyNAS Pro 6?
I have a ReadyNAS Pro 6 with two 1Gb ports, running firmware 6.9.4.
I wanted to see whether setting up a NIC bond (LACP) would get me more than 1Gb/s of throughput.
First I attached both NAS NICs to the netwo...
StephenB
Oct 30, 2018 · Guru - Experienced User
Retired_Member wrote:
you also might want to try "adaptive load balancing" as the teaming mode.
Worth a try, but I'd try the static LAG/round-robin approach first.
On the NAS->switch path, ALB (and TLB) will dynamically adjust the traffic balance between NICs. Round-Robin will give the same performance when all the ethernet links in the LAG are the same speed.
On the switch->NAS path, ALB selects a NIC at the beginning of the traffic flow (based on NIC loading at the time). That NIC is used for the life of the traffic flow. There are still some scenarios where the receive flows aren't well balanced, since the NAS can't shift an ongoing flow to another NIC. That shouldn't happen with static LAG (though I haven't seen any information from Netgear on how the switches load-balance in that direction).
So the net here is that ALB should be about the same as round-robin/static LAG for downloads from the NAS, but it won't be quite as good for uploads to the NAS. The simple iperf test might not show the difference in upload performance, but it will still show up in real-world use.
LACP is generally used to serve as a "trunk" that carries lots of connections. It works well in those situations, and has the benefit that it won't overrun a single gigabit client connection (though it will overrun wifi or fast ethernet client connections). But with just a couple of client connections it will often under-utilize some of the links.
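One way to probe this in practice (assuming iperf is installed on the NAS and the clients; the address and duration below are just placeholders) is to compare a single stream with several parallel streams:

    # single TCP stream: per-flow hashing pins this to one 1Gb link
    iperf -c 192.168.1.50 -t 30
    # several parallel streams, run from a client that itself has more than
    # 1Gb/s of network bandwidth: the flows can spread across the LAG links
    # if the switch hashes on ports, so the aggregate may pass 1 Gb/s
    iperf -c 192.168.1.50 -t 30 -P 4

Whether the parallel streams actually spread across the links depends on the hash the switch and NAS use, which is exactly why results vary so much between setups.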
roveer1
Oct 30, 2018 · Guide
Thanks everyone who replied.
I think I'm going to drop bonding altogether.
I've only got a few workstations that access the NAS (mostly for backup storage). I was trying to set it up to be a target for my NIC teaming testing on another machine.
The whole concept of teaming/bonding is somewhat mysterious, in that most vendors don't document exactly what happens when bonding is configured.
The simple idea of bonding two 1Gb NICs to get twice the speed is kind of a myth once you realize it doesn't apply to a single stream and only starts to pay off with multiple users, etc. I'm a single-stream environment. I was hoping to bond my ReadyNAS, NIC-team a Win 2016 server, and get ~2Gb/s uploads (~150-200+ MB/s). Seems like that's not going to happen, even with multi-threaded SMB 3.1 (although I have seen it on YouTube).
Too many variables in my old hardware to expect that I can get that to work. My Netgear switch offers LACP but gives no options, the ReadyNAS might be tunable but the GUI is very limited, and my workstation hardware is older.
I'm happy enough with the fact that after upgrading my ReadyNAS Pro 6 to the latest OS6 and using Win10 on the workstation, I am getting full gigabit transfers.
I'm looking at 10Gb for my next step. I already have it between a server and a FreeNAS box; 400+ MB/s on that link.
Thanks for your posts.
- StephenB · Oct 30, 2018 · Guru - Experienced User
Well, using a static LAG in the switch and round-robin in the NAS will give you the 2 Gb/s performance you are looking for on the NAS->switch path. How to get that performance on the switch->Windows PC path is another matter.
roveer1 wrote:
I'm looking at 10Gb for my next step. I already have it between a server and a FreeNAS box; 400+ MB/s on that link.
Personally I use 10 gigabit ethernet in my main NAS (RN526x), my backup NAS (RN524x) and my Windows server. Definitely a good approach, especially now that multi-gigabit switch prices are coming down.
LACP is still configured on my pro-6 (secondary backup), but that is only because I haven't gotten around to reconfiguring it.
- roveer1 · Oct 31, 2018 · Guide
By the way, applying round-robin on my ReadyNAS Pro 6 dropped my transfer speed from one workstation from 900 Mb/s to the 300 Mb/s range. Definitely headed in the wrong direction. I'll probably keep fiddling with it, as I don't give up too easily despite my previous post. Oh, how nice it would be to just pull a piece of new equipment (switch, NIC, motherboard) off the shelf like Linus (LTT) does. Tinkering would be so much more fun.
Roveer
- Retired_Member · Oct 31, 2018
Well, roveer1, if you just want the quickest possible traffic between the NAS and a Win 2016 server, why not consider the following:
1) On the NAS side, disable bonding
2) Assign two different static IP addresses to the two NICs in the NAS
3) On the Win 2016 server side, add a second NIC if one is not already available
4) Assign two different static IP addresses to the two NICs in the Win 2016 server
5) Set up two static routes: a) NIC1 in the NAS talking to NIC1 in the server (connection 1), and b) NIC2 in the NAS talking to NIC2 in the server (connection 2) (sketched below)
6) Use the server to manage which traffic goes through which connection.
I did not test this, as I do not need that kind of scenario for my purposes, but just thought I would share the idea with you.
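For illustration, the NAS side of steps 2 and 5 might look roughly like this over ssh (untested; interface names and addresses are only placeholders, and the static IPs can just as well be set from the ReadyNAS GUI):

    # give each NAS NIC its own small subnet (example addresses only)
    ip addr add 192.168.10.2/24 dev eth0
    ip addr add 192.168.20.2/24 dev eth1
    # pin each server NIC to its matching NAS NIC with a host route
    ip route add 192.168.10.3/32 dev eth0
    ip route add 192.168.20.3/32 dev eth1

With each NIC pair in its own subnet, the Win 2016 server will already send traffic for 192.168.10.2 out of its first NIC and traffic for 192.168.20.2 out of its second; explicit routes on the Windows side are only needed if both NICs share a single subnet.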
Happy tinkering and kind regards
- roveer1 · Nov 01, 2018 · Guide
Retired_Member wrote:
Well, roveer1, if you just want the quickest possible traffic between the NAS and a Win 2016 server, why not consider the following:
1) On the NAS side, disable bonding
2) Assign two different static IP addresses to the two NICs in the NAS
3) On the Win 2016 server side, add a second NIC if one is not already available
4) Assign two different static IP addresses to the two NICs in the Win 2016 server
5) Set up two static routes: a) NIC1 in the NAS talking to NIC1 in the server (connection 1), and b) NIC2 in the NAS talking to NIC2 in the server (connection 2)
6) Use the server to manage which traffic goes through which connection.
I did not test this, as I do not need that kind of scenario for my purposes, but just thought I would share the idea with you.
Happy tinkering and kind regards
I've actually done a little testing with this. That is how I currently have my NAS sitting: both ports connected to ethernet with their own IP addresses. The other night I mapped two workstations, each to one of the IPs, and then sent large files to the NAS from both workstations. I believe the throughput was pretty high on both links.
Of course that doesn't help my attempts to get single-stream transfers running faster, but it did show that the NAS is capable of taking data on both ports. I'm sure it starts to bottleneck at the disk subsystem, but 100 MB/s per link (200 MB/s aggregated) should be sustainable on the X-RAID array that I'm using.
The only way to really see blazing speeds (the kind I'm looking for) is to have current-gen hardware: super-fast NVMe SSDs, motherboards and CPUs with plenty of PCIe lanes, and 10Gb cards. That is Toolman stuff (grr rr rr) and big bucks. I'm more in Dell R510/710 territory, which is great but nowhere near cutting edge. I will say that my 19-dollar Mellanox cards are giving me 9.48 Gb/s in iperf. My only reason for trying to bond things is that these really great cheap Mellanox cards are x8 cards, and most of my older hardware doesn't have x8 PCIe slots, or I'm left deciding whether to give my x8 slot to the HBA or the NIC...
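For rough context on the slot question (back-of-the-envelope only; whether a given x8 card runs happily in a shorter slot depends on the card and motherboard):

    10GbE line rate:      ~1.25 GB/s per direction
    PCIe 2.0 per lane:    ~0.5 GB/s (after 8b/10b encoding)
    x4 slot:              ~2.0 GB/s  -> headroom for one 10Gb port
    x8 slot:              ~4.0 GB/s  -> needed for two ports at line rate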
- StephenB · Nov 01, 2018 · Guru - Experienced User
roveer1 wrote:
Of course that doesn't help my attempts to get single-stream transfers running faster, but it did show that the NAS is capable of taking data on both ports. I'm sure it starts to bottleneck at the disk subsystem, but 100 MB/s per link (200 MB/s aggregated) should be sustainable on the X-RAID array that I'm using.
You can get some info on the raw RAID performance for large file transfers using ssh (using dd to transfer to/from /dev/null). I agree the Pro can go faster than 100 MB/s.
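A minimal sketch of that kind of test, assuming an ssh session on the NAS (the share path and file name are only examples):

    # sequential write: stream zeros onto the array and flush at the end
    dd if=/dev/zero of=/data/Backup/ddtest.bin bs=1M count=8192 conv=fdatasync
    # sequential read back into /dev/null (drop caches first, e.g.
    # echo 3 > /proc/sys/vm/drop_caches, or the read may come from RAM)
    dd if=/data/Backup/ddtest.bin of=/dev/null bs=1M
    rm /data/Backup/ddtest.bin

Because no network is involved, this isolates the RAID/filesystem side from the gigabit links.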
Since the Pro has already been converted to OS 6, you could also use SSD tiering (though you'd need to dedicate some bays to SSDs, of course). Right now that is metadata-only, but they are adding data tiering in the 6.10 beta. You are still limited by the SATA interface, though.
I don't know of any way to upgrade the Pro's network card.
If you are considering a ReadyNAS upgrade, the RN528 and RN628 might be worth thinking about. They both have 10GBASE-T, and eight bays should be enough for SSD tiering. 12-bay rackmounts are another option, but they would be quite a bit more expensive.