Cyanara
Jun 21, 2015 (Aspirant)
Slow RAID0 to RAID0 on 314s with bonded gigabit
Hi, I've just set up two 314s in a small business. Each has 3 new WD Red 6TB drives in it, set up in RAID0. Both have two new <3m CAT6 cables going into the gigabit switch, and are bonded with adap...
StephenB
Jun 23, 2015 (Guru - Experienced User)
One challenge with NIC bonding is that the server also needs to communicate with clients that only have a single NIC. When you use two NICs for the same traffic flow, packets will arrive out-of-order at the receiver, and you can also end up with packet drops in the switch that serves the receiving device. The simplest way to avoid these issues is to send the entire flow for a connection out a single NIC. That's what LACP does (and most of the other non-standardized Linux bonding modes as well).
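The per-flow link selection described above can be sketched in a few lines. This is a simplified model (the function name is hypothetical) of the Linux bonding driver's `xmit_hash_policy=layer2` behavior, which XORs the low bytes of the source and destination MAC addresses and takes the result modulo the number of slave links, so every frame of a given flow leaves on the same NIC:

```python
def layer2_hash(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick an egress link index from the last octet of the source
    and destination MACs, in the spirit of Linux bonding's
    xmit_hash_policy=layer2: (src ^ dst) mod slave count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_links

# The same src/dst pair always maps to the same link, so frames
# within one flow cannot be reordered across the bond.
link = layer2_hash("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:10", 2)
```

Because the hash is deterministic per address pair, a single client conversation never exceeds one link's bandwidth, which is exactly the trade-off LACP makes to keep packets in order.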
If you configure bonding to push more than 1 Gbit/s through a single connection, you will almost certainly see performance issues with your normal single-NIC clients.
Anyway, I suggest starting with link aggregation off and MTU=1500 in your testing to establish a baseline, then exploring the benefits of jumbo frames and aggregation separately. I think NFS is the best protocol to test with for NAS-to-NAS (why use a Windows file-sharing protocol between two Linux machines?). But if the main goal is to optimize end-user experience, then use SMB, and include simultaneous tests between two Windows client machines and the NAS.
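To put a rough number on what jumbo frames can buy over the MTU=1500 baseline, here is a back-of-the-envelope calculation (the function name and the fixed overhead figures are assumptions: 18 bytes of Ethernet header + FCS, plus 20 bytes each of IPv4 and TCP headers, ignoring the interframe gap and preamble):

```python
def tcp_payload_efficiency(mtu: int) -> float:
    """Fraction of each Ethernet frame on the wire that is TCP payload,
    assuming 18 B Ethernet header+FCS and 40 B of IPv4+TCP headers."""
    eth_overhead = 18      # 14 B header + 4 B FCS
    ip_tcp_overhead = 40   # 20 B IPv4 + 20 B TCP, no options
    payload = mtu - ip_tcp_overhead
    return payload / (mtu + eth_overhead)

std = tcp_payload_efficiency(1500)   # roughly 0.96
jumbo = tcp_payload_efficiency(9000) # roughly 0.99
```

The payload-efficiency gain from jumbo frames is only a few percent; the larger practical win is usually fewer frames per second to process, which is why testing it separately from aggregation, as suggested above, makes the effect measurable.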
As a practical matter - if you had a NAS failure, wouldn't you immediately switch over to the backup NAS? Why is the speed of restoring the original NAS a major concern?
Also, if you have parallel work going on, you might consider using NAS A for half the users, and NAS B for the other half (each NAS backing up to the other). It seems to me that will optimize your user throughput.