Forum Discussion
Cyanara
Jun 22, 2015 · Aspirant
Slow RAID0 to RAID0 on 314s with bonded gigabit
Hi,
I've just set up two 314s in a small business. Each has 3 new WD Red 6TB drives in it, set up in RAID0. Both have two new <3m CAT6 cables going into the gigabit switch, and are bonded with adaptive load balancing.
This is a (small) video editing business, so performance is very important, with large amounts of data needing to be moved around in a timely manner. Also, a rapid restore between the NAS boxes in the event of a disk failure would be highly desirable.
I'm currently running a backup between the two boxes. Given the RAID0, network bonding, and that most of the data is large video files, I was expecting/hoping for 125-250MBps. Instead, it has sat consistently at 75MBps for the last couple of hours.
I disabled antivirus on both, but that made no difference. I have no plugins.
Questions:
Are there any common options that would likely be causing these relatively low read/write speeds (bitrot protection, snapshots, etc)?
Is there any way to see CPU utilisation, to make sure that's not bottlenecking it somehow?
What's the best way to benchmark the performance these days? (Most of the stickied links are ancient.)
Thanks,
Joe
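On the benchmarking question, one simple baseline is to time a raw write and read on the NAS itself over SSH, which separates disk speed from network speed. This is a generic Linux sketch, not a ReadyNAS-specific tool; the paths are placeholders:

```shell
# Disk write baseline on the NAS (run via SSH as root).
# conv=fdatasync forces data to disk before dd reports a rate,
# so the number is not inflated by the RAM cache.
dd if=/dev/zero of=/data/ddtest.bin bs=1M count=1024 conv=fdatasync
rm /data/ddtest.bin

# Disk read baseline: drop the page cache first so the file is
# actually read from disk rather than from RAM (requires root).
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/data/some-large-video.mov of=/dev/null bs=1M
```

If the dd numbers comfortably exceed the network transfer rate, the disks and RAID0 layout are not the bottleneck and attention can shift to the protocol and the bonding setup.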
12 Replies
- mdgm-ntgr (NETGEAR Employee, Retired): 1. If performance is key and you want to limit any performance degradation over time as much as possible, create new shares with bit-rot protection disabled and snapshots disabled.
2. If you SSH in and run the 'top' command you get some good output on CPU utilisation.
I probably would have gone for the 516 for this use case. Furthermore, with a single client you would likely only saturate one link.
- Cyanara (Aspirant): Thanks for that. I disabled both of those, but I don't know if it will take effect while it's in the middle of a backup. I'll experiment with multiple client downloads/uploads later when it's finished.
At any rate, the sending NAS currently has over 70% CPU idle, while the receiving NAS has over 50% CPU idle, so I can probably rule that out as a problem.
Both generally have less than 100MB of RAM free, but I imagine that's not likely to be an issue unless most of it was already used up before I started the transfer. Once again, I'll check that later, after this initial backup finishes.
- mdgm-ntgr (NETGEAR Employee, Retired): Check if the memory is swapping. If it's not swapping then you have plenty of RAM. We use memory for caching to improve performance, and free up the memory used by caching as needed.
- Cyanara (Aspirant): Oh yeah, I forgot to mention that the swap space wasn't being touched, so all good.
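For anyone following along, the CPU and swap checks discussed above can be run non-interactively over SSH. These are standard Linux commands rather than anything ReadyNAS-specific:

```shell
# One-shot snapshot of load and per-process CPU use
# (-b = batch mode for non-interactive use, -n 1 = single iteration).
top -bn1 | head -n 15

# Memory and swap in megabytes; a non-zero "used" value on the
# Swap line suggests memory pressure.
free -m

# The si/so columns report swap-in/swap-out per second; sustained
# non-zero values during a transfer indicate thrashing.
vmstat 1 5
```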
- StephenB (Guru - Experienced User): Antivirus is disabled? What protocol are you using for the backup?
- Cyanara (Aspirant): I turned off AV when I saw how slow the backup was going. I assume any improvement would have become apparent immediately, but there was none. That same backup is still going, though. 3TB takes a while at this speed :p
The protocol is Windows/NAS, which may have higher overheads. I was keeping it simple while I worked out how to set up backups. Do you recommend any protocol in particular?
On a side note: I know a ReadyNAS can't be directly attached to a computer, but can you connect two of them together by USB? I'm just thinking about the fastest means of restoring up to 16TB of data in the event of a disk failure.
- StephenB (Guru - Experienced User): You can't connect ReadyNAS units together with USB.
I think the fastest protocol for a bulk copy between two Linux boxes is NFS. For incremental backups, rsync is fastest.
- deploylinux (Aspirant): Make sure the switch you have connected the NASes to supports EtherChannel/LACP, and configure the switch and NAS to use the load-balancing algorithm that is best for your destination. Usually, traffic from the same source to the same destination will only go out one link, which means the max bandwidth will be about 80-85MBps, and with a 314, 75MBps is rather close to that. If you have a managed switch, look at the port transfer stats to see whether the traffic is being balanced evenly between the two links. I doubt it; I suspect that is your limiting factor here.
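The link-balancing question can also be checked from the NAS side over SSH, since ReadyNAS OS 6 is Linux-based and exposes the standard bonding interfaces. A sketch, assuming the typical default interface names (`bond0`, `eth0`, `eth1`), which may differ on your unit:

```shell
# Bond mode, link status, and slave details for the aggregate.
cat /proc/net/bonding/bond0

# Per-interface byte counters; compare before and after a transfer
# to see whether both physical links actually carried traffic.
cat /proc/net/dev

# Or watch the counters live during a copy (2-second refresh).
watch -n 2 'grep -E "eth0|eth1" /proc/net/dev'
```

If one interface's counters barely move during the backup, the whole flow is riding a single link, which matches the ~75MBps ceiling being seen.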
- Cyanara (Aspirant): I'm using Adaptive Load Balancing on both NAS boxes because it states that it doesn't need any special switch support. I'm not clear on whether LACP offers any significant advantages that I should be aware of. The NETGEAR support page doesn't really go into much detail.
http://kb.netgear.com/app/answers/detail/a_id/23076/~/what-are-bonded-adapters-and-how-do-they-work-with-my-readynas-os-6-storage
Also, 75MBps is well short of 125MBps. I've had 100MBps to a desktop before. The 314 comes with two Gigabit ports, and if CPU and RAM aren't limiting factors, then I'm not sure why I'm not at least maxing out a single connection.
At any rate, the backup finished last night, so I'm hoping to carry out some tests today to see if I can find any improvements.
- StephenB (Guru - Experienced User): One challenge with NIC bonding is that the server also needs to communicate with clients that only have a single NIC. When you use two NICs for the same traffic flow, packets will arrive out of order at the receiver, and you can also end up with packet drops in the switch that serves the receiving device. The simplest way to avoid these issues is to send the entire flow for a connection out a single NIC. That's what LACP does (and most of the other non-standardized Linux bonding modes as well).
If you configure bonding to get a >1 Gbit flow for a single connection, you will almost certainly see performance issues with your normal single-NIC clients.
Anyway, I suggest starting with link aggregation off and MTU=1500 in your testing to establish a baseline. Then explore the benefits of jumbo frames and aggregation separately. I think NFS is the best protocol to test with for NAS-to-NAS (why use a Windows file sharing protocol between two Linux machines???). But if the main goal is to optimize end-user experience, then use SMB, and include simultaneous tests between two Windows client machines and the NAS.
As a practical matter - if you had a NAS failure, wouldn't you immediately switch over to the backup NAS? Why is the speed of restoring the original NAS a major concern?
Also, if you have parallel work going on, you might consider using NAS A for half the users, and NAS B for the other half (each NAS backing up to the other). It seems to me that will optimize your user throughput.
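To illustrate the rsync suggestion for incremental NAS-to-NAS backups: a minimal sketch, assuming SSH is enabled on both units, with placeholder paths, username, and hostname:

```shell
# Incremental copy from NAS A to NAS B over SSH.
# -a preserves permissions, timestamps, and directory structure;
# --whole-file skips rsync's delta algorithm, which is usually
# faster on a LAN for large video files; --progress reports
# per-file throughput so you can watch the transfer rate.
rsync -a --whole-file --progress /data/videos/ admin@nas-02:/backup/videos/
```

Re-running the same command after the first pass copies only new or changed files, which is what makes rsync attractive for the incremental case; for the initial bulk copy, an NFS mount between the two units may be faster, as suggested above.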