jimk1963
May 09, 2020, Virtuoso
SMB over RN528X not working
Setup:
- Core i7 PC, Win10 Home x64, with Intel X550-T2 dual 10GbE NIC and EVO 970 Plus SSD
- XS716E 10GbE switch
- RN528X with dual 10GbE ports
- RN212 with dual 1GbE ports
- Q...
jimk1963
Jun 21, 2020, Virtuoso
You could try enabling the setting in /etc/frontview/samba/addons/addons.conf as described here: https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/Samba-manually-modify-etc-frontview...
Hi guys, after some hiatus I am back examining SMB multichannel. The picture attached is an SSH view of the RN212. In the /etc/frontview/samba directory, I do not see any subdirectory named "addons" per the link provided earlier. The only "addons" directory I see is under /etc/frontview/addons, and that directory has just one file, addons.conf, which carries a "bin" owner so it appears to be a non-editable binary file. When I type "cat addons.conf" it returns nothing. I'm a newbie with SSH syntax; I used "ls -alr" to list out all hidden directories, etc., just to make sure I didn't miss something.
Basically, I'm stuck at this point. Not sure where the appropriate file is located to add smb multichannel commands.
Sandshark
Jun 21, 2020, Sensei
If no apps have created /etc/frontview/samba/addons and its addons.conf file, you have to create them yourself. If you look at /etc/samba/smb.conf, you'll see it is referenced in an include statement.
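A quick way to confirm the reference (exact output from memory, so it may differ slightly):

grep -i include /etc/samba/smb.conf
# should print something like:
#   include = /etc/frontview/samba/addons/addons.conf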
- jimk1963, Jun 21, 2020, Virtuoso
Thanks Sandshark! I see the include statement "include = /etc/frontview/samba/addons/addons.conf" and understand I must now do the following:
1) Create subdirectory "/etc/frontview/samba/addons"
2) Create a text file in the above subdirectory called "addons.conf" with the desired SMB multichannel commands
Found the link below. Not sure if I'm fully clear on this, but I believe I just need to add the 3 lines below to the "addons.conf" file to enable SMB multichannel - sound right?
https://blog.chaospixel.com/linux/2016/09/samba-enable-smb-multichannel-support-on-linux.html
Enable multi-channel in smb.conf
This is really simple, just put:

server multi channel support = yes
aio read size = 1
aio write size = 1

in your smb.conf.
- Sandshark, Jun 21, 2020, Sensei
You need an applicability header for it in the file, too. Most likely [global].
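Putting that together with the blog snippet, the whole addons.conf would look something like this (matching what's shown further down the thread):

[global]
server multi channel support = yes
aio read size = 1
aio write size = 1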
- jimk1963, Jun 21, 2020, Virtuoso
Thanks Sandshark. I've created the directory using "mkdir addons" and created the file using "touch addons.conf". So now I see the file. But how do I edit addons.conf? I've tried:
1) vim addons.conf --> command not found
2) nano addons.conf --> command not found
Searching the internet for solutions, all I can find are recommendations to somehow install one of those two editors (also not clear, and I'm thinking I shouldn't need to do that).
- jimk1963, Jun 21, 2020, Virtuoso
Sorry, bad info from another site. The command "vi addons.conf" works for editing the file.
It now looks like this:
admin@xxx:/etc/frontview/samba/addons$ cat addons.conf
[global]
server multi channel support = yes
aio read size = 1
aio write size = 1
admin@Kirkpatrick2016:/etc/frontview/samba/addons$
I added a space between every word and around the equals signs, because that's how it looks on the website I referenced. Hopefully that's correct. I assume I need to reboot the NAS, so will do that now and see what happens.
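For reference, a full reboot shouldn't strictly be necessary; Samba just needs to re-read its config. Something like this should work from SSH (the service name is my guess for the Debian-based ReadyNAS OS):

testparm -s 2>/dev/null | grep -i "multi channel"   # confirm the include took effect
systemctl restart smbd                              # reload Samba without a reboot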
- jimk1963, Jun 22, 2020, Virtuoso
After all that, I'm seeing no better performance than before, on either the 10GbE RN528x or the 1GbE RN212. Config:
1) Intel X550-T2 NIC is in static LAG mode; Windows shows the team as a 20 Gbps link
2) XS716E switch configured with static LAGs
Ports 1/2 = PC (X550-T2)
Ports 3/4 = RN212 (2 x 1 GbE)
Ports 5/6 = RN314 (2 x 1 GbE)
Ports 7/8 = RN528 (2 x 10 GbE)
3) RN212, 314, 528 all set identically:
Bonded Ports
MTU = 9014
Round Robin
Using NAS Performance Tester 1.7:
RN528: 770 MB/s Write 737 MB/s Read
RN212: 116 MB/s Write 242 MB/s Read
RN314: 122 MB/s Write 232 MB/s Read
Running actual file transfers over Windows Explorer, PC-to-RN528 is ~550 MB/s and NAS-to-PC is ~625 MB/s. Between the RN528 and RN212, transfers are capped at ~125 MB/s in either direction.
Basically, I have yet to successfully write to either of the 2x1GbE NASes at 2 Gbps. It could be that the commands I entered are insufficient for SMB Multichannel to work properly, or maybe I need Windows Server; I'm not sure what else to try. It's been educational but pretty frustrating as well.
- StephenB, Jun 22, 2020, Guru - Experienced User
jimk1963 wrote:
After all that, I'm seeing no better performance than before, on either the 10GbE RN528x or the 1GbE RN212. Config:
Using NAS Performance Tester 1.7:
RN528: 770 MB/s Write 737 MB/s Read
RN212: 116 MB/s Write 242 MB/s Read
RN314: 122 MB/s Write 232 MB/s Read
Running actual file transfers over Windows Explorer, PC-to-RN528 is ~550 MB/s and NAS-to-PC is ~625 MB/s. Between the RN528 and RN212, transfers are capped at ~125 MB/s in either direction.
Basically, I have yet to successfully write to either of the 2x1GbE NASes at 2 Gbps. It could be that the commands I entered are insufficient for SMB Multichannel to work properly, or maybe I need Windows Server; I'm not sure what else to try. It's been educational but pretty frustrating as well.
Maybe take a look here to double-check the SMB configuration on the PC: https://docs.microsoft.com/en-us/windows-server/storage/file-server/troubleshoot/smb-multichannel-troubleshooting
Have you checked the end-to-end MTU with ping? In some Netgear switches the MTU needs to be set for both the port and the LAG.
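The checks in that article boil down to a couple of PowerShell commands; roughly (run the second one while a copy is in flight):

Get-SmbClientNetworkInterface      # does each NIC show RSS Capable = True to SMB?
Get-SmbMultichannelConnection      # which connections is SMB actually using?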
- jimk1963, Jun 22, 2020, Virtuoso
Thanks StephenB, I've taken snapshots of the commands in your reference as I'm not sure how to interpret the info. The bonded X550-T2 Ethernet adapter on the PC is "Ethernet 10", built from the two NICs Ethernet 7/8. For Ethernet 10 (Interface 20), both SMBClient and SMBServer say TRUE under "RSS Capable" and FALSE under "RDMA Capable". Under Net Adapter Binding, most of the Eth 10 Component IDs say TRUE but two are FALSE. I don't see a specific Component ID listed for SMB, but I'm not familiar with these bindings so may be missing it.
- jimk1963, Jun 22, 2020, Virtuoso
Also StephenB, regarding your MTU question: the XS716E doesn't have a configurable MTU from what I can see in the management console, for either the port or the LAG. It's auto-populated at MTU=9198 and not editable.
Pinging any of the 3 NAS boxes from the PC, the max payload without fragmenting is 8972 bytes, so +28 gives MTU=9000 if I understand correctly. The PC NICs are set for jumbo frames (9014 bytes, not tunable) and the NASes are each set to MTU=9014. I thought the byte overhead was 28 bytes; that turns out to be the 20-byte IP header plus the 8-byte ICMP echo header (ping rides on ICMP, not TCP or UDP), and apparently the Intel driver's 9014 figure includes the 14-byte Ethernet header, so it corresponds to an IP-level MTU of 9000. I can drop the NAS MTUs down to 9000 without affecting max payload, but any lower and I start to see fragmenting. Curiously, setting the NAS to MTU=8992 still results in a max unfragmented payload (MUP) of 8972 bytes, but MTU=8991 or any lower number results in a byte-for-byte reduction in max payload. For example, NAS MTU=8980 results in MUP=8960. Anyway, it seems I'm getting max jumbo frames from the pings (8972-byte MUP).
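For reference, the probing looks like this from the Windows command line (the IP address here is a made-up example):

ping -f -l 8972 192.168.1.50   (succeeds: 8972 payload + 28 header bytes = 9000)
ping -f -l 8973 192.168.1.50   (fails with "Packet needs to be fragmented but DF set.")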
- schumaku, Jun 22, 2020, Guru - Experienced User
Please allow two comments:
jimk1963 wrote:
After all that, I'm seeing no better performance than before, on either the 10GbE RN528x ...
Using NAS Performance Tester 1.7:
RN528: 770 MB/s Write 737 MB/s Read
Running actual file transfers over Windows Explorer, PC-to-RN528 is ~550 MB/s and NAS-to-PC is ~625 MB/s.
- The RN528 performance numbers in the 700 MB/s range are well below what a 10 GbE interface can achieve. Unless you are already near the raw ~1250 MB/s minus some protocol overhead, let's say at least around 1100 MB/s, you can't get more bandwidth by using a LAG of two 10G interfaces or by using individual interfaces in an SMB Multichannel config. The achievable throughput depends on the storage in the RN528[X]. On an all-SATA-flash RN628X with an X-RAID (RAID6) config, performance does run in the higher range where bonding (or SMB Multichannel) might start to make sense. On mechanical HDDs I doubt you will be able to achieve much higher numbers.
- Either use a LAG _or_ use SMB Multichannel. SMB Multichannel always runs over multiple individual network interfaces, each with a dedicated link and IP address.
- jimk1963, Jun 22, 2020, Virtuoso
Thanks schumaku for your inputs.
Please reference: https://www.downloads.netgear.com/files/GDC/READYNAS-100/ReadyNAS_OS_Performance_Guide.pdf
In this report they use pedestrian Seagate HDDs. Their setup is different from mine: I'm using just one PC for testing, while in the report they use 4 PCs to push/pull data through the two 10GbE ports via switches sitting in between. As you can see, with the RN528X they achieve 981 MB/s (just under 8 Gbps) writes and 2027 MB/s (16.2 Gbps) reads, using HDDs. My box came with Toshiba MG03ACA400 HDDs, which are similarly spec'd.
With LAG, I've observed that multiple simultaneous PC transfers can in fact push the write speeds higher. However, what I'm trying to accomplish here is a faster link to the 10GbE NAS from just a single PC. That's the entire goal of this thread and why I explored both (A) LAG and (B) SMB Multichannel.
In this video for example, the engineer was able to demonstrate how SMB Multichannel did in fact improve speed over a single PC connected to a NAS: https://youtu.be/UjdPrCWiYwY ("SMB Multichannel - What is it? & QNAP NAS Setup")
In this case, the engineer is using Windows Server 2012 if memory serves. Windows Server has built-in, automatic SMB Multichannel capability. Windows 10 Home 64-bit can allegedly also support SMB MultiChannel, but again that's what I'm trying to prove out here. How do you get it working, and what is the benefit?
Regarding your advice to configure either (1) LAG or (2) SMB Multichannel but not both, I'm unclear on this point but will try out the newly implemented SMB Multichannel commands on the NAS with the LAGs torn down, to compare. It's certainly worth a try.
- schumaku, Jun 22, 2020, Guru - Experienced User
I hope you understand why experimenting with LAG or Multichannel does not make sense if the setup cannot already come close to single-10G performance.
Are your ~7xx MB/s 10G numbers coming from a real single-threaded copy from/to your PC hardware, including read/write from the local storage?
Or are these pure benchmark numbers using /dev/null as source or destination, or whatever benchmarking tool?
- jimk1963, Jun 22, 2020, Virtuoso
Thanks schumaku. If you're asking whether I'm transferring real files back and forth between the NAS and PC, the answer is yes: a number of multi-GB zip files. As the thread indicates, I'm also testing with NAS Performance Tester 1.7, ATTO, Blackmagic, and one or two others.
Regarding speed, and my understanding of it: I just explained the justification for expecting higher speeds, namely the Netgear test report that documents the numbers I posted, using my same NAS model RN528X. I am not achieving those numbers and am looking to get closer to them. Additionally, if you follow the thread, you'll see I also have two NASes with dual 1GbE ports. These are well below the bandwidth of my system (also well documented in the thread) and are also failing to achieve the full 2 Gbps bidirectionally. These are likewise set up with LAG and now with SMB Multichannel.
Here's another example of an engineer making LAG work with a single PC, in this case 4 x 1 GbE, achieving 400 MB/s bidirectionally with pedestrian HDDs. I want to enable that same kind of performance, but with 2 x 1 GbE. This was actually the reason I started the thread in the first place, as my 10GbE RN528X is already great. It's these old 1GbE NAS boxes I want to pep up by taking advantage of their dual 1GbE NICs. It's working in the read direction, but not at all in the write direction.
- schumaku, Jun 22, 2020, Guru - Experienced User
> RN212: 116 MB/s Write 242 MB/s Read
> RN314: 122 MB/s Write 232 MB/s Read
Write: the data originates from the 10G (or LAG'ed 2x10G) computer, which applies either no load-balancing distribution mode (policy) or an address-hash policy. All data comes from one source MAC address, and the switch again forwards everything from that source MAC toward one destination MAC.
A switch with a static LAG does not apply any elaborate Tx policy: it ensures frames from the same source MAC to the same destination MAC leave on one member port. That means only one 1G channel is used from the switch to the NAS.
Read: the NAS does apply a Tx policy, e.g. balance-rr (round robin) across each 1G interface, so it loads both interfaces almost equally. The switch then funnels all of it to the one and only MAC of one 10G interface.
That makes the difference in your setup... or at least that's why I think it behaves the way it does.
- StephenB, Jun 23, 2020, Guru - Experienced User
schumaku wrote:
A switch with a static LAG does not apply any Tx policy - it will ensure the same source MAC will be sent to the one destination MAC. This makes only one 1G channel is used from the switch to the NAS.
Read: the NAS does apply a Tx policy, e.g. balance-rr (round robin) across each 1G interface, so it loads both interfaces almost equally. The switch then funnels all of it to the one and only MAC of one 10G interface.
I agree the switch policy is part of the problem (although technically the switch is using a policy, since it does have to decide which NIC to use to send each packet). Likely it is using an XOR hash of the source and destination MAC. If that is the case, even if the PC is using both NICs for writes, there is a 50-50 chance the switch will undo that and send all the traffic for the flow to one NIC.
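To make that concrete, here is a toy version of that hash in shell form (the MAC octets are made-up values):

# XOR link selection in a 2-port static LAG. Both MACs are fixed for a
# given PC-to-NAS flow, so every frame hashes to the same member port.
src=0x9c; dst=0xa4             # last octet of each MAC (hypothetical)
echo $(( (src ^ dst) & 1 ))    # always prints the same port index, 0 or 1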
FWIW, a direct connection of the PC to the NAS would allow jimk1963 to take the switch policy out of the equation.
This article (though older) might be more informative than the one I sent earlier: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v=ws.11)#:~:text=SMB%20Multichannel%2C%20a%20feature%20included,use%20multiple%20network%20connections%20simultaneously.
jimk1963 does seem to have RSS enabled on the 20 Gbps team.
I also suggest switching the MTU back to 1500 on both the NAS and the PC, and see what difference that makes. There shouldn't be any packet fragmentation, and if I am reading the earlier posts correctly, there is some.
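If you want a quick A/B test without clicking through both UIs, something like this should work (interface names are guesses; the ReadyNAS admin UI is still the persistent way to set it):

On the NAS, over SSH:
ip link set dev bond0 mtu 1500

On the PC:
netsh interface ipv4 set subinterface "Ethernet 10" mtu=1500 store=persistent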
Overall, I also am not saturating a single 10GBASE-T link with my own RN526x. I haven't attempted to duplicate Netgear's performance numbers (as my current speeds are enough for my purpose). But I wouldn't expect much gain from multichannel if the overall setup can't saturate a single connection.
- jimk1963, Jun 24, 2020, Virtuoso
Thanks schumaku and StephenB, great info. I'm an RF guy attempting to better understand network theory; your tutorials are much appreciated. Next up:
1) try direct PC-to-NAS connection to eliminate any switch policy/configuration issues
2) try MTU=1500... not sure why this will help, I have jumbo frames enabled all the way through and don't see fragmentation with packets up to 8972 bytes. But I'll try it anyway
3) Per schumaku, I removed the LAGs and kept SMB Multichannel... so now there are two PC NIC outputs with individual DHCP-assigned IP addresses, and the same for all 3 NAS boxes. The SMB Multichannel commands I noted earlier remain in the addons.conf on all 3 NASes. With this configuration, NAS Tester read performance is now cut in half on the RN212/314 (120 MB/s vs 230-240 MB/s), and the RN528X has an even worse issue, with reads below 100 MB/s. Rebooted everything, no improvement. Actual PC-to-RN528X file transfers do show this sub-100 MB/s speed, so something is definitely wrong. The other two NASes transfer files at very close to the 1GbE limit, so it seems something is wrong with the RN528X itself. Will debug.
- schumaku, Jun 24, 2020, Guru - Experienced User
Well, SMB Multichannel is not intended to provide a single-session high-speed trunk; it is there to serve multiple clients.
- StephenB, Jun 24, 2020, Guru - Experienced User
jimk1963 wrote:
3) Per schumaku , I removed LAG and kept SMB Multichannel... so now there are two PC NIC outputs with individual DHCP-assigned IP addresses, same for all 3 NAS boxes.
You could have kept the LAGs on the NAS and only dropped the one on the PC.
In Windows, getting more performance from multichannel within a single session requires RSS on each link. If you have that, then it is supposed to use one channel per CPU core, even if you are using teaming. Also, looking at the load-balancing modes available in Windows, you should have the static LAG configured to use "Dynamic", though I think the switch will undo that.
On the NAS side, RSS might not be set up at all (or might not be set up correctly). That's not something I've ever played with though.
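On the Windows side, checking and enabling RSS per adapter is quick (adapter names are whatever Get-NetAdapter reports):

Get-NetAdapterRss | Format-Table Name, Enabled    # RSS state for every adapter
Enable-NetAdapterRss -Name "Ethernet 7"           # enable it on one member NIC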
On the MTU size: it might not make a difference. But when you are transferring large files, the system will be using the full MTU size, and packet fragmentation will slow down the transfers. The receiving system has to reassemble the fragments, and that adds latency. If the MTU size is set correctly, you won't see any packet fragmentation on the NAS or the PC; you may still see it on other devices.
Also, there are many deployments where jumbo frames slow down performance rather than speed it up. The performance gain, when there is one, comes largely from the PC and the NAS not having to process as many packets per second. If the edge devices can keep up with the packet rate, then jumbo frames don't speed anything up. In your case, you want to keep both links loaded; more packets per second might actually help with that.
Generally my advice here is that if jumbo frames aren't significantly improving your performance, then you should turn them off. FWIW, they don't improve my own performance enough to be worth the bother.
- schumaku, Jun 24, 2020, Guru - Experienced User
RSS sits between the adapter driver, the TCP offload engine, and the processing on the host, and comes into play where more CPU performance is needed than a single core can provide.
It's true that the throughput advantage of jumbo frames is much less obvious than it was back in the days of 200 MHz Intel processors or 800 MHz single-core ARM chips serving a NAS. Powerful servers and NASes with the "right" interfaces offload TCP processing to the network adapter. This was originally done to leave more CPU cycles available for the host's real work, similar to what RSS does in letting more than one core work on an interface.
There seem to be a lot of strange ideas about how jumbo frames (JF) are handled in today's networking equipment. Packet fragmentation, e.g. for TCP sessions, rarely happens on the data path: PMTUD negotiates the complete path. All IPv4 end points should, and all IPv6 end points must, support this handshake. Provided the interfaces work correctly and routers handle it properly (some consumer router garbage doesn't), the stack handles the MTU for each connection correctly. Old legacy routers might do some hard TCP fragmenting, but in reality this should not happen anymore today, and it does not happen on the modern L2 networks where most users operate a NAS. The small PMTUD three-way handshake during session establishment does not kill performance. The days when we had to tell people that the lowest MTU supported along the path dictates the network's maximum MTU are history; everything works transparently nowadays.
- StephenB, Jun 25, 2020, Guru - Experienced User
schumaku wrote:
There seem to be a lot of strange ideas about how jumbo frames (JF) are handled in today's networking equipment.
jimk1963 is seeing some fragmentation, so something isn't negotiating quite right on his equipment.
schumaku wrote:
This was originally done to leave more CPU cycles available for the host's real work, similar to what RSS does in letting more than one core work on an interface.
Right. Also, one cost of using jumbo frames is that you either need more memory for packet buffering, or you have to buffer fewer packets.
With my own equipment, I've sometimes seen that JFs have actually reduced my throughput. I don't think I've seen a substantial gain in anything (though it's not something I've measured recently).
- jimk1963, Jun 27, 2020, Virtuoso
Thanks StephenB and schumaku for your inputs, I've run some tests that you may find of interest.
General comments:
1) I wasn't reporting earlier that the system suffered from fragmentation. Rather, I ran tests at different MTUs to force fragmentation so I could confirm the boundaries. With MTU=9014 on both the Intel X550 NIC and the NAS, I don't see any fragmentation.
2) I'm not convinced at all that Jumbo Frames are immaterial to performance in modern systems. My test data shows quite the opposite, maybe you can enlighten me as to why
3) By far the best performance I can achieve is one Ethernet connection directly to the NAS, with the NAS in bonded mode (I ran all tests with the NAS in bonded mode, so cannot comment on a single NAS Ethernet connection). In this mode I achieved writes in the mid-800 MB/s range and reads in the 1100-1200 MB/s range using NAS Tester 1.7. Using actual file transfers I saw writes in the 700 MB/s range and reads in the 900-1000 MB/s range. Far better than anything I achieved with a switch in series or with static LAG.
4) With the switch included, one PC Ethernet port, and NAS bonding with jumbo frames throughout (last run in the table), I'm at 400 MB/s reads and 900 MB/s writes, back to where I started.
What I don't understand is, comparing Run 1 to Run 12, why adding the switch in series causes so much degradation in reads. The switch doesn't offer much configuration, mainly static LAG (which is disabled for those runs).
I'm sure some of these runs would be considered "invalid" by network experts; for example, I'm not sure setting up the PC with static LAG on a direct-to-NAS connection makes any sense. From what I've read, a switch or router should be in between. Anyway, the data is there for analysis.
Also, the few ATTO/Blackmagic runs don't agree well with the NAS Tester 1.7 numbers. I'm guessing they do things quite differently, and that ATTO and BM are probably more realistic.
- StephenB, Jun 27, 2020, Guru - Experienced User
jimk1963 wrote:
2) I'm not convinced at all that Jumbo Frames are immaterial to performance in modern systems. My test data shows quite the opposite, maybe you can enlighten me as to why
I think we already explained that. The effect of JF on the Ethernet itself is immaterial. The improvement in performance (when there is one) comes only from the CPUs in the systems (NAS and PC in this case) processing fewer packets per second. In some cases that can help (and it seems to be doing so in your system). In other cases the offload processing on the NIC offsets the CPU gain (achieving a similar result in a different way).
jimk1963 wrote:
What I don't understand is, comparing Run 1 to Run 12, why adding the switch in series causes so much degradation in reads. The switch doesn't offer much configuration, mainly static LAG (which is disabled for those runs).
Maybe look at the packet statistics on the switch before and after the test (looking for errors).
Also, is flow control enabled on the switch ports?
You could repeat test 12, but remove one of the ethernet connections from the NAS to the switch.
- jimk1963, Jun 27, 2020, Virtuoso
Thanks StephenB:
1) Re: JF, I was replying to schumaku's comment that modern systems don't have this bandwidth limitation. It appears the limitation does show up again in the world of 10 Gbps Ethernet file transfers... The Core i7-6700 PC (2016 vintage) is loaded with EVO 970 Plus cards (C and D drives) and 64GB DDR4-2133 RAM (slow by today's standards but still tons of bandwidth). File transfers over PCIe between the EVO 970 SSDs easily hit 3 GB/s. The Intel X550-T2 has offloading capability as well. Not much more I can do there, but I did take your point to heart and ran some JF tests (more below). I also have a Threadripper 3970 machine that is definitely not bottlenecked; I will try that with/without JFs in a future test.
2) XS716E switch reports CRC Packet Error Statistics as shown below. If this is what you're referring to, I'm not seeing any errors.
3) Repeating Test 12 with one NAS Ethernet cable disconnected: I tried this with (a) NAS bonding left enabled, and (b) NAS bonding torn down. Same result. A single NAS cable is the best config, producing over 800 MB/s write and well over 1100 MB/s read as shown below. Config: PC = 1 ETH to XS716E, JF=9014, other ETH disabled; switch = no LAG; RN528X = 1 ETH to XS716E, JF=9014.
4) Experimented with different MTU sizes. JF on only one side (either PC or NAS) with 1500 on the other gives poor results, as expected. I didn't include those numbers, but they are basically the same as Column 3 below (PC/NAS both at MTU=1500). Setting the NAS to a fixed MTU=9014 and stepping up the PC MTU (first 1500 in Column 3, then 4088 in Column 2, then 9014 in Column 1), the reads and writes continue to climb with the largest JF packet size. This trend correlated perfectly with PC-NAS file transfers using File Explorer. So I guess this confirms your hypothesis that the PC may be struggling with the smaller packet size. I monitored CPU loading with Task Manager and observed that with PC MTU=1500 the CPU peaked at 100% during NAS reads (even though the reported average never climbed above 37%). With MTU=9014 the peaks never got above 80%. Pic of each below.
Through all this, I've left the added SMB Multichannel commands in the NAS. I'm guessing they are harmless, and useless, at this point, but for completeness, all tests in these tables do have the commands in the addons.conf file.
First pic: PC MTU=1500, NAS MTU=9014. Second pic: PC & NAS MTU = 9014
- jimk1963, Jun 27, 2020, Virtuoso
Final configuration:
PC:
- MTU=9014
- Static Link Aggregation Enabled
XS716E Switch:
- Ports 1/2 (RN528X): no Static LAG
- Ports 3/4 (RN212): Static LAG Enabled
- Ports 5/6 (RN314): Static LAG Enabled
- Ports 7/8 (PC): Static LAG Enabled
RN212 and RN314:
- ETH Bonding Enabled
- MTU=9014
- SMB Multichannel "enabled" (if I did it right)
RN528X:
- No ETH Bonding
- One ETH port active, the other disabled
- SMB Multichannel "enabled"
With this configuration, I can achieve read/write file transfer speeds around:
RN528X: 810/560 MB/s (1170/850 on NAS Tester 1.7)
RN212: 135/90 MB/s, 205/135 MB/s with simultaneous multi-file transfers (235/110 on NAS Tester 1.7)
RN314: 166/101 MB/s, same with simultaneous multi-file transfers (236/108 on NAS Tester 1.7)
This configuration gives me the fastest 10GbE performance with the RN528X, while also enabling faster reads on both the RN212 and RN314. Interestingly, the RN314 with the 4 new WD Red 4TB drives can read up to around 170 MB/s, not bad for slow-spinning HDDs. They are dead quiet, so I'm happy. Reliability... we'll see. Also interestingly, the RN212 with its original Toshiba 7200 RPM HDDs (circa 2014) does great when copying multiple groups of files simultaneously (over 200 MB/s) but can't quite keep up with the RN314 on a single block of files. Of course, the RN314 does have 4 disks compared to only 2 in the RN212...
Thanks StephenB, schumaku and Sandshark for all your insight and course corrections. The system is tuned about as well as I'm going to get it.