Forum Discussion
jimk1963
May 09, 2020 · Virtuoso
SMB over RN528X not working
Setup: - Core i7 PC, Win10 Home x64, with Intel X550-T2 dual 10GbE NIC card and EVO 970 Plus SSD - XS716E 10GbE switch - RN528X with dual 10GbE ports - RN212 with dual 1GbE ports - Q...
jimk1963
Jun 26, 2020 · Virtuoso
Thanks StephenB and schumaku for your inputs. I've run some tests that you may find of interest.
General comments:
1) I wasn't reporting earlier that the system suffered from fragmentation. Rather, I just ran tests at different MTUs to force fragmentation so I could confirm those boundaries (a probe sketch follows this list). With MTU=9014 on both the Intel X550 NIC and the NAS, I don't see any fragmentation.
2) I'm not convinced at all that Jumbo Frames are immaterial to performance in modern systems. My test data shows quite the opposite; maybe you can enlighten me as to why.
3) By far the best performance I can achieve is one ETH connection directly to the NAS, with the NAS in Bonded Mode (I ran all tests with the NAS in bonded mode, so I cannot comment on a single NAS ETH connection). In this mode I achieved writes in the mid-800 MB/s range and reads in the 1100-1200 MB/s range using NAS Tester 1.7. Using actual file transfers, I saw writes in the 700 MB/s range and reads in the 900-1000 MB/s range. Far better than anything I achieved with a switch in series or with Static LAG.
4) With the switch included, one PC ETH, and NAS bonding with all Jumbo Frames (last run in the table), I'm at 400 MB/s reads and 900 MB/s writes, back to where I started.
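For anyone wanting to reproduce the boundary check from point 1, here is a minimal sketch, assuming a Windows host and a placeholder NAS address, that pings with the Don't Fragment flag at stepped payload sizes. A 9014-byte jumbo setting includes the 14-byte Ethernet header, so the IP MTU is 9000, and the largest ICMP payload that fits is 9000 - 28 = 8972 bytes (20 bytes IP header + 8 bytes ICMP header).

    # Minimal MTU-boundary probe (Windows ping syntax; NAS address is a placeholder).
    import subprocess

    NAS = "192.168.1.50"  # hypothetical address; substitute your NAS IP
    for payload in (1472, 4060, 8972, 8973):  # boundaries for MTU 1500, 4088, 9014
        r = subprocess.run(
            ["ping", "-f", "-l", str(payload), "-n", "1", NAS],  # -f sets Don't Fragment
            capture_output=True, text=True,
        )
        blocked = "needs to be fragmented" in r.stdout
        print(f"{payload}-byte payload: {'blocked by DF' if blocked else 'passed unfragmented'}")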
What I don't understand, comparing Run 1 to Run 12, is why adding the switch in series causes so much degradation in reads. The switch doesn't offer much configuration, mainly static LAG (which is disabled for those runs).
I'm sure network experts would consider some of these runs "invalid"; for example, I'm not sure setting up the PC with static LAG on a direct-to-NAS connection makes any sense. From what I've read, a switch or router should be in between, but anyway, the data is there for analysis.
Also, the few ATTO/Black Magic runs don't agree well with the NAS Tester 1.7 numbers. I'm guessing they do things quite differently, and that ATTO and Black Magic are probably more realistic.
StephenB
Jun 27, 2020 · Guru - Experienced User
jimk1963 wrote:
2) I'm not convinced at all that Jumbo Frames are immaterial to performance in modern systems. My test data shows quite the opposite; maybe you can enlighten me as to why.
I think we already explained that. The effect of JF on the Ethernet itself is immaterial. The improvement in performance (when there is one) comes only from the CPUs in the systems (the NAS and PC in this case) processing fewer packets per second. In some cases that can help (and it seems to be doing so in your system). In other cases, the offload processing of the NIC cards offsets the CPU gain (achieving a similar result in a different way).
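To put rough numbers on that point (a back-of-the-envelope, assuming full 10 Gb/s line rate and ignoring preamble and inter-frame gap), jumbo frames cut the per-second frame count, and hence the per-packet CPU work, roughly six-fold:

    # Frame rates needed to saturate a 10 Gb/s link at two IP MTUs.
    link_bps = 10_000_000_000             # 10 Gb/s
    for ip_mtu in (1500, 9000):           # standard vs. jumbo, in bytes
        frame_bytes = ip_mtu + 18         # + Ethernet header (14) + FCS (4)
        pps = link_bps / 8 / frame_bytes
        print(f"IP MTU {ip_mtu}: ~{pps:,.0f} frames/s")

That works out to roughly 823,000 frames/s at MTU 1500 versus about 139,000 at MTU 9000.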
jimk1963 wrote:
What I don't understand, comparing Run 1 to Run 12, is why adding the switch in series causes so much degradation in reads. The switch doesn't offer much configuration, mainly static LAG (which is disabled for those runs).
Maybe look at the packet statistics on the switch before and after the test (looking for errors).
Also, is flow control enabled on the switch ports?
You could repeat test 12, but remove one of the ethernet connections from the NAS to the switch.
- jimk1963 · Jun 27, 2020 · Virtuoso
Thanks StephenB:
1) Re: JF, I was replying to schumaku's comment that modern systems don't have this bandwidth limitation. It appears the limitation does show up again in the world of 10Gbps ETH file transfers... The Core i7-6700 PC (2016 vintage) is loaded with EVO 970 Plus cards (C and D drives) and 64GB of DDR4-2166 RAM (slow by today's standards, but still tons of bandwidth). File transfers over PCIe between the EVO 970 SSDs easily hit 3GB/s. The Intel X550-T2 has offloading capability as well. There's not much more I can do there, but I did take your point to heart and ran some JF tests (more below). I also have a Threadripper 3970 machine that is definitely not bottlenecked; I will try that with/without JFs in a future test.
2) XS716E switch reports CRC Packet Error Statistics as shown below. If this is what you're referring to, I'm not seeing any errors.
3) Repeating Test 12 with one NAS ETH disconnected: I tried this with (a) NAS ETH bonding left enabled and (b) NAS ETH bonding torn down. Same result. A single NAS cable is the best config, producing over 800 MB/s write and well over 1100 MB/s read, as shown below. Config: PC = 1 ETH to XS716E, JF=9014, 1 ETH disabled; Switch = no LAG; RN528X = 1 ETH to XS716E, JF=9014.
4) Experimented with different MTU sizes. JF on only one side (either PC or NAS) with 1500 on the other gives poor results, as expected; I didn't include those numbers, but they are basically the same as Column 3 below (PC and NAS both at MTU=1500). With the NAS fixed at MTU=9014 and the PC MTU stepped up (MTU=1500 in Column 3, MTU=4088 in Column 2, MTU=9014 in Column 1), reads and writes climb steadily with JF packet size. This trend correlated perfectly with PC-NAS file transfers using File Explorer, so I guess it confirms your hypothesis that the PC may be struggling with smaller packet sizes. I monitored CPU loading in Task Manager and observed that with PC MTU=1500 the CPU peaked at 100% during NAS reads (even though the reported average never climbed above 37%); with MTU=9014 the peaks never exceed 80% (a sketch of one way to capture those peaks follows this list). Pic of each below.
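Here is the peak-capture sketch referenced in point 4, a minimal example using the third-party psutil package (assumed installed via pip install psutil). It records per-core utilization peaks during a transfer, which an averaged graph can mask:

    # Sample per-core CPU peaks until interrupted with Ctrl+C.
    import psutil

    peaks = [0.0] * psutil.cpu_count()
    try:
        while True:
            # Blocks 0.5 s and returns per-core utilization over that window.
            for i, pct in enumerate(psutil.cpu_percent(interval=0.5, percpu=True)):
                peaks[i] = max(peaks[i], pct)
    except KeyboardInterrupt:
        for i, p in enumerate(peaks):
            print(f"core {i}: peak {p:.0f}%")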
Through all this, I've left those added SMB Multichannel commands in the NAS. I'm guessing they are harmless, and probably useless, at this point, but for completeness, all tests in these tables do have the commands in the \addons.conf file.
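For reference, the multichannel setting itself is a Samba global option; a generic smb.conf-style sketch looks like the lines below (the exact lines suggested earlier in the thread may differ, and the aio settings are a commonly paired assumption, not something specific to the ReadyNAS):

    # Samba global options often used to enable SMB3 multichannel
    server multi channel support = yes
    aio read size = 1
    aio write size = 1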
First pic: PC MTU=1500, NAS MTU=9014. Second pic: PC & NAS MTU = 9014
- jimk1963 · Jun 27, 2020 · Virtuoso
Final configuration:
PC:
- MTU=9014
- Static Link Aggregation Enabled
XS716E Switch:
- Ports 1/2 (RN528X): no Static LAG
- Ports 3/4 (RN212): Static LAG Enabled
- Ports 5/6 (RN314): Static LAG Enabled
- Ports 7/8 (PC): Static LAG Enabled
RN212 and RN314:
- ETH Bonding Enabled
- MTU=9014
- SMB Multichannel "enabled" (if I did it right)
RN528X:
- No ETH Bonding
- One ETH port active, the other disabled
- SMB Multichannel "enabled"
With this configuration, I can achieve read/write file transfer speeds around:
RN528X: 810/560 MB/s (1170/850 on NAS Tester 1.7)
RN212: 135/90 MB/s; 205/135 MB/s with simultaneous multi-file transfers (235/110 on NAS Tester 1.7)
RN314: 166/101 MB/s; the same with simultaneous multi-file transfers (236/108 on NAS Tester 1.7)
This configuration gives me the fastest 10GbE performance with the RN528X, while also enabling faster reads on both the RN212 and RN314. Interestingly, the RN314 with the 4 new WD Red 4TB drives can read at up to around 170 MB/s, which is not bad for slow-spinning HDDs. They are dead quiet, so I'm happy. Reliability... we'll see. Also interesting: the RN212 with its original Toshiba 7200RPM HDDs (circa 2014) does great when copying multiple groups of files simultaneously (over 200 MB/s) but can't quite keep up with the RN314 on a single block of files. Of course, the RN314 does have 4 disks compared to only 2 in the RN212...
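As a sanity check on the gigabit numbers: a single GbE link tops out around 118 MB/s of SMB payload, so multi-file reads in the 205-236 MB/s range imply the bond really is spreading traffic across both links. A rough ceiling estimate (the 94% efficiency factor is a ballpark assumption for Ethernet/IP/TCP/SMB overhead, not a measurement):

    # Approximate SMB payload ceiling for bonded GbE links.
    raw_MBps = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s raw per link
    for links in (1, 2):
        print(f"{links} x GbE: ~{links * raw_MBps * 0.94:.0f} MB/s")

That prints roughly 118 MB/s for one link and 235 MB/s for two, which lines up with the NAS Tester read numbers above.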
Thanks StephenB, schumaku, and Sandshark for all your insight and course corrections. The system is tuned about as well as I'm going to get it.