Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
Hello,
I have a problem with my two-switch M5300 stack. I have a LAG with two 10G ports, 1/0/49 and 2/0/49, and a NIC team created on Windows Server 2012.
Sometimes my traffic speed breaks down and I see errors on the LAG ports; throughput between the switches drops to 1M when it should be 100M or more.
Does anyone have a solution, or is there some known interaction issue between the Netgear LAG and Windows NIC teaming?
Thanks for your help!
Re: Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
Sounds a bit strange. Are you running the latest firmware? Have you tried replacing the cables? You should probably call support and have someone look over your settings; with the information you've provided so far, it's hard to say what the issue is. Replacing the cables may be enough.
Re: Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
Hi depinfsocem,
Can you confirm whether Static LAG is disabled on your switch? (If it is disabled, the LAG runs LACP.) It would also help if you could give us more information about the LAG setup on your switch as well as on your server.
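You can check the corresponding settings on the Windows side with the standard NIC-teaming cmdlets (a minimal sketch; the output shows whether the team is running LACP, Static, or switch-independent mode):

```
# Show each team's teaming mode, load-balancing algorithm, and state
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm, Status
```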
Hope to hear from you soon.
Kind regards,
BrianL
NETGEAR Community
Re: Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
I replaced the cables and nothing changed... the firmware is 11.0.0.10...
It's very strange...
Re: Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
My configuration...
Error on switch 2 in the stack: [screenshot]
Switch configuration: [screenshot]
LAG: [screenshot]
Port on switch 1: [screenshot]
Port on switch 2: [screenshot]
NIC teaming:
Teaming Mode: LACP
Load Balancing mode: Hyper-V Port
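For reference, those team settings correspond to something like this in PowerShell (a sketch; the team name "Team1" and the member NIC names are placeholders for the actual adapter names):

```
# Recreate the posted configuration: LACP teaming mode
# with Hyper-V Port load balancing
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort
```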
Re: Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
If you plan to use Hyper-V Port balancing, I would suggest going with the switch-independent mode instead of LACP. The reason is that the Hyper-V Port hash method just binds each VM's MAC to one NIC in the team and uses it for both inbound and outbound traffic.
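On the server that change would look roughly like this (a sketch; "Team1" is a placeholder team name):

```
# Switch-independent mode: the switch needs no LAG/LACP config;
# each vNIC is pinned to one physical member for send and receive
Set-NetLbfoTeam -Name "Team1" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```

If you go this route, remove the LAG on the M5300 so that each MAC is learned on a single physical port.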
If you want more load balancing, and good load balancing, then I would suggest switching the server to LACP + Address Hash. Make sure the hash is Src/Dst IP and port (the default) on your server, and on the switch change the hash mode to 6 (Src/Dest IP and TCP/UDP Port fields) or 7 (Enhanced hashing mode).
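The server side of that suggestion, as a sketch (TransportPorts is the Src/Dst IP + TCP/UDP port hash, i.e. the Address Hash default):

```
# LACP with the transport-ports address hash: flows are spread
# across team members by source/destination IP and TCP/UDP port
Set-NetLbfoTeam -Name "Team1" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
```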
The likely reason for the error count, hardware problems aside, is that you are receiving incoming traffic on the wrong NIC: the switch binds the MAC to the LAG group, while the Hyper-V Port hash method on the server binds the MAC to a specific port, causing inbound (RX) errors.
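You can watch for this on the server by checking the per-adapter receive counters (a sketch; rising error/discard counts on individual team members would support this theory):

```
# Per-NIC receive errors and discards
Get-NetAdapterStatistics | Format-Table Name, ReceivedPacketErrors, ReceivedDiscardedPackets
```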
Also, I forgot to add: it is better for the server not to load-balance traffic across a stacked link, since that traffic adds load on the stack ring. Make sure you have more bandwidth on the stack ring than on the links you plan to LAG across the stack (on the M5300 you can use two stacking links between two or more switches). If you are stacking your switches using one of the 10G ports, then it makes no sense to load-balance the server across separate switches, since the traffic on the 10G LAG alone can fill your single 10G stack link.
For reference:
Hyper-V Port Load-Balancing Algorithm
This method is commonly chosen and recommended for all Hyper-V installations based solely on its name. This is a poor reason. The name wasn't picked because it's automatically the best choice for Hyper-V, but because of how it operates.
The operation is based on the virtual network adapters. In versions 2012 and prior, it was by MAC address. In 2012 R2, and presumably onward, it will be based on the actual virtual switch port. Distribution depends on the teaming mode of the virtual switch.
- Switch-independent: Each virtual adapter is assigned to a specific physical member of the team. It sends and receives only on that member. Distribution of the adapters is just round-robin. The impact on VMQ is that each adapter gets a single queue on the physical adapter it is assigned to, assuming there are enough left.
- Everything else: Virtual adapters are still assigned to a specific physical adapter, but this only applies to outbound traffic. The MAC addresses of all these adapters appear on the combined link on the physical switch side, so it will decide how to send traffic to the virtual switch. Since there's no way for the Hyper-V switch to know where inbound traffic for any given virtual adapter will arrive, it must register a VMQ for each virtual adapter on each physical adapter. This can quickly lead to queue depletion.
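If you suspect VMQ queue depletion, the allocation can be inspected with the NetAdapter cmdlets (a sketch; run on the Hyper-V host):

```
# VMQ state and queue count per physical NIC
Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues

# Which queues are currently bound to which VM/MAC
Get-NetAdapterVmqQueue
```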