Forum Discussion
depinfsocem
Aug 15, 2015Aspirant
Netgear Prosafe M5300 (LAG + NIC Teaming) Windows Server 2012
Hello, I have a problem with my two-switch M5300 stack. I have a LAG with two ports, 1/0/49 (10G) and 2/0/49 (10G), and a NIC team created on Windows Server 2012. Sometimes I have a break in my...
depinfsocem
Aug 16, 2015Aspirant
My configuration...
Error on switch 2 of the stack:
Switch configuration:
LAG:
Port on switch 1:
Port on switch 2:
NIC teaming:
Teaming Mode: LACP
Load Balancing mode: Hyper-V Port
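For context, a team like the one described above would typically have been created with something like the following PowerShell. This is only a sketch; the team name "Team1" and the NIC names "NIC1"/"NIC2" are assumptions, since the real names are not shown in the post:
# Sketch of the configuration above: LACP teaming with the Hyper-V Port load balancing algorithm.
# "Team1", "NIC1" and "NIC2" are assumed names - adjust to the actual server.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort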
Jedi_Exile
Aug 18, 2015NETGEAR Expert
If you plan to use Hyper-V Port balancing, I would suggest going with the switch-independent teaming mode instead of LACP. The reason is that the Hyper-V Port hash method just binds each VM's MAC to one NIC in the team and uses that NIC for both inbound and outbound traffic.
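If you go the switch-independent route, the existing team can simply be switched over; a minimal PowerShell sketch, assuming the team is named "Team1" (the real name isn't shown in the thread):
# Sketch: change the team to switch-independent mode; the load balancing algorithm is left
# as-is, so the team keeps Hyper-V Port distribution.
# Note: with switch-independent teaming the LACP LAG on the switch ports is no longer used.
Set-NetLbfoTeam -Name "Team1" -TeamingMode SwitchIndependent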
If you want better load balancing, then I would suggest you switch the server to LACP + Address Hash. Make sure the hash is Src/Dst IP and port (the default) on your server, and on the switch change the LAG hash to mode 6 (Src/Dest IP and TCP/UDP Port fields) or mode 7 (Enhanced hashing mode).
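On the server side that change might look like the following (again with "Team1" assumed as the team name); "Address Hash" with source/destination IP and TCP/UDP port corresponds to the TransportPorts algorithm:
# Sketch: keep LACP but use the Address Hash (TransportPorts) load balancing algorithm.
Set-NetLbfoTeam -Name "Team1" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
# Verify the resulting teaming mode and algorithm.
Get-NetLbfoTeam -Name "Team1"
The matching hash change on the M5300 side is made in the LAG configuration, selecting hash mode 6 or 7 as described above.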
The likely reason for the error count, apart from a hardware or configuration fault, is that you are receiving incoming traffic on the wrong NIC: the switch binds the MAC to the LAG group, while the Hyper-V Port hash method on the server binds the MAC to a specific port, causing inbound (RX) errors.
Also, I forgot to add: it is better for the server not to load balance traffic across a stacked link, since that traffic puts extra load on the stack ring. Make sure you have more bandwidth on the stack ring than on the links you plan to LAG across the stack. (On the M5300 you can use two stacking links between two or more switches.) If you are stacking your switches using one of the 10G ports, it makes no sense to load balance the server across separate switches, since the maximum traffic on the 10G LAG will fill your single 10G stack link.
For reference:
Hyper-V Port Load-Balancing Algorithm
This method is commonly chosen and recommended for all Hyper-V installations based solely on its name. This is a poor reason. The name wasn’t picked because it’s the automatic best choice for Hyper-V, but because of how it operates.
The operation is based on the virtual network adapters. In versions 2012 and prior, it was by MAC address. In 2012 R2, and presumably onward, it will be based on the actual virtual switch port. Distribution depends on the teaming mode of the virtual switch.
Switch-independent: Each virtual adapter is assigned to a specific physical member of the team. It sends and receives only on that member. Distribution of the adapters is just round-robin. The impact on VMQ is that each adapter gets a single queue on the physical adapter it is assigned to, assuming there are enough left.
Everything else: Virtual adapters are still assigned to a specific physical adapter, but this will only apply to outbound traffic. The MAC addresses of all these adapters appear on the combined link on the physical switch side, so it will decide how to send traffic to the virtual switch. Since there’s no way for the Hyper-V switch to know where inbound traffic for any given virtual adapter will be, it must register a VMQ for each virtual adapter on each physical adapter. This can quickly lead to queue depletion.
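To see how queues are actually being spread across the physical team members (and to spot queue depletion), the in-box VMQ cmdlets can be used; a short sketch, assuming nothing beyond the standard Windows Server networking cmdlets:
# Show VMQ settings and the number of receive queues per physical adapter.
Get-NetAdapterVmq
# List the individual queues, including the MAC address / VM each queue is bound to.
Get-NetAdapterVmqQueue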