
Forum Discussion

depinfsocem
Aspirant
Aug 15, 2015

Netgear Prosafe M5300 ( lag + NIC Teaming ) Windows Server 2012

Hello,
I have a problem with my two-switch M5300 stack. I have a LAG on two ports, 1/0/49 (10G) and 2/0/49 (10G), and a NIC team created on Windows Server 2012.
Sometimes my traffic speed drops and errors appear on the LAG ports; throughput between the switches falls to 1M when it should be 100M or more.
Does anyone have a solution, or is there a known interaction between NETGEAR LAG and Windows NIC teaming?
Thanks for your help!

5 Replies

  • Danthem
    NETGEAR Employee Retired

    Sounds a bit strange - are you running the latest firmware? Have you tried replacing the cables? You should probably call in to support and have someone look over your settings; with the information you've provided so far it's hard to say what the issue is. Replacing the cables may be enough.

    • depinfsocem
      Aspirant

      I replaced the cables and nothing changed... the firmware is 11.0.0.10.
      It's very strange.

  • BrianL2
    NETGEAR Employee Retired

    Hi depinfsocem,

     

    Can you confirm whether Static LAG is disabled on your switch? (If it is disabled, the LAG runs LACP.) It would also help if you could give us more information about the LAG setup on your switch as well as on your server.

     

    Hope to hear from you soon.

     


    Kind regards,

     

    BrianL
    NETGEAR Community

    • depinfsocem
      Aspirant

      My configuration...

      Errors on switch 2 in the stack: [screenshot]

      Switch configuration:

      LAG: [screenshot]

      Ports on switch 1: [screenshot]

      Ports on switch 2: [screenshot]

      NIC teaming:

      Teaming Mode: LACP
      Load Balancing Mode: Hyper-V Port

      • Jedi_Exile
        NETGEAR Expert

        If you plan to use Hyper-V balancing, I would suggest going with the switch-independent method instead of LACP. The reason is that the Hyper-V hash method binds each VM's MAC to one NIC in the team and uses that NIC for both inbound and outbound traffic.

         

        If you want better load balancing, then I would suggest switching the server to LACP + Address Hash. Make sure the hash is Src/Dst IP and port (the default) on your server, and on the switch change the hash mode to 6 (Src/Dst IP and TCP/UDP port fields) or 7 (Enhanced hashing mode).
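        To illustrate the idea (a hypothetical sketch, not actual switch firmware - the function name and hash choice are mine), an address hash like mode 6 derives the LAG member from a flow's IP addresses and ports, so every packet of one flow stays on one link while different flows spread across members:

```python
# Illustrative sketch (assumed, not NETGEAR's implementation): how a
# Src/Dst IP + TCP/UDP port hash distributes flows across LAG members.
import hashlib

def lag_member(src_ip, dst_ip, src_port, dst_port, num_links):
    """Pick a LAG member index from the flow's address/port fields."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Every packet of a given flow hashes to the same link,
# so per-flow ordering is preserved...
a = lag_member("10.0.0.1", "10.0.0.2", 49152, 443, 2)
b = lag_member("10.0.0.1", "10.0.0.2", 49152, 443, 2)
assert a == b
# ...while flows with different ports/addresses can land on other links.
```

        The key property is that the hash is deterministic per flow, which is why an address hash balances many flows well but can never split a single large flow across links.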

         

        The likely reason for the error count, hardware configuration aside, is that you are receiving inbound traffic on the wrong NIC: the switch binds the MAC to the LAG group, while the Hyper-V hash method on the server binds the MAC to a specific port, causing inbound (RX) errors.

         

        For reference

        http://www.darrylvanderpeijl.nl/nic-teaming-modes-and-distribution-algorithms-in-windows-server-2012-r2/

        http://blogs.technet.com/b/uspartner_ts2team/archive/2012/09/25/nic-teaming-in-windows-server-2012.aspx

         

         

        Also, I forgot to add: it is better for the server not to load-balance traffic across a stacked link, since that traffic adds load to the stack ring. Make sure the stack ring has more bandwidth than the total of the links you plan to LAG across the stack. (On the M5300 you can use two stacking links between two or more switches.) If you stack your switches using one of the 10G ports, it makes no sense to load-balance the server across separate switches, since the maximum traffic on the 10G LAG will fill your single 10G stack link.
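        The arithmetic behind that warning can be checked directly (illustrative numbers matching this thread's setup):

```python
# Sanity check: a 2 x 10G LAG split across two stacked switches can
# offer more traffic than a single 10G stack link can carry, so the
# stack ring becomes the bottleneck.
lag_links = 2
link_gbps = 10
stack_links = 1        # one 10G port used for stacking
stack_gbps = 10

lag_capacity = lag_links * link_gbps        # 20 Gbps aggregate from the server
stack_capacity = stack_links * stack_gbps   # 10 Gbps across the stack ring
print(lag_capacity > stack_capacity)        # True: stack link is the bottleneck
```

        With two stacking links (20 Gbps of ring capacity) the cross-stack LAG no longer oversubscribes the ring.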


        For reference:

        Hyper-V Port Load-Balancing Algorithm

        This method is commonly chosen and recommended for all Hyper-V installations based solely on its name. That is a poor reason: the name wasn't picked because it's automatically the best choice for Hyper-V, but because of how it operates.

        The operation is based on the virtual network adapters. In versions 2012 and prior, it was by MAC address. In 2012 R2, and presumably onward, it will be based on the actual virtual switch port. Distribution depends on the teaming mode of the virtual switch.

        Switch-independent: Each virtual adapter is assigned to a specific physical member of the team and sends and receives only on that member. Distribution of the adapters is simply round-robin. The impact on VMQ is that each virtual adapter gets a single queue on the physical adapter it is assigned to, assuming there are enough queues left.

        Everything else: Virtual adapters are still assigned to a specific physical adapter, but this applies only to outbound traffic. The MAC addresses of all these adapters appear on the combined link on the physical switch side, so the switch decides how to send traffic to the virtual switch. Since there's no way for the Hyper-V switch to know where inbound traffic for any given virtual adapter will arrive, it must register a VMQ for each virtual adapter on each physical adapter, which can quickly lead to queue depletion.
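        The switch-independent distribution described above can be sketched as follows (hypothetical names; this is only a model of the round-robin pinning, not the actual Windows implementation):

```python
# Sketch of switch-independent Hyper-V Port distribution: each virtual
# adapter is pinned round-robin to one physical team member, and uses
# that member for both send and receive.
def assign_round_robin(virtual_adapters, physical_nics):
    """Map each virtual adapter to a physical NIC, round-robin."""
    return {vnic: physical_nics[i % len(physical_nics)]
            for i, vnic in enumerate(virtual_adapters)}

mapping = assign_round_robin(["vm1", "vm2", "vm3"], ["nic0", "nic1"])
# vm1 -> nic0, vm2 -> nic1, vm3 -> nic0
```

        Note that one VM's traffic is capped at a single NIC's bandwidth under this mode, which matches the speed drops described earlier in the thread when it is combined with a switch-side LACP LAG.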
