[active-active] traffic is not divided between the NICs


Derick Fontes

Question

NIC1 - 1 Gb/s

NIC2 - 1 Gb/s

Bond0 - 2 Gb/s (NIC1 + NIC2)

ovs-vsctl list port bond0
_uuid               : 20f245a5-7985-4af4-b1fc-e943fb9b3d73
bond_active_slave   : "0c:c4:7a:a4:87:ea"
bond_downdelay      : 200
bond_fake_iface     : false
bond_mode           : balance-slb
bond_updelay        : 31000
external_ids        : {}
fake_bridge         : false
interfaces          : [d9b37e86-7991-49ce-bfbb-965703980175, e30b8a13-f5bf-4cd6-aa35-f68a39360a3d]
lacp                : off
mac                 : "0c:c4:7a:a4:87:ea"
name                : "bond0"
other_config        : {bond-detect-mode=carrier, bond-miimon-interval="100", bond-rebalance-interval="10000"}
qos                 : []
rstp_statistics     : {}
rstp_status         : {}
statistics          : {}
status              : {}
tag                 : []
trunks              : []
vlan_mode           : []
ovs-appctl bond/show bond0
---- bond0 ----
bond_mode: balance-slb
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 31000 ms
downdelay: 200 ms
next rebalance: 323 ms
lacp_status: off
active slave mac: 0c:c4:7a:a4:87:ea(eth0)

slave eth0: enabled
        active slave
        may_enable: true
        hash 130: 230313 kB load

slave eth1: enabled
        may_enable: true
        hash 49: 31874 kB load
        hash 162: 622 kB load
        hash 235: 1 kB load

 

My scenario: one specific VM receives a lot of incoming traffic, so aggregating the interfaces to reach up to 2 Gb/s is important.

 

Using active-active mode with balance-slb, I see that traffic is not divided between the network cards; it always stays on one specific NIC. In other words, I am not getting the benefit of aggregation that would let me receive 2 Gb/s of traffic.

 

Reading the documentation, my understanding is that this mode should be able to reach 2 Gb/s, just like a setup using LACP.

 

In my scenario, would LACP be the better choice? Or am I missing something?
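For reference, balance-slb picks an output NIC per source MAC, so one quick sanity check is to see which hash bucket a given VM's MAC lands in and compare it against the per-slave hash lists above. A minimal sketch, where the MAC below is only a placeholder for the VM's VIF MAC:

# Show which hash bucket a given source MAC maps to (placeholder MAC).
ovs-appctl bond/hash aa:bb:cc:dd:ee:11

# Compare the reported hash against the per-slave "hash N" lines.
ovs-appctl bond/show bond0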

 

Thank you,

 

[Attachment: xen.png]


3 answers to this question


LACP would be what I suggest in general. Do note that this form of link aggregation still means that any active connection from a single VM will be limited to 1 Gb/s, even with LACP. You only get up to 2 Gb/s for all combined VM traffic. The only way you'll be able to get higher throughput for a single VM connection is by upgrading to 10 Gb (or faster) NICs, unfortunately.
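If you do go the LACP route, the switch ports need to be configured as an LACP port-channel first, and the bond is then recreated in LACP mode. A rough sketch with the xe CLI (all UUIDs below are placeholders; LACP also requires the vSwitch network backend):

# Find the UUIDs of the bond network and the member PIFs (placeholders).
xe network-list
xe pif-list device=eth0 params=uuid,device,host-name-label
xe pif-list device=eth1 params=uuid,device,host-name-label

# Remove the existing bond and recreate it in LACP mode.
xe bond-list
xe bond-destroy uuid=<bond-uuid>
xe bond-create network-uuid=<network-uuid> pif-uuids=<eth0-pif-uuid>,<eth1-pif-uuid> mode=lacp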

 

-=Tobias


Yes, you get a theoretical 2 Gb/s, but in reality probably nowhere close. All of these bonding modes use hashing to load balance. So if your VMs are spread across the bond and six low-bandwidth VMs land on one NIC while six high-bandwidth VMs land on the other, you may be balanced by number of VMs, but bandwidth will still be way out of balance. Traffic can also be split up by send and/or receive direction, which adds to the mix. You really only get a true/better split of network traffic with multipathing, because that puts XenServer in charge of splitting traffic across multiple networks: the decision on where to send traffic is made at the source of the connection (XenServer), not during traffic transfer by the switch.

 

--Alan--

 


All that will happen is that traffic will be multiplexed and distributed between the NICs, but for a given VM, network traffic will never pass through both at the same time. The purpose is to allow more traffic to be handled overall by many VMs, not for any single VM to have more bandwidth available. The other benefit, of course, is redundancy.
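One way to see that aggregate (rather than per-VM) effect is to watch the per-slave load while several VMs are pushing traffic, for example:

# Per-slave hash loads on the bond (rerun while traffic is flowing).
ovs-appctl bond/show bond0

# Raw per-NIC byte counters.
ip -s link show eth0
ip -s link show eth1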

 

-=Tobias

