Bonded Interfaces – Testing


Hi,

I have two servers running CentOS 6.3. Each has four 1 Gb Ethernet ports. I have bonded all four ports on each server and patched them to the same switch (following the instructions at http://wiki.CentOS.org/TipsAndTricks/BondingInterfaces). I have created aggregated trunks on the switch for the servers' respective ports. The switch reports that the ports are up and that link aggregation is enabled.

I was under the impression, perhaps falsely, that I would see an improvement in throughput, but I am not sure that I am. I used a utility called nuttcp before and after the trunking to test the throughput, and I am seeing minuscule differences in the Mbps. I am not sure that I am testing correctly. Perhaps the speed will not change, but the available bandwidth has increased. Is there some way to demonstrate that I can transfer data between these two servers at a rate of 4,000 Mbps?

Does anyone have any thoughts? I have pasted some details below in case they have a bearing. Thanks in advance, Dermot.

./nuttcp otherserver
1125.6573 MB / 10.03 sec = 941.4967 Mbps 10 %TX 47 %RX

cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 4
Actor Key: 17
Partner Key: 420
Partner Mac Address: 10:0d:7f:4c:16:ca

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:76:8a:5d:28:1c
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:76:8a:5d:28:1d
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:76:8a:5d:28:1e
Aggregator ID: 2
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:76:8a:5d:28:1f
Aggregator ID: 2
Slave queue ID: 0

2 thoughts on - Bonded Interfaces – Testing

  • Remember that LACP (802.3ad) uses a hash algorithm to pick a physical link for each flow (how the hash is computed is configurable — whether it uses MAC addresses, src/dst IPs, or ports often varies for optimisation), and the flow then stays on that physical link.

    As such, any one given flow will see at most the speed of the single physical interface its data travels over. The speed increase comes when multiple systems communicate with the server and, with the right choice of hashing function, those connections end up on different interfaces.
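    To make the per-flow pinning concrete, here is a small Python sketch approximating the bonding driver's default layer2 transmit policy (per the Linux bonding documentation: XOR of source and destination MAC, modulo the slave count — simplified here to the last octet). The MAC addresses are taken from the /proc output above; the peer list is made up for illustration. Every frame between the same pair of hosts lands on the same slave, which is why a single nuttcp stream cannot exceed one 1 Gbps link.

```python
def layer2_hash(src_mac: str, dst_mac: str, n_slaves: int = 4) -> int:
    """Sketch of the bonding layer2 xmit policy: XOR the last octet of
    the source and destination MAC addresses, modulo the slave count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_slaves

server = "2c:76:8a:5d:28:1c"   # eth0's permanent HW addr from the paste
partner = "10:0d:7f:4c:16:ca"  # partner MAC from the paste

# One server talking to one peer: every packet hashes to the same slave,
# so a single TCP stream is pinned to one 1 Gbps link.
print(layer2_hash(server, partner))

# Several distinct (hypothetical) peers can spread across the slaves:
peers = ["10:0d:7f:4c:16:%02x" % b for b in range(4)]
print({p: layer2_hash(server, p) for p in peers})
```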

  • In which case, an appropriate test would be to have several servers push data to the server while its interface is un-bonded; we would anticipate each result coming in under 1000 Mbps as the streams contend for one link. Then do the same with the bonded interface, where the results should hopefully hold more consistently around 1000 Mbps per stream.

    So I should not expect faster single-stream throughput, simply a fatter pipe?

    If it matters, these are the hashing options available on the switch:

    Src MAC, VLAN, EType, incoming port

    Dest MAC, VLAN, EType, incoming port

    Src/Dest MAC, VLAN, EType, incoming port

    Src IP and Src TCP/UDP Port fields

    Dest IP and Dest TCP/UDP Port fields

    Src/Dest IP and TCP/UDP Port fields

    Enhanced hashing mode

    Thanks, Dermot
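The IP-and-port options in that list matter because they hash on fields that differ between parallel TCP connections. Below is a simplified sketch (not any vendor's exact algorithm, and the IPs and ports are made up for illustration) of a layer3+4-style hash: because the source port enters the hash, several simultaneous streams between the same two hosts can land on different slaves, which is what lets a multi-stream test approach the aggregate 4 Gbps.

```python
import ipaddress

def l34_hash(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
             n_slaves: int = 4) -> int:
    """Simplified layer3+4-style hash: fold src/dst IP and TCP/UDP
    ports down to one byte, then take the modulus over the slaves."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    h = s ^ d ^ src_port ^ dst_port
    h = (h >> 16) ^ (h & 0xFFFF)  # fold 32 bits down to 16
    h = (h >> 8) ^ (h & 0xFF)     # ...and 16 down to 8
    return h % n_slaves

# Four parallel streams between the SAME pair of hosts, differing only
# in the (ephemeral) source port, can spread across the four slaves:
streams = [l34_hash("192.168.1.10", "192.168.1.20", 49152 + i, 5001)
           for i in range(4)]
print(streams)
```

With a layer2 policy, all four of those streams would share one link; whether they actually spread here depends on the port numbers in use, but unlike MAC-only hashing they at least have the chance to.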
