Network Bonding on CentOS (Red Hat, RHEL) 6.x


Network bonding is a method of combining two or more network interfaces into a single logical interface. Depending on the mode, it can increase network throughput while providing redundancy: if one interface goes down, the remaining interfaces continue to provide network access.

Linux provides the bonding kernel module to allow bonding of multiple network interfaces into a single logical network interface. The behaviour of the bonded interfaces depends on the mode setting of the logical bond interface.
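
If you want to confirm that the bonding driver is available before you begin, you can query it with modinfo (the module ships with the stock CentOS/RHEL 6 kernels):

modinfo bonding

The output should list the driver version and the parameters it accepts, such as mode and miimon.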

Network Bonding Modes

For a comprehensive list of bonding modes and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.  According to the official documentation, the supported bonding modes are:

mode=0 (balance-rr)

Round-robin: This is the default mode. It transmits packets in sequential order from the first available network interface to the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)

Active-backup: In this mode, only one network interface in the bond is active. Another interface becomes active only when the active interface fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.

mode=2 (balance-xor)

XOR: Transmit based on [(source MAC address XOR’d with destination MAC address) modulo network interface count]. This selects the same network interface for each destination MAC address. This mode provides load balancing and fault tolerance.
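
As a quick illustration of the formula above, suppose the bond has two slaves and the last bytes of the source and destination MAC addresses are 0x5E and 0x3A (hypothetical values); the selected slave index can then be computed in a shell as:

echo $(( (0x5E ^ 0x3A) % 2 ))

This prints 0, so the first slave would carry all traffic between that pair of MAC addresses.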

mode=3 (broadcast)

Broadcast: transmits all data on all network interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)

IEEE 802.3ad Dynamic link aggregation: Creates aggregation groups that share the same speed and duplex settings. Utilizes all network interfaces in the active aggregator according to the 802.3ad specification.

Prerequisites:

– Ethtool support in the network adapter driver for retrieving the speed and duplex of each slave (see the quick check after this list).
– A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
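
A quick way to confirm the ethtool prerequisite is to query each slave (eth1 is used here as an example; substitute your own interface names):

ethtool eth1 | grep -E 'Speed|Duplex'

If the driver supports ethtool, this reports the current speed and duplex, for example "Speed: 1000Mb/s" and "Duplex: Full".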

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each network interface. Incoming traffic is received by the current network interface. If the receiving network interface fails, another network interface takes over the MAC address of the failed receiving slave.

Prerequisite:

– Ethtool support in the network interface drivers for retrieving the speed of each network interface.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the network interfaces in the bond, so that different peers use different hardware addresses for the server.

Bonding Options Passed to the Bonding Driver

mode=

An integer from 0 to 6, selecting one of the bonding modes described above.

miimon=

An integer specifying the frequency (in ms) of MII link monitoring. The default is 0, which disables link monitoring. Link monitoring is recommended; 100 is a good value.

downdelay=

An integer specifying how long (in ms) to wait before disabling a link after a link failure has been detected. It must be a multiple of miimon. The default is 0. On a very busy network with jobs that generate large data flows, it may be necessary to increase downdelay; 500 is a good value.
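
For example, with miimon=100, a downdelay of 500 (a multiple of miimon) could be combined into BONDING_OPTS like this (a sketch; adjust the values to your environment):

BONDING_OPTS="mode=1 miimon=100 downdelay=500"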

updelay=

An integer specifying how long (in ms) to wait before enabling a link after the “link up” status has been detected. It must be a multiple of miimon. The default is 0.

xmit_hash_policy=

This option defines the transmit load-balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is spread across many different IP addresses, you may want to balance by IP address. You do this by combining it with the mode setting, such as mode=4 with xmit_hash_policy=layer2+3, as shown below.
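
For instance, to balance on both MAC and IP addresses in 802.3ad mode, the interface configuration could carry (a sketch; the walkthrough below uses mode 1 instead):

BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer2+3"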

Configure Bond0 Interface

By default, Red Hat and CentOS store network configuration files under /etc/sysconfig/network-scripts.

As root, create a bond0 configuration file.

vi /etc/sysconfig/network-scripts/ifcfg-bond0

Add the following lines to the file.

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.200
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

BONDING_OPTS describes the bonding options passed to the bonding driver. In this example, we configure mode 1 (active-backup). 192.168.1.200 is the IP address of the bond0 logical network interface.

Save and close the file.

Next, create an alias so that the kernel loads the bonding driver for the bond0 interface.

vi /etc/modprobe.d/bonding.conf

Add the following line to the file.

alias bond0 bonding

Save and close the file.
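
To check that the alias has been picked up, you can dump the module configuration (exact output varies with the module-init-tools version):

modprobe -c | grep bonding

The line "alias bond0 bonding" should appear in the output.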

Configure Network interfaces

To make bond0 use the physical network interfaces eth1 and eth2, we must modify the eth1 and eth2 network configuration files.

Edit the file /etc/sysconfig/network-scripts/ifcfg-eth1:

vi /etc/sysconfig/network-scripts/ifcfg-eth1

Modify the file as shown below.

DEVICE=eth1
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

Save and close the file.

Edit the file /etc/sysconfig/network-scripts/ifcfg-eth2:

vi /etc/sysconfig/network-scripts/ifcfg-eth2

Modify the file as shown below.

DEVICE=eth2
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

Save and close the file.
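
As a quick sanity check, confirm that both slave files point at bond0:

grep -E 'MASTER|SLAVE' /etc/sysconfig/network-scripts/ifcfg-eth1 /etc/sysconfig/network-scripts/ifcfg-eth2

Both files should show MASTER=bond0 and SLAVE=yes.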

Load the bonding kernel module (driver).

modprobe bonding
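
You can confirm that the module is loaded:

lsmod | grep bonding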

Restart network services.

service network restart

Test Network Bonding

To verify that bond0 is up and running:

cat /proc/net/bonding/bond0

Sample output:

Ethernet Channel Bonding Driver: vX.X.X (Date)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: MAC Address
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: MAC Address
Slave queue ID: 0

The output shows that bond0 is up and running and is configured in mode 1 (active-backup).
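
For an active-backup bond, you can also exercise the failover path: take the currently active slave down and re-read the bonding status (run this from the console or over an interface that is not part of the bond, in case something goes wrong):

ifdown eth1
grep "Currently Active Slave" /proc/net/bonding/bond0

The active slave should switch to eth2 while the bond0 IP address stays reachable; bring eth1 back with ifup eth1 afterwards.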

To view the list of network interfaces and their IP addresses:

ifconfig

Sample output:

bond0     Link encap:Ethernet  HWaddr 08:00:27:FE:6F:BF  
          inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fefe:6fbf/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:379 errors:0 dropped:0 overruns:0 frame:0
          TX packets:167 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:32354 (31.5 KiB)  TX bytes:24078 (23.5 KiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:BE:25:49  
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:febe:2549/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1402 errors:0 dropped:0 overruns:0 frame:0
          TX packets:904 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:134823 (131.6 KiB)  TX bytes:124938 (122.0 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:FE:6F:BF  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:285 errors:0 dropped:0 overruns:0 frame:0
          TX packets:156 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:24746 (24.1 KiB)  TX bytes:22956 (22.4 KiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:FE:6F:BF  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:95 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7674 (7.4 KiB)  TX bytes:1364 (1.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)