Setting Up Network Bonding
Bonding multiple network interfaces together provides the following benefits:
- High network availability. If one of the interfaces fails, traffic is automatically routed to the working interface(s).
- Higher network performance. For example, two Gigabit interfaces bonded together deliver about 1.7 Gbit/s, or about 200 MB/s, of throughput. The required number of bonded storage network interfaces may depend on how many storage drives are on the Hardware Node. For example, a rotational HDD can deliver up to 1 Gbit/s of throughput.
To configure a bonding interface, do the following:
- Create the /etc/modprobe.d/bonding.conf file containing the following line:
alias bond0 bonding
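If you prefer to do this from a shell, a minimal sketch might look like the following (the file path and the alias line are taken from this step; loading and checking the bonding module are optional extra commands shown only as a convenience):
# echo "alias bond0 bonding" > /etc/modprobe.d/bonding.conf
# modprobe bonding
# lsmod | grep bonding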
- Create the /etc/sysconfig/network-scripts/ifcfg-bond0 file containing the following lines:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
BONDING_OPTS="mode=balance-xor xmit_hash_policy=layer3+4 miimon=300 downdelay=300 updelay=300"
NAME="Storage net0"
NM_CONTROLLED=no
IPADDR=xxx.xxx.xxx.xxx
PREFIX=24
Notes:
1. Make sure to enter the correct values in the IPADDR and PREFIX lines.
2. The balance-xor mode is recommended, because it offers both fault tolerance and better performance. For more details, see the documents listed below.
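As an illustration of note 1, a completed address block might look like the example below; 10.30.25.10 is a made-up storage network address, and PREFIX=24 corresponds to the netmask 255.255.255.0. Substitute the values used on your storage network:
IPADDR=10.30.25.10
PREFIX=24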
- Make sure the configuration file of each Ethernet interface you want to bond (e.g., /etc/sysconfig/network-scripts/ifcfg-eth0) contains the lines shown in this example:
DEVICE="eth0"
BOOTPROTO=none
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
HWADDR=xx:xx:xx:xx:xx:xx
MASTER=bond0
SLAVE=yes
USERCTL=no
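If the bond aggregates a second interface, that interface needs an analogous configuration file. The sketch below assumes the second slave is named eth1; replace the MAC address placeholder with the real hardware address of that interface:
DEVICE="eth1"
BOOTPROTO=none
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
HWADDR=xx:xx:xx:xx:xx:xx
MASTER=bond0
SLAVE=yes
USERCTL=no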
- Bring up the bond0 interface:
# ifup bond0
- Use the dmesg output to verify that bond0 and its slave Ethernet interfaces are up and the links are ready.
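For example, the following standard Linux commands (not specific to this guide) can help confirm the state of the bond; the exact output depends on your driver version and interface names:
# dmesg | grep -i bond
# cat /proc/net/bonding/bond0
# ip addr show bond0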
Note: More information on network bonding is provided in the Red Hat Enterprise Linux Deployment Guide and the Linux Ethernet Bonding Driver HOWTO.