Ethernet Channel Bonding (NIC Teaming) on Linux Systems


Ethernet channel bonding combines two or more Network Interface Cards (NICs) into a single virtual NIC, which can increase bandwidth and provide redundancy. It is a great way to achieve redundant links, fault tolerance, or load balancing in a production system: if one physical NIC goes down or is unplugged, traffic automatically moves to the other card. Channel/NIC bonding works with the help of the bonding driver in the kernel. We'll use two NICs to demonstrate it.
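
The bonding module normally loads automatically once a bond interface is configured, but it can also be loaded and verified by hand. A minimal check, using standard modprobe/lsmod tooling:

# modprobe bonding
# lsmod | grep bonding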

The bonding driver supports seven modes (0 through 6). Here, we'll review only the two that are popular and widely used.

  1. 0: Load balancing (round-robin): Traffic is transmitted in sequential, round-robin fashion across both NICs. This mode provides load balancing and fault tolerance.
  2. 1: Active-backup: Only one slave NIC is active at any given time. The other interface card takes over only if the active slave fails. (A quick way to list every supported mode follows this list.)
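
To list every mode your kernel's bonding driver supports, you can query the module itself; modinfo is standard, though the exact wording of its output varies between kernel versions:

# modinfo bonding | grep mode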

Creating Ethernet Channel Bonding

We have two Ethernet network cards, eth1 and eth2, from which the bonding interface bond0 will be created. Superuser privileges are needed to execute the commands below.

Load Balancing (Round-Robin)
Configure eth1

Set the MASTER parameter to bond0 and mark the eth1 interface as a SLAVE in its config file, as shown below.

# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
USERCTL=no
MASTER=bond0
SLAVE=yes
Configure eth2

Here too, set MASTER to bond0 and mark the eth2 interface as a SLAVE.

# vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE="eth2"
TYPE="Ethernet"
ONBOOT="yes"
USERCTL=no
#NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
Create bond0 Configuration

Create bond0 by adding a channel bonding configuration file called ifcfg-bond0 in the /etc/sysconfig/network-scripts/ directory.

The following is a sample channel bonding configuration file.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
BONDING_OPTS="mode=0 miimon=100"

Note: In the above configuration we chose the bonding options mode=0, i.e. round-robin, and miimon=100 (a link polling interval of 100 ms).
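
On older releases whose network scripts lack BONDING_OPTS support, the same options can be handed to the driver through a modprobe configuration file instead. A minimal sketch (the file name bonding.conf is just a common convention):

# vi /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=0 miimon=100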

Let's look at the interfaces with the ifconfig command, which shows bond0 running as the MASTER and both interfaces eth1 and eth2 running as SLAVEs.

# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0C:29:57:61:8E
          inet addr:192.168.246.130  Bcast:192.168.246.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe57:618e/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:17374 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16060 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1231555 (1.1 MiB)  TX bytes:1622391 (1.5 MiB)

eth1      Link encap:Ethernet  HWaddr 00:0C:29:57:61:8E
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:16989 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8072 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1196931 (1.1 MiB)  TX bytes:819042 (799.8 KiB)
          Interrupt:19 Base address:0x2000

eth2      Link encap:Ethernet  HWaddr 00:0C:29:57:61:8E
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:385 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7989 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:34624 (33.8 KiB)  TX bytes:803583 (784.7 KiB)
          Interrupt:19 Base address:0x2080

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

Restart the network service, and the interfaces should come up fine.

# service network restart
Shutting down interface bond0:                             [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface bond0:                               [  OK  ]
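
If you prefer not to bounce the entire network service, the bond interface alone can also be cycled using the initscripts helpers:

# ifdown bond0
# ifup bond0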

Checking the status of the bond.

# watch -n .1 cat /proc/net/bonding/bond0
Sample Output

The output below shows that the bonding mode is load balancing (round-robin) and that both eth1 and eth2 are up.

Every 0.1s: cat /proc/net/bonding/bond0                         Thu Sep 12 14:08:47 2013 

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 2
Permanent HW addr: 00:0c:29:57:61:8e
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 2
Permanent HW addr: 00:0c:29:57:61:98
Slave queue ID: 0
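
To watch the round-robin balancing in action, generate some traffic and observe the per-slave transmit counters rising at roughly the same rate. A minimal check, using the standard per-interface statistics files under /sys/class/net:

# watch -n 1 'cat /sys/class/net/eth1/statistics/tx_packets /sys/class/net/eth2/statistics/tx_packets'
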
Create Active-Backup

In this scenario the slave interfaces remain the same; the only change is in the bond interface file ifcfg-bond0, where the mode '0' becomes '1', as shown below.

# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.246.130
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"

Restart network service and check the status of bonding.

# service network restart
Shutting down interface bond0:                             [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface bond0:                               [  OK  ]

Check the status of the bond with the following command.

# watch -n .1 cat /proc/net/bonding/bond0
Sample Output

The bonding mode now shows fault-tolerance (active-backup), and the slave interfaces are up.

Every 0.1s: cat /proc/n...  Thu Sep 12 14:40:37 2013

Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:0c:29:57:61:8e
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 00:0c:29:57:61:98
Slave queue ID: 0

Note: To verify that channel bonding works, manually bring a slave interface down and back up using the commands below.

# ifconfig eth1 down
# ifconfig eth1 up
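
While eth1 is down, the failover can be confirmed by reading the active slave straight from the same /proc file used above; in active-backup mode it should report eth2 until eth1 returns:

# grep "Currently Active Slave" /proc/net/bonding/bond0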

That's it!


Postscript: a brief explanation of what the parameters in options mean when loading the bonding module:

miimon — how often the link state is monitored, in milliseconds; in this article it is set to 100 ms.

max_bonds — the number of bond interfaces to configure.

mode — the bonding mode. The main modes are listed below; in typical real-world deployments, 0 and 1 are used the most.

To understand the characteristics of each mode in depth, you will need to read up on them and experiment yourself.

0 or balance-rr — round-robin policy: provides load balancing and fault tolerance by sending packets in sequential order out of each interface in the bond in turn.

1 or active-backup — active-backup policy: provides high fault tolerance with simple logic; one interface is active, and if it fails the other is activated automatically.

2 or balance-xor — XOR policy: provides load balancing and fault tolerance.

3 or broadcast — broadcast policy: provides fault tolerance by transmitting data out of every interface in the bond.

4 or 802.3ad — IEEE 802.3ad dynamic link aggregation.

5 or balance-tlb — adaptive transmit load balancing.

6 or balance-alb — adaptive load balancing.
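
On kernels with sysfs support for bonding, the active mode of a running bond can also be read through /sys. A minimal check against the bond0 interface configured above; the file prints the mode name and its number:

# cat /sys/class/net/bond0/bonding/mode
balance-rr 0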

