Multi-NIC bonding with team
Multiple NICs are bonded together to provide service to the outside; when users access the service, packets are sent through the bonded NICs.
Modes of a team group
broadcast: broadcast mode: incoming packets are sent out as broadcasts through every NIC
roundrobin: round-robin mode: packets are sent out through the NICs in turn
activebackup: active/backup mode: the two NICs back each other up. One NIC acts as the active device while the other stays idle as the backup; when the active device fails, traffic switches over to the backup.
loadbalance: load-balancing mode: the two NICs are bonded in load-balancing mode, and a hash calculation spreads the received packets evenly across both NICs.
lacp: requires a cooperating switch and provides more advanced load balancing (example runner configs for these modes are sketched below)
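As a rough illustration (not part of the original notes), these modes correspond to the teamd runner names that are later passed to nmcli through the config property as a JSON string; the snippets below show only the runner portion, with the rest of the command unchanged from the following section:

# activebackup runner (the mode used throughout this document)
'{"runner": {"name": "activebackup"}}'
# roundrobin runner
'{"runner": {"name": "roundrobin"}}'
# broadcast runner
'{"runner": {"name": "broadcast"}}'
# loadbalance runner
'{"runner": {"name": "loadbalance"}}'
# lacp runner (needs switch-side support)
'{"runner": {"name": "lacp"}}'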
Configuring the team mechanism
The team mechanism can be configured with the nmcli command.
Requirements:
Multiple NICs (>= 2); a quick check follows below
The bonding operation: bond the physical NICs under one virtual NIC, and let the virtual NIC provide the service to the outside (it carries the one IP)
In reality, however, the packets still flow through the physical NICs
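One way (not shown in the original notes) to confirm that at least two NICs are available before bonding is to list the devices NetworkManager knows about; the device names ens160/ens224 used later are simply the ones present in this environment:

# list all network devices and their current state
nmcli device status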
The bonding operation:
1. First, there must be a virtual NIC: since it provides the service to the outside, an IP must be configured on it, and IPs are configured on connections.
How do we create a team virtual NIC? A connection must be configured on top of it, and the IP is configured on that connection.
Simply add a connection and specify the virtual NIC: creating the connection also creates the virtual NIC.
# Connection types related to team:
· team
· team-slave
# team -> the connection on the virtual NIC
# team-slave -> the connection on a bonded physical NIC
# The device team_dev does not exist by itself; it is virtual (created once the connection type is set to team)
# config: sets the team mode, as a JSON string, for example:
'{"runner": {"name": "activebackup", "hwaddr_policy": "by_active"}}'nmcli c add type team con-name team_conn ifname team_dev config '{"runner": {"name": "activebackup"}}'
ipv4.addresses 192.168.149.158/24 ipv4.gateway 192.168.149.2 ipv4.dns 8.8.8.8 ipv4.method manual配置完成后:会产生两个东西
the virtual device team_dev
the connection team_conn, which depends on the virtual NIC team_dev
team_conn then serves the outside with the IP 192.168.149.158.

[root@localhost ~]# nmcli c add type team con-name team_conn ifname team_dev config '{"runner":{"name":"activebackup"}}' ipv4.addresses 192.168.149.158/24 ipv4.gateway 192.168.149.2 ipv4.dns 8.8.8.8 ipv4.method manual
Warning: There is another connection with the name 'team_conn'. Reference the connection by its uuid 'a600015b-8735-493d-86f6-ed4f9af70558'
Connection 'team_conn' (a600015b-8735-493d-86f6-ed4f9af70558) successfully added.
[root@localhost ~]# nmcli c show
NAME         UUID                                  TYPE      DEVICE
rhce_static  773fa021-9320-4fde-931d-377ac093b052  ethernet  ens160
virbr0       12c71cbd-d9cc-401d-98df-d521bd2a2230  bridge    virbr0
team_conn    4e50da42-2426-40d9-9917-e030732421a5  team      team_dev
ens160       79d23d30-ff60-485c-9f22-1ddbbda81e9d  ethernet  --
rhce_auto    9a51c2d8-be72-43c6-b3d8-25de827aa3e1  ethernet  --
team_conn    a600015b-8735-493d-86f6-ed4f9af70558  team      --
[root@localhost ~]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.149.148  netmask 255.255.255.0  broadcast 192.168.149.255
        inet6 fe80::9246:9da6:330c:75d8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:22:82:5c  txqueuelen 1000  (Ethernet)
        RX packets 204  bytes 32809 (32.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 166  bytes 16928 (16.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:22:82:66  txqueuelen 1000  (Ethernet)
        RX packets 81  bytes 5184 (5.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
team_dev: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 7e:a4:40:46:f8:a4  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:ab:d0:bf  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
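One way (not part of the original transcript) to confirm that the IP settings were stored on the team connection is to query them directly with nmcli:

# print only the stored IPv4 address and gateway of team_conn
nmcli -g ipv4.addresses,ipv4.gateway connection show team_conn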
2. The actual physical NICs must be bonded to our virtual NIC; they do not need an IP configured.
Two physical NICs: ens160, ens224
nmcli c add type team-slave con-name team_port1 ifname ens160 master team_dev
nmcli c add type team-slave con-name team_port2 ifname ens224 master team_dev

[root@localhost ~]# nmcli c add type team-slave con-name team_port1 ifname ens160 master team_dev
Connection 'team_port1' (ebd25581-0a41-4e81-b231-d1e0e00d269f) successfully added.
[root@localhost ~]# nmcli c add type team-slave con-name team_port2 ifname ens224 master team_dev
Connection 'team_port2' (e4cb86d3-128c-40e3-8026-dbee1ce04d38) successfully added.
[root@localhost ~]# nmcli c show
NAME         UUID                                  TYPE      DEVICE
rhce_static  773fa021-9320-4fde-931d-377ac093b052  ethernet  ens160
team_conn    4e50da42-2426-40d9-9917-e030732421a5  team      team_dev
team_port2   e4cb86d3-128c-40e3-8026-dbee1ce04d38  ethernet  ens224
virbr0       12c71cbd-d9cc-401d-98df-d521bd2a2230  bridge    virbr0
ens160       79d23d30-ff60-485c-9f22-1ddbbda81e9d  ethernet  --
rhce_auto    9a51c2d8-be72-43c6-b3d8-25de827aa3e1  ethernet  --
team_conn    a600015b-8735-493d-86f6-ed4f9af70558  team      --
team_port1   ebd25581-0a41-4e81-b231-d1e0e00d269f  ethernet  --
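If you want to double-check which master each slave connection points at (not shown in the original transcript), the connection.master property can be queried; team_port1 and team_port2 are the names created above:

# should print team_dev for both slave connections
nmcli -g connection.master connection show team_port1
nmcli -g connection.master connection show team_port2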
How to add a new physical NIC
1. In the virtual machine window, click Edit virtual machine settings
2. Click Add
3. Select Network Adapter and click Finish
4. Power on the virtual machine and check that it was added
[root@localhost ~]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.149.148  netmask 255.255.255.0  broadcast 192.168.149.255
        inet6 fe80::9246:9da6:330c:75d8  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:22:82:5c  txqueuelen 1000  (Ethernet)
        RX packets 107  bytes 25569 (24.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 131  bytes 14137 (13.8 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:22:82:66  txqueuelen 1000  (Ethernet)
        RX packets 12  bytes 768 (768.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:ab:d0:bf  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
3. Activate the physical NICs and the virtual NIC (what actually gets activated are the connections)
When activating, activate the team-slave connections first, then the team connection
nmcli c up team_port1
nmcli c up team_port2
nmcli c up team_conn

[root@localhost ~]# nmcli c up team_port1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@localhost ~]# nmcli c up team_port2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
[root@localhost ~]# nmcli c up team_conn
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)
[root@localhost ~]# nmcli c show
NAME         UUID                                  TYPE      DEVICE
team_conn    4e50da42-2426-40d9-9917-e030732421a5  team      team_dev
team_port1   ebd25581-0a41-4e81-b231-d1e0e00d269f  ethernet  ens160
team_port2   e4cb86d3-128c-40e3-8026-dbee1ce04d38  ethernet  ens224
virbr0       12c71cbd-d9cc-401d-98df-d521bd2a2230  bridge    virbr0
ens160       79d23d30-ff60-485c-9f22-1ddbbda81e9d  ethernet  --
rhce_auto    9a51c2d8-be72-43c6-b3d8-25de827aa3e1  ethernet  --
rhce_static  773fa021-9320-4fde-931d-377ac093b052  ethernet  --
team_conn    a600015b-8735-493d-86f6-ed4f9af70558  team      --
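Once the connections are up, a quick sanity check (not in the original transcript) is to confirm that the service IP is actually live on the virtual device:

# the address 192.168.149.158/24 should now appear on team_dev
ip addr show team_dev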
4. Check the active/backup state
teamdctl team_dev state [view]
[root@localhost ~]# teamdctl team_dev state
setup:
  runner: activebackup
ports:
  ens160
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  ens224
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens160
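During failover tests it can help to keep an eye on the active port continuously; a minimal way to do that (an illustration, assuming the device name team_dev used above) is:

# refresh the active-port line every second
watch -n 1 'teamdctl team_dev state view | grep "active port"'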
5. Switchover: automatic failover (we bring one NIC down so that it can no longer work)
Bring down the active NIC ens160
nmcli d disconnect ens160

Check the state: the switchover did happen, but the server can no longer be pinged from Windows
The cause of this problem is that after the configuration succeeds, ens160, ens224 and team_dev all have the same MAC address.
When the switchover happens, ens160 and ens224 share the same MAC: the switchover itself goes through, but the other side does not know which NIC to use.
The fix needed: on switchover, make team_dev follow the MAC address of the currently active device.

[root@localhost ~]# nmcli d disconnect ens160
Device 'ens160' successfully disconnected.
[root@localhost ~]# teamdctl team_dev state
setup:
  runner: activebackup
ports:
  ens224
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens224
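To see the MAC situation for yourself (these commands are not part of the original notes), compare the hardware addresses of the two ports and the team device:

# all three typically report the same MAC before the fix below
ip link show ens160
ip link show ens224
ip link show team_dev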
Fix after the problem appears (state check: the switchover happened, but Windows can no longer ping)
1. nmcli c modify team_conn config '{"runner": {"name": "activebackup", "hwaddr_policy": "by_active"}}'
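To confirm the modified mode was stored (an extra check, not in the original notes), the team.config property of the connection can be printed:

# should show the JSON containing "hwaddr_policy": "by_active"
nmcli -g team.config connection show team_conn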
2. Re-activate the connections
nmcli c up team_port1
nmcli c up team_port2
nmcli c up team_conn

3. Verify again
Ping the server's address from Windows:
ping 192.168.149.158

Bring down the active NIC ens160
nmcli d disconnect ens160

Check the state:
[root@localhost ~]# teamdctl team_dev state
setup:
  runner: activebackup
ports:
  ens224
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: ens224
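As a final drill (an illustrative sequence, not part of the original transcript), keep a continuous ping running on Windows while toggling the active port on the server; the ping should keep answering throughout:

# on Windows: ping -t 192.168.149.158
# on the server:
nmcli d disconnect ens160          # traffic should continue on ens224
nmcli d connect ens160             # re-attach ens160 as a port
teamdctl team_dev state view       # confirm which port is active now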