Docker cross-host container interconnection with Open vSwitch (reposted from the 螃蟹 (Crab) blog)



Connecting Docker containers across multiple hosts (CentOS 7)

This article uses Open vSwitch on CentOS 7 to interconnect Docker containers that run on different physical servers or virtual machines.

Environment:

node01: 192.168.12.195, Docker container subnet: 172.17.1.0/24

node02: 192.168.12.196, Docker container subnet: 172.17.2.0/24
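
For orientation, here is a rough sketch of the topology this article builds (the bridge and tunnel names come from section 4 below):

node01 (192.168.12.195)                 node02 (192.168.12.196)
containers in 172.17.1.0/24             containers in 172.17.2.0/24
        |                                       |
kbr0 (172.17.1.1)                       kbr0 (172.17.2.1)
        |                                       |
      obr0 ======== GRE tunnel (gre0) ======== obr0
             over the 192.168.12.0/24 network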

Preparation:

1. Install Docker on node01 and node02

yum install docker
service docker start
chkconfig docker on
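
On CentOS 7, service and chkconfig are compatibility wrappers around systemd; the native equivalents of the two commands above are:

systemctl start docker
systemctl enable docker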

2. Disable SELinux and the firewall on node01 and node02, and install common packages

2.1 Disable SELinux

setenforce 0   (takes effect immediately)

vi /etc/selinux/config   (takes effect after reboot)

#SELINUX=enforcing
SELINUX=disabled

2.2 Disable the firewall

CentOS 7 enables firewalld by default; it must be stopped and disabled.

systemctl stop firewalld   (takes effect immediately)

systemctl disable firewalld   (keeps it off after reboot)

2.3 Install net-tools and bridge-utils

yum install net-tools

yum install bridge-utils

3. Install Open vSwitch on node01 and node02

The latest release at the time of writing is openvswitch-2.4.0.tar.gz.

3.1 Build and install from source.

Run as root:

yum -y install wget openssl-devel kernel-devel
yum groupinstall "Development Tools"
adduser ovswitch

Run as the ovswitch user:

su - ovswitch
wget http://openvswitch.org/releases/openvswitch-2.4.0.tar.gz
tar -zxvpf openvswitch-2.4.0.tar.gz
mkdir -p ~/rpmbuild/SOURCES
sed 's/openvswitch-kmod, //g' openvswitch-2.4.0/rhel/openvswitch.spec > openvswitch-2.4.0/rhel/openvswitch_no_kmod.spec
cp openvswitch-2.4.0.tar.gz rpmbuild/SOURCES/
rpmbuild -bb --without check ~/openvswitch-2.4.0/rhel/openvswitch_no_kmod.spec
exit

Run as root:

yum localinstall /home/ovswitch/rpmbuild/RPMS/x86_64/openvswitch-2.4.0-1.x86_64.rpm
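
A quick sanity check that the package landed (the exact version string may differ on your build):

rpm -q openvswitch
ovs-vsctl --version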

3.2 Install a prebuilt RPM package instead. (Optional)

The package is openvswitch-2.4.0-1.x86_64.rpm.

I have uploaded it to Baidu Pan; the link is:

http://pan.baidu.com/s/1c0x7Wcw

Run:

rpm -ivh openvswitch-2.4.0-1.x86_64.rpm

3.3 Start Open vSwitch

systemctl start openvswitch.service   (takes effect immediately)

chkconfig openvswitch on   (starts automatically on boot)

Check the status:

systemctl status openvswitch.service -l

If the output shows both ovsdb-server and ovs-vswitchd running, the service started successfully (a sample of this output appears under "service openvswitch status" in section 4.1 below).

4. Configure the OVS bridge and GRE tunnel on node01 and node02

Plan:

node01: container subnet 172.17.1.0/24, new bridge: kbr0, GRE port: gre0

node02: container subnet 172.17.2.0/24, new bridge: kbr0, GRE port: gre0

4.1 node01: deployment

Check that IP forwarding is enabled: cat /proc/sys/net/ipv4/ip_forward should print 1, which means it is on.
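
If it prints 0 instead, the standard sysctl commands below turn forwarding on, both immediately and across reboots:

sysctl -w net.ipv4.ip_forward=1                      (takes effect immediately)
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf   (persists across reboots)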

ovs-vsctl add-br obr0
ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.12.196

service docker stop
brctl addbr kbr0
brctl addif kbr0 obr0
ip link set dev docker0 down
ip link del dev docker0
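
Before moving on, you can confirm the bridge layout with two read-only commands:

ovs-vsctl show   (obr0 should show port gre0 of type gre with remote_ip 192.168.12.196)
brctl show       (kbr0 should list obr0 as an attached interface)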

 

vi /etc/sysconfig/network-scripts/ifcfg-kbr0

ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.1.1
NETMASK=255.255.255.0
GATEWAY=172.17.1.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no

 

vi /etc/sysconfig/network-scripts/route-ens160   (use ifconfig -a to check the NIC name)

172.17.2.0/24 via 192.168.12.196 dev ens160
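
Both files are read by the legacy network service, so if you want to test before rebooting, restarting that service should apply them; this article simply reboots in a later step:

systemctl restart network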

 

Modify the Docker configuration file, adding the -b parameter so Docker uses kbr0 as its bridge:

vi /etc/sysconfig/docker

OPTIONS='--selinux-enabled -b=kbr0'

 

reboot

 

After the reboot, log back in and check the routing table and network interfaces, and verify that Docker and Open vSwitch are running.

[root@centos7_kube_node01 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.12.254  0.0.0.0         UG    100    0        0 ens160
172.17.1.0      0.0.0.0         255.255.255.0   U     425    0        0 kbr0
172.17.2.0      192.168.12.196  255.255.255.0   UG    100    0        0 ens160
192.168.12.0    0.0.0.0         255.255.255.0   U     100    0        0 ens160

[root@centos7_kube_node01 ~]# ifconfig -a
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.12.195 netmask 255.255.255.0 broadcast 192.168.12.255
inet6 fe80::250:56ff:fe9d:2218 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:9d:22:18 txqueuelen 1000 (Ethernet)
RX packets 16183 bytes 1108162 (1.0 MiB)
RX errors 0 dropped 8579 overruns 0 frame 0
TX packets 1227 bytes 2410592 (2.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

kbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.1.1 netmask 255.255.255.0 broadcast 172.17.1.255
inet6 fe80::d8df:80ff:feea:bd09 prefixlen 64 scopeid 0x20<link>
ether da:df:80:ea:bd:09 txqueuelen 0 (Ethernet)
RX packets 45 bytes 3476 (3.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 45 bytes 4834 (4.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@centos7_kube_node01 ~]# docker version
Client:
Version: 1.8.2
API version: 1.20
Package Version: docker-1.8.2-7.el7.centos.x86_64
Go version: go1.4.2
Git commit: bb472f0/1.8.2
Built:
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Package Version:
Go version: go1.4.2
Git commit: bb472f0/1.8.2
Built:
OS/Arch: linux/amd64
[root@centos7_kube_node01 ~]# service openvswitch status
ovsdb-server is running with pid 733
ovs-vswitchd is running with pid 744
[root@centos7_kube_node01 ~]#

node01 configuration is complete.

4.2 node02: deployment (repeat the steps above, taking care to change the corresponding addresses)

Check that IP forwarding is enabled: cat /proc/sys/net/ipv4/ip_forward should print 1 (see the sysctl sketch in section 4.1 if it does not).

ovs-vsctl add-br obr0
ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.12.195

service docker stop
brctl addbr kbr0
brctl addif kbr0 obr0
ip link set dev docker0 down
ip link del dev docker0

vi /etc/sysconfig/network-scripts/ifcfg-kbr0

ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.2.1
NETMASK=255.255.255.0
GATEWAY=172.17.2.0
USERCTL=no
TYPE=Bridge
IPV6INIT=no

vi /etc/sysconfig/network-scripts/route-ens160   (use ifconfig -a to check the NIC name)

172.17.1.0/24 via 192.168.12.195 dev ens160

Modify the Docker configuration file, adding the -b parameter:

vi /etc/sysconfig/docker

OPTIONS='--selinux-enabled -b=kbr0'

reboot

After the reboot, log back in and check the routing table and network interfaces, and verify that Docker and Open vSwitch are running.

node02 configuration is complete.

5. Verify network reachability.

5.1 Verify that the container gateways are reachable.

node01: ping 192.168.12.196 and 172.17.2.1, i.e. the remote node02 host and node02's container gateway.

[root@centos7_kube_node01 ~]# ping 192.168.12.196
PING 192.168.12.196 (192.168.12.196) 56(84) bytes of data.
64 bytes from 192.168.12.196: icmp_seq=1 ttl=64 time=0.374 ms
64 bytes from 192.168.12.196: icmp_seq=2 ttl=64 time=0.568 ms

[root@centos7_kube_node01 ~]# ping 172.17.2.1
PING 172.17.2.1 (172.17.2.1) 56(84) bytes of data.
64 bytes from 172.17.2.1: icmp_seq=1 ttl=64 time=0.342 ms
64 bytes from 172.17.2.1: icmp_seq=2 ttl=64 time=0.522 ms

node02: ping 192.168.12.195 and 172.17.1.1, i.e. the remote node01 host and node01's container gateway.

[root@centos7_kube_node02 ~]# ping 192.168.12.195
PING 192.168.12.195 (192.168.12.195) 56(84) bytes of data.
64 bytes from 192.168.12.195: icmp_seq=1 ttl=64 time=0.447 ms

[root@centos7_kube_node02 ~]# ping 172.17.1.1
PING 172.17.1.1 (172.17.1.1) 56(84) bytes of data.
64 bytes from 172.17.1.1: icmp_seq=1 ttl=64 time=0.385 ms

5.2 Verify reachability between containers on the two hosts.

node01: create a container

[root@centos7_kube_node01 ~]# docker run -ti docker.io/ubuntu /bin/bash
root@f780fc45b424:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:01:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.1.3/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:103/64 scope link
valid_lft forever preferred_lft forever
root@f780fc45b424:/#

node02: create a container

[root@centos7_kube_node02 ~]# docker run -ti docker.io/ubuntu /bin/bash
root@b7533aeba1ed:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:02:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.2.3/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:203/64 scope link
valid_lft forever preferred_lft forever
root@b7533aeba1ed:/#

Ping between the containers:

root@f780fc45b424:/# ping 172.17.2.3
PING 172.17.2.3 (172.17.2.3) 56(84) bytes of data.
64 bytes from 172.17.2.3: icmp_seq=1 ttl=62 time=0.794 ms
64 bytes from 172.17.2.3: icmp_seq=2 ttl=62 time=0.593 ms
64 bytes from 172.17.2.3: icmp_seq=3 ttl=62 time=0.755 ms

root@b7533aeba1ed:/# ping 172.17.1.3
PING 172.17.1.3 (172.17.1.3) 56(84) bytes of data.
64 bytes from 172.17.1.3: icmp_seq=1 ttl=62 time=0.578 ms
64 bytes from 172.17.1.3: icmp_seq=2 ttl=62 time=0.506 ms
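
Note the ttl=62 in the replies: each packet is routed twice, once by each host, which matches this topology. As an optional extra check, you can watch the cross-host traffic on the physical interface while the pings run; plain ICMP follows the static route between the container subnets, while GRE-encapsulated packets (IP protocol 47) belong to the tunnel between the two OVS bridges:

tcpdump -ni ens160 'icmp or ip proto 47'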

Verification successful.

End of article.
