Network Namespace Interconnection in Practice
Through a hands-on experiment we reproduce Docker's default bridge networking. The topology: two network namespaces, each connected by a veth pair to a custom bridge Mydocker0 (172.16.0.1/16) on the host, which NATs their traffic to the outside world:
Host initial environment
#Interface information
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:84:54:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.159.3/24 brd 192.168.159.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::f59e:adb0:6796:7ddd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c3:e7:da:ec brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
#Network namespace list (empty initially)
[root@localhost ~]# ip netns list
[root@localhost ~]#
#Bridge information
[root@localhost ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242c3e7daec no
Creating the network namespaces
[root@localhost ~]# ip netns add Network-namespace1
[root@localhost ~]# ip netns add Network-namespace2
[root@localhost ~]# ip netns list
Network-namespace2
Network-namespace1
Inspecting interfaces and routes inside the namespaces
#ip netns exec runs a program inside the given namespace. A network namespace virtualizes only the network stack; the filesystem is fully shared with the host, so every command available locally can also be run inside the namespace.
#Interface information
[root@localhost ~]# ip netns exec Network-namespace1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]# ip netns exec Network-namespace2 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]#
#Routing tables - both empty
[root@localhost ~]# ip netns exec Network-namespace1 ip route
[root@localhost ~]# ip netns exec Network-namespace2 ip route
[root@localhost ~]#
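The shared-filesystem point is easy to verify: host binaries and files are visible inside the namespace, while the network view stays isolated. A quick check, assuming the namespaces created above and root privileges:

```shell
# Non-network commands see the host's filesystem: only the network
# stack is virtualized, so any locally installed tool works in here.
ip netns exec Network-namespace1 cat /etc/hostname

# The network view, however, is isolated: only lo is listed,
# not the host's ens33 or docker0.
ip netns exec Network-namespace1 ip link show
```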
Creating the Mydocker0 bridge
[root@localhost ~]# brctl addbr Mydocker0
[root@localhost ~]# brctl show
bridge name bridge id STP enabled interfaces
Mydocker0 8000.000000000000 no
docker0 8000.0242c3e7daec no
[root@localhost ~]#
#Checking the interfaces now shows that the new Mydocker0 bridge has no IP address yet
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:84:54:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.159.3/24 brd 192.168.159.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::f59e:adb0:6796:7ddd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c3:e7:da:ec brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: Mydocker0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether e6:92:0a:27:3e:9f brd ff:ff:ff:ff:ff:ff
Assigning an IP address to the Mydocker0 bridge
#The bridge needs an IP because by default it works at L2, forwarding frames by MAC address. With only a MAC address it can switch traffic between its ports, but the namespaces behind it cannot talk to the host, i.e. cannot reach external networks. Once the bridge has an IP, it acts as the namespaces' gateway: packets that do not match the local 172.16.0.0/16 segment are handed to the host routing table and forwarded out, which is how external communication works.
#Assign the IP to the bridge
[root@localhost ~]# ip addr add 172.16.0.1/16 dev Mydocker0
#Bring the bridge up
[root@localhost ~]# ip link set Mydocker0 up
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:84:54:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.159.3/24 brd 192.168.159.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::f59e:adb0:6796:7ddd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c3:e7:da:ec brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: Mydocker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether e6:92:0a:27:3e:9f brd ff:ff:ff:ff:ff:ff
inet 172.16.0.1/16 scope global Mydocker0
valid_lft forever preferred_lft forever
inet6 fe80::e492:aff:fe27:3e9f/64 scope link
valid_lft forever preferred_lft forever
#Checking the routing table again shows a new entry for the 172.16.0.0/16 segment
[root@localhost ~]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens33
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 Mydocker0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.159.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
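Which interface the kernel will actually pick for a given destination can be confirmed with ip route get. A quick check against the table above; exact output may vary:

```shell
# A destination inside 172.16.0.0/16 goes out via the Mydocker0 bridge
ip route get 172.16.0.2

# Anything that matches no local segment falls through to the
# default route on ens33
ip route get 8.8.8.8
```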
Creating a veth pair for the connectivity test
#ip link add physicalVeth1 type veth peer name networkVeth1
#creates a veth pair: one end named physicalVeth1, the peer end named networkVeth1
#A new pair of interfaces appears. Next, one end (physicalVeth1) is plugged into the Mydocker0 bridge and the other end (networkVeth1) is moved into the Network-namespace1 namespace we created
[root@localhost ~]# ip link add physicalVeth1 type veth peer name networkVeth1
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:84:54:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.159.3/24 brd 192.168.159.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::f59e:adb0:6796:7ddd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c3:e7:da:ec brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: Mydocker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether e6:92:0a:27:3e:9f brd ff:ff:ff:ff:ff:ff
inet 172.16.0.1/16 scope global Mydocker0
valid_lft forever preferred_lft forever
inet6 fe80::e492:aff:fe27:3e9f/64 scope link
valid_lft forever preferred_lft forever
5: networkVeth1@physicalVeth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d2:b4:09:ce:37:b6 brd ff:ff:ff:ff:ff:ff
6: physicalVeth1@networkVeth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ea:4c:4f:f8:22:48 brd ff:ff:ff:ff:ff:ff
Attaching the physicalVeth1 end of the new veth pair to the Mydocker0 bridge
[root@localhost ~]# brctl addif Mydocker0 physicalVeth1
[root@localhost ~]# brctl show
bridge name bridge id STP enabled interfaces
Mydocker0 8000.ea4c4ff82248 no physicalVeth1
docker0 8000.0242c3e7daec no
#Bring the interface up
[root@localhost ~]# ip link set physicalVeth1 up
Moving the other end, networkVeth1, into the Network-namespace1 namespace
#Note that 5: networkVeth1@physicalVeth1 disappears from the host interface list and shows up inside Network-namespace1 instead
[root@localhost ~]# ip link set networkVeth1 netns Network-namespace1
[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:84:54:85 brd ff:ff:ff:ff:ff:ff
inet 192.168.159.3/24 brd 192.168.159.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::f59e:adb0:6796:7ddd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c3:e7:da:ec brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: Mydocker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether ea:4c:4f:f8:22:48 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.1/16 scope global Mydocker0
valid_lft forever preferred_lft forever
inet6 fe80::e492:aff:fe27:3e9f/64 scope link
valid_lft forever preferred_lft forever
6: physicalVeth1@if5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master Mydocker0 state LOWERLAYERDOWN group default qlen 1000
link/ether ea:4c:4f:f8:22:48 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip netns exec Network-namespace1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: networkVeth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d2:b4:09:ce:37:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Activating the networkVeth1 interface
[root@localhost ~]# ip netns exec Network-namespace1 ip link set networkVeth1 up
[root@localhost ~]# ip netns exec Network-namespace1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: networkVeth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d2:b4:09:ce:37:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::d0b4:9ff:fece:37b6/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]#
Assigning an IP address and route to networkVeth1
#Assign the IP address
[root@localhost ~]# ip netns exec Network-namespace1 ip addr add 172.16.0.2/16 dev networkVeth1
[root@localhost ~]# ip netns exec Network-namespace1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: networkVeth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether d2:b4:09:ce:37:b6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.16.0.2/16 scope global networkVeth1
valid_lft forever preferred_lft forever
inet6 fe80::d0b4:9ff:fece:37b6/64 scope link
valid_lft forever preferred_lft forever
#The route is needed so that the namespace and the host are mutually reachable; before it is added, the namespace can only reach the directly connected 172.16.0.0/16 segment
Adding the default route inside Network-namespace1
[root@localhost ~]# ip netns exec Network-namespace1 ip route add default via 172.16.0.1
[root@localhost ~]# ip netns exec Network-namespace1 route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 0 0 0 networkVeth1
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 networkVeth1
Repeating the same steps for Network-namespace2
#Create the veth pair
[root@localhost ~]# ip link add physicalVeth2 type veth peer name networkVeth2
#Attach the physicalVeth2 end to the Mydocker0 bridge
[root@localhost ~]# brctl addif Mydocker0 physicalVeth2
#Bring physicalVeth2 up
[root@localhost ~]# ip link set physicalVeth2 up
#Move the other end into Network-namespace2
[root@localhost ~]# ip link set networkVeth2 netns Network-namespace2
#Bring networkVeth2 up
[root@localhost ~]# ip netns exec Network-namespace2 ip link set networkVeth2 up
#Assign an IP address to networkVeth2 inside Network-namespace2
[root@localhost ~]# ip netns exec Network-namespace2 ip addr add 172.16.0.3/16 dev networkVeth2
#Add the default route in Network-namespace2 so that traffic for other segments is routed uniformly through the host's Mydocker0 bridge
[root@localhost ~]# ip netns exec Network-namespace2 ip route add default via 172.16.0.1
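Before configuring any NAT, the two namespaces should already be able to reach the bridge and each other, switched through Mydocker0 at L2. A quick check, assuming the setup above:

```shell
# Namespace 1 -> bridge gateway address
ip netns exec Network-namespace1 ping -c 2 172.16.0.1
# Namespace 1 -> namespace 2
ip netns exec Network-namespace1 ping -c 2 172.16.0.3
# And the reverse direction
ip netns exec Network-namespace2 ping -c 2 172.16.0.2
```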
Enabling access to external networks
#The namespaces use private addresses (think of them as container internals), so packets leaving the host need SNAT: the source address is rewritten to the host's address on the way out. Docker's docker0 bridge is set up the same way.
#MASQUERADE (address masquerading) is a special case of SNAT that picks the outgoing interface's address automatically.
[root@localhost ~]# iptables -t nat -A POSTROUTING -s 172.16.0.0/16 ! -o Mydocker0 -j MASQUERADE
[root@localhost ~]# ip netns exec Network-namespace1 ping 103.235.46.40
PING 103.235.46.40 (103.235.46.40) 56(84) bytes of data.
64 bytes from 103.235.46.40: icmp_seq=1 ttl=127 time=201 ms
64 bytes from 103.235.46.40: icmp_seq=2 ttl=127 time=191 ms
^C
--- 103.235.46.40 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 191.081/196.259/201.437/5.178 ms
[root@localhost ~]#
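The new rule can be inspected alongside the one Docker installs for docker0; the counters show whether packets are actually hitting it (a quick check):

```shell
# List NAT POSTROUTING rules with packet/byte counters; expect the
# MASQUERADE rule for 172.16.0.0/16 next to Docker's 172.17.0.0/16 rule
iptables -t nat -L POSTROUTING -n -v
```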
If external addresses still cannot be pinged after the rules are in place
#Check whether IPv4 forwarding is enabled (it allows the host to forward IP packets between its interfaces)
[root@localhost ~]#echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@localhost ~]#sysctl -p
#Or write to the proc file directly (takes effect immediately, but does not persist across reboots)
[root@localhost ~]#echo 1 > /proc/sys/net/ipv4/ip_forward
There may also be cases where IPs can be pinged but domain names cannot; troubleshoot as follows
#Check whether IPv4 forwarding is on (1 = enabled)
[root@localhost etc]# cat /proc/sys/net/ipv4/ip_forward
1
#Check the firewall state (if the firewall is off, the firewall-related settings below can be skipped):
[root@localhost etc]# firewall-cmd --state
running
#If it returns running, the firewall is active; check whether masquerading (IP address translation) is enabled:
[root@localhost etc]# firewall-cmd --query-masquerade
#If it returns no, enable masquerading with:
[root@localhost etc]# firewall-cmd --add-masquerade --permanent
#Then reload for the change to take effect:
[root@localhost etc]#firewall-cmd --reload
Docker-specific DNS settings
[root@localhost]#vi /etc/docker/daemon.json
#Put the following content in the file:
{
"dns": ["8.8.8.8","114.114.114.114"]
}
[root@localhost]#systemctl restart docker
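For reference, the whole experiment can be condensed into a single script; the commented-out cleanup section undoes every step. This is a sketch of the commands already shown above, assuming root privileges and that iproute2 and brctl are installed:

```shell
#!/bin/bash
set -e

BR=Mydocker0
SUBNET=172.16.0.0/16
GW=172.16.0.1

# Bridge with a gateway address
brctl addbr $BR
ip addr add $GW/16 dev $BR
ip link set $BR up

# Two namespaces, each wired to the bridge through a veth pair
for i in 1 2; do
  ns=Network-namespace$i
  ip netns add $ns
  ip link add physicalVeth$i type veth peer name networkVeth$i
  brctl addif $BR physicalVeth$i
  ip link set physicalVeth$i up
  ip link set networkVeth$i netns $ns
  ip netns exec $ns ip link set networkVeth$i up
  ip netns exec $ns ip addr add 172.16.0.$((i + 1))/16 dev networkVeth$i
  ip netns exec $ns ip route add default via $GW
done

# Forwarding + SNAT for external access
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s $SUBNET ! -o $BR -j MASQUERADE

# --- cleanup: uncomment to undo the experiment ---
# iptables -t nat -D POSTROUTING -s $SUBNET ! -o $BR -j MASQUERADE
# ip netns del Network-namespace1
# ip netns del Network-namespace2
# ip link set $BR down && brctl delbr $BR
```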