Creating namespaces in the Linux kernel
The ip netns command
Various operations on a Network Namespace can be performed with the ip netns command. ip netns comes from the iproute package, which most systems install by default; if it is missing, install it yourself:
[root@localhost ~]# dnf -y install iproute
Note: ip netns requires sudo privileges when modifying network configuration.
Run ip netns help to view the command's usage information:
[root@localhost ~]# ip netns help
Usage: ip netns list
ip netns add NAME
ip netns attach NAME PID
ip netns set NAME NETNSID
ip [-all] netns delete [NAME]
ip netns identify [PID]
ip netns pids NAME
ip [-all] netns exec [NAME] cmd ...
ip netns monitor
ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT
By default, a Linux system contains no named Network Namespaces, so ip netns list returns nothing.
Creating a Network Namespace
[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0
Newly created Network Namespaces appear under /var/run/netns/. If a namespace with the same name already exists, the command fails with the error Cannot create namespace file "/var/run/netns/ns0": File exists.
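ip netns list does little more than enumerate the files under /var/run/netns. A minimal Python sketch of the same idea (the directory path is the iproute2 default; the helper name is illustrative, not part of any library):

```python
import os

def list_netns(netns_dir="/var/run/netns"):
    """Names of registered network namespaces: `ip netns add`
    creates one file per namespace here as a bind-mount target."""
    try:
        return sorted(os.listdir(netns_dir))
    except FileNotFoundError:
        return []  # no namespace has been created yet
```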
Each Network Namespace has its own independent network interfaces, routing table, ARP table, iptables rules, and other network-related resources.
Operating inside a Network Namespace
The ip command provides the ip netns exec subcommand for executing a command inside a given Network Namespace.
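Since every per-namespace command in this section is just ip netns exec plus the real command, a wrapper can assemble the argv once. A small Python sketch (the helper is hypothetical; actually executing the result still requires root):

```python
def netns_exec(ns, *cmd):
    """Build the argv for running *cmd inside namespace ns via ip netns exec."""
    return ["ip", "netns", "exec", ns, *cmd]

# When running as root, pass the result to subprocess.run(), e.g.:
# subprocess.run(netns_exec("ns0", "ip", "a"))
```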
View the interface information of the newly created Network Namespace
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
As you can see, a new Network Namespace is created with a default lo loopback interface, which is initially down. Pinging the loopback address at this point reports Network is unreachable:
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable
Bring up the lo loopback interface with the following command:
[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.055 ms
^C
--- 127.0.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2058ms
rtt min/avg/max/mdev = 0.026/0.047/0.062/0.017 ms
[root@localhost ~]# ip netns exec ns1 ip link set lo up
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.054 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.023 ms
^C
--- 127.0.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2084ms
rtt min/avg/max/mdev = 0.023/0.033/0.054/0.015 ms
Moving devices between namespaces
Devices (such as a veth) can be moved between Network Namespaces. Since a device can belong to only one Network Namespace at a time, it is no longer visible in the original namespace after the move.
veth devices are movable; many other device types (such as lo, vxlan, ppp, and bridge) are not.
veth pair
A veth pair (Virtual Ethernet Pair) is a pair of connected ports: every packet that enters one end of the pair comes out the other end, and vice versa.
veth pairs were introduced to enable direct communication between Network Namespaces: a veth pair can connect two Network Namespaces to each other directly.
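As a conceptual illustration only (a toy model, not kernel code), the defining behavior of a veth pair can be sketched as two linked queues, where whatever is sent into one end comes out of the peer:

```python
from collections import deque

class VethPair:
    """Toy model of a veth pair: frames written to one end
    are read from the peer end, in both directions."""
    def __init__(self):
        self._queues = {"veth0": deque(), "veth1": deque()}  # frames waiting at each end
    def _peer(self, end):
        return "veth1" if end == "veth0" else "veth0"
    def send(self, end, frame):
        self._queues[self._peer(end)].append(frame)  # deliver to the other end
    def recv(self, end):
        q = self._queues[end]
        return q.popleft() if q else None
```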
Creating a veth pair
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:5e:7e:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.91.128/24 brd 192.168.91.255 scope global dynamic noprefixroute ens160
valid_lft 1105sec preferred_lft 1105sec
inet6 fe80::2e9a:a6f4:ae9f:d298/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d5:79:dd:a7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe79:dda7/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:5e:7e:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.91.128/24 brd 192.168.91.255 scope global dynamic noprefixroute ens160
valid_lft 1059sec preferred_lft 1059sec
inet6 fe80::2e9a:a6f4:ae9f:d298/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d5:79:dd:a7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe79:dda7/64 scope link
valid_lft forever preferred_lft forever
10: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 7e:e7:d7:17:74:bf brd ff:ff:ff:ff:ff:ff
11: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a2:2b:b4:af:9c:cc brd ff:ff:ff:ff:ff:ff
As you can see, the system now has a new veth pair linking the virtual interfaces veth0 and veth1; at this point both ends of the pair are still down.
Communication between Network Namespaces
Next we use the veth pair to let two different Network Namespaces communicate. First, move veth0 into ns0 and veth1 into ns1:
[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1
Then bring up both ends of the veth pair and assign each an IP address:
[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.168.1.2/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip addr add 192.168.1.4/24 dev veth1
Check the state of the veth pair:
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: veth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 7e:e7:d7:17:74:bf brd ff:ff:ff:ff:ff:ff link-netns ns1
inet 192.168.1.2/24 scope global veth0
valid_lft forever preferred_lft forever
inet6 fe80::7ce7:d7ff:fe17:74bf/64 scope link
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
11: veth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a2:2b:b4:af:9c:cc brd ff:ff:ff:ff:ff:ff link-netns ns0
inet 192.168.1.4/24 scope global veth1
valid_lft forever preferred_lft forever
inet6 fe80::a02b:b4ff:feaf:9ccc/64 scope link
valid_lft forever preferred_lft forever
As shown above, the veth pair is now up and each veth device has its own IP address. Test connectivity between the namespaces:
[root@localhost ~]# ip netns exec ns0 ping 192.168.1.4
PING 192.168.1.4 (192.168.1.4) 56(84) bytes of data.
64 bytes from 192.168.1.4: icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from 192.168.1.4: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 192.168.1.4: icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from 192.168.1.4: icmp_seq=4 ttl=64 time=0.053 ms
^C
--- 192.168.1.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3051ms
rtt min/avg/max/mdev = 0.037/0.048/0.053/0.006 ms
[root@localhost ~]# ip netns exec ns1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.028 ms
^C
--- 192.168.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2074ms
rtt min/avg/max/mdev = 0.028/0.035/0.046/0.010 ms
As you can see, the veth pair successfully enables network communication between the two Network Namespaces.
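The whole procedure above can be captured in one place. This sketch only generates the command sequence (note it names the pair ends explicitly with `peer name`, whereas the transcript above relied on the default veth0/veth1 naming); running the commands still requires root:

```python
def veth_link_commands(ns_a, ns_b, addr_a, addr_b, dev_a="veth0", dev_b="veth1"):
    """Commands that connect two network namespaces with a veth pair."""
    return [
        f"ip netns add {ns_a}",
        f"ip netns add {ns_b}",
        f"ip link add {dev_a} type veth peer name {dev_b}",
        f"ip link set {dev_a} netns {ns_a}",
        f"ip link set {dev_b} netns {ns_b}",
        f"ip netns exec {ns_a} ip link set {dev_a} up",
        f"ip netns exec {ns_b} ip link set {dev_b} up",
        f"ip netns exec {ns_a} ip addr add {addr_a} dev {dev_a}",
        f"ip netns exec {ns_b} ip addr add {addr_b} dev {dev_b}",
    ]
```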
Renaming veth devices
A device must be brought down before it can be renamed, so each veth is set down first:
[root@localhost ~]# ip netns exec ns0 ip link set veth0 down
[root@localhost ~]# ip netns exec ns0 ip link set dev veth0 name eth0
[root@localhost ~]# ip netns exec ns1 ip link set veth1 down
[root@localhost ~]# ip netns exec ns1 ip link set dev veth1 name eth1
[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 7e:e7:d7:17:74:bf brd ff:ff:ff:ff:ff:ff link-netns ns1
inet 192.168.1.2/24 scope global eth0
valid_lft forever preferred_lft forever
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
11: eth1@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether a2:2b:b4:af:9c:cc brd ff:ff:ff:ff:ff:ff link-netns ns0
inet 192.168.1.4/24 scope global eth1
valid_lft forever preferred_lft forever
Configuring the four network modes
bridge mode configuration
[root@localhost ~]# docker run -it --name t1 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@localhost ~]# docker run -it --name t2 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.201 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.065 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.054 ms
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.106/0.201 ms
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.024 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.035 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.044 ms
^C
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.024/0.034/0.044 ms
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0566130a8c36 busybox "sh" About a minute ago Up 59 seconds t2
c78cc1872ac7 busybox "sh" About a minute ago Up About a minute t1
# Creating a container with --network bridge has the same effect as omitting the --network option
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
none mode configuration
[root@localhost ~]# docker run -it --name t1 --network none --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
container mode configuration
Start the first container
[root@localhost ~]# docker run -it --name b1 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Start the second container
[root@localhost ~]# docker run -it --name b2 --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
The container b2 has the IP address 172.17.0.3, which differs from the first container's, so the two do not share a network stack. If we instead start the second container as follows, b2 gets the same IP as b1: the containers share their IP (network namespace) but not their filesystem.
[root@localhost ~]# docker run -it --name b2 --rm --network container:b1 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Now create a directory in the b1 container
/ # mkdir /data
/ # ls
bin data dev etc home proc root sys tmp usr var
Checking in the b2 container shows the directory does not exist there: the filesystems remain isolated, and only the network is shared.
/ # ls
bin dev etc home proc root sys tmp usr var
Deploy a site in the b1 container
/ # cd /data/
/data # echo "hello world" > index.html
/data # cat index.html
hello world
/data # cd
~ # /bin/httpd -h /data/
~ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 :::80 :::* LISTEN
Access the site from the b2 container via the shared local address
/ # wget -O - 172.17.0.2
Connecting to 172.17.0.2 (172.17.0.2:80)
writing to stdout
hello world
- 100% |****************************| 12 0:00:00 ETA
written to stdout
As this shows, containers in container mode relate to each other like two different processes on the same host.
host mode configuration
Specify host mode directly when starting the container
[root@localhost ~]# docker run -it --name b2 --rm --network host busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
link/ether 00:0c:29:5e:7e:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.91.128/24 brd 192.168.91.255 scope global dynamic noprefixroute ens160
valid_lft 1260sec preferred_lft 1260sec
inet6 fe80::2e9a:a6f4:ae9f:d298/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
link/ether 02:42:d5:79:dd:a7 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe79:dda7/64 scope link
valid_lft forever preferred_lft forever
# Now if we start an HTTP site in this container, we can access the container's site directly in a browser via the host's IP.
/ # ls
bin dev etc home proc root sys tmp usr var
/ # mkdir /data
/ # echo "hello world" > /data/index.html
/ # /bin/httpd -h /data/
[root@localhost ~]# curl 192.168.91.128
hello world
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 9 *:80 *:*
LISTEN 0 128 [::]:22 [::]:*
Common container operations
View the container's hostname
[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # hostname
b2a1db5577bf
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2a1db5577bf busybox "sh" 30 seconds ago Up 29 seconds t1
Inject a hostname when starting the container
[root@localhost ~]# docker run -it --name t1 --hostname abc --rm busybox
/ # hostname
abc
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 abc
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.91.2
/ # ping baidu.com
PING baidu.com (110.242.68.66): 56 data bytes
64 bytes from 110.242.68.66: seq=0 ttl=127 time=39.491 ms
64 bytes from 110.242.68.66: seq=1 ttl=127 time=42.140 ms
64 bytes from 110.242.68.66: seq=2 ttl=127 time=43.182 ms
^C
--- baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 39.491/41.604/43.182 ms
Manually specify the DNS server the container should use
[root@localhost ~]# docker run -it --name t1 --hostname abc --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # ping baidu.com
PING baidu.com (110.242.68.66): 56 data bytes
64 bytes from 110.242.68.66: seq=0 ttl=127 time=38.840 ms
64 bytes from 110.242.68.66: seq=1 ttl=127 time=35.444 ms
64 bytes from 110.242.68.66: seq=2 ttl=127 time=39.667 ms
^C
--- baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 35.444/37.983/39.667 ms
Manually inject hostname-to-IP mappings into /etc/hosts
[root@localhost ~]# docker run -it --name t1 --hostname abc --add-host www.skye.com:1.1.1.1 --rm busybox
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.skye.com
172.17.0.2 abc
# Alternatively, edit the file directly inside the container
/ # vi /etc/hosts
Publishing container ports
docker run takes a -p option that maps an application port inside the container to a port on the host, letting external hosts reach the containerized application through the host port.
-p can be given multiple times; the exposed port must be one the container is actually listening on.
[root@localhost ~]# docker run -dit --name b1 -p 80:80 busybox
500f7008542cd36e57aa09b36b411a0f8032c5ec566f2fef4617c41ddf338293
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
500f7008542c busybox "sh" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp b1
[root@localhost ~]# curl 192.168.91.128
curl: (7) Failed to connect to 192.168.91.128 port 80: Connection refused
The curl above is refused because nothing inside the busybox container is listening on port 80. Usage formats for the -p option:
- -p <containerPort>
- Maps the specified container port to a dynamic port on all host addresses
[root@localhost ~]# docker run -dit --name w1 -p 80 httpd
5e81888eae40f90f7c551630641f56c2b4197c401264955ea6e348f8d2bcb603
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5e81888eae40 httpd "httpd-foreground" 3 seconds ago Up 2 seconds 0.0.0.0:49153->80/tcp, :::49153->80/tcp w1
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:49153 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:49153 [::]:*
LISTEN 0 128 [::]:22 [::]:*
- -p <hostPort>:<containerPort>
- Maps container port <containerPort> to the specified host port <hostPort>
[root@localhost ~]# docker run -dit --name w1 -p 80:80 httpd
5d60e8e1f0bf7d3eeff4f44836eef962dffbd55dce8040deca7e7d61adaa3dda
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d60e8e1f0bf httpd "httpd-foreground" 4 seconds ago Up 3 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp w1
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:80 [::]:*
LISTEN 0 128 [::]:22 [::]:*
- -p <ip>::<containerPort>
- Maps the specified container port <containerPort> to a dynamic port on the host address <ip>
[root@localhost ~]# docker run -dit --name w1 -p 192.168.91.128::80 httpd
95e09cb24e2802b150593bccfd6a67c9470d0f97e1227e59c8c1547b2711f4de
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
95e09cb24e28 httpd "httpd-foreground" 22 seconds ago Up 21 seconds 192.168.91.128:49153->80/tcp w1
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 192.168.91.128:49153 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
- -p <ip>:<hostPort>:<containerPort>
- Maps the specified container port <containerPort> to port <hostPort> on the host address <ip>
[root@localhost ~]# docker run -dit --name w1 -p 192.168.91.128:82:80 httpd
f1946463d7133e6548ffbf3e1ba1309bb6de9980ea87c1e4854235eb547fcf80
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f1946463d713 httpd "httpd-foreground" 3 seconds ago Up 2 seconds 192.168.91.128:82->80/tcp w1
[root@localhost ~]# ss -antl
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 192.168.91.128:82 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
A dynamic port is a randomly assigned port; the actual mapping can be inspected with the docker port command.
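The four formats can be told apart by their colon count. A small parser sketch (a simplification: IPv4 only, since bracketed IPv6 addresses would need extra handling):

```python
def parse_publish(spec):
    """Split a docker-run -p specification into (host_ip, host_port, container_port).
    An empty host_ip means all addresses; an empty host_port means a dynamic port."""
    parts = spec.split(":")
    if len(parts) == 1:      # <containerPort>
        return ("", "", parts[0])
    if len(parts) == 2:      # <hostPort>:<containerPort>
        return ("", parts[0], parts[1])
    if len(parts) == 3:      # <ip>:<hostPort>:<containerPort>, hostPort may be empty
        return (parts[0], parts[1], parts[2])
    raise ValueError(f"unrecognized -p spec: {spec}")
```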
Start a container with a dynamic mapping, then use docker port to see which host port the container's port 80 was mapped to:
[root@localhost ~]# docker run -dit --name w1 -p 80 httpd
e36ef61219f569d7176b0df16716d288246cb7c7f236dcbf70e19ced67898ab9
[root@localhost ~]# docker port w1
80/tcp -> 0.0.0.0:49154
80/tcp -> :::49154
As shown, the container's port 80 is exposed on host port 49154. Access that port from the host to confirm it reaches the site inside the container:
[root@localhost ~]# curl 127.0.0.1:49154
<html><body><h1>It works!</h1></body></html>
[root@localhost ~]# curl localhost:49154
<html><body><h1>It works!</h1></body></html>
[root@localhost ~]# curl 192.168.91.128:49154
<html><body><h1>It works!</h1></body></html>
The corresponding iptables firewall rules are generated automatically when a container is created and removed automatically when it is deleted.
[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
9 1848 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
6 388 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3 180 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
1 60 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:49154 to:172.17.0.2:80
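The DNAT line above carries the whole mapping. A sketch that pulls the published port and the container target out of such a rule line (the regex is tailored to the iptables -t nat -nvL output format shown here):

```python
import re

def parse_dnat(rule_line):
    """Return (host_port, container_target) from an iptables DNAT rule line,
    or None if the line is not a DNAT port-forwarding rule."""
    m = re.search(r"dpt:(\d+)\s+to:(\d+\.\d+\.\d+\.\d+:\d+)", rule_line)
    return (int(m.group(1)), m.group(2)) if m else None
```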
Customizing the docker0 bridge's network properties
To customize the docker0 bridge's network properties, edit the /etc/docker/daemon.json configuration file:
[root@localhost ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
"bip":"192.168.66.8/24"
}
[root@localhost ~]# systemctl restart docker
The core option is bip (short for "bridge IP"), which sets the IP address of the docker0 bridge itself; the other network properties are derived from this address.
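What "derived from this address" means can be checked with Python's ipaddress module: given the bip value, the subnet, broadcast address, and usable address pool all follow from it:

```python
import ipaddress

def bridge_net_from_bip(bip):
    """Derive the docker0 network attributes implied by a daemon.json 'bip' value."""
    iface = ipaddress.ip_interface(bip)   # e.g. "192.168.66.8/24"
    net = iface.network
    return {
        "bridge_ip": str(iface.ip),
        "subnet": str(net),
        "broadcast": str(net.broadcast_address),
        "usable_hosts": net.num_addresses - 2,  # minus network and broadcast addresses
    }
```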
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
381888993c85 httpd "httpd-foreground" 17 seconds ago Up 2 seconds 80/tcp w2
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:5e:7e:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.91.128/24 brd 192.168.91.255 scope global dynamic noprefixroute ens160
valid_lft 1594sec preferred_lft 1594sec
inet6 fe80::2e9a:a6f4:ae9f:d298/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d5:79:dd:a7 brd ff:ff:ff:ff:ff:ff
inet 192.168.66.8/24 brd 192.168.66.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe79:dda7/64 scope link
valid_lft forever preferred_lft forever
45: veth3066af7@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 12:2a:95:b9:68:bd brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::102a:95ff:feb9:68bd/64 scope link
valid_lft forever preferred_lft forever
# w2 was created after bip was changed
[root@localhost ~]# docker inspect w2
......
"Gateway": "192.168.66.8",
"IPAddress": "192.168.66.1",
......
# w1 was created before bip was changed (it is not running after the daemon restart, so it shows no address)
[root@localhost ~]# docker inspect w1
......
"Gateway": "",
"IPAddress": "",
......
--restart always
[root@localhost docker]# docker run -dit --name w1 httpd
5f1a17985592da73dfeab9d70651b87787a38b5322fd6509bf56e25a8b59d286
[root@localhost docker]# docker run -dit --name w2 --restart always httpd
381888993c8599c4b2ca9ed88746e703f8835fe1dea84f3e7c98e4726932343e
[root@localhost docker]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
381888993c85 httpd "httpd-foreground" 3 seconds ago Up 2 seconds 80/tcp w2
5f1a17985592 httpd "httpd-foreground" 17 seconds ago Up 17 seconds 80/tcp w1
[root@localhost docker]# systemctl restart docker
[root@localhost docker]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
381888993c85 httpd "httpd-foreground" 17 seconds ago Up 2 seconds 80/tcp w2
After the docker daemon restarts, only w2, which was started with --restart always, is brought back up automatically; w1 remains stopped.
Connecting to docker remotely
[root@localhost 128]# vim /lib/systemd/system/docker.service
......
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
......
[root@localhost 128]# systemctl daemon-reload
[root@localhost 128]# systemctl restart docker
On the client, pass the -H (--host) option to the docker command to specify which host's docker containers to manage:
[root@localhost 131]# docker -H 192.168.91.128:2375 images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos-httpd v2 40d9de9e74c7 2 days ago 678MB
centos-httpd v1 3b5b771c3f29 2 days ago 678MB
busybox latest beae173ccac6 7 months ago 1.24MB
httpd latest dabbfbe0c57b 7 months ago 144MB
centos latest 5d0da3dc9764 10 months ago 231MB
Creating a custom docker bridge
[root@localhost ~]# docker network help
Usage: docker network COMMAND
Manage networks
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
Run 'docker network COMMAND --help' for more information on a command.
Create an additional custom bridge, distinct from docker0:
[root@localhost ~]# docker network create sky -d bridge
ffac843b9e04645eaf1cfa9bec5249de455322486a952f1b790a13d61a91885a
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
fba219216226 bridge bridge local
0509b1a3abb3 host host local
ba0364f7d8e0 none null local
ffac843b9e04 sky bridge local
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:5e:7e:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.91.128/24 brd 192.168.91.255 scope global dynamic noprefixroute ens160
valid_lft 1384sec preferred_lft 1384sec
inet6 fe80::2e9a:a6f4:ae9f:d298/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d5:79:dd:a7 brd ff:ff:ff:ff:ff:ff
inet 192.168.66.8/24 brd 192.168.66.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe79:dda7/64 scope link
valid_lft forever preferred_lft forever
47: br-ffac843b9e04: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:61:1c:31:cc brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-ffac843b9e04
valid_lft forever preferred_lft forever
[root@localhost ~]# docker network create star --subnet "192.168.88.0/24" --gateway "192.168.88.1" -d bridge
0af49b7c24e03b132cbe9d457425ef8193ff8975934311ddfacb095efc7ef55e
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
fba219216226 bridge bridge local
0509b1a3abb3 host host local
ba0364f7d8e0 none null local
ffac843b9e04 sky bridge local
0af49b7c24e0 star bridge local
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:5e:7e:b4 brd ff:ff:ff:ff:ff:ff
inet 192.168.91.128/24 brd 192.168.91.255 scope global dynamic noprefixroute ens160
valid_lft 1165sec preferred_lft 1165sec
inet6 fe80::2e9a:a6f4:ae9f:d298/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d5:79:dd:a7 brd ff:ff:ff:ff:ff:ff
inet 192.168.66.8/24 brd 192.168.66.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d5ff:fe79:dda7/64 scope link
valid_lft forever preferred_lft forever
47: br-ffac843b9e04: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:61:1c:31:cc brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-ffac843b9e04
valid_lft forever preferred_lft forever
48: br-0af49b7c24e0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b4:42:55:8f brd ff:ff:ff:ff:ff:ff
inet 192.168.88.1/24 brd 192.168.88.255 scope global br-0af49b7c24e0
valid_lft forever preferred_lft forever