Docker container networking: configuration and common operations

1. Creating namespaces with the Linux kernel

1.1 The ip netns command

The ip netns command can be used to perform all kinds of operations on Network Namespaces. It comes from the iproute package, which most systems install by default; if yours does not, install it yourself.
Note: modifying network configuration with ip netns requires sudo (root) privileges.
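If the ip command is missing, installing the iproute package usually brings it in. A minimal sketch, assuming a CentOS/RHEL-family host like the one used in this article (on Debian/Ubuntu the package is called iproute2):

[root@localhost ~]# dnf -y install iproute  // on Debian/Ubuntu: apt install iproute2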
You can view the command's help with ip netns help:


[root@localhost ~]# ip netns help  // help is a subcommand here, so it is "ip netns help" rather than --help
Usage:	ip netns list  // list the existing namespaces
	ip netns add NAME  // add a namespace
	ip netns attach NAME PID  // attach the network namespace of process PID under the name NAME
	ip netns set NAME NETNSID  // assign an id (NETNSID) to the named namespace
	ip [-all] netns delete [NAME]  // delete the named namespace; with -all, delete all of them
	ip netns identify [PID]  // show which namespace process PID is in; without a PID, the current process
	ip netns pids NAME  // list the PIDs of processes in the named namespace (it simply walks /proc)
	ip [-all] netns exec [NAME] cmd ...  // run a command inside the named namespace, e.g. ip netns exec ns1 ip a
	ip netns monitor  // watch namespace operations; prints a notice whenever a namespace is added or deleted
	ip netns list-id [target-nsid POSITIVE-INT] [nsid POSITIVE-INT]
NETNSID := auto | POSITIVE-INT
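As a quick sketch of ip netns monitor: run it in one terminal and operate on namespaces from another; each add or delete is reported as it happens (the namespace name demo is just an illustration):

// Terminal 1
[root@localhost ~]# ip netns monitor
add demo
delete demo

// Terminal 2
[root@localhost ~]# ip netns add demo
[root@localhost ~]# ip netns delete demo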

1.2 Creating a Network Namespace

Create a namespace named ns1:

[root@localhost ~]# ip netns list  // list namespaces
[root@localhost ~]# ip netns add ns1  // add a namespace; the name ns1 is arbitrary
[root@localhost ~]# ip netns list
ns1

The newly created Network Namespace appears under /var/run/netns/. If a namespace with the same name already exists, the command reports Cannot create namespace file "/var/run/netns/ns1": File exists.

[root@localhost ~]# ls /var/run/netns/
ns1

[root@localhost ~]# ip netns add ns1
Cannot create namespace file "/var/run/netns/ns1": File exists

// Can a namespace be created by hand, without the ip netns command? Let's try:
[root@localhost netns]# pwd
/var/run/netns

[root@localhost netns]# ls
ns1
[root@localhost netns]# touch ns2

[root@localhost netns]# ll 
total 0
-r--r--r-- 1 root root 0 Dec  5 18:23 ns1
-rw-r--r-- 1 root root 0 Dec  5 18:33 ns2
[root@localhost netns]# chmod u-w ns2
[root@localhost netns]# ll
total 0
-r--r--r-- 1 root root 0 Dec  5 18:23 ns1
-r--r--r-- 1 root root 0 Dec  5 18:33 ns2

[root@localhost netns]# ip netns ls  // this now reports errors
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
ns2
ns1

[root@localhost netns]# rm -f ns2 
[root@localhost netns]# ip netns list
ns1
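The entries under /var/run/netns are mount points that reference the namespaces, which is why a file created with a bare touch shows up as an invalid reference. For a properly created namespace, other tools can use these files directly; for example, nsenter from util-linux can enter it (a sketch, assuming nsenter is installed; this is equivalent to ip netns exec ns1 ip a):

[root@localhost ~]# nsenter --net=/var/run/netns/ns1 ip a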

1.3 Operating on a Network Namespace

The ip command provides the ip netns exec subcommand for executing commands inside a given Network Namespace.

// Check the interface information of the newly created Network Namespace
[root@localhost netns]# ip netns exec ns1 ip a  // run ip a inside the namespace
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000  // the interface is down
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

As you can see, the new Network Namespace comes with a default lo loopback interface, which starts out down. Pinging the loopback address at this point therefore fails:

[root@localhost netns]# ip netns exec ns1 ping -c 3 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.

--- 127.0.0.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2056ms

// Bring the lo interface up:
[root@localhost netns]# ip netns exec ns1 ip link set lo up

[root@localhost netns]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

[root@localhost netns]# ip netns exec ns1 ping -c 3 127.0.0.1  // -c 3 pings three times, then exits automatically
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.063 ms

--- 127.0.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2062ms
rtt min/avg/max/mdev = 0.035/0.061/0.087/0.023 ms

1.4 Moving devices between namespaces

Devices (such as a veth) can be moved between Network Namespaces. Since a device can only belong to one Network Namespace at a time, it is no longer visible in the original namespace after the move.

veth devices are movable, while many other device types (such as lo, vxlan, ppp and bridge devices) cannot be moved across namespaces.
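For example, trying to move the lo device into another namespace is expected to fail, since loopback devices are pinned to their namespace (a sketch; the exact error text may vary with the kernel version):

[root@localhost ~]# ip link set lo netns ns1
RTNETLINK answers: Invalid argument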

1.5 veth pair

veth pair is short for Virtual Ethernet Pair: a pair of linked ports, where every packet that enters one end comes out the other, and vice versa.
veth pairs were introduced so that different Network Namespaces can communicate; with one, two Network Namespaces can be wired together directly.
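Besides the anonymous form used in the next section, a veth pair can also be created with explicit names for both ends (a sketch; the names ve-a and ve-b are arbitrary):

[root@localhost ~]# ip link add ve-a type veth peer name ve-b  // create a pair ve-a <-> ve-b
[root@localhost ~]# ip link del ve-a  // deleting either end removes the whole pair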

1.6 Creating a veth pair

// At this point there are only three interfaces
[root@localhost ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:5f:b4:28 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:35:05:b8:22 brd ff:ff:ff:ff:ff:ff

[root@localhost ~]# ip link add type veth  // add a veth-type interface pair; veth devices can be moved between namespaces

[root@localhost ~]# ip link show  // the system now has a new veth pair connecting the virtual interfaces veth0 and veth1; both ends are still down
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:5f:b4:28 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:35:05:b8:22 brd ff:ff:ff:ff:ff:ff
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether e2:df:23:a3:37:e0 brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ca:d4:ee:ed:ce:86 brd ff:ff:ff:ff:ff:ff

1.7 Communication between Network Namespaces

Below we use the veth pair to let two different Network Namespaces communicate. We already created a Network Namespace named ns1; now create another one, named ns2:

[root@localhost ~]# ip netns add ns2
[root@localhost ~]# ip netns list
ns2
ns1

We move veth0 into ns1 and, for a first test, leave veth1 on the host:

[root@localhost ~]# ip link set veth0 netns ns1

[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether e2:df:23:a3:37:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0


// Start by testing with veth1 still on the host
[root@localhost ~]# ip addr add 192.168.2.2/24 dev veth1  // assign an IP to the host's veth1

Configure IP addresses on the two ends of the veth pair and bring them up:

[root@localhost ~]# ip netns exec ns1 ip link set veth0 up
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: veth0@if5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether e2:df:23:a3:37:e0 brd ff:ff:ff:ff:ff:ff link-netns ns2

[root@localhost ~]# ip netns exec ns1 ip addr add 192.168.2.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
4: veth0@if5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether e2:df:23:a3:37:e0 brd ff:ff:ff:ff:ff:ff link-netns ns2
    inet 192.168.2.1/24 scope global veth0
       valid_lft forever preferred_lft forever

[root@localhost ~]# ip link set veth1 up  // bring up the host's veth1
[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:5f:b4:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.182.150/24 brd 192.168.182.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5f:b428/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:35:05:b8:22 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:31:0c:42:29:6e brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.2.2/24 scope global veth1
       valid_lft forever preferred_lft forever

// Test whether they can communicate

[root@localhost ~]# ping -c2 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.083 ms

--- 192.168.2.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1021ms
rtt min/avg/max/mdev = 0.083/0.098/0.113/0.015 ms
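Connectivity works in the other direction as well; as a quick check, ping the host's veth1 address from inside ns1:

[root@localhost ~]# ip netns exec ns1 ping -c 2 192.168.2.2  // expect 2 packets transmitted, 2 received, 0% packet loss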

Next, bring up the lo interface in ns2:

[root@localhost ~]# ip netns exec ns2 ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

[root@localhost ~]# ip netns exec ns2 ip link set lo up

Move veth1 into ns2:

[root@localhost ~]# ip link set veth1 netns ns2

Configure an IP address on this end of the veth pair and bring it up.
Note below that the IP we had configured on the host's veth1 is gone after moving the device into ns2: namespaces are independent, so addresses do not follow a device across them.

[root@localhost ~]# ip netns exec ns2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:31:0c:42:29:6e brd ff:ff:ff:ff:ff:ff link-netns ns1

[root@localhost ~]# ip netns exec ns2 ip link set veth1 up  // bring up veth1
[root@localhost ~]# ip netns exec ns2 ip addr add 192.168.2.2/24 dev veth1  // assign an IP to veth1

[root@localhost ~]# ip netns  exec ns2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
7: veth1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2e:31:0c:42:29:6e brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.2.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::2c31:cff:fe42:296e/64 scope link 
       valid_lft forever preferred_lft forever

// From ns2, ping the address in ns1 to check connectivity

[root@localhost ~]# ip netns exec ns2 ping -c 2 192.168.2.1  // ping twice, then exit
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.082 ms

--- 192.168.2.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1045ms
rtt min/avg/max/mdev = 0.082/0.083/0.084/0.001 ms

1.8 Renaming veth devices

[root@localhost ~]# ip netns exec ns1 ip link set veth0 down  // the interface must be down before it can be renamed
[root@localhost ~]# ip netns exec ns1 ip link set dev veth0 name eth0 
[root@localhost ~]# ip netns exec ns1 ip link set eth0 up  // bring the interface back up

[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1e:2a:ff:f9:96:e0 brd ff:ff:ff:ff:ff:ff link-netns ns2
    inet 192.168.2.1/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::1c2a:ffff:fef9:96e0/64 scope link 
       valid_lft forever preferred_lft forever

// Rename the veth1 interface in ns2 the same way
[root@localhost ~]# ip netns exec ns2 ip link set veth1 down
[root@localhost ~]# ip netns exec ns2 ip link set dev veth1 name eth0  
[root@localhost ~]# ip netns exec ns2 ip link set eth0 up

[root@localhost ~]# ip netns exec ns2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
7: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2e:31:0c:42:29:6e brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.168.2.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2c31:cff:fe42:296e/64 scope link 
       valid_lft forever preferred_lft forever

2. Configuring the four network modes

2.1 bridge mode (the default)

// Comparing the two runs below shows that the two ways of starting a container are identical
[root@localhost ~]# docker run -it --rm --network bridge busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

[root@localhost ~]# docker run -it --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
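To see the bridge network these containers attach to, docker network inspect can be run on the host. A sketch that trims the output to the IPAM section (the -f format string is standard Go templating; the output shown is what a stock docker0 setup typically reports):

[root@localhost ~]# docker network inspect bridge -f '{{json .IPAM.Config}}'
[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]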

2.2 none mode

[root@localhost ~]# docker run -it --rm --network none busybox  // only a lo interface
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
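With nothing but lo, the container cannot reach anything outside itself. As a quick check (assuming the docker0 gateway address 172.17.0.1 seen earlier), a ping fails immediately because there is no route out:

/ # ping -c 1 172.17.0.1  // fails with "Network is unreachable"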

2.3 container mode

Start the first container:

[root@localhost ~]# docker run -it --rm busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Start the second container:

[root@localhost ~]# docker run -it --rm busybox 
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

The second container's IP address is 172.17.0.3, which differs from the first container's, so the two are not sharing a network. If we change how the second container is started, it can get the same IP as the first container: they then share the IP but not the filesystem.

[root@localhost ~]# docker run -it --rm --network container:20e6ef4032fa busybox  // this container now has the same IP as the first one
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Now create a directory in one of the two containers:

/ # mkdir -p /opt/123

Checking /opt in the other container shows that the directory does not exist: the filesystems are isolated; only the network is shared.

/ # ls /opt
ls: /opt: No such file or directory

Deploy a web site in the second container:

/ # echo 'linux' > /var/www/index.html
/ # /bin/httpd -f -h /var/www/  
[root@localhost ~]# docker exec -it bbbd1d60444a /bin/sh  // enter the second container
/ # netstat -anlt  // check the listening ports
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 :::80                   :::*                    LISTEN 

From the first container, access the site via the loopback address:

/ # wget -O - -q 127.0.0.1
linux
// So in container mode, the containers relate to each other like two processes on one host
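One way to confirm that the two containers really share a single network namespace is to compare their namespace links on the host (a sketch; the container IDs are the ones used above, and .State.Pid is the PID of each container's main process):

[root@localhost ~]# pid1=$(docker inspect -f '{{.State.Pid}}' 20e6ef4032fa)
[root@localhost ~]# pid2=$(docker inspect -f '{{.State.Pid}}' bbbd1d60444a)
[root@localhost ~]# readlink /proc/$pid1/ns/net /proc/$pid2/ns/net  // both lines should show the same net:[...] inode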

2.4 host mode

// Specify host mode directly when starting the container

[root@localhost ~]# docker run -it --rm --network host busybox  // shares the host's network interfaces
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether 00:0c:29:5f:b4:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.182.150/24 brd 192.168.182.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe5f:b428/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:35:05:b8:22 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:35ff:fe05:b822/64 scope link 
       valid_lft forever preferred_lft forever

/ # echo 'RHCAS' > /var/www/index.html
/ # /bin/httpd -h /var/www/
/ # netstat -anlt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      
tcp        0      0 192.168.182.150:22      192.168.182.1:53545     ESTABLISHED 
tcp        0      0 192.168.182.150:22      192.168.182.1:53648     ESTABLISHED 
tcp        0      0 192.168.182.150:22      192.168.182.1:64580     ESTABLISHED 
tcp        0      0 192.168.182.150:22      192.168.182.1:64579     ESTABLISHED 
tcp        0      0 :::80                   :::*                    LISTEN      
tcp        0      0 :::22                   :::*                    LISTEN

[root@localhost ~]# curl 192.168.182.150
RHCAS
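Note that in host mode there is no port mapping to do; a -p option is simply ignored, and recent Docker versions print a warning to that effect, roughly as sketched below:

[root@localhost ~]# docker run -d --rm --network host -p 80:80 httpd
WARNING: Published ports are discarded when using host network mode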

3. Common container operations

3.1 Viewing a container's hostname

[root@localhost ~]# docker run -it --rm busybox
/ # hostname 
5993c4638ee9

3.2 Injecting a hostname at container startup

[root@localhost ~]# docker run -it --rm --hostname admin busybox
/ # hostname 
admin

/ # cat /etc/hosts 
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	admin  // injecting a hostname automatically creates a hostname-to-IP mapping

/ # cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 192.168.182.2  // the DNS server is also configured automatically, taken from the host

/ # ping -c1 baidu.com
PING baidu.com (220.181.38.251): 56 data bytes
64 bytes from 220.181.38.251: seq=0 ttl=127 time=29.478 ms

--- baidu.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 29.478/29.478/29.478 ms

3.3 Manually specifying the DNS server a container should use

[root@localhost ~]# docker run -it --rm --hostname tom --dns 114.114.114.114 busybox
/ # cat /etc/resolv.conf 
nameserver 114.114.114.114

3.4 Manually injecting hostname-to-IP mappings into /etc/hosts

[root@localhost ~]# docker run -it --rm --hostname admin --add-host www.test.com:100.1.1.1 busybox

/ # cat /etc/hosts 
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
100.1.1.1	www.test.com
172.17.0.2	admin

3.5 Exposing container ports

docker run has a -p option that maps an application port inside the container to a port on the host, so that external hosts can reach the application in the container by accessing that host port.
The -p option can be given multiple times; the port it exposes must be one the container is actually listening on.
The -p option takes the following formats:

  • -p <containerPort>
    • Map the given container port to a dynamic port on all host addresses
  • -p <hostPort>:<containerPort>
    • Map container port <containerPort> to the given host port <hostPort>
  • -p <ip>::<containerPort>
    • Map container port <containerPort> to a dynamic port on the given host address <ip>
  • -p <ip>:<hostPort>:<containerPort>
    • Map container port <containerPort> to the given host address <ip> and port <hostPort>

A dynamic port is simply a random port; the actual mapping can be checked with the docker port command.

[root@localhost ~]# docker run -d --name web --rm -p 80 httpd  // create a container from the httpd image; -d runs it in the background. With only the container port given, port 80 is mapped to a random host port


// The container's port 80 was exposed on host port 49153; now access that port from the host to see whether we reach the site inside the container
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS         PORTS                                     NAMES
fd294d368bd6   httpd     "httpd-foreground"   11 seconds ago   Up 9 seconds   0.0.0.0:49153->80/tcp, :::49153->80/tcp   web

[root@localhost ~]# curl 192.168.182.150:49153
<html><body><h1>It works!</h1></body></html>

// Method 1:
[root@localhost ~]# docker run -d --name web --rm -p 80:80 httpd  // map container port 80 to host port 80; the first 80 is the host's, the second the container's
07b4f20fc0a9aed9df32c81f064d8e1d5fd7009a0c3afd28be4eb2383c948abf

[root@localhost ~]# curl 192.168.182.150
<html><body><h1>It works!</h1></body></html>

// Method 2:
[root@localhost ~]# docker run -d --name web --rm -p 996:80 httpd  // map container port 80 to a host port of our choosing (996)
84cb7b0bc47c5401ed44d0758b60c062dd390a2b6f152f8b3ea776f07dd3b182

[root@localhost ~]# docker port web 
80/tcp -> 0.0.0.0:996
80/tcp -> :::996

[root@localhost ~]# curl 192.168.182.150:996
<html><body><h1>It works!</h1></body></html>

// Method 3:
[root@localhost ~]# docker run -d --name web --rm -p 192.168.182.150::80 httpd  // map container port 80 to a dynamic host port (49153 here) bound to 192.168.182.150; the site is reachable only via that address
dbe29d63cf6a5f3c70db059f5bfd3cd5d1c84f40f31403b046e136e470e42c6a
[root@localhost ~]# docker ps 
CONTAINER ID   IMAGE     COMMAND              CREATED         STATUS         PORTS                           NAMES
dbe29d63cf6a   httpd     "httpd-foreground"   8 seconds ago   Up 7 seconds   192.168.182.150:49153->80/tcp   web

[root@localhost ~]# curl 192.168.182.150:49153
<html><body><h1>It works!</h1></body></html>

// Method 4:
[root@localhost ~]# docker run -d --name web --rm -p 127.0.0.1:80:80 httpd  // map container port 80 to port 80 on 127.0.0.1 only
9cb016186b08a944b55b3bf4bbdebb66c9dc70d880a3aa6f33d15c87dae00b18
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED         STATUS         PORTS                  NAMES
9cb016186b08   httpd     "httpd-foreground"   9 seconds ago   Up 7 seconds   127.0.0.1:80->80/tcp   web

[root@localhost ~]# curl 127.0.0.1
<html><body><h1>It works!</h1></body></html>

The corresponding iptables firewall rules are generated automatically when a container is created and removed automatically when the container is deleted or stopped.

// Map the container port to a port on a specific IP
[root@localhost ~]# docker run -d --name web --rm -p 127.0.0.1:80:80 httpd
907277a6d75ec4435427cc2c4aca0bff9d6a69b83eb39ea7a100b609d5cebfd6

// Check the port mapping
[root@localhost ~]# docker port web 
80/tcp -> 127.0.0.1:80

[root@localhost ~]# iptables -t nat -nvL  // inspect the iptables rules
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    5   260 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    3   194 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    6   360 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            127.0.0.1            tcp dpt:80 to:172.17.0.2:80


[root@localhost ~]# docker run -d --name web -p 80:80 httpd
df6243779bab274b209ea3f4962a738c02eae085e25a98e1cd7ce5e3814955d0
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                               NAMES
df6243779bab   httpd     "httpd-foreground"   20 seconds ago   Up 18 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   web
[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    5   260 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    3   194 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    6   360 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80

[root@localhost ~]# docker stop web   // stop the container
web

[root@localhost ~]# docker ps  -a | grep web
df6243779bab   httpd          "httpd-foreground"   3 minutes ago   Exited (0) About a minute ago             web

[root@localhost ~]# iptables -t nat -nvL  // the DNAT rule is gone
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    5   260 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    3   194 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    6   360 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0

3.6 Customizing the docker0 bridge's network properties

Customizing the docker0 bridge's network properties means editing the /etc/docker/daemon.json configuration file.
See the official reference documentation.

// The core option is bip, short for "bridge IP"; it sets the docker0 bridge's own IP address, and the other options can be derived from this address.
[root@localhost docker]# pwd
/etc/docker
[root@localhost docker]# cat daemon.json
{
  "bip": "192.168.1.1/24"
}

[root@localhost docker]# systemctl restart docker.service  // restart the docker service
[root@localhost docker]# docker start web  // restarting the docker service stops all containers, so start web again
[root@localhost docker]# docker inspect web | grep -w IPAddress
            "IPAddress": "192.168.1.2",
                    "IPAddress": "192.168.1.2",

// Restore the default addressing (note: the stock default for docker0 is 172.17.0.1/16; /24 is used here)
[root@localhost docker]# cat daemon.json
{
  "bip": "172.17.0.1/24"
}

[root@localhost docker]# systemctl restart docker.service  // restart the service so the change takes effect
[root@localhost docker]# ip a | grep -w docker0 | grep inet | awk -F '[ /]+' '{print $3}'
172.17.0.1

3.7 Connecting to Docker remotely

The dockerd daemon follows a client/server model and by default listens only on a Unix socket (/var/run/docker.sock). To have it listen on a TCP socket as well, edit /etc/docker/daemon.json, add the following, and restart the docker service:

"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]

On the client side, pass the -H (--host) option to the docker command to choose which host's containers to control:
docker -H 192.168.10.145:2375 ps
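Equivalently, the client can set the DOCKER_HOST environment variable instead of repeating -H. Two caveats: on systemd-based systems dockerd is often started with an -H flag in its unit file, which conflicts with a "hosts" key in daemon.json (one of the two must be adjusted), and port 2375 is unauthenticated and unencrypted, so it should only be exposed on trusted networks (TLS on port 2376 is the hardened setup). A minimal client-side sketch:

[root@localhost ~]# export DOCKER_HOST=tcp://192.168.10.145:2375
[root@localhost ~]# docker ps  // now talks to the remote dockerd
[root@localhost ~]# unset DOCKER_HOST  // back to the local daemon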

3.8 Creating a custom bridge

Create an additional custom bridge, distinct from docker0:

[root@localhost ~]# docker network ls  // list the existing networks
NETWORK ID     NAME      DRIVER    SCOPE
c20dbcdb1e57   bridge    bridge    local
981fad12ece5   host      host      local
98d506fcdbf1   none      null      local

[root@localhost ~]# docker network create -d bridge --subnet '192.168.2.0/24' --gateway '192.168.2.1' br0  // -d selects the driver, --subnet the network range, --gateway the gateway (192.168.2.1); the new bridge is named br0
593afb169be7c90484a1e633ebc1913ac9501383e6931a77f59c3b5685210e89

[root@localhost ~]# docker network ls 
NETWORK ID     NAME      DRIVER    SCOPE
593afb169be7   br0       bridge    local
c20dbcdb1e57   bridge    bridge    local
981fad12ece5   host      host      local
98d506fcdbf1   none      null      local
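The subnet and gateway we asked for can be verified with docker network inspect (a sketch, trimming the output to the IPAM section):

[root@localhost ~]# docker network inspect br0 -f '{{json .IPAM.Config}}'
[{"Subnet":"192.168.2.0/24","Gateway":"192.168.2.1"}]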

// Create a container that uses the newly created custom bridge:
[root@localhost ~]# docker run -it --rm --network br0 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
45: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever


// Create another container, this one on the default bridge:
[root@localhost ~]# docker run -it --rm  busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
47: eth0@if48: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever


Can these two containers communicate with each other at this point? If not, how can we make them?

Comparing the outputs above, the custom bridge and the default bridge are on different subnets, so the two containers cannot reach each other directly. The fix is to connect each container to the other bridge as well.

// First, connect the container on the default bridge (7d1f75c55c1d) to br0:
[root@localhost ~]# docker network connect br0 7d1f75c55c1d

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
47: eth0@if48: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
49: eth1@if50: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:c0:a8:02:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global eth1
       valid_lft forever preferred_lft forever

/ # ping -c2 192.168.182.2
PING 192.168.182.2 (192.168.182.2): 56 data bytes
64 bytes from 192.168.182.2: seq=0 ttl=127 time=0.363 ms
64 bytes from 192.168.182.2: seq=1 ttl=127 time=0.241 ms

--- 192.168.182.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.241/0.302/0.363 ms

// Likewise, connect the container that lives on br0 to the default bridge:
[root@localhost ~]# docker network connect bridge da6a9c90c609  // da6a9c90c609 is the ID of the container on br0
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
45: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:c0:a8:02:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
51: eth1@if52: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth1
       valid_lft forever preferred_lft forever

/ # ping -c2 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.245 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.107 ms

--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.107/0.176/0.245 ms
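Once the experiment is done, the custom bridge can be cleaned up; a network can only be removed after no containers are attached to it (a sketch: both test containers above were started with --rm, so exiting their shells removes them and detaches everything):

/ # exit
[root@localhost ~]# docker network rm br0
br0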