Network Communication Between Docker Containers
1. Default Networks
After installing Docker, three networks are created by default. You can list them with `docker network ls`.
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
3920688d964e   bridge    bridge    local
708ab663040f   host      host      local
d82d5c47dacb   none      null      local
Overview of the four network modes:
Network mode | Description |
---|---|
bridge | Each container is assigned its own IP and related settings, and is attached to the `docker0` virtual bridge. This is the default mode. |
host | The container does not get its own virtual network interface or IP; it uses the host's IP and ports directly. |
none | The container has its own network namespace, but no networking is configured for it: no veth pair, no bridge attachment, no IP. |
container | The new container does not create its own interface or configure its own IP; it shares the IP, port range, etc. of a specified existing container. |
1.1 The bridge network mode
In this mode, the Docker daemon creates a virtual Ethernet bridge named `docker0`. Newly created containers are automatically attached to it, and packets are forwarded automatically between any interfaces attached to the bridge.
By default, the daemon creates a pair of peered virtual interfaces, a veth pair: one end is set up as the container's `eth0` interface (the container's network card), while the other end is placed in the host's namespace under a name like `vethxxx`, connecting every container on the host to this internal network.
For example, run a container `bbox01` built from the `busybox` image (busybox, known as the Swiss Army knife of embedded Linux, bundles many small common Unix utilities into a single small executable) and check `ip addr`:
$ docker run -it --name bbox01 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Then check `ip addr` on the host:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:16:7c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.50.10/24 brd 192.168.50.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::415c:5dc5:9ca8:fbc5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:f0:36:87:a4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:f0ff:fe36:87a4/64 scope link
valid_lft forever preferred_lft forever
24: vethe108936@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 0e:5a:14:50:87:61 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::c5a:14ff:fe50:8761/64 scope link
valid_lft forever preferred_lft forever
Comparing the two outputs confirms what was said earlier: the daemon creates a veth pair, sets one end up as the container's `eth0` interface (the container's network card), and places the other end in the host's namespace under a name like `vethxxx`.
The daemon also allocates an IP address and subnet to the container from the bridge `docker0`'s private address space, and sets `docker0`'s IP address as the container's default gateway. After installing bridge-utils (`yum install -y bridge-utils`), you can also inspect the bridge with `brctl show`.
$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242f03687a4       no              vethe108936
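This allocation scheme can be sanity-checked with a short Python sketch using the standard-library `ipaddress` module (the addresses below are taken from the outputs above):

```python
import ipaddress

# docker0's address as shown by `ip addr`: 172.17.0.1/16
bridge_if = ipaddress.ip_interface("172.17.0.1/16")
subnet = bridge_if.network                 # 172.17.0.0/16, the bridge's private range

container_ip = ipaddress.ip_address("172.17.0.2")  # bbox01's eth0 address

# The container's address is allocated out of the bridge's subnet, and the
# bridge address itself (the first usable host) acts as the default gateway.
assert container_ip in subnet
assert bridge_if.ip == next(subnet.hosts())
print(f"subnet={subnet} gateway={bridge_if.ip} container={container_ip}")
```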
Each container's IP address and gateway can be viewed with `docker inspect <container name|ID>`; the details appear under the `NetworkSettings` node.
"NetworkSettings": {
"Bridge": "",
"SandboxID": "dfb20f02d89999325541c5c7851a3132c98177c2fab6cc1c21a785cedbe4aef5",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/dfb20f02d899",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "274f05aa6eed9843aacdd78f88e2898ec40da650fe65d5429483806b22112d8e",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "3920688d964ed0d9c56fd7cbf62207505399f8d92df3c04864ea7025384ecfff",
"EndpointID": "274f05aa6eed9843aacdd78f88e2898ec40da650fe65d5429483806b22112d8e",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
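When scripting against this output, the same fields can be pulled out with any JSON parser. A minimal sketch, using an abridged copy of the JSON above; in practice you would capture the full document from `docker inspect bbox01`:

```python
import json

# Abridged `docker inspect` output for bbox01 (full version shown above).
raw = """
[{"NetworkSettings": {
    "Gateway": "172.17.0.1",
    "IPAddress": "172.17.0.2",
    "Networks": {
        "bridge": {"Gateway": "172.17.0.1", "IPAddress": "172.17.0.2",
                   "IPPrefixLen": 16}
    }
}}]
"""

# `docker inspect` returns a JSON array with one object per container.
settings = json.loads(raw)[0]["NetworkSettings"]
for name, net in settings["Networks"].items():
    print(f"{name}: ip={net['IPAddress']}/{net['IPPrefixLen']} gw={net['Gateway']}")
```

For one-off lookups, `docker inspect` also supports Go-template extraction directly, e.g. `docker inspect -f '{{.NetworkSettings.IPAddress}}' bbox01`.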
You can run `docker network inspect bridge` to see all containers on the `bridge` network; their names appear under the `Containers` node.
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "3920688d964ed0d9c56fd7cbf62207505399f8d92df3c04864ea7025384ecfff",
"Created": "2021-11-29T21:59:18.096846215-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"bd69609e8b1812e010b3efccf6429da46be9827ff7bb9181fa03d83355e3c972": {
"Name": "bbox01",
"EndpointID": "274f05aa6eed9843aacdd78f88e2898ec40da650fe65d5429483806b22112d8e",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
To use the `bridge` network mode, simply pass `--net bridge` or `--network bridge` when creating the container. Since this is the default mode for new containers, the flag can be omitted.
Bridge mode is implemented roughly as follows:
- The Docker daemon uses the veth pair technique to create a pair of peered virtual network interfaces on the host, say veth0 and veth1. A veth pair guarantees that whichever end receives a packet passes it to the other end.
- The daemon attaches veth0 to the `docker0` bridge it created, ensuring that packets from the host can reach veth0.
- The daemon moves veth1 into the container's namespace and renames it eth0. Packets the host sends toward veth0 are thus immediately received by the container's eth0, giving the host connectivity to the container; and since eth0 is used by that container alone, the container's network environment stays isolated.
1.2 The host network mode
- The host network mode is selected with `--net host` or `--network host` when creating the container.
- A container in host mode communicates with the outside world using the host's IP address directly; if the host's eth0 has a public IP, the container has that public IP too. Services in the container can also use the host's ports directly, with no extra NAT required.
- Host mode lets the container share the host's network stack. The upside is that external hosts can talk to the container directly; the downside is that the container's network lacks isolation.
For example, create a container `bbox02` built from the `busybox` image using the `host` network mode and check `ip addr`:
$ docker run -it --name bbox02 --net host busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:16:7c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.50.10/24 brd 192.168.50.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::415c:5dc5:9ca8:fbc5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:f0:36:87:a4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:f0ff:fe36:87a4/64 scope link
valid_lft forever preferred_lft forever
24: vethe108936@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue master docker0
link/ether 0e:5a:14:50:87:61 brd ff:ff:ff:ff:ff:ff
inet6 fe80::c5a:14ff:fe50:8761/64 scope link
valid_lft forever preferred_lft forever
Then check `ip addr` on the host:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:16:7c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.50.10/24 brd 192.168.50.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::415c:5dc5:9ca8:fbc5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:f0:36:87:a4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:f0ff:fe36:87a4/64 scope link
valid_lft forever preferred_lft forever
24: vethe108936@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 0e:5a:14:50:87:61 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::c5a:14ff:fe50:8761/64 scope link
valid_lft forever preferred_lft forever
The output is exactly the same. You can run `docker network inspect host` to see all containers on the `host` network; their names appear under the `Containers` node.
$ docker network inspect host
[
{
"Name": "host",
"Id": "708ab663040f4b874d25ef5e46ffe52f04a427d1a92f2e72975e3938ddf27b69",
"Created": "2021-11-23T01:32:47.749423618-05:00",
"Scope": "local",
"Driver": "host",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"a65831b959b2cab66a597d9ce4072ed41dca5a375213e355ea866f3a510166fc": {
"Name": "bbox02",
"EndpointID": "cf23fa4039f427ba5a2a2dcdbea508b3d8653a818c5f3f8c8260410fe5c3b5f7",
"MacAddress": "",
"IPv4Address": "",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
1.3 The none network mode
- The none network mode disables networking; the container has only the lo interface (lo is short for local, i.e. 127.0.0.1, the localhost loopback interface). It is selected with `--net none` or `--network none` when creating the container.
- In none mode, Docker sets up no network environment for the container at all; only the loopback device is available inside, and there are no other network resources. Docker does minimal network setup in this mode, but as the saying goes, "less is more": with no network configured, Docker developers are free to build any custom networking they like on top. This also happens to reflect the openness of Docker's design philosophy.
Create a container `bbox03` built from the `busybox` image using the `none` network mode and check `ip addr`:
$ docker run -it --name bbox03 --net none busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
You can run `docker network inspect none` to see all containers on the `none` network; their names appear under the `Containers` node.
$ docker network inspect none
[
{
"Name": "none",
"Id": "d82d5c47dacb...",
"Scope": "local",
"Driver": "null",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": []
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"...": {
"Name": "bbox03",
"EndpointID": "...",
"MacAddress": "",
"IPv4Address": "",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
1.4 The container network mode
- The container network mode is a rather special network mode in Docker. It is selected with `--net container:<running container name|ID>` or `--network container:<running container name|ID>` when creating the container.
- Containers in this mode share a single network stack, so the two containers can communicate with each other quickly and efficiently over localhost.
In container mode, the new container does not create its own network interface or configure its own IP; instead it shares the IP, port range, etc. of the specified container. Apart from networking, everything else, such as the filesystem and process list, remains isolated between the two containers.
Create a container `bbox04` in `container` mode, sharing the network of container `bbox01`, and check `ip addr`:
$ docker run -it --name bbox04 --net container:bbox01 busybox
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Container `bbox01`'s `ip addr` output:
$ docker exec -ti bbox01 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
The host's `ip addr` output:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:16:7c:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.50.10/24 brd 192.168.50.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::415c:5dc5:9ca8:fbc5/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:f0:36:87:a4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:f0ff:fe36:87a4/64 scope link
valid_lft forever preferred_lft forever
24: vethe108936@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 0e:5a:14:50:87:61 brd ff:ff:ff:ff:ff:ff link-netnsid 9
inet6 fe80::c5a:14ff:fe50:8761/64 scope link
valid_lft forever preferred_lft forever
These tests show that the Docker daemon created only one veth pair, connecting the bbox01 container to the host, while the bbox04 container simply reuses bbox01's network interface.
If you now stop the bbox01 container, you will find that the bbox04 container is left with only its lo interface.
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
After restarting the bbox01 container, restart bbox04 as well, and it gets its interface information back.
$ docker restart bbox01
bbox01
$ docker restart bbox04
bbox04
$ docker exec -ti bbox04 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
1.5 link
`docker run --link` can be used to link two containers, so that the source container (the one being linked to) and the recipient container (the one doing the linking) can communicate with each other, and the recipient can obtain some data from the source, such as its environment variables.
This approach is officially deprecated and may be removed in a future release; explore it on your own if interested.
Official warning: https://docs.docker.com/network/links/
2. User-Defined Networks
Although Docker's default networks are easy to use, in real development it is recommended to manage containers with user-defined networks, both to keep the applications in each container secure and to enable automatic DNS resolution of container names to IP addresses.
Since version 1.10, the docker daemon has included an embedded DNS server that lets containers reach each other directly by container name. Usage is simple: just name the container with `--name` when creating it. But Docker's DNS has one restriction: it only works on user-defined networks. In other words, the default bridge network cannot use DNS, which is why we need a user-defined network.
2.1 Creating a network
Custom networks are created with the `docker network create` command; its help output is:
$ docker network --help
Usage: docker network COMMAND
Manage networks
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
Run 'docker network COMMAND --help' for more information on a command.
Looking further at `docker network create --help`, the network driver can be specified with `-d, --driver`, and it defaults to the `bridge` driver:
$ docker network create --help
Usage: docker network create [OPTIONS] NETWORK
Create a network
Options:
--attachable Enable manual container attachment
--aux-address map Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
--config-from string The network from which to copy the configuration
--config-only Create a configuration only network
-d, --driver string Driver to manage the Network (default "bridge")
--gateway strings IPv4 or IPv6 Gateway for the master subnet
--ingress Create swarm routing-mesh network
--internal Restrict external access to the network
--ip-range strings Allocate container ip from a sub-range
--ipam-driver string IP Address Management Driver (default "default")
--ipam-opt map Set IPAM driver specific options (default map[])
--ipv6 Enable IPv6 networking
--label list Set metadata on a network
-o, --opt map Set driver specific options (default map[])
--scope string Control the network's scope
--subnet strings Subnet in CIDR format that represents a network segment
Create a custom network `custom_network` based on the `bridge` driver:
$ docker network create custom_network
List the networks with `docker network ls`:
$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
3920688d964e   bridge           bridge    local
848df6d8418a   custom_network   bridge    local
708ab663040f   host             host      local
d82d5c47dacb   none             null      local
Create a container using the custom network `custom_network`:
$ docker run -itd --name bbox05 --net custom_network busybox
View the container's network information with `docker inspect <container name|ID>`; the details appear under the `NetworkSettings` node.
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f7a837e50a9a647bbf6f529116c3e909fff8c23ef4fc42abc5cf3bd8a840905e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/f7a837e50a9a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"custom_network": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"6ed721a711cd"
],
"NetworkID": "848df6d8418acb00e2f7655dbed3c189801c19360ff7fd4d115d57adeddacdf2",
"EndpointID": "2203ca75a2055f5d9bc231f089dca243ffe09d42c9efea83eb8db5672c2663fe",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
}
2.2 Connecting to a network
Use `docker network connect <network> <container>` to connect a container to an additional network.
$ docker network connect --help
Usage: docker network connect [OPTIONS] NETWORK CONTAINER
Connect a container to a network
Options:
--alias strings Add network-scoped alias for the container
--driver-opt strings driver options for the network
--ip string IPv4 address (e.g., 172.30.100.104)
--ip6 string IPv6 address (e.g., 2001:db8::33)
--link list Add link to another container
--link-local-ip strings Add a link-local address for the container
$ docker network connect bridge bbox05
Run `docker inspect <container name|ID>` again: the default `bridge` network has been added.
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f7a837e50a9a647bbf6f529116c3e909fff8c23ef4fc42abc5cf3bd8a840905e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/f7a837e50a9a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "66ff78a06bdcbf2a483134ee32752e52c5d0e2853fbd9f56f1c48390f0049940",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03",
"Networks": {
"bridge": {
"IPAMConfig": {},
"Links": null,
"Aliases": [],
"NetworkID": "3920688d964ed0d9c56fd7cbf62207505399f8d92df3c04864ea7025384ecfff",
"EndpointID": "66ff78a06bdcbf2a483134ee32752e52c5d0e2853fbd9f56f1c48390f0049940",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:03",
"DriverOpts": {}
},
"custom_network": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"6ed721a711cd"
],
"NetworkID": "848df6d8418acb00e2f7655dbed3c189801c19360ff7fd4d115d57adeddacdf2",
"EndpointID": "2203ca75a2055f5d9bc231f089dca243ffe09d42c9efea83eb8db5672c2663fe",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
}
2.3 Disconnecting from a network
Use `docker network disconnect <network> <container>` to disconnect a container from a network.
$ docker network disconnect custom_network bbox05
Run `docker inspect <container name|ID>` again: only the default `bridge` network remains.
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f7a837e50a9a647bbf6f529116c3e909fff8c23ef4fc42abc5cf3bd8a840905e",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/f7a837e50a9a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "66ff78a06bdcbf2a483134ee32752e52c5d0e2853fbd9f56f1c48390f0049940",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03",
"Networks": {
"bridge": {
"IPAMConfig": {},
"Links": null,
"Aliases": [],
"NetworkID": "3920688d964ed0d9c56fd7cbf62207505399f8d92df3c04864ea7025384ecfff",
"EndpointID": "66ff78a06bdcbf2a483134ee32752e52c5d0e2853fbd9f56f1c48390f0049940",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:03",
"DriverOpts": {}
}
}
}
2.4 Removing a network
A user-defined network can be removed with `docker network rm <network>`; on success, the command prints the network's name.
Note: if containers have been created on a user-defined network, that network cannot be removed until those containers are disconnected or removed.
3. Network Communication Between Containers
For containers to communicate with each other, they must have network interfaces attached to the same network.
First, create two containers on the default `bridge` network.
$ docker run -itd --name default_bbox01 busybox
$ docker run -itd --name default_bbox02 busybox
Use `docker network inspect bridge` to find the two containers' IP addresses.
"Containers": {
"24234548c4f6dd69138b989d24da23a0b9dabf38bc7a76b9b48904d37f251916": {
"Name": "default_bbox01",
"EndpointID": "36b7ca7f6cde06a8747d519c915bc100981a68fa33be45ab3d5cbe85acffe80e",
"MacAddress": "02:42:ac:11:00:04",
"IPv4Address": "172.17.0.4/16",
"IPv6Address": ""
},
"6276e304644eb23c428e17e709a366c5e75b19ec1765b96aa817332444c2e412": {
"Name": "default_bbox02",
"EndpointID": "c940bd5439cf770765311725c1b1ae43d12f2993b5146c5d87a467d015a7ca59",
"MacAddress": "02:42:ac:11:00:05",
"IPv4Address": "172.17.0.5/16",
"IPv6Address": ""
}
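The name-to-address mapping in this `Containers` node can be turned into a small lookup table; a sketch assuming the JSON has been captured from `docker network inspect bridge` (abridged here, with the container-ID keys shortened for readability):

```python
import json

# Abridged `docker network inspect bridge` output (Containers node above).
raw = """
[{"Containers": {
    "24234548c4f6": {"Name": "default_bbox01", "IPv4Address": "172.17.0.4/16"},
    "6276e304644e": {"Name": "default_bbox02", "IPv4Address": "172.17.0.5/16"}
}}]
"""

containers = json.loads(raw)[0]["Containers"]
# Map container name -> bare IP (dropping the /16 prefix length).
ips = {c["Name"]: c["IPv4Address"].split("/")[0] for c in containers.values()}
print(ips)  # {'default_bbox01': '172.17.0.4', 'default_bbox02': '172.17.0.5'}
```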
Now test whether the two containers can reach each other.
$ docker exec -ti default_bbox01 ping 172.17.0.5
PING 172.17.0.5 (172.17.0.5): 56 data bytes
64 bytes from 172.17.0.5: seq=0 ttl=64 time=0.139 ms
64 bytes from 172.17.0.5: seq=1 ttl=64 time=0.051 ms
64 bytes from 172.17.0.5: seq=2 ttl=64 time=0.070 ms
64 bytes from 172.17.0.5: seq=3 ttl=64 time=0.055 ms
64 bytes from 172.17.0.5: seq=4 ttl=64 time=0.052 ms
64 bytes from 172.17.0.5: seq=5 ttl=64 time=0.054 ms
^C
--- 172.17.0.5 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.051/0.070/0.139 ms
The test shows that two containers on the same network can communicate by IP. But container IP addresses are not necessarily fixed; if they change, every IP used for inter-container communication has to change too. Can containers communicate by name instead?
$ docker exec -ti default_bbox01 ping default_bbox02
ping: bad address 'default_bbox02'
The test shows that communicating by container name fails here. So how do we make it work?
Since version 1.10, the docker daemon has included an embedded DNS server that lets containers reach each other directly by container name. Usage is simple: just name the container with `--name` when creating it.
However, Docker's DNS only works on user-defined networks; the default bridge network cannot use it, which is why we need a user-defined network.
First create the user-defined network `custom_network` with the `bridge` driver (it was removed in section 2.4, so it is recreated here and may receive a new subnet), then create two containers on it.
$ docker run -itd --name custom_bbox01 --net custom_network busybox
$ docker run -itd --name custom_bbox02 --net custom_network busybox
Use `docker network inspect custom_network` to find the two containers' IP addresses.
"Containers": {
"40cf28b2261e603081d4a04768bae8e2144c078899836c1a915d239fabdd5ccd": {
"Name": "custom_bbox02",
"EndpointID": "58ae59cee42c7d15d471c1f507e792ca08b1c2e72063408d0de1280c72526b4a",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"aad302e8212dca15ce2c8697b1a0ea76642fdb4d9e4d2b5c7b9f0bb72ba14f3e": {
"Name": "custom_bbox01",
"EndpointID": "a09d9c02e8873a42fdf988ee1b1fbb49c169ea4c9b1fd278e091fb616f23d205",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
}
Now test communication between the two containers, first by IP address and then by container name.
$ docker exec -ti custom_bbox01 ping 172.19.0.3
PING 172.19.0.3 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.121 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.057 ms
64 bytes from 172.19.0.3: seq=2 ttl=64 time=0.057 ms
64 bytes from 172.19.0.3: seq=3 ttl=64 time=0.056 ms
^C
--- 172.19.0.3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.056/0.072/0.121 ms
$ docker exec -ti custom_bbox01 ping custom_bbox02
PING custom_bbox02 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.083 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.056 ms
64 bytes from 172.19.0.3: seq=2 ttl=64 time=0.056 ms
64 bytes from 172.19.0.3: seq=3 ttl=64 time=0.057 ms
^C
--- custom_bbox02 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.056/0.063/0.083 ms
The tests show that two containers on the same user-defined network can communicate, and that they can do so by container name.
Now, what if a container on the `bridge` network needs to communicate with a container on the `custom_network` network? That is also very simple: connect the `bridge`-network container to `custom_network`.
$ docker network connect custom_network default_bbox01
$ docker exec -ti custom_bbox01 ping default_bbox01
PING default_bbox01 (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: seq=0 ttl=64 time=0.088 ms
64 bytes from 172.19.0.4: seq=1 ttl=64 time=0.057 ms
64 bytes from 172.19.0.4: seq=2 ttl=64 time=0.056 ms
64 bytes from 172.19.0.4: seq=3 ttl=64 time=0.070 ms
^C
--- default_bbox01 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.056/0.067/0.088 ms
$ docker exec -ti default_bbox01 ping custom_bbox02
PING custom_bbox02 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.103 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.056 ms
64 bytes from 172.19.0.3: seq=2 ttl=64 time=0.054 ms
64 bytes from 172.19.0.3: seq=3 ttl=64 time=0.056 ms
^C
--- custom_bbox02 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.067/0.103 ms