Enterprise Operations with Containers: Docker Networking

1. Docker native networks

  • Docker's image system is widely praised, but networking remains one of its weaker areas.

  • After installation, Docker automatically creates three networks: bridge, host, and none.

  • List them with: docker network ls

[root@server2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ff3156699943        bridge              bridge              local
97d148277d30        host                host                local
d727708902c7        none                null                local
  1. At install time Docker creates a Linux bridge named docker0, and new containers are automatically attached to it.
  • In bridge mode a container has no public IP; only the host can reach it directly, and it is invisible to external hosts. The container reaches the outside world through the host's NAT rules.
[root@server2 ~]# docker run -d --name demo nginx:latest 
25a42f4bba1467ba6e5ea8599f262cd8f6d722c8f9f1cfb5952ceb4c6fb81203
[root@server2 ~]# docker inspect demo	##check the assigned IP and gateway

            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
[root@server2 ~]# docker rm -f demo
[root@server2 ~]# docker run -it  --name demo1 busybox
/ # ip addr		##can also be checked from inside the container
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
[root@server2 ~]# curl 172.17.0.2		##test connectivity
[root@server2 ~]# bridge link			##shows the veth interface attached to docker0
5: vethfc9471a state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 

[root@server2 ~]# yum install bridge-utils.x86_64 -y		##bridge inspection tool
[root@server2 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242e47cfbb4	no		vethfc9471a
  2. host mode is selected at container creation with --network=host.
    No bridging is used; the container sits directly on the same network as the host.
    host mode lets the container share the host's network stack; the benefit is that external hosts can talk to the container directly, but the container's network loses isolation.
[root@server2 ~]# docker run -d --name demo --network host nginx:v1
7f7adfa815e0390b121084e36f4edf8c5d8de5c3d0edb910a956c0d5e7a9c67c
[root@server2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
7f7adfa815e0        nginx:v1            "/docker-entrypoint.…"   6 seconds ago       Up 6 seconds                            demo
[root@server2 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
docker0		8000.0242e47cfbb4	no	
[root@server2 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:22 brd ff:ff:ff:ff:ff:ff
    inet 172.25.25.2/24 brd 172.25.25.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fee9:2622/64 scope link 
       valid_lft forever preferred_lft forever
[root@server2 ~]# docker run -it --rm --network host busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 52:54:00:e9:26:22 brd ff:ff:ff:ff:ff:ff
    inet 172.25.25.2/24 brd 172.25.25.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fee9:2622/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:e4:7c:fb:b4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e4ff:fe7c:fbb4/64 scope link 
       valid_lft forever preferred_lft forever
  3. none mode disables networking entirely; the container has only the lo interface. Select it at creation with --network=none.
    This network is useful for workloads that should not be reachable by anyone.
[root@server2 ~]# docker run -it --rm --network none busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # 

2. Docker custom networks

  • For custom network modes, Docker provides three drivers:
    bridge
    overlay
    macvlan
  • The bridge driver behaves like the default bridge network mode but adds some new capabilities;
    overlay and macvlan are used to build cross-host networks.
    Custom networks are recommended for controlling which containers may communicate with each other, and they also provide automatic DNS resolution of container names to IP addresses.

Create a custom bridge.
User-defined networks have name resolution, so containers can ping each other by name:

[root@server2 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@server2 ~]# docker network create --help
[root@server2 ~]# docker network create mynet1
4dfb2e9905f89f9e3f1d5b6b896d41b0bc3de6e79b3fb375d3ca8aa1f3b1d116
[root@server2 ~]# docker inspect mynet1
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",		##子网
                    "Gateway": "172.18.0.1"
                }
            ]

[root@server2 ~]# docker run -it --name demo1 --network mynet1 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping demo1
PING demo1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.053 ms
^C
--- demo1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.053/0.058/0.063 ms
 / # 
[root@server2 ~]# docker run -it --name demo2 --network mynet1 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping demo1
PING demo1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.116 ms
^C
--- demo1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.116/0.116/0.116 ms
/ # 
  • You can also define your own address range by passing --subnet and --gateway at creation time.
  • The --ip option assigns a fixed IP to a container, but only on a user-defined bridge; the default bridge mode does not support it. Containers on the same bridge can reach each other.
[root@server2 ~]# docker network rm mynet1
mynet1
[root@server2 ~]# docker container prune		##remove stopped containers
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
[root@server2 ~]# docker network create --subnet 172.20.0.0/24 --gateway 172.20.0.1 mynet1
9d28602f328756c9a41ce51f5dbbe685054a796268da4154565b6ed8476fe55e
[root@server2 ~]# docker network inspect mynet1

        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/24",
                    "Gateway": "172.20.0.1"
                }
            ]

With the subnet defined as above, you can assign an IP when running a container:

[root@server2 ~]# docker run -it --name demo1 --network mynet1 --ip 172.20.0.200 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:14:00:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.200/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping demo1
PING demo1 (172.20.0.200): 56 data bytes
64 bytes from 172.20.0.200: seq=0 ttl=64 time=0.045 ms
^C
--- demo1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.045/0.045/0.045 ms
/ # 
  • Containers attached to different bridges cannot communicate with each other;
    Docker is designed to isolate separate networks from one another.
[root@server2 ~]# docker run -it --name demo1 --network mynet1 --ip 172.20.0.200 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:14:00:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.200/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
[root@server2 ~]# docker run -it --name demo2 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping demo1
ping: bad address 'demo1'
  • To let containers on two different bridges communicate,
    use the docker network connect command to give demo2 an additional interface on mynet1:
/ # 
[root@server2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5ce636af4ba1        busybox             "sh"                2 minutes ago       Up 2 minutes                            demo2
f69ac1f092a7        busybox             "sh"                2 minutes ago       Up 2 minutes                            demo1
[root@server2 ~]# docker network connect mynet1 demo2
[root@server2 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-9d28602f3287		8000.02429c3fb2c7	no		veth415e67c
							vethb979bdb
docker0		8000.0242e47cfbb4	no		veth26c176b
[root@server2 ~]# docker attach demo2
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
22: eth1@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.2/24 brd 172.20.0.255 scope global eth1
       valid_lft forever preferred_lft forever
/ # ping demo1
PING demo1 (172.20.0.200): 56 data bytes
64 bytes from 172.20.0.200: seq=0 ttl=64 time=0.122 ms
^C
--- demo1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.122/0.122/0.122 ms
/ # 

3. Docker container communication

  • Besides IP addresses, containers can communicate by container name.
    Since Docker 1.10, an embedded DNS server is built in.
    DNS resolution only works on user-defined networks.
    Use the --name option when starting a container to set its name.
  • A joined container is a rather special network mode.
    Select it at creation with --network=container:<name>, where <name> is a running container.
    The two containers then share one network stack; the benefit is fast communication over localhost, but they must not bind conflicting ports.
[root@server2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
f69ac1f092a7        busybox             "sh"                11 minutes ago      Up 11 minutes                           demo1
[root@server2 ~]# docker run -it --name demo3 --network container:demo1 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:14:00:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.200/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
[root@server2 ~]# 
[root@server2 ~]# docker attach demo1
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:14:00:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.200/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
  • Docker containers in this mode share a single network stack, so the two containers can communicate quickly and efficiently over localhost.
  • --link can be used to link two containers.
    Format: --link <name or id>:alias
    where name/id identify the source container and alias is the source container's name as seen through the link.
[root@server2 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@server2 ~]# docker run -d --name  nginx nginx
64c84d7f690c6ac88b5b02ba2ac7999199c410256ecaf9467ccdab888b1635f0
[root@server2 ~]# docker ps 
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
64c84d7f690c        nginx               "/docker-entrypoint.…"   18 seconds ago      Up 17 seconds       80/tcp              nginx
[root@server2 ~]# docker run -it --name demo --link nginx:web busybox
/ # env
HOSTNAME=c87cfc906b0e
SHLVL=1
WEB_PORT=tcp://172.17.0.2:80
HOME=/root
WEB_NAME=/demo/web
WEB_PORT_80_TCP_ADDR=172.17.0.2
WEB_PORT_80_TCP_PORT=80
WEB_PORT_80_TCP_PROTO=tcp
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WEB_PORT_80_TCP=tcp://172.17.0.2:80
WEB_ENV_PKG_RELEASE=1~buster
WEB_ENV_NGINX_VERSION=1.21.0
WEB_ENV_NJS_VERSION=0.5.3
PWD=/
/ # ping nginx
PING nginx (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.178 ms
^C
--- nginx ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.178/0.178/0.178 ms
/ # ping web
PING web (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.076 ms
^C
--- web ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.076/0.076/0.076 ms
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	web 64c84d7f690c nginx
172.17.0.3	c87cfc906b0e
/ # 

When the linked container changes, only the /etc/hosts entry is updated; the environment variables are not.

  • Containers reach the outside world through iptables SNAT.

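The SNAT path can be observed directly on the host. A minimal sketch, assuming a stock Docker install with the default 172.17.0.0/16 bridge subnet (run as root; exact rule text varies by Docker version):

```shell
# Show the NAT rules Docker installs for outbound container traffic.
iptables -t nat -S POSTROUTING
# A default install typically contains a masquerade rule along the lines of:
#   -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# i.e. packets leaving docker0 containers are source-NATed to the host's IP.

# The SNAT path also relies on IP forwarding being enabled on the host:
sysctl net.ipv4.ip_forward
```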

  • How the outside world reaches a container:
    port mapping, with the -p option specifying the mapped port.
[root@server2 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@server2 ~]# docker run -d --name demo -p 80:80 nginx:v1
a281bb6fc8cf4f734e08bf24759222593504ec9d51d4d59e43cc77f757fa6fb8
[root@server2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
a281bb6fc8cf        nginx:v1            "/docker-entrypoint.…"   8 seconds ago       Up 6 seconds        0.0.0.0:80->80/tcp   demo
[root@server2 ~]# docker port demo
80/tcp -> 0.0.0.0:80
  • Inbound access to containers relies on docker-proxy and iptables DNAT:
    when the host accesses its own containers, iptables DNAT is used;
    when external hosts access a container, or containers reach each other via the mapped port, docker-proxy handles it.
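With a port-mapped container such as the `-p 80:80` example above, both mechanisms can be observed. A hedged sketch (run as root on the Docker host; exact rule text varies by version):

```shell
# DNAT rule created for the published port (Docker keeps these in the DOCKER chain):
iptables -t nat -S DOCKER
# Typically something like:
#   -A DOCKER ! -i docker0 -p tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80

# The userspace proxy process started for the same mapping:
ps ax | grep [d]ocker-proxy
```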

4. Cross-host container networking

  • Cross-host networking solutions:
    Docker-native overlay and macvlan;
    third-party flannel, weave, and calico.
  • How do so many network solutions integrate with Docker?
    libnetwork, Docker's container networking library;
    CNM (Container Network Model), a model that abstracts container networking.
  • CNM has three kinds of components:
    Sandbox: the container's network stack, including its interfaces, DNS, and routing table (a network namespace).
    Endpoint: attaches a sandbox to a network (a veth pair).
    Network: a group of endpoints; endpoints on the same network can communicate.
  • The macvlan network solution:
    a NIC virtualization technique provided by the Linux kernel;
    no Linux bridge is needed, the physical interface is used directly, so performance is excellent.
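The same kernel facility that Docker's macvlan driver uses can be exercised directly with iproute2. A sketch assuming a physical interface eth0 and root privileges (the interface name and address are illustrative):

```shell
# Create a macvlan interface in bridge mode on top of the physical NIC.
ip link add link eth0 name macvlan0 type macvlan mode bridge
# Give it an address and bring it up; it talks on the wire with its own MAC.
ip addr add 172.30.0.254/24 dev macvlan0
ip link set macvlan0 up
# Clean up when done.
ip link del macvlan0
```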
[root@server1 ~]# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:2e:de:d5 brd ff:ff:ff:ff:ff:ff
    inet 172.25.15.1/24 brd 172.25.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe2e:ded5/64 scope link 
       valid_lft forever preferred_lft for
[root@server1 ~]# ip link set eth0 promisc on
[root@server1 ~]# docker network ls
[root@server1 ~]# docker network create mynet1 -d macvlan --subnet 172.30.0.0/24 --gateway 172.30.0.1 -o parent=eth0
##the .30 subnet here must not conflict with an existing network interface; check with docker network inspect
2a567a16f457f0a375b510bfc09a320942db51c258485ddf9d37bbd8264782e9
[root@server1 ~]# docker network ls

2a567a16f457        mynet1                      macvlan             local
[root@server1 ~]# docker network inspect mynet1 

        "Scope": "local",
        "Driver": "macvlan",		##此处已经加上了
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.30.0.0/24",
                    "Gateway": "172.30.0.1"
                }
            ]
        }
[root@server1 ~]# docker run -it --rm --network mynet1 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
239: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:1e:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.2/24 brd 172.30.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Create the same macvlan network on each of the two Docker hosts;
then run a container on the second host to communicate across hosts.
Container interfaces connect directly to the host NIC; no NAT or port mapping is needed.

[root@server2 ~]# ip link set eth0 promisc on
[root@server2 ~]# docker network ls
[root@server2 ~]# docker network create mynet1 -d macvlan --subnet 172.30.0.0/24 --gateway 172.30.0.1 -o parent=eth0	##same as on the first host
ba14e45bdd05cdfc0831a731e726054b5dc6da65b485a92ea4af84656f6bf011
[root@server2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7e43e3ed2ec8        bridge              bridge              local
97d148277d30        host                host                local
ba14e45bdd05        mynet1              macvlan             local
d727708902c7        none                null                local
[root@server2 ~]# docker network inspect mynet1
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.30.0.0/24",
                    "Gateway": "172.30.0.1"
                }
            ]
        },

[root@server2 ~]# docker run -it --rm --network mynet1 --ip 172.30.0.11 busybox
	##assign a non-conflicting IP when running the container; the two hosts' containers can then communicate
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:1e:00:0b brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.11/24 brd 172.30.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.30.0.11
PING 172.30.0.11 (172.30.0.11): 56 data bytes
64 bytes from 172.30.0.11: seq=0 ttl=64 time=0.069 ms
^C
--- 172.30.0.11 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.069/0.069/0.069 ms
/ # ping 172.30.0.2		##the container on the other host is reachable by default
PING 172.30.0.2 (172.30.0.2): 56 data bytes
64 bytes from 172.30.0.2: seq=0 ttl=64 time=0.515 ms
^C
--- 172.30.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.515/0.515/0.515 ms
/ # 

In production you will often run several networks at once, and a single VLAN quickly becomes insufficient; you can add NICs to provide more addresses for communication.

Add a NIC to each of the two Docker hosts and enable promiscuous mode:

[root@server1 ~]# ip addr show eth1 
273: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:54:00:8a:af:21 brd ff:ff:ff:ff:ff:ff
[root@server1 ~]# ip link set eth1 promisc on 	##enable promiscuous mode
[root@server1 ~]# ip link set up eth1			##bring the NIC up
[root@server1 ~]# ip addr show eth1 		##confirm promiscuous mode is now on
273: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:af:21 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe8a:af21/64 scope link 
       valid_lft forever preferred_lft forever

Do the same on the second host:

[root@server2 ~]# ip addr show eth1
6: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 52:54:00:c9:47:4b brd ff:ff:ff:ff:ff:ff
[root@server2 ~]# ip link set eth1 promisc on 
[root@server2 ~]# ip link set up eth1
[root@server2 ~]# ip addr show eth1 
6: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:47:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fec9:474b/64 scope link 
       valid_lft forever preferred_lft forever

On the first host, create a network segment on the newly added NIC, then run a container with a fixed IP, specifying eth1 as the parent interface:

[root@server1 ~]# docker network create mynet2 -d macvlan --subnet 172.40.0.0/24 --gateway 172.40.0.1 -o parent=eth1
11add8145a47d25e247d6e0a7685809090ebd1c1166ec0017b78e0495bca6c8c
[root@server1 ~]# docker run -it --rm --network mynet2 --ip 172.40.0.10 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
298: eth0@if273: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:28:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.40.0.10/24 brd 172.40.0.255 scope global eth0
       valid_lft forever preferred_lft forever

On the second host, create the same network segment on its new NIC and run a container with a fixed IP, again with eth1 as the parent:

[root@server2 ~]# docker network create mynet2 -d macvlan --subnet 172.40.0.0/24 --gateway 172.40.0.1 -o parent=eth1
4a8c552ca6da9c076d0a4d0c5a4a4a831f464f93647d34a0ea282c6a38e81d79
[root@server2 ~]# docker network ls
[root@server2 ~]# docker run -it --rm --network mynet2 --ip 172.40.0.11 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:28:00:0b brd ff:ff:ff:ff:ff:ff
    inet 172.40.0.11/24 brd 172.40.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.40.0.10
PING 172.40.0.10 (172.40.0.10): 56 data bytes
64 bytes from 172.40.0.10: seq=0 ttl=64 time=0.565 ms
64 bytes from 172.40.0.10: seq=1 ttl=64 time=0.460 ms
^C
--- 172.40.0.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.460/0.512/0.565 ms
/ # 

When requirements keep growing, adding physical NICs alone is quite limiting.

macvlan takes exclusive ownership of a host NIC, but VLAN sub-interfaces make multiple macvlan networks possible. VLANs divide a physical layer-2 network into up to 4094 isolated logical networks, with VLAN IDs ranging from 1 to 4094.
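What `-o parent=eth1.1` does under the hood is use an 802.1Q VLAN sub-interface. The equivalent manual steps, as a sketch (VLAN ID 10 and the network name are chosen for illustration; requires root and the 8021q kernel module):

```shell
# Create a VLAN sub-interface of eth1 tagged with VLAN ID 10, and bring it up.
ip link add link eth1 name eth1.10 type vlan id 10
ip link set eth1.10 up
# Each such sub-interface can then serve as the parent of its own macvlan network:
# docker network create -d macvlan --subnet 172.60.0.0/24 -o parent=eth1.10 mynet10
```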

[root@server1 ~]# docker network create mynet3 -d macvlan --subnet 172.50.0.0/24 --gateway 172.50.0.1 -o parent=eth1.1
291b7222ee8e4a1583fc5597da579bde864c6feb7537942e606aecd371f9d163
[root@server1 ~]# docker run -it --rm --network mynet3 --ip 172.50.0.10 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
312: eth0@if303: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:32:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.50.0.10/24 brd 172.50.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Do the same on the second host to establish communication:

[root@server2 ~]# docker network create mynet3 -d macvlan --subnet 172.50.0.0/24 --gateway 172.50.0.1 -o parent=eth1.1	##VLAN sub-interface
19c05673d369c2c11a8ed68f8bc679a36ad5a91cab0c47006d2cef7244c00164
[root@server2 ~]# ip addr show 
8: eth1.1@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 52:54:00:c9:47:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fec9:474b/64 scope link 
       valid_lft forever preferred_lft forever

[root@server2 ~]# docker run -it --rm --network mynet3 --ip 172.50.0.11 busybox
/ # ping 172.50.0.10
PING 172.50.0.10 (172.50.0.10): 56 data bytes
64 bytes from 172.50.0.10: seq=0 ttl=64 time=0.537 ms
^C
--- 172.50.0.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.537/0.537/0.537 ms
/ # 

Isolation and connectivity between macvlan networks:
macvlan networks are isolated at layer 2, so containers on different macvlan networks cannot communicate directly; they can be connected at layer 3 through a gateway.
Docker itself imposes no restrictions here; manage them just like traditional VLAN networks.
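One caveat worth noting: a host cannot reach containers on a macvlan network attached to its own NIC. A common workaround, sketched here with the same mynet1 addressing as above (the shim name and the .254 address are illustrative), is a host-side macvlan "shim" interface; with forwarding enabled, the host can then also route between macvlan subnets at layer 3:

```shell
# Host-side macvlan shim so the host itself can reach mynet1 containers.
ip link add link eth0 name mynet1-shim type macvlan mode bridge
ip addr add 172.30.0.254/24 dev mynet1-shim
ip link set mynet1-shim up
# With IP forwarding on, the host can route between macvlan subnets.
sysctl -w net.ipv4.ip_forward=1
```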

5. Summary

This chapter covered the configuration and implementation of Docker's various network models and compared them in terms of IP management, isolation, and performance; hopefully this helps with network selection in real production environments.

docker network subcommands:
connect     connect a container to a network
create      create a network
disconnect  disconnect a container from a network
inspect     display detailed information about a network
ls          list all networks
rm          remove a network
