Docker's four network modes (bridge, host, none, container) and creating a new network

1. bridge

1.1 Overview

This is the default network mode.

When Docker is installed, a virtual bridge called docker0 is created on the host; at the kernel level it is connected to the host's physical or virtual NICs. Docker assigns a default IP address and subnet mask to the docker0 bridge interface.

Every time a container is created with --network bridge (or with no network specified), a virtual interface (veth) is created on docker0 and paired with a virtual NIC eth0 inside the container. Together they form a veth pair, which the container uses to communicate with the host and with other containers. The container gets its own IP address, on the same subnet as docker0.

(Figure: bridge network)
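
Running a container without --network is the same as running it with --network bridge explicitly. A minimal sketch (the nginx image is only an illustration; names and interface lists will differ on your host): after starting a container, the host-side veth interfaces currently attached to docker0 can be listed with ip link.

[root@docker ~]# docker run -d --network bridge nginx
[root@docker ~]# ip link show master docker0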

1.2 Viewing with inspect and ifconfig

Inspect the network named bridge: its driver is bridge, and its bridge name option is "com.docker.network.bridge.name": "docker0".

[root@docker ~]# 
[root@docker ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "11634c75f412e9575d726a31eef244b51ac2a7471c831983457b04dc68c8be01",
        "Created": "2022-04-29T19:18:30.201026849+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7b576685d9bcd34f2d352944638f3e7ed9c0c71402605b973fa9ce99574aff88": {
                "Name": "serene_lumiere",
                "EndpointID": "c36ac727e1b374fd123251ef746617646e5572ae5f7fbbe48886e5305cbd3718",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "b209bbb46bbd5a56467da498014bdcc0f8c75a4b48d0410291b2b16e2f9afa70": {
                "Name": "infallible_nobel",
                "EndpointID": "5bc9b6272a2f5ec955aa0c8c545d805bf87156738701ed305be2ffdce09abcbf",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@docker ~]# 

ifconfig shows the corresponding docker0 interface; its network is 172.17.0.x.

[root@docker ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:f8ff:fe5d:e5f6  prefixlen 64  scopeid 0x20<link>
        ether 02:42:f8:5d:e5:f6  txqueuelen 0  (Ethernet)
        RX packets 31410  bytes 1629911 (1.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 50680  bytes 316212762 (301.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.23.31  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::6b36:6f58:3a2c:83b0  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f4:78:cc  txqueuelen 1000  (Ethernet)
        RX packets 1544834  bytes 2071564590 (1.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 547623  bytes 38637589 (36.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 486054  bytes 603066780 (575.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 486054  bytes 603066780 (575.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@docker ~]# 

1.3 Example

Start a rockylinux container and check its IP information. You can see interface 161: eth0@if162, i.e. the container-side interface has index 161 and its peer is host interface 162.

[root@docker ~]# 
[root@docker ~]# docker run -it rockylinux bash
[root@ec946414948e /]# 
[root@ec946414948e /]# yum install -y iproute
[root@ec946414948e /]# 
[root@ec946414948e /]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
161: eth0@if162: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
[root@ec946414948e /]# 

Now check the IP information on the host. You can see interface 162: vethb915dc1@if161, i.e. the host-side interface has index 162 and its peer is the container's interface 161; together they form the veth pair.

[root@docker ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f4:78:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.23.31/24 brd 192.168.23.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6b36:6f58:3a2c:83b0/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f8:5d:e5:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f8ff:fe5d:e5f6/64 scope link 
       valid_lft forever preferred_lft forever
162: vethb915dc1@if161: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether a2:73:fb:d8:fe:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a073:fbff:fed8:fee0/64 scope link 
       valid_lft forever preferred_lft forever
[root@docker ~]# 
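
As a quick cross-check (only a sketch; interface indexes and IPs will differ on your host), the host-side veth should show up as attached to docker0, and the two bridged containers listed in the earlier inspect output can reach each other by IP. Inside the rockylinux container, ping may first require yum install -y iputils.

[root@docker ~]# ip link show master docker0
[root@ec946414948e /]# ping -c 2 172.17.0.3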

2. host

2.1 Overview

Specified with --network host. The container does not get its own Network Namespace: no virtual NIC is created for it and it has no independent IP. It shares the host's Network Namespace and uses the host's IP and ports.

(Figure: host network)

2.2 Example

For a container using the host network, port mapping with -p hostPort:containerPort has no effect.

[root@docker ~]# 
[root@docker ~]# docker run -d --network host nginx
7eee507a29c52525b60da346e3a1b493350366c64014a19189e43fe285ba4813
[root@docker ~]# 
[root@docker ~]# docker ps
CONTAINER ID   IMAGE        COMMAND                  CREATED          STATUS          PORTS     NAMES
7eee507a29c5   nginx        "/docker-entrypoint.…"   30 seconds ago   Up 29 seconds             nice_robinson
[root@docker ~]# 

Inspect the container's network information:

[root@docker ~]# 
[root@docker ~]# docker inspect 7eee507a29c5 | tail -n 20
            "Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "fc720e541b067cd7533853173203414db0c32c8dbf4fc505ddfbc5ff00a66bf6",
                    "EndpointID": "c1d30b98a3ee62495b5bc467d551f4b67f7a9682df591ce28e554594f4d664fe",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@docker ~]# 

Visiting http://192.168.23.31 shows the nginx welcome page.
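
To double-check from the host that nginx is bound directly to the host's port 80 (a sketch; assumes ss and curl are available on the host):

[root@docker ~]# ss -tlnp | grep ':80'
[root@docker ~]# curl -I http://192.168.23.31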

3. none

3.1 Overview

Specified with --network none. The container has its own Network Namespace, but no network configuration is performed for it: no veth pair, no bridge attachment, no routes, no IP. Only the 127.0.0.1 loopback network exists.

We have to add a NIC, configure an IP address, and so on for the container ourselves.
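
For example, a none-mode container can be wired up by hand from the host with a veth pair. The following is only a sketch: the container name mynone, the interface names, and the address 172.17.0.100/16 are all assumptions; pick a free address in the subnet of the bridge you attach to.

# expose the container's network namespace to the ip netns tool (assumed container name: mynone)
pid=$(docker inspect -f '{{.State.Pid}}' mynone)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/mynone

# create a veth pair and attach the host end to docker0
ip link add veth-host type veth peer name veth-cont
ip link set veth-host master docker0
ip link set veth-host up

# move the other end into the container and configure it
ip link set veth-cont netns mynone
ip netns exec mynone ip addr add 172.17.0.100/16 dev veth-cont
ip netns exec mynone ip link set veth-cont up
ip netns exec mynone ip route add default via 172.17.0.1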

3.2 Example

With the network set to none, the container has no outside connectivity:

[root@docker ~]# docker run -it --network none rockylinux bash
[root@3ddfae5c0474 /]# 
[root@3ddfae5c0474 /]# yum install -y net-tools
Rocky Linux 8 - AppStream                                                                                                                  0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'appstream':
  - Curl error (6): Couldn't resolve host name for https://mirrors.rockylinux.org/mirrorlist?arch=x86_64&repo=AppStream-8 [Could not resolve host: mirrors.rockylinux.org]
Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for https://mirrors.rockylinux.org/mirrorlist?arch=x86_64&repo=AppStream-8 [Could not resolve host: mirrors.rockylinux.org]
[root@3ddfae5c0474 /]# 

Inspect the container: it has no Gateway and no IPAddress.

[root@docker ~]# docker inspect 3ddfae5c0474 | tail -n 20
            "Networks": {
                "none": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "fa2400985f507d7991c960a857e42c00e348c11d81a90805089e3a2da98cc8d6",
                    "EndpointID": "53d4abd5830e36fe8d7a18d73d1d28830451087b82d3508bc71453e38765612c",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@docker ~]# 

4. container

4.1 Overview

Specified with --network container:<name|id>. The new container does not get its own virtual NIC or independent IP; it uses the IP and ports of another container, referenced by name or ID. Apart from networking, everything else, such as the filesystem and process list, remains isolated between the two containers.

(Figure: container network)

4.2 Example

Start an Alpine Linux container. Alpine Linux is small, simple, and secure, yet provides the essentials, at less than 6 MB in size.

[root@docker ~]# 
[root@docker ~]# docker run -it --name alpine1 alpine bin/sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
59bf1c3509f3: Pull complete 
Digest: sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300
Status: Downloaded newer image for alpine:latest
/ # 
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
148: eth0@if149: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Now start another container with --network container:alpine1. The two containers share the same eth0 virtual NIC:

[root@docker ~]# docker run -it --name alpine2 --network container:alpine1 alpine /bin/sh
/ # 
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
148: eth0@if149: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 
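
Because the two containers share one network namespace, a service listening inside alpine1 is reachable from alpine2 over localhost. A minimal sketch using BusyBox nc (port 8080 is an arbitrary choice; this assumes the nc applet in the alpine image):

In alpine1:
/ # nc -l -p 8080

In alpine2:
/ # echo hello | nc 127.0.0.1 8080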

Stop the alpine1 container and run ip addr in alpine2 again: the eth0 virtual NIC is gone.

/ # 
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # 

5. Creating a new network

5.1 Create a network

Every new network created with the default bridge driver gets its own virtual bridge interface on the host, named br-<network ID>.

[root@docker ~]# 
[root@docker ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
11634c75f412   bridge    bridge    local
fc720e541b06   host      host      local
fa2400985f50   none      null      local
[root@docker ~]# 
[root@docker ~]# docker network create my_network
d35732fae91cf2b92c5122993855cee15769a18d530b52597aeab1efe9e43702
[root@docker ~]# 
[root@docker ~]# docker network ls
NETWORK ID     NAME         DRIVER    SCOPE
11634c75f412   bridge       bridge    local
fc720e541b06   host         host      local
d35732fae91c   my_network   bridge    local
fa2400985f50   none         null      local
[root@docker ~]# 

Check the host's network interfaces: br-d35732fae91c is the virtual bridge for the my_network network we just created.

[root@docker ~]# 
[root@docker ~]# ifconfig
br-d35732fae91c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:43ff:fe57:d16e  prefixlen 64  scopeid 0x20<link>
        ether 02:42:43:57:d1:6e  txqueuelen 0  (Ethernet)
        RX packets 1657284  bytes 2211539610 (2.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 595184  bytes 42131351 (40.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:f8ff:fe5d:e5f6  prefixlen 64  scopeid 0x20<link>
        ether 02:42:f8:5d:e5:f6  txqueuelen 0  (Ethernet)
        RX packets 53213  bytes 2539819 (2.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 85028  bytes 391917388 (373.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.23.31  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::6b36:6f58:3a2c:83b0  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:f4:78:cc  txqueuelen 1000  (Ethernet)
        RX packets 1657284  bytes 2211539610 (2.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 595184  bytes 42131351 (40.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 486054  bytes 603066780 (575.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 486054  bytes 603066780 (575.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@docker ~]# 
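
docker network create can also be given an explicit driver, subnet, and gateway instead of letting Docker pick them; the values below are just an example:

[root@docker ~]# docker network create --driver bridge --subnet 172.20.0.0/16 --gateway 172.20.0.1 my_network2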

5.2 Running containers on the new network

[root@docker ~]# 
[root@docker ~]# docker run -it --name alpine1 --network my_network alpine /bin/sh
/ # 

Now create another container named alpine2:

[root@docker ~]# 
[root@docker ~]# docker run -it --name alpine2 --network my_network alpine /bin/sh
/ #

From the alpine1 container, pinging alpine2 by name works directly, because a user-defined network already maintains the hostname-to-IP mapping (through Docker's embedded DNS).

/ # ping alpine2
PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.180 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.077 ms

However, two containers created on the default network named bridge cannot ping each other by container name.
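
A running container can also be attached to a user-defined network after the fact with docker network connect, after which name-based resolution works on that network. A sketch (the container name is a placeholder; the second command optionally detaches it from the default bridge):

[root@docker ~]# docker network connect my_network <container>
[root@docker ~]# docker network disconnect bridge <container>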

6. The docker network command

The syntax is as follows:

[root@docker ~]# 
[root@docker ~]# docker network --help

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
[root@docker ~]# 

6.1 Deleting a custom network

[root@docker ~]# 
[root@docker ~]# docker network ls
NETWORK ID     NAME         DRIVER    SCOPE
11634c75f412   bridge       bridge    local
fc720e541b06   host         host      local
d35732fae91c   my_network   bridge    local
fa2400985f50   none         null      local
[root@docker ~]# 
[root@docker ~]# 
[root@docker ~]# docker network rm my_network
my_network
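
To remove every network that no container is currently using (the built-in bridge, host, and none networks are never removed), there is also docker network prune:

[root@docker ~]# docker network prune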
[root@docker ~]# 