Docker Advanced: Docker Networking

I. What Is It

1. Default network state when Docker is not started

[root@localhost ~]# docker images
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?  # this means the Docker daemon is not running
[root@localhost ~]# 

At this point the host has three network interfaces:

  1. ens33
  2. lo
  3. virbr0

virbr0

During a CentOS 7 installation, if the virtualization-related services were selected, then after the system boots you will find a bridge-connected, private-address NIC named virbr0 (it has a fixed default IP of 192.168.122.1). It serves as a bridge for virtual machines, providing NAT access to the external network for the VM NICs attached to it.

This interface only appears because the libvirt service was included when we installed the system earlier. If you don't need it, you can remove the libvirtd packages directly:
yum remove libvirt-libs.x86_64
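If you would rather keep the libvirt packages installed, a gentler alternative (my own suggestion, not part of the original demo) is to stop and disable the service and tear the bridge down by hand:

systemctl stop libvirtd       # stop the libvirt daemon
systemctl disable libvirtd    # keep it from starting at boot
ip link set virbr0 down       # take the bridge offline immediately
brctl delbr virbr0            # delete the bridge (brctl is in the bridge-utils package)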

[root@localhost ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.174.138  netmask 255.255.255.0  broadcast 192.168.174.255
        inet6 fe80::1747:11ea:1bb4:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:33:b8:13  txqueuelen 1000  (Ethernet)
        RX packets 126  bytes 12771 (12.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 117  bytes 14598 (14.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 68  bytes 5920 (5.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 68  bytes 5920 (5.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:67:7c:44  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:33:b8:13 brd ff:ff:ff:ff:ff:ff
    inet 192.168.174.138/24 brd 192.168.174.255 scope global noprefixroute dynamic ens33
       valid_lft 1692sec preferred_lft 1692sec
    inet6 fe80::1747:11ea:1bb4:820c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:67:7c:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:67:7c:44 brd ff:ff:ff:ff:ff:ff
[root@localhost ~]# 


2. Network state after Docker starts

Starting Docker creates a virtual bridge named docker0.

[root@localhost ~]# systemctl start docker  # start docker
[root@localhost ~]# docker images
REPOSITORY                                                TAG          IMAGE ID       CREATED         SIZE
zzyy_docker                                               1.6          c366639d316c   13 hours ago    682MB
centosjava8                                               1.5          52dae5c93423   16 hours ago    1.19GB
<none>                                                    <none>       70ee17621a10   16 hours ago    231MB
192.168.174.133:5000/zzyyubuntu                           1.2          04ea4a10f57c   10 days ago     109MB
registry.cn-hangzhou.aliyuncs.com/testshanghai/myubuntu   1.3          8d4088598f0b   10 days ago     176MB
tomcat                                                    latest       fb5657adc892   3 months ago    680MB
mysql                                                     5.7          c20987f18b13   3 months ago    448MB
rabbitmq                                                  management   6c3c2a225947   4 months ago    253MB
registry                                                  latest       b8604a3fe854   5 months ago    26.2MB
ubuntu                                                    latest       ba6acccedd29   6 months ago    72.8MB
centos                                                    7            eeb6ee3f44bd   7 months ago    204MB
centos                                                    latest       5d0da3dc9764   7 months ago    231MB
redis                                                     6.0.8        16ecd2772934   17 months ago   104MB
billygoo/tomcat8-jdk8                                     latest       30ef4019761d   3 years ago     523MB
java                                                      8            d23bdf5b1b1b   5 years ago     643MB
[root@localhost ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:d3:25:c1:b6  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.174.138  netmask 255.255.255.0  broadcast 192.168.174.255
        inet6 fe80::1747:11ea:1bb4:820c  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:33:b8:13  txqueuelen 1000  (Ethernet)
        RX packets 284  bytes 23845 (23.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 182  bytes 25886 (25.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 68  bytes 5920 (5.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 68  bytes 5920 (5.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:67:7c:44  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost ~]# 

Docker uses docker0 to communicate between the host and containers, and between containers themselves.
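docker0 is an ordinary Linux bridge, so you can examine it with the standard tooling; a quick sketch (brctl assumes the bridge-utils package is installed):

ip link show docker0          # the bridge device itself
brctl show docker0            # lists the container veth interfaces attached to it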

Command to list Docker's networks

Docker creates three networks by default:

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
9c7cd5be3b29   bridge    bridge    local
b83e5caf0cea   host      host      local
2b812ba15cf5   none      null      local
[root@localhost ~]# 

II. Common Basic Commands

1. All commands
View the docker network help text

[root@localhost ~]# docker network --help

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.
[root@localhost ~]# 

2. List networks

docker network ls

3. Inspect network metadata

docker network inspect <network-name>

4. Remove a network

docker network rm <network-name>
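A tip not shown in the example: docker network inspect also accepts a Go-template --format flag, handy for extracting a single field instead of scrolling through the full JSON. A small sketch, using the IPAM layout visible in the inspect output below:

docker network inspect bridge --format '{{ (index .IPAM.Config 0).Subnet }}'   # prints e.g. 172.17.0.0/16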

5. Example

[root@localhost ~]# docker network create aa_network  # create a network named aa_network
a1e19bb04e6f41d15f1ffd3a926201ccb0caa91da317d51ec8abceda85728df1
[root@localhost ~]# docker network ls  # list networks
NETWORK ID     NAME         DRIVER    SCOPE
a1e19bb04e6f   aa_network   bridge    local  # here is aa_network
9c7cd5be3b29   bridge       bridge    local
b83e5caf0cea   host         host      local
2b812ba15cf5   none         null      local
[root@localhost ~]# 
[root@localhost ~]# docker network rm aa_network # remove the network named aa_network
aa_network
[root@localhost ~]# docker network ls  # aa_network is gone now
NETWORK ID     NAME      DRIVER    SCOPE
9c7cd5be3b29   bridge    bridge    local
b83e5caf0cea   host      host      local
2b812ba15cf5   none      null      local
[root@localhost ~]# 
[root@localhost ~]# docker network inspect bridge # inspect the bridge network's metadata
[
    {
        "Name": "bridge",
        "Id": "9c7cd5be3b2997b9360ce1dec8638d6605291d9bad4b6db8ddf255eec8c32a26",
        "Created": "2022-04-19T14:51:41.503542891+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@localhost ~]# 



III. What Is It For

  1. Interconnection and communication between containers, plus port mapping
  2. When a container's IP changes, containers can still communicate directly by service name, unaffected

Explanation:
Within the same subnet the problem is not obvious, so we must plan Docker network management and inter-container calls.

Calls between containers on the same Docker network work fine,
but when a container on Docker network A calls a container on Docker network B, the target may not be found.
Even if we hard-code the IPs between A and B, a server restart can never be ruled out, and after each restart the IPs may be different. That is why we need to plan Docker network management and how containers call each other.
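A minimal sketch of such planning (the network, container, and image names here are invented for illustration): put cooperating containers on one user-defined network and have them call each other by name instead of by IP:

docker network create app_net                                    # hypothetical network
docker run -d --network app_net --name order-svc my-order-img    # reachable as order-svc
docker run -d --network app_net --name stock-svc my-stock-img    # reachable as stock-svc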

IV. Network Modes

1. Overview

  • bridge mode: specified with --network bridge; this is the default, using docker0
  • host mode: specified with --network host
  • none mode: specified with --network none
  • container mode: specified with --network container:NAME or a container ID
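For quick reference, the corresponding docker run forms look like this (some-image is a placeholder; each mode is demonstrated for real later in this chapter):

docker run -d --network bridge        --name c1 some-image   # the default; --network can be omitted
docker run -d --network host          --name c2 some-image
docker run -d --network none          --name c3 some-image
docker run -d --network container:c1  --name c4 some-image   # share c1's network stack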

2. How a container instance's default network IP is assigned

Explanation:
We run three container instances: u1, u3, and u4.
Order of operations: run u1 and u3 and check their bridge IPs; then delete u3, run u4, and check its bridge IP.
1. Run u1 and u3 and check their bridge IPs

[root@localhost ~]# docker ps # list running containers
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@localhost ~]# docker run -it --name u1 ubuntu bash  # run container u1
root@15c982bfbc7d:/# [root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# docker run -it --name u3 ubuntu bash # run container u3
root@aa162aa1e17e:/# [root@localhost ~]# 
[root@localhost ~]# docker ps # list running containers
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
aa162aa1e17e   ubuntu    "bash"    29 seconds ago   Up 28 seconds             u3
15c982bfbc7d   ubuntu    "bash"    2 minutes ago    Up 2 minutes              u1
[root@localhost ~]# docker inspect u1 # view all of u1's inspect metadata
[
    {
        "Id": "15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd",
        "Created": "2022-04-19T10:01:08.310883904Z",
        "Path": "bash",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 4425,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-04-19T10:01:09.279388431Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1",
        "ResolvConfPath": "/var/lib/docker/containers/15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd/hostname",
        "HostsPath": "/var/lib/docker/containers/15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd/hosts",
        "LogPath": "/var/lib/docker/containers/15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd/15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd-json.log",
        "Name": "/u1",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/0071b7e65bfe882c5ab3e219ebfb0624c115f31b251ddaddf5c08bb6eef81fd9-init/diff:/var/lib/docker/overlay2/66ee9e227a0492bc60d681a7b1a755b50e6e7716b45decf928f6af0ea7a3fa38/diff",
                "MergedDir": "/var/lib/docker/overlay2/0071b7e65bfe882c5ab3e219ebfb0624c115f31b251ddaddf5c08bb6eef81fd9/merged",
                "UpperDir": "/var/lib/docker/overlay2/0071b7e65bfe882c5ab3e219ebfb0624c115f31b251ddaddf5c08bb6eef81fd9/diff",
                "WorkDir": "/var/lib/docker/overlay2/0071b7e65bfe882c5ab3e219ebfb0624c115f31b251ddaddf5c08bb6eef81fd9/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "15c982bfbc7d",
            "Domainname": "",
            "User": "",
            "AttachStdin": true,
            "AttachStdout": true,
            "AttachStderr": true,
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": true,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "bash"
            ],
            "Image": "ubuntu",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "5e5a6e7481b82cb03fa94e4b19e11384a727e3c980b65561fd2178b8e2dfde9d",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/5e5a6e7481b8",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "ecac9f2013489ececacc24dcf2d2be9da200924736f7a14a07b371a49dfbc24e",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "9c7cd5be3b2997b9360ce1dec8638d6605291d9bad4b6db8ddf255eec8c32a26",
                    "EndpointID": "ecac9f2013489ececacc24dcf2d2be9da200924736f7a14a07b371a49dfbc24e",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

Check u1's bridge IP

[root@localhost ~]# docker inspect u1|tail -n 20 # last 20 lines of u1's inspect output
            "Networks": {
                "bridge": { # the bridge network
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "9c7cd5be3b2997b9360ce1dec8638d6605291d9bad4b6db8ddf255eec8c32a26",
                    "EndpointID": "ecac9f2013489ececacc24dcf2d2be9da200924736f7a14a07b371a49dfbc24e",
                    "Gateway": "172.17.0.1", #网关
                    "IPAddress": "172.17.0.2", #容器u1的桥接IP是0.2
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@localhost ~]# 

Check u3's bridge IP

[root@localhost ~]# docker inspect u3|tail -n 20 # last 20 lines of u3's inspect output
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "9c7cd5be3b2997b9360ce1dec8638d6605291d9bad4b6db8ddf255eec8c32a26",
                    "EndpointID": "98a974b7dcbd61a3bd97acc88c5c7aa742cd2ebe7df8eae110bda057477b689e",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3", #容器u3的桥接IP是0.3
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@localhost ~]# 

Delete u3, then run u4 and check its bridge IP

[root@localhost ~]# docker rm -f u3 # force-remove u3
u3
[root@localhost ~]# docker run -it --name u4 ubuntu bash # run container u4
root@2146d601a4e8:/# [root@localhost ~]# 
[root@localhost ~]# docker inspect u4|tail -n 20
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "9c7cd5be3b2997b9360ce1dec8638d6605291d9bad4b6db8ddf255eec8c32a26",
                    "EndpointID": "92b8ad7bc4be161beca686cac62297aa929976a3b8e6b8f7a2db3c17594cdb00",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3", #u4的桥接IP是0.3
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@localhost ~]# 

Conclusion: a Docker container's internal IP can change.

If you are calling u3's microservice by its IP and u3 goes down, the call may land on u4's microservice instead,
so we must plan our network services properly.
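If one service genuinely needs a stable address, Docker can pin an IP, but only on a user-defined network with an explicit subnet; a sketch with invented names (--ip is rejected on the default bridge):

docker network create --subnet 172.30.0.0/16 fixed_net
docker run -it --network fixed_net --ip 172.30.0.10 --name u5 ubuntu bash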

3. Case Study

First, take a look at the built-in networks

[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
9c7cd5be3b29   bridge    bridge    local
b83e5caf0cea   host      host      local
2b812ba15cf5   none      null      local
[root@localhost ~]# docker network inspect bridge  
[
    {
        "Name": "bridge",
        "Id": "9c7cd5be3b2997b9360ce1dec8638d6605291d9bad4b6db8ddf255eec8c32a26",
        "Created": "2022-04-19T14:51:41.503542891+08:00",
        "Scope": "local",
        "Driver": "bridge", #bridge的驱动是bridge
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "15c982bfbc7daf76ef2bc419fb3efbcb58abcec417f35b1426728923109bfddd": {
                "Name": "u1",
                "EndpointID": "ecac9f2013489ececacc24dcf2d2be9da200924736f7a14a07b371a49dfbc24e",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "2146d601a4e8637f169d503eccff805a0e4eec5831572e25149d56fbbec4449d": {
                "Name": "u4",
                "EndpointID": "92b8ad7bc4be161beca686cac62297aa929976a3b8e6b8f7a2db3c17594cdb00",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@localhost ~]# docker network inspect host
[
    {
        "Name": "host",
        "Id": "b83e5caf0cea549922ea9642260fe580a78f59f6969b89ad7d7e33b59e13eae8",
        "Created": "2022-04-05T15:48:10.427587433+08:00",
        "Scope": "local",
        "Driver": "host", #host的驱动是host
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@localhost ~]# docker network inspect none
[
    {
        "Name": "none",
        "Id": "2b812ba15cf5415cb6039bcc64c835e605d8e893834f4f6b346adad0a51a07b5",
        "Created": "2022-04-05T15:48:10.41816428+08:00",
        "Scope": "local",
        "Driver": "null", #none的驱动是null
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@localhost ~]# 

Now look at a network we create ourselves

[root@localhost ~]# docker network create bb_network # create a network
d1c895f7099ac47ed43f61639a31c82057c44cd05e5aaf1565b6e1d12fbbbdb4
[root@localhost ~]# docker network ls # list networks
NETWORK ID     NAME         DRIVER    SCOPE
d1c895f7099a   bb_network   bridge    local
9c7cd5be3b29   bridge       bridge    local
b83e5caf0cea   host         host      local
2b812ba15cf5   none         null      local
[root@localhost ~]# docker network inspect bb_network 
[
    {
        "Name": "bb_network",
        "Id": "d1c895f7099ac47ed43f61639a31c82057c44cd05e5aaf1565b6e1d12fbbbdb4",
        "Created": "2022-04-19T18:40:38.038507748+08:00",
        "Scope": "local",
        "Driver": "bridge", #我们自己创建的network默认用的驱动是bridge
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[root@localhost ~]# 

1. bridge

1. What is bridge mode

By default the Docker service creates a docker0 virtual bridge (with an internal docker0 interface on it); this bridge network is named docker0. At the kernel level it links the other physical or virtual NICs together, which puts all containers and the local host on the same physical network. Docker assigns an IP address and subnet mask to the docker0 interface by default, so the host and containers can communicate with each other through the bridge.

View the bridge network's details, grepping for the name entry

docker network inspect bridge | grep name

ifconfig | grep docker

2. Example

1. Notes
1 Docker uses Linux bridging: it virtualizes a Docker container bridge (docker0) on the host. When Docker starts a container, it allocates the container an IP address from the bridge's subnet, called the Container-IP, and the Docker bridge serves as every container's default gateway. Because containers on the same host all attach to the same bridge, containers can communicate directly via their Container-IPs.

2 When docker run is used without specifying a network, the default bridge mode is used, i.e. docker0. Running ifconfig on the host shows docker0 as well as any networks you create yourself (covered later). eth0, eth1, eth2... denote NIC one, NIC two, NIC three...; lo denotes 127.0.0.1, i.e. localhost; inet addr shows a NIC's IP address.

3 The docker0 bridge creates pairs of peer virtual interfaces, one called veth and the other eth0, matched in pairs.
3.1 The whole host works in docker0's bridge mode, like a switch with a row of ports, each port called a veth. A virtual interface is created on the host and inside the container, and the two are wired to each other (such a pair of interfaces is called a veth pair);
3.2 each container instance also has a NIC inside it, whose interface is called eth0;
3.3 each veth on docker0 matches the eth0 inside some container, paired one to one.
Through the above, all containers on the host are connected to this internal network; two containers on the same network each get an assigned IP from the gateway's subnet, and the two containers' networks are then interconnected.

2. Code

Each container has its own actual network IP and communicates through docker0.
Let's create two containers:

docker run -d -p 8081:8080   --name tomcat81 billygoo/tomcat8-jdk8

docker run -d -p 8082:8080   --name tomcat82 billygoo/tomcat8-jdk8
[root@localhost ~]# docker run -d -p 8081:8080   --name tomcat81 billygoo/tomcat8-jdk8
5d859e3cb3fa52257cfb8ff6b407e6dc82f70f37d6dcbcf2b6366df269c845b5
[root@localhost ~]# docker run -d -p 8082:8080   --name tomcat82 billygoo/tomcat8-jdk8
95437e77a8f9e04a2f7104decc99bc2d2ae17297539fda90888d331b323a0cdf
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED              STATUS              PORTS                                       NAMES
95437e77a8f9   billygoo/tomcat8-jdk8   "catalina.sh run"   58 seconds ago       Up 57 seconds       0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
5d859e3cb3fa   billygoo/tomcat8-jdk8   "catalina.sh run"   About a minute ago   Up About a minute   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# 


Check the host's network IPs

[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:33:b8:13 brd ff:ff:ff:ff:ff:ff
    inet 192.168.174.139/24 brd 192.168.174.255 scope global noprefixroute dynamic ens33
       valid_lft 1124sec preferred_lft 1124sec
    inet6 fe80::1747:11ea:1bb4:820c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:67:7c:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:67:7c:44 brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:d3:25:c1:b6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d3ff:fe25:c1b6/64 scope link 
       valid_lft forever preferred_lft forever
13: br-d1c895f7099a: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4d:ec:95:41 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-d1c895f7099a
       valid_lft forever preferred_lft forever
15: veth59d2a90@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 6e:e2:18:24:27:17 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::6ce2:18ff:fe24:2717/64 scope link 
       valid_lft forever preferred_lft forever
17: veth23e0e7e@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 0a:0e:67:59:67:4a brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::80e:67ff:fe59:674a/64 scope link 
       valid_lft forever preferred_lft forever
[root@localhost ~]# 

Enter tomcat81 and look

[root@localhost ~]# docker exec -it tomcat81 bash
root@5d859e3cb3fa:/usr/local/tomcat# 
root@5d859e3cb3fa:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@5d859e3cb3fa:/usr/local/tomcat# 

Enter tomcat82 and look

[root@localhost ~]# docker exec -it tomcat82 bash
root@95437e77a8f9:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@95437e77a8f9:/usr/local/tomcat# 

3. Verifying the one-to-one pairing

Compare the two container outputs with the host's ip addr output: inside tomcat81, interface 14: eth0@if15 points at host interface 15, which is veth59d2a90@if14; inside tomcat82, interface 16: eth0@if17 points at host interface 17, which is veth23e0e7e@if16. Each container's eth0 and its host-side veth are the two ends of one pair.
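You can also confirm the pairing without squinting at the @ifN suffixes: each side of a veth pair exposes its peer's interface index through iflink in sysfs. A quick sketch against the containers above:

docker exec tomcat81 cat /sys/class/net/eth0/iflink   # prints 15, the index of the host-side peer
ip -o link | grep '^15:'                              # shows veth59d2a90, the matching host interface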

2. host

1. What is host mode

The container uses the host's IP address directly to communicate with the outside world; no extra NAT translation is needed.

2. Example

1. Notes
The container does not get an independent Network Namespace; it shares one with the host. The container does not virtualize its own NIC or configure its own IP; instead it uses the host's IP and ports.

2. Code

2.1 The warning

docker run -d -p 8083:8080 --network host --name tomcat83 billygoo/tomcat8-jdk8
[root@localhost ~]# docker run -d -p 8083:8080 --network host --name tomcat83 billygoo/tomcat8-jdk8
WARNING: Published ports are discarded when using host network mode
e8521641af5a2d6306b4668bcbe34c472639822cabba448adac818ad0270e650
[root@localhost ~]# 
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED          STATUS          PORTS                                       NAMES
e8521641af5a   billygoo/tomcat8-jdk8   "catalina.sh run"   52 seconds ago   Up 51 seconds                                               tomcat83
95437e77a8f9   billygoo/tomcat8-jdk8   "catalina.sh run"   26 minutes ago   Up 26 minutes   0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
5d859e3cb3fa   billygoo/tomcat8-jdk8   "catalina.sh run"   27 minutes ago   Up 27 minutes   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# 


The problem:
this warning always appears when the container is started this way.
The cause:
when docker run specifies --network=host (or --net=host) and also specifies -p port mappings, the warning is printed,
and the -p settings have no effect at all; the host's own port numbers are used, incrementing when they collide.
The fix:
use one of Docker's other network modes instead, e.g. --network=bridge, or just ignore the warning. O(∩_∩)O

2.2 The correct way

docker run -d                          --network host --name tomcat83 billygoo/tomcat8-jdk8
[root@localhost ~]# docker run -d                          --network host --name tomcat83 billygoo/tomcat8-jdk8
d1774134b30c477b18031ed55fb66256da2045f6365d3e94fbc9b17d5ce321dd
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED          STATUS          PORTS                                       NAMES
d1774134b30c   billygoo/tomcat8-jdk8   "catalina.sh run"   2 seconds ago    Up 2 seconds                                                tomcat83
95437e77a8f9   billygoo/tomcat8-jdk8   "catalina.sh run"   31 minutes ago   Up 31 minutes   0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
5d859e3cb3fa   billygoo/tomcat8-jdk8   "catalina.sh run"   31 minutes ago   Up 31 minutes   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# 

3. The bridge pairing from before is gone; look at the container instance

docker inspect tomcat83  # inspect tomcat83 from the host
[root@localhost ~]# docker inspect tomcat83 | tail -n 20
            "Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "b83e5caf0cea549922ea9642260fe580a78f59f6969b89ad7d7e33b59e13eae8",
                    "EndpointID": "aac6e5430934a2af36479971c8a71d932d16ff44383d3ac3f777790d89aea7ba",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@localhost ~]# 

It shares the host's gateway and IP.
Looking inside the tomcat83 container, the view is almost identical to the host's:

[root@localhost ~]# docker exec -it tomcat83 bash
root@localhost:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:33:b8:13 brd ff:ff:ff:ff:ff:ff
    inet 192.168.174.139/24 brd 192.168.174.255 scope global noprefixroute dynamic ens33
       valid_lft 1436sec preferred_lft 1436sec
    inet6 fe80::1747:11ea:1bb4:820c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:67:7c:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:67:7c:44 brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:d3:25:c1:b6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d3ff:fe25:c1b6/64 scope link 
       valid_lft forever preferred_lft forever
13: br-d1c895f7099a: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4d:ec:95:41 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-d1c895f7099a
       valid_lft forever preferred_lft forever
15: veth59d2a90@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 6e:e2:18:24:27:17 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::6ce2:18ff:fe24:2717/64 scope link 
       valid_lft forever preferred_lft forever
17: veth23e0e7e@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 0a:0e:67:59:67:4a brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::80e:67ff:fe59:674a/64 scope link 
       valid_lft forever preferred_lft forever
root@localhost:/usr/local/tomcat# 

4. With no -p port mapping set, how do we access the running tomcat83?

http://<host-IP>:8080/

Using the default Firefox browser inside CentOS to access tomcat83 succeeds, because the container borrows the host's IP:
the container shares the host's network IP, and the benefit is that an external host can communicate with the container directly.
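A command-line equivalent of that browser test, using the host IP from the ip addr output above:

curl http://192.168.174.139:8080/   # tomcat83 answers directly on the host's port 8080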

3. none

1. What is none mode

  • In none mode, Docker performs no network configuration for the container at all.
  • That is, the container has no NIC, no IP, and no routes; it has only lo.
  • We must add a NIC and configure an IP for the container ourselves.

Networking is disabled; only the lo interface exists (127.0.0.1, the local loopback).
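One Docker-native way to add networking afterwards, rather than configuring NICs by hand, is docker network connect, which attaches a running container to an existing network. A sketch, assuming the tomcat84 container from the example below is already running:

docker network connect bridge tomcat84   # tomcat84 now gets an eth0 on the docker0 bridge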

2. Example

docker run -d -p 8084:8080 --network none --name tomcat84 billygoo/tomcat8-jdk8

Look inside the container

[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED          STATUS          PORTS                                       NAMES
d6adc44842cb   billygoo/tomcat8-jdk8   "catalina.sh run"   2 minutes ago    Up 2 minutes                                                tomcat84
d1774134b30c   billygoo/tomcat8-jdk8   "catalina.sh run"   21 minutes ago   Up 21 minutes                                               tomcat83
95437e77a8f9   billygoo/tomcat8-jdk8   "catalina.sh run"   53 minutes ago   Up 53 minutes   0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
5d859e3cb3fa   billygoo/tomcat8-jdk8   "catalina.sh run"   53 minutes ago   Up 53 minutes   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# docker exec -it tomcat84 bash
root@d6adc44842cb:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
root@d6adc44842cb:/usr/local/tomcat# 

Look from outside the container

[root@localhost ~]# docker run -d -p 8084:8080 --network none --name tomcat84 billygoo/tomcat8-jdk8
d6adc44842cb881e004c453f72303cf71d5196a010802eaddb1f623b7cbb8af7
[root@localhost ~]# docker inspect tomcat84 | tail -n 20
            "Networks": {
                "none": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "2b812ba15cf5415cb6039bcc64c835e605d8e893834f4f6b346adad0a51a07b5",
                    "EndpointID": "6f7a1882c8b3cfdf2d9b2114cab1024068ca27f06c136e27d82efd5722ff73ba",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@localhost ~]# 


4. container

1. What is container mode

In container network mode, a newly created container shares a network/IP configuration with an existing container rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the specified container's IP, port range, and so on. Apart from networking, the two containers remain isolated in other respects, such as the filesystem and process list.

2. Example

docker run -d -p 8085:8080                                     --name tomcat85 billygoo/tomcat8-jdk8

docker run -d -p 8086:8080 --network container:tomcat85 --name tomcat86 billygoo/tomcat8-jdk8

Result

tomcat86 and tomcat85 would share the same IP and the same port, which causes a port conflict.

Tomcat is not a good fit for this demonstration... a demo pitfall... o(╥﹏╥)o

Let's switch to a different image for the demonstration.

[root@localhost ~]# docker run -d -p 8085:8080                                     --name tomcat85 billygoo/tomcat8-jdk8
385d859966abf7301c72b83ac63d480963c1102558a5eafe6dcd17715c7c4e3f
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED             STATUS             PORTS                                       NAMES
385d859966ab   billygoo/tomcat8-jdk8   "catalina.sh run"   41 seconds ago      Up 38 seconds      0.0.0.0:8085->8080/tcp, :::8085->8080/tcp   tomcat85
d6adc44842cb   billygoo/tomcat8-jdk8   "catalina.sh run"   10 minutes ago      Up 10 minutes                                                  tomcat84
d1774134b30c   billygoo/tomcat8-jdk8   "catalina.sh run"   29 minutes ago      Up 29 minutes                                                  tomcat83
95437e77a8f9   billygoo/tomcat8-jdk8   "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
5d859e3cb3fa   billygoo/tomcat8-jdk8   "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# docker run -d -p 8086:8080 --network container:tomcat85 --name tomcat86 billygoo/tomcat8-jdk8
docker: Error response from daemon: conflicting options: port publishing and the container type network mode.
See 'docker run --help'.
[root@localhost ~]# 

3. Example 2
1. Alpine is a security-oriented, lightweight Linux distribution
Alpine Linux is an independent, non-commercial, general-purpose Linux distribution designed for users who value security, simplicity, and resource efficiency. Many people may never have heard of this distribution, but frequent Docker users probably have: it is known for being small, simple, and secure, which makes it an excellent choice as a base image. Small as it is, it has all the essentials; the image is under 6 MB, so it is especially well suited to container packaging.

docker run -it                                                    --name alpine1  alpine /bin/sh

docker run -it --network container:alpine1 --name alpine2  alpine /bin/sh

Result: verify that the network stack is shared
alpine1

[root@localhost ~]# docker run -it                                                    --name alpine1  alpine /bin/sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
59bf1c3509f3: Pull complete 
Digest: sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300
Status: Downloaded newer image for alpine:latest
/ # 
/ # 
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

alpine2

[root@localhost ~]# docker run -it --network container:alpine1 --name alpine2  alpine /bin/sh
/ # 
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Stop alpine1, then look at alpine2 again

/ # exit  # just exit; exiting stops alpine1
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED             STATUS             PORTS                                       NAMES
ff3b0764af33   alpine                  "/bin/sh"           4 minutes ago       Up 4 minutes                                                   alpine2
d6adc44842cb   billygoo/tomcat8-jdk8   "catalina.sh run"   21 minutes ago      Up 21 minutes                                                  tomcat84
d1774134b30c   billygoo/tomcat8-jdk8   "catalina.sh run"   40 minutes ago      Up 40 minutes                                                  tomcat83
95437e77a8f9   billygoo/tomcat8-jdk8   "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
5d859e3cb3fa   billygoo/tomcat8-jdk8   "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# 

Check alpine2 again: its eth0 is gone, and only lo remains

/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ # 

5. Custom networks
1. The deprecated --link

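For the record, the legacy flag looked roughly like this (a sketch only; --link is deprecated, and the custom networks below are the recommended replacement). It injects the linked container's name and IP into the new container's /etc/hosts:

docker run -d -p 8082:8080 --link tomcat81 --name tomcat82 billygoo/tomcat8-jdk8
docker exec -it tomcat82 ping tomcat81   # resolves through the /etc/hosts entry that --link wrote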

2. What is it
3. Example

1. Before
Example

docker run -d -p 8081:8080   --name tomcat81 billygoo/tomcat8-jdk8

docker run -d -p 8082:8080   --name tomcat82 billygoo/tomcat8-jdk8
[root@localhost ~]# docker run -d -p 8081:8080   --name tomcat81 billygoo/tomcat8-jdk8
8b611f969dcf8421a8de70c0f27b9ff2d411399fe586bf792fc4260f36900da8
[root@localhost ~]# docker run -d -p 8082:8080   --name tomcat82 billygoo/tomcat8-jdk8
06628f5bd979e6dfe699e38f9d62f78e1c59ad3d3c806a774b5e3dcabe164cf5
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED          STATUS          PORTS                                       NAMES
06628f5bd979   billygoo/tomcat8-jdk8   "catalina.sh run"   13 seconds ago   Up 11 seconds   0.0.0.0:8082->8080/tcp, :::8082->8080/tcp   tomcat82
8b611f969dcf   billygoo/tomcat8-jdk8   "catalina.sh run"   23 seconds ago   Up 22 seconds   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81

Both started successfully; use docker exec to get inside each container instance

[root@localhost ~]# docker exec -it tomcat81 bash
root@8b611f969dcf:/usr/local/tomcat# 

[root@localhost ~]# docker exec -it tomcat82 bash
root@06628f5bd979:/usr/local/tomcat# 

The problem

Pinging by IP address works.
tomcat81

[root@localhost ~]# docker exec -it tomcat81 bash
root@8b611f969dcf:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@8b611f969dcf:/usr/local/tomcat# ping 172.17.0.3 #ping tomcat82
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.083 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.081 ms
64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.123 ms
64 bytes from 172.17.0.3: icmp_seq=4 ttl=64 time=0.111 ms
^C
--- 172.17.0.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.081/0.099/0.123/0.020 ms
root@8b611f969dcf:/usr/local/tomcat# 

tomcat82

[root@localhost ~]# docker exec -it tomcat82 bash
root@06628f5bd979:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@06628f5bd979:/usr/local/tomcat# ping 172.17.0.2  #ping tomcat81
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.409 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.123 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.117 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.105 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: icmp_seq=6 ttl=64 time=0.097 ms
64 bytes from 172.17.0.2: icmp_seq=7 ttl=64 time=0.082 ms
64 bytes from 172.17.0.2: icmp_seq=8 ttl=64 time=0.775 ms
^C
--- 172.17.0.2 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7012ms
rtt min/avg/max/mdev = 0.072/0.222/0.775/0.233 ms
root@06628f5bd979:/usr/local/tomcat# 

What happens when we ping by service name?
tomcat82

root@06628f5bd979:/usr/local/tomcat# ping tomcat81
ping: tomcat81: Name or service not known
root@06628f5bd979:/usr/local/tomcat# 

tomcat81

root@8b611f969dcf:/usr/local/tomcat# ping tomcat82
ping: tomcat82: Name or service not known
root@8b611f969dcf:/usr/local/tomcat# 

2. After
Example
1. Use a custom bridge network; custom networks use the bridge driver by default
2. Create a custom network

[root@localhost ~]# docker  network create zy_network
f5a3cda1926a816cf18dc48c55c952ce9cb9a33280f7fe44fca3abb8df0ac226
[root@localhost ~]# docker  network ls
NETWORK ID     NAME         DRIVER    SCOPE
9c7cd5be3b29   bridge       bridge    local
b83e5caf0cea   host         host      local
2b812ba15cf5   none         null      local
f5a3cda1926a   zy_network   bridge    local
[root@localhost ~]# 

3. Run new containers joined to the custom network created in the previous step

docker run -d -p 8081:8080 --network zy_network  --name tomcat81 billygoo/tomcat8-jdk8

docker run -d -p 8082:8080 --network zy_network  --name tomcat82 billygoo/tomcat8-jdk8

tomcat81

[root@localhost ~]# docker run -d -p 8081:8080 --network zy_network  --name tomcat81 billygoo/tomcat8-jdk8
4699d841edd8dc1ba42c57d2e903571b17f451c1776b9efc48c1f075b0aa343b
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                   COMMAND             CREATED         STATUS         PORTS                                       NAMES
4699d841edd8   billygoo/tomcat8-jdk8   "catalina.sh run"   2 minutes ago   Up 2 minutes   0.0.0.0:8081->8080/tcp, :::8081->8080/tcp   tomcat81
[root@localhost ~]# docker exec -it tomcat81 bash
root@4699d841edd8:/usr/local/tomcat# ping tomcat82
PING tomcat82 (172.20.0.3) 56(84) bytes of data.
64 bytes from tomcat82.zy_network (172.20.0.3): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from tomcat82.zy_network (172.20.0.3): icmp_seq=2 ttl=64 time=0.124 ms
64 bytes from tomcat82.zy_network (172.20.0.3): icmp_seq=3 ttl=64 time=0.123 ms
^C
--- tomcat82 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.053/0.100/0.124/0.033 ms
root@4699d841edd8:/usr/local/tomcat# 

tomcat82

[root@localhost ~]# docker run -d -p 8082:8080 --network zy_network  --name tomcat82 billygoo/tomcat8-jdk8
f7b4c51c292c7f7afaef5a9683f2dafba749c59d85a661f86889588e085670a1
[root@localhost ~]# docker exec -it tomcat82 bash
root@f7b4c51c292c:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@f7b4c51c292c:/usr/local/tomcat# ping tomcat81
PING tomcat81 (172.20.0.2) 56(84) bytes of data.
64 bytes from tomcat81.zy_network (172.20.0.2): icmp_seq=1 ttl=64 time=0.096 ms
64 bytes from tomcat81.zy_network (172.20.0.2): icmp_seq=2 ttl=64 time=0.128 ms
64 bytes from tomcat81.zy_network (172.20.0.2): icmp_seq=3 ttl=64 time=0.119 ms
64 bytes from tomcat81.zy_network (172.20.0.2): icmp_seq=4 ttl=64 time=0.126 ms
^C
--- tomcat81 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.096/0.117/0.128/0.014 ms
root@f7b4c51c292c:/usr/local/tomcat# 

4. Ping each other to test

Conclusion
Important things get said three times:

  • A custom network itself maintains the mapping between hostnames and IPs (both IP and service name work)
  • A custom network itself maintains the mapping between hostnames and IPs (both IP and service name work)
  • A custom network itself maintains the mapping between hostnames and IPs (both IP and service name work)
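Under the hood this is Docker's embedded DNS server: containers attached to a user-defined network get 127.0.0.11 as their resolver, which answers with the current IP for each container name. You can see it from inside either container:

docker exec -it tomcat81 cat /etc/resolv.conf   # shows nameserver 127.0.0.11 on a user-defined network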

V. The Docker Platform Architecture Illustrated

Overall description

Judging from its architecture and run flow, Docker is a client/server (C/S) architecture, and the backend is loosely coupled, with many modules each performing their own duties.

The basic flow of running Docker:

1 The user uses the Docker Client to establish communication with the Docker Daemon and send it requests.
2 The Docker Daemon, as the main body of the Docker architecture, first provides the Docker Server function so that it can accept requests from the Docker Client.
3 The Docker Engine carries out Docker's internal work, each unit of which exists in the form of a Job.
4 While a Job runs, if a container image is needed, it is downloaded from a Docker Registry, and the image-management driver (the graph driver) stores the downloaded image in the form of a Graph.
5 When a network environment must be created for a container, the network driver creates and configures the container's network environment.
6 When container resources must be limited or user commands executed, the exec driver does that work.
7 Libcontainer is an independent container-management library; both the network driver and the exec driver use Libcontainer to carry out the concrete operations on containers.

Overall architecture
