Docker networking
By scope, Docker networking can be divided into container networks on a single host and networks that span multiple hosts.
Docker network types
List Docker's built-in networks:
[root@docker01 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
68f4c1f9f020 bridge bridge local
85317d309423 host host local
0699d506810c none null local
[root@docker01 ~]#
The none network
As the name suggests, "none" means no networking at all: a container attached to it is completely isolated and closed off, with no IP address (only the loopback interface).
Use case: workloads with very high security requirements; rarely used in practice.
[root@docker01 ~]# docker run -itd --name none --net none busybox
Check the network inside the container:
[root@docker01 ~]# docker exec -it none ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
Host mode
In host mode the container shares the Docker host's network namespace: it does not get its own virtual NIC or its own IP, and it sees exactly the same interfaces as the host.
[root@docker01 ~]# docker run -itd --name host --net host busybox
[root@docker01 ~]# docker exec -it host ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:9a:af:82 brd ff:ff:ff:ff:ff:ff
inet 172.16.46.111/24 brd 172.16.46.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::87c7:dfca:c8d:46d8/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue qlen 1000
link/ether 52:54:00:6b:c4:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 qlen 1000
link/ether 52:54:00:6b:c4:36 brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
link/ether 02:42:7d:fe:68:77 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
PS:
A container using the host network has exactly the same network as the host because its network stack is never isolated when the container is created; it simply uses the host's network stack directly.
Use case: the biggest advantage is performance, so host mode is a good choice when network throughput matters most. The biggest drawback is the lack of flexibility: containers share the host's ports, so port conflicts become likely as the number of services grows.
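A quick way to see the effect (a minimal sketch; it assumes an nginx image is available locally, that host port 80 is free, and web_host is just an example name):
[root@docker01 ~]# docker run -d --name web_host --net host nginx
[root@docker01 ~]# curl -I http://127.0.0.1:80   # nginx answers directly on the host port, with no -p mapping needed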
The bridge network
bridge is Docker's default mode; by default 172.17.0.1 (the address of docker0) acts as the gateway.
When the Docker daemon starts, it creates the docker0 bridge on the host. Every container started afterwards is attached to this virtual bridge, gets an IP from the docker0 subnet, and uses docker0's IP address as its default gateway.
The bridge connection between container and host takes the form of a veth pair: one end sits inside the container, the other end on the host with a name like vethxxx, and the host end is attached to the docker0 bridge.
If a new bridge has been created (such as mynet below), the same thing happens, except that the host-side vethxxx is attached to the mynet bridge instead of docker0.
[root@aliyun ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
94e3a737c800 bridge bridge local
9abcb6d81f9f host host local
9823467158e0 lnmp bridge local
3e0b8788a3fa mynet bridge local
05efb7a8d771 none null local
# brctl show  # show the Docker bridges and the interfaces attached to them
[root@aliyun ~]# brctl show
bridge name bridge id STP enabled interfaces
br-3e0b8788a3fa 8000.0242fbfb5c8e no
br-9823467158e0 8000.02420c6cc82b no veth5ec66ad
vethac1e476
docker0 8000.02426f7b5a40 no
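To look at the host side of each veth pair directly, iproute2 can filter by interface type (names like veth5ec66ad are generated by Docker and will differ on your machine):
[root@aliyun ~]# ip -o link show type veth   # each line shows a vethxxx interface together with the bridge it is enslaved to (master docker0 / br-xxxx)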
A container in bridge mode can ping the host:
docker run -itd --name dc01 busybox:latest
[root@docker01 ~]# docker exec -it dc01 ping 172.16.46.111
PING 172.16.46.111 (172.16.46.111): 56 data bytes
64 bytes from 172.16.46.111: seq=0 ttl=64 time=0.066 ms
64 bytes from 172.16.46.111: seq=1 ttl=64 time=0.102 ms
64 bytes from 172.16.46.111: seq=2 ttl=64 time=0.080 ms
^C
--- 172.16.46.111 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.066/0.082/0.102 ms
Custom networks
If we want a different subnet and custom IPs, Docker lets us create our own network:
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
docker network inspect mynet   # view this network's details
# --network specifies which network a container joins
docker run -itd --name busybox_net01 --network mynet busybox:latest
docker run -itd --name busybox_net02 --network mynet busybox:latest
PS: on a user-defined network you can also assign a container a fixed IP (the default bridge network does not support --ip):
docker run -itd --name dc03 --network mynet --ip 192.168.0.100 busybox:latest
Bridge mode architecture
Each container's IP belongs to its bridge (docker0 or mynet). Traffic leaving a bridge is forwarded out through the host's real NIC, ens33. When the Docker host is itself a VM, ens33 is in turn backed by the physical machine's VMnet8 adapter, and the traffic finally leaves through the physical machine's NIC.
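Outbound access works because the Docker daemon installs a source-NAT rule for each bridge subnet. A typical rule looks like the one below (shown as an illustration; the subnet matches docker0 here and may differ on your setup):
[root@docker01 ~]# iptables -t nat -S POSTROUTING | grep MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE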
From the architecture above we know that containers on the same subnet can ping each other. But can busybox01 (on the default bridge) ping busybox_mynet01 (on mynet)?
Answer: of course not.
They are on different subnets, attached to different bridges, so they cannot reach each other.
Solution: connect the container to the other network. How does Docker do this?
docker network connect --help
Connect a container that belongs to another bridge to the mynet network:
docker network connect mynet busybox01
After the connect succeeds, the container has two IPs, one on each bridge, and it can now reach containers on the other subnet.
[root@docker01 ~]# docker exec -it busybox01 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
13: eth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:00:03 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.3/16 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
Now busybox01 can ping busybox_mynet01:
[root@docker01 ~]# docker exec -it busybox01 ping busybox_mynet01
PING dc02 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: seq=0 ttl=64 time=0.118 ms
64 bytes from 192.168.0.2: seq=1 ttl=64 time=0.105 ms
64 bytes from 192.168.0.2: seq=2 ttl=64 time=0.103 ms
64 bytes from 192.168.0.2: seq=3 ttl=64 time=0.122 ms
^C
--- dc02 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.103/0.112/0.122 ms
Benefits of custom networks
A user-defined network comes with an embedded DNS server, so containers on it can resolve each other by container name.
# on the default bridge, pinging another container by name fails
[root@docker01 ~]# docker exec -it busybox01 ping busybox02
ping: bad address 'busybox02'
# on the default bridge, pinging busybox02 by IP does work
[root@docker01 ~]# docker exec -it busybox01 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.115 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.140 ms
^C
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.115/0.127/0.140 ms
# on a user-defined bridge, name resolution works
[root@docker01 ~]# docker exec -it busybox_net01 ping busybox_net02
PING busybox_net02 (192.168.0.3): 56 data bytes
64 bytes from 192.168.0.3: seq=0 ttl=64 time=0.063 ms
64 bytes from 192.168.0.3: seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.0.3: seq=2 ttl=64 time=0.097 ms
^C
Container mode (shared network stack)
The two containers share one network stack and therefore one IP address; apart from the network, the two containers remain isolated from each other in every other respect.
[root@docker01 ~]# docker run -itd --name web5 busybox:latest
6a1bf2b8bbfa724d9f94afcef141a4a3bea235265235e97aeb91a5a04f0a81f1
[root@docker01 ~]# docker run -itd --name web6 --network container:web5 busybox:latest
bf39a692d2aeec28ec921d7fa2a61aa9138ba105a2091c8eef9b7f889b43797e
[root@docker01 ~]# docker exec -it web5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
[root@docker01 ~]# docker exec -it web6 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
PS: because of how special this mode is, it is mainly chosen when a service needs a companion container for monitoring, log collection, or network inspection of that same service (the sidecar pattern); otherwise it is not used very often.
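A minimal sketch of that sidecar pattern (nginx is just an example main service; the name app and the busybox wget call are illustrative): the sidecar reaches the main container's service over the shared loopback interface.
[root@docker01 ~]# docker run -itd --name app nginx
[root@docker01 ~]# docker run --rm --network container:app busybox wget -qO- http://127.0.0.1:80   # fetches the nginx page through the shared network namespace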