Docker Networking
Understanding docker0
Start by clearing out the whole Docker environment (everything re-downloads quickly anyway):
# Remove all containers
docker rm -f $(docker ps -aq)
# Remove all images
docker rmi -f $(docker images -aq)
# Remove all unused volumes
docker volume prune
# Get familiar with the network commands first
[root@wulei home]# docker network --help
Usage: docker network COMMAND
Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks
Test it
[root@wulei home]# ip addr
Three network interfaces show up: lo (loopback), the host's own NIC, and docker0
# Question: how does Docker handle network access for containers?
[root@wulei home]# docker pull tomcat
# -P (uppercase) maps container ports to random host ports
[root@wulei home]# docker run -d -P --name tomcat01 tomcat
# Check the network configuration inside the container with ip addr
# On startup the container gets an eth0@if225 interface and an IP address -- both assigned by Docker
[root@wulei home]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
224: eth0@if225: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
# Question: can the Linux host ping inside the container?
[root@wulei home]# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.057 ms
...
# The Linux host can ping the Docker container directly
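Since -P was used above, the container's port 8080 lands on a random host port. `docker port tomcat01` prints that mapping; a small parsing sketch follows (the mapping line is illustrative sample output, not captured from this host):

```shell
# `docker port tomcat01` prints lines like the sample below; the host
# port can be extracted with plain parameter expansion.
# (Hardcoded here for illustration -- on a live host, capture it with:
#   mapping=$(docker port tomcat01 8080/tcp))
mapping='8080/tcp -> 0.0.0.0:49154'
host_port=${mapping##*:}
echo "tomcat01 is reachable on host port ${host_port}"
```

With the sample line above, this prints `tomcat01 is reachable on host port 49154`.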
How it works
1. Every time we start a container, Docker assigns it an IP address. And as soon as Docker is installed, the host gains a virtual NIC called docker0.
This is bridge mode, implemented with the veth-pair technique!
Run ip addr on the host again
2. Start another container and check again -- another pair of NICs has appeared
# Notice that these container NICs always come in pairs
# A veth-pair is a pair of virtual device interfaces that always appear together: one end attaches to the protocol stack, the other end attaches to its peer
# Thanks to this property, a veth-pair is commonly used as a bridge between virtual network devices
# OpenStack, container-to-container links in Docker, and OVS connections all rely on the veth-pair technique
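The pairing is visible in the interface names themselves: inside the container the link shows up as `224: eth0@if225`, meaning the container end has interface index 224 and its veth peer on the host has index 225. A sketch of decoding that peer index (the sample line is copied from the ip addr output above):

```shell
# Inside the container, `ip addr` names the link "eth0@if<N>", where N is
# the interface index of the veth peer on the host. Extract it with sed:
line='224: eth0@if225: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP'
peer=$(echo "$line" | sed -n 's/^[0-9]*: eth0@if\([0-9]*\):.*/\1/p')
echo "host-side veth has ifindex ${peer}"
# On the host, `ip addr | grep "^${peer}:"` would then show the matching
# interface, named like vethXXXX@if224 -- pointing right back at the container.
```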
3. Test whether tomcat01 and tomcat02 can ping each other
# The target IP below was obtained with ip addr inside the container
[root@wulei home]# docker exec -it tomcat02 ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.118 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.067 ms
...
# Conclusion: containers can ping each other directly
Sketch a network model diagram:
Summary: tomcat01 and tomcat02 share the same "router", docker0.
Unless a network is specified with --net, every container is routed through docker0, and Docker assigns the container an available IP by default.
Recap
Docker uses Linux bridging; docker0 on the host acts as the bridge for Docker containers.
All of Docker's network interfaces are virtual -- virtual interfaces forward traffic efficiently.
As soon as a container is deleted, its veth pair disappears with it.
--link
Consider a scenario: we have written a microservice, and the database's IP changes while the project keeps running. We would like to handle this by reaching the database via its service name (container name) instead of its IP -- how can we do that?
# Start tomcat01
[root@wulei home]# docker run -d -P --name tomcat01 tomcat
1188660bd733736f7ea703b901270b1b07dfb2ab62a2c00585ae6e6ca6c6d282
# Start tomcat02
[root@wulei home]# docker run -d -P --name tomcat02 tomcat
80749245d0519298744691bf669f205b2e314b87b8c2f320782f9b27ba469e53
# Ping tomcat01 from tomcat02 by service name -- it fails. How do we fix this?
[root@wulei home]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known
# Start tomcat03 with --link tomcat02
[root@wulei home]# docker run -d -P --name tomcat03 --link tomcat02 tomcat
ae157f8a49b9df78da4cdc345d19bd63c13aa695f58d69a5839c72085062ca85
# Is the link bidirectional? Clearly not -- only tomcat03 --> tomcat02 works
[root@wulei home]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known
[root@wulei home]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.18.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.18.0.3): icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from tomcat02 (172.18.0.3): icmp_seq=2 ttl=64 time=0.060 ms
...
[root@wulei home]#
Digging deeper: docker network inspect
[root@wulei home]# docker network ls
NETWORK ID NAME DRIVER SCOPE
dd3ecec9cf82 bridge bridge local
7e39bad7912c host host local
6f3688b0e412 none null local
[root@wulei home]# docker network inspect dd3ecec9cf82
Look at tomcat03's details: --link simply copied tomcat02's address into tomcat03's local configuration
docker inspect tomcat03
Check the hosts file inside the container
[root@wulei home]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.3 tomcat02 80749245d051
172.18.0.4 ae157f8a49b9
# What does this file reveal? --link just writes a static hosts entry for tomcat02. (The last line, ae157f8a49b9, is tomcat03's own container ID and hostname, which is why the ping below succeeds even without a link.)
[root@wulei home]# docker exec -it tomcat03 ping ae157f8a49b9
PING ae157f8a49b9 (172.18.0.4) 56(84) bytes of data.
64 bytes from ae157f8a49b9 (172.18.0.4): icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from ae157f8a49b9 (172.18.0.4): icmp_seq=2 ttl=64 time=0.056 ms
These days --link is no longer recommended.
Use a custom network instead of docker0.
The problem with docker0: it does not support access by container name!
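As the hosts file showed, --link "service discovery" is just a static file lookup. A minimal illustration of that lookup, using the sample entry --link wrote into tomcat03's /etc/hosts above (no Docker needed to run this):

```shell
# Resolve a container name against a hosts-format entry, the same way the
# resolver does for a --link'ed container. The entry below is the sample
# line from tomcat03's /etc/hosts.
hosts_entry='172.18.0.3 tomcat02 80749245d051'
name='tomcat02'
ip=$(echo "$hosts_entry" | awk -v n="$name" '{ for (i = 2; i <= NF; i++) if ($i == n) print $1 }')
echo "${name} -> ${ip}"
```

This prints `tomcat02 -> 172.18.0.3`. Because the entry is static, it goes stale if tomcat02 is recreated with a new IP -- one more reason --link fell out of favor.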
Custom networks
List all Docker networks
[root@wulei home]# docker network ls
NETWORK ID NAME DRIVER SCOPE
dd3ecec9cf82 bridge bridge local
7e39bad7912c host host local
6f3688b0e412 none null local
Network modes
bridge: bridged (the default; networks we create ourselves also use the bridge driver)
none: no networking configured
host: share the host's network stack
container: share another container's network namespace (rarely used, very limited)
Test
# Create a custom network
# --driver defaults to bridge
# --subnet 192.168.0.0/16 -- usable addresses 192.168.0.2 - 192.168.255.254
# --gateway 192.168.0.1 -- the gateway
[root@wulei home]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
aa7346dc413421a77a5e8edc196395a0d578213ccf24019c38bf4496fb2c3f1e
[root@wulei home]# docker network ls
NETWORK ID NAME DRIVER SCOPE
dd3ecec9cf82 bridge bridge local
7e39bad7912c host host local
aa7346dc4134 mynet bridge local
6f3688b0e412 none null local
Our own network is now created
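A quick sanity check on the subnet size: a /16 leaves 32 - 16 = 16 host bits, so 192.168.0.0/16 covers 2^16 addresses, minus the network and broadcast addresses (the gateway 192.168.0.1 also comes out of the usable pool):

```shell
# Host-capacity arithmetic for the subnet created above.
prefix=16
total=$(( 1 << (32 - prefix) ))   # 2^16 = 65536 addresses
usable=$(( total - 2 ))           # minus network + broadcast addresses
echo "a /${prefix} subnet has ${usable} usable addresses"
```

This prints `a /16 subnet has 65534 usable addresses` -- plenty of room for containers on mynet.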
[root@wulei home]# docker run -dP --name tomcat01 --net mynet tomcat
501c1f4f6cf3e4a339f2bc4d62b922f16dfe4968cf6d0aab950be6b56a428bce
[root@wulei home]# docker run -dP --name tomcat02 --net mynet tomcat
4afcc7bc887ab7fbad171de1891330fd414862d064a02ed1e1b5054e8914d22d
[root@wulei home]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "aa7346dc413421a77a5e8edc196395a0d578213ccf24019c38bf4496fb2c3f1e",
"Created": "2021-02-14T14:34:25.878536524+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4afcc7bc887ab7fbad171de1891330fd414862d064a02ed1e1b5054e8914d22d": {
"Name": "tomcat02",
"EndpointID": "b2cfd0968cd747d3ada29c3caaee8f31d6cb423b2c6e33783fe3d740756a7d42",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
"501c1f4f6cf3e4a339f2bc4d62b922f16dfe4968cf6d0aab950be6b56a428bce": {
"Name": "tomcat01",
"EndpointID": "e1103e9bb6fc69b83b829c8cfdfd4c3b86ddf5f849b520b956c2c704e70b6b65",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
# Test again by pinging an IP directly
[root@wulei home]# docker exec -it tomcat01 ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.045 ms
...
# Now, using our own network and without --link, pinging by container name also works
[root@wulei home]# docker exec -it tomcat01 ping tomcat02
PING tomcat02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.068 ms
...
For user-defined networks, Docker maintains the name-to-IP mapping for us. This is the recommended way to use Docker networking.
Benefits:
- redis: putting different clusters on different networks keeps each cluster isolated and healthy
- mysql: putting different clusters on different networks keeps each cluster isolated and healthy
Connecting networks
# Start two more containers without specifying a network, so they land on the default docker0
[root@wulei home]# docker run -dP --name tomcat03 tomcat
[root@wulei home]# docker run -dP --name tomcat04 tomcat
# The current container/network topology looks like this:
Right now tomcat01 and tomcat03 certainly cannot reach each other. There is one more command to learn -- note its description:
# Test connecting tomcat03 to mynet
[root@wulei home]# docker network connect mynet tomcat03
# On Linux, no output means success. Now look at the network again
[root@wulei home]# docker network inspect mynet
# The connect simply added tomcat03 to the mynet network (one container, two IP addresses)
# tomcat03 is now connected -- the ping succeeds
[root@wulei home]# docker exec -it tomcat01 ping tomcat03
PING tomcat03 (192.168.0.4) 56(84) bytes of data.
64 bytes from tomcat03.mynet (192.168.0.4): icmp_seq=1 ttl=64 time=0.092 ms
# To connect tomcat04 as well, just run the same command for it
Conclusion: to reach containers on another network, connect them with docker network connect!
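After the connect, `docker network inspect mynet` lists tomcat03 under "Containers" with its new mynet address. When jq is not installed, the assigned addresses can be pulled out with a simple sed filter (the JSON fragments below are samples standing in for the real inspect output):

```shell
# Extract every "IPv4Address" field from docker network inspect output.
# (Sample fragments hardcoded for illustration -- on a live host, pipe in:
#   docker network inspect mynet)
inspect='        "Name": "tomcat02",
        "IPv4Address": "192.168.0.3/16",
        "Name": "tomcat01",
        "IPv4Address": "192.168.0.2/16"'
echo "$inspect" | sed -n 's/.*"IPv4Address": "\([^"]*\)".*/\1/p'
```

This prints one address per container, `192.168.0.3/16` and `192.168.0.2/16` for the sample input.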
Hands-on: deploying a Redis cluster
# Create the network (note: 123.56.0.0/16 is a public address range; a private range such as 172.38.0.0/16 would be a safer choice, but the commands below keep this subnet so they match the output)
docker network create --subnet 123.56.0.0/16 redis
# Generate the 6 Redis config files with a script
#········································· shell script (begin) ·········································
for port in $(seq 1 6); \
do \
mkdir -p /home/mydata/redis/node-${port}/conf
touch /home/mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/home/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 123.56.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
# To practice creating the containers by hand, the docker run step is left commented out of the script
#docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
# -v /home/mydata/redis/node-${port}/data:/data \
# -v /home/mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
# -d --net redis --ip 123.56.0.1${port} redis:5.0.10 redis-server /etc/redis/redis.conf
done
#·········································· shell script (end) ··········································
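The generation logic above can be sanity-checked without touching /home/mydata or Docker by pointing the same loop at a temporary directory (the config here is abbreviated to the per-node fields; the full template is in the script above):

```shell
# Dry-run of the config-generation loop in a temp dir.
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "${base}/node-${port}/conf"
  cat << EOF > "${base}/node-${port}/conf/redis.conf"
port 6379
cluster-announce-ip 123.56.0.1${port}
EOF
done
# Each node should announce its own IP (node-3 -> 123.56.0.13):
announce=$(grep cluster-announce-ip "${base}/node-3/conf/redis.conf")
echo "${announce}"
rm -rf "${base}"
```

This prints `cluster-announce-ip 123.56.0.13`, confirming the `${port}` substitution inside the heredoc works as intended.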
# redis-1
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /home/mydata/redis/node-1/data:/data \
-v /home/mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 123.56.0.11 redis:5.0.10 redis-server /etc/redis/redis.conf
# redis-2
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /home/mydata/redis/node-2/data:/data \
-v /home/mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 123.56.0.12 redis:5.0.10 redis-server /etc/redis/redis.conf
# redis-3
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /home/mydata/redis/node-3/data:/data \
-v /home/mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 123.56.0.13 redis:5.0.10 redis-server /etc/redis/redis.conf
# redis-4
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /home/mydata/redis/node-4/data:/data \
-v /home/mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 123.56.0.14 redis:5.0.10 redis-server /etc/redis/redis.conf
# redis-5
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /home/mydata/redis/node-5/data:/data \
-v /home/mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 123.56.0.15 redis:5.0.10 redis-server /etc/redis/redis.conf
# redis-6
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /home/mydata/redis/node-6/data:/data \
-v /home/mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 123.56.0.16 redis:5.0.10 redis-server /etc/redis/redis.conf
# Check the containers
[root@wulei home]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
516aa3f30d0b redis:5.0.10 "docker-entrypoint.s…" 6 seconds ago Up 6 seconds 0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp redis-6
290c231f62ea redis:5.0.10 "docker-entrypoint.s…" 26 seconds ago Up 26 seconds 0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp redis-5
9b0a3ad3c2d1 redis:5.0.10 "docker-entrypoint.s…" 55 seconds ago Up 55 seconds 0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp redis-4
955d7730a497 redis:5.0.10 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp redis-3
bf9cf8896232 redis:5.0.10 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp redis-2
b61b8c7e50b5 redis:5.0.10 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp redis-1
# Enter redis-1
[root@wulei home]# docker exec -it redis-1 /bin/sh
# Once inside, create the cluster. Note: the "#" at the start of the next line is not a comment -- it is the shell prompt inside redis-1 (this image's sh shows a bare # instead of a path)
# redis-cli --cluster \
create \
123.56.0.11:6379 \
123.56.0.12:6379 \
123.56.0.13:6379 \
123.56.0.14:6379 \
123.56.0.15:6379 \
123.56.0.16:6379 \
--cluster-replicas 1
# Cluster configuration: --cluster-replicas 1 gives each master one replica
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 123.56.0.15:6379 to 123.56.0.11:6379
Adding replica 123.56.0.16:6379 to 123.56.0.12:6379
Adding replica 123.56.0.14:6379 to 123.56.0.13:6379
M: 7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd 123.56.0.11:6379
slots:[0-5460] (5461 slots) master
M: c23449c5a14f6c866cbb9d54ae3dc678e6d5acdb 123.56.0.12:6379
slots:[5461-10922] (5462 slots) master
M: 0fd9fa82e771c701411edc93f4df1334441c4c25 123.56.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 82cd260f912eb42904eb9c404551af9826904cc5 123.56.0.14:6379
replicates 0fd9fa82e771c701411edc93f4df1334441c4c25
S: cd9166fb1baf9781c7da539dd8c67a73ba188ee3 123.56.0.15:6379
replicates 7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd
S: 0900c99c3625c0dd3b581ea14f5521546184073d 123.56.0.16:6379
replicates c23449c5a14f6c866cbb9d54ae3dc678e6d5acdb
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 123.56.0.11:6379)
M: 7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd 123.56.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 0fd9fa82e771c701411edc93f4df1334441c4c25 123.56.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: c23449c5a14f6c866cbb9d54ae3dc678e6d5acdb 123.56.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 0900c99c3625c0dd3b581ea14f5521546184073d 123.56.0.16:6379
slots: (0 slots) slave
replicates c23449c5a14f6c866cbb9d54ae3dc678e6d5acdb
S: cd9166fb1baf9781c7da539dd8c67a73ba188ee3 123.56.0.15:6379
slots: (0 slots) slave
replicates 7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd
S: 82cd260f912eb42904eb9c404551af9826904cc5 123.56.0.14:6379
slots: (0 slots) slave
replicates 0fd9fa82e771c701411edc93f4df1334441c4c25
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Connect to the cluster with redis-cli -c. Again, the "#" on the next line is the shell prompt inside the container, not a comment
# redis-cli -c
# Get cluster info
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:440
cluster_stats_messages_pong_sent:451
cluster_stats_messages_sent:891
cluster_stats_messages_ping_received:446
cluster_stats_messages_pong_received:440
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:891
# Check the node information
127.0.0.1:6379> cluster nodes
0fd9fa82e771c701411edc93f4df1334441c4c25 123.56.0.13:6379@16379 master - 0 1613291206512 3 connected 10923-16383
c23449c5a14f6c866cbb9d54ae3dc678e6d5acdb 123.56.0.12:6379@16379 master - 0 1613291207514 2 connected 5461-10922
0900c99c3625c0dd3b581ea14f5521546184073d 123.56.0.16:6379@16379 slave c23449c5a14f6c866cbb9d54ae3dc678e6d5acdb 0 1613291207514 6 connected
cd9166fb1baf9781c7da539dd8c67a73ba188ee3 123.56.0.15:6379@16379 slave 7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd 0 1613291207000 5 connected
7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd 123.56.0.11:6379@16379 myself,master - 0 1613291206000 1 connected 0-5460
82cd260f912eb42904eb9c404551af9826904cc5 123.56.0.14:6379@16379 slave 0fd9fa82e771c701411edc93f4df1334441c4c25 0 1613291206000 4 connected
Notes:
- When a Redis master goes down, one of its replicas is promoted to master through an election
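One way to verify that claim: stop a master (e.g. `docker stop redis-1`), wait past cluster-node-timeout, and re-run `cluster nodes` -- the slot range should have moved to the replica. Finding the current owner of a slot range can be scripted with awk (the two sample lines are copied from the cluster nodes output above):

```shell
# Find which node currently owns a given slot range in `cluster nodes`
# output. (Sample lines hardcoded -- on a live cluster, pipe in:
#   redis-cli -c cluster nodes)
nodes='0fd9fa82e771c701411edc93f4df1334441c4c25 123.56.0.13:6379@16379 master - 0 1613291206512 3 connected 10923-16383
7d7e71f42f423a6a17fe8c3841ceb9ce00e5b0fd 123.56.0.11:6379@16379 myself,master - 0 1613291206000 1 connected 0-5460'
owner=$(echo "$nodes" | awk '$3 ~ /master/ && $NF == "0-5460" { split($2, a, "@"); print a[1] }')
echo "slots 0-5460 are served by ${owner}"
```

With the sample input this prints `slots 0-5460 are served by 123.56.0.11:6379`; after a failover of redis-1 the same filter would report 123.56.0.15 (its replica) instead.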
Packaging a SpringBoot microservice as a Docker image
1. Build a SpringBoot project
// Any simple hello endpoint will do
@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello() {
        return "hello docker";
    }
}
2. Package the application
# Upload the jar and the Dockerfile to the server
[root@wulei idea]# ls
Dockerfile docker-test-0.0.1-SNAPSHOT.jar
3. Write the Dockerfile
FROM java:8
COPY *.jar /app.jar
# CMD here only supplies default arguments, which are appended to ENTRYPOINT
CMD ["--server.port=8086"]
EXPOSE 8086
ENTRYPOINT ["java","-jar","/app.jar"]
4. Build the image
[root@wulei idea]# docker build -t test:1.0 .
5. Run it
[root@wulei idea]# docker run -d -p 8086:8086 test:1.0
6. Visit http://ip:8086/hello to verify