Understanding docker0
First, remove all existing containers and images
docker rm -f $(docker ps -qa)
docker rmi $(docker images -qa)
Check the network information
ip a
Among the interfaces there is a docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:83:27:d2:d4 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
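The inet line shows docker0 at 172.17.0.1/16. As a quick sanity check (pure shell arithmetic, nothing Docker-specific), a /16 leaves room for tens of thousands of containers:

```shell
# Addresses available to containers on docker0's default 172.17.0.0/16:
# the /16 block minus the network address, broadcast address, and the
# gateway itself (172.17.0.1).
prefix=16
total=$(( 2 ** (32 - prefix) ))
echo $(( total - 3 ))   # 65533
```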
Create a container
docker run -d -P --name tomcat01 tomcat
docker0 on the host and the NIC inside the container are on the same subnet
[root@docker ~]# docker exec -it tomcat01 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Ping the container's address from outside the container
[root@docker ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.131 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.119 ms
The Linux host can ping into the Docker container
docker network ls shows the available network modes
[root@docker ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
bb3aac34755d bridge bridge local
faa06498f8b8 host host local
674f1e2caf4c none null local
Common ways for containers to communicate with each other
Inside the same host
- Every time we start a Docker container, Docker assigns it an IP; as soon as Docker is installed, the host gets a docker0 NIC
- The NICs are bridged (this can be seen with docker network ls); the underlying technology is the veth pair
- Containers communicate with each other through this virtual switch
Create another container
docker run -d -P --name tomcat02 tomcat
Check the network information again
ip a
6: veth091d8e1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 62:f8:47:a3:d9:be brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::60f8:47ff:fea3:d9be/64 scope link
valid_lft forever preferred_lft forever
8: veth9a5c240@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 4e:42:99:58:92:f1 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::4c42:99ff:fe58:92f1/64 scope link
valid_lft forever preferred_lft forever
Another NIC has appeared
- The NICs these containers bring along always come in pairs
- A veth pair is a pair of virtual device interfaces that appear together: one end attaches to the protocol stack, while the other ends connect to each other
- Thanks to this property, a veth pair acts as a bridge connecting all kinds of virtual network devices
- OpenStack, connections between Docker containers, and OVS links all use veth-pair technology
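The `@ifN` suffixes in the `ip a` output encode each end's peer interface index, which is how the two halves of a veth pair can be matched up. A rough illustration using sample lines copied from the output above (parsing only, no Docker involved):

```shell
# Sample interface lines (abbreviated) from the host's and container's `ip a`.
host_line='6: veth091d8e1@if5: mtu 1500 master docker0'
ctr_line='5: eth0@if6: mtu 1500'

# Extract each interface's own index and its peer's index from the @ifN suffix.
host_idx=$(echo "$host_line"  | sed -E 's/^([0-9]+):.*/\1/')
host_peer=$(echo "$host_line" | sed -E 's/.*@if([0-9]+):.*/\1/')
ctr_idx=$(echo "$ctr_line"    | sed -E 's/^([0-9]+):.*/\1/')
ctr_peer=$(echo "$ctr_line"   | sed -E 's/.*@if([0-9]+):.*/\1/')

# The two ends reference each other: 6 <-> 5.
[ "$host_peer" = "$ctr_idx" ] && [ "$ctr_peer" = "$host_idx" ] && echo "paired"
```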
Test whether the two containers can communicate
[root@docker ~]# docker exec -it tomcat01 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.051 ms
[root@docker ~]# docker exec -it tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.112 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.063 ms
- tomcat01 and tomcat02 share a common gateway, docker0
- When no network is specified, every container is routed through docker0, and Docker assigns it an available IP by default
- Docker uses Linux bridging; docker0 is the bridge for Docker containers on the host
- All networking in Docker is virtual, and virtual forwarding is efficient
- As soon as a container is deleted, its corresponding veth pair is gone as well
docker rm -f $(docker ps -qa)
ip a
Commands for inspecting container networking
ip link show
docker network ls
docker network inspect bridge
Container linking
Suppose we have written a microservice with database url=ip. The database IP changes while the project keeps running. We would like to handle this by accessing containers by name instead of IP.
That is what container linking is for
[root@docker ~]# docker exec -it tomcat01 ping tomcat02
ping: tomcat02: Name or service not known
[root@docker ~]# docker exec -it tomcat02 ping tomcat01
ping: tomcat01: Name or service not known
At this point, pinging by name fails
If we now create a tomcat03 that should reach tomcat01 by name
--link solves this connectivity problem
docker run -d -P --name tomcat03 --link tomcat01 tomcat
tomcat03 can now ping tomcat01
[root@docker ~]# docker exec -it tomcat03 ping tomcat01
PING tomcat01 (172.17.0.2) 56(84) bytes of data.
64 bytes from tomcat01 (172.17.0.2): icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from tomcat01 (172.17.0.2): icmp_seq=2 ttl=64 time=0.039 ms
But tomcat01 cannot ping tomcat03
[root@docker ~]# docker exec -it tomcat01 ping tomcat03
ping: tomcat03: Name or service not known
Check the current docker0 network information
[root@docker ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
bb3aac34755d bridge bridge local
faa06498f8b8 host host local
674f1e2caf4c none null local
[root@docker ~]# docker network inspect bb3aac34755d
When tomcat03 was created, tomcat01's hostname and IP address were added to its /etc/hosts file
[root@docker ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 tomcat01 9b63068b93a9
172.17.0.4 e420707f2b68
Now look at tomcat01's hosts file
[root@docker ~]# docker exec -it tomcat01 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 9b63068b93a9
There is no entry binding tomcat03's hostname and address, which is why tomcat01 cannot ping tomcat03
The essence
--link simply adds a line such as 172.17.0.2 tomcat01 9b63068b93a9 to the hosts file
However, this approach is no longer recommended
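The one-way nature of --link is easy to reproduce by hand, since name resolution through a hosts-style file is just a line lookup. A minimal simulation (using a temporary file rather than a real container's /etc/hosts):

```shell
# Simulate the one-way /etc/hosts entry that --link adds.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n172.17.0.2 tomcat01 9b63068b93a9\n' > "$hosts"

# tomcat03's side can resolve tomcat01 ...
awk '$2 == "tomcat01" {print $1}' "$hosts"   # 172.17.0.2
# ... but there is no tomcat03 entry on the other side, so the reverse
# lookup finds nothing.
awk '$2 == "tomcat03" {print $1}' "$hosts"
rm -f "$hosts"
```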
Custom networks
List all Docker networks
[root@docker ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
bb3aac34755d bridge bridge local
faa06498f8b8 host host local
674f1e2caf4c none null local
bridge: bridged networking (the default; self-created networks also use bridge mode)
none: no networking configured
host: share the host's network stack
container: share another container's network (rarely used, very limited)
The command we ran directly
docker run -d -P --name tomcat01 tomcat
and
docker run -d -P --name tomcat01 --net bridge tomcat
are actually identical; --net bridge is the implicit default
Characteristics of docker0:
It is the default, but it is limited: containers cannot reach each other by name, and --link is needed to connect them
So we can create a custom network instead
Create a custom network
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
[root@docker ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
bb3aac34755d bridge bridge local
faa06498f8b8 host host local
0a4bca199bba mynet bridge local
674f1e2caf4c none null local
Inspect the network
docker network inspect mynet
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
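Before relying on a subnet/gateway pair, it is easy to verify that the gateway actually lies inside the subnet. A small sketch in plain shell arithmetic (no Docker involved; the addresses are the ones chosen above):

```shell
# Pack a dotted quad into a single integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

net=$(ip_to_int 192.168.0.0)
gw=$(ip_to_int 192.168.0.1)
prefix=16
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))

# The gateway belongs to the subnet iff both share the same network bits.
[ $(( gw & mask )) -eq $(( net & mask )) ] && echo "gateway in subnet"
```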
Create containers on the custom network
docker run -d -P --name tomcat-net-01 --net mynet tomcat
docker run -d -P --name tomcat-net-02 --net mynet tomcat
Then inspect the custom network again
docker network inspect mynet
and you can see the network addresses of the newly created containers
Test network communication between the two containers
First by IP address
[root@docker ~]# docker exec -it tomcat-net-01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.167 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.052 ms
Then by container name
[root@docker ~]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.568 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.044 ms
- On a custom network, Docker maintains the name-to-address mapping for us, so custom networks are the recommended approach
- Another benefit is that different clusters' networks are isolated from each other, which keeps them secure
Connectivity across different subnets
First, test whether the docker0 subnet and our custom mynet can reach each other
Create two default containers again
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat02 tomcat
Test whether they can ping each other
[root@docker ~]# docker exec -it tomcat01 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
Different subnets naturally cannot reach each other
So we use docker network connect to join a container to the other network
docker network connect mynet tomcat01
docker network inspect mynet
Inspect mynet's information
tomcat01 has been added directly into the mynet network
Now the ping succeeds
[root@docker ~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.032 ms
But tomcat02 still cannot connect, because only tomcat01 was joined to mynet
[root@docker ~]# docker exec -it tomcat02 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
Connecting different custom networks
Create another custom network
docker network create --driver bridge --subnet 172.16.1.0/24 --gateway 172.16.1.1 mynet2
Create containers on mynet2
docker run -d -P --name tomcat-net2-01 --net mynet2 tomcat
docker run -d -P --name tomcat-net2-02 --net mynet2 tomcat
Inspect it
docker network inspect mynet2
"Containers": {
"0854cd7532471832bb49ab9ce776d4b851ff7d9bad30360b6d9771745eda7ea6": {
"Name": "tomcat-net2-01",
"EndpointID": "eaa78572546559c8cb942e2fab81a488908570988b73211a2822de870017556e",
"MacAddress": "02:42:ac:10:01:02",
"IPv4Address": "172.16.1.2/24",
"IPv6Address": ""
},
"9ad32a10cbd3758ec94a181007c75ffef901d9653e9b51481cf3ebe49c153078": {
"Name": "tomcat-net2-02",
"EndpointID": "d5227bd0adbe698c3bcb50e357ed17727c8c39252db7fb501e81c262e8e82ee2",
"MacAddress": "02:42:ac:10:01:03",
"IPv4Address": "172.16.1.3/24",
"IPv6Address": ""
}
},
Test whether a container on mynet can be pinged
[root@docker ~]# docker exec -it tomcat-net2-01 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known
Then connect the networks
docker network connect mynet2 tomcat-net-01
docker network inspect mynet2
"8d49c1d32fe68bec3c4f0b783800b709f67034342ecb73975cf9fd385128cb3d": {
"Name": "tomcat-net-01",
"EndpointID": "54e0bfa27f72ab886c9c37861eb6e4375e3324387ff71f9918406caa21d692
b8", "MacAddress": "02:42:ac:10:01:04",
"IPv4Address": "172.16.1.4/24",
"IPv6Address": ""
},
tomcat-net-01 has been added directly into the mynet2 network
Ping again
[root@docker ~]# docker exec -it tomcat-net2-01 ping tomcat-net-01
PING tomcat-net-01 (172.16.1.4) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet2 (172.16.1.4): icmp_seq=1 ttl=64 time=0.138 ms
64 bytes from tomcat-net-01.mynet2 (172.16.1.4): icmp_seq=2 ttl=64 time=0.043 ms
Hands-on: deploying a Redis cluster
First remove all the previous containers
docker rm -f $(docker ps -qa)
Create the Redis cluster network
[root@docker ~]# docker network create redis --subnet 172.38.0.0/16
4a95541bafd13e76a9be208d6acb64d307b25f25d1c114b1a594dee70f8ceec1
[root@docker ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
bb3aac34755d bridge bridge local
faa06498f8b8 host host local
0a4bca199bba mynet bridge local
6c55688f7167 mynet2 bridge local
674f1e2caf4c none null local
4a95541bafd1 redis bridge local
Because each Redis cluster node needs its own configuration file, we write a shell script
vim redis.sh
#! /bin/bash
for port in $(seq 1 6);
do
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
tee /mydata/redis/node-${port}/conf/redis.conf <<-EOF
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
Run the script
bash redis.sh
The configuration files for all six nodes have been generated
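To sanity-check the loop's templating without touching /mydata, the same pattern can be dry-run into a temporary directory (illustrative only; paths and the three-node count are arbitrary):

```shell
# Dry-run of the config template into a temp dir instead of /mydata.
base=$(mktemp -d)
for port in 1 2 3; do
  mkdir -p "$base/node-${port}/conf"
  cat > "$base/node-${port}/conf/redis.conf" <<EOF
port 6379
cluster-enabled yes
cluster-announce-ip 172.38.0.1${port}
EOF
done

# Each node got its own announce IP.
grep cluster-announce-ip "$base/node-2/conf/redis.conf"   # cluster-announce-ip 172.38.0.12
rm -rf "$base"
```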
Start the Redis containers
docker run -d -p 6371:6379 -p 16371:16379 --name redis-1 -v /mydata/redis/node-1/data:/data -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf --net redis --ip 172.38.0.11 redis:6.2.2 redis-server /etc/redis/redis.conf
# 6379 is the Redis port
# 16379 is the Redis cluster bus (TCP) port
# --net attaches the container to the custom redis network
# --ip pins this container's IP address
# redis:6.2.2 is the image and version
# redis-server /etc/redis/redis.conf starts the server with this config file
Then start the remaining five Redis containers the same way
docker run -d -p 6372:6379 -p 16372:16379 --name redis-2 -v /mydata/redis/node-2/data:/data -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf --net redis --ip 172.38.0.12 redis:6.2.2 redis-server /etc/redis/redis.conf
Write a startup script
vim redis1.sh
#! /bin/bash
for port in $(seq 1 6);
do
docker run -d -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} -v /mydata/redis/node-${port}/data:/data -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf --net redis --ip 172.38.0.1${port} redis:6.2.2 redis-server /etc/redis/redis.conf
done
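The loop above only varies the host-side ports, volume paths, and IPs. As a quick dry run (plain echo, no docker), the six mappings it produces are:

```shell
# Echo the port/IP plan of the startup loop without starting any containers.
for port in $(seq 1 6); do
  echo "redis-${port}: host 637${port}->6379, bus 1637${port}->16379, ip 172.38.0.1${port}"
done
```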
Run the script
[root@docker home]# bash redis1.sh
3addccf04cba9c73c8d900cf4f0484a1d9f68660f9eed32edfdf0f30d34c819a
04817643420edf0e0b7be929656520fe3b1c386bbf821e496f87ec45efae32a3
f30621fcfabb34283f07157936d8f0df7ae3150099edeccea3c0241ad22b9eb2
943f25f416ce71a777c4844a560c92420080883db11fac422d2729068aa6b979
2df6b54a1dc70eed111d4ba04fd3c9b6f8cbf024c95d2f35b51f590ee5d6ce78
a8e2dd29809790fa1f8b85411288888338e6f7ea4afe142e38502cfe4d8a767d
Check the containers
docker ps
Enter a Redis container
For creating the cluster, you can refer to the Redis cluster documentation
docker exec -it redis-1 /bin/bash
root@3addccf04cba:/data# ls
appendonly.aof nodes.conf
Create the cluster on the master node
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: e0cc3f54b7d1cfd2bd12d17dfb6f2df04025d328 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: a469dc0e6ec949e17ab13aa89bb70c6bfcc36d1a 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: c4b0299258af02fa7e6f5d8fa844961836e0fd9b 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: 12ef1fc2cd9437aa6ae847d4a151aa6c05927a43 172.38.0.14:6379
replicates c4b0299258af02fa7e6f5d8fa844961836e0fd9b
S: 55105d1c3cc9b01d778cdd29ffd61aea7b639e08 172.38.0.15:6379
replicates e0cc3f54b7d1cfd2bd12d17dfb6f2df04025d328
S: 1a566cd13eaffacadbe8003dcc234330d4bd3499 172.38.0.16:6379
replicates a469dc0e6ec949e17ab13aa89bb70c6bfcc36d1a
3 masters and 3 replicas
The cluster is being configured
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: e0cc3f54b7d1cfd2bd12d17dfb6f2df04025d328 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: a469dc0e6ec949e17ab13aa89bb70c6bfcc36d1a 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 1a566cd13eaffacadbe8003dcc234330d4bd3499 172.38.0.16:6379
slots: (0 slots) slave
replicates a469dc0e6ec949e17ab13aa89bb70c6bfcc36d1a
M: c4b0299258af02fa7e6f5d8fa844961836e0fd9b 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 12ef1fc2cd9437aa6ae847d4a151aa6c05927a43 172.38.0.14:6379
slots: (0 slots) slave
replicates c4b0299258af02fa7e6f5d8fa844961836e0fd9b
S: 55105d1c3cc9b01d778cdd29ffd61aea7b639e08 172.38.0.15:6379
slots: (0 slots) slave
replicates e0cc3f54b7d1cfd2bd12d17dfb6f2df04025d328
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connect to the cluster
redis-cli -c
127.0.0.1:6379> cluster info # show cluster information
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3 # 3 master/replica groups
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:175
cluster_stats_messages_pong_sent:191
cluster_stats_messages_sent:366
cluster_stats_messages_ping_received:186
cluster_stats_messages_pong_received:175
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:366
Check the nodes
127.0.0.1:6379> cluster nodes
Three masters and three replicas are visible
Set a key
127.0.0.1:6379> set name maomao
-> Redirected to slot [5798] located at 172.38.0.12:6379
OK
The output shows the data was slotted onto 172.38.0.12:6379, the second master
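This matches the slot allocation printed during cluster creation: slots 0-5460, 5461-10922, and 10923-16383 went to the three masters, so slot 5798 lands on the second. A quick range check:

```shell
# Map a hash slot to its master using the ranges from `cluster create` above.
slot=5798
if   [ "$slot" -le 5460 ];  then echo "master 1 (172.38.0.11)"
elif [ "$slot" -le 10922 ]; then echo "master 2 (172.38.0.12)"
else                             echo "master 3 (172.38.0.13)"
fi
# -> master 2 (172.38.0.12)
```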
Testing high availability
Stop the container of the second master to verify that its replica can take over as master, achieving high availability
[root@docker ~]# docker stop redis-2
redis-2
Now fetch the value
127.0.0.1:6379> get name
-> Redirected to slot [5798] located at 172.38.0.16:6379
"maomao"
172.38.0.16:6379> cluster nodes
Node 172.38.0.16 has become the master
1a566cd13eaffacadbe8003dcc234330d4bd3499 172.38.0.16:6379@16379 myself,master - 0 1619258193000 7 connected 5461-10922
High availability achieved