Build a Redis cluster on a single host using containers. It is called a pseudo-cluster because a real Redis cluster is meant to be distributed across machines; here, to keep costs down, all six nodes of a three-master, three-replica cluster run on one host.
Redis version: 6.2.6
Deployment
Run six nodes
docker-compose.yaml
version: '3'
services:
  redis-1:
    image: redis
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes
    volumes:
      - redis-1-data:/data
    ports:
      - "6381:6379"
      - "16381:16379"  # cluster bus port = client port + 10000
    networks:
      redis-net:
  redis-2:
    image: redis
    command: redis-server --port 6380 --cluster-enabled yes --cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes
    volumes:
      - redis-2-data:/data
    ports:
      - "6382:6380"
      - "16382:16380"
    networks:
      redis-net:
  redis-3:
    image: redis
    command: redis-server --port 6381 --cluster-enabled yes --cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes
    volumes:
      - redis-3-data:/data
    ports:
      - "6383:6381"
      - "16383:16381"
    networks:
      redis-net:
  redis-4:
    image: redis
    command: redis-server --port 6382 --cluster-enabled yes --cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes
    volumes:
      - redis-4-data:/data
    ports:
      - "6384:6382"
      - "16384:16382"
    networks:
      redis-net:
  redis-5:
    image: redis
    command: redis-server --port 6383 --cluster-enabled yes --cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes
    volumes:
      - redis-5-data:/data
    ports:
      - "6385:6383"
      - "16385:16383"
    networks:
      redis-net:
  redis-6:
    image: redis
    command: redis-server --port 6384 --cluster-enabled yes --cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes
    volumes:
      - redis-6-data:/data
    ports:
      - "6386:6384"
      - "16386:16384"
    networks:
      redis-net:
volumes:
  redis-1-data:
    driver: local
  redis-2-data:
    driver: local
  redis-3-data:
    driver: local
  redis-4-data:
    driver: local
  redis-5-data:
    driver: local
  redis-6-data:
    driver: local
networks:
  redis-net:
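The six service stanzas differ only in their port numbers, so the file can be generated instead of hand-copied. A sketch (the service_stanza helper is my own; it also publishes the cluster bus ports, which match the mappings visible in the docker ps output below):

```python
# Generate the six near-identical Redis service stanzas for docker-compose.yaml.
# Node i listens on client port 6378+i inside the container, is published on
# host port 6380+i, and the cluster bus port is always client port + 10000.

def service_stanza(i: int) -> str:
    client = 6378 + i  # 6379 .. 6384
    host = 6380 + i    # 6381 .. 6386
    return (
        f"  redis-{i}:\n"
        f"    image: redis\n"
        f"    command: redis-server --port {client} --cluster-enabled yes "
        f"--cluster-config-file /data/nodes.conf --cluster-node-timeout 5000 --appendonly yes\n"
        f"    volumes:\n"
        f"      - redis-{i}-data:/data\n"
        f"    ports:\n"
        f'      - "{host}:{client}"\n'
        f'      - "{host + 10000}:{client + 10000}"\n'
        f"    networks:\n"
        f"      redis-net:\n"
    )

compose = "version: '3'\nservices:\n" + "".join(service_stanza(i) for i in range(1, 7))
print(compose)
```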
Deploy the containers
# cd into the directory containing the compose file, then run:
docker-compose up -d
➜ ~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3cc83d4354d redis "docker-entrypoint.s…" 17 hours ago Up 17 hours 6379/tcp, 0.0.0.0:6382->6380/tcp, 0.0.0.0:16382->16380/tcp redis-redis-2-1
3ca1cb3e44a0 redis "docker-entrypoint.s…" 17 hours ago Up 17 hours 6379/tcp, 0.0.0.0:6386->6384/tcp, 0.0.0.0:16386->16384/tcp redis-redis-6-1
90b6c9db88b7 redis "docker-entrypoint.s…" 17 hours ago Up 17 hours 0.0.0.0:6381->6379/tcp, 0.0.0.0:16381->16379/tcp redis-redis-1-1
34885f8048de redis "docker-entrypoint.s…" 17 hours ago Up 17 hours 6379/tcp, 0.0.0.0:6385->6383/tcp, 0.0.0.0:16385->16383/tcp redis-redis-5-1
994604d01bf3 redis "docker-entrypoint.s…" 17 hours ago Up 17 hours 6379/tcp, 0.0.0.0:6383->6381/tcp, 0.0.0.0:16383->16381/tcp redis-redis-3-1
568c5f837eb7 redis "docker-entrypoint.s…" 17 hours ago Up 17 hours 6379/tcp, 0.0.0.0:6384->6382/tcp, 0.0.0.0:16384->16382/tcp redis-redis-4-1
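Each node needs two published ports: the client port and the cluster bus port (client port + 10000). A small sketch that parses one PORTS field from the docker ps output above and checks both mappings are present (the port_mappings helper is hypothetical, not part of any tool):

```python
import re

def port_mappings(ports_field):
    """Parse a docker ps PORTS field into {container_port: host_port}."""
    return {int(cont): int(host)
            for host, cont in re.findall(r"0\.0\.0\.0:(\d+)->(\d+)/tcp", ports_field)}

# PORTS field of redis-redis-2-1, copied from the docker ps output above
ports = port_mappings("6379/tcp, 0.0.0.0:6382->6380/tcp, 0.0.0.0:16382->16380/tcp")

client_port = 6380
assert client_port in ports          # client port is published
assert client_port + 10000 in ports  # cluster bus port is published too
print(ports)  # → {6380: 6382, 16380: 16382}
```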
Create the cluster with redis-cli
# exec into any one of the containers, then form the cluster
redis-cli --cluster create [host ip]:6379 [host ip]:6380 [host ip]:6381 [host ip]:6382 [host ip]:6383 [host ip]:6384 --cluster-replicas 1
This command creates a Redis Cluster out of the six instances: three become masters and the other three become replicas. The --cluster-replicas 1 option gives each master exactly one replica, which improves availability and fault tolerance: if a master fails, its replica can take over and keep serving requests.
The command produces output like the following:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica [host ip]:6383 to [host ip]:6379
Adding replica [host ip]:6384 to [host ip]:6380
Adding replica [host ip]:6382 to [host ip]:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: eec5d53ffa77a4a97d5eaf01020cae55f06bafb4 [host ip]:6379
   slots:[0-5460] (5461 slots) master
M: 1e2140229c82f64c542476bb1df81ad555bc904a [host ip]:6380
   slots:[5461-10922] (5462 slots) master
M: bff2862af526bfdbfc9aa21e24ceddf29edbd9b4 [host ip]:6381
   slots:[10923-16383] (5461 slots) master
S: 9532c7738bc64e459d60c73c3307cffe02566667 [host ip]:6382
   replicates 1e2140229c82f64c542476bb1df81ad555bc904a
S: 244582ca81994034d0b92e8e7d53e3215f80132c [host ip]:6383
   replicates bff2862af526bfdbfc9aa21e24ceddf29edbd9b4
S: f4cd9c2279d08d1eb0f71bcd2f56855b8789898f [host ip]:6384
   replicates eec5d53ffa77a4a97d5eaf01020cae55f06bafb4
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node [host ip]:6379)
M: eec5d53ffa77a4a97d5eaf01020cae55f06bafb4 [host ip]:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 244582ca81994034d0b92e8e7d53e3215f80132c 172.19.0.2:6383
   slots: (0 slots) slave
   replicates bff2862af526bfdbfc9aa21e24ceddf29edbd9b4
M: bff2862af526bfdbfc9aa21e24ceddf29edbd9b4 172.19.0.7:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: f4cd9c2279d08d1eb0f71bcd2f56855b8789898f 172.19.0.5:6384
   slots: (0 slots) slave
   replicates eec5d53ffa77a4a97d5eaf01020cae55f06bafb4
S: 9532c7738bc64e459d60c73c3307cffe02566667 172.19.0.3:6382
   slots: (0 slots) slave
   replicates 1e2140229c82f64c542476bb1df81ad555bc904a
M: 1e2140229c82f64c542476bb1df81ad555bc904a 172.19.0.4:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
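The slot allocation above (16384 slots split across three masters) follows the Redis Cluster rule HASH_SLOT = CRC16(key) mod 16384, where CRC16 is the XMODEM variant. A minimal sketch (the hash-tag {...} rule that real clients also apply is omitted here):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM, polynomial 0x1021), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    # Simplified: ignores hash tags ({...}), which force keys into one slot.
    return crc16_xmodem(key) % 16384

# Check value from the Redis Cluster specification: CRC16("123456789") == 0x31C3
assert crc16_xmodem(b"123456789") == 0x31C3
print(hash_slot(b"foo"))
```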
Check node status
127.0.0.1:6379> cluster nodes
244582ca81994034d0b92e8e7d53e3215f80132c 172.19.0.2:6383@16383 slave bff2862af526bfdbfc9aa21e24ceddf29edbd9b4 0 1683514923000 3 connected
bff2862af526bfdbfc9aa21e24ceddf29edbd9b4 172.19.0.7:6381@16381 master - 0 1683514922000 3 connected 10923-16383
f4cd9c2279d08d1eb0f71bcd2f56855b8789898f 172.19.0.5:6384@16384 slave eec5d53ffa77a4a97d5eaf01020cae55f06bafb4 0 1683514922000 1 connected
eec5d53ffa77a4a97d5eaf01020cae55f06bafb4 172.19.0.6:6379@16379 myself,master - 0 1683514922000 1 connected 0-5460
9532c7738bc64e459d60c73c3307cffe02566667 172.19.0.3:6382@16382 slave 1e2140229c82f64c542476bb1df81ad555bc904a 0 1683514923285 2 connected
1e2140229c82f64c542476bb1df81ad555bc904a 172.19.0.4:6380@16380 master - 0 1683514922000 2 connected 5461-10922
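The cluster nodes output is line-oriented, so basic health checks are easy to script. A sketch that parses the exact lines above and verifies the three-master/three-replica layout and full slot coverage:

```python
# CLUSTER NODES output copied verbatim from above
CLUSTER_NODES = """\
244582ca81994034d0b92e8e7d53e3215f80132c 172.19.0.2:6383@16383 slave bff2862af526bfdbfc9aa21e24ceddf29edbd9b4 0 1683514923000 3 connected
bff2862af526bfdbfc9aa21e24ceddf29edbd9b4 172.19.0.7:6381@16381 master - 0 1683514922000 3 connected 10923-16383
f4cd9c2279d08d1eb0f71bcd2f56855b8789898f 172.19.0.5:6384@16384 slave eec5d53ffa77a4a97d5eaf01020cae55f06bafb4 0 1683514922000 1 connected
eec5d53ffa77a4a97d5eaf01020cae55f06bafb4 172.19.0.6:6379@16379 myself,master - 0 1683514922000 1 connected 0-5460
9532c7738bc64e459d60c73c3307cffe02566667 172.19.0.3:6382@16382 slave 1e2140229c82f64c542476bb1df81ad555bc904a 0 1683514923285 2 connected
1e2140229c82f64c542476bb1df81ad555bc904a 172.19.0.4:6380@16380 master - 0 1683514922000 2 connected 5461-10922
"""

def parse_nodes(text):
    """Split each CLUSTER NODES line into id, address, role and slot ranges."""
    nodes = []
    for line in text.strip().splitlines():
        fields = line.split()
        flags = fields[2].split(",")
        nodes.append({
            "id": fields[0],
            "addr": fields[1],
            "role": "master" if "master" in flags else "slave",
            "slots": fields[8:],  # e.g. ["0-5460"]; empty for replicas
        })
    return nodes

nodes = parse_nodes(CLUSTER_NODES)
masters = [n for n in nodes if n["role"] == "master"]

# Sum the sizes of all "lo-hi" slot ranges; single-slot entries do not
# occur in this output, so only the ranged form is handled.
covered = 0
for m in masters:
    for rng in m["slots"]:
        lo, _, hi = rng.partition("-")
        covered += int(hi) - int(lo) + 1

print(len(masters), "masters,", covered, "slots covered")  # → 3 masters, 16384 slots covered
```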
Problems encountered:
Stuck at "Waiting for the cluster to join"
The usual explanation is that the cluster bus ports (client port + 10000) of the cluster nodes are not open. However, even with those ports published on the containers, running the cluster-create command from outside the containers still hangs here.
- Attempt: run the cluster-create command inside one of the containers, trying the Docker network IPs / the host IP
- Guess: the hang is caused by how the containers communicate with each other
Solution: exec into one of the containers and run the cluster-create command from there.
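A plausible explanation (an assumption, not verified in the source): during CLUSTER MEET the nodes gossip the addresses they know themselves by, which here are IPs on Docker's internal bridge network (172.19.0.x); those are reachable between containers but not from the host, so redis-cli run on the host waits forever. A quick check that all advertised node addresses sit on a private bridge subnet:

```python
import ipaddress

# node addresses as reported by CLUSTER NODES above
advertised = ["172.19.0.2", "172.19.0.3", "172.19.0.4",
              "172.19.0.5", "172.19.0.6", "172.19.0.7"]

# assumption: the compose project got a default bridge subnet of 172.19.0.0/16
bridge = ipaddress.ip_network("172.19.0.0/16")

for ip in advertised:
    addr = ipaddress.ip_address(ip)
    # private bridge addresses are routable between containers on that network,
    # but not necessarily from outside the Docker network
    assert addr.is_private and addr in bridge
print("all advertised node addresses are on the private bridge network")
```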