I. Redis Cluster
1.1 Introduction
1. Redis Cluster is a facility that shares data across multiple Redis nodes.
2. Redis Cluster does not support commands that touch multiple keys in different slots, because those would require moving data between nodes; that would cost Redis its usual performance and could cause unpredictable errors under heavy load.
3. Redis Cluster gains a degree of availability through partitioning: in practice it keeps serving commands even when some nodes are down or unreachable.
1.2 Advantages
- Data is automatically split across the nodes
- The cluster keeps serving commands when some of its nodes fail or become unreachable
1.3 Redis Cluster data sharding
- Redis Cluster does not use consistent hashing; it introduces the concept of hash slots instead
- The cluster has 16384 hash slots
- Each key is run through CRC16 and the result taken modulo 16384 to decide which slot it lands in
- Every node in the cluster is responsible for a subset of the slots
Take a 3-node cluster as an example:
- Node A holds hash slots 0 to 5500
- Node B holds hash slots 5501 to 11000
- Node C holds hash slots 11001 to 16383
Nodes can be added or removed
- Adding or removing a node requires no downtime
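The range logic above can be sketched in a few lines of shell (the 0-5500 / 5501-11000 / 11001-16383 ranges are the illustrative ones from this example, not fixed values):

```shell
# Map a hash slot to its owning node in the 3-node example above.
owner_of_slot() {
  slot=$1
  if   [ "$slot" -le 5500 ];  then echo "A"
  elif [ "$slot" -le 11000 ]; then echo "B"
  else                             echo "C"
  fi
}
owner_of_slot 5798   # prints "B": 5798 falls in 5501-11000
```

In a live cluster the slot number itself comes from `CLUSTER KEYSLOT <key>`, i.e. CRC16(key) mod 16384.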
1.4 The Redis Cluster master-replica model
With three nodes A, B and C, if node B fails the whole cluster becomes unusable, because the slot range 5501-11000 is no longer covered.
Giving each node a replica A1, B1, C1 turns the cluster into three master nodes plus three slave nodes; when node B fails, the cluster elects B1 as the new master and keeps serving.
If both B and B1 fail, the cluster becomes unavailable.
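The availability rule just described can be captured in a tiny sketch (a toy model of the outcome, not of how Redis actually runs its replica election):

```shell
# Who serves slots 5501-11000, given the liveness of B and B1 (1 = up)?
serving_b_range() {
  if   [ "$1" -eq 1 ]; then echo "B"              # master healthy
  elif [ "$2" -eq 1 ]; then echo "B1 (promoted)"  # replica takes over
  else                      echo "cluster down"   # slot range uncovered
  fi
}
serving_b_range 0 1   # prints "B1 (promoted)"
serving_b_range 0 0   # prints "cluster down"
```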
II. Building the Redis Cluster
Prepare the servers
I use 4 machines here:
cluster1: 192.168.239.145
cluster2: 192.168.239.146
cluster3: 192.168.239.141
cluster4: 192.168.239.139
Disable the firewall and SELinux first (run on all 4):
[root@cluster1 ~]# systemctl stop firewalld && setenforce 0
1. Install Redis
1.1 Download the source tarball
[root@cluster1 ~]# wget https://github.com/redis/redis/archive/7.2.4.tar.gz
1.2 Create a /data/application directory on each host to hold Redis
[root@cluster1 ~]# mkdir -p /data/application/{7001,7002}
[root@cluster2 ~]# mkdir -p /data/application/{7003,7004}
[root@cluster3 ~]# mkdir -p /data/application/{7005,7006}
[root@cluster4 ~]# mkdir -p /data/application/{8001,8002}  # this host is used later to test adding a master and a slave
1.3 Extract the source tarball and rename the directory
[root@cluster1 ~]# tar xf 7.2.4.tar.gz
[root@cluster1 ~]# mv redis-7.2.4 redis
1.4 Install the build tools
[root@cluster1 ~]# yum -y install gcc make
1.5 Compile
[root@cluster1 ~]# cd /root/redis/
[root@cluster1 redis]# make
1.6 Edit the Redis configuration file
[root@cluster1 ~]# cd /root/redis
[root@cluster1 redis]# cp redis.conf redis.conf.bak  # back up the original
[root@cluster1 redis]# vim redis.conf  # change the following
bind 0.0.0.0                     # listen on all interfaces
daemonize yes                    # run in the background (change no to yes)
timeout 300                      # client idle timeout, in seconds
port 6379                        # port number
dir /data/application/6379/data  # directory for persisted data; must exist -- left at 6379 here so the later cp and sed steps are easy
pidfile /var/run/redis_6379.pid  # pid file
logfile /var/log/redis.log       # log file (shared by both instances on a host; a per-port path would be cleaner)
cluster-enabled yes              # required for cluster mode; adds a cluster bus port at port+10000
cluster-config-file nodes-6379.conf  # cluster state file, maintained by Redis itself
cluster-node-timeout 15000       # node failure-detection timeout, in milliseconds
1.7 Create the directory that will hold the persisted data
[root@cluster1 ~]# mkdir /root/redis/data
1.8 Copy everything under /root/redis into the directories created earlier
cp/scp is used here so Redis does not have to be compiled on every host; compile on each machine instead if you have the time.
[root@cluster1 ~]# cp -rf /root/redis/* /data/application/7001
[root@cluster1 ~]# cp -rf /root/redis/* /data/application/7002
[root@cluster1 ~]# scp -r /root/redis/* 192.168.239.146:/data/application/7003
[root@cluster1 ~]# scp -r /root/redis/* 192.168.239.146:/data/application/7004
[root@cluster1 ~]# scp -r /root/redis/* 192.168.239.141:/data/application/7005
[root@cluster1 ~]# scp -r /root/redis/* 192.168.239.141:/data/application/7006
[root@cluster1 ~]# scp -r /root/redis/* 192.168.239.139:/data/application/8001
[root@cluster1 ~]# scp -r /root/redis/* 192.168.239.139:/data/application/8002
1.9 redis.conf is already edited; only the port needs to change per instance
[root@cluster1 ~]# sed -i s/6379/7001/g /data/application/7001/redis.conf
[root@cluster1 ~]# sed -i s/6379/7002/g /data/application/7002/redis.conf
[root@cluster2 ~]# sed -i s/6379/7003/g /data/application/7003/redis.conf
[root@cluster2 ~]# sed -i s/6379/7004/g /data/application/7004/redis.conf
[root@cluster3 ~]# sed -i s/6379/7005/g /data/application/7005/redis.conf
[root@cluster3 ~]# sed -i s/6379/7006/g /data/application/7006/redis.conf
[root@cluster4 ~]# sed -i s/6379/8001/g /data/application/8001/redis.conf
[root@cluster4 ~]# sed -i s/6379/8002/g /data/application/8002/redis.conf
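The eight sed commands above can be wrapped in a small helper; `patch_conf` below is a hypothetical convenience, run on each host against its own conf paths:

```shell
# Rewrite every 6379 reference (port, dir, pidfile) to the instance port.
patch_conf() {  # usage: patch_conf <path-to-redis.conf> <port>
  sed -i "s/6379/$2/g" "$1"
}
```

For example, on cluster1: `for p in 7001 7002; do patch_conf /data/application/$p/redis.conf $p; done`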
1.10 Start Redis
Only instance 7002 on cluster1 is shown here; start the others the same way.
Note: start from inside the instance directory (here /data/application/7002), since the binary and config are referenced by relative paths.
Ports 7002 (client) and 17002 (cluster bus) should come up.
[root@cluster1 7002]# src/redis-server redis.conf
[root@cluster1 7002]# ss -nplt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=1116,fd=13))
LISTEN 0 128 *:7002 *:* users:(("redis-server",pid=3067,fd=7))
LISTEN 0 128 *:17002 *:* users:(("redis-server",pid=3067,fd=8))
LISTEN 0 128 *:22 *:* users:(("sshd",pid=965,fd=3))
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=1116,fd=14))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=965,fd=4))
[root@cluster1 7002]#
1.11 Create the cluster
Watch the IP-to-port pairing:
[root@cluster1 ~]# /data/application/7001/src/redis-cli --cluster create --cluster-replicas 1 192.168.239.145:7001 192.168.239.145:7002 192.168.239.146:7003 192.168.239.146:7004 192.168.239.141:7005 192.168.239.141:7006
The output looks like this:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.239.146:7004 to 192.168.239.145:7001
Adding replica 192.168.239.141:7006 to 192.168.239.146:7003
Adding replica 192.168.239.145:7002 to 192.168.239.141:7005
M: 409b8508a5ea87fdfe7a29c35ad9ed333798ec15 192.168.239.145:7001
slots:[0-5460] (5461 slots) master
S: 5fb3421f52f1db4d5de9092770a83dfafd91111e 192.168.239.145:7002
replicates d38b84d902ba175258862bf429af1a133127d495
M: f4ddbf650fa050776daf019471a0e51224b2c503 192.168.239.146:7003
slots:[5461-10922] (5462 slots) master
S: 71b2a492596aaf8fefa060f3aa8bdb8dc241e982 192.168.239.146:7004
replicates 409b8508a5ea87fdfe7a29c35ad9ed333798ec15
M: d38b84d902ba175258862bf429af1a133127d495 192.168.239.141:7005
slots:[10923-16383] (5461 slots) master
S: 52f5b9e838339595a2ca66633464a1ca041deee4 192.168.239.141:7006
replicates f4ddbf650fa050776daf019471a0e51224b2c503
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 192.168.239.145:7001)
M: 409b8508a5ea87fdfe7a29c35ad9ed333798ec15 192.168.239.145:7001
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 71b2a492596aaf8fefa060f3aa8bdb8dc241e982 192.168.239.146:7004
slots: (0 slots) slave
replicates 409b8508a5ea87fdfe7a29c35ad9ed333798ec15
S: 5fb3421f52f1db4d5de9092770a83dfafd91111e 192.168.239.145:7002
slots: (0 slots) slave
replicates d38b84d902ba175258862bf429af1a133127d495
M: f4ddbf650fa050776daf019471a0e51224b2c503 192.168.239.146:7003
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: d38b84d902ba175258862bf429af1a133127d495 192.168.239.141:7005
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 52f5b9e838339595a2ca66633464a1ca041deee4 192.168.239.141:7006
slots: (0 slots) slave
replicates f4ddbf650fa050776daf019471a0e51224b2c503
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
1.12 Test the cluster
[root@cluster1 7001]# src/redis-cli -c -h 192.168.239.145 -p 7001  # connect to master1
192.168.239.145:7001> keys *  # list all keys
(empty array)
192.168.239.145:7001> set name xiaoming  # create a key
-> Redirected to slot [5798] located at 192.168.239.146:7003  # stored on master2
OK
192.168.239.146:7003> set job teacher  # now on master2; create another key
-> Redirected to slot [2906] located at 192.168.239.145:7001  # stored on master1
OK
192.168.239.145:7001>
[root@cluster1 7001]# src/redis-cli -c -h 192.168.239.141 -p 7006  # connect to master2's slave
192.168.239.141:7006> keys *  # the key has been replicated
1) "name"
192.168.239.141:7006> get name
-> Redirected to slot [5798] located at 192.168.239.146:7003  # served by master2
"xiaoming"
192.168.239.146:7003>
[root@cluster1 7001]# src/redis-cli -c -h 192.168.239.146 -p 7004  # connect to master1's slave
192.168.239.146:7004> keys *  # the key has been replicated
1) "job"
192.168.239.146:7004> get job
-> Redirected to slot [2906] located at 192.168.239.145:7001  # served by master1
"teacher"
192.168.239.145:7001>
1.13 Simulate a master failure
[root@cluster1 7001]# src/redis-cli -c -h 192.168.239.145 -p 7001 shutdown  # stop master1
[root@cluster1 7001]# src/redis-cli -c -h 192.168.239.146 -p 7004  # connect to master1's former slave
192.168.239.146:7004> keys *  # the keys are still readable; the cluster is healthy
1) "job"
192.168.239.146:7004> get job
"teacher"
192.168.239.146:7004>
[root@cluster1 7001]# src/redis-cli -c -h 192.168.239.146 -p 7003  # connect to master2
192.168.239.146:7003> set age 18  # create a key that lands on 7004, which has been promoted to master
-> Redirected to slot [741] located at 192.168.239.146:7004
OK
192.168.239.146:7004> keys *  # 7004 now holds both keys
1) "age"
2) "job"
192.168.239.146:7004> get age  # the newly created key reads back fine
"18"
192.168.239.146:7004>
Summary: data written across the 3 masters is replicated to each master's slave; when a master goes down, its slave is promoted and takes over reads and writes for that slot range.
1.14 Add a master and a slave node
Start the two instances prepared earlier on cluster4:
[root@cluster4 8001]# src/redis-server redis.conf
[root@cluster4 8001]# ../8002/src/redis-server ../8002/redis.conf
[root@cluster4 8001]# ss -nplt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=1110,fd=13))
LISTEN 0 128 *:8001 *:* users:(("redis-server",pid=1447,fd=7))
LISTEN 0 128 *:8002 *:* users:(("redis-server",pid=1453,fd=7))
LISTEN 0 128 *:18001 *:* users:(("redis-server",pid=1447,fd=8))
LISTEN 0 128 *:18002 *:* users:(("redis-server",pid=1453,fd=8))
LISTEN 0 128 *:22 *:* users:(("sshd",pid=940,fd=3))
LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=1110,fd=14))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=940,fd=4))
[root@cluster4 8001]#
Add the master node:
[root@cluster1 7002]# src/redis-cli -c -h 192.168.239.145 -p 7001 cluster meet 192.168.239.139 8001
OK
Check that it joined:
[root@cluster1 7002]# src/redis-cli -c -h 192.168.239.145 -p 7001
192.168.239.145:7001> cluster nodes
409b8508a5ea87fdfe7a29c35ad9ed333798ec15 192.168.239.145:7001@17001 myself,slave 71b2a492596aaf8fefa060f3aa8bdb8dc241e982 0 1707274235000 7 connected
f4ddbf650fa050776daf019471a0e51224b2c503 192.168.239.146:7003@17003 master - 0 1707274236012 12 connected 5461-10922
52f5b9e838339595a2ca66633464a1ca041deee4 192.168.239.141:7006@17006 slave f4ddbf650fa050776daf019471a0e51224b2c503 0 1707274237527 12 connected
5fb3421f52f1db4d5de9092770a83dfafd91111e 192.168.239.145:7002@17002 master - 0 1707274235910 10 connected 10923-16383
734442f680633d87127cf8c59741317da49a6949 192.168.239.139:8001@18001 master - 0 1707274237000 0 connected
71b2a492596aaf8fefa060f3aa8bdb8dc241e982 192.168.239.146:7004@17004 master - 0 1707274236416 7 connected 0-5460
d38b84d902ba175258862bf429af1a133127d495 192.168.239.141:7005@17005 slave 5fb3421f52f1db4d5de9092770a83dfafd91111e 0 1707274237426 10 connected
192.168.239.145:7001>
Add the slave node:
[root@cluster1 7002]# src/redis-cli -c -h 192.168.239.145 -p 7001 cluster meet 192.168.239.139 8002
OK
# the argument to `cluster replicate` is the node ID of the intended master (8001's ID from `cluster nodes`), not its IP
[root@cluster1 7002]# src/redis-cli -h 192.168.239.139 -p 8002 cluster replicate 734442f680633d87127cf8c59741317da49a6949
OK
Check that it joined:
[root@cluster2 7003]# src/redis-cli -c -h 192.168.239.145 -p 7001
192.168.239.145:7001> cluster nodes
409b8508a5ea87fdfe7a29c35ad9ed333798ec15 192.168.239.145:7001@17001 myself,master - 0 1707275189000 18 connected 0-5460
f4ddbf650fa050776daf019471a0e51224b2c503 192.168.239.146:7003@17003 slave 52f5b9e838339595a2ca66633464a1ca041deee4 0 1707275189335 22 connected
52f5b9e838339595a2ca66633464a1ca041deee4 192.168.239.141:7006@17006 master - 0 1707275188339 22 connected 5461-10922
5fb3421f52f1db4d5de9092770a83dfafd91111e 192.168.239.145:7002@17002 slave d38b84d902ba175258862bf429af1a133127d495 0 1707275189536 23 connected
734442f680633d87127cf8c59741317da49a6949 192.168.239.139:8001@18001 master - 0 1707275189000 13 connected
71b2a492596aaf8fefa060f3aa8bdb8dc241e982 192.168.239.146:7004@17004 slave 409b8508a5ea87fdfe7a29c35ad9ed333798ec15 0 1707275187314 18 connected
d38b84d902ba175258862bf429af1a133127d495 192.168.239.141:7005@17005 master - 0 1707275189133 23 connected 10923-16383
ff9a1e6774361382b1b209317942ddc754cf9b62 192.168.239.139:8002@18002 slave 734442f680633d87127cf8c59741317da49a6949 0 1707275189000 13 connected
192.168.239.145:7001>
1.15 Rebalance the slots automatically
[root@cluster1 7002]# src/redis-cli --cluster rebalance --cluster-threshold 1 --cluster-use-empty-masters 192.168.239.145:7001
[root@cluster2 7003]# src/redis-cli -c -h 192.168.239.145 -p 7001
192.168.239.145:7001> cluster nodes
409b8508a5ea87fdfe7a29c35ad9ed333798ec15 192.168.239.145:7001@17001 myself,slave 71b2a492596aaf8fefa060f3aa8bdb8dc241e982 0 1707275487000 25 connected
f4ddbf650fa050776daf019471a0e51224b2c503 192.168.239.146:7003@17003 slave 52f5b9e838339595a2ca66633464a1ca041deee4 0 1707275488559 22 connected
52f5b9e838339595a2ca66633464a1ca041deee4 192.168.239.141:7006@17006 master - 0 1707275488000 22 connected 6827-10922
5fb3421f52f1db4d5de9092770a83dfafd91111e 192.168.239.145:7002@17002 slave d38b84d902ba175258862bf429af1a133127d495 0 1707275487000 23 connected
734442f680633d87127cf8c59741317da49a6949 192.168.239.139:8001@18001 master - 0 1707275488960 24 connected 0-1364 5461-6826 10923-12287
71b2a492596aaf8fefa060f3aa8bdb8dc241e982 192.168.239.146:7004@17004 master - 0 1707275487000 25 connected 1365-5460
d38b84d902ba175258862bf429af1a133127d495 192.168.239.141:7005@17005 master - 0 1707275488000 23 connected 12288-16383
ff9a1e6774361382b1b209317942ddc754cf9b62 192.168.239.139:8002@18002 slave 734442f680633d87127cf8c59741317da49a6949 0 1707275487546 24 connected
192.168.239.145:7001>
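A quick sanity check on the output above: the three ranges now owned by 192.168.239.139:8001 add up to exactly a quarter of the slot space, which is what an even rebalance across 4 masters should produce:

```shell
# Count the slots in 0-1364, 5461-6826 and 10923-12287 (ranges from the output above)
slots=$(( (1364 - 0 + 1) + (6826 - 5461 + 1) + (12287 - 10923 + 1) ))
echo "$slots"          # prints 4096
echo $(( 16384 / 4 ))  # prints 4096: the even share per master
```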
1.16 Remove nodes
A node's slots must be reclaimed before the node itself can be removed.
Reclaim the slots:
[root@cluster1 7002]# src/redis-cli -h 192.168.239.145 -p 7001 cluster reset
Syntax: src/redis-cli -h <IP of the node being removed> -p <port> cluster reset
(Note: resetting a master drops its slot assignment from the cluster; in production you would reshard its slots to other masters first.)
Take the node out of the cluster:
Syntax: src/redis-cli --cluster del-node <ip:port of any node still in the cluster> <ID of the node being removed>
[root@cluster1 7002]# src/redis-cli --cluster del-node 192.168.239.145:7002 409b8508a5ea87fdfe7a29c35ad9ed333798ec15
>>> Removing node 409b8508a5ea87fdfe7a29c35ad9ed333798ec15 from cluster 192.168.239.145:7002
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.