1. Introduction to Redis Cluster
Redis Cluster is a facility that shards data across multiple Redis nodes. It does not support commands that operate on multiple keys in different hash slots, because serving them would require moving data between nodes; that cannot match single-instance Redis performance and, under heavy load, may lead to unpredictable errors. Through partitioning, Redis Cluster provides a degree of availability: in practice, the cluster can keep serving commands even when some node goes down or becomes unreachable.
2. Advantages of Redis Cluster
Data is automatically split across the nodes, and the cluster can continue to process commands when some of its nodes fail or become unreachable.
3. How Redis Cluster Is Implemented
4. Redis Cluster Data Sharding
Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots. A Redis cluster has 16384 hash slots. To decide which slot a key is placed in, the key is run through CRC16 and the result is taken modulo 16384. Each node in the cluster is responsible for a portion of the hash slots. Taking a cluster of 3 nodes as an example:
Node A holds hash slots 0 to 5500, node B holds 5501 to 11000, and node C holds 11001 to 16383. Nodes can be added and removed.
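The slot calculation described above can be sketched in Python. Redis uses the CRC16-CCITT (XModem) variant, and when a key contains a `{...}` hash tag, only the tag content is hashed. This is an illustration, not the C implementation Redis actually ships:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty tag: hash only the tag content
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_hash_slot("key1"))  # 9189, matching the redirect shown in section 6.6
```

Because keys sharing a hash tag land in the same slot (`key_hash_slot("{user}.a") == key_hash_slot("{user}.b")`), hash tags are how multi-key operations can still be used in a cluster.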
Adding or removing nodes does not require stopping the service. For example:
To add a new node D, some slots must be moved from nodes A, B, and C over to D. To remove node A, its slots must first be moved to nodes B and C; once A holds no slots, it can be removed from the cluster.
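As a rough illustration of the bookkeeping involved when the hypothetical node D joins (the actual slot migration is performed with `redis-cli --cluster reshard`; this only shows the arithmetic):

```python
# Slot counts from the 3-node layout above (0-5500, 5501-11000, 11001-16383).
slots = {"A": 5501, "B": 5500, "C": 5383}
target = 16384 // (len(slots) + 1)      # even share once D joins: 4096 slots
# How many slots each existing master gives up to the new node D.
moved = {node: count - target for node, count in slots.items()}
slots_for_d = sum(moved.values())
print(target, moved, slots_for_d)
```

Removing a node is the same process in reverse: its slot count is redistributed over the remaining masters until it reaches zero.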
5. Redis Cluster's Master-Replica Model
Suppose the cluster has nodes A, B, and C. If node B fails, the entire cluster becomes unavailable because the slot range 5501-11000 is missing. By giving each master a replica (A1, B1, C1), the cluster consists of three master nodes and three slave nodes; when node B fails, the cluster elects B1 as the new master and continues serving. Only if both B and B1 fail does the cluster become unavailable.
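The failover behaviour described here can be modelled with a small sketch (toy data structures only, not Redis's actual election protocol):

```python
# Toy model of cluster failover: masters own slot ranges, each has replicas.
masters = {"A": (0, 5500), "B": (5501, 11000), "C": (11001, 16383)}
replicas = {"A": ["A1"], "B": ["B1"], "C": ["C1"]}

def fail_master(name):
    """On master failure, promote a replica if one exists; otherwise the
    master's slot range becomes uncovered and the cluster goes unavailable."""
    if replicas.get(name):
        promoted = replicas[name].pop(0)
        masters[promoted] = masters.pop(name)   # promoted replica inherits the slots
        replicas[promoted] = replicas.pop(name)
        return promoted
    masters.pop(name)                           # slots lost: cluster unavailable
    return None

print(fail_master("B"))    # B1 takes over slots 5501-11000
print(fail_master("B1"))   # no replica left: None, slot coverage is lost
```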
6. Quick Redis Cluster Deployment
6.1 Environment
master1 192.168.10.10
master2 192.168.10.20
master3 192.168.10.30
slave1 192.168.10.40
slave2 192.168.10.50
slave3 192.168.10.60
6.2 Deploying Redis
tar zxvf redis-5.0.4.tar.gz
cd redis-5.0.4/
make
make PREFIX=/usr/local/redis install
ln -s /usr/local/redis/bin/* /usr/local/bin/
6.3 Edit the configuration file on all six machines
vi /etc/redis/6379.conf
bind 192.168.10.10                    ### replace the default 127.0.0.1 with each host's own IP
protected-mode no                     ### disable protected mode
daemonize yes                         ### run as a background daemon
cluster-enabled yes                   ### enable cluster mode
cluster-config-file nodes-6379.conf   ### name of the cluster config file
cluster-node-timeout 15000            ### cluster node timeout (milliseconds)
appendonly yes                        ### enable AOF persistence
6.4 Install the cluster dependencies
## Copy redis-3.2.0.gem to /opt and run:
yum -y install ruby rubygems
cd /opt
[root@master1 opt]# gem install redis-3.2.0.gem
Successfully installed redis-3.2.0
Parsing documentation for redis-3.2.0
Installing ri documentation for redis-3.2.0
1 gem installed
6.5 Creating the cluster
## Six machines, three master nodes; --cluster-replicas 1 means one replica per master
[root@master1 src]# redis-cli --cluster create 192.168.10.10:6379 192.168.10.20:6379 192.168.10.30:6379 192.168.10.40:6379 192.168.10.50:6379 192.168.10.60:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383                            ## hash slot allocation
Adding replica 192.168.10.50:6379 to 192.168.10.10:6379     ## master/replica pairing
Adding replica 192.168.10.60:6379 to 192.168.10.20:6379
Adding replica 192.168.10.40:6379 to 192.168.10.30:6379
M: 4233e5c4348ba1008d95571502366eaea26624e0 192.168.10.10:6379
   slots:[0-5460] (5461 slots) master
M: 38fa1960ecd23df0b59867eca516338bd0d18af8 192.168.10.20:6379
   slots:[5461-10922] (5462 slots) master
M: b2e064fa103d6924008dd61950e13c7140735478 192.168.10.30:6379
   slots:[10923-16383] (5461 slots) master
S: 46dc9515522176238c8c49863df825d184d5af93 192.168.10.40:6379
   replicates b2e064fa103d6924008dd61950e13c7140735478
S: 4b42f3977e099aa9c550e99eb8f13dd23d6874ef 192.168.10.50:6379
   replicates 4233e5c4348ba1008d95571502366eaea26624e0
S: 0f18b2ef971b49ea58dce9e68d7f7350fc743e39 192.168.10.60:6379
   replicates 38fa1960ecd23df0b59867eca516338bd0d18af8
Can I set the above configuration? (type 'yes' to accept): yes   ## interactive prompt: type yes to continue
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 192.168.10.10:6379)
M: 4233e5c4348ba1008d95571502366eaea26624e0 192.168.10.10:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 46dc9515522176238c8c49863df825d184d5af93 192.168.10.40:6379
   slots: (0 slots) slave
   replicates b2e064fa103d6924008dd61950e13c7140735478
S: 4b42f3977e099aa9c550e99eb8f13dd23d6874ef 192.168.10.50:6379
   slots: (0 slots) slave
   replicates 4233e5c4348ba1008d95571502366eaea26624e0
M: b2e064fa103d6924008dd61950e13c7140735478 192.168.10.30:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0f18b2ef971b49ea58dce9e68d7f7350fc743e39 192.168.10.60:6379
   slots: (0 slots) slave
   replicates 38fa1960ecd23df0b59867eca516338bd0d18af8
M: 38fa1960ecd23df0b59867eca516338bd0d18af8 192.168.10.20:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered
6.6 Verification
[root@master1 ~]# redis-cli -h 192.168.10.20 -p 6379 -c
192.168.10.20:6379> set key1 aaa
OK
192.168.10.20:6379> quit
[root@master1 ~]# redis-cli -h 192.168.10.30 -p 6379 -c
192.168.10.30:6379> get key1
-> Redirected to slot [9189] located at 192.168.10.20:6379
"aaa"
192.168.10.20:6379>
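The redirect above happens because key1 hashes to slot 9189, which is owned by 192.168.10.20. A minimal sketch of the routing a cluster-aware client performs, with the slot map copied from the output in 6.5:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis uses for slot hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

# Slot ranges assigned when the cluster was created in section 6.5.
SLOT_MAP = [
    (0, 5460, "192.168.10.10:6379"),
    (5461, 10922, "192.168.10.20:6379"),
    (10923, 16383, "192.168.10.30:6379"),
]

def node_for(key: str) -> str:
    """Return the address of the master that owns this key's hash slot."""
    slot = crc16(key.encode()) % 16384
    for lo, hi, addr in SLOT_MAP:
        if lo <= slot <= hi:
            return addr
    raise RuntimeError("slot not covered")

print(node_for("key1"))  # slot 9189 -> 192.168.10.20:6379
```

With `redis-cli -c`, the client follows MOVED redirects automatically, which is exactly why the prompt switches to 192.168.10.20 after the `get`.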
## Check the cluster state
[root@master1 ~]# redis-cli -h 192.168.10.10 -p 6379 -c
192.168.10.10:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:1160
cluster_stats_messages_pong_sent:1193
cluster_stats_messages_sent:2353
cluster_stats_messages_ping_received:1188
cluster_stats_messages_pong_received:1160
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:2353
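The `cluster info` reply is simple key:value text, so a health check can parse it directly. A minimal sketch using the fields shown above:

```python
# Parse a `cluster info` reply (excerpt copied from the output above) into a dict.
reply = """cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_known_nodes:6
cluster_size:3"""

info = dict(line.split(":", 1) for line in reply.strip().splitlines())
# A healthy cluster reports state ok and all 16384 slots assigned.
healthy = info["cluster_state"] == "ok" and info["cluster_slots_assigned"] == "16384"
print(healthy)  # True
```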
## List the cluster nodes
192.168.10.10:6379> cluster nodes
46dc9515522176238c8c49863df825d184d5af93 192.168.10.40:6379@16379 slave b2e064fa103d6924008dd61950e13c7140735478 0 1608386328890 4 connected
4b42f3977e099aa9c550e99eb8f13dd23d6874ef 192.168.10.50:6379@16379 slave 4233e5c4348ba1008d95571502366eaea26624e0 0 1608386326873 5 connected
b2e064fa103d6924008dd61950e13c7140735478 192.168.10.30:6379@16379 master - 0 1608386327882 3 connected 10923-16383
0f18b2ef971b49ea58dce9e68d7f7350fc743e39 192.168.10.60:6379@16379 slave 38fa1960ecd23df0b59867eca516338bd0d18af8 0 1608386327000 6 connected
38fa1960ecd23df0b59867eca516338bd0d18af8 192.168.10.20:6379@16379 master - 0 1608386326000 2 connected 5461-10922
4233e5c4348ba1008d95571502366eaea26624e0 192.168.10.10:6379@16379 myself,master - 0 1608386324000 1 connected 0-5460
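Each `cluster nodes` line is space-separated: node id, address, flags, master id, ping/pong timestamps, config epoch, link state, and finally the slot ranges. A sketch that checks the master lines above for full slot coverage:

```python
# Master lines copied from the `cluster nodes` output above.
lines = [
    "b2e064fa103d6924008dd61950e13c7140735478 192.168.10.30:6379@16379 master - 0 1608386327882 3 connected 10923-16383",
    "38fa1960ecd23df0b59867eca516338bd0d18af8 192.168.10.20:6379@16379 master - 0 1608386326000 2 connected 5461-10922",
    "4233e5c4348ba1008d95571502366eaea26624e0 192.168.10.10:6379@16379 myself,master - 0 1608386324000 1 connected 0-5460",
]

covered = 0
for line in lines:
    fields = line.split()
    for rng in fields[8:]:               # slot ranges follow the link-state field
        lo, _, hi = rng.partition("-")   # single slots appear without a "-"
        covered += int(hi or lo) - int(lo) + 1
print(covered)  # 16384: every slot is served, matching "[OK] All 16384 slots covered"
```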