Redis Cluster Setup, Testing, and Maintenance

The Sentinel mechanism solves the problem of Redis high availability: when the master fails, a slave is automatically promoted to master, so the Redis service remains usable. It does not, however, solve the single-machine write bottleneck: the write performance of a single Redis instance is limited by that machine's memory size, concurrency limits, NIC speed, and similar factors. For this reason, from Redis 3.0 onward the Redis project provides the decentralized Redis Cluster. In this masterless cluster, every node stores its own data plus the state of the whole cluster, and every node is connected to every other node. Its characteristics are:
1: All Redis nodes are interconnected via a PING mechanism.
2: A node is only considered truly failed once more than half of the nodes in the cluster detect the failure.
3: Clients connect to Redis directly, without a proxy; the application must be configured with the IPs of all Redis servers.
4: Redis Cluster maps all Redis nodes onto slots 0-16383; reads and writes must be performed on the Redis node that owns the slot, so adding Redis nodes scales write concurrency accordingly.
5: Redis Cluster pre-allocates 16384 slots. When a key-value pair is written to the cluster, CRC16(key) mod 16384 decides which slot, and therefore which Redis node, the key is written to, which effectively removes the single-machine bottleneck.
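The slot mapping described above can be sketched in Java. This is an illustrative implementation (class and method names are my own) of the CRC16-XMODEM variant that Redis Cluster uses, followed by mod 16384; for simplicity it ignores hash tags (`{...}`), which real Redis Cluster also honors:

```java
// Illustrative sketch of Redis Cluster's key-to-slot mapping:
// slot = CRC16(key) mod 16384, using the CRC16-XMODEM variant
// (polynomial 0x1021, initial value 0). Hash tags ({...}) are ignored.
public class SlotDemo {

    // Bitwise CRC16-XMODEM over the key's bytes.
    static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF; // keep it a 16-bit value
            }
        }
        return crc;
    }

    static int keySlot(String key) {
        return crc16(key.getBytes()) % 16384;
    }

    public static void main(String[] args) {
        // "123456789" is the standard CRC16-XMODEM check string: 0x31C3 = 12739.
        System.out.println(keySlot("123456789")); // 12739
    }
}
```

Two keys that hash to the same slot always live on the same node, which is why multi-key operations in a cluster only work within one slot.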

Setup

1. Install Redis
Prepare six virtual machines and install Redis on each (six nodes are recommended for production).
IP addresses:

10.35.78.24:6379
10.35.78.25:6379
10.35.78.26:6379
10.35.78.27:6379
10.35.78.29:6379
10.35.78.79:6379

Every Redis node should use the same hardware configuration, the same password, and the same Redis version.
Parameters that must be enabled on every node:

cluster-enabled yes #required: enables cluster mode; once enabled, the redis process title shows "cluster"
cluster-config-file nodes-6380.conf #this file is created and maintained automatically by redis cluster itself
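Building on the two required parameters, a minimal per-node redis.conf fragment might look like the following. The password value and `cluster-node-timeout` line are illustrative assumptions, consistent with "same password on every node" above:

```conf
port 6379
bind 0.0.0.0
daemonize yes
requirepass 123456          # illustrative; must be identical on all nodes
masterauth 123456           # same password, used when syncing from the master
cluster-enabled yes         # required: run this instance in cluster mode
cluster-config-file nodes-6379.conf   # created and maintained by the cluster
cluster-node-timeout 15000  # failure-detection timeout in ms (Redis default)
```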

2. Install redis-trib, the redis cluster management tool

# tar zxvf ruby-2.5.5.tar.gz
# cd ruby-2.5.5
# ./configure 
# make -j 2
# make install
# gem install -l redis-4.2.5.gem
# redis-trib.rb
Usage: redis-trib <command> <options> <arguments ...>
.....

3. Set the Redis password used by the cluster tool
Edit the client defaults and change `password: nil` to the password shared by all Redis nodes:

vim /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.2.5/lib/redis/client.rb

password: nil      # default; change to your Redis password

4. Create the cluster:

# redis-trib.rb create --replicas 1 10.35.78.26:6379 10.35.78.25:6379 10.35.78.27:6379 10.35.78.79:6379 10.35.78.24:6379 10.35.78.29:6379
# redis-trib.rb  check 10.35.78.79:6379
>>> Performing Cluster Check (using node 10.35.78.79:6379)  ### any node in the cluster can be used for check
M: f30d3ce3b867a01c1723c6817227af580b149166 10.35.78.79:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 9052b47638642703b4b6e7da5bc6d26f369be47b 10.35.78.24:6379
   slots: (0 slots) slave
   replicates 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5
S: a3ab9a42ee6d11594cbbd21330cf04bfb1ee2fdf 10.35.78.27:6379
   slots: (0 slots) slave
   replicates 166f14c832283749dcb95d7d8ea033026b0338b2
M: 166f14c832283749dcb95d7d8ea033026b0338b2 10.35.78.29:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5 10.35.78.25:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 6dedeae06d6bf9d1c879a9d498058dcae5a6dc36 10.35.78.26:6379
   slots: (0 slots) slave
   replicates f30d3ce3b867a01c1723c6817227af580b149166
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# redis-trib.rb  info 10.35.78.79:6379
10.35.78.79:6379 (f30d3ce3...) -> 53787 keys | 5461 slots | 1 slaves.
10.35.78.29:6379 (166f14c8...) -> 53788 keys | 5461 slots | 1 slaves.
10.35.78.25:6379 (4779d8af...) -> 53726 keys | 5462 slots | 1 slaves.
[OK] 161301 keys in 3 masters.
9.85 keys per slot on average.

The resulting cluster layout:
masters: 10.35.78.79, 10.35.78.29, 10.35.78.25
corresponding slaves: 10.35.78.26, 10.35.78.27, 10.35.78.24
A Redis slave must never run on the same server as its master; replication must cross hosts, so that a single host failure cannot take out both the master and its backup. If a slave does end up on the same Redis node as its master, repeat the steps above to reassign slaves until every pair is cross-host.
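The cross-host rule above can be checked mechanically. A minimal sketch follows; the master-to-slave pairs are the ones from the check output above, but the class and method names are my own, and in practice you would parse the pairs from `redis-trib.rb check` output:

```java
import java.util.Map;

// Checks that no slave lives on the same host as its master
// (here host == IP, since each node runs a single instance).
public class CrossBackupCheck {

    static boolean isCrossHost(Map<String, String> masterToSlave) {
        for (Map.Entry<String, String> e : masterToSlave.entrySet()) {
            if (e.getKey().equals(e.getValue())) {
                return false; // master and its slave share a host: not allowed
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // master IP -> slave IP, as reported by redis-trib.rb check above
        Map<String, String> pairs = Map.of(
            "10.35.78.79", "10.35.78.26",
            "10.35.78.29", "10.35.78.27",
            "10.35.78.25", "10.35.78.24");
        System.out.println(isCrossHost(pairs)); // true
    }
}
```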

Testing

1. Test writes with the Jedis client:

package redis;

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

// Writes one million keys into the redis cluster; JedisCluster routes
// each key to the node that owns its slot.
public class test2 {
    public static void main(String[] args) throws IOException {
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("10.35.78.24", 6379));
        nodes.add(new HostAndPort("10.35.78.25", 6379));
        nodes.add(new HostAndPort("10.35.78.26", 6379));
        nodes.add(new HostAndPort("10.35.78.27", 6379));
        nodes.add(new HostAndPort("10.35.78.29", 6379));
        nodes.add(new HostAndPort("10.35.78.79", 6379));

        JedisCluster cluster = new JedisCluster(nodes);

        for (int i = 0; i < 1000000; i++) {
            String key = "cluster-test" + i;
            String value = "my jedis" + i;
            cluster.set(key, value);
            System.out.println(i);
        }
        cluster.close();
    }
}

Log in to each node and run keys *: the keys are indeed written in shards across the masters.

2. While the Java program is running, disconnect one master, 10.35.78.79, from the network.
Original cluster layout:

masters: 10.35.78.79, 10.35.78.29, 10.35.78.25
corresponding slaves: 10.35.78.26, 10.35.78.27, 10.35.78.24
After the Redis node at 10.35.78.79 goes down, 10.35.78.26 indeed changes role and is promoted to master:
[root@RHEL1 ~]# redis-trib.rb  check  10.35.78.26:6379
>>> Performing Cluster Check (using node 10.35.78.26:6379)
M: 6dedeae06d6bf9d1c879a9d498058dcae5a6dc36 10.35.78.26:6379
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5 10.35.78.25:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 9052b47638642703b4b6e7da5bc6d26f369be47b 10.35.78.24:6379
   slots: (0 slots) slave
   replicates 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5
S: a3ab9a42ee6d11594cbbd21330cf04bfb1ee2fdf 10.35.78.27:6379
   slots: (0 slots) slave
   replicates 166f14c832283749dcb95d7d8ea033026b0338b2
M: 166f14c832283749dcb95d7d8ea033026b0338b2 10.35.78.29:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Restart the Java program: writes still succeed. Now take master 10.35.78.26 down as well; note that this master no longer has a slave.
Check the cluster state:

[root@RHEL1 ~]# redis-trib.rb  check  10.35.78.27:6379
>>> Performing Cluster Check (using node 10.35.78.27:6379)
S: a3ab9a42ee6d11594cbbd21330cf04bfb1ee2fdf 10.35.78.27:6379
   slots: (0 slots) slave
   replicates 166f14c832283749dcb95d7d8ea033026b0338b2
M: 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5 10.35.78.25:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 9052b47638642703b4b6e7da5bc6d26f369be47b 10.35.78.24:6379
   slots: (0 slots) slave
   replicates 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5
M: 166f14c832283749dcb95d7d8ea033026b0338b2 10.35.78.29:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes. (## slots are no longer fully covered: the cluster is effectively dead)
Run the Java program again; it reports that the cluster is down:
Exception in thread "main" redis.clients.jedis.exceptions.JedisClusterException: CLUSTERDOWN The cluster is down
	at redis.clients.jedis.Protocol.processError(Protocol.java:121)
	at redis.clients.jedis.Protocol.process(Protocol.java:161)
	at redis.clients.jedis.Protocol.read(Protocol.java:215)
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:340)
	at redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:239)
	at redis.clients.jedis.Jedis.set(Jedis.java:121)
	at redis.clients.jedis.JedisCluster$1.execute(JedisCluster.java:101)
	at redis.clients.jedis.JedisCluster$1.execute(JedisCluster.java:98)
	at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:120)
	at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31)
	at redis.clients.jedis.JedisCluster.set(JedisCluster.java:103)
	at redis.test2.main(test2.java:34)

Maintenance

Cluster maintenance: dynamically adding nodes

Prepare two additional servers:
10.35.78.171
10.35.78.179

# redis-trib.rb add-node 10.35.78.171:6379 10.35.78.25:6379
# redis-trib.rb add-node 10.35.78.179:6379 10.35.78.25:6379
# redis-trib.rb  check 10.35.78.25:6379
>>> Performing Cluster Check (using node 10.35.78.25:6379)
M: 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5 10.35.78.25:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: f30d3ce3b867a01c1723c6817227af580b149166 10.35.78.79:6379
   slots: (0 slots) slave
   replicates 6dedeae06d6bf9d1c879a9d498058dcae5a6dc36
M: 6dedeae06d6bf9d1c879a9d498058dcae5a6dc36 10.35.78.26:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: a3ab9a42ee6d11594cbbd21330cf04bfb1ee2fdf 10.35.78.27:6379
   slots: (0 slots) slave
   replicates 166f14c832283749dcb95d7d8ea033026b0338b2
M: 38b93e4b068a5151adfd3dc5e0864cff0347890c 10.35.78.179:6379  ##### .171 and .179 join as masters by default, with no slots assigned
   slots: (0 slots) master
   0 additional replica(s)
M: 166f14c832283749dcb95d7d8ea033026b0338b2 10.35.78.29:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: e5f05de6d5c20c684ec102c6dc53264903620d9a 10.35.78.171:6379
   slots: (0 slots) master
   0 additional replica(s)
S: 9052b47638642703b4b6e7da5bc6d26f369be47b 10.35.78.24:6379
   slots: (0 slots) slave
   replicates 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Back up the data (done in the application via Jedis), flush it, then allocate slots to the new node:
# redis-trib.rb  reshard 10.35.78.171:6379
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? e5f05de6d5c20c684ec102c6dc53264903620d9a
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all

[root@RHEL1 ~]# redis-trib.rb  check 10.35.78.25:6379
>>> Performing Cluster Check (using node 10.35.78.25:6379)
M: 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5 10.35.78.25:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
S: f30d3ce3b867a01c1723c6817227af580b149166 10.35.78.79:6379
   slots: (0 slots) slave
   replicates 6dedeae06d6bf9d1c879a9d498058dcae5a6dc36
M: 6dedeae06d6bf9d1c879a9d498058dcae5a6dc36 10.35.78.26:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: a3ab9a42ee6d11594cbbd21330cf04bfb1ee2fdf 10.35.78.27:6379
   slots: (0 slots) slave
   replicates 166f14c832283749dcb95d7d8ea033026b0338b2
M: 38b93e4b068a5151adfd3dc5e0864cff0347890c 10.35.78.179:6379
   slots: (0 slots) master
   0 additional replica(s)
M: 166f14c832283749dcb95d7d8ea033026b0338b2 10.35.78.29:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: e5f05de6d5c20c684ec102c6dc53264903620d9a 10.35.78.171:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master  ## .171's 4096 slots come from the other three masters, so they are not contiguous
   0 additional replica(s)
S: 9052b47638642703b4b6e7da5bc6d26f369be47b 10.35.78.24:6379
   slots: (0 slots) slave
   replicates 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

On 10.35.78.179 (make it the slave of the new master 10.35.78.171):
127.0.0.1:6379> CLUSTER replicate e5f05de6d5c20c684ec102c6dc53264903620d9a

[root@RHEL1 ~]# redis-trib.rb  info  10.35.78.25:6379   (running the Java program again shows that write concurrency has indeed improved)
10.35.78.25:6379 (4779d8af...) -> 0 keys | 4096 slots | 1 slaves.
10.35.78.26:6379 (6dedeae0...) -> 0 keys | 4096 slots | 1 slaves.
10.35.78.29:6379 (166f14c8...) -> 0 keys | 4096 slots | 1 slaves.
10.35.78.171:6379 (e5f05de6...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
Cluster maintenance: dynamically deleting nodes

Adding a node means first joining it to the cluster and then assigning it slots; deleting a node is the exact reverse: first migrate the slots on the Redis node being removed to the other Redis nodes in the cluster, then delete it. If a node's slots have not been fully migrated away, deleting it fails with a message that it still holds data.
[root@RHEL1 ~]# redis-trib.rb reshard 10.35.78.25:6379
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 4779d8af1e60c5df0e0a7ea4cc85b72d32935ce5
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:e5f05de6d5c20c684ec102c6dc53264903620d9a   (the .171 node)
Source node #2:done

……after several rounds of migration:

[root@RHEL1 ~]# redis-trib.rb  info  10.35.78.25:6379
10.35.78.25:6379 (4779d8af...) -> 0 keys | 4694 slots | 1 slaves.
10.35.78.26:6379 (6dedeae0...) -> 0 keys | 6570 slots | 1 slaves.
10.35.78.29:6379 (166f14c8...) -> 0 keys | 5120 slots | 2 slaves.
10.35.78.171:6379 (e5f05de6...) -> 0 keys | 0 slots | 0 slaves. ## once a master's slots are emptied, its slave automatically becomes the slave of another master; a slave holds no slots and can be deleted directly
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

# redis-trib.rb  del-node 10.35.78.25:6379 e5f05de6d5c20c684ec102c6dc53264903620d9a
>>> Removing node e5f05de6d5c20c684ec102c6dc53264903620d9a from cluster 10.35.78.25:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Cluster maintenance: dynamically replacing a node (i.e. retiring an old server)
Method 1: add a new node to the cluster, have the old node hand over its slots (it must be emptied of data first, otherwise the handover fails), then delete the old node.
Method 2: add an extra slave, let it sync the data, then take the old master down and remove it; the new machine's role changes from slave to master.
