Redis Cluster Deployment and Testing

Preface

  This article is an assignment for the Lagou high-salary training camp: build a three-master, three-slave Redis cluster, and after it is up, add one more master node and one more slave node.

1 Environment

1.1 Software versions

(1) Virtual machine: VMware 15
(2) Operating system: CentOS 7
(3) Redis version: redis-5.0.7

1.2 Node layout

Redis node    IP                 Port    Role
master1       192.168.122.128    6379    Master 1
master2       192.168.122.129    6379    Master 2
master3       192.168.122.130    6379    Master 3
master4       192.168.122.130    6479    Master added after the cluster is built
slave1        192.168.122.128    6380    Slave 1
slave2        192.168.122.129    6380    Slave 2
slave3        192.168.122.130    6380    Slave 3
slave4        192.168.122.130    6480    Slave added after the cluster is built

2 Building the cluster

2.1 Installing Redis on a single node

  (1) On node 192.168.122.129, download the source package, extract it, and build and install it:

mkdir -p /opt/redis/redis-cluster/master   ## install location for the master instance
mkdir -p /opt/redis/redis-cluster/slave    ## install location for the slave instance
cd /opt/redis
wget http://download.redis.io/releases/redis-5.0.7.tar.gz   ## download the source package
tar -zxvf redis-5.0.7.tar.gz                                ## extract it
cd redis-5.0.7
make && make PREFIX=/opt/redis/redis-cluster/master install
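
  To confirm the binaries were installed where expected, an optional quick check (the reported version should match the 5.0.7 source just built):

/opt/redis/redis-cluster/master/bin/redis-server --version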

2.2 Editing the configuration file

  Copy redis.conf from the Redis source directory into master/bin, then edit the copy:

scp redis.conf /opt/redis/redis-cluster/master/bin
cd /opt/redis/redis-cluster/master/bin
vim redis.conf
#bind 127.0.0.1          ## comment out so the node listens on all interfaces
protected-mode no        ## allow connections from other hosts
daemonize yes            ## run as a background daemon
cluster-enabled yes      ## start the instance in cluster mode
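
  Two more cluster-related directives are worth knowing about; the values shown below are the stock defaults rather than something this setup strictly requires:

cluster-config-file nodes.conf   ## per-node state file written by Redis itself; do not edit it by hand
cluster-node-timeout 15000       ## milliseconds of unreachability before a node is flagged as failing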

2.3 Copying the master directory to slave

  Copy the master directory to slave, then edit the slave's redis.conf and change the port to 6380:

cd /opt/redis/redis-cluster
scp -r master/ slave/
cd slave/bin
vim redis.conf
port 6380

2.4 Copying the configured Redis to the other machines

  Here I simply reuse the rsync-script helper from an earlier lesson to push the directory to the other machines (a plain rsync alternative is sketched after the commands):

cd /opt/redis/
rsync-script redis-cluster
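
  If you do not have that helper script, plain rsync over SSH does the same job; a minimal sketch, assuming root SSH access from the build machine (192.168.122.129) and that /opt/redis already exists on the two target nodes:

rsync -av /opt/redis/redis-cluster/ root@192.168.122.128:/opt/redis/redis-cluster/
rsync -av /opt/redis/redis-cluster/ root@192.168.122.130:/opt/redis/redis-cluster/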

2.5 Starting the nodes

  Run the following on each machine:

cd /opt/redis/redis-cluster/master/bin
./redis-server redis.conf
cd /opt/redis/redis-cluster/slave/bin
./redis-server redis.conf
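
  An optional quick check that both instances on a machine started in cluster mode (the process title normally carries a [cluster] marker when cluster-enabled is on):

ps -ef | grep redis-server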

2.6 Creating the cluster

  On node 192.168.122.128, run:

cd /opt/redis/redis-cluster/master/bin
./redis-cli --cluster create 192.168.122.128:6379 192.168.122.129:6379 192.168.122.130:6379 192.168.122.128:6380 192.168.122.129:6380 192.168.122.130:6380 --cluster-replicas 1

  Output:

[root@mysql-master bin]# ./redis-cli --cluster create 192.168.122.128:6379 192.168.122.129:6379 192.168.122.130:6379 192.168.122.128:6380 192.168.122.129:6380 192.168.122.130:6380 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.122.129:6380 to 192.168.122.128:6379
Adding replica 192.168.122.130:6380 to 192.168.122.129:6379
Adding replica 192.168.122.128:6380 to 192.168.122.130:6379
M: 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379
   slots:[0-5460] (5461 slots) master
M: 655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379
   slots:[5461-10922] (5462 slots) master
M: d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379
   slots:[10923-16383] (5461 slots) master
S: 880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380
   replicates d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9
S: 51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380
   replicates 0d41f47d99727e9782e7e812252c5a2d6bedf3ff
S: 359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380
   replicates 655a430bfa4f8cee938aeb994999466e703ccfd7
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.122.128:6379)
M: 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380
   slots: (0 slots) slave
   replicates 0d41f47d99727e9782e7e812252c5a2d6bedf3ff
S: 359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380
   slots: (0 slots) slave
   replicates 655a430bfa4f8cee938aeb994999466e703ccfd7
M: d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380
   slots: (0 slots) slave
   replicates d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9
M: 655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

2.7 Logging in to Redis and checking the result

[root@mysql-master bin]# cd /opt/redis/redis-cluster/master/bin
[root@mysql-master bin]# ./redis-cli -h 192.168.122.128 -p 6379 -c
192.168.122.128:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:333
cluster_stats_messages_pong_sent:335
cluster_stats_messages_sent:668
cluster_stats_messages_ping_received:330
cluster_stats_messages_pong_received:333
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:668

192.168.122.128:6379> cluster nodes
51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380@16380 slave 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 0 1599379774000 5 connected
359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380@16380 slave 655a430bfa4f8cee938aeb994999466e703ccfd7 0 1599379774000 6 connected
d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379@16379 master - 0 1599379775000 3 connected 12256-16383
880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380@16380 slave d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 0 1599379774577 4 connected
0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379@16379 myself,master - 0 1599379772000 1 connected 1333-5460
655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379@16379 master - 0 1599379775892 2 connected 6795-10922
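
  As a simple smoke test, you can write and read a key over the same connection (the key and value here are only illustrative); -c makes redis-cli follow MOVED redirections, so which master ends up holding the key depends on its hash slot:

192.168.122.128:6379> set name:test lagou
192.168.122.128:6379> get name:test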

3 Adding a new master and slave to the cluster

3.1 Copying the Redis directories for the new nodes

  On 192.168.122.130, make a copy of the existing Redis environment:

cd /opt
scp -r redis/ redisNew/

3.2 Configuring the new master and slave

  (1) Delete the dump.rdb and nodes.conf files from the copied bin directories; if they are left in place, adding the nodes to the cluster will fail (a deletion sketch follows the port changes below).
  (2) Edit each redis.conf and change the port number:

cd /opt/redisNew/redis-cluster/master/bin
vim redis.conf
port 6479
cd /opt/redisNew/redis-cluster/slave/bin
vim redis.conf
port 6480
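
  The deletion mentioned in step (1), as a sketch (it assumes the copied bin directories still contain the old instances' dump.rdb and nodes.conf files):

rm -f /opt/redisNew/redis-cluster/master/bin/dump.rdb /opt/redisNew/redis-cluster/master/bin/nodes.conf
rm -f /opt/redisNew/redis-cluster/slave/bin/dump.rdb /opt/redisNew/redis-cluster/slave/bin/nodes.conf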

3.3 Starting the new master and adding it to the cluster

  (1) On node 192.168.122.130, start the new master:

 cd /opt/redisNew/redis-cluster/master/bin
./redis-server redis.conf

  (2) On 192.168.122.128, add the new node to the cluster:

cd /opt/redis/redis-cluster/master/bin
./redis-cli --cluster add-node 192.168.122.130:6479 192.168.122.128:6379
 >>> Adding node 192.168.122.130:6479 to cluster 192.168.122.128:6379
>>> Performing Cluster Check (using node 192.168.122.128:6379)
M: 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380
   slots: (0 slots) slave
   replicates 0d41f47d99727e9782e7e812252c5a2d6bedf3ff
S: 359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380
   slots: (0 slots) slave
   replicates 655a430bfa4f8cee938aeb994999466e703ccfd7
M: d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380
   slots: (0 slots) slave
   replicates d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9
M: 655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.122.130:6479 to make it join the cluster.
[OK] New node added correctly.
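
  Note that a master added with add-node starts out owning zero hash slots, so it holds no data until slots are moved to it. The cluster nodes output in section 3.5 shows the new 6479 master owning several slot ranges, which means a reshard has to be run at this point; a sketch of the interactive command (redis-cli prompts for how many slots to move, the receiving node ID, and the source nodes):

./redis-cli --cluster reshard 192.168.122.128:6379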

3.4 Starting the new slave, adding it to the cluster, and assigning its master

  (1) On 192.168.122.130, start the new slave:

cd /opt/redisNew/redis-cluster/slave/bin
./redis-server redis.conf

  (2) On 192.168.122.128, add the slave to the cluster and specify its master's node ID (that of the new 6479 master; a lookup sketch follows the command):

cd /opt/redis/redis-cluster/master/bin
./redis-cli --cluster add-node 192.168.122.130:6480 192.168.122.128:6379 --cluster-slave --cluster-master-id 46a2962a363a00f75852bf343f4226f78ebf0dcb
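
  The --cluster-master-id value is the node ID of the new 6479 master; one way to look it up is to filter the cluster nodes listing (assuming the same addresses as above):

./redis-cli -h 192.168.122.128 -p 6379 cluster nodes | grep 6479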

3.5 Checking the result

[root@mysql-master bin]# ./redis-cli -h 127.0.0.1 -p 6379 -c
127.0.0.1:6379> cluster nodes
51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380@16380 slave 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 0 1599379774000 5 connected
359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380@16380 slave 655a430bfa4f8cee938aeb994999466e703ccfd7 0 1599379774000 6 connected
966f5695b5d4793027d176fb3af8fdc66f7a7256 192.168.122.130:6480@16480 slave 46a2962a363a00f75852bf343f4226f78ebf0dcb 0 1599379775000 7 connected
d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379@16379 master - 0 1599379775000 3 connected 12256-16383
46a2962a363a00f75852bf343f4226f78ebf0dcb 192.168.122.130:6479@16479 master - 0 1599379773000 7 connected 0-1332 5461-6794 10923-12255
880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380@16380 slave d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 0 1599379774577 4 connected
0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379@16379 myself,master - 0 1599379772000 1 connected 1333-5460
655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379@16379 master - 0 1599379775892 2 connected 6795-10922

4 Accessing the cluster from a Java client

  (1) Create a Maven project:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.lagou</groupId>
    <artifactId>jedis-cluster</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.2.0</version>
        </dependency>
    </dependencies>
</project>

(2) Write the client code and check the result:

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

import java.util.HashSet;
import java.util.Set;

// Renamed from "JedisCluster" so the class does not shadow redis.clients.jedis.JedisCluster
public class JedisClusterTest {
    public static void main(String[] args) {
        // Seed nodes; JedisCluster discovers the rest of the topology from any reachable node
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("192.168.122.128", 6379));
        nodes.add(new HostAndPort("192.168.122.128", 6380));
        nodes.add(new HostAndPort("192.168.122.129", 6379));
        nodes.add(new HostAndPort("192.168.122.129", 6380));
        nodes.add(new HostAndPort("192.168.122.130", 6379));
        nodes.add(new HostAndPort("192.168.122.130", 6380));
        nodes.add(new HostAndPort("192.168.122.130", 6479));
        nodes.add(new HostAndPort("192.168.122.130", 6480));
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        JedisCluster jedisCluster = new JedisCluster(nodes, jedisPoolConfig);

        // Writes are routed to whichever master owns each key's hash slot
        jedisCluster.set("name:1", "zhangfei");
        jedisCluster.set("name:2", "zhaoyun");
        jedisCluster.set("name:3", "guanyu");

        System.out.println(jedisCluster.get("name:1"));
        System.out.println(jedisCluster.get("name:2"));
        System.out.println(jedisCluster.get("name:3"));
        jedisCluster.close();
    }
}