前言
本文为拉勾高薪训练营作业,搭建三主三从redis集群模式,并在搭建完成后,添加一个master主节点和一个slave从节点
1 环境说明:
1.1 各个软件版本说明
(1)虚拟机:Vmware 15
(2)操作系统:Centos 7
(3)redis版本:redis-5.0.7
1.2 各个机器节点说明
redis节点 | IP | 端口 | 角色 |
---|---|---|---|
master1 | 192.168.122.128 | 6379 | 主节点1 |
master2 | 192.168.122.129 | 6379 | 主节点2 |
master3 | 192.168.122.130 | 6379 | 主节点3 |
master4 | 192.168.122.130 | 6479 | 集群搭建完成后,新增主节点 |
slave1 | 192.168.122.128 | 6380 | 从节点1 |
slave2 | 192.168.122.129 | 6380 | 从节点2 |
slave3 | 192.168.122.130 | 6380 | 从节点3 |
slave4 | 192.168.122.130 | 6480 | 集群搭建完成后,新增从节点 |
2 集群环境搭建
2.1单节点配置redis环境
(1)在192.168.122.129节点上下载安装包,并解压安装
mkdir -p /opt/redis/redis-cluster/master ##主节点安装位置
mkdir -p /opt/redis/redis-cluster/slave ##从节点安装位置
cd /opt/redis
wget http://download.redis.io/releases/redis-5.0.7.tar.gz ## 下载安装包
tar -zxvf redis-5.0.7.tar.gz ## 解压安装包
cd /redis-5.0.7
make && make PREFIX=/opt/redis/redis-cluster/master install
2.2 设置配置文件
将redis安装目录下的redis.conf复制到master/bin目录下,并编辑配置文件
scp redis.conf /opt/redis/redis-cluster/master/bin
vim reids.conf
#bind 127.0.0.1
protected-mode no
daemonize yes
cluster-enabled yes
2.3 将master复制到slave中
复制master到slave中,并修改配置文件redis.conf,修改端口号为6380
cd /opt/redis/redis-culster
scp -r master/ slave/
cd slave/bin
vim redis.conf
port 6380
2.4 将已配置好的redis 复制到其他的机器上
在这里直接使用之前上课时用到过的rsync-script直接进行复制
cd /opt/redis/
rsync-script redis-cluster
2.5 启动各个节点
在各个机器上执行以下命令
cd /opt/redis/redis-culster/master/bin
./redis-server redis.conf
cd /opt/redis/redis-culster/slave/bin
./redis-server redis.conf
2.6 配置集群模式
在192.168.122.128节点上,执行命令
cd /opt/redis/redis-culster/master/bin
./redis-cli --cluster create 192.168.122.128:6379 192.168.122.129:6379 192.168.122.130:6379 192.168.122.128:6380 192.168.122.129:6380 192.168.122.130:6380 --cluster-replicas 1
执行结果
[root@mysql-master bin]# ./redis-cli --cluster create 192.168.122.128:6379 192.168.122.129:6379 192.168.122.130:6379 192.168.122.128:6380 192.168.122.129:6380 192.168.122.130:6380 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.122.129:6380 to 192.168.122.128:6379
Adding replica 192.168.122.130:6380 to 192.168.122.129:6379
Adding replica 192.168.122.128:6380 to 192.168.122.130:6379
M: 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379
slots:[0-5460] (5461 slots) master
M: 655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379
slots:[5461-10922] (5462 slots) master
M: d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379
slots:[10923-16383] (5461 slots) master
S: 880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380
replicates d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9
S: 51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380
replicates 0d41f47d99727e9782e7e812252c5a2d6bedf3ff
S: 359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380
replicates 655a430bfa4f8cee938aeb994999466e703ccfd7
Can I set the above configuration? (type 'yes' to accept): yese^H
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.122.128:6379)
M: 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380
slots: (0 slots) slave
replicates 0d41f47d99727e9782e7e812252c5a2d6bedf3ff
S: 359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380
slots: (0 slots) slave
replicates 655a430bfa4f8cee938aeb994999466e703ccfd7
M: d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380
slots: (0 slots) slave
replicates d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9
M: 655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
2.7 登录redis查看结果
[root@mysql-master bin] cd /opt/redis/redis-cluster/master/bin
[root@mysql-master bin] ./redis-cli -h 192.168.122.128 -p 6379 -c
192.168.122.128:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:333
cluster_stats_messages_pong_sent:335
cluster_stats_messages_sent:668
cluster_stats_messages_ping_received:330
cluster_stats_messages_pong_received:333
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:668
192.168.122.128:6379> cluster nodes
51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380@16380 slave 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 0 1599379774000 5 connected
359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380@16380 slave 655a430bfa4f8cee938aeb994999466e703ccfd7 0 1599379774000 6 connected
d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379@16379 master - 0 1599379775000 3 connected 12256-16383
880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380@16380 slave d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 0 1599379774577 4 connected
0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379@16379 myself,master - 0 1599379772000 1 connected 1333-5460
655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379@16379 master - 0 1599379775892 2 connected 6795-10922
3 新增主从节点,加入到集群中
3.1 复制主从节点
在192.168.122.130 这台机器上,复制redis环境
cd /opt
scp -r redis/ redisNew/
3.2 配置主从环境
(1)删除 dump.rdb,nodes.conf这两个文件,如果不删除这两个文件,在往集群中添加节点时会报错
(2)修改redis.conf文件,修改端口号
cd /opt/redisNew/redis-cluster/master/bin
vim redis.conf
port 6749
cd /opt/redisNew/redis-cluster/slave/bin
vim redis.conf
port 6480
3.3 启动主节点,并将主节点添加到集群中
(1)在192.168.122.130这个节点上,启动主节点
cd /opt/redisNew/redis-cluster/master/bin
./redis-server redis.conf
(2)在192.168.122.128上,将节点添加到集群中
cd /opt/redisNew/redis-cluster/master/bin
./redis-cli --cluster add-node 192.168.122.130:6479 192.168.122.128:6379
>>> Adding node 192.168.122.130:6479 to cluster 192.168.122.128:6379
>>> Performing Cluster Check (using node 192.168.122.128:6379)
M: 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380
slots: (0 slots) slave
replicates 0d41f47d99727e9782e7e812252c5a2d6bedf3ff
S: 359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380
slots: (0 slots) slave
replicates 655a430bfa4f8cee938aeb994999466e703ccfd7
M: d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380
slots: (0 slots) slave
replicates d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9
M: 655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.122.130:6479 to make it join the cluster.
[OK] New node added correctly.
3.4 启动从节点,将从节点添加到集群中,并指定从节点的master
(1)在192.168.122.130这台机器上
cd /opt/redisNew/redis-cluster/slave/bin
./redis-server redis.conf
(2)在192.168.122.128这台机器上将从节点加入到集群中,并指定主节点id
cd /opt/redisNew/redis-cluster/master/bin
./redis-cli --cluster add-node 192.168.122.130:6480 192.168.122.128:6379 --cluster-slave --cluster-master-id 46a2962a363a00f75852bf343f4226f78ebf0dcb
3.5 查看添加结果
[root@mysql-master bin]# ./redis-cli -h 127.0.0.1 -p 6379 -c
127.0.0.1:6379> cluster nodes
51d03c9c38c7cac9693ffddecb0973fbcf1f14c9 192.168.122.129:6380@16380 slave 0d41f47d99727e9782e7e812252c5a2d6bedf3ff 0 1599379774000 5 connected
359583bb1a9ed9a3fa2540ba2208576bf267e980 192.168.122.130:6380@16380 slave 655a430bfa4f8cee938aeb994999466e703ccfd7 0 1599379774000 6 connected
966f5695b5d4793027d176fb3af8fdc66f7a7256 192.168.122.130:6480@16480 slave 46a2962a363a00f75852bf343f4226f78ebf0dcb 0 1599379775000 7 connected
d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 192.168.122.130:6379@16379 master - 0 1599379775000 3 connected 12256-16383
46a2962a363a00f75852bf343f4226f78ebf0dcb 192.168.122.130:6479@16479 master - 0 1599379773000 7 connected 0-1332 5461-6794 10923-12255
880bb66b37ecd0dd8a25b8e05c474feec97f5d32 192.168.122.128:6380@16380 slave d602bcdbe33cbd4ba27e98f122dd4f1870a03fc9 0 1599379774577 4 connected
0d41f47d99727e9782e7e812252c5a2d6bedf3ff 192.168.122.128:6379@16379 myself,master - 0 1599379772000 1 connected 1333-5460
655a430bfa4f8cee938aeb994999466e703ccfd7 192.168.122.129:6379@16379 master - 0 1599379775892 2 connected 6795-10922
4 通过java客户端对集群进行操作
(1) 创建maven项目
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.lagou</groupId>
<artifactId>jedis-cluster</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>3.2.0</version>
</dependency>
</dependencies>
</project>
(2)编写代码,查看结果
public class JedisCluster {
public static void main(String[] args) {
Set<HostAndPort> nodes = new HashSet<HostAndPort>();
nodes.add(new HostAndPort("192.168.122.128", 6379));
nodes.add(new HostAndPort("192.168.122.128", 6380));
nodes.add(new HostAndPort("192.168.122.129", 6379));
nodes.add(new HostAndPort("192.168.122.129", 6380));
nodes.add(new HostAndPort("192.168.122.130", 6379));
nodes.add(new HostAndPort("192.168.122.130", 6380));
nodes.add(new HostAndPort("192.168.122.130", 6479));
nodes.add(new HostAndPort("192.168.122.130", 6480));
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
JedisCluster jedisCluster = new JedisCluster(nodes, jedisPoolConfig);
jedisCluster.set("name:1", "zhangfei");
jedisCluster.set("name:2", "zhaoyun");
jedisCluster.set("name:3", "guanyu");
System.out.println(jedisCluster.get("name:1"));
System.out.println(jedisCluster.get("name:2"));
System.out.println(jedisCluster.get("name:3"));
jedisCluster.close();
}
}