Redis sharding can be implemented in several ways:
- Client-side sharding
- Proxy-based sharding
  - Twemproxy proxy sharding
  - Codis proxy sharding
- Server-side sharding (Redis Cluster)
Deploying a Redis Cluster
Resource list
| OS       | IP            | Hostname |
| -------- | ------------- | -------- |
| CentOS 7 | 192.168.10.51 | master1  |
| CentOS 7 | 192.168.10.52 | master2  |
| CentOS 7 | 192.168.10.53 | master3  |
| CentOS 7 | 192.168.10.54 | slave1   |
| CentOS 7 | 192.168.10.55 | slave2   |
| CentOS 7 | 192.168.10.56 | slave3   |
Disable the firewall and SELinux (all nodes)
systemctl stop firewalld
setenforce 0    # runtime only; edit /etc/selinux/config to disable SELinux across reboots
Install Redis (on all nodes)
# Upload redis-4.0.9.tar.gz
yum -y install gcc gcc-c++
tar zxf redis-4.0.9.tar.gz
cd redis-4.0.9
make && make PREFIX=/usr/local/redis install
# Installed under /usr/local/redis
ln -s /usr/local/redis/bin/* /usr/local/bin/
cd utils/
./install_server.sh
ss -anpt | grep 6379
Edit the Redis configuration file (on all nodes)
vim /etc/redis/6379.conf
-------- line 70 ----------
#bind 127.0.0.1 //comment out bind; without a bind directive Redis listens on all interfaces
-------- line 89 ----------
protected-mode no //disable protected mode
-------- line 93 ----------
port 6379 //listening port
-------- line 137 ----------
daemonize yes //run as a daemon
-------- line 815 ----------
cluster-enabled yes //enable cluster mode
-------- line 823 ----------
cluster-config-file nodes-6379.conf //cluster config file name
-------- line 829 ----------
cluster-node-timeout 5000 //cluster node timeout (milliseconds)
-------- line 673 ----------
appendonly yes //enable AOF persistence
# Restart Redis
/etc/init.d/redis_6379 restart
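The edits above can also be scripted with sed instead of editing by hand. A minimal sketch, run here against a scratch copy of the relevant stock defaults (the default values are assumptions based on a stock redis-4.0.9 config file; set CONF=/etc/redis/6379.conf to apply it to the real file):

```shell
# Work on a scratch copy; point CONF at /etc/redis/6379.conf to edit for real.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
bind 127.0.0.1
protected-mode yes
port 6379
daemonize no
# cluster-enabled yes
# cluster-config-file nodes-6379.conf
# cluster-node-timeout 15000
appendonly no
EOF

sed -i \
  -e 's/^bind 127.0.0.1/#bind 127.0.0.1/' \
  -e 's/^protected-mode yes/protected-mode no/' \
  -e 's/^daemonize no/daemonize yes/' \
  -e 's/^# *cluster-enabled yes/cluster-enabled yes/' \
  -e 's/^# *cluster-config-file nodes-6379.conf/cluster-config-file nodes-6379.conf/' \
  -e 's/^# *cluster-node-timeout 15000/cluster-node-timeout 5000/' \
  -e 's/^appendonly no/appendonly yes/' \
  "$CONF"

cat "$CONF"   # all six settings now match the values listed above
```

Running this on every node (then restarting Redis) avoids repeating the manual vim session six times.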
Create the Redis cluster (on master1 only)
CentOS 7 ships Ruby 2.0.0, but installing the redis gem requires Ruby 2.2.2 or later. The fix is to install a newer Ruby first, either with rvm or, as below, from source.
Install Ruby from source
Download: https://cache.ruby-lang.org/pub/ruby/2.6/ruby-2.6.9.tar.gz
# Upload ruby-2.6.9.tar.gz
# Install the build dependencies
yum -y install gcc make zlib-devel openssl-devel readline-devel
# Compile and install
tar zxf ruby-2.6.9.tar.gz -C /usr/src/
cd /usr/src/ruby-2.6.9/
./configure --prefix=/usr/local/ruby-2.6.9
make && make install
# Put ruby and gem on the PATH
ln -s /usr/local/ruby-2.6.9/bin/ruby /usr/bin/ruby
ln -s /usr/local/ruby-2.6.9/bin/gem /usr/bin/gem
# Verify the versions
ruby -v
gem -v
# Install the redis gem that redis-trib.rb depends on; if the newest gem turns
# out to be incompatible with redis-trib.rb, pinning an older release (for
# example: gem install redis -v 4.1.1) is a common workaround
gem install redis
Create the cluster (--replicas 1 assigns one slave to each master)
redis-4.0.9/src/redis-trib.rb create --replicas 1 192.168.10.51:6379 192.168.10.52:6379 192.168.10.53:6379 192.168.10.54:6379 192.168.10.55:6379 192.168.10.56:6379
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.10.51:6379
192.168.10.52:6379
192.168.10.53:6379
Adding replica 192.168.10.55:6379 to 192.168.10.51:6379
Adding replica 192.168.10.56:6379 to 192.168.10.52:6379
Adding replica 192.168.10.54:6379 to 192.168.10.53:6379
M: d5a73257d1d7be26e70326da76d3867e71736631 192.168.10.51:6379
slots:0-5460 (5461 slots) master
M: 32ffbb1e688ca888e8207a9556d3df6ec93f665a 192.168.10.52:6379
slots:5461-10922 (5462 slots) master
M: b508ec498729478a9e59596960e51ab701b1f53f 192.168.10.53:6379
slots:10923-16383 (5461 slots) master
S: 969b5cb70ec9f8c1cba3b578b95921c455be1025 192.168.10.54:6379
replicates b508ec498729478a9e59596960e51ab701b1f53f
S: e8b6ed88ebce5f96d6d4d12a75695621688e3a1a 192.168.10.55:6379
replicates d5a73257d1d7be26e70326da76d3867e71736631
S: 8422c0754676685395c3eec2125111ea71049c46 192.168.10.56:6379
replicates 32ffbb1e688ca888e8207a9556d3df6ec93f665a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.10.51:6379)
M: d5a73257d1d7be26e70326da76d3867e71736631 192.168.10.51:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: 32ffbb1e688ca888e8207a9556d3df6ec93f665a 192.168.10.52:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: b508ec498729478a9e59596960e51ab701b1f53f 192.168.10.53:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: e8b6ed88ebce5f96d6d4d12a75695621688e3a1a 192.168.10.55:6379
slots: (0 slots) slave
replicates d5a73257d1d7be26e70326da76d3867e71736631
S: 8422c0754676685395c3eec2125111ea71049c46 192.168.10.56:6379
slots: (0 slots) slave
replicates 32ffbb1e688ca888e8207a9556d3df6ec93f665a
S: 969b5cb70ec9f8c1cba3b578b95921c455be1025 192.168.10.54:6379
slots: (0 slots) slave
replicates b508ec498729478a9e59596960e51ab701b1f53f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
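The slot allocation in the output is deterministic: redis-trib divides the 16384 hash slots into near-equal contiguous ranges, rounding the per-master share to the nearest slot. A small sketch that reproduces the ranges shown above:

```shell
# Reproduce redis-trib's even split of 16384 hash slots across 3 masters.
SLOTS=16384
MASTERS=3
START=0
for i in $(seq 1 $MASTERS); do
  # nearest-integer boundary: round(i * SLOTS / MASTERS) - 1
  END=$(( (i * SLOTS + MASTERS / 2) / MASTERS - 1 ))
  echo "master$i: slots $START-$END ($((END - START + 1)) slots)"
  START=$(( END + 1 ))
done
# prints:
# master1: slots 0-5460 (5461 slots)
# master2: slots 5461-10922 (5462 slots)
# master3: slots 10923-16383 (5461 slots)
```

This is why two masters hold 5461 slots and one holds 5462: the slot count is not divisible by three.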
# Which slave ends up replicating which master is assigned randomly
Re-pairing the slaves with the intended masters
Remove the slave nodes
/root/redis-4.0.9/src/redis-trib.rb del-node 192.168.10.55:6379 e8b6ed88ebce5f96d6d4d12a75695621688e3a1a
/root/redis-4.0.9/src/redis-trib.rb del-node 192.168.10.56:6379 8422c0754676685395c3eec2125111ea71049c46
/root/redis-4.0.9/src/redis-trib.rb del-node 192.168.10.54:6379 969b5cb70ec9f8c1cba3b578b95921c455be1025
Check the cluster status
[root@master1 ~]# /root/redis-4.0.9/src/redis-trib.rb check 192.168.10.51:6379
>>> Performing Cluster Check (using node 192.168.10.51:6379)
M: d5a73257d1d7be26e70326da76d3867e71736631 192.168.10.51:6379
slots:0-5460 (5461 slots) master
0 additional replica(s)
M: 32ffbb1e688ca888e8207a9556d3df6ec93f665a 192.168.10.52:6379
slots:5461-10922 (5462 slots) master
0 additional replica(s)
M: b508ec498729478a9e59596960e51ab701b1f53f 192.168.10.53:6379
slots:10923-16383 (5461 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Rejoin the cluster (run on the slaves)
# Clear the old data and cluster state
# Run on all 3 slave nodes
cd /var/lib/redis/6379/
rm -rf appendonly.aof dump.rdb nodes-6379.conf
/etc/init.d/redis_6379 restart
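The cleanup above can also be driven from a single host over ssh instead of logging in to each slave. A dry-run sketch (it only prints the commands; pipe the output to sh to execute, assuming passwordless root ssh to each slave):

```shell
# Build one cleanup command per slave and print them (dry run).
SLAVES="192.168.10.54 192.168.10.55 192.168.10.56"
CLEAN='cd /var/lib/redis/6379 && rm -f appendonly.aof dump.rdb nodes-6379.conf && /etc/init.d/redis_6379 restart'
CMDS=$(for ip in $SLAVES; do
  printf 'ssh root@%s "%s"\n' "$ip" "$CLEAN"
done)
echo "$CMDS"
```

To actually run the cleanup, replace the final echo with: echo "$CMDS" | sh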
Add the slaves back to the cluster from master1
# Usage: redis-trib.rb add-node --slave --master-id <master-id> <new-node-ip:port> <existing-master-ip:port>
/root/redis-4.0.9/src/redis-trib.rb add-node --slave --master-id d5a73257d1d7be26e70326da76d3867e71736631 192.168.10.54:6379 192.168.10.51:6379
/root/redis-4.0.9/src/redis-trib.rb add-node --slave --master-id 32ffbb1e688ca888e8207a9556d3df6ec93f665a 192.168.10.55:6379 192.168.10.52:6379
/root/redis-4.0.9/src/redis-trib.rb add-node --slave --master-id b508ec498729478a9e59596960e51ab701b1f53f 192.168.10.56:6379 192.168.10.53:6379
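The three add-node calls can also be generated from a pairing table, which makes it harder to attach a slave to the wrong master. A dry-run sketch that only prints the commands (the IDs are the example node IDs from the cluster check output above):

```shell
TRIB=/root/redis-4.0.9/src/redis-trib.rb
# "master-id  new-slave  existing-master" triples, from the cluster check above
PAIRS="d5a73257d1d7be26e70326da76d3867e71736631 192.168.10.54:6379 192.168.10.51:6379
32ffbb1e688ca888e8207a9556d3df6ec93f665a 192.168.10.55:6379 192.168.10.52:6379
b508ec498729478a9e59596960e51ab701b1f53f 192.168.10.56:6379 192.168.10.53:6379"
CMDS=$(echo "$PAIRS" | while read -r id slave master; do
  printf '%s add-node --slave --master-id %s %s %s\n' "$TRIB" "$id" "$slave" "$master"
done)
echo "$CMDS"
```

Piping the printed lines to sh would execute them; keeping the table in one place documents the intended topology.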
Verify the connection
redis-cli -c
[root@master1 ~]# redis-cli -c
127.0.0.1:6379> set name lll
-> Redirected to slot [5798] located at 192.168.10.52:6379
OK
# The client is redirected to the cluster node that owns the key's hash slot