Deploying a Redis read/write-splitting architecture (with inter-node authentication passwords)
We have already covered the principles of Redis replication; now, how do we actually set it up?
On Windows, build one master and two slaves. Write to the master and read from a slave; if the read succeeds, the master-slave architecture is working.
Steps:
Download the Windows build of Redis
After extracting, make three copies of it
As shown in the screenshot
The directory layout is as follows
Edit redis.windows.conf:
port 6380
In the Redis-6381 folder, change:
port 6381
slaveof 127.0.0.1 6380
In the Redis-6382 folder, change:
port 6382
slaveof 127.0.0.1 6380
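Taken together, a minimal sketch of the three redis.windows.conf files (folder names follow the layout above) looks like this:

```bash
# Redis-6380/redis.windows.conf (master)
port 6380

# Redis-6381/redis.windows.conf (slave of 6380)
port 6381
slaveof 127.0.0.1 6380

# Redis-6382/redis.windows.conf (slave of 6380)
port 6382
slaveof 127.0.0.1 6380
```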
Add simple Windows batch scripts for quick startup!
In each Redis folder, create
startRedisServer.bat
with the following content:
@echo off
redis-server.exe redis.windows.conf
@pause
Then, in the directory alongside the Redis folders, create
start6380.cmd
@echo off
cd Redis-6380
startRedisServer.bat
Repeat the same steps for 6381 and 6382; when done it looks like this:
Start and test
Start 6380 first, then 6381 and 6382
The startup screens look like this
Once startup succeeds, the slaves perform a full resynchronization from the master.
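A quick way to verify replication is working (a sketch, assuming the three instances above are running locally): write through the master on 6380 and read from a slave on 6381:

```
redis-cli -p 6380 set k1 hello
redis-cli -p 6381 get k1      # should return "hello" if replication is up
```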
Setting up master-slave replication on Linux:
Build just one master and one slave. Write to the master and read from the slave; if the read succeeds, the setup works.
## Enabling replication: deploying the slave node
Install Redis on two CentOS 6.5 machines, one at IP 192.168.25.128 and the other at 192.168.25.129. After both installs, set `slaveof <masterip> <masterport>` on 128 to turn that machine into the slave node
/etc/redis/6379.conf
```bash
slaveof 192.168.25.129 6379
```
## Enforcing read/write splitting
On top of the master-slave replication, enforce read/write splitting
/etc/redis/6379.conf
```bash
# this setting is already on by default
slave-read-only yes
```
A read-only Redis slave node rejects every write command, which enforces the read/write-splitting architecture
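To see the enforcement in action (a sketch, assuming a slave configured as above), any write attempted on the slave is rejected:

```
127.0.0.1:6379> set k1 abc
(error) READONLY You can't write against a read only slave.
```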
## Cluster security authentication
Enable authentication on the master with requirepass:
On 129: /etc/redis/6379.conf
```bash
requirepass redis-pass
```
Set the connection password on the slave with masterauth:
On 128: /etc/redis/6379.conf
```bash
masterauth redis-pass
```
In short: the master requires a password, and the slave must hold the same password to connect
After changing the configuration, remember to restart:
```bash
cd /etc/init.d/
redis-cli shutdown
./redis_6379 start
```
The file name is the same on both machines, so take care not to upload the config to the wrong one.
## Testing the read/write-splitting architecture
- Start the master first: the Redis instance on 129
- Then start the slave: the Redis instance on 128
Try to read a key on 129:
```bash
[root@129 init.d]# redis-cli
127.0.0.1:6379> get k1
(error) NOAUTH Authentication required.
```
It fails because we enabled a password earlier. So how do we connect with redis-cli now?
```bash
# redis-cli -h
redis-cli 3.2.8
Usage: redis-cli [OPTIONS] [cmd [arg [arg ...]]]
-h <hostname> Server hostname (default: 127.0.0.1).
-p <port> Server port (default: 6379).
-s <socket> Server socket (overrides hostname and port).
-a <password> Password to use when connecting to the server.
```
The help output shows that -a supplies the password:
```bash
[root@ init.d]# redis-cli -a redis-pass
127.0.0.1:6379> get k1
(nil)
# Note: shutting Redis down also requires the password
redis-cli -a redis-pass shutdown
```
### Writing data on the master
```
[root@ init.d]# redis-cli -a redis-pass
127.0.0.1:6379> set k1 123456
OK
```
### Reading data on the slave
```
[root@ init.d]# redis-cli
127.0.0.1:6379> get k1
(nil)
```
The slave returns nothing. What's going on? This usually means the slave is misconfigured.
Logs would help here. On 128, in `/etc/redis/6379.conf`,
set `logfile /etc/redis/log.log`; after a restart, the log shows the slave failing to connect to the master:
```
[root@eshop-cache02 redis]# ll
total 52
-rw-r--r-- 1 root root 46774 Mar 23 2019 6379.conf
-rw-r--r-- 1 root root 2719 Mar 19 05:14 log.log
[root@ redis]# tail -f log.log
24489:S 19 Mar 05:14:28.768 # Error condition on socket for SYNC: Connection refused
24489:S 19 Mar 05:14:29.789 * Connecting to MASTER eshop-cache01:6379
```
The cause: the `bind` setting in /etc/redis/6379.conf was never opened up
By default Redis binds to 127.0.0.1, so only the local machine can reach it. Change it to the machine's own LAN IP address and it can serve external clients; since we set up a hosts mapping earlier, a hostname also works
```
# bind 127.0.0.1
bind 127.0.0.1 192.168.25.128
# or, with the hosts mapping in place, bind the hostname instead:
bind 127.0.0.1 eshop-cache02
```
bind can list multiple addresses. If 127.0.0.1 is not among them, redis-cli (which connects to 127.0.0.1 by default) will no longer work, and you lose that convenient local login
If redis-cli already cannot connect, pass the host explicitly, like so:
redis-cli -h 192.168.25.128 shutdown
Remember to make this change on every Redis node, each binding its own hostname
After opening up external access (bind 127.0.0.1 192.168.25.128), the slave finally connects:
```
[root@ init.d]# ./redis_6379 start
Starting Redis server...
[root@ init.d]# redis-cli
127.0.0.1:6379> get k1 # the value written on the master is now readable
"123456"
127.0.0.1:6379> info replication # inspect replication state
# Replication
role:slave
master_host:129 # the master's details are visible here
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:253
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
```
## Benchmarking QPS on the master-slave architecture, and scaling horizontally for higher QPS
To baseline the Redis setup you just built and measure its performance and QPS (queries per second),
the redis-benchmark tool that ships with Redis is the quickest and most convenient way; it drives simple operations and scenarios (simplicity is the point)
Tool path: /usr/local/redis-3.2.8/src/redis-benchmark
The syntax is below. Besides the three options controlling concurrency, request count, and payload size, the usual connection options also apply; for example, since we configured a password, add -a redis-pass
./redis-benchmark
-c <clients> Number of parallel connections (default 50)
-n <requests> Total number of requests (default 100000)
-d <size> Data size of SET/GET value in bytes (default 2)
For example, if at peak your instantaneous user count reaches 100,000+, making 10 million requests of 50 bytes each: -c 100000, -n 10000000, -d 50
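Putting that together, the full command for such a peak-load run against this deployment would look roughly like this (a sketch; tune the numbers to your own traffic, and keep -a because we enabled a password):

```bash
cd /usr/local/redis-3.2.8/src
./redis-benchmark -h 192.168.25.129 -p 6379 -a redis-pass -c 100000 -n 10000000 -d 50
```

Note that 100,000 parallel connections will exceed the default OS open-file limit; raise it with ulimit -n, or spread the load across several benchmark machines each using a smaller -c.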
Below is the benchmark output for the Redis instance deployed above; the run takes a few minutes
Benchmark results on 129
(1 core, 1 GB RAM, virtual machine)
/usr/local/redis-3.2.8/src
[root@129 src]# ./redis-benchmark -a redis-pass
The output is long; it took several screens to copy
====== PING_INLINE ======
100000 requests completed in 1.28 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.78% <= 1 milliseconds
99.93% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
78308.54 requests per second
====== PING_BULK ======
100000 requests completed in 1.30 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.87% <= 1 milliseconds
100.00% <= 1 milliseconds
76804.91 requests per second
====== SET ======
100000 requests completed in 2.50 seconds
50 parallel clients
3 bytes payload
keep alive: 1
5.95% <= 1 milliseconds
99.63% <= 2 milliseconds
99.93% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40032.03 requests per second // e.g. this SET run comes out at about 40k QPS
====== GET ======
100000 requests completed in 1.30 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.73% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
76628.35 requests per second // e.g. this GET run comes out at about 76k QPS
====== INCR ======
100000 requests completed in 1.90 seconds
50 parallel clients
3 bytes payload
keep alive: 1
80.92% <= 1 milliseconds
99.81% <= 2 milliseconds
99.95% <= 3 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
100.00% <= 6 milliseconds
52548.61 requests per second
====== LPUSH ======
100000 requests completed in 2.58 seconds
50 parallel clients
3 bytes payload
keep alive: 1
3.76% <= 1 milliseconds
99.61% <= 2 milliseconds
99.93% <= 3 milliseconds
100.00% <= 3 milliseconds
38684.72 requests per second
====== RPUSH ======
100000 requests completed in 2.47 seconds
50 parallel clients
3 bytes payload
keep alive: 1
6.87% <= 1 milliseconds
99.69% <= 2 milliseconds
99.87% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
40469.45 requests per second
====== LPOP ======
100000 requests completed in 2.26 seconds
50 parallel clients
3 bytes payload
keep alive: 1
28.39% <= 1 milliseconds
99.83% <= 2 milliseconds
100.00% <= 2 milliseconds
44306.60 requests per second
====== RPOP ======
100000 requests completed in 2.18 seconds
50 parallel clients
3 bytes payload
keep alive: 1
36.08% <= 1 milliseconds
99.75% <= 2 milliseconds
100.00% <= 2 milliseconds
45871.56 requests per second
====== SADD ======
100000 requests completed in 1.23 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.94% <= 1 milliseconds
100.00% <= 2 milliseconds
100.00% <= 2 milliseconds
81168.83 requests per second
====== SPOP ======
100000 requests completed in 1.28 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.80% <= 1 milliseconds
99.96% <= 2 milliseconds
99.96% <= 3 milliseconds
99.97% <= 5 milliseconds
100.00% <= 5 milliseconds
78369.91 requests per second
====== LPUSH (needed to benchmark LRANGE) ======
100000 requests completed in 2.47 seconds
50 parallel clients
3 bytes payload
keep alive: 1
15.29% <= 1 milliseconds
99.64% <= 2 milliseconds
99.94% <= 3 milliseconds
100.00% <= 3 milliseconds
40420.37 requests per second
====== LRANGE_100 (first 100 elements) ======
100000 requests completed in 3.69 seconds
50 parallel clients
3 bytes payload
keep alive: 1
30.86% <= 1 milliseconds
96.99% <= 2 milliseconds
99.94% <= 3 milliseconds
99.99% <= 4 milliseconds
100.00% <= 4 milliseconds
27085.59 requests per second
====== LRANGE_300 (first 300 elements) ======
100000 requests completed in 10.22 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.03% <= 1 milliseconds
5.90% <= 2 milliseconds
90.68% <= 3 milliseconds
95.46% <= 4 milliseconds
97.67% <= 5 milliseconds
99.12% <= 6 milliseconds
99.98% <= 7 milliseconds
100.00% <= 7 milliseconds
9784.74 requests per second
====== LRANGE_500 (first 450 elements) ======
100000 requests completed in 14.71 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.00% <= 1 milliseconds
0.07% <= 2 milliseconds
1.59% <= 3 milliseconds
89.26% <= 4 milliseconds
97.90% <= 5 milliseconds
99.24% <= 6 milliseconds
99.73% <= 7 milliseconds
99.89% <= 8 milliseconds
99.96% <= 9 milliseconds
99.99% <= 10 milliseconds
100.00% <= 10 milliseconds
6799.48 requests per second
====== LRANGE_600 (first 600 elements) ======
100000 requests completed in 18.56 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.00% <= 2 milliseconds
0.23% <= 3 milliseconds
1.75% <= 4 milliseconds
91.17% <= 5 milliseconds
98.16% <= 6 milliseconds
99.04% <= 7 milliseconds
99.83% <= 8 milliseconds
99.95% <= 9 milliseconds
99.98% <= 10 milliseconds
100.00% <= 10 milliseconds
5387.35 requests per second
====== MSET (10 keys) ======
100000 requests completed in 4.02 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.01% <= 1 milliseconds
53.22% <= 2 milliseconds
99.12% <= 3 milliseconds
99.55% <= 4 milliseconds
99.70% <= 5 milliseconds
99.90% <= 6 milliseconds
99.95% <= 7 milliseconds
100.00% <= 8 milliseconds
24869.44 requests per second
Benchmark results on 128
The output is similarly long, so it is omitted here
## How much QPS can Redis support?
It is hard to give a single number; as the runs above show, it depends on the machine specs and the workload (complex operations? simple operations? payload size).
For example, a dedicated cluster built for one project with 4-core, 4 GB machines, fairly complex operations, and large payloads might top out at a few tens of thousands of QPS per node.
Generally speaking, for high concurrency Redis can deliver at least tens of thousands of QPS without trouble.
What does it depend on?
- Machine specs
- Operation complexity
- Payload size
- Network bandwidth / network overhead
So the concrete QPS has to be measured for your own setup, and production may differ again, because of the large number of network calls and the overhead that comes with them.
The product-detail-page cache discussed later may concatenate long strings of data into a single JSON payload, possibly several KB in size, so QPS will differ from one environment to another
## Scaling Redis read nodes horizontally to raise throughput
Add more Redis slave nodes on other servers. If a single slave handles roughly 50,000 read QPS, two slaves taking all the read traffic let the cluster sustain 100,000+ read QPS
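The capacity estimate above can be sketched in a few lines of shell arithmetic (the 50,000 per-slave read QPS is the assumed figure from the text, not a measurement):

```shell
#!/bin/sh
# Rough read-capacity estimate for a read/write-splitting setup:
# all reads are spread evenly across the slave nodes, writes stay on the master.
PER_SLAVE_READ_QPS=50000   # assumed per-slave read QPS from benchmarking
SLAVES=2                   # number of read replicas
TOTAL_READ_QPS=$(( PER_SLAVE_READ_QPS * SLAVES ))
echo "estimated cluster read capacity: ${TOTAL_READ_QPS} QPS"
# prints: estimated cluster read capacity: 100000 QPS
```

Adding a third slave raises the estimate to 150,000+, which is the whole point of scaling reads horizontally.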