I. Set up Redis and manage replication
1. Set up the Redis environment
This lab uses one master and two slaves, three hosts in total. Install the Redis package on all three hosts.
Master: 192.168.30.100
Slave 1: 192.168.30.104
Slave 2: 192.168.30.103
- Install the package
yum -y install redis
- Configure the service
Change the master's listen address; by default Redis listens only on the local loopback address.
vim /etc/redis.conf
------------------------
bind 192.168.30.100
requirepass ilinux.io # authentication password for the service
------------------------
- Start the service
systemctl start redis
- Check that the port is listening
~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 192.168.30.100:6379 *:*
LISTEN 0 128 *:111 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 :::111 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
- Connect with the Redis client (with requirepass set, authenticate with -a ilinux.io or the AUTH command)
redis-cli -h 192.168.30.100 -p 6379
192.168.30.100:6379> info server
# Server
redis_version:3.2.12
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:7897e7d0e13773f
redis_mode:standalone
os:Linux 3.10.0-957.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.5
process_id:10818
run_id:a02d27544b6546e22329f37d68dc883685b6555d
tcp_port:6379
uptime_in_seconds:822
uptime_in_days:0
hz:10
lru_clock:5657162
executable:/usr/bin/redis-server
config_file:/etc/redis.conf
2. Configure master/slave replication
- Edit the slave's configuration file
vim /etc/redis.conf
------------------------
bind 192.168.30.104
------------------------
- Enable replication on the slave
redis-cli -h 192.168.30.104 -p 6379
192.168.30.104:6379> SLAVEOF 192.168.30.100 6379
192.168.30.104:6379> CONFIG SET masterauth ilinux.io
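Note that SLAVEOF and CONFIG SET only change the running instance; the settings are lost when Redis restarts. A minimal sketch (assuming the stock /etc/redis.conf path used above) that makes the role permanent:

```shell
# Persist the replica role and replication password on the slave.
# Appending works here because neither directive exists in the stock file yet.
cat >> /etc/redis.conf <<'EOF'
slaveof 192.168.30.100 6379
masterauth ilinux.io
EOF
systemctl restart redis
```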
- Test replication
Set a couple of string keys on the master:
redis-cli -h 192.168.30.100 -p 6379
192.168.30.100:6379> set student1 tome
192.168.30.100:6379> set student2 lily
Check on the slave that the corresponding values can be read:
192.168.30.104:6379> get student1
"tome"
192.168.30.104:6379> get student2
"lily"
A slave cannot accept writes:
192.168.30.104:6379> set student3 google
(error) READONLY You can't write against a read only slave.
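Replication health can also be confirmed from the master side with INFO replication; a quick sketch:

```shell
# Expect role:master and connected_slaves:2 once both slaves are attached.
redis-cli -h 192.168.30.100 -a ilinux.io INFO replication | \
  grep -E 'role|connected_slaves|^slave[0-9]'
```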
3. Configure Redis Sentinel
- Edit the configuration file
At least three hosts should act as Sentinel nodes; here all three Redis master/slave nodes also run Sentinel.
vim /etc/redis-sentinel.conf
---------------------------------------
bind 192.168.30.104
sentinel monitor mymaster 192.168.30.100 6379 2 # master address and the quorum of sentinels that must agree the master is down
sentinel auth-pass mymaster ilinux.io # authentication password of the Redis service
sentinel down-after-milliseconds mymaster 30000 # timeout before a node is considered down
sentinel parallel-syncs mymaster 1 # number of slaves that may resync from the new master at the same time after a failover
sentinel failover-timeout mymaster 180000 # timeout after which a failover attempt is considered failed
---------------------------------------
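The sentinel service still has to be started on each of the three nodes before it can be queried; a sketch, assuming the redis-sentinel unit shipped with this redis package:

```shell
# Repeat on every node after editing /etc/redis-sentinel.conf.
systemctl start redis-sentinel
# Sentinel listens on 26379 by default; confirm it is up:
ss -tnl | grep 26379
```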
- Connect to the Sentinel
redis-cli -h 192.168.30.104 -p 26379
- Check the master/slave state
192.168.30.104:26379> sentinel masters
1) 1) "name"
2) "mymaster"
3) "ip"
4) "192.168.30.100"
5) "port"
6) "6379"
7) "runid"
8) "8fd18d586d9cbfc9cc0d4100d24e65a55d06f0f5"
9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "714"
19) "last-ping-reply"
20) "714"
21) "down-after-milliseconds"
22) "30000"
23) "info-refresh"
24) "9821"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "411371"
29) "config-epoch"
30) "0"
31) "num-slaves"
32) "1"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "180000"
39) "parallel-syncs"
40) "1"
192.168.30.104:26379> sentinel slaves mymaster
1) 1) "name"
2) "192.168.30.103:6379"
3) "ip"
4) "192.168.30.103"
5) "port"
6) "6379"
7) "runid"
8) "6cf1e7968702d1cd1da4b29a3b61d29607514ed9"
9) "flags"
10) "slave"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "694"
19) "last-ping-reply"
20) "694"
21) "down-after-milliseconds"
22) "30000"
23) "info-refresh"
24) "5541"
25) "role-reported"
26) "slave"
27) "role-reported-time"
28) "25735"
29) "master-link-down-time"
30) "0"
31) "master-link-status"
32) "ok"
33) "master-host"
34) "192.168.30.100"
35) "master-port"
36) "6379"
37) "slave-priority"
38) "100"
39) "slave-repl-offset"
40) "106234"
2) 1) "name"
2) "192.168.30.104:6379"
3) "ip"
4) "192.168.30.104"
5) "port"
6) "6379"
7) "runid"
8) "6c32ec83f02a98e97d68c033119b90e7aa3d8c89"
9) "flags"
10) "slave"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "1002"
19) "last-ping-reply"
20) "1002"
21) "down-after-milliseconds"
22) "30000"
23) "info-refresh"
24) "316"
25) "role-reported"
26) "slave"
27) "role-reported-time"
28) "296782"
29) "master-link-down-time"
30) "1562584142000"
31) "master-link-status"
32) "err"
33) "master-host"
34) "192.168.30.100"
35) "master-port"
36) "6379"
37) "slave-priority"
38) "100"
39) "slave-repl-offset"
40) "1"
- Test failover
Stop the Redis service on the master by hand:
systemctl stop redis
Check the resulting master/slave relationship:
192.168.30.104:26379> sentinel masters
1) 1) "name"
2) "mymaster"
3) "ip"
4) "192.168.30.103"
5) "port"
6) "6379"
7) "runid"
8) "6cf1e7968702d1cd1da4b29a3b61d29607514ed9"
9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "451"
19) "last-ping-reply"
20) "451"
21) "down-after-milliseconds"
22) "30000"
23) "info-refresh"
24) "8695"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "129925"
29) "config-epoch"
30) "1"
31) "num-slaves"
32) "2"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "180000"
39) "parallel-syncs"
40) "1"
- Problems when the failed master comes back online
The replication password (masterauth) was never set on the old master, so it cannot authenticate to the new master. Set it again at runtime, or write it into the configuration file directly.
192.168.30.100:6379> config get slaveof
1) "slaveof"
2) "192.168.30.103 6379"
192.168.30.100:6379> config get masterauth
1) "masterauth"
2) ""
- Check the Sentinel log
43001:X 16 Aug 16:02:16.917 # +switch-master mymaster 192.168.30.100 6379 192.168.30.103 6379
43001:X 16 Aug 16:02:16.918 * +slave slave 192.168.30.104:6379 192.168.30.104 6379 @ mymaster 192.168.30.103 6379
43001:X 16 Aug 16:02:16.918 * +slave slave 192.168.30.100:6379 192.168.30.100 6379 @ mymaster 192.168.30.103 6379
43001:X 16 Aug 16:02:46.947 # +sdown slave 192.168.30.100:6379 192.168.30.100 6379 @ mymaster 192.168.30.103 6379
43001:X 16 Aug 16:05:07.878 # -sdown slave 192.168.30.100:6379 192.168.30.100 6379 @ mymaster 192.168.30.103 6379
43001:X 16 Aug 16:05:17.849 * +convert-to-slave slave 192.168.30.100:6379 192.168.30.100 6379 @ mymaster 192.168.30.103 6379
II. Set up MogileFS and integrate it with Nginx
1. Set up the MogileFS tracker node
- Install the packages
RPM packages installed locally:
MogileFS-Server-2.46-2.el6.noarch.rpm
perl-Danga-Socket-1.61-1.el6.rf.noarch.rpm
MogileFS-Server-mogilefsd-2.46-2.el6.noarch.rpm
perl-MogileFS-Client-1.14-1.el6.noarch.rpm
MogileFS-Server-mogstored-2.46-2.el6.noarch.rpm
perl-Perlbal-1.78-1.el6.noarch.rpm
MogileFS-Utils-2.19-1.el6.noarch.rpm
yum install perl-Net-Netmask perl-IO-stringy perl-Sys-Syslog perl-IO-AIO
yum -y install ./*.rpm
- Configure the database (database server: 192.168.30.107)
mysql> create database mogilefs;
mysql> grant all on mogilefs.* to mogile identified by 'mogpass';
- Initialize the database
mogdbsetup --dbhost=192.168.30.107 --dbpass=mogpass
MariaDB [mogilefs]> show tables;
+----------------------+
| Tables_in_mogilefs |
+----------------------+
| checksum |
| class |
| device |
| domain |
| file |
| file_on |
| file_on_corrupt |
| file_to_delete |
| file_to_delete2 |
| file_to_delete_later |
| file_to_queue |
| file_to_replicate |
| fsck_log |
| host |
| server_settings |
| tempfile |
| unreachable_fids |
+----------------------+
- Edit the configuration file
vim /etc/mogilefs/mogilefsd.conf
----------------------------------------
db_dsn = DBI:mysql:mogilefs:host=192.168.30.107
db_user = mogile
db_pass = mogpass
listen = 192.168.30.100:7001
----------------------------------------
- Start the tracker service
service mogilefsd start
2. Set up the MogileFS storage nodes
- Install the packages
yum install perl-Net-Netmask perl-IO-stringy perl-Sys-Syslog perl-IO-AIO
yum -y install ./*.rpm
- Configure the service
vim /etc/mogilefs/mogstored.conf
----------------------------------------
maxconns = 10000
httplisten = 0.0.0.0:7500
mgmtlisten = 0.0.0.0:7501
docroot = /var/mogdata
----------------------------------------
mkdir /var/mogdata
chown mogilefs.mogilefs /var/mogdata
- Start the service
mogstored -c /etc/mogilefs/mogstored.conf -daemon
- Check the ports
~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:7500 *:*
LISTEN 0 128 *:7501 *:*
LISTEN 0 128 *:111 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 :::111 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
3. Manage the storage nodes
All three hosts are storage nodes; only 100 and 103 are tracker nodes.
- Add the hosts
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 host add 192.168.30.104 --ip=192.168.30.104 --port=7500 --status=alive
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 host add 192.168.30.103 --ip=192.168.30.103 --port=7500 --status=alive
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 host add 192.168.30.100 --ip=192.168.30.100 --port=7500 --status=alive
- List the hosts
~]# mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 host list
192.168.30.104 [1]: alive
IP: 192.168.30.104:7500
192.168.30.103 [2]: alive
IP: 192.168.30.103:7500
192.168.30.100 [3]: alive
IP: 192.168.30.100:7500
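Every mogadm call here repeats the same --trackers list; it can be parked in a shell variable (TRACKERS is just a local convenience name, not a mogadm feature) to keep the commands readable:

```shell
# Define once per shell session, then reuse in every mogadm call.
TRACKERS=192.168.30.100:7001,192.168.30.103:7001
# e.g. the host listing above becomes:
mogadm --trackers=$TRACKERS host list
```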
- Add devices on the storage nodes
Create the directories on each host and register the devices:
mkdir -p /var/mogdata/dev1
mkdir -p /var/mogdata/dev2
mkdir -p /var/mogdata/dev3
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 device add 192.168.30.104 dev1
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 device add 192.168.30.103 dev2
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 device add 192.168.30.100 dev3
- List the devices
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 device list
192.168.30.104 [1]: alive
used(G) free(G) total(G) weight(%)
dev1: alive 0.000 0.000 0.000 100
192.168.30.103 [2]: alive
used(G) free(G) total(G) weight(%)
dev2: alive 0.000 0.000 0.000 100
192.168.30.100 [3]: alive
used(G) free(G) total(G) weight(%)
dev3: alive 1.601 48.375 49.976 100
- Create a domain and a class
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 domain add jpg
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 domain list
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 class add jpg wallpaper --mindevcount=3
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 class list
domain class mindevcount replpolicy hashtype
-------------------- -------------------- ------------- ------------ -------
jpg default 2 MultipleHosts() NONE
jpg wallpaper 3 MultipleHosts() NONE
- Upload a file
mogupload --trackers=192.168.30.100:7001,192.168.30.103:7001 --domain=jpg --class=wallpaper --key=1 --file=/root/1.jpg
- Inspect and access the uploaded file
Query the file info and fetch the returned HTTP URL:
mogfileinfo --trackers=192.168.30.100:7001,192.168.30.103:7001 --domain=jpg --key=1
- file: 1
class: wallpaper
devcount: 1
domain: jpg
fid: 14
key: 1
length: 1146700
- http://192.168.30.100:7500/dev3/0/000/000/0000000014.fid
- Possible problems
Some devices cannot be used, possibly because these RPM packages were built for CentOS 6 but are running on CentOS 7.
mogadm --trackers=192.168.30.100:7001,192.168.30.103:7001 check
Checking trackers...
192.168.30.100:7001 ... OK
192.168.30.103:7001 ... OK
Checking hosts...
[ 1] 192.168.30.104 ... OK
[ 2] 192.168.30.103 ... OK
[ 3] 192.168.30.100 ... OK
Checking devices...
host device size(G) used(G) free(G) use% ob state I/O%
---- ------------ ---------- ---------- ---------- ------ ---------- -----
[ 1] dev1 REQUEST FAILURE FETCHING: http://192.168.30.104:7500/dev1/usage
[ 2] dev2 REQUEST FAILURE FETCHING: http://192.168.30.103:7500/dev2/usage
[ 3] dev3 49.976 1.602 48.373 3.21% writeable 0.0
---- ------------ ---------- ---------- ---------- ------
total: 49.976 1.602 48.373 3.21%
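The usage file that mogadm fetches is maintained by mogstored itself, so the failure can be narrowed down on the affected host; a debugging sketch:

```shell
# Fetch the usage file directly; an error here means mogstored is not
# writing it, not that the tracker is misconfigured.
curl http://192.168.30.104:7500/dev1/usage
# Check that mogstored is running and owns its document root:
ps aux | grep '[m]ogstored'
ls -ld /var/mogdata/dev1
```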
4. Compile and install Nginx as a reverse proxy (this did not work; the cause was not found)
- Install the package
Add the third-party module with --add-module=/root/nginx-mogilefs-module/
./configure \
--prefix=/usr \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_flv_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--http-client-body-temp-path=/var/tmp/nginx/client/ \
--http-proxy-temp-path=/var/tmp/nginx/proxy/ \
--http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
--http-scgi-temp-path=/var/tmp/nginx/scgi \
--with-pcre \
--with-debug \
--add-module=/root/nginx-mogilefs-module/
make CFLAGS="-pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -g"
make install
- Configure
location ~ ([^\/]+)$ {
mogilefs_tracker 192.168.30.100:7001;
mogilefs_methods GET;
mogilefs_noverify on;
mogilefs_pass {
proxy_pass $mogilefs_path;
proxy_hide_header Content-Type;
proxy_buffering off;
}
}
- Start the service
nginx
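Even though this setup did not work here, a sanity check would look like the following (the nginx host is an assumption; key 1 is the file uploaded earlier into domain jpg):

```shell
# Validate the config, then ask nginx for the MogileFS key "1".
nginx -t
curl -o /tmp/1.jpg http://192.168.30.100/1
```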
III. Set up FastDFS
1. Configure the tracker node
- Install the packages on every node
Download the matching RPM packages and install them:
ll ./*.rpm
-rw-r--r-- 1 root root 1924 Aug 19 13:41 fastdfs-5.0.11-1.el7.centos.x86_64.rpm
-rw-r--r-- 1 root root 203888 Aug 19 13:41 fastdfs-server-5.0.11-1.el7.centos.x86_64.rpm
-rw-r--r-- 1 root root 131464 Aug 19 13:41 fastdfs-tool-5.0.11-1.el7.centos.x86_64.rpm
-rw-r--r-- 1 root root 99140 Aug 19 13:41 libfastcommon-1.0.36-1.el7.centos.x86_64.rpm
-rw-r--r-- 1 root root 36428 Aug 19 13:41 libfdfsclient-5.0.11-1.el7.centos.x86_64.rpm
yum install ./*.rpm
- Configure the tracker node
Files to configure:
- tracker.conf
- http.conf
- mime.types
cd /etc/fdfs
cp tracker.conf.sample tracker.conf
vim /etc/fdfs/tracker.conf
---------------------------------------
base_path=/data/tracker
http.server_port=80
---------------------------------------
mkdir /data/tracker -pv
- Start the service (listens on port 22122 by default)
service fdfs_trackerd start
2. Configure the storage nodes
- Configure the Storage node
Files to configure:
- storage.conf
- http.conf
- mime.types
cd /etc/fdfs
cp storage.conf.sample storage.conf
vim /etc/fdfs/storage.conf
---------------------------------------
base_path=/data/fastdfs
store_path0=/data/fastdfs
tracker_server=192.168.30.100:22122
http.server_port=8888
---------------------------------------
mkdir /data/fastdfs -pv
- Start the service (listens on port 23000 by default; on the storage node the daemon is fdfs_storaged, not fdfs_trackerd)
service fdfs_storaged start
- Inspect the data directory
ls /data/fastdfs/data/
00 07 0E 15 1C 23 2A 31 38 3F 46 4D 54 5B 62 69 70 77 7E 85 8C 93 9A A1 A8 AF B6 BD C4 CB D2 D9 E0 E7 EE F5 FC
01 08 0F 16 1D 24 2B 32 39 40 47 4E 55 5C 63 6A 71 78 7F 86 8D 94 9B A2 A9 B0 B7 BE C5 CC D3 DA E1 E8 EF F6 FD
02 09 10 17 1E 25 2C 33 3A 41 48 4F 56 5D 64 6B 72 79 80 87 8E 95 9C A3 AA B1 B8 BF C6 CD D4 DB E2 E9 F0 F7 fdfs_storaged.pid
03 0A 11 18 1F 26 2D 34 3B 42 49 50 57 5E 65 6C 73 7A 81 88 8F 96 9D A4 AB B2 B9 C0 C7 CE D5 DC E3 EA F1 F8 FE
04 0B 12 19 20 27 2E 35 3C 43 4A 51 58 5F 66 6D 74 7B 82 89 90 97 9E A5 AC B3 BA C1 C8 CF D6 DD E4 EB F2 F9 FF
05 0C 13 1A 21 28 2F 36 3D 44 4B 52 59 60 67 6E 75 7C 83 8A 91 98 9F A6 AD B4 BB C2 C9 D0 D7 DE E5 EC F3 FA storage_stat.dat
06 0D 14 1B 22 29 30 37 3E 45 4C 53 5A 61 68 6F 76 7D 84 8B 92 99 A0 A7 AE B5 BC C3 CA D1 D8 DF E6 ED F4 FB sync
3. Configure the client
- Configure
cd /etc/fdfs
cp client.conf.sample client.conf
vim /etc/fdfs/client.conf
---------------------------------------
base_path=/data/client
tracker_server=192.168.30.100:22122
---------------------------------------
mkdir /data/client -pv
- Upload a file
fdfs_upload_file /etc/fdfs/client.conf moon.png
group1/M00/00/00/wKgeZl1aSQeAYsbfACCsQ9_Iauw878.png
- Inspect the file
fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/wKgeZl1aSQeAYsbfACCsQ9_Iauw878.png
source storage id: 0
source ip address: 192.168.30.102
file create timestamp: 2019-08-19 15:00:23
file size: 2141251
file crc32: 3754453740 (0xDFC86AEC)
ll /data/fastdfs/data/00/00/wKgeZl1aSQeAYsbfACCsQ9_Iauw878.png
-rw-r--r-- 1 root root 2141251 Aug 19 15:00 /data/fastdfs/data/00/00/wKgeZl1aSQeAYsbfACCsQ9_Iauw878.png
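The file ID returned by fdfs_upload_file can also be fetched back with the bundled client tool, as a quick round-trip check:

```shell
# Download through the tracker; the last argument is the local target path.
fdfs_download_file /etc/fdfs/client.conf \
  group1/M00/00/00/wKgeZl1aSQeAYsbfACCsQ9_Iauw878.png /tmp/moon.png
```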