Creating a Redis Cluster

After half a month of studying Redis, here is my summary of how to create a Redis cluster and perform the related node operations.

This walkthrough relies mainly on the redis-cli --cluster commands, so pick Redis 5.0 or later (versions before 5.0 do not support some of these cluster commands and depend on the Ruby-based redis-trib.rb tool instead).
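If Redis is already installed somewhere, you can confirm the version before going any further; a quick check, assuming the binaries are on the PATH:

# Both print the server/cli version string.
redis-server --version
redis-cli --version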

Preparation

Download Redis 5.0 or later

Extract the archive

tar -zxvf redis-5.0.5.tar.gz

Build Redis

cd redis-5.0.5

make

Create the required directories and copy the binaries

mkdir /usr/local/redis

cp src/redis-server /usr/local/bin

cp src/redis-benchmark /usr/local/bin

cp src/redis-check-rdb /usr/local/bin

cp src/redis-sentinel /usr/local/bin

cp src/redis-cli /usr/local/bin

cp redis.conf /usr/local/bin
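As an alternative to copying the binaries one by one, the Redis Makefile can install them for you; a sketch, run from the redis-5.0.5 source directory and assuming the default prefix /usr/local is acceptable:

# Installs redis-server, redis-cli, redis-benchmark, redis-sentinel and the
# check tools into /usr/local/bin.
make install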

mkdir -p /usr/local/cluster/7000

mkdir /usr/local/cluster/7000/data

cp redis-5.0.5/redis.conf  /usr/local/cluster/7000/
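The same directory layout is needed for all six nodes, so a small loop saves repetition; a sketch, assuming the six ports used in this article and the paths above:

# Create a data directory and a copy of the sample config for each node.
for port in 7000 7001 7002 8000 8001 8002; do
    mkdir -p /usr/local/cluster/${port}/data
    cp redis-5.0.5/redis.conf /usr/local/cluster/${port}/
done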

Edit /usr/local/cluster/7000/redis.conf

port 7000

cluster-enabled yes

cluster-config-file /usr/local/cluster/7000/data/node.conf

pidfile /var/run/redis_7000.pid

dir /usr/local/cluster/7000/data
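Put together, the per-node section of /usr/local/cluster/7000/redis.conf looks roughly like the sketch below; the last two directives are optional additions of mine, not part of the original steps:

port 7000
cluster-enabled yes
cluster-config-file /usr/local/cluster/7000/data/node.conf
pidfile /var/run/redis_7000.pid
dir /usr/local/cluster/7000/data
# Optional extras (my assumptions, not in the original steps):
# milliseconds before an unreachable node is considered failing
cluster-node-timeout 5000
# enable AOF persistence for each node
appendonly yes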

Create 7001, 7002, 8000, 8001 and 8002 in the same way; for example, for 7001:

cp /usr/local/cluster/7000/redis.conf /usr/local/cluster/7001

port 7001

cluster-enabled yes

cluster-config-file /usr/local/cluster/7001/data/node.conf

pidfile /var/run/redis_7001.pid

dir /usr/local/cluster/7001/data
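Rather than editing each copy by hand, the remaining configs can be generated from the 7000 one; a sketch, assuming the layout created above:

# Derive the 7001/7002/8000/8001/8002 configs from the 7000 config
# by replacing every occurrence of the port number.
for port in 7001 7002 8000 8001 8002; do
    sed "s/7000/${port}/g" /usr/local/cluster/7000/redis.conf \
        > /usr/local/cluster/${port}/redis.conf
done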

Create the cluster nodes

Start the 6 node instances

redis-server ./cluster/7000/redis.conf &

redis-server ./cluster/7001/redis.conf &

redis-server ./cluster/7002/redis.conf &

redis-server ./cluster/8000/redis.conf &

redis-server ./cluster/8001/redis.conf &

redis-server ./cluster/8002/redis.conf &
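The six invocations above can equivalently be written as one loop, followed by a quick process check to confirm everything is up; a sketch, using the absolute paths from earlier:

# Start every node, then list the running redis-server processes.
for port in 7000 7001 7002 8000 8001 8002; do
    redis-server /usr/local/cluster/${port}/redis.conf &
done
ps -ef | grep redis-server | grep -v grep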

Check the cluster state with redis-cli

[root@centos7 cluster]# redis-cli -c -h 127.0.0.1 -p 7000

127.0.0.1:7000> cluster info

cluster_state:fail

cluster_slots_assigned:0

cluster_slots_ok:0

cluster_slots_pfail:0

cluster_slots_fail:0

cluster_known_nodes:1

cluster_size:0

cluster_current_epoch:0

cluster_my_epoch:0

cluster_stats_messages_sent:0

cluster_stats_messages_received:0
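cluster_state is fail at this point because no hash slots have been assigned yet and each node only knows about itself. A quick loop over all six ports (a sketch, assuming they all listen on 127.0.0.1) shows the same state everywhere:

for port in 7000 7001 7002 8000 8001 8002; do
    echo "--- port ${port} ---"
    redis-cli -p ${port} cluster info | grep -E 'cluster_state|cluster_known_nodes'
done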

 

List the cluster nodes with redis-cli

127.0.0.1:7000> cluster nodes

fa96dbb2ea63f16f1bc8f8315386b183d8f77abf :7000@17000 myself,master - 0 0 0 connected

Join the nodes into a cluster

[root@centos7 cluster]# redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:8000 127.0.0.1:8001 127.0.0.1:8002  --cluster-replicas 1

>>> Performing hash slots allocation on 6 nodes...

Master[0] -> Slots 0 - 5460

Master[1] -> Slots 5461 - 10922

Master[2] -> Slots 10923 - 16383

Adding replica 127.0.0.1:8001 to 127.0.0.1:7000

Adding replica 127.0.0.1:8002 to 127.0.0.1:7001

Adding replica 127.0.0.1:8000 to 127.0.0.1:7002

>>> Trying to optimize slaves allocation for anti-affinity

[WARNING] Some slaves are in the same host as their master

M: fa96dbb2ea63f16f1bc8f8315386b183d8f77abf 127.0.0.1:7000

   slots:[0-5460] (5461 slots) master

M: c6f8c708f7be74280c15eb6e744c18c3fa697776 127.0.0.1:7001

   slots:[5461-10922] (5462 slots) master

M: 2205895bf212f0e6f02ea5d119c9ff5ddb4344e3 127.0.0.1:7002

   slots:[10923-16383] (5461 slots) master

S: f111aa110aa32a6ead47858131f9836747a8e5a4 127.0.0.1:8000

   replicates 2205895bf212f0e6f02ea5d119c9ff5ddb4344e3

S: 41359b81cd266c9faadb243bad979bbae0307a17 127.0.0.1:8001

   replicates fa96dbb2ea63f16f1bc8f8315386b183d8f77abf

S: 10c3ab97d14e90f688adaa3eb36165176d2996c4 127.0.0.1:8002

   replicates c6f8c708f7be74280c15eb6e744c18c3fa697776

Can I set the above configuration? (type 'yes' to accept):

Type: yes

>>> Nodes configuration updated

>>> Assign a different config epoch to each node

51117:M 20 Jul 2019 23:34:32.773 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH

51125:M 20 Jul 2019 23:34:32.773 # configEpoch set to 2 via CLUSTER SET-CONFIG-EPOCH

51133:M 20 Jul 2019 23:34:32.773 # configEpoch set to 3 via CLUSTER SET-CONFIG-EPOCH

51154:M 20 Jul 2019 23:34:32.774 # configEpoch set to 4 via CLUSTER SET-CONFIG-EPOCH

51162:M 20 Jul 2019 23:34:32.774 # configEpoch set to 5 via CLUSTER SET-CONFIG-EPOCH

51171:M 20 Jul 2019 23:34:32.774 # configEpoch set to 6 via CLUSTER SET-CONFIG-EPOCH

>>> Sending CLUSTER MEET messages to join the cluster

51117:M 20 Jul 2019 23:34:32.843 # IP address for this node updated to 127.0.0.1

51154:M 20 Jul 2019 23:34:32.868 # IP address for this node updated to 127.0.0.1

51162:M 20 Jul 2019 23:34:32.870 # IP address for this node updated to 127.0.0.1

51171:M 20 Jul 2019 23:34:32.871 # IP address for this node updated to 127.0.0.1

51125:M 20 Jul 2019 23:34:32.873 # IP address for this node updated to 127.0.0.1

51133:M 20 Jul 2019 23:34:32.877 # IP address for this node updated to 127.0.0.1

Waiting for the cluster to join

....51133:M 20 Jul 2019 23:34:37.793 # Cluster state changed: ok

 

51154:S 20 Jul 2019 23:34:37.810 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.

51154:S 20 Jul 2019 23:34:37.810 # Cluster state changed: ok

51162:S 20 Jul 2019 23:34:37.812 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.

51162:S 20 Jul 2019 23:34:37.812 # Cluster state changed: ok

51117:M 20 Jul 2019 23:34:37.815 # Cluster state changed: ok

51171:S 20 Jul 2019 23:34:37.816 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.

51171:S 20 Jul 2019 23:34:37.816 # Cluster state changed: ok

>>> Performing Cluster Check (using node 127.0.0.1:7000)

M: fa96dbb2ea63f16f1bc8f8315386b183d8f77abf 127.0.0.1:7000

   slots:[0-5460] (5461 slots) master

   1 additional replica(s)

M: c6f8c708f7be74280c15eb6e744c18c3fa697776 127.0.0.1:7001

   slots:[5461-10922] (5462 slots) master

   1 additional replica(s)

M: 2205895bf212f0e6f02ea5d119c9ff5ddb4344e3 127.0.0.1:7002

   slots:[10923-16383] (5461 slots) master

   1 additional replica(s)

S: 10c3ab97d14e90f688adaa3eb36165176d2996c4 127.0.0.1:8002

   slots: (0 slots) slave

   replicates c6f8c708f7be74280c15eb6e744c18c3fa697776

S: f111aa110aa32a6ead47858131f9836747a8e5a4 127.0.0.1:8000

   slots: (0 slots) slave

   replicates 2205895bf212f0e6f02ea5d119c9ff5ddb4344e3

S: 41359b81cd266c9faadb243bad979bbae0307a17 127.0.0.1:8001

   slots: (0 slots) slave

   replicates fa96dbb2ea63f16f1bc8f8315386b183d8f77abf

51125:M 20 Jul 2019 23:34:37.836 # Cluster state changed: ok

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

[root@centos7 cluster]# 51162:S 20 Jul 2019 23:34:38.138 * Connecting to MASTER 127.0.0.1:7000

51162:S 20 Jul 2019 23:34:38.138 * MASTER <-> REPLICA sync started

51162:S 20 Jul 2019 23:34:38.139 * Non blocking connect for SYNC fired the event.

51162:S 20 Jul 2019 23:34:38.139 * Master replied to PING, replication can continue...

51162:S 20 Jul 2019 23:34:38.139 * Trying a partial resynchronization (request 6e5530f70a818ca64770e0fbbebba6b551975566:1).

51117:M 20 Jul 2019 23:34:38.139 * Replica 127.0.0.1:8001 asks for synchronization

51117:M 20 Jul 2019 23:34:38.139 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '6e5530f70a818ca64770e0fbbebba6b551975566', my replication IDs are '512a0e3fc18ac6c82e17facdb2a87b53eff3428e' and '0000000000000000000000000000000000000000')

51117:M 20 Jul 2019 23:34:38.139 * Starting BGSAVE for SYNC with target: disk

51117:M 20 Jul 2019 23:34:38.140 * Background saving started by pid 51674

51162:S 20 Jul 2019 23:34:38.14
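With the cluster created, it is worth verifying that the slots are assigned and that keys are routed correctly; a sketch, where foo/bar is just an example key and value:

# cluster_state should now report "ok" and all 16384 slots assigned.
redis-cli -c -p 7000 cluster info | grep -E 'cluster_state|cluster_slots_assigned'

# With -c, redis-cli follows MOVED redirections, so the write lands on
# whichever master owns the key's hash slot.
redis-cli -c -p 7000 set foo bar
redis-cli -c -p 7000 get foo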
