Symptom:
[root@node02 redis]# redis-cli --cluster create $ip:$port --cluster-replicas 1 -a redis
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
[ERR] Node 192.168.100.214:16380 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
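This error means the target node is not a blank slate: either its cluster config file already records other nodes, or database 0 contains keys. A minimal cleanup sketch, using the host, ports, and password from this post (adjust for your deployment); the commands are printed rather than executed so you can review them first:

```shell
#!/bin/sh
# Print the cleanup commands for a node that reports "is not empty".
# HOST/PORT/PASS are the values from this post -- adjust before running.
HOST=192.168.100.214
PASS=redis
for PORT in 16379 16380; do
  # FLUSHALL empties the keyspace; CLUSTER RESET HARD forgets all known nodes
  echo "redis-cli -h $HOST -p $PORT -a $PASS FLUSHALL"
  echo "redis-cli -h $HOST -p $PORT -a $PASS CLUSTER RESET HARD"
done
```

Pipe the output to `sh` once it looks right for your setup.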
Attempts
1. https://blog.csdn.net/weixin_44829930/article/details/117558512
This did not solve my problem; the error still pointed at the instance on machine A.
2. Deleted the original install directory, downloaded redis-5.0.14 and recompiled. Still failing.
3. Restarted Redis on the failing machine; no RDB snapshot was generated, so it looked like Redis was not starting up properly.
4. Connected to the failing machine to test:
127.0.0.1:16379> set test 1
(error) CLUSTERDOWN Hash slot not served
or
[ERR] Not all 16384 slots are covered by nodes.
Repair it with the following command:
./bin/redis-cli --cluster fix $ip:$port -a $passwd
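`--cluster fix` assigns any slots that no master currently owns; `--cluster check` reports coverage without changing anything, so it is worth running first. A sketch with this post's addresses (commands printed, not executed):

```shell
#!/bin/sh
# Inspect slot coverage first, then repair; SEED/PASS are this post's values.
SEED=192.168.100.205:16379
PASS=redis
echo "redis-cli -a $PASS --cluster check $SEED"  # read-only: reports uncovered slots
echo "redis-cli -a $PASS --cluster fix $SEED"    # assigns unowned slots to masters
```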
My solution
Added two new instances on a healthy machine, planning to bring the failing machine back in with add-node:
./bin/redis-cli --cluster add-node $ip:$port $ip:$port -a $passwd   ### the second ip:port must be a node already in the cluster
[ERR] Node 192.168.100.214:16379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
Checked the node config file and the RDB/AOF persistence files, deleted them all and retried. Still no luck.
Connected to the local Redis instance that had failed to join and looked at its cluster info:
# cluster mode is enabled; a node that has not joined a cluster is its own master
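Deleting the RDB/AOF files alone is not enough, because the cluster membership lives in a separate file: the one named by `cluster-config-file` in redis.conf (`nodes.conf` by default, often `nodes-<port>.conf`). A sketch of the full wipe for one instance; the `/opt/redis/...` paths are hypothetical and must be replaced with the ones from your own redis.conf:

```shell
#!/bin/sh
# Full reset of one instance; file paths are hypothetical -- take the real
# ones from redis.conf (dir, dbfilename, appendfilename, cluster-config-file).
PORT=16379
WIPE="redis-cli -p $PORT -a redis SHUTDOWN NOSAVE
rm -f /opt/redis/data/nodes-$PORT.conf
rm -f /opt/redis/data/dump.rdb /opt/redis/data/appendonly.aof
redis-server /opt/redis/conf/redis-$PORT.conf"
printf '%s\n' "$WIPE"   # review, then run each line by hand
```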
127.0.0.1:16379> cluster nodes
74c5394b9d43edb704e22f268dbf24a6e154d015 :16379@26379 myself,master - 0 0 0 connected
[ERR] Not all 16384 slots are covered by nodes.  # so keep running --cluster fix
Rejoining: run the add-node command from a node that is already in the cluster.
[root@node02 redis]# redis-cli -a redis --cluster add-node 192.168.100.214:16380 192.168.100.205:16379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 192.168.100.214:16380 to cluster 192.168.100.205:16379
>>> Performing Cluster Check (using node 192.168.100.205:16379)
S: ed52e01bf6cb21f170ea621eb7800057633a67aa 192.168.100.205:16379
slots: (0 slots) slave
replicates c6ea961d6b8d5ebad01b373f99fcb25f2f920c06
M: ebe4e02cc3944d031f4c4efdcdd121c54c18b142 192.168.100.205:16381
slots:[0-5460] (5461 slots) master
S: e2c239b7e9e100199453e66519b9aaf19a91c3c3 192.168.100.212:16380
slots: (0 slots) slave
replicates c6ea961d6b8d5ebad01b373f99fcb25f2f920c06
M: c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 192.168.100.205:16380
slots:[10923-16383] (5461 slots) master
2 additional replica(s)
M: 0e4843b90d20855cb15bcf124c15af6098cef3da 192.168.100.212:16379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 4cec9aee85b79473998594f700782d3411e3e631 192.168.100.205:16382
slots: (0 slots) slave
replicates 0e4843b90d20855cb15bcf124c15af6098cef3da
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.100.214:16380 to make it join the cluster.
[OK] New node added correctly.
Woo-hoo, liftoff!
127.0.0.1:16379> cluster nodes
aa5e2961b31573c5d3e868f918ae5b2f016085d1 192.168.100.214:16380@26380 slave c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 0 1653708612860 2 connected
ebe4e02cc3944d031f4c4efdcdd121c54c18b142 192.168.100.205:16381@26381 master - 0 1653708612000 7 connected 0-5460
e2c239b7e9e100199453e66519b9aaf19a91c3c3 192.168.100.212:16380@26380 slave c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 0 1653708614882 4 connected
c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 192.168.100.205:16380@26380 master - 0 1653708610845 2 connected 10923-16383
ed52e01bf6cb21f170ea621eb7800057633a67aa 192.168.100.205:16379@26379 myself,slave c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 0 1653708611000 1 connected
0e4843b90d20855cb15bcf124c15af6098cef3da 192.168.100.212:16379@26379 master - 0 1653708613871 3 connected 5461-10922
4cec9aee85b79473998594f700782d3411e3e631 192.168.100.205:16382@26382 slave 0e4843b90d20855cb15bcf124c15af6098cef3da 0 1653708613565 6 connected
But 192.168.100.214:16380@26380 is a slave. Why?
Could it be because the second address I passed was a slave inside the cluster? Quickly retried with a master's address:
127.0.0.1:16380> cluster nodes
ebe4e02cc3944d031f4c4efdcdd121c54c18b142 192.168.100.205:16381@26381 master - 0 1653709716159 7 connected 0-5460
0e4843b90d20855cb15bcf124c15af6098cef3da 192.168.100.212:16379@26379 master - 0 1653709720182 3 connected 5461-10922
d4978f3a5b1b27f48d487e0a9c29ac4b77f107d7 192.168.100.214:16380@26380 myself,master - 0 1653709702000 0 connected
e2c239b7e9e100199453e66519b9aaf19a91c3c3 192.168.100.212:16380@26380 slave c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 0 1653709722205 2 connected
ed52e01bf6cb21f170ea621eb7800057633a67aa 192.168.100.205:16379@26379 slave c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 0 1653709719171 2 connected
4cec9aee85b79473998594f700782d3411e3e631 192.168.100.205:16382@26382 slave 0e4843b90d20855cb15bcf124c15af6098cef3da 0 1653709715146 3 connected
c6ea961d6b8d5ebad01b373f99fcb25f2f920c06 192.168.100.205:16380@26380 master - 0 1653709721197 2 connected 10923-16383
192.168.100.214:16380@26380 myself,master
Tested several more times with the same result; I honestly don't know what to say...
Fine: an accidental discovery, but it works. Calling it a day!
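As far as I understand it, a plain `--cluster add-node` only sends CLUSTER MEET, so the new node joins as an empty master regardless of whether the seed address is a master or a slave; the earlier run where it showed up as a slave was probably leftover replication state on that instance. To pick the role deliberately, redis-cli has `--cluster-slave` (and optionally `--cluster-master-id`). A sketch using the addresses and the master ID from the output above:

```shell
#!/bin/sh
# Join 214:16380 explicitly as a replica of a chosen master.
# NEW/SEED/MASTER_ID come from this post's cluster-nodes output.
NEW=192.168.100.214:16380
SEED=192.168.100.205:16379
MASTER_ID=c6ea961d6b8d5ebad01b373f99fcb25f2f920c06
echo "redis-cli -a redis --cluster add-node $NEW $SEED --cluster-slave --cluster-master-id $MASTER_ID"
```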
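Whatever the root cause was, it is worth closing the loop: the SET that failed with CLUSTERDOWN at the start should now succeed. A final sanity-check sketch (commands printed, not executed; `-c` makes redis-cli follow MOVED redirects to the slot owner):

```shell
#!/bin/sh
# Final health pass on the node that used to fail; HOST is from this post.
HOST=192.168.100.214
echo "redis-cli -a redis -c -h $HOST -p 16380 cluster info"   # expect cluster_state:ok
echo "redis-cli -a redis -c -h $HOST -p 16380 set test 1"     # should no longer say CLUSTERDOWN
```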