Caused by: redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnection(JedisSlotBasedConnectionHandler.java:57)
at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:74)
at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:116)
at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:31)
at redis.clients.jedis.JedisCluster.hincrBy(JedisCluster.java:444)
at MonitorAdvice$$anonfun$main$1$$anonfun$apply$5$$anonfun$apply$6.apply(MonitorAdvice.scala:96)
at MonitorAdvice$$anonfun$main$1$$anonfun$apply$5$$anonfun$apply$6.apply(MonitorAdvice.scala:95)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.util.CompletionIterator.foreach(CompletionIterator.scala:26)
at MonitorAdvice$$anonfun$main$1$$anonfun$apply$5.apply(MonitorAdvice.scala:95)
at MonitorAdvice$$anonfun$main$1$$anonfun$apply$5.apply(MonitorAdvice.scala:93)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
... 3 more
I read a number of articles about this. Some say that since Jedis 3.0 cluster mode, there is no need to manually close the connection pool with jedis.close().
Their explanation of the cause: we were using a Redis 3.0 cluster, and calling JedisCluster.close() closed the cluster's connections. JedisCluster uses connection pooling internally and automatically returns each Jedis connection after use, so there is no need to close anything yourself. If you call close() and then invoke JedisCluster APIs again, you get exactly the error above.
(After reviewing our code, however, this was not the case here.)
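The close-then-reuse mistake described above can be sketched with a stand-in client (the `ClusterClient` class below is hypothetical, since Jedis and a live cluster are not assumed here); the real `JedisCluster` behaves analogously, throwing `JedisNoReachableClusterNodeException` once its internal pools have been closed:

```java
// Hypothetical stand-in mimicking JedisCluster's lifecycle: the real client
// pools connections internally and returns each one after a command completes,
// so the only close() you should ever call is at application shutdown.
class ClusterClient {
    private boolean closed = false;

    long hincrBy(String key, String field, long value) {
        // JedisCluster would throw JedisNoReachableClusterNodeException here;
        // we mimic it with an IllegalStateException.
        if (closed) throw new IllegalStateException("No reachable node in cluster");
        return value; // pretend the command succeeded
    }

    void close() { closed = true; }
}

public class ClusterLifecycleDemo {
    public static void main(String[] args) {
        ClusterClient cluster = new ClusterClient();
        cluster.hincrBy("counters", "requests", 1L); // fine: the pool releases the connection itself

        cluster.close(); // wrong place: this shuts the internal pools down
        try {
            cluster.hincrBy("counters", "requests", 1L); // any later call now fails
        } catch (IllegalStateException e) {
            System.out.println("reuse after close failed: " + e.getMessage());
        }
    }
}
```

The takeaway: with `JedisCluster`, close the client once when the application exits, never per operation.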
Others say the cluster state itself may already be fail, and suggest logging in to check it.
The cluster state was indeed fail, so I re-ran the cluster creation command:
ruby redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
D:\Redis\Redis6379>ruby redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
>>> Creating cluster
Connecting to node 127.0.0.1:6379: OK
Connecting to node 127.0.0.1:6380: OK
Connecting to node 127.0.0.1:6381: OK
Connecting to node 127.0.0.1:6382: OK
Connecting to node 127.0.0.1:6383: OK
Connecting to node 127.0.0.1:6384: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:6379
127.0.0.1:6380
127.0.0.1:6381
Adding replica 127.0.0.1:6382 to 127.0.0.1:6379
Adding replica 127.0.0.1:6383 to 127.0.0.1:6380
Adding replica 127.0.0.1:6384 to 127.0.0.1:6381
M: eb8145516f00fc303501a59264f732ad4208d28c 127.0.0.1:6379
slots:0-5460 (5461 slots) master
M: 9d6f93528c37d8cf6b63de492b3bc5632d790795 127.0.0.1:6380
slots:5461-10922 (5462 slots) master
M: e4cbba17a02c11a5f7463d47f3c6d4ab7eef9ffc 127.0.0.1:6381
slots:10923-16383 (5461 slots) master
S: c462011b08451507c95d8a63c5b03e843878ed89 127.0.0.1:6382
replicates eb8145516f00fc303501a59264f732ad4208d28c
S: bb0e32ca264e521a870cdcf696c923051659ce26 127.0.0.1:6383
replicates 9d6f93528c37d8cf6b63de492b3bc5632d790795
S: 3143deec10fdd151abf3218c858c47f44c9e052c 127.0.0.1:6384
replicates e4cbba17a02c11a5f7463d47f3c6d4ab7eef9ffc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: eb8145516f00fc303501a59264f732ad4208d28c 127.0.0.1:6379
slots:0-5460 (5461 slots) master
M: 9d6f93528c37d8cf6b63de492b3bc5632d790795 127.0.0.1:6380
slots:5461-10922 (5462 slots) master
M: e4cbba17a02c11a5f7463d47f3c6d4ab7eef9ffc 127.0.0.1:6381
slots:10923-16383 (5461 slots) master
M: c462011b08451507c95d8a63c5b03e843878ed89 127.0.0.1:6382
slots: (0 slots) master
replicates eb8145516f00fc303501a59264f732ad4208d28c
M: bb0e32ca264e521a870cdcf696c923051659ce26 127.0.0.1:6383
slots: (0 slots) master
replicates 9d6f93528c37d8cf6b63de492b3bc5632d790795
M: 3143deec10fdd151abf3218c858c47f44c9e052c 127.0.0.1:6384
slots: (0 slots) master
replicates e4cbba17a02c11a5f7463d47f3c6d4ab7eef9ffc
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Then check the cluster state again:
D:\Redis\Redis6379>.\redis-cli.exe -c -p 6379
127.0.0.1:6379> CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:34
cluster_stats_messages_received:34
The code now runs normally.
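As a defensive measure, an application can parse the CLUSTER INFO reply shown above and refuse to run jobs while `cluster_state` is not `ok`. A minimal sketch (the `ClusterInfoCheck` class and its helpers are my own, not part of Jedis; with Jedis the raw reply string could come from `jedis.clusterInfo()`):

```java
import java.util.HashMap;
import java.util.Map;

public class ClusterInfoCheck {
    // Parses the "key:value" lines of a CLUSTER INFO reply into a map.
    static Map<String, String> parse(String reply) {
        Map<String, String> fields = new HashMap<>();
        for (String line : reply.split("\\r?\\n")) {
            int sep = line.indexOf(':');
            if (sep > 0) {
                fields.put(line.substring(0, sep).trim(), line.substring(sep + 1).trim());
            }
        }
        return fields;
    }

    // True only when the cluster reports a healthy state with full slot coverage.
    static boolean isHealthy(Map<String, String> info) {
        return "ok".equals(info.get("cluster_state"))
                && "16384".equals(info.get("cluster_slots_assigned"))
                && "0".equals(info.get("cluster_slots_fail"));
    }

    public static void main(String[] args) {
        // Sample reply matching the healthy output above.
        String reply = "cluster_state:ok\n"
                + "cluster_slots_assigned:16384\n"
                + "cluster_slots_ok:16384\n"
                + "cluster_slots_fail:0\n"
                + "cluster_known_nodes:6\n"
                + "cluster_size:3\n";
        System.out.println("healthy = " + isHealthy(parse(reply)));
    }
}
```

Failing fast on an unhealthy cluster gives a clear error up front instead of a `JedisNoReachableClusterNodeException` buried in a Spark task's stack trace.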