Elasticsearch Usage Issues: A Collection of Problems and Fixes

In a three-node Elasticsearch cluster, nodes kept dropping offline and shards were left unassigned. The logs show inter-node communication timeouts and failures to obtain shard locks, which caused data synchronization and shard creation to fail. The fix is to restart the cluster when no real-time data is being written, or, when real-time writes are present, to reassign the replica shards through the cluster reroute API. A common problem with loading the main class of the Elasticsearch source code in Eclipse, and its fix, is also covered.

1 Elasticsearch cluster nodes dropping offline and shards going unassigned


Project scenario

The cluster consists of three machines, and data is pulled from a third-party source into the cluster every half hour. There are roughly 100 indices, and the data volume is modest: each index holds less than 10 GB. Recently, nodes have repeatedly dropped offline, or shards have stopped working (gone unassigned).


Problem description

[2021-09-02T05:46:19,297][WARN ][o.e.d.z.UnicastZenPing   ] [node-50] failed to send ping to [{node-37}{kUAw2H4ATFiNf8aElLVTJw}{r_lZpbo2RLi7trJs57gjtw}{172.169.9.37}{172.169.9.37:9300}]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node-37][172.169.9.37:9300][internal:discovery/zen/unicast] request_id [156501] timed out after [3750ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:908) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.2.jar:5.2.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]

[2021-09-02T10:16:36,077][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-50] [hhl][2]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [hhl][2]: obtaining shard lock timed out after 5000ms
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.2.jar:5.2.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2021-09-02T10:16:39,535][WARN ][o.e.i.c.IndicesClusterStateService] [node-50] [[hhl][4]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.2.jar:5.2.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [hhl][4]: obtaining shard lock timed out after 5000ms
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.2.jar:5.2.2]
    ... 15 more

[2021-09-02T10:17:35,300][WARN ][o.e.i.c.IndicesClusterStateService] [node-50] [[otacomment1][4]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.2.jar:5.2.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [otacomment1][4]: obtaining shard lock timed out after 5000ms
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.2.jar:5.2.2]
    ... 15 more
[2021-09-02T10:17:40,301][WARN ][o.e.i.c.IndicesClusterStateService] [node-50] [[otacomment1][1]] marking and sending shard failed due to [failed to create shard]
java.io.IOException: failed to obtain in-memory shard lock
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:367) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:476) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:146) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:542) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:519) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:204) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.callClusterStateAppliers(ClusterService.java:856) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.publishAndApplyChanges(ClusterService.java:810) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService.runTasks(ClusterService.java:628) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:1112) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.2.2.jar:5.2.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.elasticsearch.env.ShardLockObtainFailedException: [otacomment1][1]: obtaining shard lock timed out after 5000ms
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:297) ~[elasticsearch-5.2.2.jar:5.2.2]
    ... 15 more
[2021-09-02T10:17:45,310][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-50] [otacomment1][4]: failed to obtain shard lock
org.elasticsearch.env.ShardLockObtainFailedException: [otacomment1][4]: obtaining shard lock timed out after 5000ms
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:712) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:631) ~[elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:383) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.listStoreMetaData(TransportNodesListShardStoreMetaData.java:153) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:112) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData.nodeOperation(TransportNodesListShardStoreMetaData.java:64) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:145) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:270) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:266) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1488) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.2.jar:5.2.2]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.2.jar:5.2.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2021-09-02T10:17:45,310][INFO ][o.e.i.s.TransportNodesListShardStoreMetaData] [node-50] [otacomment1][1]: failed to obtain shard lock
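
Before applying either of the fixes below, it helps to confirm which shards are unassigned and why. This is a minimal diagnostic sketch using the standard cat and allocation-explain APIs available in 5.x; the index and shard in the explain request are taken from the logs above and should be replaced as needed:

# Overall cluster health: status and number of unassigned shards
curl -s 'localhost:9200/_cat/health?v'

# List only the shards that are currently unassigned, with the reason
curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

# Ask the cluster why a specific shard cannot be allocated
curl -s -XGET 'localhost:9200/_cluster/allocation/explain' -H 'Content-Type: application/json' -d '{
    "index": "hhl",
    "shard": 2,
    "primary": false
}'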


Solution

1. No real-time data

If the data is only synchronized in periodic batches and there are no real-time writes, the cluster can simply be shut down and restarted; a sketch of a safe restart procedure follows below.
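
When a full restart is acceptable, disabling shard allocation before stopping the nodes avoids a storm of shard relocations while they rejoin. This is a minimal sketch of the standard restart procedure, assuming a 5.x cluster with default settings:

# 1. Disable shard allocation before stopping the nodes
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
    "transient": { "cluster.routing.allocation.enable": "none" }
}'

# 2. Restart the Elasticsearch process on each node
#    (e.g. systemctl restart elasticsearch, or however the nodes are managed)

# 3. Once every node has rejoined the cluster, re-enable allocation
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
    "transient": { "cluster.routing.allocation.enable": "all" }
}'

# 4. Watch recovery until the cluster returns to green
curl -s 'localhost:9200/_cat/health?v'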

2. Real-time data is being written

For Elasticsearch 5.x and later, reassign an unassigned replica shard through the cluster reroute API. (In 5.0 the older allocate command, which accepted allow_primary, was split into allocate_replica, allocate_stale_primary, and allocate_empty_primary, so allow_primary is no longer valid here.)

curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
    "commands": [
        {
            "allocate_replica": {
                "index": "'$INDEX'",
                "shard": '$SHARD',
                "node": "'$NODE'"
            }
        }
    ]
}'
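
As a usage sketch (not part of the original post), the same command can be looped over every unassigned replica reported by the cat API. $NODE is a placeholder for the target node and must not already hold a copy of the shard being assigned:

NODE="node-50"    # placeholder: pick a data node that does not already hold the shard
curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state' | \
  awk '$3 == "r" && $4 == "UNASSIGNED" {print $1, $2}' | \
  while read INDEX SHARD; do
    curl -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
        "commands": [
            { "allocate_replica": { "index": "'$INDEX'", "shard": '$SHARD', "node": "'$NODE'" } }
        ]
    }'
  done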

2 The Elasticsearch source code in Eclipse reports that the main class cannot be loaded

Solution

Locate the Elasticsearch class that contains the main method and add the cli module to the project's build path.

 
