Elasticsearch Notes

1. Single-node Elasticsearch fails to start
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Fix: edit elasticsearch.yml and set:

cluster.initial_master_nodes: ["node-1"]
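In ES 7+, cluster.initial_master_nodes lists the node names (not hostnames) that may take part in the very first master election, so the value must match this node's node.name (which is exactly what causes the next error). If the machine will only ever run one node, a documented alternative is single-node discovery, which skips cluster bootstrapping entirely; a minimal sketch (this setting cannot be combined with cluster.initial_master_nodes):

# elasticsearch.yml: run as a standalone node, no master election needed
discovery.type: single-node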

2. Startup continues to fail
[2021-09-15T06:03:22,693][INFO ][o.e.b.BootstrapChecks    ] [localhost.localdomain] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-09-15T06:03:22,706][INFO ][o.e.c.c.ClusterBootstrapService] [localhost.localdomain] skipping cluster bootstrapping as local node does not match bootstrap requirements: [node-1]
[2021-09-15T06:03:32,711][WARN ][o.e.c.c.ClusterFormationFailureHelper] [localhost.localdomain] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1] to bootstrap a cluster: have discovered [{localhost.localdomain}{ND-TbVinSv6p0ifkd-MrkQ}{3fZ_E2WxS_2c66xXb2pgCA}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{localhost.localdomain}{ND-TbVinSv6p0ifkd-MrkQ}{3fZ_E2WxS_2c66xXb2pgCA}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2021-09-15T06:03:52,718][WARN ][o.e.n.Node               ] [localhost.localdomain] timed out while waiting for initial discovery state - timeout: 30s
[2021-09-15T06:03:52,724][INFO ][o.e.h.AbstractHttpServerTransport] [localhost.localdomain] publish_address {192.168.1.11:9200}, bound_addresses {[::]:9200}
[2021-09-15T06:03:52,724][INFO ][o.e.n.Node               ] [localhost.localdomain] started
(the master-not-discovered warning keeps repeating every ~10 seconds; duplicate entries omitted)

Fix: the key line is the one from ClusterBootstrapService: the node name defaults to the hostname (localhost.localdomain), which does not match the node-1 listed in cluster.initial_master_nodes, so the node refuses to bootstrap a cluster. Edit elasticsearch.yml and set:

node.name: node-1
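Putting the two fixes together, a minimal sketch of the relevant part of elasticsearch.yml for this setup (192.168.1.11 is taken from the publish_address in the logs above; adjust for your host):

# elasticsearch.yml: node.name must match an entry in cluster.initial_master_nodes
node.name: node-1
network.host: 192.168.1.11
cluster.initial_master_nodes: ["node-1"]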

3. Why index health shows yellow: replica shards were not allocated

For example, on a single-node cluster a newly created index defaults to one replica per primary shard, but a replica is never allocated to the same node as its primary, so the replicas stay unassigned and the index stays yellow; see the sketch below.
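To confirm, query the health and index APIs; and if the single node is permanent, dropping the replica count to zero turns the index green (a sketch; my-index is a placeholder index name):

# overall cluster health and per-index status
curl -X GET "localhost:9200/_cluster/health?pretty"
curl -X GET "localhost:9200/_cat/indices?v"

# on a permanent single-node cluster, remove replicas so health goes green
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'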

4. A second node, created by copying the first node's installation, fails to join the cluster:
org.elasticsearch.transport.RemoteTransportException: [node-1][192.168.1.11:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalArgumentException: can't add node {node-2}{ND-TbVinSv6p0ifkd-MrkQ}{L8A-717KT6ap199aygzBLg}{192.168.1.12}{192.168.1.12:9300}{dil}{ml.machine_memory=1907818496, ml.max_open_jobs=20, xpack.installed=true}, found existing node {node-1}{ND-TbVinSv6p0ifkd-MrkQ}{wvO5he8STo2MbVXVDL9big}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20} with the same id but is a different node instance
	at org.elasticsearch.cluster.node.DiscoveryNodes$Builder.add(DiscoveryNodes.java:612) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.coordination.JoinTaskExecutor.execute(JoinTaskExecutor.java:147) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.coordination.JoinHelper$1.execute(JoinHelper.java:119) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.6.2.jar:7.6.2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-09-15T07:30:45,801][INFO ][o.e.c.c.JoinHelper       ] [node-2] failed to join {node-1}{ND-TbVinSv6p0ifkd-MrkQ}{wvO5he8STo2MbVXVDL9big}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, ml.max_open_jobs=20, xpack.installed=true} with JoinRequest{sourceNode={node-2}{ND-TbVinSv6p0ifkd-MrkQ}{L8A-717KT6ap199aygzBLg}{192.168.1.12}{192.168.1.12:9300}{dil}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20}, optionalJoin=Optional.empty}

Fix: the second node was created by copying the first node's entire directory, and the data directory came along with it. That directory already held cluster state, including the node ID, which is why both nodes in the error above report the same id (ND-TbVinSv6p0ifkd-MrkQ). Delete everything under data on the cloned node and restart it; a sketch follows.
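A sketch of the cleanup on the cloned node, assuming a tarball install run from the Elasticsearch home directory (package installs keep data under /var/lib/elasticsearch instead):

# on the cloned node (node-2), from the Elasticsearch install directory
pkill -f org.elasticsearch.bootstrap.Elasticsearch   # stop the running node
rm -rf ./data/*                                      # drop the copied node ID and cluster state
./bin/elasticsearch -d                               # restart; a fresh node ID is generated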
