1. Single-node ES fails to start
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Fix: edit config/elasticsearch.yml and set:
cluster.initial_master_nodes: ["node-1"]
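For reference, a minimal single-node sketch of the discovery-related settings in config/elasticsearch.yml (node-1 and 192.168.1.11 are the example values from the logs below; adjust them to your host). Every entry in cluster.initial_master_nodes must match a node.name, which is exactly what goes wrong in the next item:

node.name: node-1
network.host: 192.168.1.11
discovery.seed_hosts: ["192.168.1.11"]
cluster.initial_master_nodes: ["node-1"]

Alternatively, for a truly standalone node, 7.x offers discovery.type: single-node, which forms a one-node cluster and skips this bootstrap configuration entirely.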
2. Startup still reports errors
[2021-09-15T06:03:22,693][INFO ][o.e.b.BootstrapChecks ] [localhost.localdomain] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-09-15T06:03:22,706][INFO ][o.e.c.c.ClusterBootstrapService] [localhost.localdomain] skipping cluster bootstrapping as local node does not match bootstrap requirements: [node-1]
[2021-09-15T06:03:32,711][WARN ][o.e.c.c.ClusterFormationFailureHelper] [localhost.localdomain] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1] to bootstrap a cluster: have discovered [{localhost.localdomain}{ND-TbVinSv6p0ifkd-MrkQ}{3fZ_E2WxS_2c66xXb2pgCA}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, 127.0.0.1:9305, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304, [::1]:9305] from hosts providers and [{localhost.localdomain}{ND-TbVinSv6p0ifkd-MrkQ}{3fZ_E2WxS_2c66xXb2pgCA}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2021-09-15T06:03:52,718][WARN ][o.e.n.Node ] [localhost.localdomain] timed out while waiting for initial discovery state - timeout: 30s
[2021-09-15T06:03:52,724][INFO ][o.e.h.AbstractHttpServerTransport] [localhost.localdomain] publish_address {192.168.1.11:9200}, bound_addresses {[::]:9200}
[2021-09-15T06:03:52,724][INFO ][o.e.n.Node ] [localhost.localdomain] started
(the ClusterFormationFailureHelper "master not discovered yet" WARN then repeats every 10 seconds)
Fix: the log shows that bootstrapping requires a master-eligible node named node-1, but this node still has the default name (localhost.localdomain), so it can never satisfy the requirement. Edit config/elasticsearch.yml so the node name matches the entry in cluster.initial_master_nodes:
node.name: node-1
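After restarting, it is worth confirming that the node elected itself master. A quick check against the publish address from the log above:

curl -s 'http://192.168.1.11:9200/_cat/nodes?v'
curl -s 'http://192.168.1.11:9200/_cluster/health?pretty'

A healthy single-node cluster reports status green, or yellow if some index still has replicas configured, which leads to the next item.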
3. Why index health shows yellow: replica shards could not be allocated
For example, creating an index on a single-node cluster with the default settings (one replica per primary) leaves every replica shard unassigned, because there is no second node to place it on, so the index health stays yellow; a fix is sketched below.
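On a single node, the usual fix is to drop the replica count to 0 so nothing is left unassigned. A sketch, where my_index is a placeholder index name:

curl -X PUT 'http://192.168.1.11:9200/my_index/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'

Afterwards the cluster health for that index should return to green.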
4. A second node fails to join the cluster
org.elasticsearch.transport.RemoteTransportException: [node-1][192.168.1.11:9300][internal:cluster/coordination/join]
Caused by: java.lang.IllegalArgumentException: can't add node {node-2}{ND-TbVinSv6p0ifkd-MrkQ}{L8A-717KT6ap199aygzBLg}{192.168.1.12}{192.168.1.12:9300}{dil}{ml.machine_memory=1907818496, ml.max_open_jobs=20, xpack.installed=true}, found existing node {node-1}{ND-TbVinSv6p0ifkd-MrkQ}{wvO5he8STo2MbVXVDL9big}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20} with the same id but is a different node instance
at org.elasticsearch.cluster.node.DiscoveryNodes$Builder.add(DiscoveryNodes.java:612) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.coordination.JoinTaskExecutor.execute(JoinTaskExecutor.java:147) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.coordination.JoinHelper$1.execute(JoinHelper.java:119) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) ~[elasticsearch-7.6.2.jar:7.6.2]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) ~[elasticsearch-7.6.2.jar:7.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) [?:?]
[2021-09-15T07:30:45,801][INFO ][o.e.c.c.JoinHelper ] [node-2] failed to join {node-1}{ND-TbVinSv6p0ifkd-MrkQ}{wvO5he8STo2MbVXVDL9big}{192.168.1.11}{192.168.1.11:9300}{dilm}{ml.machine_memory=1907818496, ml.max_open_jobs=20, xpack.installed=true} with JoinRequest{sourceNode={node-2}{ND-TbVinSv6p0ifkd-MrkQ}{L8A-717KT6ap199aygzBLg}{192.168.1.12}{192.168.1.12:9300}{dil}{ml.machine_memory=1907818496, xpack.installed=true, ml.max_open_jobs=20}, optionalJoin=Optional.empty}
Fix: the second node was set up by copying the first node's installation directory, and the data directory came along with it. That directory contains the original node's identity, so both nodes present the same node id ({ND-TbVinSv6p0ifkd-MrkQ} in the log above) and the join is rejected. Delete the contents of the data directory on the copied node and restart it.
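A sketch of the cleanup on the copied node; the path below assumes a default tar.gz layout where path.data is $ES_HOME/data, so check your elasticsearch.yml first:

# stop elasticsearch on the copied node first
rm -rf /usr/local/elasticsearch/data/*   # assumed data path; verify path.data before deleting
# then start elasticsearch again; the node generates a fresh node id and joins cleanly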