1. Cluster startup error: master not discovered
[node-1] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2, node-3, node-4, node-5, node-6] to bootstrap a cluster: have discovered [{node-1}{hHLczlpgRrilhz4YckzPvw}{vJw3MlmtQ_WsRNkijTAJpg}{dm-storm01}{10.217.134.23:9300}{dilm}{ml.machine_memory=135353155584, rack=rack1, xpack.installed=true, box_type=hot, ml.max_open_jobs=20}]; discovery will continue using [10.217.134.32:9300, 10.217.134.31:9300, 10.217.134.30:9300, 10.217.109.155:9300, 10.217.109.156:9300] from hosts providers and [{node-1}{hHLczlpgRrilhz4YckzPvw}{vJw3MlmtQ_WsRNkijTAJpg}{dm-storm01}{10.217.134.23:9300}{dilm}{ml.machine_memory=135353155584, rack=rack1, xpack.installed=true, box_type=hot, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
^C[2020-12-23T11:05:19,610][INFO ][o.e.x.m.p.NativeController] [node-1] Native controller process has stopped - no new native processes can be started
[2020-12-23T11:05:19,611][INFO ][o.e.n.Node ] [node-1] stopping ...
discovery.seed_hosts: ["dm-storm01:9300","dm-storm02:9300","dm-storm03:9300","dm-storm04:9300","dm-storm05:9300","dm-storm06:9300"]
Cause: this was not a fresh cluster setup. The previous cluster and the current one used the same data directory, so the node refused to bootstrap. Clearing the data directory resolves it.
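On an rpm/deb install the data directory is typically /var/lib/elasticsearch; check `path.data` in elasticsearch.yml for the actual location. A hedged sketch of the cleanup, demonstrated against a temporary directory so it is safe to dry-run (the real path and the `systemctl` step are assumptions, not from the log above):

```shell
# Simulate a data dir holding stale cluster state from the old cluster.
# On a real node, set DATA_DIR to your path.data instead.
DATA_DIR="$(mktemp -d)"
mkdir -p "$DATA_DIR/nodes/0/_state"
touch "$DATA_DIR/nodes/0/_state/manifest-0.st"

# On a real node: stop the service first, e.g.
#   systemctl stop elasticsearch
rm -rf "${DATA_DIR:?}"/nodes      # ${VAR:?} aborts if the variable is empty, guarding against rm -rf /

ls -A "$DATA_DIR"                 # prints nothing: the directory is clean
```

After wiping the directory on every node, restart them and the cluster bootstraps fresh.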
2. Setting passwords fails after enabling X-Pack
Connection failure to: http://192.168.88.161:19200/_security/_authenticate?pretty failed: Connection refused (Connection refused)
Cause: the cluster has not finished starting yet. Wait a few minutes and retry.
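Instead of retrying by hand, the wait can be scripted. A hedged sketch of a generic retry wrapper (the host, port, and the commented-out `elasticsearch-setup-passwords` invocation are assumptions based on the error message above):

```shell
# Retry a command until it succeeds or the attempts run out.
wait_for() {
  tries=$1; shift
  while [ "$tries" -gt 0 ]; do
    "$@" && return 0              # command succeeded: stop retrying
    tries=$((tries - 1))
    sleep 5
  done
  return 1                        # gave up
}

# Typical use (commented out; host/port match this doc's setup):
# wait_for 60 curl -sf http://192.168.88.161:19200/ -o /dev/null \
#   && ./bin/elasticsearch-setup-passwords interactive
```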
3. User error: This account is currently not available
Elasticsearch refuses to run as root by default. When installed from the rpm package, an elasticsearch user and group are created automatically, but that account is not usable for login (its shell is set to a non-login shell). As root, adjust it:
vim /etc/passwd
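The fix is to give the account a real login shell. As root, `usermod -s /bin/bash elasticsearch` does it in one step; the sed sketch below shows the equivalent /etc/passwd edit, demonstrated on a throwaway copy so it is safe to run (the uid/gid and home fields are illustrative, not taken from a real system):

```shell
# Work on a copy of a passwd-style line rather than the real /etc/passwd.
pw_file="$(mktemp)"
echo 'elasticsearch:x:989:985:elasticsearch user:/home/elasticsearch:/sbin/nologin' > "$pw_file"

# Replace the trailing shell field /sbin/nologin with /bin/bash on the elasticsearch line.
sed -i 's|^\(elasticsearch:.*\):/sbin/nologin$|\1:/bin/bash|' "$pw_file"

cat "$pw_file"                    # the shell field is now /bin/bash
```

After the change, `su - elasticsearch` works and the service can be started as that user.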
4. Startup error: BindException[Cannot assign requested address]
Fix: make the following change in /etc/elasticsearch/elasticsearch.yml:
network.host: 0.0.0.0
BindTransportException[Failed to bind to 120.48.2.202:[9300-9400]]; nested: BindException[Cannot assign requested address];
Likely root cause: java.net.BindException: Cannot assign requested address
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:552)
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:336)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
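The root cause of the trace above: 120.48.2.202 is a public address that is NATed and not actually configured on any local network interface, so the kernel cannot bind to it. Beyond `network.host: 0.0.0.0`, Elasticsearch lets you split the bind and publish addresses, which is the usual pattern on cloud hosts (a config sketch; the IP is taken from the error above):

```yaml
# elasticsearch.yml sketch for a host whose public IP is NATed:
network.bind_host: 0.0.0.0            # listen on all local interfaces
network.publish_host: 120.48.2.202    # address advertised to other nodes
```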