Problems encountered while building an ELK cluster on CentOS 7, and how to fix them

The following is a record of problems I ran into while building an ELK cluster on CentOS 7 virtual machines; it may also be useful in virtualized environments such as Alibaba Cloud.
For the installation steps themselves, see the series at https://www.cnblogs.com/bixiaoyu/p/9460554.html, whose author covers the elasticsearch-6.x line in great detail. I installed elasticsearch-7.x, so I hit some additional issues along the way.

Installed versions:
elasticsearch-7.2.0-linux-x86_64.tar.gz
logstash-7.2.0.tar.gz
kibana-7.2.0-linux-x86_64.tar.gz
filebeat-7.2.0-linux-x86_64.tar.gz
zookeeper-3.4.12.tar.gz
kafka_2.11-2.0.0.tgz

1. The es cluster does not form

[root@localhost soft]# curl http://localhost:9200/_cat/master?v
{"error":{"root_cause":[{"type":"master_not_discovered_exception","reason":null}],"type":"master_not_discovered_exception","reason":null},"status":503}[root@localhost soft]# 
[root@localhost soft]# 

Cause: elasticsearch.yml was still written in the es6.x configuration style.
Fix: switch to the es7.x discovery settings (discovery.seed_hosts and cluster.initial_master_nodes).

Master node:

[root@localhost config]# cat elasticsearch.yml
cluster.name: es_data
node.name: es_node1
network.host: 0.0.0.0
http.port: 9200
node.master: true
node.data: true
discovery.seed_hosts:
   -  192.168.41.137:9300
   -  192.168.41.135:9300

cluster.initial_master_nodes:
   - 192.168.41.135:9300

http.cors.enabled: true
http.cors.allow-origin: "*"
[root@localhost config]# 

Slave (data-only) node:

[root@ecs-trf-k8s-137 config]# cat elasticsearch.yml
cluster.name: es_data
node.name: es_node2
network.host: 0.0.0.0
http.port: 9200
node.master: false
node.data: true
discovery.seed_hosts:
   -  192.168.41.137:9300
   -  192.168.41.135:9300

cluster.initial_master_nodes:
   - 192.168.41.135:9300

http.cors.enabled: true
http.cors.allow-origin: "*"
[root@ecs-trf-k8s-137 config]# 

For the full list of configuration options, see the official documentation.

[root@localhost config]# curl http://localhost:9200/_cat/master?v
id                     host           ip             node
Lzz-rF2TRxi4szaU-e6Peg 192.168.41.135 192.168.41.135 es_node1
[root@localhost config]# 
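Beyond _cat/master, two quick checks (against either node, assuming the default HTTP port 9200 as configured above) confirm that both nodes really joined the same cluster:

```shell
# List every node the cluster knows about; both es_node1 and es_node2
# should appear once the cluster has formed.
curl -s "http://localhost:9200/_cat/nodes?v"

# Overall health: "number_of_nodes" should be 2 and "status" should be
# green or yellow, not red.
curl -s "http://localhost:9200/_cluster/health?pretty"
```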

2. elasticsearch-head shows "cluster health: not connected"
Opening the head page at http://xxx:9100 shows "cluster health: not connected".
Causes:
1. The es7.x node was configured es6.x-style, so the cluster did not form
2. Cross-origin access (CORS) was not configured

Fixes:
For cause 1: use the es7.x configuration shown in section 1.
For cause 2:
es and the head plugin are in fact two separate processes, so requests from head to es are cross-origin. Add two lines to elasticsearch.yml:

http.cors.enabled: true        # enable cross-origin access support; default is false
http.cors.allow-origin: "*"    # origins allowed to access es; regular expressions are supported; "*" allows all origins

Then restart es for the change to take effect.
Refresh the head page and it will now discover the ES nodes correctly.
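A quick way to confirm CORS is active without opening head at all (the Origin value below is just an illustrative placeholder; any Origin header triggers the check):

```shell
# With http.cors.enabled: true, es answers requests that carry an Origin
# header with an Access-Control-Allow-Origin header in the response.
curl -s -i -H "Origin: http://head.example.com" "http://localhost:9200/" \
  | grep -i "access-control-allow-origin" \
  || echo "no CORS header - is es up and http.cors.enabled set to true?"
```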

3. Kafka startup error:
Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)

......
:46:15,195] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
:46:15,384] INFO Opening socket connection to server ecs-trf-k8s-137/192.168.41.137:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
:46:20,256] INFO Socket connection established to ecs-trf-k8s-137/192.168.41.137:2181, initiating session (org.apache.zookeeper.ClientCnxn)
:46:21,391] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
:46:26,337] WARN Client session timed out, have not heard from server in 6030ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
:46:27,980] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
:46:28,171] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
:46:28,192] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
:46:28,346] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
        at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:230)
        at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
        at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
        at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:226)
        at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:95)
        at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1580)
        at kafka.server.KafkaServer.kafka$server$KafkaServer$$createZkClient$1(KafkaServer.scala:348)
        at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:372)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
        at kafka.Kafka$.main(Kafka.scala:75)
        at kafka.Kafka.main(Kafka.scala)
:46:29,522] INFO shutting down (kafka.server.KafkaServer)
:46:29,621] WARN  (kafka.utils.CoreUtils$)
java.lang.NullPointerException
        at kafka.server.KafkaServer$$anonfun$shutdown$5.apply$mcV$sp(KafkaServer.scala:579)
        at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
        at kafka.server.KafkaServer.shutdown(KafkaServer.scala:579)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:329)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
        at kafka.Kafka$.main(Kafka.scala:75)
        at kafka.Kafka.main(Kafka.scala)
:46:29,680] INFO shut down completed (kafka.server.KafkaServer)
:46:29,689] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
:46:29,774] INFO shutting down (kafka.server.KafkaServer)
......

Causes:
1. Mismatched Kafka and ZooKeeper versions
2. Not enough memory
3. Firewall still running

Fixes:
1. The ZooKeeper client bundled with Kafka must be compatible with the ZooKeeper server version, i.e. pick matching releases such as:
zookeeper-3.4.12.tar.gz
kafka_2.11-2.0.0.tgz

2. Since I built this on local virtual machines with modest hardware, I stopped the memory-hungry services on the same host (such as the es node), after which Kafka came up. In a real environment, choose hosts with plenty of memory; Logstash likewise raises warnings when CPU is tight, so provision memory, CPU, and disk generously.

3. Stop the firewall:
systemctl stop firewalld.service
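Before restarting Kafka, it is worth probing each ZooKeeper node's client port from the Kafka host; a sketch using bash's built-in /dev/tcp (so it works even where nc is not installed):

```shell
# Probe port 2181 on each ZooKeeper node with a 2-second timeout.
for h in 192.168.41.135 192.168.41.136 192.168.41.137; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$h/2181" 2>/dev/null; then
    echo "$h:2181 reachable"
  else
    echo "$h:2181 NOT reachable - check firewalld and that zookeeper is running"
  fi
done
```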

4. ZooKeeper cluster startup error

......
[root@ecs-trf-k8s-137 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
[root@ecs-trf-k8s-137 bin]# cat zookeeper.out
:44:48,466 [myid:1] - INFO  [main:QuorumPeer@1467] - QuorumPeer communication is not secured!
:44:48,468 [myid:1] - INFO  [main:QuorumPeer@1496] - quorum.cnxn.threads.size set to 20
:44:48,661 [myid:1] - INFO  [ListenerThread:QuorumCnxManager$Listener@736] - My election bind port: /192.168.41.137:3888
:44:48,697 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer@909] - LOOKING
:44:48,703 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@813] - New election. My id =  1, proposed zxid=0x100000004
:44:48,708 [myid:1] - INFO  [WorkerReceiver[myid=1]:FastLeaderElection@595] - Notification: 1 (message format version), 1 (n.leader), 0x100000004 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
:44:48,720 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@584] - Cannot open channel to 2 at election address /192.168.41.135:3888
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:534)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:454)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:435)
        at java.lang.Thread.run(Thread.java:745)
:44:48,741 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumPeer$QuorumServer@184] - Resolved hostname: 192.168.41.135 to address: /192.168.41.135
:44:48,743 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@584] - Cannot open channel to 3 at election address /192.168.41.136:3888
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:534)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:454)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:435)
        at java.lang.Thread.run(Thread.java:745)
:44:48,744 [myid:1] - INFO  [WorkerSender[myid=1]:QuorumPeer$QuorumServer@184] - Resolved hostname: 192.168.41.136 to address: /192.168.41.136
:44:48,915 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@584] - Cannot open channel to 2 at election address /192.168.41.135:3888
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:610)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:838)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:957)
:44:48,916 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@184] - Resolved hostname: 192.168.41.135 to address: /192.168.41.135
......

Cause:
When configuring the cluster, the server.N entry for the local node itself must be 0.0.0.0:2888:3888, or the election listener fails; the same applies on Alibaba Cloud and similar environments.
Fix:
Set the local node's own server entry to 0.0.0.0:2888:3888:

[root@ecs-trf-k8s-137 conf]# cat zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=192.168.41.136:2888:3888
server.3=192.168.41.135:2888:3888
[root@ecs-trf-k8s-137 conf]# 

Also, each node in the cluster must have a distinct myid; if two nodes share an id, the one started later will fail at startup.
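The myid file lives in the dataDir from zoo.cfg and must match that node's server.N index. A minimal sketch (a throwaway directory is used here so it can run anywhere; on a real node DATADIR is /data/zookeeper as configured above):

```shell
# Each node writes its own id: node 1 writes 1, node 2 writes 2, node 3 writes 3.
DATADIR=/tmp/zookeeper-demo      # on a real node: DATADIR=/data/zookeeper
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"
cat "$DATADIR/myid"
```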

5. Logstash gets HTTP 503 when connecting to elasticsearch

[root@ecs-trf-k8s-137 logstash]# ./bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["192.168.41.135:9200"] } }'
......
message=>"Got response code '503' contacting Elasticsearch at
......

Cause:
The elasticsearch cluster had not formed.
Fix:
Resolved after fixing the es cluster as described in section 1.
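For reference, the same stdin-to-es pipeline as a config file, which is how it would normally be run (the filename is my own example; listing both es hosts lets Logstash fail over between them):

```conf
# /usr/local/logstash/config/stdin-to-es.conf  (path is an example)
input { stdin { } }
output {
  elasticsearch { hosts => ["192.168.41.135:9200", "192.168.41.137:9200"] }
  stdout { codec => rubydebug }   # also echo each event for debugging
}
```

Start it with ./bin/logstash -f config/stdin-to-es.conf instead of passing the pipeline on the command line.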

6. Kibana at http://xxx:5601 keeps showing "Kibana server is not ready yet"
Causes:
1. The elasticsearch cluster did not form
2. The .kibana index alias in elasticsearch was deleted

log   [08:56:14.363] [info][migrations] Creating index .kibana_1.
  log   [08:56:14.376] [warning][migrations] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_1 and restarting Kibana.
 error  [08:56:22.796] [warning][stats-collection] [index_not_found_exception] no such index [.kibana], with { resource.type="index_or_alias" & resource.id=".kibana" & index_uuid="_na_" & index=".kibana" } :: {"path":"/.kibana/_search","query":{},"body":"{\"track_total_hits\":true,\"query\":{\"term\":{\"type\":{\"value\":\"space\"}}},\"aggs\":{\"disabledFeatures\":{\"terms\":{\"field\":\"space.disabledFeatures\",\"include\":[\"discover\",\"visualize\",\"dashboard\",\"dev_tools\",\"advancedSettings\",\"indexPatterns\",\"savedObjectsManagement\",\"timelion\",\"graph\",\"monitoring\",\"ml\",\"apm\",\"maps\",\"canvas\",\"infrastructure\",\"logs\",\"siem\",\"uptime\"],\"size\":18}}},\"size\":0}","statusCode":404,"response":"{\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [.kibana]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\".kibana\",\"index_uuid\":\"_na_\",\"index\":\".kibana\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index [.kibana]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\".kibana\",\"index_uuid\":\"_na_\",\"index\":\".kibana\"},\"status\":404}"}
    at respond (/usr/local/kibana/node_modules/elasticsearch/src/lib/transport.js:315:15)
    at checkRespForFailure (/usr/local/kibana/node_modules/elasticsearch/src/lib/transport.js:274:7)
    at HttpConnector.<anonymous> (/usr/local/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
    at IncomingMessage.wrapper (/usr/local/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  log   [08:56:22.797] [warning][stats-collection] Unable to fetch data from spaces collector

Fixes:
1. Resolved after fixing the es cluster as described in section 1.
2. Rebuild the alias through elasticsearch-head.
Note: deleting elasticsearch's .kibana_* indices outright also lets Kibana start again, but every index pattern, visualization, and dashboard saved in Kibana will be lost! So never delete es index data casually; restoring the alias is enough.
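If head is not handy, the alias can also be restored with elasticsearch's _aliases API. A sketch assuming the cluster is on localhost:9200 and the surviving index is named .kibana_1 (check the actual name first):

```shell
# See which .kibana* indices actually exist.
curl -s "http://localhost:9200/_cat/indices/.kibana*?v"

# Point the .kibana alias at the surviving .kibana_1 index, then restart Kibana.
curl -s -X POST "http://localhost:9200/_aliases" \
  -H 'Content-Type: application/json' \
  -d '{"actions":[{"add":{"index":".kibana_1","alias":".kibana"}}]}'
```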
