Installing and Deploying an ELK 8.3 Cluster on CentOS 7
First deploy a single-node ELK stack following https://blog.csdn.net/gjjhyd/article/details/135343239, then join the nodes into a cluster.
Architecture: Beats + Kafka + Logstash + Elasticsearch + Kibana
Version: 8.3.3
Download: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/yum/8.3.3/
Compatibility matrix:
https://www.elastic.co/cn/support/matrix#matrix_compatibility
1. Elasticsearch Cluster
- Adding an Elasticsearch node
Install Elasticsearch on node2, the node that will join the cluster:
[root@node2 ~]# rpm -ivh /localdata/tools/elk/elasticsearch-8.3.3-x86_64.rpm
Generate an enrollment token on node1:
[root@node1 elasticsearch]# ./bin/elasticsearch-create-enrollment-token -s node
warning: ignoring JAVA_HOME=/usr/local/jdk-18.0.2.1; using bundled JDK
eyJ2ZXIiOiI4LjMuMy…XdtRVozMWZSRzNEcXcifQ==
Apply the token on node2:
[root@node2 ~]# /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token eyJ2ZXIiOiI4LjMuMy…XdtRVozMWZSRzNEcXcifQ==
The service on node2 starts without errors; next, edit the configuration file:
[root@node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: es
node.name: node2
path.data: /opt/elk/esdata
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["node1", "node2"]
action.destructive_requires_name: false
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: certs/transport.p12
truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["node1", "node2"]
http.host: 0.0.0.0
transport.host: 0.0.0.0
Restart the service on node2.
List the cluster nodes; * marks the elected master:
[root@node1 elk]# curl --insecure -u elastic:qvV… -XGET "https://node2:9200/_cat/nodes"
192.168.1.10 33 42 1 1.24 0.96 0.64 cdmhilrsftw * node1
192.168.1.11 5 99 8 0.25 0.15 0.09 cdmhilrsftw - node2
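To script a check of which node currently holds the master role, the `_cat/nodes` output above can be parsed; a minimal sketch using the sample output (assumes the default column order, where the second-to-last field is the master marker and the last field is the node name):

```shell
# Saved sample of `_cat/nodes` output; the master row is marked with '*'
nodes='192.168.1.10 33 42 1 1.24 0.96 0.64 cdmhilrsftw * node1
192.168.1.11 5 99 8 0.25 0.15 0.09 cdmhilrsftw - node2'

# Print the node name whose master-marker field is '*'
master=$(printf '%s\n' "$nodes" | awk '$(NF-1) == "*" { print $NF }')
echo "master: $master"   # master: node1
```

In practice `nodes` would be filled from the curl command above instead of a literal string.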
- Updating the Logstash configuration
[root@node1 elk]# vim /etc/logstash/conf.d/logstash-sample.conf
output {
elasticsearch {
hosts => ["https://node1:9200", "https://node2:9200"]
index => "%{[@metadata][kafka][topic]}-%{+YYYY.MM.dd}"
user => "elastic"
password => "qvV…"
cacert => "/etc/logstash/http_ca.crt"
}
}
[root@node1 elk]# systemctl restart logstash
In Kibana Discover, verify that live logs are still being collected.
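For reference, the `%{[@metadata][kafka][topic]}` index name in the output block requires the Kafka input to decorate events with metadata; a minimal input sketch to pair with that output (hostnames, topic name, and codec are assumptions, not from the original config):

```conf
input {
  kafka {
    bootstrap_servers => "node1:9092,node2:9092"
    topics => ["filebeat"]
    codec => "json"
    # "basic" attaches [@metadata][kafka][topic] etc. to each event
    decorate_events => "basic"
  }
}
```

Without `decorate_events`, the topic metadata field is empty and the index pattern above breaks.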
- Updating the Kibana configuration
[root@node1 elk]# vim /etc/kibana/kibana.yml
elasticsearch.hosts: ['https://192.168.1.10:9200', 'https://192.168.1.11:9200']
[root@node1 elk]# systemctl restart kibana
[root@node1 elk]# systemctl status kibana
2. Kafka Cluster
- Adding a Kafka node
Node1:
[root@node1 bin]# tar -xvf /opt/kafka_2.12-3.3.1.tar -C /usr/local/
[root@node1 bin]# ls /usr/local/kafka_2.12-3.3.1/
[root@node1 bin]# groupadd -g 1000 kafka
[root@node1 bin]# useradd -u 1000 -g 1000 -d /home/kafka -s /sbin/nologin kafka
[root@node1 bin]# vim /etc/systemd/system/kafka.service
[Unit]
Description=Apache Kafka server (broker)
Documentation=http://kafka.apache.org/documentation.html
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=kafka
Group=kafka
LimitNOFILE=102400
Environment=LOG_DIR=/opt/elk/kafka/logs
Environment=JAVA_HOME=/usr/local/jdk-18.0.2.1/
ExecStart=/usr/local/kafka_2.12-3.3.1/bin/kafka-server-start.sh /usr/local/kafka_2.12-3.3.1/config/kraft/server.properties
ExecStop=/usr/local/kafka_2.12-3.3.1/bin/kafka-server-stop.sh
[Install]
WantedBy=multi-user.target
[root@node1 bin]# vim /usr/local/kafka_2.12-3.3.1/config/kraft/server.properties
node.id=1
controller.quorum.voters=1@node1:9093,2@node2:9093
log.dirs=/opt/elk/kafka/kraft-combined-logs
[root@node1 bin]# chown -R kafka:kafka /usr/local/kafka_2.12-3.3.1/
[root@node1 bin]# systemctl daemon-reload
[root@node1 bin]# mkdir /localdata/elk/kafka
[root@node1 bin]# chown kafka:kafka /localdata/elk/kafka/
[root@node1 bin]# cd /usr/local/kafka_2.12-3.3.1/
[root@node1 bin]# ./bin/kafka-storage.sh random-uuid // run on the first node only
[root@node1 bin]# ./bin/kafka-storage.sh format -t qX3SFKBGS7…9A -c config/kraft/server.properties // format storage with the UUID generated above
[root@node1 bin]# ls /opt/elk/kafka/kraft-combined-logs
[root@node1 bin]# cat /opt/elk/kafka/kraft-combined-logs/meta.properties // inspect the metadata
[root@node1 bin]# cd /opt/elk/kafka/
[root@node1 bin]# chown kafka:kafka -R kraft-combined-logs
[root@node1 bin]# systemctl start kafka
[root@node1 bin]# systemctl status kafka
Node2:
Repeat the steps above; only the following settings differ from node1:
[root@node2 bin]# vim /usr/local/kafka_2.12-3.3.1/config/kraft/server.properties
node.id=2
controller.quorum.voters=1@node1:9093,2@node2:9093
log.dirs=/opt/elk/kafka/kraft-combined-logs
[root@node1 bin]# ./bin/kafka-storage.sh random-uuid // first node only; do NOT run this on node2
- Kafka tuning
CPU and memory tuning
[root@node2 bin]# vim /usr/local/kafka_2.12-3.3.1/config/kraft/server.properties
num.network.threads=2 // network threads; rule of thumb: ~2/3 of half the total cores
num.io.threads=6 // disk-write threads; ~half the total cores
[root@node2 bin]# vim /usr/local/kafka_2.12-3.3.1/bin/kafka-server-start.sh
export KAFKA_HEAP_OPTS="-Xmx4G -Xms4G" // set the heap to 4 GB
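The sizing rules above can be computed from the core count; a quick sketch (the 12-core figure is only an example, in practice use `nproc`):

```shell
CORES=12                          # example; on a real host: CORES=$(nproc)
HALF=$(( CORES / 2 ))
NET_THREADS=$(( HALF * 2 / 3 ))   # ~2/3 of half the cores -> num.network.threads
IO_THREADS=$HALF                  # half the cores         -> num.io.threads
echo "num.network.threads=$NET_THREADS num.io.threads=$IO_THREADS"
```

With 12 cores this prints num.network.threads=4 num.io.threads=6; the results go into server.properties, and the heap size into KAFKA_HEAP_OPTS as shown above.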
Open-file-limit tuning
[root@node2 bin]# vim /etc/systemd/system/kafka.service
LimitNOFILE=102400 // max open files
Tuning partition and replica counts
Increase the partition and replica counts to 2 each; the replica count cannot exceed the number of brokers.
[root@node01 kafka_2.12-3.3.1]# pwd
/usr/local/kafka_2.12-3.3.1
[root@node01 kafka_2.12-3.3.1]# ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic filebeat --partitions 2
[root@node01 kafka_2.12-3.3.1]# cat increase-replication-factor.json
{"version":1,
"partitions":[{"topic":"filebeat","partition":0,"replicas":[1,2]}] // put partition 0's replicas on brokers 1 and 2; repeat for each remaining partition
}
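Writing one JSON entry per partition by hand gets tedious for larger topics; a small sketch that emits the same plan for every partition (topic name, partition count, and replica list are taken from the example above; note the real output below staggers the replica order per partition to balance leaders):

```shell
TOPIC=filebeat
PARTITIONS=2
REPLICAS='[1,2]'

# Emit one {"topic":...,"partition":N,"replicas":...} entry per partition
{
  printf '{"version":1,"partitions":['
  for p in $(seq 0 $(( PARTITIONS - 1 ))); do
    if [ "$p" -gt 0 ]; then printf ','; fi
    printf '{"topic":"%s","partition":%d,"replicas":%s}' "$TOPIC" "$p" "$REPLICAS"
  done
  printf ']}\n'
} > increase-replication-factor.json

cat increase-replication-factor.json
```

The generated file is then passed to kafka-reassign-partitions.sh --execute as below.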
[root@node01 kafka_2.12-3.3.1]# ./bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute --additional
[root@node01 kafka_2.12-3.3.1]# ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic filebeat // verify the result
Topic: filebeat TopicId: 0azg…DaA PartitionCount: 2 ReplicationFactor: 2 Configs: segment.bytes=1073741824
Topic: filebeat Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: filebeat Partition: 1 Leader: 2 Replicas: 2,1 Isr: 2,1
Check the KRaft quorum leader:
[root@elk04 kafka_2.12-3.3.1]# ./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
ClusterId: qX3…kA
LeaderId: 1
LeaderEpoch: 13768
HighWatermark: 5606841
MaxFollowerLag: 0
MaxFollowerLagTimeMs: 0
CurrentVoters: [1,2]
CurrentObservers: []
List partitions currently being reassigned:
[root@elk04 kafka_2.12-3.3.1]# ./bin/kafka-reassign-partitions.sh --list --bootstrap-server localhost:9092
3. Logstash Cluster
Install and configure Logstash on every node, but start the service on only one; the others stand by as spares.
4. Kibana Cluster
- Enrollment and verification
node1 and node2 are already configured per the earlier steps; continue with:
Node1:
[root@node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
warning: ignoring JAVA_HOME=/usr/local/jdk-18.0.2.1/; using bundled JDK
eyJ2Z…lUSJ9
Node2:
[root@node2 ~]# /usr/share/kibana/bin/kibana-setup --enrollment-token eyJ2Z…lUSJ9
✔ Kibana configured successfully.
To start Kibana run:
bin/kibana
- Restarting the service
[root@node2 ~]# systemctl restart kibana
- Web access
Browse to http://node2:5601 and confirm in Discover that the data is in sync.
Configure HTTPS access the same way as before.
- Changing an index's shard and replica counts
For an existing index, only the replica count can be changed; the shard count is fixed at creation.
Change the default replica and shard counts; the change takes effect for newly created indices.
Check the settings.
Set shards and replicas for specific indices.
Inspect template_2.
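These last steps can all be done through the Elasticsearch REST API; a hedged sketch (the template name template_2 comes from the text, while the index pattern, index name, and counts are examples only):

```shell
# Defaults for NEW indices: create/update an index template
curl --insecure -u elastic:qvV… -X PUT "https://node1:9200/_index_template/template_2" \
  -H 'Content-Type: application/json' \
  -d '{"index_patterns":["filebeat-*"],"template":{"settings":{"number_of_shards":2,"number_of_replicas":1}}}'

# EXISTING index: only replicas can change; shards are fixed at creation
curl --insecure -u elastic:qvV… -X PUT "https://node1:9200/filebeat-2024.01.01/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index":{"number_of_replicas":1}}'

# Inspect the settings of an index, and the template itself
curl --insecure -u elastic:qvV… "https://node1:9200/filebeat-2024.01.01/_settings?pretty"
curl --insecure -u elastic:qvV… "https://node1:9200/_index_template/template_2?pretty"
```

The same operations are also available in Kibana under Dev Tools.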