Installing and Deploying ELK 8.3 on CentOS 7 (single node)
Architecture: Beats + Kafka + Logstash + Elasticsearch + Kibana
Version: 8.3.3
Download: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/8.x/yum/8.3.3/
Compatibility:
https://www.elastic.co/cn/support/matrix#matrix_compatibility
1. Elasticsearch
- Install
[root@node1 ~]# rpm -ivh /opt/elasticsearch-8.3.3-x86_64.rpm
The installation prints an initial password for the elastic user; save it to a file for later use.
- ES Configuration
[root@node1 ~]# mkdir /localdata/esdata/
[root@node1 ~]# chown elasticsearch:elasticsearch /localdata/esdata/
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: es
node.name: node1
path.data: /localdata/esdata          # data directory
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true           # lock the process address space into RAM, preventing any Elasticsearch heap memory from being swapped out
network.host: 192.168.1.10            # ES server IP
http.port: 9200                       # ES server port
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Note: discovery.seed_hosts and cluster.initial_master_nodes are not allowed
# together with discovery.type: single-node, so leave them unset here
discovery.type: single-node
http.host: 0.0.0.0
transport.host: 0.0.0.0
ingest.geoip.downloader.enabled: false
[root@node1 ~]# cd /etc/elasticsearch/
[root@node1 elasticsearch]# vim jvm.options
# dump the heap to this directory on an out-of-memory error
-XX:HeapDumpPath=/localdata/esdata
[root@node1 tmp]# vim /etc/elasticsearch/log4j2.properties
# retain logs for 7 days
appender.rolling.strategy.action.condition.nested_condition.age = 7D
- System Configuration
Swap is very bad for performance and node stability and should be avoided at all costs: it can cause garbage collections to last minutes instead of milliseconds, make nodes respond slowly, and even disconnect them from the cluster.
Note: on systems that use systemd (rather than SysV init), resource limits must be set through a systemd override.
Check whether PID 1 is SysV init or systemd:
[root@node1 ~]# ps -p 1
PID TTY TIME CMD
1 ? 00:00:49 systemd
[root@node1 ~]# systemctl edit elasticsearch
[Service]
LimitMEMLOCK=infinity
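Once Elasticsearch is running, you can confirm the lock actually took effect. A hedged check using the standard node-info API (the host name and CA path follow this guide's layout):

```shell
# mlockall should report true if bootstrap.memory_lock worked
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  "https://node1:9200/_nodes?filter_path=**.mlockall&pretty"
```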
[root@node1 ~]# ulimit -a    # check the current limits, including max open files
Check Elasticsearch's maximum number of open file descriptors:
[root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://node1:9200/_nodes/stats/process?filter_path=**.max_file_descriptors&pretty"
Enter host password for user 'elastic':
{
  "nodes" : {
    "x6StOmqzMO9GSNpYtLyM8A" : {
      "process" : {
        "max_file_descriptors" : 65535
      }
    }
  }
}
Elasticsearch uses an mmapfs directory to store its indices. The default operating system limit on mmap counts is likely too low, which can lead to out-of-memory exceptions.
[root@node1 ~]# sysctl -w vm.max_map_count=524288    # change the mmap limit temporarily
[root@node1 ~]# vim /etc/sysctl.conf                 # change the mmap limit permanently
vm.max_map_count=524288
After rebooting, verify that the setting took effect:
[root@node1 elasticsearch]# sysctl vm.max_map_count
vm.max_map_count = 524288
To avoid problems when /tmp lacks the needed permissions, point JNA and libffi at a dedicated temporary directory. Add this line to jvm.options:
[root@node1 elasticsearch]# vim jvm.options
-Djna.tmpdir=/usr/share/elasticsearch/tmp
[root@node1 elasticsearch]# mkdir /usr/share/elasticsearch/tmp/
[root@node1 elasticsearch]# chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp/
[root@node1 elasticsearch]# export LIBFFI_TMPDIR=/usr/share/elasticsearch/tmp
[root@node1 elasticsearch]# vim /etc/profile
export LIBFFI_TMPDIR=/usr/share/elasticsearch/tmp    # add this line
Lower the kernel's default TCP retransmission count from 15 to 5 so that the cluster detects node failures quickly.
[root@node1 elasticsearch]# sysctl net.ipv4.tcp_retries2
net.ipv4.tcp_retries2 = 15
[root@node1 elasticsearch]# sysctl net.ipv4.tcp_retries2=5
net.ipv4.tcp_retries2 = 5
[root@node1 elasticsearch]# vim /etc/sysctl.conf    # make the change permanent
net.ipv4.tcp_retries2=5
- Start
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch.service
[root@node1 ~]# systemctl start elasticsearch.service
Check that Elasticsearch is running:
[root@node1 elasticsearch]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://node1:9200
Enter host password for user 'elastic':
{
  "name" : "node1",
  "cluster_name" : "es",
  "cluster_uuid" : "_xD7vC…Q",
  "version" : {
    "number" : "8.3.3",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "801f…f6",
    "build_date" : "2022-07-23T19:30:09.227964828Z",
    "build_snapshot" : false,
    "lucene_version" : "9.2.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
2. Kibana
- Install
[root@node1 ~]# rpm -ivh /opt/kibana-8.3.3-x86_64.rpm
- Configure
[root@node1 elasticsearch]# cd /usr/share/elasticsearch/
[root@node1 elasticsearch]# ./bin/elasticsearch-create-enrollment-token -s kibana
warning: ignoring JAVA_HOME=/usr/local/jdk-18.0.2.1; using bundled JDK
eyJ2ZXIiOiI4LjMuMyIsImFkciI6WyIxMC4yOS4yMTYuMTAxOuuuuuuuuuuuuuuuuuuuuuuuuMjUyOTRjNThmMDNlZTRiZDliNThjMzFkMzJkMjFiNTFlOGIzYjAttttttttttttttttttttttttttttttttttttttttY4VEl5WTBIQTRlZmN1LWcifQ==
[root@node1 bin]# grep -v ^# /etc/kibana/kibana.yml
server.port: 5601
server.host: 0.0.0.0
server.maxPayload: 1048576
server.name: node1
elasticsearch.hosts: ['https://192.168.1.10:9200']
elasticsearch.requestTimeout: 90000
logging.appenders.file.type: file
logging.appenders.file.fileName: /var/log/kibana/kibana.log
logging.appenders.file.layout.type: json
logging.root.appenders: [default, file]
pid.file: /run/kibana/kibana.pid
ops.interval: 5000
[root@node1 bin]# systemctl start kibana
Open http://node1:5601 in a browser. If the web UI does not load after startup, enroll the token manually:
[root@node1 bin]# ./kibana-setup --enrollment-token eyJ2ZXIiOiI4LjMuMyIsImFkciI6WyIxMC4yOS4yMTYuMTAxOuuuuuuuuuuuuuuuuuuuuuuuuMjUyOTRjNThmMDNlZTRiZDliNThjMzFkMzJkMjFiNTFlOGIzYjAttttttttttttttttttttttttttttttttttttttttY4VEl5WTBIQTRlZmN1LWcifQ==
✔ Kibana configured successfully.
To start Kibana run:
bin/kibana
Restart the kibana service, refresh the browser, and log in as the elastic user with the saved password.
- Raise the maximum shard count
Change the cluster-wide maximum number of shards; the default is 1000 per node.
Verify through the Kibana UI that the new shard limit has taken effect.
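No command is shown for this step; one way to raise the limit (a sketch, with 2000 as an arbitrary example value) is a cluster-settings update from Kibana Dev Tools:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}
```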
- Encrypt traffic between your browser and Kibana
[root@node1 ~]# /usr/share/elasticsearch/bin/elasticsearch-certutil csr --name node1 --dns test.com
[root@node1 ~]# cd /usr/share/elasticsearch/
[root@node1 elasticsearch]# unzip csr-bundle.zip
[root@node1 elasticsearch]# cp node1/* /etc/kibana/
[root@node1 elasticsearch]# openssl x509 -req -in /etc/kibana/node1.csr -signkey /etc/kibana/node1.key -out /etc/kibana/node1.crt
[root@node1 elasticsearch]# vim /etc/kibana/kibana.yml    # uncomment and edit the following 3 lines
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/node1.crt
server.ssl.key: /etc/kibana/node1.key
[root@node1 elasticsearch]# systemctl restart kibana
Kibana can now be accessed over HTTPS:
https://node1:5601
3. Logstash
- Install
[root@node1 ~]# rpm -ivh /opt/logstash-8.3.3-x86_64.rpm
- Configuration
[root@node1 logstash]# vim jvm.options    # comment out the following 3 lines
#11-13:-XX:+UseConcMarkSweepGC
#11-13:-XX:CMSInitiatingOccupancyFraction=75
#11-13:-XX:+UseCMSInitiatingOccupancyOnly
[root@node1 ~]# cat /etc/logstash/logstash.yml
node.name: node1
path.data: /var/lib/logstash
pipeline.workers: 8
pipeline.batch.size: 1000
pipeline.batch.delay: 50
path.logs: /var/log/logstash
[root@node1 ~]# vim /etc/logstash/jvm.options
-Xms4g
-Xmx4g
- Test standard input and output
[root@node1 ~]# cd /usr/share/logstash/bin/
./logstash -e 'input { stdin { } } output { stdout { } }'
Type 123456; the output is:
{
  "@version" => "1",
  "host" => {
    "hostname" => "node1"
  },
  "message" => "123456",
  "@timestamp" => 2022-10-28T06:43:57.676416Z,
  "event" => {
    "original" => "123456"
  }
}
- Test output to a file
[root@node1 ~]# cat /etc/logstash/conf.d/logstash-sample.conf
input {
stdin {}
}
output {
file {
path => "/tmp/test.log"
}
}
[root@node1 bin]# ./logstash -f /etc/logstash/conf.d/logstash-sample.conf    # run the pipeline
- File input, output to Elasticsearch
[root@node1 ~]# cat /etc/logstash/conf.d/logstash-sample.conf
input {
file {
path => "/tmp/test"
  }
}
output {
elasticsearch {
hosts => ["https://node1:9200"]
index => "test20221031"
user => "elastic"
password => "qv…"
cacert => "/etc/logstash/http_ca.crt"
}
}
cp /etc/elasticsearch/certs/http_ca.crt /etc/logstash/
chmod 777 /etc/logstash/http_ca.crt
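For reference, if Logstash should receive events from a Beat over the network rather than tailing a file, a minimal Beats-input pipeline might look like this (a sketch: port 5044 is the conventional Beats port, the index name is illustrative, and the password placeholder must be filled in):

```
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["https://node1:9200"]
    index => "beats-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "..."
    cacert => "/etc/logstash/http_ca.crt"
  }
}
```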
- Verify your configuration
[root@node1 elasticsearch]# cd /usr/share/logstash/bin/
[root@node1 bin]# ./logstash -f /etc/logstash/conf.d/logstash-sample.conf --config.test_and_exit
- Start Logstash
[root@node1 bin]# systemctl start logstash
[root@node1 bin]# echo 123456 >> /tmp/test
Check the ES indices:
[root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://node1:9200/_cat/indices"
Enter host password for user 'elastic':
green open test20221031 iaBs7cCUTi2Y7Tlwc8DsXA 1 1 2 0 7.1kb 7.1kb
4. Auditbeat
You can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations.
- Install
[root@node1 ~]# rpm -ivh /opt/auditbeat-8.3.3-x86_64.rpm
- Configuration
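The configuration itself is not shown here; by analogy with the Logstash setup, the Elasticsearch output section of /etc/auditbeat/auditbeat.yml might look like the sketch below (the password is a placeholder, and the CA file is assumed to be copied from /etc/elasticsearch/certs/ as was done for Logstash):

```yaml
output.elasticsearch:
  hosts: ["https://node1:9200"]
  username: "elastic"
  password: "..."
  ssl.certificate_authorities: ["/etc/auditbeat/http_ca.crt"]
```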
- Verify your configuration
[root@node1 bin]# auditbeat test config -e
- Stop the auditd service
[root@node1 ~]# service auditd stop
[root@node1 ~]# systemctl disable auditd.service
- Start the auditbeat service
[root@node1 ~]# systemctl start auditbeat
- View the auditd rules
[root@node1 auditbeat]# auditbeat show auditd-rules
- View the ES indices
[root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/_cat/indices"
5. Kibana Dashboards
Expand the menu in the upper-left corner to see the available features.
- Create an index (Data View)
Click Stack Management in the menu, then under Kibana click Data Views to add ES indices to Kibana.
The pattern auditbeat* adds every index whose name starts with auditbeat.
- Search the data
Click Discover.
Select fields and a time range, and search for the logs you need.
- Create a visualization
Click Visualize Library.
- Create a dashboard
After a visualization is created you are taken straight to the dashboard; just click Save.
- Setup Dashboards
This step imports the default dashboard templates. There are dozens of them and they can affect machine performance, so import them only as needed.
[root@node1 modules.d]# auditbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
- View the imported Auditbeat dashboard templates
6. Filebeat
- Install
[root@node1 modules.d]# rpm -ivh /opt/filebeat-8.3.3-x86_64.rpm
- Enable a module
[root@node1 ~]# cd /etc/filebeat/modules.d/
[root@node1 modules.d]# filebeat modules list
[root@node1 modules.d]# filebeat modules enable system    # enable monitoring via the system module
- Configuration
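After enabling the module, /etc/filebeat/modules.d/system.yml controls what is collected. A sketch of the stock layout (when the paths are left unset, the module falls back to its platform defaults):

```yaml
- module: system
  # syslog messages
  syslog:
    enabled: true
  # authentication logs
  auth:
    enabled: true
```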
- Start the service
[root@node1 modules.d]# systemctl start filebeat.service
Check whether ES now has a filebeat index:
[root@node1 ~]# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/_cat/indices"
- Setup Dashboards
This step imports the default dashboard templates. There are dozens of them and they can affect machine performance, so import them only as needed.
- View the imported dashboards in Kibana
View the System Overview dashboard.
7. Kafka
This Beat output works with all Kafka versions between 0.8.2.0 and 2.6.0.
- Download
https://archive.apache.org/dist/kafka/2.6.0/kafka_2.13-2.6.0.tgz
[root@node1 ~]# tar -zxvf kafka_2.13-2.6.0.tgz -C /opt/
- Start the services
[root@node1 ~]# cd /opt/kafka_2.13-2.6.0
[root@node1 kafka_2.13-2.6.0]# nohup bin/zookeeper-server-start.sh config/zookeeper.properties > /tmp/zookeeper.log 2>&1 &
[root@node1 kafka_2.13-2.6.0]# nohup bin/kafka-server-start.sh config/server.properties > /tmp/kafka.log 2>&1 &
- Create a topic
[root@node1 kafka_2.13-2.6.0]# bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test    # test topic
[root@node1 kafka_2.13-2.6.0]# bin/kafka-topics.sh --list --bootstrap-server localhost:9092
test
- Send some messages
[root@node1 kafka_2.13-2.6.0]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
View the output in another terminal:
[root@node1 kafka_2.13-2.6.0]# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
- Configuration
[root@node1 kafka_2.13-2.6.0]# vim config/server.properties
# data directory
log.dirs=/opt/elk/kafka
# keep segments for 168 h = 7 days; older data is cleaned up
log.retention.hours=168
log.segment.bytes=1073741824
# run the retention check every 5 minutes
log.retention.check.interval.ms=300000
8. Filebeat + Kafka + Logstash + ES
- Filebeat sends logs to Kafka
Comment out the previous output configuration in filebeat.yml and add a Kafka output.
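The lines themselves are not shown in the original; a hedged sketch of a Kafka output for /etc/filebeat/filebeat.yml (the host matches the Kafka setup above, and the topic name is an example):

```yaml
output.kafka:
  hosts: ["localhost:9092"]
  topic: "filebeat"
  required_acks: 1
  compression: gzip
```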
- Logstash forwards the Kafka messages to ES
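A matching pipeline is not shown either; a sketch of a Kafka-input Logstash config under the same assumptions (topic name, index pattern, and password placeholder are illustrative):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["filebeat"]
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["https://node1:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "..."
    cacert => "/etc/logstash/http_ca.crt"
  }
}
```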