Kafka - Integrating ELK + Filebeat for Log Collection
Based on Elasticsearch 7.x
What is ELK?
- ELK: Elasticsearch + Logstash + Kibana
- Logstash: reads, transforms, and forwards logs
- Filebeat: a lightweight log shipper, much lighter than Logstash
Log Collection Flow
Flow:
- Filebeat reads the log files on each node
- Filebeat writes the log lines to Kafka
- Logstash subscribes to the logs from Kafka
- Logstash writes the logs into Elasticsearch
- Kibana queries and visualizes the logs stored in Elasticsearch
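End to end, a log line travels: app log file → Filebeat → Kafka topic → Logstash → Elasticsearch index → Kibana.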
Why Kafka?
If logs are read faster than Elasticsearch can index them, data piles up or gets dropped; putting a message broker between the two acts as a buffer that absorbs the difference in throughput.
docker-compose Configuration
version: '3.1'
services:
  zk:
    image: zookeeper
    container_name: zk
    ports:
      - 2181:2181
    volumes:
      - "./zoo/data:/data"
      - "./zoo/datalog:/datalog"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk:2888:3888;2181
  kf:
    image: wurstmeister/kafka
    container_name: kf
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.17:9092 ## host machine IP
      KAFKA_ZOOKEEPER_CONNECT: "zk:2181"
    volumes:
      - "./kafka/data/:/kafka"
  elasticsearch:
    image: elasticsearch:7.12.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - http.port=9200
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - http.cors.allow-credentials=false
      - bootstrap.memory_lock=true
      - 'ES_JAVA_OPTS=-Xms512m -Xmx512m'
      - TZ=Asia/Shanghai
    ulimits:
      # required so bootstrap.memory_lock=true can actually lock memory
      memlock:
        soft: -1
        hard: -1
    volumes:
      - $PWD/es/plugins:/usr/share/elasticsearch/plugins
      - $PWD/es/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  kibana:
    image: kibana:7.12.1
    container_name: kibana
    links:
      - elasticsearch:es
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_HOSTS=http://es:9200
      - TZ=Asia/Shanghai
    ports:
      - 5601:5601
    volumes:
      # - $PWD/data/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
      - $PWD/kibana/data:/usr/share/kibana/data
  logstash:
    image: docker.elastic.co/logstash/logstash:7.12.1
    container_name: log_stash
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - $PWD/logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      # - $PWD/data/logstash/tpl:/usr/share/logstash/config/tpl
    links:
      - elasticsearch:es
    depends_on:
      - elasticsearch
      - kf
    ports:
      - 9600:9600
      - 5044:5044
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.12.1
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/data:/usr/share/filebeat/data
      # map the host's log directory into the container, otherwise Filebeat cannot see the logs!
      - /var/log/app:/var/www/log
    depends_on:
      - kf
    environment:
      - TZ=Asia/Shanghai
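With the file saved as docker-compose.yml, the whole stack can be started and checked with the usual commands (container names as defined above):

# start all services in the background
docker-compose up -d
# confirm all six containers are running
docker-compose ps
# tail the logs of one service, e.g. logstash
docker logs -f log_stash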
Issues
- Elasticsearch needs write permission on its data directory before it can start:
  chmod -R 777 es    # the es directory
- filebeat.yml must be owned by root:
  chown root filebeat/filebeat.yml
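Putting the two fixes together, a minimal preparation sketch before the first docker-compose up (directory names follow the volume mappings above; adjust if your layout differs):

# create the host directories used by the volume mappings
mkdir -p zoo/data zoo/datalog kafka/data es/plugins es/data kibana/data filebeat/data logstash/config
# Elasticsearch must be able to write to its data directory
chmod -R 777 es
# Filebeat rejects a config file not owned by root (it runs as root here)
chown root filebeat/filebeat.yml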
Logstash Configuration
logstash.conf:
input {
  # consume from kafka
  kafka {
    bootstrap_servers => ["192.168.1.17:9092"]
    group_id => "kafka_elk_group"
    topics => ["kafka_elk_log"]
    auto_offset_reset => "earliest"
    codec => "plain"
  }
}
filter {
}
output {
  # write to elasticsearch
  elasticsearch {
    hosts => "192.168.1.17:9200"
    index => "kafka_elk_log-%{+YYYY.MM.dd}"
    codec => "plain"
  }
  # also print to the console for debugging
  stdout { codec => rubydebug }
}
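With codec => "plain", the whole Filebeat JSON envelope ends up as a single string in the message field (visible in the test output further down). If you want those fields parsed out, a minimal sketch is to add Logstash's json filter to the currently empty filter block:

filter {
  json {
    # parse the Filebeat JSON envelope carried in "message" into top-level fields
    source => "message"
  }
}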
Filebeat Configuration
filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # note: /var/www/log is a path inside the container; the host's log
      # directory must be mapped to it, otherwise no logs will be picked up
      - /var/www/log/*.log
output.kafka:
  hosts: ["192.168.1.17:9092"]
  topic: 'kafka_elk_log'
  partition.round_robin:
    reachable_only: false
  compression: gzip
  max_message_bytes: 1000000
Note: /var/www/log is a directory inside the container; you must map the host's log directory into it, otherwise no logs will be loaded!
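To check that Filebeat is really publishing to Kafka, you can consume the topic directly inside the broker container (the console consumer script ships with the wurstmeister image):

docker exec -it kf kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic kafka_elk_log \
    --from-beginning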
Testing
# write a message to the log file
xiao@z:/var/log/app$ echo "我是中国人" >> a.log
xiao@z:/var/log/app$
# filebeat and logstash pick up the log
filebeat | 2022-04-30T11:42:44.156+0800 INFO log/harvester.go:302 Harvester started for file: /var/www/log/a.log
log_stash | {
log_stash | "message" => "{\"@timestamp\":\"2022-04-30T03:42:44.156Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"7.12.1\"},\"agent\":{\"ephemeral_id\":\"fc7259a7-7b4f-400b-aa72-f6041f118336\",\"id\":\"f6c3697d-6b9b-4ec4-9f1f-0b1ee0c194bc\",\"name\":\"c49f93715899\",\"type\":\"filebeat\",\"version\":\"7.12.1\",\"hostname\":\"c49f93715899\"},\"message\":\"我是中国人\",\"log\":{\"offset\":18,\"file\":{\"path\":\"/var/www/log/a.log\"}},\"input\":{\"type\":\"log\"},\"ecs\":{\"version\":\"1.8.0\"},\"host\":{\"name\":\"c49f93715899\"}}",
log_stash | "@version" => "1",
log_stash | "@timestamp" => 2022-04-30T03:42:46.290Z
log_stash | }
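Before opening Kibana you can also confirm the index was created in Elasticsearch (its name follows the kafka_elk_log-%{+YYYY.MM.dd} pattern from logstash.conf):

# list all indices; a kafka_elk_log-YYYY.MM.dd entry should appear
curl 'http://localhost:9200/_cat/indices?v'
# or fetch one document from it
curl 'http://localhost:9200/kafka_elk_log-*/_search?pretty&size=1'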
Open Kibana at localhost:5601.
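Before the entries show up in Discover, create an index pattern matching kafka_elk_log-* (Stack Management → Index Patterns in Kibana 7.x).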
You can see that the logs have been ingested successfully.
Good luck!