1. Configure filebeat.yml first
(1) Minimal configuration
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.kafka:
  #hosts: ["172.16.208.149:6667","172.16.208.150:6667","172.16.208.151:6667","172.16.208.152:6667","172.16.208.153:6667"]
  hosts: ["s101:9091","s102:9091","s103:9091"]
  topic: 'test_0314_01'
  #bulk_max_size: 4096
  #keep_alive: 30
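
Before starting Filebeat, the configuration can be checked with the built-in test subcommands (available in Filebeat 6.x and later); the file name follows this article's example:

# validate the YAML syntax and option names
./filebeat test config -c filebeat.yml

# verify Filebeat can actually reach the brokers listed under output.kafka
./filebeat test output -c filebeat.yml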
(2) Multiline (log merging) configuration
filebeat.inputs:
# log input
- type: log
  enabled: true
  # directory to collect logs from
  paths:
    - /data/apps/logs/test1/*
  # two custom fields, used to tell the log type and host apart
  fields:
    type: test1
    host: 161
  # ignore files whose last modification is more than one hour old
  ignore_older: 1h
  # a line matching this regex starts a new event
  multiline.pattern: '(WARN|DEBUG|ERROR|INFO) \d{4}/\d{2}/\d{2}'
  # negate the pattern match: with true, lines that do NOT match are merged
  multiline.negate: true
  # after: append non-matching lines to the preceding matching line
  multiline.match: after
# log input
- type: log
  enabled: true
  paths:
    - /data/apps/logs/test2/*
  fields:
    type: test2
    host: 161
  ignore_older: 1h
  multiline.pattern: '(WARN|DEBUG|ERROR|INFO) \d{4}/\d{2}/\d{2}'
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.kafka:
  enabled: true
  hosts: ["192.168.0.11:9092","192.168.0.12:9092","192.168.0.13:9092"]
  topic: "test-log"
Note: the multiline settings must be repeated under every log input; a single shared block leaves the other inputs' lines unmerged. (A real gotcha.)
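
For example, with the pattern above, the three lines below (the log text itself is made up for illustration) are shipped to Kafka as a single message: the first line matches the pattern, the indented ones do not, and negate: true plus match: after glue them onto it.

ERROR 2019/03/14 10:00:01 request failed
    java.lang.NullPointerException
        at com.example.Foo.bar(Foo.java:42)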
2. Start Filebeat
----------------------------------
# run in the foreground with a specific config file (the flag is lowercase -c)
./filebeat -e -c filebeat_kafka.yml

# or run in the background
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
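
Note that -e routes Filebeat's own log to stderr, which the nohup line above discards; redirect to a file instead of /dev/null if you want to keep it. A quick liveness check (the brackets keep grep from matching itself):

ps -ef | grep [f]ilebeat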
3. Start Kafka and a consumer
---------------------------------
# start a broker (run on every node)
bin/kafka-server-start.sh config/server.properties

# create the topic (the --zookeeper form used by pre-2.2 Kafka tooling)
./kafka-topics.sh --create --zookeeper s101:2181,s102:2181,s103:2181 --replication-factor 3 --partitions 3 --topic test_0314_01

# consume; --bootstrap-server must point at the Kafka brokers (port 9091 per the config above), not at Zookeeper
./kafka-console-consumer.sh --bootstrap-server s101:9091,s102:9091,s103:9091 --topic test_0314_01 --from-beginning
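
Before pointing Filebeat at the topic, it can be double-checked against the same Zookeeper quorum:

# show the partition and replica assignment of the topic
./kafka-topics.sh --describe --zookeeper s101:2181,s102:2181,s103:2181 --topic test_0314_01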
4. Test
-----------------------------------
Append a line to a log file under one of the configured paths (here /data/log/ from the minimal config); the message should appear on the Kafka consumer side:
echo "aaa" >> /data/log/a.log
5. Log message format
-----------------------------------
Each message carries the following fields:
@timestamp: time the message was sent
beat: host and version information of the Filebeat instance that shipped it
input_type: the input type (log here)
message: the raw log content
offset: offset of this message within the original log file
source: path of the source log file
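
A sketch of what one such message looks like on the consumer side (values are made up, and the exact field set varies with the Filebeat version):

{
  "@timestamp": "2019-03-14T08:00:01.000Z",
  "beat": { "hostname": "s101", "version": "6.6.1" },
  "input_type": "log",
  "message": "aaa",
  "offset": 4,
  "source": "/data/log/a.log"
}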