1. Download the filebeat build that matches your OS from the official site:
https://www.elastic.co/downloads/beats/filebeat
2. I am on CentOS 7, so I downloaded filebeat-6.5.3-linux-x86_64.tar.gz.
3. Extract the archive and cd into the extracted directory:
tar xzf filebeat-6.5.3-linux-x86_64.tar.gz
cd filebeat-6.5.3-linux-x86_64
4. Configuration (filebeat.yml is the config file)
Configure the input section:
enabled: enables this input
paths: the log file paths to collect (adjust the glob patterns to your own layout)
fields: extra fields attached to every event; here we add a topic name so the output section can tell the different logs apart
The full configuration is shown below (it defines two inputs):
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/pplive/logs/webrtc/common-elk-longcon.log
    #- c:\programdata\elasticsearch\logs\*
  fields: {log_topic: "rtc_longcon_log"}
- type: log
  enabled: true
  paths:
    - /home/pplive/logs/webrtc/common-elk-drive.log
  fields: {log_topic: "rtc_drive_log"}
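With `fields` configured this way, filebeat nests the extra key under `fields` in every event it emits, which is what the output section's `%{[fields.log_topic]}` reference later resolves against. Roughly, an event looks like this (an abridged sketch, not the full event filebeat produces):

```json
{
  "@timestamp": "2019-01-21T11:53:26.762Z",
  "message": "<the raw log line read from the file>",
  "fields": {
    "log_topic": "rtc_longcon_log"
  }
}
```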
5. Configure the output section (output to Kafka; the topic comes from the fields defined on each input):
The codec here is format.string, and %{[message]} expands to the raw log line. Filebeat's default codec is json; we use the string format instead because the logs being collected are already written as JSON by the producing service, so there is no need to wrap them in JSON a second time: they can be shipped through as-is.
#================================ kafka =====================================
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["10.200.24.4:9092", "10.200.24.3:9092", "10.200.24.5:9092"]
  # kafka version
  version: "0.8.2"
  codec.format:
    string: '%{[message]}'
  # message topic selection + partitioning
  topic: '%{[fields.log_topic]}'
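Before starting filebeat for real, its built-in test subcommands (available in the 6.x series) can catch configuration and connectivity problems early. Run them from the extracted directory:

```shell
# Validate filebeat.yml syntax and settings
./filebeat test config -c filebeat.yml
# Check that the configured Kafka output is reachable
./filebeat test output -c filebeat.yml
```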
Codec reference:
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-output-codec.html
6. Start filebeat in the background, recording its PID:
nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 & echo $! > pidfile.txt
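To confirm that events are actually reaching Kafka, consume a few messages from one of the topics with the console consumer that ships with Kafka. The installation path is a placeholder and the broker address is taken from the config above; note that if your cluster really is 0.8.x (rather than just speaking the 0.8.2 protocol to filebeat), the old consumer takes --zookeeper instead of --bootstrap-server:

```shell
# Read a few messages from the topic the first input publishes to
# (adjust /path/to/kafka to wherever your Kafka installation lives)
/path/to/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 10.200.24.4:9092 \
  --topic rtc_longcon_log \
  --from-beginning --max-messages 5
```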
7.查找并杀死filebeat
ps -ef |grep filebeat<br>kill -9 进程号
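Since step 6 already wrote the PID to pidfile.txt, you can skip the ps | grep step and kill by the recorded PID instead. A minimal sketch, using a sleep as a stand-in for the filebeat process:

```shell
# Stand-in for "nohup ./filebeat ... & echo $! > pidfile.txt" from step 6
sleep 300 > /dev/null 2>&1 & echo $! > pidfile.txt
# Stop the process recorded in the pidfile, then clean up
kill "$(cat pidfile.txt)"
rm pidfile.txt
```

This avoids accidentally matching (and killing) the grep process itself or another user's filebeat.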
Adapted from: https://www.jianshu.com/p/229c01447e54
-------------------- Troubleshooting --------------------
1.
2019-01-21T19:53:26.762+0800 INFO kafka/log.go:53 Failed to connect to broker vmmspreapp02:9092: dial tcp: lookup vmmspreapp02 on 10.242.11.97:53: no such host
2019-01-21T19:53:26.762+0800 INFO kafka/log.go:53 producer/broker/0 state change to [closing] because dial tcp: lookup vmmspreapp02 on 10.242.11.97:53: no such host
If you see this error, the Kafka broker is advertising itself by hostname, so you need a hosts entry binding the Kafka machine's hostname to its IP:
10.244.0.116 vmmspreapp02
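Concretely, append the mapping to /etc/hosts on the machine running filebeat (root required) and verify that the name now resolves. The hostname and IP below are the ones from the log excerpt above:

```shell
# Bind the Kafka broker's hostname to its IP (needs root)
echo "10.244.0.116 vmmspreapp02" | sudo tee -a /etc/hosts
# Confirm the hostname now resolves locally
getent hosts vmmspreapp02
```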