Configuration file
input {
  kafka {
    bootstrap_servers => ["10.4.1.2:9092"]
    client_id => "test1"
    topics => ["ep5-ep.sa.log"]
    auto_offset_reset => "earliest"
    codec => "json"
    type => "ee"
  }
  kafka {
    bootstrap_servers => ["10.4.1.3:9092"]
    client_id => "test2"
    topics => ["log-test1"]
    auto_offset_reset => "earliest"
    codec => "json"
    type => "pp"
  }
}
output {
  if [type] == "ee" {
    elasticsearch {
      hosts => ["10.4.1.1:9200"]
      index => "ep5-ep.sa.log"
    }
  }
  if [type] == "pp" {
    elasticsearch {
      hosts => ["10.4.1.1:9200"]
      index => "log-test1"
    }
  }
}
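The two output conditionals route each event to an index by its `type` field; events matching neither branch are written nowhere. A minimal Python sketch of that routing logic (illustrative only — the function and dict below are hypothetical, not part of Logstash):

```python
# Hypothetical model of the output conditionals above.
INDEX_BY_TYPE = {
    "ee": "ep5-ep.sa.log",
    "pp": "log-test1",
}

def pick_index(event: dict):
    # An event whose type matches neither branch falls through:
    # it is sent to no elasticsearch output at all.
    return INDEX_BY_TYPE.get(event.get("type"))

print(pick_index({"type": "ee"}))  # ep5-ep.sa.log
print(pick_index({"type": "xx"}))  # None
```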
Startup
nohup ./bin/logstash -f config/kafka-logstash-es.conf > logstash.out 2>&1 &
Filebeat
Find the process: ps -ef | grep filebeat
Start: nohup ./filebeat -e -c filebeat.yml &
# ==== Filebeat inputs
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/perjar/*.log

# ==== Outputs
output.kafka:
  enabled: true
  hosts: ["10.4.1.1:19092"]
  topic: test
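Filebeat wraps each log line in a JSON envelope before publishing it to Kafka, which is what the `codec => "json"` on the Logstash kafka input decodes back into event fields. A rough sketch of that round trip (the field set shown is an assumption — real Filebeat adds more metadata such as host and beat info):

```python
import json

# Assumed rough shape of a Filebeat -> Kafka payload; real payloads
# carry additional metadata fields.
payload = json.dumps({
    "@timestamp": "2019-01-01T00:00:00.000Z",
    "message": "app started",
    "source": "/usr/perjar/app.log",
})

# Logstash's `codec => "json"` decodes the payload back into
# top-level event fields.
event = json.loads(payload)
print(event["message"])  # app started
```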
Pitfalls
- Logstash reads a JSON string from Kafka but nothing is written to ES, and there is no error message. The cause is a record in which one field ends up with two competing values. For example, with the following kafka input in the Logstash config:
  kafka {
    bootstrap_servers => ["x.x.x.x:9092"]
    client_id => "test2"
    topics => ["log-test"]
    auto_offset_reset => "earliest"
    codec => "json"
    type => "aaa"
  }
  The JSON string written to Kafka: {"thread": "1234", "type": "bbb"}
  Logstash then cannot write this record to ES: the `type => "aaa"` set on the kafka input does not override a "type" already present in the decoded JSON, so the event keeps type "bbb", matches neither output conditional, and is dropped silently. The JSON payload should not carry its own "type" field (or the pipeline should route on a different field).
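A sketch of why the record disappears: the input's `type` is only applied when the decoded JSON does not already carry one, so the payload's "bbb" wins and no output branch matches. The helper below is hypothetical, modeling that precedence, not Logstash code:

```python
import json

# Hypothetical model: the kafka input's `type` acts as a default,
# so a "type" key inside the JSON payload takes precedence.
def is_routed(raw: str, input_type: str) -> bool:
    event = json.loads(raw)
    event.setdefault("type", input_type)
    # Only events whose final type is "aaa" match the output branch.
    return event["type"] == "aaa"

print(is_routed('{"thread": "1234"}', "aaa"))                 # True
print(is_routed('{"thread": "1234", "type": "bbb"}', "aaa"))  # False
```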
2.