1. Redis as the buffering middleware
The configuration follows the official docs: find the page for your version, then look up the Redis settings under the output section. The Filebeat config:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

output.redis:
  hosts: ["172.123.154.13:5002"]
  #password: "my_password"
  key: "nginx_access"
  db: 0
  timeout: 5

setup.kibana:
  host: "172.123.154.13:5601"

setup.ilm.enabled: false
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.overwrite: true
setup.template.enabled: false
After starting Filebeat, check in Redis whether data is actually being written.
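Each event Filebeat pushes onto the Redis list is a JSON string. Because the config sets `json.keys_under_root: true` (with `json.overwrite_keys: true`), the fields of a JSON-formatted nginx log line land at the top level of the event rather than nested under a `json` key. A minimal sketch of the difference, using a made-up sample log line:

```python
import json

# A hypothetical nginx access-log line already written as JSON
line = '{"remote_addr": "10.0.0.1", "request_time": "0.005", "status": "200"}'
fields = json.loads(line)

# Without keys_under_root, the parsed fields would sit under a "json" key:
nested_event = {"json": fields, "tags": ["access"]}

# With json.keys_under_root: true they are merged into the top level of
# the event, which is what the Logstash filter below relies on:
flat_event = {**fields, "tags": ["access"]}

print(flat_event["request_time"])  # fields are directly addressable
```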
Next, the data has to be shipped from Redis into Elasticsearch and displayed in Kibana. The Logstash configuration:
input {
  redis {
    host      => "172.16.158.11"
    port      => 5002
    db        => 0
    key       => "nginx_access"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
  elasticsearch {
    hosts           => "http://172.16.158.11:9203"  # or a list: [ip1:port1, ip2:port2, ip3:port3]
    manage_template => false
    index           => "aaa-log-%{+YYYY.MM.dd}"
  }
}
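The mutate/convert step matters because nginx writes `request_time` and `upstream_time` as strings; left that way, sorting and comparisons behave lexicographically instead of numerically. A tiny illustration of the difference (sample values are made up):

```python
# Strings compare character by character, so "10.0" sorts before "9.0":
assert "10.0" < "9.0"

# After the equivalent of mutate { convert => ["request_time", "float"] },
# the comparison is numeric, as expected:
assert float("10.0") > float("9.0")

times = ["0.005", "0.120", "0.030"]
print(max(float(t) for t in times))  # picks the真正 slowest request: 0.12
```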
The data now flows from Redis into Elasticsearch and shows up in Kibana. Note that once the events have been consumed into Elasticsearch, they no longer remain in Redis: the list only buffers data that has not yet been read.
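That draining behaviour follows from the list semantics: Filebeat pushes events onto one end of the Redis list and the Logstash redis input (with `data_type => "list"`) pops them off the other, so reading removes them. A sketch with a plain Python deque standing in for the `nginx_access` key:

```python
from collections import deque

queue = deque()  # stands in for the Redis list "nginx_access"

# Filebeat side: events are appended to the tail (RPUSH-like)
queue.append('{"status": "200"}')
queue.append('{"status": "404"}')

# Logstash side: the redis input pops from the head (LPOP/BLPOP-like)
event = queue.popleft()

print(event)       # the oldest event comes out first
print(len(queue))  # consumed events are gone from the buffer
```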
2. Kafka as the buffering middleware
The Filebeat configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

output.kafka:
  hosts: ["172.16.158.11:9092", "172.16.158.12:9092", "172.16.158.13:9092"]
  topic: nginxlog

setup.kibana:
  host: "172.16.158.11:5601"
Once requests start arriving, you can see that the topic has been created in Kafka.
The Logstash configuration file:
input {
  kafka {
    bootstrap_servers => "172.16.158.11:9092"
    topics            => ["nginxlog"]
    group_id          => "logstash"
    codec             => "json"
  }
}

output {
  elasticsearch {
    hosts           => "http://172.16.158.11:9203"
    manage_template => false
    index           => "kafka-log-%{+YYYY.MM.dd}"
  }
}
When several kinds of logs share one topic, tag each log type in Filebeat first, and then branch on the tag with an if conditional in Logstash.
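The tag-then-branch idea can be sketched in the Logstash output section like this (the "error" tag and the index names are hypothetical, for illustration only):

```
output {
  if "access" in [tags] {
    elasticsearch {
      hosts => "http://172.16.158.11:9203"
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
  else if "error" in [tags] {
    elasticsearch {
      hosts => "http://172.16.158.11:9203"
      index => "nginx-error-%{+YYYY.MM.dd}"
    }
  }
}
```

Each Filebeat input would set its own `tags: [...]` value, so events from different log files can share the topic yet still land in separate indices.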