1. Inspect the log content
2020-08-10 20:19:54 Logout 121.*.*.5 wujian ppp0 1**.*8.4*.1 1*.1*8.*2.1
2. Split the line with grok
%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} %{IP:client} %{USERNAME:username} %{USERNAME:device} %{IP:vlanIP} %{IP:assignIP}
Grok ships with many ready-made patterns that we can use directly (note the timestamp above is in ISO 8601 form, so %{TIMESTAMP_ISO8601} matches it, while %{DATESTAMP} expects day-first/month-first dates and would not).
Official patterns
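As a sanity check, the same split can be reproduced with a plain regular expression. This is only a sketch: the character classes below are simplified stand-ins for grok's stricter %{IP} and %{USERNAME} patterns.

```python
import re

# Simplified stand-ins for the grok patterns above (assumption: fields are
# space-separated tokens; the real %{IP}/%{USERNAME} patterns are stricter).
LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<severity>\w+) "
    r"(?P<client>\S+) "
    r"(?P<username>\S+) "
    r"(?P<device>\S+) "
    r"(?P<vlanIP>\S+) "
    r"(?P<assignIP>\S+)"
)

sample = "2020-08-10 20:19:54 Logout 121.*.*.5 wujian ppp0 1**.*8.4*.1 1*.1*8.*2.1"
fields = LINE.match(sample).groupdict()
print(fields["severity"], fields["username"], fields["device"])  # → Logout wujian ppp0
```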
filebeat
3. Write filebeat.yml
cat >> /etc/filebeat/filebeat.yml <<EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/ipsec.log

output.logstash:
  hosts: ["10.0.0.42:5045"]  # Logstash port
EOF
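For context on the mutate step used later: Filebeat does not ship the raw line alone, but wraps it in a JSON event that carries metadata keys such as host, input, and fields alongside message. A rough sketch of such an event (the values here are illustrative, not captured output):

```python
# Illustrative shape of a Filebeat event (values are made up); the Logstash
# mutate filter in the next step strips the metadata keys we do not need.
event = {
    "@timestamp": "2020-08-10T12:19:54.000Z",
    "message": "2020-08-10 20:19:54 Logout 121.*.*.5 wujian ppp0 1**.*8.4*.1 1*.1*8.*2.1",
    "host": {"name": "vpn-gw"},   # removed by mutate (hostname is hypothetical)
    "input": {"type": "log"},     # removed by mutate
    "fields": {},                 # removed by mutate
}

# Equivalent of: mutate { remove_field => ["host","input","fields"] }
for key in ("host", "input", "fields"):
    event.pop(key, None)

print(sorted(event))  # → ['@timestamp', 'message']
```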
4. Enable filebeat at boot and restart it
systemctl enable filebeat.service
systemctl restart filebeat.service
On the first launch, use start instead of restart.
logstash
5. Write the **.conf file
cat >> /etc/logstash/conf.d/*.conf <<EOF
input {
  beats {
    port => 5045
    host => "0.0.0.0"
  }
}

filter {
  mutate {
    remove_field => ["host","input","fields"]
  }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} %{IP:client} %{USERNAME:username} %{USERNAME:device} %{IP:vlanIP} %{IP:assignIP}" }
    remove_field => "message"  # drop the raw message field once it is parsed
  }
}

output {
  elasticsearch {
    hosts => ["http://10.0.0.41:9200"]
    index => "vpn-%{+YYYY.MM.dd}"  # index name
    manage_template => false
  }
}
EOF
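The index option uses Logstash's date-math syntax: vpn-%{+YYYY.MM.dd} expands to one index per day, based on the event's timestamp. In Python terms (a sketch of the naming scheme, not Logstash's actual formatter):

```python
from datetime import datetime

def index_for(ts: datetime) -> str:
    # Mirrors the "vpn-%{+YYYY.MM.dd}" index pattern: one index per day.
    return ts.strftime("vpn-%Y.%m.%d")

print(index_for(datetime(2020, 8, 10)))  # → vpn-2020.08.10
```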
6. Edit pipelines.yml
vim /etc/logstash/pipelines.yml
- pipeline.id: *  # custom id; best kept identical to the file name under conf.d
  path.config: "/etc/logstash/conf.d/*.conf"  # which config file this pipeline loads
7. Restart logstash
systemctl restart logstash.service
8. Check the logs
tail -f /var/log/logstash/logstash-plain.log  # wait a moment; Logstash is slow to start
9. Open Kibana
The data is now split into separate fields.