1. Install Logstash
Download link: https://pan.baidu.com/s/1U23f6da1Zm6xO5kYXJUfuw (password: lrga)
Note: Logstash requires a JDK; install one yourself first.
2. Write the Logstash configuration file, logstash.conf
input {
  file {
    path => "/root/logs/allOut.log"
  }
}
output {
  stdout {}
}
3. Test Logstash
./logstash -f /root/conf/logstash.conf
Once it has started, append a new line to allOut.log; Logstash will print the new content to the console. At this point Logstash is installed and working.
4. Collect the project's logs and structure the collected entries
The log output format:
[2019-08-02 10:00:30] [78150bde-a2e4-4437-bd8c-8974fde7ec8c] [INFO ] [com.zhujg.kafkatest.Test] [71] [test1] [-] test1-301
Collect logs in this format and send them to Redis:
input {
  file {
    path => "/root/logs/allOut.log"
  }
}
filter {
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:exetime}\] \[%{UUID:businessId}\] \[%{DATA:loglevel}\] \[%{DATA:classname}\] \[%{NUMBER:line}\] \[%{DATA:method}\] \[-\] %{GREEDYDATA:param}"
    }
  }
}
output {
  if [businessId] != '' and "_grokparsefailure" not in [tags] {
    redis {
      data_type => "list"
      host => "127.0.0.1"
      db => "0"
      port => "6379"
      key => "systemlog"
    }
  }
}
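To sanity-check the grok pattern outside Logstash, the extraction can be sketched with Python's re module. This is a rough approximation, not grok itself: %{TIMESTAMP_ISO8601}, %{UUID}, and %{DATA} are simplified to narrower character classes, and the group names mirror the field names in the config above.

```python
import re

# Approximate Python equivalent of the grok pattern in the config above.
LOG_PATTERN = re.compile(
    r"\[(?P<exetime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\] "
    r"\[(?P<businessId>[0-9a-f-]{36})\] "
    r"\[(?P<loglevel>\w+)\s*\] "
    r"\[(?P<classname>[\w.]+)\] "
    r"\[(?P<line>\d+)\] "
    r"\[(?P<method>\w+)\] \[-\] "
    r"(?P<param>.*)"
)

# Sample line taken from the log format shown in step 4.
sample = ("[2019-08-02 10:00:30] [78150bde-a2e4-4437-bd8c-8974fde7ec8c] "
          "[INFO ] [com.zhujg.kafkatest.Test] [71] [test1] [-] test1-301")

m = LOG_PATTERN.match(sample)
print(m.group("businessId"))  # 78150bde-a2e4-4437-bd8c-8974fde7ec8c
print(m.group("param"))       # test1-301
```

An event that fails to match (grok would tag it `_grokparsefailure`) simply returns None here, which corresponds to the conditional in the output block that keeps unparsed events out of Redis.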
5. Handling exception stack traces
Log output also contains exception stack traces. How do we collect those as well? For example:
[2019-08-02 10:00:30] [dba98f63-878f-4da1-ac37-bc8224f23463] [ERROR] [com.zhujg.kafkatest.ThreadTest] [58] [test] [-] error
java.lang.NullPointerException: null
at com.zhujg.kafkatest.ThreadTest.test(ThreadTest.java:58) [classes!/:0.0.1-SNAPSHOT]
at com.zhujg.kafkatest.Thread1.run(ThreadTest.java:26) [classes!/:0.0.1-SNAPSHOT]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Approach: merge the stack-trace lines into the preceding log line, so the exception becomes part of that log entry's message.
Use multiline so Logstash can decide, line by line, whether a line starts a new event or continues the previous one. (Note: newer Logstash versions drop the multiline filter plugin in favor of a multiline codec on the input.)
The reworked configuration:
input {
  file {
    path => "/root/logs/allOut.log"
  }
}
filter {
  multiline {
    pattern => "^\[\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\]"
    negate => true
    what => "previous"
  }
  grok {
    match => {
      "message" => "\[%{TIMESTAMP_ISO8601:exetime}\] \[%{UUID:businessId}\] \[%{DATA:loglevel}\] \[%{DATA:classname}\] \[%{NUMBER:line}\] \[%{DATA:method}\] \[-\] %{GREEDYDATA:param}"
    }
  }
}
output {
  if [businessId] != '' and "_grokparsefailure" not in [tags] {
    redis {
      data_type => "list"
      host => "127.0.0.1"
      db => "0"
      port => "6379"
      key => "systemlog"
    }
  }
}
The pattern checks whether a line begins with a timestamp: if it does, the line starts a new log event; otherwise it is appended to the previous event and treated as part of the same log entry.
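The merge behavior of `negate => true, what => "previous"` can be sketched in plain Python. This is a hypothetical illustration of the semantics, not Logstash internals: any line that does not start with a timestamp is folded into the previous event.

```python
import re

# Same pattern as the multiline filter: a line beginning with a
# timestamp starts a new event.
NEW_EVENT = re.compile(r"^\[\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\]")

def merge_multiline(lines):
    """Fold lines that do NOT match the pattern into the previous event,
    mimicking negate => true, what => "previous"."""
    events = []
    for line in lines:
        if NEW_EVENT.match(line) or not events:
            events.append(line)           # timestamp -> start a new event
        else:
            events[-1] += "\n" + line     # continuation -> append to previous
    return events

# The ERROR line plus its stack trace from the example in step 5.
log_lines = [
    "[2019-08-02 10:00:30] [dba98f63-878f-4da1-ac37-bc8224f23463] "
    "[ERROR] [com.zhujg.kafkatest.ThreadTest] [58] [test] [-] error",
    "java.lang.NullPointerException: null",
    "\tat com.zhujg.kafkatest.ThreadTest.test(ThreadTest.java:58)",
    "\tat java.lang.Thread.run(Thread.java:748)",
]

events = merge_multiline(log_lines)
print(len(events))  # 1: the stack trace is merged into the ERROR line
```

The four input lines collapse into a single event whose message carries the full stack trace, which is exactly what lets the grok filter and the Redis output treat the exception as one log entry.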