Preface
This article focuses on how Logback, ELK, and Spring Boot are wired together to collect logs.
For an introduction to ELK and how to install it, see "ELK (Elasticsearch, Logstash, Kibana) installation":
https://blog.csdn.net/qq_36793589/article/details/114257574 ; that part is not repeated here.
Architecture diagram:
Getting started with the Logback-ELK integration
1: Add the Logback-Logstash integration dependency
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.4</version>
</dependency>
2: Add the Logstash appender configuration to logback.xml
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- IP and port that the Logstash log-collection input listens on -->
    <destination>192.168.44.131:10514</destination>
    <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"appname":"${appName}"}</customFields>
    </encoder>
</appender>
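The ${appName} placeholder and the appender itself still have to be wired into the rest of the logging configuration. A minimal sketch, assuming the file is named logback-spring.xml so that Spring Boot's <springProperty> extension is available (the property name and default value below are illustrative):
<!-- Expose spring.application.name to Logback as ${appName} (requires logback-spring.xml) -->
<springProperty scope="context" name="appName" source="spring.application.name" defaultValue="springboot-app"/>

<!-- Send everything the root logger accepts to the logstash appender -->
<root level="info">
    <appender-ref ref="logstash"/>
</root>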
3: Logstash configuration (logstash-es.conf)
input {
    tcp {
        port => 10514
        codec => "json"
    }
}
output {
    elasticsearch {
        action => "index"
        hosts => ["localhost:9200"]
        index => "%{[appname]}"
    }
}
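Logstash then needs to be (re)started with this pipeline. A minimal sketch, assuming the file was saved in the Logstash installation directory (the path is illustrative):
# Start Logstash with the pipeline definition above
bin/logstash -f logstash-es.conf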
4: Spring Boot test class
import lombok.extern.slf4j.Slf4j;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.context.web.WebAppConfiguration;
@RunWith(SpringRunner.class)
@SpringBootTest
@Slf4j
@WebAppConfiguration
public class LogTest {

    @Test
    public void test() {
        log.trace("trace Logback To ELK test log {}", "hello world");
        log.debug("debug Logback To ELK test log {}", "hello world");
        log.info("info Logback To ELK test log {}", "hello world");
        log.warn("warn Logback To ELK test log {}", "hello world");
        log.error("error Logback To ELK test log {}", "hello world");
    }
}
5: Run the test class and check the log records in Elasticsearch
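If Kibana is not set up yet, the result can also be checked directly against the Elasticsearch REST API. A quick sketch, assuming the default port 9200 and that appname resolved to my-app (an illustrative value):
# List all indices; one named after the appname custom field should appear
curl "http://localhost:9200/_cat/indices?v"
# Look at a few of the collected log documents
curl "http://localhost:9200/my-app/_search?pretty&size=5"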
The logs show up as expected, so the Logback-ELK integration works.
Issues and improvements
Issue 1: @timestamp is not Beijing time
Issue 2: each log record carries too much content; we want to control exactly what gets sent
Issue 3: logs of every level are shipped, which produces far too much output
Improvements addressing the issues above
1: Modify the Logstash appender configuration in logback.xml
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- Threshold filter: drop events below the configured level -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <!-- On top of the logger's own level, filter out everything below warn -->
        <level>warn</level>
    </filter>
    <!-- host:port that the Logstash log-collection input listens on -->
    <destination>192.168.44.131:10514</destination>
    <!-- Log output encoder -->
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
                <pattern>
                    {
                    "logTime":"%d{yyyy-MM-dd HH:mm:ss.SSS}",
                    "logLevel": "%level",
                    "msg": "%message",
                    "service": "${APP_NAME:-}",
                    "trace": "%X{X-B3-TraceId:-}",
                    "span": "%X{X-B3-SpanId:-}",
                    "exportable": "%X{X-Span-Export:-}",
                    "pid": "${PID:-}",
                    "thread": "%thread",
                    "class": "%logger{40}"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
</appender>
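As in the first version, ${APP_NAME:-} does not exist on its own (PID is normally set by Spring Boot's logging system); the application name still has to be exposed, and the appender attached to <root> exactly as before. A minimal sketch, again assuming logback-spring.xml:
<!-- Expose spring.application.name to Logback as ${APP_NAME} (requires logback-spring.xml) -->
<springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="unknown"/>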
2: Modify the Logstash configuration logstash-es.conf and restart Logstash
input {
    tcp {
        mode => "server"
        port => 10514
        codec => "json_lines"
    }
}
filter {
    ruby {
        code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
    }
    ruby {
        code => "event.set('@timestamp',event.get('timestamp'))"
    }
    mutate {
        remove_field => ["timestamp"]
    }
}
output {
    elasticsearch {
        action => "index"
        hosts => ["localhost:9200"]
        index => "springboot-%{+YYYY.MM.dd}"
    }
}
3: Run the Spring Boot test class to generate logs
4: Check the log records in Elasticsearch
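As before, the result can be verified without Kibana. A sketch assuming Elasticsearch on localhost:9200:
# Confirm that today's date-based index was created
curl "http://localhost:9200/_cat/indices/springboot-*?v"
# Inspect the custom fields (logTime, logLevel, msg, ...) of one stored event
curl "http://localhost:9200/springboot-*/_search?pretty&size=1"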
Note 1:
filter {
    ruby {
        code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
    }
    ruby {
        code => "event.set('@timestamp',event.get('timestamp'))"
    }
    mutate {
        remove_field => ["timestamp"]
    }
}
These filters shift @timestamp forward by eight hours so that it reflects Beijing time (UTC+8), which resolves issue 1.
Note 2:
<pattern>
    {
    "logTime":"%d{yyyy-MM-dd HH:mm:ss.SSS}",
    "logLevel": "%level",
    "msg": "%message",
    "service": "${APP_NAME:-}",
    "trace": "%X{X-B3-TraceId:-}",
    "span": "%X{X-B3-SpanId:-}",
    "exportable": "%X{X-Span-Export:-}",
    "pid": "${PID:-}",
    "thread": "%thread",
    "class": "%logger{40}"
    }
</pattern>
This pattern defines exactly which fields are sent to Logstash and on to Elasticsearch, which resolves issue 2.
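The %X{...} conversions read from SLF4J's MDC, so the trace/span/exportable fields are only populated when something puts those keys there (tracing libraries such as Spring Cloud Sleuth can do this automatically). Purely as an illustration, with made-up values, they can also be set by hand:
import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;

@Slf4j
public class MdcDemo {
    public static void main(String[] args) {
        // Keys put into the MDC become available to %X{...} in the encoder pattern
        MDC.put("X-B3-TraceId", "d6f2a3b1c4e5f607"); // illustrative value
        MDC.put("X-B3-SpanId", "a1b2c3d4e5f60708");  // illustrative value
        try {
            log.warn("order service timed out");     // "trace" and "span" fields are filled
        } finally {
            MDC.clear();                             // always clean up thread-local MDC state
        }
    }
}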
Note 3:
<!-- Threshold filter: drop events below the configured level -->
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <!-- On top of the logger's own level, filter out everything below warn -->
    <level>warn</level>
</filter>
This sets the threshold below which events are not sent to Logstash and Elasticsearch: with warn configured, only the warn and error records from the test class are shipped, which resolves issue 3.
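ThresholdFilter keeps everything at or above the configured level. If only one specific level should be shipped instead, Logback's LevelFilter is an alternative; a sketch that forwards error events only:
<filter class="ch.qos.logback.classic.filter.LevelFilter">
    <level>ERROR</level>
    <onMatch>ACCEPT</onMatch>
    <onMismatch>DENY</onMismatch>
</filter>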
The approach above (Spring Boot + Logback + Logstash) writes logs directly from the application to Logstash. An alternative is to have a shipper tail the log files on disk. Note that Logstash needs considerably more memory than Filebeat, and collecting logs over TCP with Logstash becomes a performance bottleneck under high concurrency. So while the setup shown here is the simplest, for production a Filebeat + ELK pipeline is recommended, and larger systems can add Kafka as a buffer in between. These more advanced setups will be covered in detail in later articles.