In real-world web projects, logs are typically handled in one of several ways: written directly to files, shipped to an ELK stack, or written straight to Kafka. In this post we walk through the hands-on setup for writing logs directly to Kafka:
1. Add the dependencies to the pom file:
<dependency>
<groupId>com.github.danielwegener</groupId>
<artifactId>logback-kafka-appender</artifactId>
<version>0.2.0-RC1</version>
</dependency>
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>6.4</version>
</dependency>
2. Add the Kafka addresses to the configuration file:
spring:
  kafka:
    log:
      bootstrap-servers: nandao-01.com:9092,nandao-02.com:9092
    topic:
      log: applog-nd
Note: put this configuration in a file that is loaded with high priority, such as bootstrap.yml. At startup, logback-spring.xml is initialized early — after bootstrap.yml, but before the properties pulled from a config center. If this configuration lives only in a config-center file, the springProperty lookups resolve to null when logback initializes, and the service fails with NullPointerException-style errors at startup — in practice it simply cannot reach the brokers and keeps throwing connection errors. Test this against your own service and the ordering becomes obvious.
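The loading order above can be sketched as a toy overlay of property sources. This is plain Python, not Spring internals; the source names and values are illustrative only:

```python
# Toy model of Spring property-source ordering (illustrative, not Spring code).
# logback-spring.xml is initialized after bootstrap.yml is loaded but before
# config-center properties arrive, so only the earlier sources are visible to it.

sources_loaded_before_logback = [
    # bootstrap.yml
    {"spring.kafka.log.bootstrap-servers": "nandao-01.com:9092,nandao-02.com:9092"},
]
sources_loaded_after_logback = [
    # config center — arrives too late for springProperty resolution
    {"spring.kafka.log.bootstrap-servers": "other-host:9092"},
]

def resolve(key, sources):
    """Return the first value found, mimicking a springProperty lookup."""
    for src in sources:
        if key in src:
            return src[key]
    return None  # logback sees null -> connection errors at startup

visible = resolve("spring.kafka.log.bootstrap-servers", sources_loaded_before_logback)
print(visible)
```

If the key only exists in the late-loaded sources, the lookup returns nothing at logback-init time — which is exactly the failure mode described above.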
3. Add the configuration to logback-spring.xml:
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<include resource="org/springframework/boot/logging/logback/console-appender.xml" />
<!-- The two includes above bring in Spring Boot's default logging conversion rules and its console appender -->
<springProperty scope="context" name="KAFKA_SERVERS" source="spring.kafka.log.bootstrap-servers"/>
<springProperty scope="context" name="KAFKA_TOPIC" source="spring.kafka.topic.log"/>
<!-- The full test-environment profile is reproduced here; it includes the Kafka output configuration -->
<springProfile name="test">
<property name="LOG_FILE_HOME" value="../logs/hbchat"/>
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %level [%thread] %logger{5}[%L] %msg%n</pattern>
</encoder>
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
</appender>
<appender name="FILE_INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>${LOG_FILE_HOME}/${LOG_NAME_PREFIX}-info.log</File>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %level [%thread] %logger{5}[%L] %msg%n</pattern>
</encoder>
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${LOG_FILE_HOME}/backup/${LOG_NAME_PREFIX}-info.%d{yyyy-MM-dd}.log
</FileNamePattern>
</rollingPolicy>
</appender>
<appender name="FILE_ERROR"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>${LOG_FILE_HOME}/${LOG_NAME_PREFIX}-error.log</File>
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %level [%thread] %logger{5}[%L] %msg%n</pattern>
</encoder>
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>ERROR</level>
</filter>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${LOG_FILE_HOME}/backup/${LOG_NAME_PREFIX}-error.%d{yyyy-MM-dd}.log
</FileNamePattern>
</rollingPolicy>
</appender>
<!-- Kafka appender configuration -->
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers class="net.logstash.logback.composite.loggingevent.LoggingEventJsonProviders">
<pattern>
<pattern>
{"app":"${APP}",
"profile":"${PROFILES_ACTIVE}",
"thread": "%thread",
"logger": "%logger{5}",
"message":"%msg",
"app_name":"${APP_NAME}",
"env_name":"${ENV_NAME}",
"hostname":"${HOSTNAME}",
"captain_seq":"${CAPTAIN_SEQ}",
"captain_gen":"${CAPTAIN_GEN}",
"build_name":"${BUILD_NAME}",
"build_git_version":"${BUILD_GIT_VERSION}",
"build_git_hash":"${BUILD_GIT_HASH}",
"build_timestamp":"${BUILD_TIMESTAMP}",
"date":"%d{yyyy-MM-dd HH:mm:ss.SSS}",
"level":"%level",
"stack_trace":"%exception"
}
</pattern>
</pattern>
</providers>
</encoder>
<topic>${KAFKA_TOPIC}</topic>
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
<producerConfig>bootstrap.servers=${KAFKA_SERVERS}</producerConfig>
<producerConfig>retries=1</producerConfig>
<producerConfig>batch.size=16384</producerConfig>
<producerConfig>buffer.memory=33554432</producerConfig>
<producerConfig>max.request.size=2097152</producerConfig>
<!-- Fallback appender: events that cannot be delivered to Kafka are routed here -->
<appender-ref ref="CONSOLE"/>
</appender>
<root level="INFO">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE_INFO"/>
<appender-ref ref="FILE_ERROR"/>
<appender-ref ref="kafkaAppender"/>
</root>
</springProfile>
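With this encoder, every log event lands on the topic as a single JSON document shaped by the pattern above. A minimal consumer-side sketch of handling one such record, using only Python's stdlib json — the sample payload and its field values are hypothetical, modeled on the configured pattern:

```python
import json

# Hypothetical record, shaped like the JSON pattern configured in the encoder.
raw = (
    '{"app":"hbchat","profile":"test","thread":"main",'
    '"logger":"c.n.d.Service","message":"order created",'
    '"date":"2023-05-01 12:00:00.123","level":"INFO","stack_trace":""}'
)

record = json.loads(raw)
# Downstream consumers (e.g. a Logstash pipeline or custom indexer)
# can now route or filter on structured fields instead of parsing text.
print(record["app"], record["level"], record["message"])
```

Because the fields are structured at the source, there is no grok-style parsing step on the consumer side — one reason to prefer a JSON encoder over a plain pattern layout when shipping to Kafka.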
4. This is a standard, production-style configuration; run it through your own tests a few times and it will quickly become second nature. That concludes this walkthrough. In later posts we will cover using Kafka in business logic, its configuration, and flow control — stay tuned!