Syncing log data to Kafka via logback.xml

I. The requirement: while writing logs as usual, also push a copy of each log message to Kafka, with the constraint that a Kafka failure must not affect the original business flow.

1. I searched online for a long time. My first plan was to do it with log4j, for which there are plenty of examples, but our volume is large (over a hundred million messages a day), and KafkaLog4jAppender relies on the producer setting max.block.ms, which defaults to 60s: before Kafka surfaces an exception, the logging call can block for up to 60 seconds. That setting cannot be changed from the log4j configuration file, so I gave up on this approach.
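The blocking behavior at issue can be illustrated without Kafka at all: a producer whose send buffer is full behaves like a bounded queue, and max.block.ms is how long the caller is willing to wait for space before giving up. The sketch below uses a plain java.util.concurrent queue as a stand-in for the producer; it illustrates the semantics only and is not the actual kafka-clients API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class MaxBlockDemo {
    // Tries to hand a record to a bounded buffer, waiting at most timeoutMs for space.
    // timeoutMs plays the role of the producer's max.block.ms.
    static boolean tryEnqueue(ArrayBlockingQueue<String> buffer, String record, long timeoutMs)
            throws InterruptedException {
        return buffer.offer(record, timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // A full one-slot buffer stands in for a producer whose broker is unreachable.
        ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(1);
        buffer.put("stuck-record");

        // With a positive timeout (log4j's default would be 60_000 ms), the caller stalls.
        long start = System.nanoTime();
        boolean sent = tryEnqueue(buffer, "log-line", 200);
        long waitedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("with a timeout: sent=" + sent + " after ~" + waitedMs + "ms of blocking");

        // With max.block.ms=0 semantics, the line is dropped and the caller never blocks.
        boolean sentNow = tryEnqueue(buffer, "log-line", 0);
        System.out.println("with max.block.ms=0 semantics: sent=" + sentNow + ", caller returned immediately");
    }
}
```

With business threads logging on the hot path, a 60-second stall per call is unacceptable, which is why the logback configuration below sets max.block.ms=0 and accepts message loss instead.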

2. I later found a logback-based setup that fully meets our requirements, so I am writing it down here for the record.

II. The details follow; the reference link is at the end.

1. First add the Maven dependencies; the logback version must match the one your project already uses.

pom.xml:
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

2. The logback.xml content is as follows:

<configuration debug="true" scan="true">
<!-- <property scope="context" name="path" value="D:/" /> -->

	<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
		<encoder>
			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
		</encoder>
		
	</appender>
	<appender name="DEFAULT"
		class="ch.qos.logback.core.rolling.RollingFileAppender">
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>../logs/busi.%d{yyyy-MM-dd}.log.gz
			</fileNamePattern>
			<maxHistory>30</maxHistory>
		</rollingPolicy>
		<encoder>
			<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
		</encoder>
	</appender>
	
	<appender name="mylog"
		class="ch.qos.logback.core.rolling.RollingFileAppender">
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<fileNamePattern>../logs/my.%d{yyyy-MM-dd}.log.gz
			</fileNamePattern>
			<maxHistory>90</maxHistory>
		</rollingPolicy>
		<encoder>
			<pattern>%msg%n</pattern>
		</encoder>
	</appender>
	
	   <!-- This is the kafkaAppender -->
     <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%msg%n</pattern>
        </encoder>
        <topic>my_topic</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9021</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch.  -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>

        <!-- fallback appender: if this appender cannot deliver to Kafka, events are handed to DEFAULT instead of being lost -->
  		<appender-ref ref="DEFAULT" />
    </appender>
	
	
	
	<logger name="kafkaLog" additivity="false">
		<level value="info" />
		<appender-ref ref="mylog"/>
		<appender-ref ref="kafkaAppender"/>
	</logger>



	<root level="info">
		<appender-ref ref="STDOUT" />
	</root>
</configuration>
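One detail worth calling out in the configuration above: additivity="false" on the kafkaLog logger means its events go only to mylog and kafkaAppender and never bubble up to the root logger's STDOUT appender. The JDK's own java.util.logging has the same concept via setUseParentHandlers, which this self-contained sketch (not logback itself) uses to illustrate the flag:

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class AdditivityDemo {
    // Records every message a handler receives, so we can see where events end up.
    static class Recorder extends Handler {
        final StringBuilder seen = new StringBuilder();
        @Override public void publish(LogRecord r) { seen.append(r.getMessage()); }
        @Override public void flush() {}
        @Override public void close() {}
    }

    // Logs once through a child logger and returns what the parent's handler observed.
    static String logThroughChild(boolean bubbleUp) {
        Logger parent = Logger.getLogger("demo.parent." + bubbleUp);
        Logger child = Logger.getLogger("demo.parent." + bubbleUp + ".child");
        Recorder recorder = new Recorder();
        parent.addHandler(recorder);
        parent.setUseParentHandlers(false); // keep the demo off the JVM's default console handler
        child.setUseParentHandlers(bubbleUp); // JUL's equivalent of logback's additivity flag
        child.setLevel(Level.INFO);
        child.info("hello");
        return recorder.seen.toString();
    }

    public static void main(String[] args) {
        System.out.println("additivity=true  -> parent saw: '" + logThroughChild(true) + "'");
        System.out.println("additivity=false -> parent saw: '" + logThroughChild(false) + "'");
    }
}
```

In the logback config this is exactly why a kafkaLog event does not also appear on STDOUT: with additivity off, the event stops at the kafkaLog logger's own appenders.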

 

3. Test: start the project and run main.

import java.text.SimpleDateFormat;
import java.util.Date;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestKafkaLog {
	// Look up the named logger configured in logback.xml.
	static Logger logger = LoggerFactory.getLogger("kafkaLog");

	public static void main(String[] args) throws InterruptedException {
		int i = 0;
		logger.info(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS").format(new Date()) + "-----------start------------");
		while (true) {
			// One message per second; each line should appear in ../logs/my.*.log and on the Kafka topic.
			logger.info(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS").format(new Date()) + "=========== {}", i++);
			Thread.sleep(1000);
		}
	}
}
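One small improvement to the test, as a suggestion rather than part of the original post: allocating a new SimpleDateFormat on every log call is wasteful, and SimpleDateFormat instances are not safe to share across threads. The java.time.DateTimeFormatter introduced in Java 8 is immutable, so a single shared constant works:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampDemo {
    // Immutable and thread-safe, unlike SimpleDateFormat, so it can be a shared constant.
    static final DateTimeFormatter TS = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");

    static String stamp(LocalDateTime t) {
        return TS.format(t);
    }

    public static void main(String[] args) {
        // A fixed instant keeps the demo deterministic.
        System.out.println(stamp(LocalDateTime.of(2020, 1, 2, 3, 4, 5, 678_000_000)));
        // prints 2020-01-02 03:04:05.678
    }
}
```

In the test above you would then write `logger.info(TS.format(LocalDateTime.now()) + "=========== {}", i++)` instead of constructing a formatter per call.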

 

Reference: https://github.com/danielwegener/logback-kafka-appender

 
