Whenever Log4j2 optimization comes up, someone is bound to say: just switch to asynchronous logging.
I have been doing load testing at work recently. One project logs a very large volume, and throughput would not climb; analyzing stack snapshots showed the threads were blocking on I/O while writing logs. I searched online and found plenty of articles on tuning the Log4j2 configuration, but when actually verified, their advice diverged badly from reality. So there was nothing for it but to test and analyze it myself.
Conclusions first:
- With async logging enabled, if logs are produced faster than they are consumed, async can end up slower than synchronous logging. The official manual explains why (https://logging.apache.org/log4j/2.x/manual/async.html): when producers outpace the consumer and the RingBuffer fills up, Log4j2 falls back to a synchronous mode to reduce CPU usage, and the extra context switching then makes it slower than plain synchronous logging.
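If you do run into that queue-full fallback, Log4j2 exposes a policy to drop low-value events instead of blocking. A `log4j2.component.properties` sketch (property names per the Log4j2 2.x manual; worth verifying against your version):

```properties
# When the async RingBuffer is full, discard events instead of synchronously blocking
log4j2.asyncQueueFullPolicy=Discard
# Only events at this level or less severe are discarded
log4j2.discardThreshold=INFO
```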
- In Log4j2 both Logger objects and Appender objects can be made asynchronous, but if you naturally assume that combining them gives you 1 + 1 > 2, you may be in for a surprise. On reflection this is easy to understand: the Logger records the event and the Appender writes it to its destination; as long as one of the two buffers, throughput improves, and stacking a second layer only adds context-switching overhead. The official recommendation is AsyncLogger. See: https://newbedev.com/difference-between-asynclogger-and-asyncappender-in-log4j2
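For completeness: the usual way to switch every Logger to the async path without touching the XML is the context-selector property from the Log4j2 async manual. A `log4j2.component.properties` sketch:

```properties
# Route all loggers through the async (Disruptor-backed) path
log4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
```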
- Disabling immediateFlush has one drawback: the time a log line lands on disk no longer reflects when the event actually occurred, because entries wait in the buffer until it fills and is flushed in one batch.
- Both disabling immediateFlush and enabling async logging can lose log entries if the application dies unexpectedly.
- If the logs are fed into ES for centralized analysis, consider having an Appender write straight to an MQ, skipping the write-to-file-then-harvest-into-ES step.
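As a sketch of the direct-to-MQ idea: Log4j2 ships a Kafka appender, so something like the following (the topic name and broker address are placeholders) can replace the file-then-collector pipeline. Note that the Kafka appender itself sends synchronously, so in practice it is usually placed behind an async logger:

```xml
<Kafka name="kafka_log" topic="app-logs">
    <PatternLayout
        pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
    <!-- placeholder broker list; point this at your own cluster -->
    <Property name="bootstrap.servers">kafka1:9092</Property>
</Kafka>
```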
- Tuning approach: keep Root synchronous, and split Logger objects by function as finely as practical (root, info, error, remote, access, metrics, cache, mq). info and error can stay synchronous; the rest can use AsyncLogger.
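A sketch of that split (the logger names and appender refs here are placeholders; map them to your real packages and appenders):

```xml
<loggers>
    <!-- synchronous loggers for info and error -->
    <Logger name="info" level="info" additivity="false">
        <AppenderRef ref="log_file" />
    </Logger>
    <Logger name="error" level="error" additivity="false">
        <AppenderRef ref="error_file" />
    </Logger>
    <!-- async loggers for the high-volume functional logs -->
    <AsyncLogger name="access" level="info" additivity="false" includeLocation="false">
        <AppenderRef ref="access_file" />
    </AsyncLogger>
    <AsyncLogger name="metrics" level="info" additivity="false" includeLocation="false">
        <AppenderRef ref="metrics_file" />
    </AsyncLogger>
    <!-- Root stays synchronous -->
    <Root level="info">
        <AppenderRef ref="log_file" />
    </Root>
</loggers>
```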
- If log volume is very high and logs are produced faster than they can be written to file, a synchronous Logger with immediateFlush="false" is still the more efficient option; better yet, write the logs directly to Kafka.
Log4j2 enables bufferedIO by default, but with immediateFlush="true" (also the default) it still flushes to disk once per log entry, as the flush call in the appender's write path in the Log4j2 source shows. Even so, bufferedIO is still faster than a bare FileOutputStream, even with batched flushing turned off.
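The cost of that per-event flush is easy to reproduce outside Log4j2. A minimal JDK-only sketch (illustrative code, not the Log4j2 source): it writes identical log lines through an 8 KB BufferedOutputStream, flushing after every line versus only when the stream closes:

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushDemo {

    /** Writes `lines` identical log lines through an 8 KB buffer.
     *  flushEach=true mimics immediateFlush="true": one flush per event. */
    static long write(Path path, int lines, boolean flushEach) throws IOException {
        byte[] line = "2021-01-01 00:00:00.000 [t] INFO demo - msg\n"
                .getBytes(StandardCharsets.UTF_8);
        long start = System.nanoTime();
        try (BufferedOutputStream out =
                     new BufferedOutputStream(new FileOutputStream(path.toFile()), 8192)) {
            for (int i = 0; i < lines; i++) {
                out.write(line);
                if (flushEach) {
                    out.flush(); // force the write syscall per event
                }
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("flush-each", ".log");
        Path b = Files.createTempFile("flush-batch", ".log");
        long perEvent = write(a, 100_000, true);
        long batched = write(b, 100_000, false);
        System.out.printf("flush per event: %d ms, batched flush: %d ms%n",
                perEvent / 1_000_000, batched / 1_000_000);
    }
}
```

On most machines the batched variant finishes markedly faster; that gap is what immediateFlush="false" buys.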
Benchmarks
Benchmark code
Benchmarking with 10 JMeter threads
Synchronous Logger + bufferedIO (on by default)
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="OFF">
<properties>
<property name="logPath">log</property>
<property name="logName">userValidator-rest</property>
<property name="errPath">log/err</property>
<property name="errName">userValidator-error</property>
</properties>
<appenders>
<RollingFile name="log_file" filename="${logPath}/${logName}.log"
filepattern="${logPath}/history-%i-${logName}.log.gz" >
<Filters>
<ThresholdFilter level="error" onMatch="DENY"
onMismatch="NEUTRAL" />
<ThresholdFilter level="debug" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
<RollingFile name="error_file" filename="${errPath}/${errName}.log"
filepattern="${errPath}/history-%i-${logName}_error.log.gz" >
<Filters>
<ThresholdFilter level="error" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
</appenders>
<loggers>
<Logger name="cn.cmvideo" level="debug" additivity="false" includeLocation="false" >
<AppenderRef ref="log_file" />
<AppenderRef ref="error_file" />
</Logger>
</loggers>
</configuration>
Benchmark results:
Async AsyncLogger + bufferedIO (on by default)
Change the Logger to an AsyncLogger:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="OFF">
<properties>
<property name="logPath">log</property>
<property name="logName">userValidator-rest</property>
<property name="errPath">log/err</property>
<property name="errName">userValidator-error</property>
</properties>
<appenders>
<RollingFile name="log_file" filename="${logPath}/${logName}.log"
filepattern="${logPath}/history-%i-${logName}.log.gz" >
<Filters>
<ThresholdFilter level="error" onMatch="DENY"
onMismatch="NEUTRAL" />
<ThresholdFilter level="debug" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
<RollingFile name="error_file" filename="${errPath}/${errName}.log"
filepattern="${errPath}/history-%i-${logName}_error.log.gz" >
<Filters>
<ThresholdFilter level="error" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
</appenders>
<loggers>
<AsyncLogger name="cn.cmvideo" level="debug" additivity="false" includeLocation="false" >
<AppenderRef ref="log_file" />
<AppenderRef ref="error_file" />
</AsyncLogger>
</loggers>
</configuration>
Benchmark results:
Synchronous Logger + bufferedIO (on by default) + immediateFlush="false"
Building on the synchronous Logger + bufferedIO setup, immediateFlush="false" is added on the appenders to enable batched flushing. A log entry no longer hits the disk immediately; the appender flushes once per full buffer, according to bufferSize (configurable; the default is 8 KB).
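For instance, raising the buffer above the 8 KB default is a single attribute on the appender (262144 is just an illustrative value; bufferSize only takes effect while bufferedIO is on):

```xml
<RollingFile name="log_file" filename="${logPath}/${logName}.log"
    filepattern="${logPath}/history-%i-${logName}.log.gz"
    immediateFlush="false" bufferSize="262144">
    <PatternLayout
        pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
</RollingFile>
```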
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="OFF">
<properties>
<property name="logPath">log</property>
<property name="logName">userValidator-rest</property>
<property name="errPath">log/err</property>
<property name="errName">userValidator-error</property>
</properties>
<appenders>
<!--<Console name="Console" target="SYSTEM_OUT">
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
</Console>-->
<RollingFile name="log_file" filename="${logPath}/${logName}.log"
filepattern="${logPath}/history-%i-${logName}.log.gz" immediateFlush="false" >
<Filters>
<ThresholdFilter level="error" onMatch="DENY"
onMismatch="NEUTRAL" />
<ThresholdFilter level="debug" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
<RollingFile name="error_file" filename="${errPath}/${errName}.log"
filepattern="${errPath}/history-%i-${logName}_error.log.gz" immediateFlush="false" >
<Filters>
<ThresholdFilter level="error" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
</appenders>
<loggers>
<Logger name="cn.cmvideo" level="debug" additivity="false" includeLocation="false" >
<AppenderRef ref="log_file" />
<AppenderRef ref="error_file" />
</Logger>
</loggers>
</configuration>
Benchmark results:
Async AsyncLogger + bufferedIO (on by default) + immediateFlush="false"
Performance is somewhat lower than with AsyncLogger alone. My reading: immediateFlush="false" has already smoothed out the I/O blocking with its buffer, so an async layer on top only adds a context switch without absorbing any more of the peak. And disabling immediateFlush has the drawback that the time a log line lands on disk no longer reflects when the event actually occurred, because entries wait in the buffer until it fills and is flushed in one batch.
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="OFF">
<properties>
<property name="logPath">log</property>
<property name="logName">userValidator-rest</property>
<property name="errPath">log/err</property>
<property name="errName">userValidator-error</property>
</properties>
<appenders>
<!--<Console name="Console" target="SYSTEM_OUT">
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
</Console>-->
<RollingFile name="log_file" filename="${logPath}/${logName}.log"
filepattern="${logPath}/history-%i-${logName}.log.gz" immediateFlush="false" >
<Filters>
<ThresholdFilter level="error" onMatch="DENY"
onMismatch="NEUTRAL" />
<ThresholdFilter level="debug" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
<RollingFile name="error_file" filename="${errPath}/${errName}.log"
filepattern="${errPath}/history-%i-${logName}_error.log.gz" immediateFlush="false" >
<Filters>
<ThresholdFilter level="error" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingFile>
</appenders>
<loggers>
<AsyncLogger name="cn.cmvideo" level="debug" additivity="false" includeLocation="false" >
<AppenderRef ref="log_file" />
<AppenderRef ref="error_file" />
</AsyncLogger>
</loggers>
</configuration>
Benchmark results:
Async AsyncLogger + bufferedIO (on by default) + immediateFlush="false" + RollingRandomAccessFileAppender
Change the Appender to a RollingRandomAccessFile:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="OFF">
<properties>
<property name="logPath">log</property>
<property name="logName">userValidator-rest</property>
<property name="errPath">log/err</property>
<property name="errName">userValidator-error</property>
</properties>
<appenders>
<RollingRandomAccessFile name="log_file" filename="${logPath}/${logName}.log"
filepattern="${logPath}/history-%i-${logName}.log.gz" immediateFlush="false" >
<Filters>
<ThresholdFilter level="error" onMatch="DENY"
onMismatch="NEUTRAL" />
<ThresholdFilter level="debug" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingRandomAccessFile>
<RollingRandomAccessFile name="error_file" filename="${errPath}/${errName}.log"
filepattern="${errPath}/history-%i-${logName}_error.log.gz" immediateFlush="false" >
<Filters>
<ThresholdFilter level="error" onMatch="ACCEPT"
onMismatch="DENY" />
</Filters>
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="250 MB" />
</Policies>
<DefaultRolloverStrategy max="20" />
</RollingRandomAccessFile>
</appenders>
<loggers>
<AsyncLogger name="cn.cmvideo" level="debug" additivity="false" includeLocation="false" >
<AppenderRef ref="log_file" />
<AppenderRef ref="error_file" />
</AsyncLogger>
</loggers>
</configuration>
Benchmark results:
Benchmarking with 30 JMeter threads
Async AsyncLogger + bufferedIO (on by default) + immediateFlush="false" + RollingRandomAccessFileAppender
Unstable: TPS starts out high, then gradually slides.
Throughput in the first 20 s is very high:
then it gradually drops; final figures:
Async AsyncLogger + bufferedIO (on by default) + immediateFlush="false"
Unstable: TPS starts out high, then gradually slides.
Throughput in the first 20 s is very high:
then it gradually drops; final figures:
Async AsyncLogger + bufferedIO (on by default)
Synchronous Logger + bufferedIO (on by default) + immediateFlush="false"
If log volume is very high and logs are produced faster than they can be written to file, a synchronous Logger with immediateFlush="false" is still the more efficient option; better yet, write the logs directly to Kafka.