A previous post showed how to use log4j to split log files by date and size at the same time. Since then I have looked into logback, log4j's successor, and the same date-plus-size rolling can be achieved with it as well. Without further ado, here is the configuration file:
<configuration>
  <appender name="ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>mylog.txt</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <!-- rollover daily -->
      <fileNamePattern>mylog-%d{yyyy-MM-dd}.%i.txt</fileNamePattern>
      <!-- each file should be at most 100MB, keep 60 days worth of history, but at most 20GB -->
      <maxFileSize>100MB</maxFileSize>
      <maxHistory>60</maxHistory>
      <totalSizeCap>20GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="ROLLING" />
  </root>
</configuration>
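To exercise this configuration, a minimal driver class can be used (a sketch, assuming slf4j-api and logback-classic are on the classpath and the file above is saved as logback.xml on the classpath; the class name RollingDemo is illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// With logback-classic on the classpath, logback picks up logback.xml
// automatically and routes these calls through the ROLLING appender.
public class RollingDemo {
    private static final Logger log = LoggerFactory.getLogger(RollingDemo.class);

    public static void main(String[] args) {
        for (int i = 0; i < 100_000; i++) {
            // DEBUG statements pass the root level configured above
            log.debug("test message {}", i);
        }
    }
}
```

Writing enough messages to exceed 100MB in one day would produce mylog-2017-01-01.0.txt, mylog-2017-01-01.1.txt, and so on, thanks to the %i index in the fileNamePattern.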
This is copied straight from the official documentation: https://logback.qos.ch/manual/appenders.html
Sometimes you may wish to archive files essentially by date but at the same time limit the size of each log file, in particular if post-processing tools impose size limits on the log files. In order to address this requirement, logback ships with SizeAndTimeBasedRollingPolicy.
According to the official documentation, you can roll over to a new log file daily, hourly, or even every minute, while still splitting by file size; all of these combinations work.
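The rollover period is inferred from the date pattern inside the %d conversion word, so switching the daily example above to hourly rolling only requires changing the fileNamePattern (a sketch; the filename is illustrative):

```xml
<!-- adding _HH to the date pattern makes logback roll over every hour -->
<fileNamePattern>mylog-%d{yyyy-MM-dd_HH}.%i.txt</fileNamePattern>
```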
Note: in a quick test writing 100,000 log lines, I found that logback's size-based splitting is not as precise as log4j's. For example, with a 2MB limit, log4j rolls over at roughly 2048KB, whereas logback may roll over at around 2.1MB or even 2.5MB. Total time spent writing the logs was also slightly higher than with log4j. (⊙﹏⊙)
Maven dependencies, for reference:
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-core</artifactId>
  <version>1.1.8</version>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.1.8</version>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-access</artifactId>
  <version>1.1.7</version>
</dependency>