Log4j2 configuration explained, and sending logs to Kafka (XML/YAML)

  • Advantages of Log4j2

Log4j2 brings large performance gains in asynchronous logging, throughput, and related areas. For why Log4j2 is this fast, see https://www.jianshu.com/p/359b14067b9e
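For reference, the all-async mode behind most of those gains can be switched on globally with a single system property, with no configuration-file changes (this assumes the LMAX Disruptor jar is on the classpath; your-app.jar below is just a placeholder):

java -Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar your-app.jar

With this selector active, every logger hands events to a Disruptor ring buffer instead of writing synchronously on the caller's thread.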

  • Maven

<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-api -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.11.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-core -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.11.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-slf4j-impl -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.11.1</version>
</dependency>

  • Gradle

compile 'org.apache.logging.log4j:log4j-api:2.11.1'
compile 'org.apache.logging.log4j:log4j-core:2.11.1'
compile 'org.apache.logging.log4j:log4j-slf4j-impl:2.11.1'

  • The relationship between Log4j and SLF4J

SLF4J is one of the standard logging facades, and Log4j is one of its implementations. Coding directly against a concrete logging framework makes upgrades and replacements painful, and the jars you depend on may each use a different logging framework, which quickly becomes unworkable. What is needed is a unified logging interface compatible with all the common logging frameworks: that is SLF4J.

SLF4J provides the API that application code calls, but no implementation; each project chooses and configures the logging backend it wants. As long as every jar you depend on logs through SLF4J, no compatibility problems arise.

Concretely, a binding jar sits between SLF4J and the chosen logging framework: it implements SLF4J's SPI on one side and delegates to the concrete framework on the other. For Log4j2, that binding is log4j-slf4j-impl, included in the dependencies above.

For details, see https://www.jianshu.com/p/370ed25cb7c4
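A minimal sketch of that arrangement (class and message names are illustrative): the application compiles only against the SLF4J API, and the log4j-slf4j-impl binding on the classpath forwards every call to Log4j2.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jDemo {
    // The code depends only on the slf4j-api jar; swapping the logging
    // backend later means swapping the binding jar, not editing this class.
    private static final Logger log = LoggerFactory.getLogger(Slf4jDemo.class);

    public static void main(String[] args) {
        log.info("service started on port {}", 8080); // parameterized message, no string concatenation
        try {
            throw new IllegalStateException("simulated failure");
        } catch (IllegalStateException e) {
            log.error("request handling failed", e); // stack traces flow through the facade too
        }
    }
}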

  • Log4j2 configuration files

Log4j2 can be configured with XML, JSON, YAML, or properties files, XML being the most common. Put the file on the classpath under its conventional name (log4j2.xml, log4j2.yml, and so on); with Maven or Gradle, that means the resources folder.

  • XML format

<?xml version="1.0" encoding="UTF-8"?>
<configuration status="WARN" monitorInterval="30">
    <!-- appenders: the output destinations for log events -->
    <appenders>
        <!-- console output -->
        <console name="Console" target="SYSTEM_OUT">
            <!-- output format of each log line -->
            <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss:SSS}] [%p] - %l - %m%n"/>
        </console>
        <!-- This file receives every message and is wiped on each run, controlled by the
             append attribute, which makes it convenient for ad-hoc testing:
             append="true" adds messages to the end of the file, "false" overwrites its
             contents; the default is true. -->
        <File name="log" fileName="./logs/test.log" append="false">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %class{36} %L %M - %msg%xEx%n"/>
        </File>
        <!-- Rolls over to a new log file; filePattern is the archive path used once the
             current file exceeds the configured size -->
        <RollingFile name="RollingFileInfo" fileName="./logs/info.log"
                     filePattern="./logs/backups/$${date:yyyy-MM}/info-%d{yyyy-MM-dd}-%i.log.gz">
            <!-- lowest level this appender accepts (the threshold matches this level and above) -->
            <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss:SSS}] [%p] - %l - %m%n"/>
            <Policies>
                <TimeBasedTriggeringPolicy/>
                <!-- size limit that triggers rollover -->
                <SizeBasedTriggeringPolicy size="5 MB"/>
            </Policies>
        </RollingFile>
        <RollingFile name="RollingFileWarn" fileName="./logs/warn.log"
                     filePattern="./logs/backups/$${date:yyyy-MM}/warn-%d{yyyy-MM-dd}-%i.log">
            <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss:SSS}] [%p] - %l - %m%n"/>
            <Policies>
                <TimeBasedTriggeringPolicy/>
                <SizeBasedTriggeringPolicy size="5 MB"/>
            </Policies>
            <DefaultRolloverStrategy max="20"/>
        </RollingFile>
        <RollingFile name="RollingFileError" fileName="./logs/error.log"
                     filePattern="./logs/backups/$${date:yyyy-MM}/error-%d{yyyy-MM-dd}-%i.log">
            <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss:SSS}] [%p] - %l - %m%n"/>
            <Policies>
                <TimeBasedTriggeringPolicy/>
                <SizeBasedTriggeringPolicy size="5 MB"/>
            </Policies>
        </RollingFile>
    </appenders>
    <!-- loggers: defines a default root logger -->
    <loggers>
        <root level="all">
            <appender-ref ref="Console"/>
            <appender-ref ref="RollingFileInfo"/>
            <appender-ref ref="RollingFileWarn"/>
            <appender-ref ref="RollingFileError"/>
        </root>
    </loggers>
</configuration>
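One subtlety of the ThresholdFilters above: a threshold matches its own level and everything more severe, so info.log also receives WARN and ERROR events, not just INFO. A small sketch using the native Log4j2 API (class name illustrative) shows how a single event fans out under this configuration:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LevelRoutingDemo {
    private static final Logger log = LogManager.getLogger(LevelRoutingDemo.class);

    public static void main(String[] args) {
        // With the XML above, every appender on the root sees each event,
        // and the ThresholdFilters decide which files keep it.
        log.info("console + test.log + info.log");
        log.warn("console + test.log + info.log + warn.log");
        log.error("console + test.log + info.log + warn.log + error.log");
    }
}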

     

  • YAML format

To use the YAML format, you additionally need to add the following jar:

maven:

<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.10.1</version>
</dependency>

gradle:

compile group: 'com.fasterxml.jackson.dataformat', name: 'jackson-dataformat-yaml', version: '2.10.1'

Below is a sample log4j2.yml that writes JSON-formatted logs to rolling files and ships them to Kafka:

Configuration:
  Properties:
    Property:
      - name: log-path
        value: "logs"
      - name: charset
        value: "UTF-8"
      - name: compact
        value: false
      - name: eventEol
        value: true
      - name: kafka-topic
        value: test
      - name: bootstrap-servers
        value: 127.0.0.1:9092
      - name: complete
        value: false
      - name: stacktraceAsString
        value: true
      - name: log.pattern
        value: "%d{yyyy-MM-dd HH:mm:ss.SSS} -%5p ${PID:-} [%X{tracking_id}] [%15.15t] %-30.30C{1.} : %m%n"

  Appenders:
    Console:
      name: CONSOLE
      target: SYSTEM_OUT
      PatternLayout:
        pattern: ${log.pattern}

    RollingFile:
      - name: REQUEST_LOG
        fileName: ${log-path}/request.log
        filePattern: "${log-path}/historyLog/info-%d{yyyy-MM-dd}-%i.log.gz"
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: warn
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: info
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            KeyValuePair:
              - key: tags
                value: REQUEST_LOG
        Policies:
          TimeBasedTriggeringPolicy:
            interval: 1
            modulate: true
        DefaultRolloverStrategy:
          max: 100
      - name: SERVICE_LOG
        fileName: ${log-path}/service.log
        filePattern: "${log-path}/historyLog/service-%d{yyyy-MM-dd}-%i.log.gz"
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: info
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            objectMessageAsJsonObject: true
            KeyValuePair:
              - key: tags
                value: SERVICE_LOG
        Policies:
          TimeBasedTriggeringPolicy:
            interval: 1
            modulate: true
        DefaultRolloverStrategy:
          max: 100
      - name: ERROR_LOG
        fileName: ${log-path}/error.log
        filePattern: "${log-path}/historyLog/error-%d{yyyy-MM-dd}-%i.log.gz"
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            KeyValuePair:
              - key: tags
                value: ERROR_LOG
        Policies:
          TimeBasedTriggeringPolicy:
            interval: 1
            modulate: true
        DefaultRolloverStrategy:
          max: 100
    # Alternative: the same three appenders using PatternLayout instead of JsonLayout
    #    RollingFile:
    #      - name: REQUEST_LOG
    #        fileName: ${log-path}/request.log
    #        filePattern: "${log-path}/historyLog/info-%d{yyyy-MM-dd}-%i.log.gz"
    #        PatternLayout:
    #          charset: ${charset}
    #          pattern: ${log.pattern}
    #        Filters:
    #          ThresholdFilter:
    #            - level: error
    #              onMatch: DENY
    #              onMismatch: NEUTRAL
    #            - level: warn
    #              onMatch: DENY
    #              onMismatch: NEUTRAL
    #            - level: debug
    #              onMatch: ACCEPT
    #              onMismatch: DENY
    #        Policies:
    #          TimeBasedTriggeringPolicy:
    #            interval: 1
    #            modulate: true
    #        DefaultRolloverStrategy:
    #          max: 100
    #      - name: SERVICE_LOG
    #        fileName: ${log-path}/service.log
    #        filePattern: "${log-path}/historyLog/service-%d{yyyy-MM-dd}-%i.log.gz"
    #        PatternLayout:
    #          charset: ${charset}
    #          pattern: ${log.pattern}
    #        Filters:
    #          ThresholdFilter:
    #            - level: info
    #              onMatch: ACCEPT
    #              onMismatch: DENY
    #        Policies:
    #          TimeBasedTriggeringPolicy:
    #            interval: 1
    #            modulate: true
    #        DefaultRolloverStrategy:
    #          max: 100
    #      - name: ERROR_LOG
    #        fileName: ${log-path}/error.log
    #        filePattern: "${log-path}/historyLog/error-%d{yyyy-MM-dd}-%i.log.gz"
    #        PatternLayout:
    #          charset: ${charset}
    #          pattern: ${log.pattern}
    #        Filters:
    #          ThresholdFilter:
    #            - level: error
    #              onMatch: ACCEPT
    #              onMismatch: DENY
    #        Policies:
    #          TimeBasedTriggeringPolicy:
    #            interval: 1
    #            modulate: true
    #        DefaultRolloverStrategy:
    #          max: 100
    Kafka:
      - name: KAFKA_REQUEST_LOG
        topic: ${kafka-topic}
        Property:
          name: bootstrap.servers
          value: ${bootstrap-servers}
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: warn
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: info
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            KeyValuePair:
              - key: tags
                value: REQUEST_LOG
      - name: KAFKA_SERVICE_LOG
        topic: ${kafka-topic}
        Property:
          name: bootstrap.servers
          value: ${bootstrap-servers}
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: info
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            objectMessageAsJsonObject: true
            KeyValuePair:
              - key: tags
                value: SERVICE_LOG
      - name: KAFKA_ERROR_LOG
        topic: ${kafka-topic}
        Property:
          name: bootstrap.servers
          value: ${bootstrap-servers}
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            KeyValuePair:
              - key: tags
                value: ERROR_LOG
  Loggers:
    AsyncRoot:
      level: debug
      # capture caller location info even for async loggers (disabled by default because it is expensive)
      includeLocation: true
      AppenderRef:
        - ref: CONSOLE
    AsyncLogger:
      - name: REQUEST_LOG
        AppenderRef:
          - ref: REQUEST_LOG
          - ref: KAFKA_REQUEST_LOG  # also ship request events to Kafka
      - name: SERVICE_LOG
        AppenderRef:
          - ref: SERVICE_LOG
          - ref: KAFKA_SERVICE_LOG
      - name: ERROR_LOG
        AppenderRef:
          - ref: ERROR_LOG
          - ref: KAFKA_ERROR_LOG
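
For the Kafka and async pieces of this configuration to work, two extra jars are needed: the KafkaAppender uses the Kafka client library, and AsyncRoot/AsyncLogger are backed by the LMAX Disruptor. A sketch of the additional Maven dependencies (versions are indicative; match them to your broker and Log4j2 release):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.1</version>
</dependency>
<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.4.2</version>
</dependency>

Two operational caveats. First, the KafkaAppender sends synchronously by default, so a slow broker can stall logging threads; on Log4j2 2.8 and later this can be relaxed with syncSend: false on the appender. Second, the Kafka client's own log events must not be routed back into the Kafka appender, or logging recurses; the Log4j2 manual suggests giving org.apache.kafka a dedicated logger, for example:

    Logger:
      - name: org.apache.kafka
        level: info
        additivity: false
        AppenderRef:
          - ref: CONSOLE

To check the pipeline end to end against the broker from the properties above, consume the topic:

kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test --from-beginning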

 
