Monitoring Flink task logs with Flink CEP

This article shows how to configure Flink's log4j, both for local testing in IDEA and on a cluster, so that log records are shipped to a Kafka topic. Each record is written as a '||'-delimited line, which keeps it easy to parse later. The error logs in Kafka are then analyzed with CEP, and an alert is triggered whenever an ERROR-level record is detected. It is a simple example of log monitoring and analysis.

1. Configure Flink's log4j (local test in IDEA)

 

#log4j.rootLogger=info,console
#log4j.appender.console=org.apache.log4j.ConsoleAppender
#log4j.appender.console.layout=org.apache.log4j.PatternLayout
#log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %-60c %x - %m%n



log4j.rootLogger=WARN,kafka
# cap the Kafka / ZooKeeper client loggers at WARN so the appender's own producer does not feed its logs back into the topic
log4j.logger.kafka=WARN
log4j.logger.org.apache.kafka=WARN
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.I0Itec.zkclient=WARN
## appender console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n

## appender kafka
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender

# target topic
log4j.appender.kafka.topic=aaa


log4j.appender.kafka.brokerList=192.168.6.34:9092,192.168.6.35:9092,192.168.6.36:9092


log4j.appender.kafka.compressionType=none
log4j.appender.kafka.syncSend=false
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
# per-appender threshold (log4j 1.x syntax; the ThresholdFilter/onMatch/onMismatch form is log4j2-only)
log4j.appender.kafka.Threshold=INFO

#log4j.appender.kafka.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L %% - %m%n
log4j.appender.kafka.layout.ConversionPattern=%p||%c{1}||%L||%d{yyyy-MM-dd HH:mm:ss}||%m%n
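
With this pattern every record arrives in the topic as one '||'-delimited line (level, class, line number, timestamp, message). A quick way to check the wiring from IDEA is to log a couple of messages and give the asynchronous appender time to flush before the JVM exits. The class below is only a sketch (class name and messages are made up); it assumes log4j 1.x and the kafka-log4j-appender artifact are on the classpath:

import org.apache.log4j.Logger;

public class LogToKafkaSmokeTest {
    private static final Logger LOG = Logger.getLogger(LogToKafkaSmokeTest.class);

    public static void main(String[] args) throws InterruptedException {
        // rootLogger is WARN, so INFO and below never reach the kafka appender
        LOG.warn("this should show up in topic aaa");
        LOG.error("simulated failure");
        // syncSend=false sends asynchronously; pause so the producer can flush
        Thread.sleep(3000);
    }
}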


 

2. Configure log4j.properties on the cluster

rootLogger.appenderRef.rolling.ref = RollingFile
rootLogger.appenderRef.kafka.ref = Kafka
rootLogger.level = WARN


# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name= org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
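
The lines above only set logger levels and reference the appenders; the Kafka appender that rootLogger.appenderRef.kafka.ref = Kafka points to still has to be defined in the same file. A minimal sketch in log4j2 properties syntax, reusing the topic and broker list from step 1 (and assuming the kafka-clients jar is visible to Flink, e.g. under lib/), could look like this:

appender.kafka.type = Kafka
appender.kafka.name = Kafka
appender.kafka.topic = aaa
appender.kafka.syncSend = false
appender.kafka.layout.type = PatternLayout
appender.kafka.layout.pattern = %p||%c{1}||%L||%d{yyyy-MM-dd HH:mm:ss}||%m%n
appender.kafka.property.type = Property
appender.kafka.property.name = bootstrap.servers
appender.kafka.property.value = 192.168.6.34:9092,192.168.6.35:9092,192.168.6.36:9092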
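
The CEP side described in the summary is not shown above. Below is a minimal sketch of what it could look like: a Flink job that consumes the '||'-delimited lines from topic aaa and raises an alert whenever a record with level ERROR appears. The class, group and job names are illustrative, the KafkaSource connector is used here in place of whatever consumer the original job used, and print() stands in for a real alerting sink.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.List;
import java.util.Map;

public class ErrorLogAlertJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // consume the raw log lines written by the Kafka appender
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("192.168.6.34:9092,192.168.6.35:9092,192.168.6.36:9092")
                .setTopics("aaa")
                .setGroupId("log-monitor")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
        DataStream<String> lines =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "log-topic");

        // single-event pattern: the first field of the '||'-delimited line is the level
        Pattern<String, String> errorPattern = Pattern.<String>begin("error")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String line) {
                        return line.startsWith("ERROR||");
                    }
                });

        DataStream<String> alerts = CEP.pattern(lines, errorPattern)
                .select(new PatternSelectFunction<String, String>() {
                    @Override
                    public String select(Map<String, List<String>> match) {
                        return "ALERT: " + match.get("error").get(0);
                    }
                });

        // stand-in for a real alert channel (mail, webhook, another topic, ...)
        alerts.print();

        env.execute("error-log-monitor");
    }
}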