LOG4J



All of the notes below refer to Log4j version 1.2.16.


Appenders in Log4j are created in:

OptionConverter.instantiateByClassName(String className, Class superClass, Object defaultValue)

Class classObj = Loader.loadClass(className);
return classObj.newInstance();   // only the public no-arg constructor is invoked

This is how an Appender is created when the configuration is read from a properties file.

If you want to create an Appender through a specific parameterized constructor, you need to use XML configuration instead.
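
For illustration, here is roughly what happens for a properties line such as log4j.appender.X=com.example.MyAppender. This is just a sketch assuming log4j 1.2.x on the classpath, and com.example.MyAppender is a hypothetical class name: the class is loaded by name and instantiated through its public no-arg constructor, and the per-appender options are applied afterwards through setter methods, never through constructor arguments.

import org.apache.log4j.Appender;
import org.apache.log4j.helpers.OptionConverter;

public class InstantiateDemo {
    public static void main(String[] args) {
        // Roughly what the properties parser does for:
        //   log4j.appender.X=com.example.MyAppender
        // ("com.example.MyAppender" is a hypothetical custom appender class)
        Appender appender = (Appender) OptionConverter.instantiateByClassName(
                "com.example.MyAppender",  // class to load and instantiate
                Appender.class,            // required super type
                null);                     // value returned if instantiation fails
        System.out.println("created: " + appender);
        // Options such as log4j.appender.X.someOption=value are then applied
        // through JavaBean setters (setSomeOption), which is why only a
        // public no-arg constructor works here.
    }
}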




1. Appender layout is always null

Last time I wrote an Appender, the layout I configured for it always came back null. This time I hit the same problem again and spent half a day debugging it. Exhausting. Writing it down so I stop forgetting:

	@Override
	public boolean requiresLayout() {
		// PropertyConfigurator only instantiates the configured layout and
		// calls setLayout() on the Appender when this returns true; if it
		// returns false, the layout field simply stays null.
		return true;
	}
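
For completeness, a minimal custom Appender sketch (the class name and output target are illustrative, not from the original post) showing how requiresLayout() and the layout field fit together: the configurator only calls setLayout() when requiresLayout() returns true, and append() can then rely on this.layout.

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class MinimalAppender extends AppenderSkeleton {

    // Public no-arg constructor, as required when configuring from properties.
    public MinimalAppender() {
    }

    @Override
    protected void append(LoggingEvent event) {
        // layout is only non-null because requiresLayout() returns true,
        // so the configurator instantiated it and called setLayout().
        System.out.print(this.layout.format(event));
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }

    @Override
    public void close() {
        this.closed = true;
    }
}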


2. Concurrency in Log4j

When writing a log event, Log4j locks the Logger (Category) that the current Appender is attached to; in fact callAppenders() synchronizes on every Category on the way up the parent chain. See Category.callAppenders(LoggingEvent event):

  public
  void callAppenders(LoggingEvent event) {
    int writes = 0;

    for(Category c = this; c != null; c=c.parent) {
      // Protected against simultaneous call to addAppender, removeAppender,...
      synchronized(c) {
        if(c.aai != null) {  // line 205
          writes += c.aai.appendLoopOnAppenders(event);
        }
        //......
      }
    }
    //......
  }
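
To see what that synchronized(c) block means in practice, here is a small self-contained demo (my own illustration, not from the original post): one thread logs through an appender whose append() is deliberately slow, and a second thread logging to the same Logger has to wait until the first append() returns, because both calls pass through callAppenders() on the same Category.

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

public class CategoryLockDemo {

    // Appender whose append() deliberately takes a long time.
    static class SlowAppender extends AppenderSkeleton {
        protected void append(LoggingEvent event) {
            System.out.println(Thread.currentThread().getName()
                    + " appending: " + event.getMessage());
            try { Thread.sleep(3000); } catch (InterruptedException ignored) { }
        }
        public boolean requiresLayout() { return false; }
        public void close() { }
    }

    public static void main(String[] args) throws Exception {
        final Logger log = Logger.getLogger("demo");
        log.setAdditivity(false);              // keep the demo on one Category
        log.addAppender(new SlowAppender());

        Thread t1 = new Thread(new Runnable() {
            public void run() { log.info("first"); }   // holds the Category lock ~3s
        }, "t1");
        Thread t2 = new Thread(new Runnable() {
            public void run() { log.info("second"); }  // blocks in callAppenders()
        }, "t2");

        t1.start();
        Thread.sleep(200);                     // let t1 enter append() first
        long start = System.currentTimeMillis();
        t2.start();
        t2.join();
        System.out.println("t2 waited roughly "
                + (System.currentTimeMillis() - start) + " ms");
    }
}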

So if, inside a custom Appender, the append path ends up writing through this same Log4j setup again (directly, or indirectly through the libraries it calls, as in the case below), the logging call can sit in callAppenders() forever.

And if you take a thread dump, it will only tell you "No deadlocks found": the JVM's deadlock detector only reports cycles of threads blocked acquiring monitors, while here one of the two threads involved is parked in Object.wait() waiting for an application-level condition, so no cycle is detected.

The thread below is blocked at callAppenders and cannot return; it is the thread that sends data packets to the DataNode:

Thread 7042: (state = BLOCKED)
 - org.apache.log4j.Category.callAppenders(org.apache.log4j.spi.LoggingEvent) @bci=13, line=205 (Interpreted frame)
 - org.apache.log4j.Category.forcedLog(java.lang.String, org.apache.log4j.Priority, java.lang.Object, java.lang.Throwable) @bci=14, line=391 (Interpreted frame)
 - org.apache.log4j.Category.log(java.lang.String, org.apache.log4j.Priority, java.lang.Object, java.lang.Throwable) @bci=34, line=856 (Interpreted frame)
 - org.apache.commons.logging.impl.Log4JLogger.warn(java.lang.Object, java.lang.Throwable) @bci=12, line=234 (Interpreted frame)
 - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run() @bci=1107, line=586 (Interpreted frame)
No exception, no log output; the client just has to sit there and wait, which is rather tragic. The direct cause is that the thread below has already entered Category.callAppenders() and is holding the Category lock, and it simply never returns: inside the custom HDFSAppender it is closing an HDFS stream and waiting in waitForAckedSeqno() for an ack that can only be delivered by the DataStreamer thread above, which is itself blocked trying to enter the same synchronized(c) block just to log a warning.

Thread 7033: (state = BLOCKED)
 - java.lang.Object.wait(long) @bci=0 (Interpreted frame)
 - org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(long) @bci=75, line=1708 (Interpreted frame)
 - org.apache.hadoop.hdfs.DFSOutputStream.flushInternal() @bci=38, line=1694 (Interpreted frame)
 - org.apache.hadoop.hdfs.DFSOutputStream.close() @bci=82, line=1778 (Interpreted frame)
 - org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close() @bci=4, line=66 (Interpreted frame)
 - org.apache.hadoop.fs.FSDataOutputStream.close() @bci=4, line=99 (Interpreted frame)
 - org.apache.hadoop.io.IOUtils.cleanup(org.apache.commons.logging.Log, java.io.Closeable[]) @bci=27, line=237 (Interpreted frame)
 - org.apache.hadoop.io.IOUtils.closeStream(java.io.Closeable) @bci=9, line=254 (Interpreted frame)
 - com.embracesource.edh.log4j.append.HDFSAppender.doLog(org.apache.log4j.spi.LoggingEvent) @bci=161, line=137 (Interpreted frame)
 - com.embracesource.edh.log4j.append.GeneralAppender.subAppend(org.apache.log4j.spi.LoggingEvent) @bci=7, line=134 (Interpreted frame)
 - com.embracesource.edh.log4j.append.GeneralAppender.append(org.apache.log4j.spi.LoggingEvent) @bci=10, line=121 (Interpreted frame)
 - org.apache.log4j.AppenderSkeleton.doAppend(org.apache.log4j.spi.LoggingEvent) @bci=106, line=251 (Interpreted frame)
 - org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(org.apache.log4j.spi.LoggingEvent) @bci=41, line=66 (Interpreted frame)
 - org.apache.log4j.Category.callAppenders(org.apache.log4j.spi.LoggingEvent) @bci=26, line=206 (Interpreted frame)
 - org.apache.log4j.Category.forcedLog(java.lang.String, org.apache.log4j.Priority, java.lang.Object, java.lang.Throwable) @bci=14, line=391 (Interpreted frame)
 - org.apache.log4j.Category.log(java.lang.String, org.apache.log4j.Priority, java.lang.Object, java.lang.Throwable) @bci=34, line=856 (Interpreted frame)
 - org.apache.commons.logging.impl.Log4JLogger.info(java.lang.Object) @bci=12, line=199 (Interpreted frame)
 - org.apache.hive.service.cli.CLIService.getOperationStatus(org.apache.hive.service.cli.OperationHandle) @bci=40, line=261 (Interpreted frame)
 - org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(org.apache.hive.service.cli.thrift.TExecuteStatementReq) @bci=61, line=193 (Interpreted frame)
 - org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(org.apache.hive.service.cli.thrift.TCLIService$Iface, org.apache.hive.service.cli.thrift.TCLIService$ExecuteStatement_args) @bci=14, line=1193 (Interpreted frame)
 - org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(java.lang.Object, org.apache.thrift.TBase) @bci=9, line=1178 (Interpreted frame)
 - org.apache.thrift.ProcessFunction.process(int, org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol, java.lang.Object) @bci=86, line=39 (Interpreted frame)
 - org.apache.thrift.TBaseProcessor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) @bci=126, line=39 (Interpreted frame)
 - org.apache.hive.service.cli.thrift.TSetIpAddressProcessor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) @bci=13, line=38 (Interpreted frame)
 - org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() @bci=151, line=206 (Interpreted frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.runTask(java.lang.Runnable) @bci=59, line=886 (Interpreted frame)
 - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=28, line=908 (Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=662 (Interpreted frame)
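
One way to avoid this class of hang (my own sketch, not the fix used in the original post) is to make sure append() never does slow or re-entrant work while the Category lock is held: append() only puts the event into a bounded queue and returns, a daemon thread performs the actual HDFS write, and the appender reports its own problems through org.apache.log4j.helpers.LogLog rather than through a Logger. HdfsWriter below is a placeholder for whatever really writes to HDFS.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.helpers.LogLog;
import org.apache.log4j.spi.LoggingEvent;

public class AsyncHdfsAppender extends AppenderSkeleton {

    /** Placeholder for the code that actually writes one line to HDFS. */
    public interface HdfsWriter {
        void write(String line) throws Exception;
    }

    private final BlockingQueue<LoggingEvent> queue =
            new ArrayBlockingQueue<LoggingEvent>(10000);
    private final HdfsWriter writer;
    private volatile boolean running = true;

    // For properties-file configuration this would need a no-arg constructor
    // plus setters (see the beginning of this post); here the writer is
    // simply passed in for programmatic use.
    public AsyncHdfsAppender(HdfsWriter writer) {
        this.writer = writer;
        Thread worker = new Thread(new Runnable() {
            public void run() { drain(); }
        }, "hdfs-appender-worker");
        worker.setDaemon(true);
        worker.start();
    }

    @Override
    protected void append(LoggingEvent event) {
        // Called while the Category lock is held: nothing slow may happen here.
        // offer() drops the event instead of blocking when the queue is full.
        if (!queue.offer(event)) {
            LogLog.warn("AsyncHdfsAppender queue full, dropping event");
        }
    }

    private void drain() {
        while (running || !queue.isEmpty()) {
            try {
                LoggingEvent event = queue.poll(1, TimeUnit.SECONDS);
                if (event == null) {
                    continue;
                }
                writer.write(layout != null
                        ? layout.format(event)
                        : String.valueOf(event.getMessage()));
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return;
            } catch (Exception e) {
                // Report through LogLog, never through a Logger, so the appender
                // cannot re-enter Category.callAppenders().
                LogLog.error("AsyncHdfsAppender failed to write", e);
            }
        }
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }

    @Override
    public void close() {
        running = false;
        this.closed = true;
    }
}

Log4j's own AsyncAppender takes a similar approach with a bounded buffer and a dispatcher thread; the essential point is that nothing running under the Category lock should ever wait on another thread that might itself need to log.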













