Flume-ng HDFS IO error: "Callable timed out" exception

Between about 21:00 and 23:00 each night, these two flume-ng agents threw the following exception:
25 Mar 2014 22:18:25,189 ERROR [hdfs-thrift_hdfsSink-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter$2.call:257)  - Unexpected error
java.io.IOException: Callable timed out after 10000 ms on file: /logdata/2014/03/25/software_log/197a.1395757003934.tmp
     at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:550)
     at org.apache.flume.sink.hdfs.BucketWriter.doFlush(BucketWriter.java:353)
     at org.apache.flume.sink.hdfs.BucketWriter.flush(BucketWriter.java:319)
     at org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:277)
     at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:255)
     at org.apache.flume.sink.hdfs.BucketWriter$2.call(BucketWriter.java:250)
     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:724)
Caused by: java.util.concurrent.TimeoutException
     at java.util.concurrent.FutureTask.get(FutureTask.java:201)
     at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:543)
     ... 11 more
25 Mar 2014 22:34:17,639 WARN  [ResponseProcessor for block BP-928773537-10.31.246.10-1392969615809:blk_-6973900543529394933_175021] (org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run:747)  - DFSOutputStream ResponseProcessor exception  for block BP-928773537-10.31.246.10-1392969615809:blk_-6973900543529394933_175021
java.io.EOFException: Premature EOF: no length prefix available
     at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
     at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:114)
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:694)
Checking the log-volume peak:

(bandwidth chart; image not preserved in this copy)

The chart clearly shows the peak running from 18:00 until about 23:00. The Hadoop cluster sits on a 100 Mbps link, and during the log-write peak it hits that bandwidth ceiling. We have not yet deployed a monitoring tool on the Hadoop side (-。。-)
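Absent a monitoring tool, a quick way to confirm the NIC really is saturated is to sample the byte counters in `/proc/net/dev` twice and compute throughput. A minimal sketch, assuming Linux; the interface name `"lo"` in the demo is a placeholder for the data NIC (e.g. `eth0`):

```python
import time

def nic_bytes(iface):
    """Return (rx_bytes, tx_bytes) counters for iface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            line = line.strip()
            if line.startswith(iface + ":"):
                # After "iface:" the first field is rx_bytes, the ninth is tx_bytes.
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %r not found" % iface)

def throughput_mbps(iface, interval=1.0):
    """Sample the counters twice, interval seconds apart; return (rx, tx) in Mbps."""
    rx0, tx0 = nic_bytes(iface)
    time.sleep(interval)
    rx1, tx1 = nic_bytes(iface)
    to_mbps = lambda delta: delta * 8 / interval / 1e6
    return to_mbps(rx1 - rx0), to_mbps(tx1 - tx0)

# Replace "lo" with the actual data interface, e.g. "eth0".
rx_mbps, tx_mbps = throughput_mbps("lo", 0.2)
print("rx %.1f Mbps, tx %.1f Mbps" % (rx_mbps, tx_mbps))
```

If the reported figure hovers near 100 Mbps during the 18:00-23:00 window, the link itself is the bottleneck.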

Current solution:
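The solution section did not survive in this copy of the post. For reference, a commonly applied mitigation for this exact error, assuming Flume 1.x with the HDFS sink, is to raise `hdfs.callTimeout` (default 10000 ms, the same 10 s seen in "Callable timed out after 10000 ms") so that flush/close calls can ride out congestion during the peak; the agent and sink names below are illustrative:

```properties
# Sketch of a Flume agent config fragment (agent/sink names are placeholders)
agent.sinks.hdfsSink.type = hdfs
# Default is 10000 ms; raise it so HDFS flush/close calls survive
# the bandwidth-saturated 18:00-23:00 window.
agent.sinks.hdfsSink.hdfs.callTimeout = 60000
```

This only papers over the symptom; the root cause here is the saturated 100 Mbps link, so a real fix means more bandwidth or spreading the write load.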
When reprinting, please credit the source: http://blog.csdn.net/wsscy2004