Flume error: high volume of tailed access.log data causes org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired

1. Problem Description

Flume tails a log file and ships the events to Kafka, where a streaming job consumes them. Suddenly the streaming job stopped receiving messages. A message sent to Kafka directly *was* received by the streaming job, which isolated the problem to Flume. The Flume log shows:

Failed while running command: tail -F /opt/datas/access.log
org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
	at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:128)
	at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
	at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
	at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:378)
	at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:338)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

2. Root Cause

The Flume log states the cause plainly: the sink cannot keep up with the source, and the channel's buffer is too tight. The fix is to enlarge the memory channel's buffer.
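As a toy analogy (this is not Flume's implementation), the MemoryChannel behaves like a bounded queue: when the source commits events faster than the sink drains them, the queue fills and further commits fail, which surfaces as the ChannelFullException above.

```python
import queue

# Toy model of a Flume MemoryChannel as a bounded buffer.
capacity = 100                      # ~ memory_channel.capacity
channel = queue.Queue(maxsize=capacity)

overflowed = False
for event in range(1000):           # burst of log lines from `tail -F`
    try:
        channel.put_nowait(event)   # source side: commit event to channel
    except queue.Full:              # sink never drained -> "channel full"
        overflowed = True
        break

print(overflowed)  # True: the source outran the (idle) sink
```

Raising the capacity only buys headroom for bursts; if the sink is persistently slower than the source, any finite buffer eventually fills.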

3. Solution

Increase the channel's capacity:

execmemoryavro.channels.memory_channel.type = memory
# add the following two lines
execmemoryavro.channels.memory_channel.keep-alive = 60
execmemoryavro.channels.memory_channel.capacity = 1000000
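For context, a fuller sketch of the memory-channel block (keeping the agent and channel names from the snippet above; the values are illustrative, not tuned for any particular workload):

```properties
execmemoryavro.channels.memory_channel.type = memory
# Maximum number of events held in the channel (Flume default: 100)
execmemoryavro.channels.memory_channel.capacity = 1000000
# Maximum events per source/sink transaction (default: 100); must not exceed capacity
execmemoryavro.channels.memory_channel.transactionCapacity = 10000
# Seconds a put/take waits for space/events before failing (default: 3)
execmemoryavro.channels.memory_channel.keep-alive = 60
```

A longer keep-alive lets the source block through short bursts instead of failing immediately, while the larger capacity absorbs the burst itself.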

