Flume pitfalls I have run into

Flume configuration

master
agent.channels = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000000
agent.sources = seqGenSrc
agent.sources.seqGenSrc.type = exec
agent.sources.seqGenSrc.command = tail -F /home/hadoop/apache-tomcat-7.0.56-2/logs/catalina.out
agent.sources.seqGenSrc.channels = memoryChannel
agent.sinks = remoteSink
agent.sinks.remoteSink.type = avro
agent.sinks.remoteSink.hostname = bx104
agent.sinks.remoteSink.port = 23004
agent.sinks.remoteSink.channel = memoryChannel

bx103
agent.channels = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000000
agent.channels.memoryChannel.request-timeout=100000
agent.sources = seqGenSrc1
agent.sources.seqGenSrc1.type = avro
agent.sources.seqGenSrc1.bind = bx104
agent.sources.seqGenSrc1.port = 23004
agent.sources.seqGenSrc1.channels = memoryChannel
agent.sinks = fileSink
agent.sinks.fileSink.type = file_roll
agent.sinks.fileSink.sink.directory=/home/hadoop/qiaoting/
agent.sinks.fileSink.channel = memoryChannel

If the following error is reported:
2014-07-08 15:15:16,105 (pool-4-thread-1) [ERROR - org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:341)] Avro source seqGenSrc1: Unable to process event batch. Exception follows.
org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: memoryChannel}
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
    at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:339)
    at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:88)
    at org.apache.avro.ipc.Responder.respond(Responder.java:149)
    at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:321)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:303)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:220)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.ChannelException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
    at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:128)
    at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:192)
    ... 28 more

then you need to increase the channel capacity.
If the larger capacity in turn causes an OutOfMemoryError, also increase the agent's JVM heap.
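Concretely, the two knobs are the channel capacity in the agent config and the JVM heap in conf/flume-env.sh. The sizes below are illustrative placeholders, not tuned values:

```
# example.conf -- give the memory channel more headroom
agent.channels.memoryChannel.capacity = 1000000
agent.channels.memoryChannel.transactionCapacity = 10000

# conf/flume-env.sh -- a bigger channel buffers more events in memory,
# so raise the agent's heap to match (illustrative sizes)
JAVA_OPTS="-Xms1g -Xmx2g"
```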

Startup command:
bin/flume-ng agent --conf ./conf/ -f example.conf -Dflume.root.logger=DEBUG,console -n agent

Problem:
java.lang.NoSuchMethodError: com.google.common.cache.CacheBuilder.build()Lcom/google/common/cache/Cache;
    at org.apache.hadoop.hdfs.DomainSocketFactory.<init>(DomainSocketFactory.java:45)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:517)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
Solution: replace guava-10.0.1.jar with guava-11.0.2.jar.

Problem:
java.lang.NoSuchMethodError: com.google.common.cache.CacheBuilder.build()Lcom/google/common/cache/Cache;
    at org.apache.hadoop.hdfs.DomainSocketFactory.<init>(DomainSocketFactory.java:45)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:517)
Solution: replace protobuf-java-2.4.1-shaded.jar with protobuf-java-2.5.0.jar.

Problem:
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Increment.setWriteToWAL(Z)Lorg/apache/hadoop/hbase/client/Increment;
    at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:285)
    at org.apache.flume.sink.hbase.HBaseSink$4.run(HBaseSink.java:281)
    at org.apache.flume.sink.hbase.HBaseSink.runPrivileged(HBaseSink.java:325)
    at org.apache.flume.sink.hbase.HBaseSink.putEventsAndCommit(HBaseSink.java:281)
    at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:257)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
Solution: do not use the SimpleHbaseEventSerializer; use RegexHbaseEventSerializer instead.
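For reference, a minimal sketch of an HBase sink wired to RegexHbaseEventSerializer — the table name, column family, and regex below are made-up placeholders, not values from the original setup:

```
agent.sinks.hbaseSink.type = hbase
agent.sinks.hbaseSink.table = demo_table
agent.sinks.hbaseSink.columnFamily = cf
agent.sinks.hbaseSink.channel = memoryChannel
agent.sinks.hbaseSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
# one capture group per column; this pattern just splits "timestamp message"
agent.sinks.hbaseSink.serializer.regex = ([^ ]*) (.*)
agent.sinks.hbaseSink.serializer.colNames = ts,msg
```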

Writing data to HDFS
agent.sinks.fileSink.type = hdfs
agent.sinks.fileSink.channel = c1
agent.sinks.fileSink.hdfs.path = hdfs://xxx:8020/tmp/flume
agent.sinks.fileSink.hdfs.filePrefix = events-
agent.sinks.fileSink.hdfs.fileType = DataStream
agent.sinks.fileSink.hdfs.writeFormat = Text
agent.sinks.fileSink.hdfs.rollSize = 0
agent.sinks.fileSink.hdfs.rollInterval= 60
agent.sinks.fileSink.hdfs.rollCount = 600000

Problem:
2014-07-10 14:54:54,755 (pool-5-thread-1) [ERROR - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:173)] Uncaught exception in Runnable
java.lang.IllegalStateException: Serializer has been closed
Cause:
Inspecting the spool directory showed that one file had already been processed (renamed with the .COMPLETED suffix), yet the directory also contained a new file with the same original name — i.e., both
123.log.COMPLETED and 123.log were present, which triggers the error above.
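A quick way to spot the offending pair before restarting the agent is to list every file that exists both with and without the .COMPLETED marker. In this sketch, a throwaway mktemp directory merely reproduces the bad state; point SPOOL_DIR at your real spoolDir instead:

```shell
# Reproduce the conflict in a throwaway directory (stands in for the real spoolDir)
SPOOL_DIR=$(mktemp -d)
touch "$SPOOL_DIR/123.log" "$SPOOL_DIR/123.log.COMPLETED"

# Report every file present both with and without the .COMPLETED marker
conflicts=""
for f in "$SPOOL_DIR"/*.COMPLETED; do
  orig="${f%.COMPLETED}"
  if [ -e "$orig" ]; then
    conflicts="$conflicts$(basename "$orig") "
    echo "conflict: $(basename "$orig")"
  fi
done
```

Deleting or renaming one of the two files (typically the unprocessed duplicate) and restarting the agent clears the error.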

Problem:
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: com.google.common.cache.CacheBuilder.build()Lcom/google/common/cache/Cache;
This happens because two versions of the guava jar (guava-10.0.1.jar alongside another version) are both present under lib.
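This whole class of problem (two versions of one jar on the classpath) can be caught with a quick scan of the lib directory. The sketch below builds a throwaway directory to mimic the bad state; in practice, run the same pipeline against $FLUME_HOME/lib:

```shell
# Build a throwaway lib dir that mimics the bad state (two guava versions side by side)
LIB_DIR=$(mktemp -d)                 # stand-in for $FLUME_HOME/lib
touch "$LIB_DIR/guava-10.0.1.jar" "$LIB_DIR/guava-11.0.2.jar" "$LIB_DIR/avro-ipc-1.7.3.jar"

# Strip the version suffix from each jar name and report artifacts that appear twice
dups=$(ls "$LIB_DIR" | sed 's/-[0-9][0-9.]*[^/]*\.jar$//' | sort | uniq -d)
echo "duplicated artifacts: $dups"
```

Any artifact the scan reports should be reduced to a single version before restarting the agent.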
