Fixing the GC error caused by Flume ingesting too many Kafka messages

When the Flume Kafka source pulls in more messages than the agent's heap can hold, errors like the following appear:

Exception in thread "PollableSourceRunner-KafkaSource-s1" java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
	at java.lang.StringCoding.decode(StringCoding.java:193)
	at java.lang.String.<init>(String.java:426)
	at java.lang.String.<init>(String.java:491)
	at org.apache.kafka.common.serialization.StringDeserializer.deserialize(StringDeserializer.java:47)
	at org.apache.kafka.common.serialization.StringDeserializer.deserialize(StringDeserializer.java:28)
	at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:65)
	at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:55)
	at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1038)
	at org.apache.kafka.clients.consumer.internals.Fetcher.access$3300(Fetcher.java:110)
	at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1223)
	at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:1072)
	at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:562)
	at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:523)
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1230)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
	at org.apache.flume.source.kafka.KafkaSource.doProcess(KafkaSource.java:216)
	at org.apache.flume.source.AbstractPollableSource.process(AbstractPollableSource.java:60)
	at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:133)
	at java.lang.Thread.run(Thread.java:748)
[2020-04-22 21:52:50,985] WARN Sink failed to consume event. Attempting next sink if available. (org.apache.flume.sink.LoadBalancingSinkProcessor)
org.apache.flume.EventDeliveryException: Failed to send events
	at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:398)
	at org.apache.flume.sink.LoadBalancingSinkProcessor.process(LoadBalancingSinkProcessor.java:156)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: slave1, port: 52020 }: Failed to send batch
	at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:310)
	at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:380)
	... 3 more
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: slave1, port: 52020 }: RPC request exception
	at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:360)
	at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:298)
	... 4 more
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:206)
	at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:352)
	... 5 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.ArrayList.iterator(ArrayList.java:840)
	at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:103)
	at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:66)
	at org.apache.avro.generic.GenericDatumWriter.writeArray(GenericDatumWriter.java:131)
	at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:68)
	at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:58)
	at org.apache.avro.ipc.specific.SpecificRequestor.writeRequest(SpecificRequestor.java:127)
	at org.apache.avro.ipc.Requestor$Request.getBytes(Requestor.java:473)
	at org.apache.avro.ipc.Requestor.request(Requestor.java:181)
	at org.apache.avro.ipc.Requestor.request(Requestor.java:129)
	at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:84)
	at com.sun.proxy.$Proxy6.appendBatch(Unknown Source)
	at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:343)
	at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:339)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
[2020-04-22 21:52:53,746] INFO Rpc sink k1: Building RpcClient with hostname: slave1, port: 52020 (org.apache.flume.sink.AbstractRpcSink)

Or:

Exception in thread "PollableSourceRunner-KafkaSource-s1" java.lang.OutOfMemoryError: GC overhead limit exceeded

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "PollableSourceRunner-KafkaSource-s1"
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: GC overhead limit exceeded


The root cause is that the `flume-ng` launcher script sets the JVM's maximum heap to only 20 MB by default (`JAVA_OPTS="-Xmx20m"`). Under any real Kafka load this tiny heap fills almost immediately, the JVM spends nearly all its time in garbage collection, and it throws `GC overhead limit exceeded`. The fix is simply to raise this limit.
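The original post showed the change in a screenshot; a minimal sketch of the same fix is below. The recommended place to override the heap is `conf/flume-env.sh` (copied from the shipped `flume-env.sh.template`), which the `flume-ng` script sources at startup. The sizes here are examples, not recommendations; tune them to your message volume and available memory.

```shell
# conf/flume-env.sh
# (if the file does not exist: cp conf/flume-env.sh.template conf/flume-env.sh)
#
# Override the launcher's default -Xmx20m with a larger heap.
# 1 GB initial / 2 GB max are example values; size them for your workload.
export JAVA_OPTS="-Xms1024m -Xmx2048m"
```

Restart the agent afterwards, and make sure you start it with `flume-ng agent --conf conf ...` so that `conf/flume-env.sh` is actually picked up; you can confirm the new limit by checking the running process's arguments (e.g. `ps -ef | grep flume` and look for `-Xmx2048m`).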
