1. Detailed error message
While using Flume to collect data from Kafka into HDFS, the agent ran out of heap memory:
2020-11-26 17:41:25,679 (kafka-coordinator-heartbeat-thread | flume) [ERROR - org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1083)] [Consumer clientId=consumer-2, groupId=flume] Heartbeat thread failed due to unexpected error
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:562)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:498)

2. Cause and solution
When Flume pulls data from Kafka and writes it to HDFS, the Kafka consumer buffers fetched records on the JVM heap; if the default heap is too small for the fetch volume, the consumer's network reads fail with java.lang.OutOfMemoryError: Java heap space, as in the trace above. The problem was resolved by editing flume-env.sh and enlarging the agent's JVM heap with the -Xms and -Xmx options.
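A minimal sketch of the fix, assuming the agent is launched with conf/flume-env.sh on its path; the 2048m heap size below is an illustrative value to be tuned against available memory and the agent's channel and batch sizes, not a universal recommendation:

```shell
# conf/flume-env.sh
# Enlarge the Flume agent's JVM heap so the Kafka consumer can buffer
# fetched records without exhausting heap space.
# -Xms sets the initial heap size, -Xmx the maximum; keeping them equal
# avoids heap-resize pauses. 2048m is an example value.
export JAVA_OPTS="-Xms2048m -Xmx2048m"
```

Restart the Flume agent after the change so the new JAVA_OPTS take effect, and confirm with jps -v (or ps) that -Xmx2048m now appears on the agent process's command line.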