Kafka warning: java.io.EOFException: null

The complete error message:

[2019-06-12 18:12:13.199][WARN ][][ org.apache.kafka.common.network.Selector.poll(Selector.java:276)
] ==> Error in I/O with /192.168.10.165
java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
        at java.lang.Thread.run(Thread.java:745)

The Kafka version in use:

<!-- https://mvnrepository.com/artifact/org.springframework.integration/spring-integration-kafka -->
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-kafka</artifactId>
    <version>1.3.0.RELEASE</version>
</dependency>

Because of project constraints, the Kafka version in use is kafka_2.10-0.8.2.2.jar, which is quite old, while the backend is built on Spring Boot 2.0.0.RELEASE, so spring-integration-kafka is used for the configuration.

The conclusion first:

First of all, this error has no real impact. It is purely a 0.8-client issue: once the client is upgraded to 0.9 or later, the warning no longer appears. If you have to stay on 0.8, see KAFKA-3205 (Support passive close by broker), in particular the "Support passive close by broker" and "Fix white space" attachments.
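Since the warning is harmless on the 0.8 client, a low-noise option is to simply mute it. The snippet below is a sketch for the default Logback setup that Spring Boot 2.0.0.RELEASE ships with (the logger name is taken from the class in the stack trace above); it only raises the Selector's level, so the rest of the Kafka client logging is unaffected:

<!-- logback-spring.xml (sketch): silence the idle-disconnect WARN
     without touching any other Kafka client logging -->
<logger name="org.apache.kafka.common.network.Selector" level="ERROR"/>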

First, the producer configuration file, spring-kafka-producer.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
    xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
     
    <!-- common config -->
    <bean id="stringSerializer" class="org.apache.kafka.common.serialization.StringSerializer"/>
    <bean id="kafkaEncoder" class="org.springframework.integration.kafka.serializer.avro.AvroReflectDatumBackedKafkaEncoder">
        <constructor-arg value="java.lang.String" />
    </bean>
    <bean id="producerProperties"
        class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="topic.metadata.refresh.interval.ms">3600000</prop>
                <prop key="message.send.max.retries">5</prop>
                <prop key="serializer.class">kafka.serializer.StringEncoder</prop>
                <prop key="request.required.acks">1</prop>
            </props>
        </property>
    </bean>
     
    <!-- topic test config  -->
     
    <int:channel id="kafkaTopicSend">
        <int:queue />
    </int:channel>
     
    <int-kafka:outbound-channel-adapter
        id="kafkaOutboundChannelAdapterTopicTest" kafka-producer-context-ref="producerContextTopicTest"
        auto-startup="true" channel="kafkaTopicSend" order="3">
        <int:poller fixed-delay="1000" time-unit="MILLISECONDS"
            receive-timeout="1" task-executor="taskExecutor" />
    </int-kafka:outbound-channel-adapter>
    <task:executor id="taskExecutor" pool-size="5"
        keep-alive="120" queue-capacity="500" />
    <int-kafka:producer-context id="producerContextTopicTest"
        producer-properties="producerProperties">
        <int-kafka:producer-configurations>
            <!-- multiple topic configurations -->
            <int-kafka:producer-configuration
                broker-list="192.168.10.170:9092,192.168.10.170:9093,192.168.10.170:9094"
                key-class-type="java.lang.String"
                key-serializer="stringSerializer"
                value-class-type="java.lang.String"
                value-serializer="stringSerializer"
                topic="dealMessage" />       
           <int-kafka:producer-configuration
                broker-list="192.168.10.170:9092,192.168.10.170:9093,192.168.10.170:9094"
                key-class-type="java.lang.String"
                key-serializer="stringSerializer"
                value-class-type="java.lang.String"
                value-serializer="stringSerializer"
                topic="keepAlive" />
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>
</beans>
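For context, here is a minimal sketch of how a message gets into this pipeline. The channel name kafkaTopicSend comes from the XML above; the KafkaProducerService class itself is hypothetical, and the "kafka_topic" header name is, as far as I can tell, what spring-integration-kafka 1.x uses to route a message to one of the configured topics:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Service;

// Hypothetical sender: drops a payload into the kafkaTopicSend channel;
// the outbound-channel-adapter polls the channel and hands the message
// to the producer context defined in spring-kafka-producer.xml.
@Service
public class KafkaProducerService {

    @Autowired
    private MessageChannel kafkaTopicSend;

    public void send(String topic, String payload) {
        kafkaTopicSend.send(MessageBuilder.withPayload(payload)
                // "kafka_topic" selects the dealMessage or keepAlive
                // producer-configuration declared above (assumption:
                // this is the 1.x KafkaHeaders.TOPIC header name)
                .setHeader("kafka_topic", topic)
                .build());
    }
}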

The consumer configuration file, spring-kafka-consumer.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:int="http://www.springframework.org/schema/integration"
     xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
     xmlns:task="http://www.springframework.org/schema/task"
     xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
    http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
 
    <!-- topic test conf -->
    <int:channel id="inputFromKafka" >
        <int:dispatcher task-executor="kafkaMessageExecutor" />
    </int:channel>
    <!-- ZooKeeper configuration; multiple servers can be configured -->
    <int-kafka:zookeeper-connect id="zookeeperConnect"
        zk-connect="192.168.10.170:2181" zk-connection-timeout="6000"
        zk-session-timeout="12000" zk-sync-time="200" />
    <!-- channel configuration: auto-startup must be "true", otherwise no data is received -->
    <int-kafka:inbound-channel-adapter
        id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
        auto-startup="true" channel="inputFromKafka">
        <int:poller fixed-delay="1" time-unit="MILLISECONDS" />
    </int-kafka:inbound-channel-adapter>
    <task:executor id="kafkaMessageExecutor" pool-size="8" keep-alive="120" queue-capacity="500" />
    <bean id="kafkaDecoder"
        class="org.springframework.integration.kafka.serializer.common.StringDecoder" />
 
    <bean id="consumerProperties"
        class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
            </props>
        </property>
    </bean>
    <!-- bean that receives the messages -->
    <bean id="kafkaConsumerService" class="cn.test.kafka.KafkaConsumerService" />
    <!-- specify the receiving method -->
    <int:outbound-channel-adapter channel="inputFromKafka"
        ref="kafkaConsumerService" method="processMessage" />
 
    <int-kafka:consumer-context id="consumerContext"
        consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
        consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration
                group-id="default1" value-decoder="kafkaDecoder" key-decoder="kafkaDecoder"
                max-messages="5000">
                <!-- two topic configurations -->
                <int-kafka:topic id="dealMessage" streams="4" />
                <int-kafka:topic id="keepAlive" streams="4" />
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
</beans>
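And a sketch of the cn.test.kafka.KafkaConsumerService bean referenced above. With the 1.x high-level consumer adapter, my understanding is that each poll delivers a nested map of topic -> partition -> decoded messages; the exact generic types below are an assumption based on the StringDecoder configured here:

package cn.test.kafka;

import java.util.List;
import java.util.Map;

// Receives message batches from inputFromKafka via the
// outbound-channel-adapter wiring in spring-kafka-consumer.xml.
public class KafkaConsumerService {

    public void processMessage(Map<String, Map<Integer, List<String>>> messages) {
        // topic -> (partition -> messages fetched in this poll)
        for (Map.Entry<String, Map<Integer, List<String>>> topicEntry : messages.entrySet()) {
            for (Map.Entry<Integer, List<String>> partitionEntry : topicEntry.getValue().entrySet()) {
                for (String payload : partitionEntry.getValue()) {
                    System.out.println("topic=" + topicEntry.getKey()
                            + " partition=" + partitionEntry.getKey()
                            + " payload=" + payload);
                }
            }
        }
    }
}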

After the project starts, the Kafka producer configuration is logged:

[2019-06-12 15:09:42.093][INFO ][][ org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:113)
] ==> ProducerConfig values: 
	compression.type = none
	metric.reporters = []
	metadata.max.age.ms = 300000
	metadata.fetch.timeout.ms = 60000
	acks = 1
	batch.size = 16384
	reconnect.backoff.ms = 10
	bootstrap.servers = [192.168.10.170:9092, 192.168.10.170:9093, 192.168.10.170:9094]
	receive.buffer.bytes = 32768
	retry.backoff.ms = 100
	buffer.memory = 33554432
	timeout.ms = 30000
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	retries = 0
	max.request.size = 1048576
	block.on.buffer.full = true
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	metrics.sample.window.ms = 30000
	send.buffer.bytes = 131072
	max.in.flight.requests.per.connection = 5
	metrics.num.samples = 2
	linger.ms = 0
	client.id = 

Three references:

1. https://stackoverflow.com/questions/33432027/kafka-error-in-i-o-java-io-eofexception-null

Its content, roughly:

Question:

I am using Kafka 0.8.2.0 (Scala 2.10). In my log files, I see the following message intermittently. This seems like a connectivity issue, but I’m running both in my localhost.
Is this a harmless warning message or should I do something to avoid it?


2015-10-30 14:12:38.015  WARN 4251 --- [ad | producer-1] [                                    ] o.apache.kafka.common.network.Selector   : Error in I/O with localhost/127.0.0.1

java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
    at java.lang.Thread.run(Thread.java:745)
Phil Brock:

This is a bit later to the party, but may help someone - it would have helped me.

What you’re seeing occurs because the Kafka broker is passively closing the connection after a certain period of idleness is exceeded. It’s defined by this broker property: connections.max.idle.ms - the default is 10 minutes.

Apparently the kafka client in 0.8.x doesn’t honour that setting and just leaves idle connections open. You’ll see the warning in your logs but it should have no bad effect on your application.

More details here: https://issues.apache.org/jira/browse/KAFKA-3205

The broker config is documented here: https://kafka.apache.org/090/documentation/#configuration

In that table you’ll find:

Name: connections.max.idle.ms
Description: Idle connections timeout: the server socket processor threads close the connections that idle more than this
Type:long
Default: 600000

Hope that helps.


2. https://issues.apache.org/jira/browse/KAFKA-3205

Its content, roughly:

Issue:

In a situation with a Kafka broker in 0.9 and producers still in 0.8.2.x, producers seems to raise the following after a variable amount of time since start :

2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 172.22.2.170
java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62) ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:248) ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]

This can be reproduced successfully by doing the following :

  1. Start a 0.8.2 producer connected to the 0.9 broker
  2. Wait 15 minutes, exactly
  3. See the error in the producer logs.
    Oddly, this also shows up in an active producer but after 10 minutes of activity.

Kafka’s server.properties :

broker.id=1
listeners=PLAINTEXT://:9092
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/mnt/data/kafka
num.partitions=4
auto.create.topics.enable=false
delete.topic.enable=true
num.recovery.threads.per.data.dir=1
log.retention.hours=48
log.retention.bytes=524288000
log.segment.bytes=52428800
log.retention.check.interval.ms=60000
log.roll.hours=24
log.cleanup.policy=delete
log.cleaner.enable=true
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=1000000

Producer’s configuration :

	compression.type = none
	metric.reporters = []
	metadata.max.age.ms = 300000
	metadata.fetch.timeout.ms = 60000
	acks = all
	batch.size = 16384
	reconnect.backoff.ms = 10
	bootstrap.servers = [127.0.0.1:9092]
	receive.buffer.bytes = 32768
	retry.backoff.ms = 500
	buffer.memory = 33554432
	timeout.ms = 30000
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	retries = 3
	max.request.size = 5000000
	block.on.buffer.full = true
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
	metrics.sample.window.ms = 30000
	send.buffer.bytes = 131072
	max.in.flight.requests.per.connection = 5
	metrics.num.samples = 2
	linger.ms = 0
	client.id = 
Top answer:
Mart Haitjema

I also ran into this issue and discovered that the broker closes connections that have been idle for connections.max.idle.ms (https://kafka.apache.org/090/configuration.html#brokerconfigs) which has a default of 10 minutes.
While this parameter was introduced in 0.8.2 (https://kafka.apache.org/082/configuration.html#brokerconfigs) it wasn’t actually enforced by the broker until 0.9.0 which closes the connections inside Selector.java::maybeCloseOldestConnection()
(see https://github.com/apache/kafka/commit/78ba492e3e70fd9db61bc82469371d04a8d6b762#diff-d71b50516bd2143d208c14563842390a).
While the producer config also defines this parameter with a default of 9 minutes, it does not appear to be respected by the 0.8.2.x clients which mean idle connections aren’t being closed on the client-side but are timed out by the broker.
When the broker drops the connection, it results in an java.io.EOFException: null exception on the producer-side that looks exactly like the one shown in the description.

To work around this issue, we explicitly set the connections.max.idle.ms to something very large in the broker config (e.g. 1 year) which seems to have mitigated the problem for us.
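In server.properties terms, that workaround looks like the following (the concrete value is only an example; anything comfortably longer than your producers' idle periods will do):

# Broker-side workaround for 0.8.2.x clients: effectively disable the
# idle-connection reaper so the broker stops passively closing connections.
# Value is in milliseconds; 31536000000 ms is roughly one year.
connections.max.idle.ms=31536000000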


ASF GitHub Bot

GitHub user bondj opened a pull request:

https://github.com/apache/kafka/pull/1166

KAFKA-3205 Support passive close by broker

An attempt to fix KAFKA-3205. It appears the problem is that the broker has closed the connection passively, and the client should react appropriately.

In NetworkReceive.readFrom() rather than throw an EOFException (Which means the end of stream has been reached unexpectedly during input), instead return the negative bytes read signifying an acceptable end of stream.

In Selector if the channel is being passively closed, don’t try to read any more data, don’t try to write, and close the key.

I believe this will fix the problem.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bondj/kafka passiveClose

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

commit 5dc11015435a38a0d97efa2f46b4d9d9f41645b5
Author: Jonathan Bond <jbond@netflix.com>
Date: 2016-03-30T03:57:11Z

Support passive close by broker

This closes #1166
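The gist of the change, as a standalone sketch rather than the actual patch (see the PR above for the real code): when the broker has passively closed the socket, channel.read() returns -1, and instead of converting that into an EOFException the receive path lets the negative count flow back so the selector can close the channel quietly.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ScatteringByteChannel;

// Sketch of the idea behind KAFKA-3205 / PR #1166 (not the actual patch).
public class PassiveCloseSketch {

    public static long readSize(ScatteringByteChannel channel, ByteBuffer sizeBuffer)
            throws IOException {
        long bytesRead = channel.read(sizeBuffer);
        // Old behavior: if (bytesRead < 0) throw new EOFException();
        // Patched behavior: return the negative count so the caller treats
        // it as an orderly passive close instead of logging a WARN.
        return bytesRead;
    }
}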


Flavio Junqueira

The changes currently in 0.9+ doesn’t have as many messages printed out because both ends, client and server, enforce the connection timeout. The change discussed in the pull request doesn’t print it in the case of a passive close initiated by the server (in 0.9 the timeout is enforced), which is desirable only because it pollutes the logs otherwise. It is better that we keep these messages in 0.9 and later to be informed of connections being closed. They are not supposed to happen very often, but if it turns out to be a problem, we can revisit this issue.


3. https://github.com/apache/kafka/pull/1166

Author of the fix: Jonathan Bond
The attached commit contains the code-level fix.
