【kafka】Connection to node -1 terminated during authentication. This may happen due to any of

Preface

Recently, while using Java to consume from a Kafka service, I ran into an error; the details are below.

Environment

  • Kafka 2.5.0
  • Kerberos

Error message

Connection to node -1) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.

[2020-05-18 18:14:38,615] INFO [Group Metadata Manager on Broker 193]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-18 18:24:38,615] INFO [Group Metadata Manager on Broker 193]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-18 18:34:38,616] INFO [Group Metadata Manager on Broker 193]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-18 18:44:02,879] WARN Unexpected error from ; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = -720899)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:89)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:748)
[2020-05-18 18:44:38,615] INFO [Group Metadata Manager on Broker 193]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2020-05-18 18:54:38,615] INFO [Group Metadata Manager on Broker 193]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)

Troubleshooting checklist

  1. Clock synchronization, both between the application and the Kafka cluster and among the brokers themselves. This is important.
  2. Whether the hostnames in use can actually reach the KDC and the Kafka servers over the expected network ports.
  3. Whether the firewall is disabled; also check SELinux.
  4. Whether the contents of the three files krb5.conf, kafka_client_jaas.conf, and xxx.keytab are correct, and whether their owning user and permissions are correct.
  5. Whether the required JVM parameters are specified, or the corresponding system properties are set in code.
  6. Whether the following settings are configured correctly (see the sketch after this list):
    properties.setProperty("sasl.kerberos.service.name", "kafka");
    properties.setProperty("sasl.mechanism", "GSSAPI");
    properties.setProperty("security.protocol", "SASL_PLAINTEXT");
  7. Whether the versions of the Kafka-related jars are consistent and free of dependency conflicts.
  8. Whether the network to the KDC server is healthy (no packet loss). Kerberos uses UDP by default; switching to TCP is recommended (for example, set udp_preference_limit = 1 in the [libdefaults] section of krb5.conf).
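Putting items 4 through 6 together, below is a minimal sketch of a Kerberos-authenticated Java consumer. The keytab path, principal, broker address, group id, and topic name are all placeholders; only the property names are the standard Kafka client settings.

A kafka_client_jaas.conf along these lines (item 4):

    KafkaClient {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      storeKey=true
      keyTab="/etc/kafka/xxx.keytab"
      principal="user@EXAMPLE.COM";
    };

And the consumer itself (items 5 and 6):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class KerberosConsumerSketch {
        public static void main(String[] args) {
            // Item 5: point the JVM at the Kerberos and JAAS configs. These can equally be
            // passed as -Djava.security.krb5.conf=... and -Djava.security.auth.login.config=...
            System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
            System.setProperty("java.security.auth.login.config", "/etc/kafka/kafka_client_jaas.conf");

            Properties properties = new Properties();
            properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1.example.com:9092");
            properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
            properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            // Item 6: the three SASL/Kerberos settings from the checklist.
            properties.setProperty("security.protocol", "SASL_PLAINTEXT");
            properties.setProperty("sasl.mechanism", "GSSAPI");
            properties.setProperty("sasl.kerberos.service.name", "kafka");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties)) {
                consumer.subscribe(Collections.singletonList("demo-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.printf("%s -> %s%n", r.key(), r.value()));
            }
        }
    }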

For programs that consume Kafka through spark-kafka, note that the three settings "sasl.kerberos.service.name", "sasl.mechanism", and "security.protocol" must be given the prefix "kafka.".
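A minimal sketch of that with Spark Structured Streaming (the spark-sql-kafka connector); the broker and topic are placeholders, and the point is only that options prefixed with "kafka." are passed through to the underlying Kafka consumer:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SparkKafkaKerberosSketch {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder().appName("kerberos-kafka-sketch").getOrCreate();

            // The same three settings as above, but with the "kafka." prefix so Spark
            // forwards them to the Kafka consumer it creates internally.
            Dataset<Row> df = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1.example.com:9092")
                .option("subscribe", "demo-topic")
                .option("kafka.security.protocol", "SASL_PLAINTEXT")
                .option("kafka.sasl.mechanism", "GSSAPI")
                .option("kafka.sasl.kerberos.service.name", "kafka")
                .load();

            df.writeStream().format("console").start().awaitTermination();
        }
    }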

Addendum

I also found some other fixes proposed online:

  1. Adjust the socket.request.max.bytes parameter.
  2. Add a missing Jackson dependency to the POM.

Personally, I don't think either of these is the right direction, but I list them here for reference.
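For completeness, socket.request.max.bytes is a broker-side setting in server.properties; its default is 104857600 (100 MB), and raising it only helps when clients legitimately send requests larger than that limit, which does not appear to be the case here. The value below is purely illustrative:

    # server.properties on the broker; default is 104857600 (100 MB)
    socket.request.max.bytes=209715200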
