FlinkKafkaProducer error when connecting to Kafka running in a VM

Fixing the errors org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka: Expiring 3 record(s) for test-0:120003 ms has passed since batch creation and org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy

This error came up when testing from IDEA against a single-node Kafka running in a virtual machine. The versions involved are fairly new and I could not find an existing write-up of the problem, so I am recording the fix here.

  • Versions

    • flink: 1.13.0
    • scala: 2.11.8
    • kafka: 2.1.0
  • pom.xml

    <properties>
        <flink.version>1.13.0</flink.version>
        <scala.version>2.11</scala.version>
        <kafka.version>2.1.0</kafka.version>
        <scala.binary.version>2.11</scala.binary.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- Starting from Flink 1.11.0, the flink-streaming-java module does not have a dependency
             on flink-clients anymore. If your project was depending on this transitive dependency
             you now have to add flink-clients as an explicit dependency. -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- simple SLF4J console backend for local debugging; add a version if it is not managed by a parent POM -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
        </dependency>
    </dependencies>
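
For context, here is a minimal sketch (in Java) of the kind of test job that triggers the error below. The topic name test and the VM address 192.168.100.130 are taken from the logs and from the fix at the end; the class name and sample data are purely illustrative.

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

    public class KafkaSinkTest {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            // bootstrap.servers already points at the VM's IP; the initial bootstrap connection
            // succeeds, and the failure only appears once the broker advertises itself as "localhost".
            props.setProperty("bootstrap.servers", "192.168.100.130:9092");

            env.fromElements("hello", "flink", "kafka")
               .addSink(new FlinkKafkaProducer<>("test", new SimpleStringSchema(), props));

            env.execute("kafka-producer-test");
        }
    }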
    
  • Detailed error

org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
	at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
	at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
	at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
	at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
	at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:435)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
	at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
	at akka.actor.ActorCell.invoke(ActorCell.scala:561)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
	at akka.dispatch.Mailbox.run(Mailbox.scala:225)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.util.concurrent.ExecutionException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka: Failed to send data to Kafka: Expiring 3 record(s) for test-0:120003 ms has passed since batch creation
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.quiesceTimeServiceAndCloseOperator(StreamOperatorWrapper.java:168)
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:131)
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
	at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:135)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain.closeOperators(OperatorChain.java:448)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:671)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:629)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:776)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
	at java.lang.Thread.run(Thread.java:748)

After turning on DEBUG logging, the following entries appeared:

When the producer fetched metadata from the broker, the METADATA response contained brokers=[{node_id=0,host=localhost,port=9092,rack=null}], so the program running locally tried to reach the broker at localhost, which caused the error above.

[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Using older server API v7 to send METADATA {topics=[{name=test}],allow_auto_topic_creation=true} with correlation id 2 to node -1

[kafka-producer-network-thread | producer-1] TRACE org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Completed receive from node -1 for METADATA with correlation id 2, received {throttle_time_ms=0,brokers=[{node_id=0,host=localhost,port=9092,rack=null}],cluster_id=EfJ-cImoSjas4Ql26TYA9w,controller_id=0,topics=[{error_code=0,name=test,is_internal=false,partitions=[{error_code=0,partition_index=0,leader_id=0,leader_epoch=0,replica_nodes=[0],isr_nodes=[0],offline_replicas=[]}]}]}
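
To confirm this diagnosis without involving Flink, you can ask the broker directly which addresses it advertises. Below is a minimal sketch using the Kafka AdminClient (the bootstrap address is assumed to be the VM's IP); before the fix it prints localhost:9092 for broker 0.

    import java.util.Properties;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.Node;

    public class ShowAdvertisedBrokers {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.100.130:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // These are the addresses clients will actually use after the initial bootstrap;
                // before the fix this prints "broker 0 -> localhost:9092".
                for (Node node : admin.describeCluster().nodes().get()) {
                    System.out.printf("broker %d -> %s:%d%n", node.id(), node.host(), node.port());
                }
            }
        }
    }
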
  • Solution

Modify the Kafka configuration file on the server to explicitly specify the IP address instead of localhost, then restart the broker so the change takes effect:

config/server.properties

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Add this line: explicitly specify the IP address instead of localhost
listeners=PLAINTEXT://192.168.100.130:9092

Note: I do not recommend changing the port; when I tested that, the Flink program still received the original port from the broker metadata.
