Flink error: ByteArraySerializer is not an instance of org.apache.kafka.common.serialization.Serializer

This article describes a KafkaProducer construction failure that occurred while writing socket stream data to Kafka with Apache Flink. The root cause is a dependency conflict introduced by the class loader resolution order; the fix is to change the classloader.resolve-order parameter in flink-conf.yaml to parent-first.


Problem:
  The job reads data from a socket stream and writes it to Kafka:

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.kafka.clients.producer.ProducerConfig;

        Configuration conf = new Configuration();
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.setParallelism(4);
        // EXACTLY_ONCE checkpointing is required for the transactional Kafka sink below
        env.enableCheckpointing(2000, CheckpointingMode.EXACTLY_ONCE);

        // Read text lines from a socket
        DataStreamSource<String> source = env.socketTextStream("172.18.26.53", 7777);

        // Transactional Kafka sink with exactly-once delivery
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("172.18.26.218:9092")
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.<String>builder()
                                .setTopic("test-topic")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("cz-")
                // Transaction timeout must not exceed the broker's transaction.max.timeout.ms
                .setProperty(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 1 * 60 * 1000 + "")
                .build();
        source.sinkTo(sink);

        env.execute();

  After submitting the job to the YARN cluster with bin/flink run-application -t yarn-application -Dyarn.provided.lib.dirs="hdfs://nameservice1/flink-dist" -c com.hex.cz.CZDemo examples/FlinkTutorial-1.17-1.0-SNAPSHOT.jar, it failed with the error below.

Cause:
  Check the Flink job logs with yarn logs -applicationId application_1706238034141_6936:

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:439) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:289) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:316) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:301) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:55) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.KafkaWriter.getOrCreateTransactionalProducer(KafkaWriter.java:326) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.TransactionAborter.abortTransactionOfSubtask(TransactionAborter.java:104) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.TransactionAborter.abortTransactionsWithPrefix(TransactionAborter.java:82) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.TransactionAborter.abortLingeringTransactions(TransactionAborter.java:66) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.KafkaWriter.abortLingeringTransactions(KafkaWriter.java:289) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.KafkaWriter.<init>(KafkaWriter.java:176) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.KafkaSink.createWriter(KafkaSink.java:111) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.connector.kafka.sink.KafkaSink.createWriter(KafkaSink.java:57) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.flink.streaming.runtime.operators.sink.StatefulSinkWriterStateHandler.createWriter(StatefulSinkWriterStateHandler.java:117) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.initializeState(SinkWriterOperator.java:146) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:274) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:734) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:709) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745) ~[flink-dist-1.17.0.jar:1.17.0]
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562) ~[flink-dist-1.17.0.jar:1.17.0]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
Caused by: org.apache.kafka.common.KafkaException: class org.apache.kafka.common.serialization.ByteArraySerializer is not an instance of org.apache.kafka.common.serialization.Serializer
	at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:403) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:434) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:419) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:365) ~[FlinkTutorial-1.17-1.0-SNAPSHOT.jar:?]
	... 26 more

Solution:
  The message "class org.apache.kafka.common.serialization.ByteArraySerializer is not an instance of org.apache.kafka.common.serialization.Serializer" points to a dependency conflict: the Kafka client classes exist both in the user jar and on the cluster classpath, so with the default child-first class loading the ByteArraySerializer class and the Serializer interface end up loaded by different class loaders and the check fails. Change the classloader.resolve-order parameter in conf/flink-conf.yaml from the default child-first to parent-first and resubmit the job.
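
A minimal sketch of the change, assuming a standard conf/flink-conf.yaml on the nodes the cluster is started from:

    # conf/flink-conf.yaml
    # Default is child-first; parent-first lets classes already on Flink's
    # classpath win over copies bundled in the user jar.
    classloader.resolve-order: parent-first

If editing the cluster-wide configuration is not desirable, the option should also be accepted as a dynamic property on the submit command (for example -Dclassloader.resolve-order=parent-first with run-application), and a more targeted variant is classloader.parent-first-patterns.additional: org.apache.kafka., which forces only the Kafka packages to parent-first; both are alternatives worth verifying against your Flink version.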