Problem connecting a Flink Table to Kafka

Error message

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:55)
	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:92)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.findAndCreateTableSource(CatalogSourceTable.scala:162)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.tableSource$lzycompute(CatalogSourceTable.scala:65)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.tableSource(CatalogSourceTable.scala:65)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.getQualifiedName(CatalogSourceTable.scala:67)
	at org.apache.calcite.tools.RelBuilder$Frame.deriveAlias(RelBuilder.java:2824)
	at org.apache.calcite.tools.RelBuilder$Frame.<init>(RelBuilder.java:2810)
	at org.apache.calcite.tools.RelBuilder$Frame.<init>(RelBuilder.java:2800)
	at org.apache.calcite.tools.RelBuilder.push(RelBuilder.java:300)
	at org.apache.calcite.tools.RelBuilder.scan(RelBuilder.java:1073)
	at org.apache.calcite.tools.RelBuilder.scan(RelBuilder.java:1094)
	at org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:310)
	at org.apache.flink.table.planner.plan.QueryOperationConverter$SingleRelVisitor.visit(QueryOperationConverter.java:148)
	at org.apache.flink.table.operations.CatalogQueryOperation.accept(CatalogQueryOperation.java:69)
	at org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:145)
	at org.apache.flink.table.planner.plan.QueryOperationConverter.defaultMethod(QueryOperationConverter.java:125)
	at org.apache.flink.table.operations.utils.QueryOperationDefaultVisitor.visit(QueryOperationDefaultVisitor.java:91)
	at org.apache.flink.table.operations.CatalogQueryOperation.accept(CatalogQueryOperation.java:69)
	at org.apache.flink.table.planner.calcite.FlinkRelBuilder.queryOperation(FlinkRelBuilder.scala:159)
	at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:216)
	at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
	at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
	at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl.toDataStream(StreamTableEnvironmentImpl.scala:210)
	at org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImpl.toAppendStream(StreamTableEnvironmentImpl.scala:107)
	at org.apache.flink.table.api.scala.TableConversions.toAppendStream(TableConversions.scala:101)
	at TableAPI.TableApiTest$.main(TableApiTest.scala:49)
	at TableAPI.TableApiTest.main(TableApiTest.scala)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.

Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
Mismatched properties:
'connector.version' expects 'universal', but is '2.0.0'

The following properties are requested:
connector.properties.bootstrap.servers=192.168.95.99:9092
connector.properties.zookeeper.connect=192.168.95.99:2181
connector.property-version=1
connector.topic=tableTest
connector.type=kafka
connector.version=2.0.0
format.property-version=1
format.type=csv
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=id
schema.1.data-type=BIGINT
schema.1.name=ts
schema.2.data-type=DOUBLE
schema.2.name=vc

The following factories have been considered:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
	at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
	at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
	at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
	at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:96)
	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:52)
	... 36 more

Problem description

  • This error occurred while reading data from Kafka into a Flink Table. The root cause is the Kafka version configured on the connector:
	tableEnv.connect(new Kafka()
        .version("2.0.0") // the problem is here
        .topic("tableTest")
        .property("zookeeper.connect", "192.168.**.**:2181")
        .property("bootstrap.servers", "192.168.**.**:9092")
    ).withFormat(new Csv())
        .withSchema(new Schema()
            .field("id", DataTypes.STRING())
            .field("ts", DataTypes.BIGINT())
            .field("vc", DataTypes.DOUBLE())
        ).createTemporaryTable("kafkaInputTable")

Key lines from the error

The matching candidates:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
Mismatched properties:
'connector.version' expects 'universal', but is '2.0.0'
  • The error message is actually quite explicit: Flink expects version to be universal, but I passed in 2.0.0.
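
Changing that one string is enough. Below is a minimal, self-contained sketch of the corrected table definition, assuming the Flink 1.10-era Scala Table API that appears in the stack trace above (addresses and topic are the placeholders from the original snippet):

	import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
	import org.apache.flink.table.api.DataTypes
	import org.apache.flink.table.api.scala.StreamTableEnvironment
	import org.apache.flink.table.descriptors.{Csv, Kafka, Schema}

	val env = StreamExecutionEnvironment.getExecutionEnvironment
	val tableEnv = StreamTableEnvironment.create(env)

	tableEnv.connect(new Kafka()
	        .version("universal") // what KafkaTableSourceSinkFactory expects
	        .topic("tableTest")
	        .property("zookeeper.connect", "192.168.**.**:2181")
	        .property("bootstrap.servers", "192.168.**.**:9092")
	    ).withFormat(new Csv())
	        .withSchema(new Schema()
	            .field("id", DataTypes.STRING())
	            .field("ts", DataTypes.BIGINT())
	            .field("vc", DataTypes.DOUBLE())
	        ).createTemporaryTable("kafkaInputTable")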

What is universal?

  • Flink ships with several Kafka connectors: the universal one, 0.10, and 0.11.

  • The official documentation explains that the universal connector tries to track the latest Kafka client version and is compatible with Kafka 0.10 and later; it also says that for the vast majority of cases this is the one to use. The current documentation includes a migration guide for this universal connector:

Migrating Kafka Connector from 0.11 to universal
In order to perform the migration, see the upgrading jobs and Flink versions guide and:

Use Flink 1.9 or newer for the whole process.
Do not upgrade Flink and user operators at the same time.
Make sure that Kafka Consumer and/or Kafka Producer used in your job have assigned unique identifiers (uid):
Use the stop with savepoint feature to take the savepoint (for example by using the stop --withSavepoint CLI command).
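
The uid requirement in the third step can be met by tagging the Kafka operators explicitly. A minimal DataStream-API sketch, assuming the universal FlinkKafkaConsumer; the uid string and properties here are illustrative, not from the original job:

	import java.util.Properties
	import org.apache.flink.api.common.serialization.SimpleStringSchema
	import org.apache.flink.streaming.api.scala._
	import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

	val env = StreamExecutionEnvironment.getExecutionEnvironment
	val props = new Properties()
	props.setProperty("bootstrap.servers", "192.168.**.**:9092")

	// A stable uid lets the savepoint taken before the upgrade be
	// restored onto the same operator after switching connectors.
	env
	  .addSource(new FlinkKafkaConsumer[String]("tableTest", new SimpleStringSchema(), props))
	  .uid("kafka-source") // illustrative identifier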
  • However, if you are running Kafka 0.11.x or 0.10.x, the official recommendation is to use the corresponding version-specific connector instead; see the dependency sketch below.
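
Which version values are accepted also depends on which connector artifact is on the classpath. A sketch of the sbt coordinates, assuming Flink 1.10.0 and Scala 2.11 (both are assumptions; match them to your own build):

	// universal connector (Kafka 0.10+), pairs with version("universal")
	libraryDependencies += "org.apache.flink" %% "flink-connector-kafka" % "1.10.0"

	// version-specific connectors, pair with version("0.11") / version("0.10")
	// libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.11" % "1.10.0"
	// libraryDependencies += "org.apache.flink" %% "flink-connector-kafka-0.10" % "1.10.0"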