[flink1.14.4] Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'


After upgrading to Flink 1.14.4, the job fails with:

Caused by: org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'  

2022-03-11 16:45:04,169 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.class=org.apache.flink.metrics.influxdb.InfluxdbReporter
2022-03-11 16:45:04,170 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: state.backend.rocksdb.ttl.compaction.filter.enabled=true
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.db=flink
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.host=192.168.5.57
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: web.timeout=120000
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: akka.ask.timeout=120 s
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: env.java.opts=-verbose:gc -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:ParallelGCThreads=4 -Duser.timezone=Asia/Shanghai
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: metrics.reporter.influxdb.port=8086
2022-03-11 16:45:04,171 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: akka.watch.heartbeat.interval=10 s
2022-03-11 16:45:04,172 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: slotmanager.taskmanager-timeout=600000
2022-03-11 16:45:04,172 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: containerized.heap-cutoff-min=100
Table DDL parsed successfully! connector: upsert-kafka
Setting Kafka timestamp: upsert-kafka
Table DDL parsed successfully! connector: print
hive conf path: /etc/ecm/hive-conf
create iceberg catalog success!
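
The "create iceberg catalog success!" line indicates the job registers a Hive-backed Iceberg catalog before running any SQL. The exact statement is internal to FlinkJobBoot and not shown in the log; a typical Flink SQL equivalent, with the catalog name chosen here purely for illustration, would look like:

CREATE CATALOG iceberg_catalog WITH (
  'type' = 'iceberg',
  'catalog-type' = 'hive',
  -- points the Hive metastore client at the cluster's Hive config, per the log line above
  'hive-conf-dir' = '/etc/ecm/hive-conf'
)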
start to run sql:CREATE TABLE new_buyer_trade_order2 (
  database VARCHAR,
  `table` VARCHAR,
  type VARCHAR,
  ts BIGINT,
  xid BIGINT,
  xoffset BIGINT,
  data VARCHAR,
  `old` VARCHAR
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'new_buyer_trade_order2',
  'properties.bootstrap.servers' = '192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092',
  'key.format' = 'json',
  'value.format' = 'json'
)
start to run sql:create table result_print (
  database VARCHAR,
  `table` VARCHAR,
  type VARCHAR,
  ts BIGINT,
  xid BIGINT,
  xoffset BIGINT,
  data VARCHAR,
  `old` VARCHAR
) with (
  'connector' = 'print'
)
start to run sql:insert into result_print select database, `table`, type, ts, xid, xoffset, database, `old` from new_buyer_trade_order2

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'.

Table options are:

'connector'='upsert-kafka'
'key.format'='json'
'properties.bootstrap.servers'='192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092'
'topic'='new_buyer_trade_order2'
'value.format'='json'
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
	at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.new_buyer_trade_order2'.

Table options are:

'connector'='upsert-kafka'
'key.format'='json'
'properties.bootstrap.servers'='192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092'
'topic'='new_buyer_trade_order2'
'value.format'='json'
	at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:150)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:116)
	at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:82)
	at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
	at org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:177)
	at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:169)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1057)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1026)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:301)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:639)
	at org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:290)
	at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:834)
	at com.gegejia.flink.FlinkJobBoot.main(FlinkJobBoot.java:95)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
	... 11 more
Caused by: org.apache.flink.table.api.ValidationException: 'upsert-kafka' tables require to define a PRIMARY KEY constraint. The PRIMARY KEY specifies which columns should be read from or write to the Kafka message key. The PRIMARY KEY also defines records in the 'upsert-kafka' table should update or delete on which keys.
	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.validatePKConstraints(UpsertKafkaDynamicTableFactory.java:261)
	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.validateSource(UpsertKafkaDynamicTableFactory.java:219)
	at org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.createDynamicTableSource(UpsertKafkaDynamicTableFactory.java:119)
	at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:147)
	... 37 more

Cause: the source table was defined without a PRIMARY KEY. The primary-key line in the DDL had been commented out; after uncommenting it, the job submitted successfully.
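
For reference, a corrected DDL looks like the sketch below. The key columns chosen here (xid, xoffset) are an assumption for illustration; use whichever columns are actually serialized into the Kafka message key. Note that Flink requires the NOT ENFORCED clause, since it cannot validate the constraint on external data:

CREATE TABLE new_buyer_trade_order2 (
  database VARCHAR,
  `table` VARCHAR,
  type VARCHAR,
  ts BIGINT,
  xid BIGINT,
  xoffset BIGINT,
  data VARCHAR,
  `old` VARCHAR,
  -- upsert-kafka requires a primary key; these key columns are illustrative
  PRIMARY KEY (xid, xoffset) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'new_buyer_trade_order2',
  'properties.bootstrap.servers' = '192.168.8.142:9092,192.168.8.141:9092,192.168.8.143:9092',
  'key.format' = 'json',
  'value.format' = 'json'
)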

"failed to execute job 'insert-into_default_catalog.default_database.my_sink'" is a related error message that commonly appears when running jobs on Flink or a similar distributed compute framework. It means the job could not be executed successfully, and there are several possible causes.

First, check the job code for errors. Syntax mistakes, logic problems, or other bugs can make the job fail; look through the job's log files for the error message or exception stack trace to pinpoint the problem.

Second, check whether the job has enough resources. If the memory, CPU, or other resources the job needs are insufficient, it cannot run; increase the job's resource quota or rebalance resources so it can execute normally.

The underlying source or sink may also be at fault. Check that the data source is available and its connection parameters are correct, and do the same for the sink. If necessary, contact the responsible team or an administrator.

Finally, check the job configuration. The configuration determines how the job runs, and a misconfiguration can prevent it from executing; review the job's configuration files and options to make sure they match expectations.

In short, this error can stem from code bugs, insufficient resources, source or sink problems, or misconfiguration; working through these checks one by one will usually surface and resolve the cause.
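
As a quick sketch of the "check the job code first" step: in Flink SQL, an EXPLAIN statement runs the same planning and validation as a real submission (the ValidationException above is thrown during planning), so it can surface errors like the missing PRIMARY KEY without launching the job:

-- validates the source/sink definitions and the query plan without running anything
EXPLAIN PLAN FOR
insert into result_print
select database, `table`, type, ts, xid, xoffset, data, `old`
from new_buyer_trade_order2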