Flink SQL Practical Notes

- Sink Kafka
Error 1: doesn't support consuming update and delete changes which is produced by node TableSourceScan
Answer: Flink 1.11 introduced CDC (Change Data Capture) support, based on the connectors open-sourced by Alibaba. This error appears because the source is mysql-cdc, so the rows it produces are in changelog format; the Kafka sink's WITH clause must therefore declare a changelog-capable format, i.e. 'format' = 'debezium-json'.
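A minimal sketch of such a sink table (table name, columns, topic, and broker address are illustrative, not from the original; only the format option is the point):

```sql
-- Hypothetical Kafka sink able to accept changelog rows from a mysql-cdc source.
CREATE TABLE kafka_sink (
    id   BIGINT,
    name STRING
) WITH (
    'connector' = 'kafka',
    'topic' = 'user_sink',                              -- illustrative topic
    'properties.bootstrap.servers' = 'localhost:9092',  -- illustrative broker
    'format' = 'debezium-json'                          -- encodes update/delete (changelog) rows
);
```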
Error 2: No operators defined in streaming topology. Cannot execute
Answer: not a single operator / operator chain was defined in this streaming job — execution was triggered without any source-to-sink pipeline actually being wired up (e.g. tables were declared but no INSERT INTO was ever issued).
- Source MySQL
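A typical mysql-cdc source declaration looks like the sketch below (all connection values are illustrative, not from the original). The 'server-time-zone' option also pre-empts the time-zone error discussed next:

```sql
-- Hypothetical mysql-cdc source table; hostname/credentials/db/table are placeholders.
CREATE TABLE mysql_source (
    id   BIGINT,
    name STRING,
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'connector' = 'mysql-cdc',
    'hostname' = 'localhost',
    'port' = '3306',
    'username' = 'root',
    'password' = '123456',
    'database-name' = 'test',
    'table-name' = 'user',
    'server-time-zone' = 'Asia/Shanghai'  -- match the MySQL server's actual time zone
);
```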
Error message: The server time zone value '中国标准时间' (shown garbled as 'Öйú±ê׼ʱ¼ä' in the log) is unrecognized or represents more than one time zone
1. set global time_zone = '+8:00';
2. set time_zone = '+8:00';
3. flush privileges;
4. Or specify the time zone explicitly via the serverTimeZone connection parameter.
- Flink SQL job runs fine locally but fails when deployed
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.11</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope> <!-- this line needs to be added -->
</dependency>
- Before version 1.11, a user's DDL had to be declared as follows:
CREATE TABLE user_behavior (
    ...
) WITH (
    'connector.type' = 'kafka',
    'connector.version' = 'universal',
    'connector.topic' = 'user_behavior',
    'connector.startup-mode' = 'earliest-offset',
    'connector.properties.zookeeper.connect' = 'localhost:2181',
    'connector.properties.bootstrap.servers' = 'localhost:9092',
    'format.type' = 'json'
);
In Flink SQL 1.11 and later, this is simplified to:
CREATE TABLE user_behavior (
    ...
) WITH (
    'connector' = 'kafka',
    'topic' = 'user_behavior',
    'scan.startup.mode' = 'earliest-offset',
    'properties.zookeeper.connect' = 'localhost:2181',
    'properties.bootstrap.servers' = 'localhost:9092',
    'format' = 'json'
);
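Comparing the two DDLs above, the option keys were renamed as follows (derived directly from the examples; 'connector.version' = 'universal' has no counterpart in the new example, since the 'kafka' connector there is the universal one):

```sql
-- Pre-1.11 key                             → 1.11+ key
-- 'connector.type'                          → 'connector'
-- 'connector.topic'                         → 'topic'
-- 'connector.startup-mode'                  → 'scan.startup.mode'
-- 'connector.properties.zookeeper.connect'  → 'properties.zookeeper.connect'
-- 'connector.properties.bootstrap.servers'  → 'properties.bootstrap.servers'
-- 'format.type'                             → 'format'
```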