Flink SQL DDL errors from missing dependencies: Could not find a suitable table factory

Use the correct planner

Exception in thread "main" org.apache.flink.table.api.TableException: Could not instantiate the executor. Make sure a planner module is on the classpath
at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.lookupExecutor(StreamTableEnvironmentImpl.java:176)
at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.create(StreamTableEnvironmentImpl.java:138)
at org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamTableEnvironment.java:113)
at com.cbry.flinksql.CreateDDLConsumer.main(CreateDDLConsumer.java:16)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.delegation.ExecutorFactory' in the classpath.

Reason: No factory supports the additional filters.

The following properties are requested:
class-name=org.apache.flink.table.planner.delegation.BlinkExecutorFactory
streaming-mode=true

The following factories have been considered:
org.apache.flink.table.executor.StreamExecutorFactory
at org.apache.flink.table.factories.ComponentFactoryService.find(ComponentFactoryService.java:76)
at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.lookupExecutor(StreamTableEnvironmentImpl.java:167)
… 3 more

Solution

The pom originally declared:

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-table-planner_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>

However, the Flink SQL DDL here runs on the Blink planner, so the dependency needs to be:

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>

Which planner dependency you need depends on the EnvironmentSettings in the code that creates the table environment.

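For instance, a minimal sketch of an environment that requests the Blink planner (Flink 1.10–1.12 era API; variable names here are illustrative, not taken from the original project):

	import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
	import org.apache.flink.table.api.EnvironmentSettings;
	import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

	StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

	// Explicitly ask for the Blink planner; this is what makes
	// flink-table-planner-blink (rather than flink-table-planner) the required jar.
	EnvironmentSettings settings = EnvironmentSettings.newInstance()
			.useBlinkPlanner()
			.inStreamingMode()
			.build();

	StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);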

Next error: the dependency for the declared format type is missing.

The DDL uses 'format.type' = 'json', along the lines of the sketch below.

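For reference, a sketch of such a DDL using the legacy connector.* / format.* properties (table name, topic, fields, and broker address are hypothetical placeholders):

	// Assumes a StreamTableEnvironment named tableEnv, created as shown above.
	String ddl =
			"CREATE TABLE user_behavior (\n" +
			"  user_id BIGINT,\n" +
			"  behavior STRING\n" +
			") WITH (\n" +
			"  'connector.type' = 'kafka',\n" +
			"  'connector.version' = 'universal',\n" +
			"  'connector.topic' = 'user_behavior',\n" +
			"  'connector.properties.bootstrap.servers' = 'localhost:9092',\n" +
			// The line below is what makes flink-json a required dependency.
			"  'format.type' = 'json'\n" +
			")";
	tableEnv.executeSql(ddl);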

Solution

Add:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-json</artifactId>
        <version>${flink.version}</version>
    </dependency>

Another error: no executor found to run the application.

java.lang.IllegalStateException: No ExecutorFactory found to execute the application.
at org.apache.flink.core.execution.DefaultExecutorServiceLoader.getExecutorFactory(DefaultExecutorServiceLoader.java:88)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1895)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1796)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1782)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1765)
at com.cbry.flinksql.CreateDDLConsumer.main(CreateDDLConsumer.java:39)

The fix is to add flink-clients, which supplies the executor factories that the ServiceLoader lookup above is searching for:

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-clients_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>

Finally, the executeSql method could not be resolved. It lives on StreamTableEnvironment:

import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

which in turn comes from the bridge dependency:

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>
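
Putting the pieces together, a minimal end-to-end sketch (the class name mirrors CreateDDLConsumer from the stack traces; the table, topic, and fields are hypothetical):

	package com.cbry.flinksql;

	import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
	import org.apache.flink.table.api.EnvironmentSettings;
	import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

	public class CreateDDLConsumer {
		public static void main(String[] args) throws Exception {
			StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

			// Needs flink-table-planner-blink on the classpath.
			EnvironmentSettings settings = EnvironmentSettings.newInstance()
					.useBlinkPlanner().inStreamingMode().build();

			// StreamTableEnvironment (and executeSql) come from flink-table-api-java-bridge.
			StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);

			tableEnv.executeSql("CREATE TABLE user_behavior (user_id BIGINT, behavior STRING) WITH ("
					+ "'connector.type' = 'kafka',"
					+ "'connector.version' = 'universal',"
					+ "'connector.topic' = 'user_behavior',"
					+ "'connector.properties.bootstrap.servers' = 'localhost:9092',"
					+ "'format.type' = 'json')");    // needs flink-json

			// Running the query locally is what requires flink-clients.
			tableEnv.executeSql("SELECT * FROM user_behavior").print();
		}
	}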

A puzzling issue

In the dev console, after data had been produced, the consumer side never printed the transformed records. But once a Kafka consumer on the server read the topic, re-running the consumer program in the console did show consumed data.
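
A hedged guess rather than a verified diagnosis: this looks like a consumer-group/offset effect. If the source starts from committed group offsets and nothing in that group has committed an offset yet, the job can end up reading from the latest position and miss records produced earlier. One thing worth trying is pinning the startup mode in the DDL (the property below is the legacy-connector option; treat it as an assumption to verify against your Flink version):

	String ddl =
			"CREATE TABLE user_behavior (user_id BIGINT, behavior STRING) WITH ("
			+ "'connector.type' = 'kafka',"
			+ "'connector.version' = 'universal',"
			+ "'connector.topic' = 'user_behavior',"
			+ "'connector.properties.bootstrap.servers' = 'localhost:9092',"
			// Always read from the beginning of the topic instead of
			// relying on previously committed consumer-group offsets.
			+ "'connector.startup-mode' = 'earliest-offset',"
			+ "'format.type' = 'json')";
	tableEnv.executeSql(ddl);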

Summary

		<!-- FlinkSql -->

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-table-api-java-bridge_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-clients_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-json</artifactId>
			<version>${flink.version}</version>
		</dependency>

		<!-- Flink Connect Kafka -->

		<dependency>
			<groupId>org.apache.flink</groupId>
			<artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
			<version>${flink.version}</version>
		</dependency>