Kafka data source

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.duoduo</groupId>
    <artifactId>flinksql111</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-common</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-json</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-jdbc_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.47</version>
        </dependency>
    </dependencies>
</project>

Execution code
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.table.api.{EnvironmentSettings, Table}

/**
 * Author z
 * Date 2020-12-07 20:35:16
 */
object Test1 {
  def main(args: Array[String]): Unit = {
    // 01. Create the Blink execution environment
    val bsEnv = StreamExecutionEnvironment.getExecutionEnvironment
    val bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
    val bsTableEnv = StreamTableEnvironment.create(bsEnv, bsSettings)
    // or: val bsTableEnv = TableEnvironment.create(bsSettings)

    // 02. Define the Kafka source table
    val source_table =
      """
        |create table kafkaInputTable (
        |id varchar,
        |name varchar,
        |age varchar
        |) with (
        |'connector' = 'kafka-0.11',
        |'topic' = 'student_test',
        |'properties.bootstrap.servers'='hadoop01:9092,hadoop02:9092,hadoop03:9092',
        |'scan.startup.mode' = 'earliest-offset',
        |'properties.group.id' = 'testGroup',
        |'format' = 'json'
        |)
        |""".stripMargin
    bsTableEnv.executeSql(source_table)
    val table: Table = bsTableEnv.from("kafkaInputTable")

    // 03. Convert the Kafka table to a stream and print it
    val value = bsTableEnv.toAppendStream[(String, String, String)](table)
    value.print()

    // 04. Define the MySQL sink table
    val sink_table: String =
      """
        |create table student_test (
        | id string,
        | name string,
        | age string
        |) with (
        | 'connector' = 'jdbc',
        | 'url' = 'jdbc:mysql://hadoop01:3306/test?characterEncoding=utf-8&useSSL=false',
        | 'table-name' = 'student_test',
        | 'driver'='com.mysql.jdbc.Driver',
        | 'username' = 'root',
        | 'password' = '123456',
        | 'sink.buffer-flush.interval'='1s',
        | 'sink.buffer-flush.max-rows'='1',
        | 'sink.max-retries' = '5'
        |)
      """.stripMargin

    // 05. Query the Kafka table and insert the result into the MySQL table
    val insert =
      """
        |INSERT INTO student_test SELECT * FROM kafkaInputTable
        |""".stripMargin
    bsTableEnv.executeSql(sink_table)
    bsTableEnv.executeSql(insert)

    bsEnv.execute()
  }
}
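For a quick local test, a JSON record matching the kafkaInputTable schema can be written to the student_test topic with the plain Kafka producer API. This is only a minimal sketch, not part of the job above: the record values are made up, and the broker list is copied from the table definition.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Sketch of a one-shot test producer; the record values are illustrative only.
object ProduceTestRecord {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "hadoop01:9092,hadoop02:9092,hadoop03:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // One JSON object per message, matching the id/name/age columns declared above.
    producer.send(new ProducerRecord[String, String](
      "student_test", """{"id":"1","name":"zhangsan","age":"18"}"""))
    producer.flush()
    producer.close()
  }
}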
Result display

Kafka data printed in IDEA:

Data in MySQL:
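Note that the JDBC connector only writes into an existing table; it does not create student_test in MySQL for you. Below is a minimal sketch of creating the table up front over plain JDBC; the connection settings are copied from the sink DDL above, and the VARCHAR lengths are an assumption.

import java.sql.DriverManager

// Sketch only: create the MySQL target table once, before starting the Flink job.
object CreateSinkTable {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:mysql://hadoop01:3306/test?characterEncoding=utf-8&useSSL=false",
      "root",
      "123456")
    val stmt = conn.createStatement()
    stmt.executeUpdate(
      """CREATE TABLE IF NOT EXISTS student_test (
        |  id   VARCHAR(64),
        |  name VARCHAR(64),
        |  age  VARCHAR(64)
        |)""".stripMargin)
    stmt.close()
    conn.close()
  }
}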

Problem 1: Fixing "No ExecutorFactory found to execute the application"
Starting with Flink 1.11, the job fails with the following error:
Exception in thread "main" java.lang.IllegalStateException: No ExecutorFactory found to execute the application.
    at org.apache.flink.core.execution.DefaultExecutorServiceLoader.getExecutorFactory(DefaultExecutorServiceLoader.java:84)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1801)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1711)
    at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)
    at com.ishansong.bigdata.SqlKafka.main(SqlKafka.java:54)
Solution
The flink-clients jar is missing from the classpath; add the dependency:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
Reason
Since Flink 1.11, flink-streaming-java no longer pulls in flink-clients transitively. The executor implementations are discovered at runtime through Java's ServiceLoader (see DefaultExecutorServiceLoader in the stack trace above), so without flink-clients on the classpath no executor factory can be found.
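A quick way to see what is discoverable on the current classpath is a plain ServiceLoader lookup. This is only a diagnostic sketch, using the Flink 1.11 class names:

import java.util.ServiceLoader
import org.apache.flink.core.execution.PipelineExecutorFactory
import scala.collection.JavaConverters._

// Diagnostic sketch: prints the executor factories visible via ServiceLoader.
// With flink-clients missing, the list is empty, which matches the error above.
object CheckExecutorFactories {
  def main(args: Array[String]): Unit = {
    val factories = ServiceLoader.load(classOf[PipelineExecutorFactory]).iterator().asScala.toList
    if (factories.isEmpty) println("No PipelineExecutorFactory found on the classpath")
    else factories.foreach(f => println(f.getClass.getName))
  }
}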
Problem 2: Running in IDEA requires commenting out
<scope>provided</scope>
Dependencies marked as provided are excluded from the runtime classpath when the job is launched directly from IDEA, so either comment this scope out or enable "Include dependencies with 'Provided' scope" in the IDEA run configuration.
