Flink SQL: Reading Kafka Data and Writing to MySQL (Flink JDBC sink shows no data)

Kafka data source

[Screenshot: sample records in the Kafka topic]
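The screenshot is not reproduced here. Based on the schema of the source table defined below, the topic carries flat JSON records shaped like the following (the values are hypothetical examples, not the actual data):

{"id": "1", "name": "zhangsan", "age": "18"}
{"id": "2", "name": "lisi", "age": "20"}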

POM file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.duoduo</groupId>
    <artifactId>flinksql111</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-common</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-json</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-jdbc_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>1.11.2</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.47</version>
        </dependency>
    </dependencies>
</project>
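Note that the troubleshooting snippet later in this post references a ${flink.version} property, which the POM above does not define (it hardcodes 1.11.2). If you use that snippet as-is, the version would need to be factored out into a property; a minimal sketch:

<properties>
    <flink.version>1.11.2</flink.version>
</properties>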
Job code
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.table.api.{EnvironmentSettings, Table}

/**
 * Author z
 * Date 2020-12-07 20:35:16
 */
object Test1 {
  def main(args: Array[String]): Unit = {
    // **********************
    // 01. Set up the Blink execution environment
    // **********************
    val bsEnv = StreamExecutionEnvironment.getExecutionEnvironment
    val bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
    val bsTableEnv = StreamTableEnvironment.create(bsEnv, bsSettings)
    // or: val bsTableEnv = TableEnvironment.create(bsSettings)

    // 02. Define the Kafka source table
    val source_table =
      """
        |create table kafkaInputTable (
        |id varchar,
        |name varchar,
        |age varchar
        |) with (
        |'connector' = 'kafka-0.11',
        |'topic' = 'student_test',
        |'properties.bootstrap.servers'='hadoop01:9092,hadoop02:9092,hadoop03:9092',
        |'scan.startup.mode' = 'earliest-offset',
        |'properties.group.id' = 'testGroup',
        |'format' = 'json'
        |)
        |""".stripMargin
    bsTableEnv.executeSql(source_table)
    val table: Table = bsTableEnv.from("kafkaInputTable")

    // 03. Convert the Kafka table to a stream and print it
    val value = bsTableEnv.toAppendStream[(String, String, String)](table)
    value.print()

    // 04. Define the MySQL sink table
    val sink_table: String =
      """
        |create table student_test (
        |  id string,
        |  name string,
        |  age string
        |) with (
        |  'connector' = 'jdbc',
        |  'url' = 'jdbc:mysql://hadoop01:3306/test?characterEncoding=utf-8&useSSL=false',
        |  'table-name' = 'student_test',
        |  'driver'='com.mysql.jdbc.Driver',
        |  'username' = 'root',
        |  'password' = '123456',
        |  'sink.buffer-flush.interval'='1s',
        |  'sink.buffer-flush.max-rows'='1',
        |  'sink.max-retries' = '5'
        |)
        |""".stripMargin

    // 05. Query the Kafka table and insert into the MySQL table
    val insert =
      """
        |INSERT INTO student_test SELECT * FROM kafkaInputTable
        |""".stripMargin
    bsTableEnv.executeSql(sink_table)
    bsTableEnv.executeSql(insert)

    // executeSql(insert) already submits the SQL pipeline;
    // execute() runs the DataStream part of the job (the print above)
    bsEnv.execute()
  }
}
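One thing the listing glosses over: the Flink JDBC connector writes into an existing MySQL table, it does not create one. A minimal sketch of the MySQL-side DDL, assuming VARCHAR columns that mirror the Flink schema (the column lengths are illustrative):

-- Hypothetical MySQL table for the sink; must exist before the job runs
CREATE TABLE student_test (
  id   VARCHAR(50),
  name VARCHAR(50),
  age  VARCHAR(50)
);

Also note 'sink.buffer-flush.max-rows' = '1': the JDBC sink buffers rows before flushing, and unflushed buffers are a common reason a sink appears to write no data on a low-volume test topic. Forcing a flush after every row makes each record visible in MySQL immediately.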
Results

Kafka data printed in IDEA:

[Screenshot: Kafka records printed to the IDEA console]

Data in MySQL:

[Screenshot: rows written to the MySQL student_test table]
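A quick way to confirm the write path end to end is to query the sink table from the MySQL client (assuming the test database used in the JDBC URL above):

SELECT * FROM test.student_test;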

Problems

Problem 1: No ExecutorFactory found to execute the application

Starting with Flink 1.11, the job fails with:

Exception in thread "main" java.lang.IllegalStateException: No ExecutorFactory found to execute the application.
  at org.apache.flink.core.execution.DefaultExecutorServiceLoader.getExecutorFactory(DefaultExecutorServiceLoader.java:84)
  at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1801)
  at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1711)
  at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:74)
  at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)
  at com.ishansong.bigdata.SqlKafka.main(SqlKafka.java:54)
Solution

The flink-clients jar is missing. Adding the dependency fixes the error:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>${flink.version}</version>
</dependency>
Cause

[Screenshot: the relevant note from the Flink 1.11 release notes]

Since Flink 1.11, flink-streaming-java no longer pulls in flink-clients transitively, so any project that submits jobs programmatically (for example, running from the IDE) has to declare flink-clients explicitly.

Problem 2: Running in IDEA requires commenting out the following scope

<scope>provided</scope>

[Screenshot: the provided scope commented out in the POM]
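In the POM this looks roughly like the following sketch (flink-clients is used here for illustration; the same applies to any dependency marked provided):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>1.11.2</version>
    <!-- 'provided' keeps the jar off IDEA's runtime classpath, so comment it out for local runs -->
    <!-- <scope>provided</scope> -->
</dependency>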
