FlinkSQL 1.12: Using DDL to Move Data from Kafka to MySQL, Filtering Rows by Condition Before the Write

1. FlinkSQL: using DDL to move data from Kafka to MySQL, filtering by condition before writing

package com.atguigu.day10;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @author dxy
 * @date 2021/4/21 20:58
 */

//TODO Use DDL to move data from Kafka to MySQL
public class FlinkSQL15_SQL_DDL_Kafka_MySQL {
    public static void main(String[] args) throws Exception {
        //1. Get the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        //2. Load data via DDL -- register the source table
        tableEnv.executeSql("create table source_sensor(id string,ts bigint,vc double) " +
                "with (" +
                "'connector.type' = 'kafka'," +
                "'connector.version' = 'universal'," +
                "'connector.topic' = 'test'," +
                "'connector.properties.bootstrap.servers' = 'hadoop102:9092'," +
                "'connector.properties.group.id' = 'bigdata1109'," +
                "'format.type' = 'csv'"
                + ")");

        //3. Filter: keep only the rows whose id is 'ws_001'
        Table table = tableEnv.sqlQuery("select * from source_sensor where id = 'ws_001'");

        //4. Register the sink table: MySQL (JDBC connector)
        tableEnv.executeSql("create table sink_sensor(id string,ts bigint,vc double) " +
                "with (" +
                "'connector' = 'jdbc'," +
                "'url' = 'jdbc:mysql://hadoop102:3306/test'," +
                "'table-name' = 'sink_table'," +
                "'username' = 'root'," +
                "'password' = '123456'"
                + ")");

/*      //Alternative: write the whole source table to MySQL without filtering
        Table source_sensor = tableEnv.from("source_sensor");
        source_sensor.executeInsert("sink_sensor");*/

        //5. Write the filtered result into MySQL (executeInsert submits the job)
        table.executeInsert("sink_sensor");

        //6. Execute the job (not actually needed for a pure Table API pipeline;
        //   see the note in the test section below)
        env.execute();
    }
}
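The WITH options above use the legacy `connector.type` style. Flink 1.12 also documents the newer option style keyed by `'connector' = 'kafka'` (the same style the JDBC sink above already uses). As a sketch, the same source table could be declared like this; `scan.startup.mode` is an added illustrative option, not taken from the original code:

```sql
-- Equivalent Kafka source table using the new-style connector options (Flink 1.12)
CREATE TABLE source_sensor (
  id STRING,
  ts BIGINT,
  vc DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'test',
  'properties.bootstrap.servers' = 'hadoop102:9092',
  'properties.group.id' = 'bigdata1109',
  'scan.startup.mode' = 'latest-offset',  -- where to start reading; adjust as needed
  'format' = 'csv'
);
```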

2. Testing

[atguigu@hadoop102 ~]$ kafka-console-producer.sh --broker-list hadoop102:9092 --topic test
>ws_001,1577844001,45
>ws_002,1577844001,45
>ws_001,1577844001,45

IDEA console

An error is printed, but it does not affect the running job. The error comes from the final env.execute(): the pipeline is built entirely with the Table API, so the StreamExecutionEnvironment has no DataStream operators to execute. The insert job already submitted by executeInsert keeps running regardless.

Check the data in MySQL. Since the query filters on id = 'ws_001', only the two ws_001 rows should land in the table.

The MySQL table must be created before running the job; the JDBC sink does not create it automatically, so there is nothing to test against otherwise.

-- flinksql test: write Kafka data into MySQL
create table sink_table
(id VARCHAR(255),
ts BIGINT,
vc DOUBLE
);

Building on this, we can also take advantage of a MySQL feature: update the row if it already exists, insert it otherwise (upsert).
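A sketch of how that upsert behavior could be wired up, assuming id is a suitable unique key (the original schema declares no key): the Flink JDBC connector switches from append mode to upsert mode when the sink table's DDL declares a PRIMARY KEY, which on MySQL is executed as INSERT ... ON DUPLICATE KEY UPDATE. The key must also exist on the MySQL side:

```sql
-- MySQL side: give the target table a primary key
CREATE TABLE sink_table (
  id VARCHAR(255),
  ts BIGINT,
  vc DOUBLE,
  PRIMARY KEY (id)
);

-- Flink side: declare the same key (NOT ENFORCED, since Flink does not validate it)
create table sink_sensor (
  id string,
  ts bigint,
  vc double,
  primary key (id) not enforced
) with (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://hadoop102:3306/test',
  'table-name' = 'sink_table',
  'username' = 'root',
  'password' = '123456'
);
```

With this in place, re-sending ws_001 records updates the existing row instead of appending duplicates.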
