[Flink 1.14 in Practice] The Flink SQL MySQL Connector in a Docker Environment

JDBC SQL Connector

The JDBC connector allows reading data from, and writing data to, any relational database that provides a JDBC driver. This document describes how to set up the JDBC connector to run SQL queries against a relational database.

If a primary key is defined in the DDL, the JDBC sink operates in upsert mode and exchanges UPDATE/DELETE messages with the external system; otherwise it operates in append mode and cannot consume UPDATE/DELETE messages.
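For example (a minimal sketch; the table name and columns here are only illustrative), a JDBC table declared without a primary key can only receive append-only results, whereas the MyUserTable definition below declares a primary key and therefore writes in upsert mode:

-- Append-only sink: no primary key declared
CREATE TABLE append_only_table (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydatabase',
  'table-name' = 'append_only_table',
  'username' = 'root',
  'password' = '123456'
);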

依赖
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-jdbc_2.11/1.14.4/flink-connector-jdbc_2.11-1.14.4.jar
wget https://repo.maven.apache.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar

Add the corresponding jars to Flink's lib directory.

How to create a JDBC table

A JDBC table can be defined as follows:

-- Register a MySQL table 'users' in Flink SQL
CREATE TABLE MyUserTable (
  id BIGINT,
  name STRING,
  age INT,
  status BOOLEAN,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
   'connector' = 'jdbc',
   'url' = 'jdbc:mysql://localhost:3306/mydatabase',
   'table-name' = 'users',
   'username' = 'root',
   'password' = '123456'
);

-- Write data into the JDBC table from another table "T"
INSERT INTO MyUserTable
SELECT id, name, age, status FROM T;

-- Read data from the JDBC table
SELECT id, name, age, status FROM MyUserTable;

-- Use the JDBC table as a dimension table in a temporal table join
SELECT * FROM myTopic
LEFT JOIN MyUserTable FOR SYSTEM_TIME AS OF myTopic.proctime
ON myTopic.key = MyUserTable.id;
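
The temporal join above assumes a source table myTopic with a processing-time attribute. A minimal sketch of such a source, using the datagen connector purely for illustration (all names here are hypothetical):

CREATE TABLE myTopic (
  `key` BIGINT,
  payload STRING,
  proctime AS PROCTIME()   -- processing-time attribute used in FOR SYSTEM_TIME AS OF
) WITH (
  'connector' = 'datagen'
);
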
Hands-On

Based on docker-compose.

1. Edit docker-compose.yml

version: "3"
services:
  jobmanager:
    image: flink:1.14.4-scala_2.11
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager        

  taskmanager:
    image: flink:1.14.4-scala_2.11
    depends_on:
      - jobmanager
    command: taskmanager
    scale: 1
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 4   
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./data:/var/lib/mysql
      - ./mysql-init:/docker-entrypoint-initdb.d
    command: [
          'mysqld',
          '--innodb-buffer-pool-size=80M',
          '--character-set-server=utf8mb4',
          '--collation-server=utf8mb4_unicode_ci',
          '--default-time-zone=+8:00',
          '--lower-case-table-names=1',
          '--skip-name-resolve'
        ]
    environment:
      MYSQL_USER: "sql-demo"
      MYSQL_PASSWORD: "demo-sql"
      MYSQL_DATABASE: "sql-demo"
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"

2. Create the mysql-init directory and, inside it, the file create-table.sql

DROP TABLE IF EXISTS `book`;
CREATE TABLE book (
  id INT NOT NULL,
  title varchar(30),
  author varchar(30),
  price INT,
  PRIMARY KEY (id)
);

INSERT INTO `book` VALUES ('1', 'zhangsan', '123456', '18'), ('2', 'lishi', '123', '17'), ('3', 'wangwu', '1234', '18'), ('4', 'jam', '12345', '16');
COMMIT;

3. Start the services

$ docker-compose up -d

4. Explore the data in MySQL

$ docker-compose exec mysql mysql -Dsql-demo -usql-demo -pdemo-sql

mysql> use sql-demo;
Database changed

mysql> select count(*) from book;

With the four rows loaded from create-table.sql, this should return 4.

5. Copy the dependency jars

$ docker cp flink-connector-jdbc_2.11-1.14.4.jar mysql_taskmanager_1:/opt/flink/lib
$ docker cp mysql-connector-java-8.0.28.jar  mysql_taskmanager_1:/opt/flink/lib

# Inspect the lib directory
docker exec -it mysql_jobmanager_1 /bin/bash
# Restart
$ docker-compose restart
Note: without a restart, the jars above are not loaded and queries fail with:
java.lang.ClassNotFoundException: org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
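
The SQL client in the next step runs inside the jobmanager container, so the same jars typically need to be on its classpath as well. A quick check, assuming the same container names as above:

$ docker cp flink-connector-jdbc_2.11-1.14.4.jar mysql_jobmanager_1:/opt/flink/lib
$ docker cp mysql-connector-java-8.0.28.jar mysql_jobmanager_1:/opt/flink/lib
$ docker exec mysql_jobmanager_1 ls /opt/flink/lib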

6. Create the book table in Flink, with the same columns as the MySQL table

$ docker-compose exec jobmanager ./bin/sql-client.sh
Flink SQL> DROP TABLE IF EXISTS book;
CREATE TABLE book (
  id INT,
  title STRING,
  author STRING,
  price INT,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
   'connector' = 'jdbc',
   'url' = 'jdbc:mysql://mysql:3306/sql-demo',
   'table-name' = 'book',
   'driver' = 'com.mysql.cj.jdbc.Driver',
   'username' = 'sql-demo',
   'password' = 'demo-sql'
);
# Query the data from Flink SQL
Flink SQL> select *  from book;
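
Since the table declares a primary key, it also works as an upsert sink. As a quick, hedged example (the row values are made up), you can write to MySQL directly from the SQL client and then re-run the query:

Flink SQL> INSERT INTO book VALUES (5, 'flink-sql', 'someone', 20);
Flink SQL> SELECT * FROM book;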


7. Java streaming code

package quick.table;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;



public class SqlExample {

    public static void main(String[] args) throws Exception {

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        tableEnv.executeSql("CREATE TABLE book (" +
                "  id INT," +
                "  title STRING," +
                "  author STRING," +
                "  price INT," +
                "  PRIMARY KEY (id) NOT ENFORCED" +
                ") WITH (" +
                "   'connector' = 'jdbc'," +
                "   'url' = 'jdbc:mysql://mysql:3306/sql-demo'," +
                "   'table-name' = 'book'," +
                "   'driver'='com.mysql.jdbc.Driver'," +
                "   'username' = 'sql-demo'," +
                "   'password' = 'demo-sql'" +
                ")");

        // execute the SELECT statement
        Table resultTable = tableEnv.sqlQuery("SELECT * FROM book");

        // convert the result Table to a DataStream
        DataStream<Row> resultStream = tableEnv.toDataStream(resultTable);

        // print each row to stdout (visible in the taskmanager logs)
        resultStream.print();
        env.execute();
    }
}
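
If you only want to inspect the result, converting to a DataStream is optional; the Table API can print directly. A minimal alternative sketch for the query part of the program above:

// Alternative: print the query result directly via the Table API,
// replacing the toDataStream / print / env.execute calls above.
tableEnv.executeSql("SELECT * FROM book").print();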

8. Submit the job

Then package the application and submit it, for example through Flink's Web UI (exposed on port 8081 in the docker-compose file above), which also lets you monitor the cluster status and the running jobs. The output of print() shows up in the taskmanager logs:

$ docker-compose logs -f taskmanager
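
Alternatively, the job can be submitted with the flink run CLI. A hedged sketch, assuming the application was packaged as sql-example.jar (the jar name and path are hypothetical):

$ docker cp target/sql-example.jar mysql_jobmanager_1:/opt/flink/
$ docker-compose exec jobmanager ./bin/flink run -c quick.table.SqlExample /opt/flink/sql-example.jar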

