Introduction
- Download the official Flink distribution (this article uses version 1.13.6)
Download the dependency jars listed below and place them under flink-1.13.6/lib/:
- Download the Elasticsearch connector: flink-sql-connector-elasticsearch7_2.11-1.13.6.jar
- Download the MySQL CDC connector: flink-connector-mysql-cdc-2.0.1.jar
Click here to download this article's resource bundle (it contains everything above: the official Flink distribution and the two connector jars)
Steps
1: Deploy Flink
1.1 Add Flink to the system environment variables
# Edit /etc/profile
# flink
export FLINK_HOME=/usr/local/flink/flink-1.13.6/
export PATH=$FLINK_HOME/bin:$PATH
# Reload the environment configuration
source /etc/profile
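You can quickly verify that the variables took effect (the flink CLI's --version flag should print 1.13.6):
# Confirm the environment variables are set
echo $FLINK_HOME
which flink
flink --version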
1.2 Configure Flink
- Edit conf/flink-conf.yaml under the Flink home directory (options not listed below can be left at their defaults)
# RPC address of the JobManager; for a single-machine setup the local IP is fine
jobmanager.rpc.address: 127.0.0.1
jobmanager.rpc.port: 6123
# Number of task slots offered by each TaskManager. Each slot runs one parallel pipeline.
taskmanager.numberOfTaskSlots: 3
# Default parallelism for jobs that do not set one explicitly.
parallelism.default: 1
# Bind the REST endpoint to all interfaces so the Web UI is reachable from outside
rest.bind-address: 0.0.0.0
# Whether to preallocate TaskManager memory; the default is no preallocation, so an idle cluster does not hold on to resources
taskmanager.memory.preallocate: false
# Must be set for a single-machine deployment, otherwise startup reports an error
taskmanager.host: localhost
# Whether incremental checkpoints are enabled (true/false); enable them here
state.backend.incremental: true
# Failover strategy
# region: restart only the region containing the failed task; all of its consumer regions are restarted as well to guarantee data consistency, since nondeterministic processing or partitioning could otherwise produce divergent partitions
# full: restart all tasks in the job to recover from a task failure
jobmanager.execution.failover-strategy: full
- Start Flink
# Start the Flink cluster
./start-cluster.sh
# Stop the Flink cluster
#./stop-cluster.sh
Once startup succeeds, visit port 8081 on the server to see the Flink Web UI:
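You can also confirm from the shell that both daemons came up; in a standalone deployment jps lists the JobManager and TaskManager JVMs:
jps
# Expected output (PIDs will differ):
# 21342 StandaloneSessionClusterEntrypoint   <- the JobManager
# 21683 TaskManagerRunner                    <- a TaskManager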
2: Set Up CDC Data Synchronization
2.1 Configure the MySQL data source
# Check whether binlog is enabled on MySQL; OFF means it is disabled
show variables like 'log_bin';
If it is not enabled, add the following to the MySQL configuration file:
[mysqld]
# Enable binlog
log-bin = mysql-bin
# Use ROW format
binlog-format = ROW
# Required for MySQL replication; must not collide with the server_id used by other replication clients such as Canal
server_id = 1
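After restarting MySQL, confirm the settings took effect:
-- Run in MySQL after the restart
SHOW VARIABLES LIKE 'log_bin';        -- should now be ON
SHOW VARIABLES LIKE 'binlog_format';  -- should be ROW
SHOW VARIABLES LIKE 'server_id';      -- should match the value configured above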
2.1.1 Create the database and the products and orders tables in MySQL, and insert sample data
-- MySQL
CREATE DATABASE flinkcdc;
USE flinkcdc;
CREATE TABLE products (
id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description VARCHAR(512)
);
ALTER TABLE products AUTO_INCREMENT = 101;
INSERT INTO products
VALUES (default,"scooter","Small 2-wheel scooter"),
(default,"car battery","12V car battery"),
(default,"12-pack drill bits","12-pack of drill bits with sizes ranging from #40 to #3"),
(default,"hammer","12oz carpenter's hammer"),
(default,"hammer","14oz carpenter's hammer"),
(default,"hammer","16oz carpenter's hammer"),
(default,"rocks","box of assorted rocks"),
(default,"jacket","water resistent black wind breaker"),
(default,"spare tire","24 inch spare tire");
CREATE TABLE orders (
order_id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
order_date DATETIME NOT NULL,
customer_name VARCHAR(255) NOT NULL,
price DECIMAL(10, 5) NOT NULL,
product_id INTEGER NOT NULL,
order_status BOOLEAN NOT NULL -- Whether order has been placed
) AUTO_INCREMENT = 10001;
INSERT INTO orders
VALUES (default, '2020-07-30 10:08:22', 'Jark', 50.50, 102, false),
(default, '2020-07-30 10:11:09', 'Sally', 15.00, 105, false),
(default, '2020-07-30 12:00:30', 'Edward', 25.25, 106, false);
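As a quick sanity check, products should now contain 9 rows starting at id 101, and orders 3 rows starting at order_id 10001:
SELECT COUNT(*) FROM products;  -- 9
SELECT * FROM orders;           -- order_id 10001..10003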
2.2 Configure the CDC (ETL) middleware
Place the connector jars into the lib directory under the Flink home directory.
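A sketch of this step, assuming the two jars from the introduction were downloaded to the current directory (adjust the paths to your environment); restart the cluster afterwards so the new jars are loaded:
cp flink-sql-connector-elasticsearch7_2.11-1.13.6.jar $FLINK_HOME/lib/
cp flink-connector-mysql-cdc-2.0.1.jar $FLINK_HOME/lib/
# Restart so the classpath picks up the new connectors
$FLINK_HOME/bin/stop-cluster.sh && $FLINK_HOME/bin/start-cluster.sh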
3: Create Flink CDC Virtual Tables
3.1 Start the Flink SQL CLI
./bin/sql-client.sh
- Enable checkpointing, taking a checkpoint every 3 seconds
Flink SQL> SET execution.checkpointing.interval = 3s;
3.2 Create the CDC virtual tables in the console
- Use the Flink SQL CLI to create the corresponding tables, which will synchronize the data of the underlying database tables:
CREATE TABLE products (
id INT,
name STRING,
description STRING,
PRIMARY KEY (id) NOT ENFORCED
) WITH (
'connector' = 'mysql-cdc',
'hostname' = 'ip',
'port' = '3307',
'username' = 'xxx',
'password' = 'xxx',
'database-name' = 'flinkcdc',
'table-name' = 'products'
);
CREATE TABLE orders (
order_id INT,
order_date TIMESTAMP(0),
customer_name STRING,
price DECIMAL(10, 5),
product_id INT,
order_status BOOLEAN,
PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
'connector' = 'mysql-cdc',
'hostname' = 'ip',
'port' = 'xx',
'username' = 'xxx',
'password' = 'xxx',
'database-name' = 'flinkcdc',
'table-name' = 'orders'
);
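To check that a CDC source is wired up correctly, you can query it straight from the SQL CLI; this launches a streaming job that first snapshots the MySQL table and then follows the binlog:
-- In the Flink SQL CLI; press Q to leave the result view
SELECT * FROM products;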
-- Create the enriched_orders sink table in Elasticsearch
CREATE TABLE enriched_orders (
order_id INT,
order_date TIMESTAMP(0),
customer_name STRING,
price DECIMAL(10, 5),
product_id INT,
order_status BOOLEAN,
product_name STRING,
product_description STRING,
PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
'connector' = 'elasticsearch-7',
'hosts' = 'http://ip:9200',
'index' = 'enriched_orders'
);
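Before writing to the sink, it is worth confirming that Elasticsearch is reachable from this machine (replace ip with your Elasticsearch host):
curl http://ip:9200
# A JSON response with the cluster name and version means ES is up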
3.3 Write data to Elasticsearch
- Submit a continuous INSERT job that joins orders with products and writes the enriched rows into the enriched_orders index:
INSERT INTO enriched_orders
SELECT
 o.order_id AS order_id,
 o.order_date AS order_date,
 o.customer_name AS customer_name,
 o.price AS price,
 o.product_id AS product_id,
 o.order_status AS order_status,
 p.name AS product_name,
 p.description AS product_description
FROM orders AS o
LEFT JOIN products AS p ON o.product_id = p.id;
- No errors should be reported on the Flink console
3.4 Open Kibana to see the data of the enriched order (wide) table:
Next, modify the data in the MySQL tables; the order data shown in Kibana will update in real time:
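For example, run the following against MySQL; each change should appear in the enriched_orders index within a few seconds (order_id 10004 assumes the sample data above, where auto-increment starts at 10001):
INSERT INTO orders VALUES (default, '2020-07-30 15:22:00', 'Jark', 29.71, 104, false);
UPDATE orders SET order_status = true WHERE order_id = 10004;
DELETE FROM orders WHERE order_id = 10004;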
3.5 Create an index pattern in Kibana to view the documents of this index
- Related logs:
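Without Kibana, the index can also be inspected directly through the Elasticsearch REST API:
# Replace ip with your Elasticsearch host
curl 'http://ip:9200/enriched_orders/_search?pretty'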
4: Customize the Business Code
- The details are covered in the following blog post:
Flink / Flink-CDC: implementing business integration in code
References & Acknowledgements
[1] Streaming ETL for MySQL and Postgres based on Flink CDC
[2] MySQL CDC source tables