Flink CDC 2.0.0 SQL development template, with pitfall notes

This article presents a template for SQL development with Flink CDC 2.0.0, covering real-time synchronization by reading the MySQL binlog. It walks through the problems hit while using Flink CDC and how to resolve them: data type and time zone issues, checkpoint timeouts, lock privileges, reading incremental data, and more. It also weighs the DataStream API against the SQL API, and covers logging jar conflicts and failed savepoint recovery.

Flink CDC SQL development template

A development template that uses Flink CDC SQL to read the MySQL binlog and sync it to MySQL in real time.

Prerequisite for using Flink CDC: the MySQL user that reads the source database must have binlog privileges enabled.
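Before going further, it is worth verifying on the source MySQL instance that binlog is actually enabled and in row format (a quick check using standard MySQL statements):

-- Binlog must be on and in ROW format for CDC to work
SHOW VARIABLES LIKE 'log_bin';        -- expect: ON
SHOW VARIABLES LIKE 'binlog_format';  -- expect: ROW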

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>ysservice-flink</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>ysservice-flink-batch</module>
        <module>ysservice-flink-streaming</module>
        <module>ysservice-flink-warehouse</module>
        <module>ysservice-flink-datapush</module>
    </modules>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>

        <encoding>UTF-8</encoding>
        <flink.version>1.13.2</flink.version>
        <scala.tools.version>2.11</scala.tools.version>
        <scala.binary.version>2.11</scala.binary.version>
        <spark.version>2.4.0-cdh6.3.1</spark.version>
        <hadoop.version>3.0.0-cdh6.3.1</hadoop.version>
        <mysql.version>5.1.47</mysql.version>
        <druid.version>1.2.3</druid.version>
        <!--<redis.version>2.9.0</redis.version>-->
        <!--<ipaddress.version>5.3.3</ipaddress.version>-->
        <junit.version>4.12</junit.version>
        <fastjson.version>1.2.73</fastjson.version>
        <httpclient.version>4.5.13</httpclient.version>
        <logback.version>1.2.3</logback.version>
        <log4j-over-slf4j.version>1.7.30</log4j-over-slf4j.version>
    </properties>

    <repositories>
        <!-- Aliyun repository -->
        <repository>
            <id>aliyun</id>
            <url>http://maven.aliyun.com/nexus/content/groups/public</url>
        </repository>

        <!-- CDH repository -->
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <dependency>
            <groupId>com.ververica</groupId>
            <artifactId>flink-connector-mysql-cdc</artifactId>
            <version>2.0.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-jdbc_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- Dependency for the Flink web UI -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-runtime-web_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <!--<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-queryable-state-client-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-state-processor-api_2.11</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-parquet_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>${scope.level}</scope><!-- resolved from a scope.level property, e.g. via Maven profiles; see the sketch under pitfall 7 below -->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.1.5</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-csv</artifactId>
            <version>${flink.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-shaded-hadoop-3-uber</artifactId>
            <version>3.1.1.7.2.9.0-173-9.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>42.2.5</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.8.6</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <!-- Shade plugin for building the fat jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>

                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <artifactSet>
                                <excludes>
                                    <exclude>org.apache.flink:force-shading</exclude>
                                    <exclude>com.google.code.findbugs:jsr305</exclude>
                                    <exclude>org.slf4j:*</exclude>
                                    <exclude>log4j:*</exclude>
                                    <exclude>org.apache.logging.log4j:*</exclude>
                                    <exclude>ch.qos.logback:*</exclude>
                                </excludes>
                            </artifactSet>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

        </plugins>
    </build>

</project>

log4j.properties

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# This affects logging for both user code and Flink
log4j.rootLogger=INFO, console

# Uncomment this if you want to _only_ change Flink's logging
log4j.logger.org.apache.flink=WARN

# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
log4j.logger.akka=WARN
log4j.logger.org.apache.kafka=INFO
log4j.logger.org.apache.hadoop=WARN
log4j.logger.org.apache.zookeeper=WARN

# Log all infos to the console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n

# Suppress the irrelevant (wrong) warnings from the Netty channel handler
log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, console
TestDemo.java

package com.ysservice;
 
import com.ysservice.utils.SystemConstants;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
 
import static org.apache.flink.api.common.time.Time.seconds;
 
/**
 * @Description: synchronize MySQL data with Flink CDC
 * @author: WuBo
 * @date:2021/10/19 15:21
 */
public class TestDemo {
 
    public static void main(String[] args) throws Exception {
 
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Create the table environment
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
        // Enable checkpointing: record state every 60 seconds
        env.enableCheckpointing(60 * 1000);
        env.getCheckpointConfig().setCheckpointTimeout(60 * 1000); // checkpoint timeout: 60 seconds
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(10); // tolerated checkpoint failures; needed because the CDC connector blocks checkpoint execution during the initial full-snapshot phase
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE); // exactly-once state semantics
        env.setRestartStrategy(RestartStrategies.failureRateRestart(5, seconds(60), seconds(2))); // give up after 5 failures within 60 seconds, with 2 seconds between restarts
        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION); // retain the checkpoint when the job is cancelled
 
        // Create the Flink CDC source table. Declare MySQL DATETIME columns as
        // TIMESTAMP here, otherwise the values come out with a time zone offset.
        tableEnv.executeSql("CREATE TABLE Data_Input (" +
                " ID bigint," +
                " PROJECT_ID bigint," +
                " PROJECT_CODE STRING," +
                " PROJECT_NAME STRING," +
                " AMOUNT decimal(20,2)," +
                " ACTUAL_TYPE STRING," +
                " TYPE_NAME STRING," +
                " CREATED_AT timestamp," +
                " CREATED_MAN STRING," +
                " UPDATED_AT timestamp," +
                " UPDATED_MAN STRING," +
                " PRIMARY KEY (`ID`) NOT ENFORCED " +                               //primary key of the MySQL table; required, otherwise lock-free parallel reading and chunk splitting are not possible
                ") WITH (" +
                " 'connector' = 'mysql-cdc'," +                                     //connector type: mysql-cdc
                " 'hostname' = '"+ SystemConstants.dataInput_hostname_test +"'," +  //MySQL hostname, taken from the config file here
                " 'port' = '3306'," +
                " 'username' = '"+ SystemConstants.dataInput_username_test +"'," +  //MySQL username, taken from the config file here
                " 'password' = '"+ SystemConstants.dataInput_password_test +"'," +  //MySQL password, taken from the config file here
                " 'database-name' = 'test'," +                                      //database to read
                " 'table-name' = 'OUT_NORM_RULE_LIBRARY'," +                        //table to read
                //" 'scan.startup.mode' = 'latest-offset'," +
                " 'scan.incremental.snapshot.enabled' = 'true'," +                  //incremental snapshot: enables lock-free parallel reading; on by default
                " 'server-id' = '8000-8000'" +                                      //every job needs its own server-id range, otherwise it fails; size the range to match the parallelism
                ")");
 
        // Create the sink table
        tableEnv.executeSql("CREATE TABLE Data_Output (" +
                " ID bigint," +
                " PROJECT_ID bigint," +
                " PROJECT_CODE STRING," +
                " PROJECT_NAME STRING," +
                " AMOUNT decimal(20,2)," +
                " ACTUAL_TYPE STRING," +
                " TYPE_NAME STRING," +
                " CREATED_AT timestamp," +
                " CREATED_MAN STRING," +
                " UPDATED_AT timestamp," +
                " UPDATED_MAN STRING," +
                " PRIMARY KEY (`ID`) NOT ENFORCED " +
                ") WITH (" +
                " 'connector' = 'jdbc'," +                                                   //输出表使用jdbc connector输出到mysql
                " 'url' = '"+ SystemConstants.dataOutput_url_datapush_out +"'," +
                " 'username' = '"+ SystemConstants.dataOutput_username_datapush_out +"'," +
                " 'password' = '"+ SystemConstants.dataOutput_password_datapush_out +"'," +
                " 'table-name' = 'OUT_NORM_RULE_LIBRARY2'" +
                ")");
        // Run the SQL. Flink works out per record whether it is an insert or a delete
        // (an update arrives as two records: a delete followed by an insert), and upserts
        // when the primary key already exists. executeSql submits the INSERT job directly,
        // so no env.execute() call is needed.
        tableEnv.executeSql("INSERT INTO Data_Output (SELECT * FROM Data_Input)");
 
    }
}

Flink CDC pitfall notes:

All of the notes below are based on Flink 1.13.2 and the matching Flink CDC 2.0.

1. Flink CDC comes in two flavors of API, the DataStream API and the SQL API, and they differ considerably. A summary of their trade-offs:

DataStream API strengths: can read multiple databases and multiple tables; flexible code.
Weaknesses: reads tables with a parallelism of 1 only; MySQL DATETIME and TIMESTAMP values come out with time zone problems; at startup it needs the RELOAD (table-lock) privilege to take the full snapshot, which briefly locks the tables; and it cannot take checkpoints.

SQL API strengths: reads tables in parallel without locking; declaring DATETIME columns as TIMESTAMP avoids the time zone problem; and checkpoints work.
Weaknesses: can only read a single table.

2. With the DataStream API, checkpoints time out while the job scans the full MySQL table, and the job fails over

Cause: Flink CDC scans the full table, and during that scan there is no offset to record (meaning no checkpoint can be taken), yet the Flink framework triggers checkpoints at a fixed interval regardless. The mysql-cdc source therefore takes a pragmatic approach: during the full-table scan it makes in-flight checkpoints wait, possibly until they time out. A timed-out checkpoint is still counted as a failed checkpoint, and under the default configuration this triggers Flink's failover mechanism, whose default is to not restart. Hence the behavior above.

Solution: configure a tolerable number of failed checkpoints, plus a failure restart strategy.
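A minimal sketch of that configuration, reusing the env from the template above (Time is org.apache.flink.api.common.time.Time):

// Tolerate checkpoints that fail (time out) while the full table is being scanned
env.getCheckpointConfig().setTolerableCheckpointFailureNumber(10);
// Restart strategy: give up after 5 failures within 60 seconds, 2 seconds between restarts
env.setRestartStrategy(RestartStrategies.failureRateRestart(5, Time.seconds(60), Time.seconds(2)));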

3. The DataStream API fails with a lock-privilege error

Cause: the MySQL user in use has not been granted the RELOAD privilege, so the global read lock (FLUSH TABLES WITH READ LOCK) cannot be acquired and the CDC source degrades to table-level read locks. A table-level read lock is only released once the full-table scan finishes, so the lock is held for a long time and blocks other writes to the table.

Solution: grant the RELOAD privilege to the MySQL user in use.
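For example (a sketch; 'flink_user'@'%' is a placeholder, and the privilege list is the set the Debezium MySQL connector, which mysql-cdc builds on, documents):

-- RELOAD allows the global read lock; the replication privileges allow binlog reading
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'flink_user'@'%';
FLUSH PRIVILEGES;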

4. After the SQL API job is submitted successfully, it reads only the full snapshot, never the incremental data

Cause: after the parallel full-table read finishes, the SQL API needs one full checkpoint to hand over to the incremental phase; with checkpointing disabled, it cannot move on to reading the incremental data.

Solution: enable checkpointing, and make sure the user has binlog privileges on both the source and sink databases.
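Enabling checkpointing is a single call on the environment, exactly as in the template above (the 60-second interval is illustrative):

// The SQL source needs a completed checkpoint after the snapshot phase
// before it can switch over to reading the binlog
env.enableCheckpointing(60 * 1000);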

5. Time zone problems with the MySQL DATETIME and TIMESTAMP data types

DATETIME values read with the DataStream API come out as epoch timestamps, because the binlog stores DATETIME values in timestamp form, and that timestamp has a time zone problem: it is 8 hours off from wall-clock time. TIMESTAMP values do not come out as epoch numbers, but still carry the same 8-hour offset. So with the DataStream API you have to convert the time zone manually (no other solution has been found for the DataStream API so far); a conversion sketch follows below.
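A minimal conversion sketch, under the assumption that the deserializer hands you the DATETIME value as the zoneless epoch milliseconds Debezium encodes; interpreting those millis as UTC recovers the original wall-clock value:

import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class DatetimeFix {
    // Debezium encodes MySQL DATETIME as epoch millis without a zone; reading them
    // in the local (+8) zone shifts the value by 8 hours, reading them as UTC does not.
    public static LocalDateTime fromBinlogMillis(long epochMillis) {
        return LocalDateTime.ofInstant(Instant.ofEpochMilli(epochMillis), ZoneOffset.UTC);
    }
}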

With the SQL API, however, reading a DATETIME column declared as TIMESTAMP solves both the epoch and time zone problems, and TIMESTAMP columns can be read as-is. But when writing to MySQL with the SQL API, configure the sink database's time zone to +8:00 to avoid time zone problems on write; otherwise the written times end up 12-13 hours off.
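One way to set that on the sink instance (a sketch; requires admin rights, and the same thing can be set permanently via default-time-zone in my.cnf):

-- Inspect the current setting
SELECT @@global.time_zone, @@session.time_zone;
-- Set the server time zone to +8:00
SET GLOBAL time_zone = '+8:00';

Alternatively, the serverTimezone parameter on the sink's JDBC URL (e.g. serverTimezone=Asia/Shanghai) tells the MySQL driver which zone the server uses.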

6. Flink's log output is empty when the job runs

Cause: log4j jar conflicts.

Solution: exclude all log4j dependencies from the project. Flink ships its own log4j jars, and uploading another copy easily causes a jar conflict (this is what the slf4j/log4j/logback excludes in the shade plugin above are for).

7. The local flink-table-planner-blink dependency in IDEA conflicts with the Table API jars on the Flink cluster

When running locally in IDEA the dependency must be on the compile classpath; when packaging for the cluster it must be provided:

<dependency>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-table-planner-blink_${scala.binary.version}</artifactId>
 <version>${flink.version}</version>
 <scope>provided</scope>
</dependency>
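One way to avoid flipping the scope by hand is to switch it through Maven profiles. The sketch below is an assumption about how the ${scope.level} placeholder used in the pom above could be defined (the profile ids are illustrative):

<profiles>
    <!-- Local runs in IDEA: keep the dependency on the compile classpath -->
    <profile>
        <id>local</id>
        <activation><activeByDefault>true</activeByDefault></activation>
        <properties><scope.level>compile</scope.level></properties>
    </profile>
    <!-- Cluster builds: mark it provided so it is not shaded into the fat jar -->
    <profile>
        <id>cluster</id>
        <properties><scope.level>provided</scope.level></properties>
    </profile>
</profiles>

Packaging for the cluster then becomes mvn clean package -Pcluster.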

8. Two jobs fail because their server-ids collide

Cause: each CDC job picks a random server-id in the 5400-6400 range; if you do not set one manually, two CDC jobs can end up with the same server-id.

Solution: set server-id in the SQL, for example:

" 'server-id' = '8000-8000'" + //size the range to match the parallelism; the parallelism here is 1, so the range holds a single id; with a parallelism of 2 it could be '8000-8001'

9. The job cannot be restored from a savepoint after it dies:

Cause: new rows arrived in the source table while the job was down; on recovery, the source started reading before the savepoint state had been restored, which made the savepoint restore fail.

Solution: upgrade flink-connector-mysql-cdc-2.0.0 to flink-connector-mysql-cdc-2.0.2, and set server-id.

Flink CDC 2.2.0+ development template: to be continued…
