Flink consumes debezium-json data from Kafka (including insert/update/delete messages) and syncs it to StarRocks

This post describes how I worked around a problem Flink CDC ran into with Oracle: switch to a newer Debezium release together with Kafka Connect to ship the Oracle changes into Kafka, then consume the Debezium-JSON messages with Flink SQL to sync the data into StarRocks in real time. The walkthrough uses a local MySQL source: start ZooKeeper and Kafka, stream the binlog into Kafka, and note that the Flink SQL source table must define a primary key so that delete messages are handled.

The business requirement was to sync Oracle data into StarRocks. We first evaluated Flink CDC, but after running for a while the Oracle instance ran short of memory. After going through the related issues and documentation, we confirmed the root cause was the outdated Debezium version bundled in Flink CDC 2.3; see https://github.com/ververica/flink-cdc-connectors/issues/815 for details.

So we switched to a different approach: use a newer Debezium release together with Kafka Connect to publish the change data into Kafka, and then have Flink SQL consume those Kafka messages to achieve real-time synchronization.

For the local proof of concept I use a MySQL source as the test case. Note that instead of a full Kafka Connect deployment, the test below runs the Debezium embedded engine and forwards the change events to Kafka with a plain KafkaProducer.

Start a local ZooKeeper and Kafka broker; the versions used here are ZooKeeper 3.4.6 and Kafka 2.1.
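Both programs below are built as a regular Scala project. A build.sbt sketch of the dependencies they assume (artifact names follow Flink 1.13 and Debezium 1.x; the exact versions are illustrative, match them to your own environment):

// build.sbt -- dependency sketch; the versions here are assumptions
scalaVersion := "2.12.15"

val flinkVersion = "1.13.6"
val debeziumVersion = "1.9.7.Final"

libraryDependencies ++= Seq(
  // Debezium embedded engine + MySQL connector
  "io.debezium" % "debezium-embedded" % debeziumVersion,
  "io.debezium" % "debezium-connector-mysql" % debeziumVersion,
  // Kafka producer used to forward change events
  "org.apache.kafka" % "kafka-clients" % "2.1.1",
  // Flink streaming + Table/SQL API with the blink planner
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion,
  "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
  "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion,
  // Kafka source, debezium-json format, JDBC sink
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
  "org.apache.flink" % "flink-json" % flinkVersion,
  "org.apache.flink" %% "flink-connector-jdbc" % flinkVersion,
  // JDBC driver for the MySQL sink used in the local test
  "mysql" % "mysql-connector-java" % "8.0.28"
)

flink-json provides the debezium-json format used later by the Flink SQL source table.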
Capture the MySQL binlog and send it to Kafka:

package debezium_cdc

import io.debezium.engine.DebeziumEngine.{ChangeConsumer, CompletionCallback}
import io.debezium.engine.format.Json
import io.debezium.engine.{ChangeEvent, DebeziumEngine}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

import java.util
import java.util.Properties
import java.util.concurrent.{ExecutorService, Executors, TimeUnit}
import scala.collection.JavaConverters.asScalaBufferConverter

/**
 * Captures MySQL binlog changes and publishes them to Kafka.
 *
 * @author zhangyunhao
 */
object MysqlDebeziumEngine {


  def main(args: Array[String]): Unit = {

    val bootstrapList = "localhost:9092"
    val topicName = "test_zyh_kafka_cdc"

    // Kafka producer configuration
    val kafkaProps = new Properties()
    kafkaProps.put("bootstrap.servers", bootstrapList)
    kafkaProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    kafkaProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String,String](kafkaProps)

    // Debezium configuration

    val props: Properties = new Properties()
    // Engine-level settings
    props.setProperty("name", "engine")
    props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
    props.setProperty("offset.storage.file.filename", "/Users/zhangyunhao/IdeaProjects/flink_explore/offsets.log")
    props.setProperty("offset.flush.interval.ms", "6000")
    props.setProperty("converter.schemas.enable", "true")
    // MySQL connector settings
    props.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector")
    props.setProperty("database.hostname", "127.0.0.1")
    props.setProperty("database.port", "3306")
    props.setProperty("database.user", "root")
    props.setProperty("database.password", "123456")
    props.setProperty("database.server.id", "85744")   // 随便设置
    props.setProperty("database.server.name", "my-app-connector")   // 随便设置
    props.setProperty("database.include.list", "db_test")         // 同步库
    props.setProperty("table.include.list", "db_test.stu_test")   // 同步表
    props.setProperty("snapshot.mode", "schema_only")
    props.setProperty("decimal.handling.mode", "double")
    props.setProperty("database.history",
      "io.debezium.relational.history.FileDatabaseHistory")
    props.setProperty("database.history.file.filename",
      "/Users/zhangyunhao/IdeaProjects/flink_explore/dbhistory.log")

    try {
      // Create the engine. DebeziumEngine implements Closeable, so it can be closed automatically.
      val engine: DebeziumEngine[ChangeEvent[String, String]] =
        DebeziumEngine.create(classOf[Json])
          .using(props)
          // Only one consumer needs to be registered on the builder; use the
          // batch-oriented ChangeConsumer so records can be committed explicitly.
          .notifying(
            new ChangeConsumer[ChangeEvent[String, String]] {
              override def handleBatch(list: util.List[ChangeEvent[String, String]], recordCommitter: DebeziumEngine.RecordCommitter[ChangeEvent[String, String]]): Unit = {
                for (changeEvent <- list.asScala) {
                  println("日志key" + changeEvent.key())
                  println("日志value" + changeEvent.value())

                  // Forward the change event to Kafka
                  producer.send(new ProducerRecord[String,String](topicName, changeEvent.key(), changeEvent.value()))
                  producer.flush()

                  recordCommitter.markProcessed(changeEvent)
                }

                recordCommitter.markBatchFinished()
              }
            }
          )
          // Completion callback to surface error details when the engine stops
          .using(new CompletionCallback {
            override def handle(success: Boolean, message: String, error: Throwable): Unit = {
              if (!success && error != null) {
                System.out.println("----------error------")
                System.out.println(message)
                error.printStackTrace()
              }

            }
          })
          .build()

      // Run the engine asynchronously on a single-threaded executor
      val executor: ExecutorService = Executors.newSingleThreadExecutor()
      executor.execute(engine)

      // Shut down the application gracefully
      executor.shutdown() // stop accepting new tasks and wait for the submitted engine task to finish
      // Keep checking whether the task has terminated; if not, keep waiting
      while (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
        println("Waiting another 10 seconds for the embedded engine to shut down")
      }


    } catch {
      case e: InterruptedException => {
        Thread.currentThread().interrupt()
      }
    }


  }

}
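One caveat about the shutdown code above: executor.shutdown() only stops the executor from accepting new tasks, while the engine task keeps running until the engine is closed, so the awaitTermination loop will keep printing. Since DebeziumEngine implements Closeable, one option is to register a JVM shutdown hook right after executor.execute(engine) that closes everything. A rough sketch (to be placed inside main, where engine, producer and executor are in scope):

      // Sketch: close the engine, the producer and the executor when the JVM exits
      Runtime.getRuntime.addShutdownHook(new Thread(() => {
        engine.close()    // stops the embedded engine's polling loop
        producer.close()  // flushes and releases the Kafka producer
        executor.shutdown()
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
          executor.shutdownNow()
        }
      }))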

Consume the corresponding messages with Flink SQL:

package debezium_cdc

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

object BinlogParseApp {

  def main(args: Array[String]): Unit = {

    val fsSettings: EnvironmentSettings = EnvironmentSettings.newInstance()
      .useBlinkPlanner()
      .inStreamingMode()
      .build()

    val fsEnv: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    // fsEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    fsEnv.setParallelism(1)

    val tEnv: StreamTableEnvironment = StreamTableEnvironment.create(fsEnv, fsSettings)

    // The source table can handle delete messages; note that a primary key must be defined here.
    // 'debezium-json.schema-include' = 'true' matches converter.schemas.enable = true on the
    // engine side, because each Kafka message carries the schema alongside the payload.
    // For the connectors that currently support the debezium-json format, see
    // https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/connectors/table/formats/overview/
    val sourceSql =
      """
        |CREATE TABLE topic_products (
        |  id BIGINT,
        |  name STRING,
        |  PRIMARY KEY (id) NOT ENFORCED
        |) WITH (
        | 'connector' = 'kafka',
        | 'topic' = 'test_zyh_kafka_cdc',
        | 'properties.bootstrap.servers' = 'localhost:9092',
        | 'properties.group.id' = 'testGroup_zyh',
        | 'format' = 'debezium-json',
        | 'debezium-json.schema-include' = 'true'
        |)
        |""".stripMargin

    tEnv.executeSql(sourceSql)

    // tEnv.executeSql("select * from topic_products").print()

    val sinkSql =
      """
        |CREATE TABLE user_sink_table (
        |  id BIGINT,
        |  name STRING,
        |  PRIMARY KEY (id) NOT ENFORCED
        |) WITH (
        |   'connector' = 'jdbc',
        |   'url' = 'jdbc:mysql://localhost:3306/db_test',
        |   'table-name' = 'stu_test2',
        |   'username' = 'root',
        |   'password' = '123456',
        |   'sink.buffer-flush.interval' = '5s',
        |   'sink.buffer-flush.max-rows' = '100'
        |)
        |""".stripMargin

    tEnv.executeSql(sinkSql)

    val insertSql =
      """
        |insert into user_sink_table
        |select
        | id,
        | name
        |from
        |topic_products
        |""".stripMargin

    tEnv.executeSql(insertSql)

  }

}
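Before wiring up the sink, the commented-out query in the program can be used to eyeball the changelog; in streaming mode print() shows an op column with +I for inserts, -U/+U for updates and -D for deletes:

    tEnv.executeSql("select * from topic_products").print()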

Run insert/update/delete statements against db_test.stu_test, and the corresponding changes show up in the sink table db_test.stu_test2.
⚠️: The source table in the Flink SQL program must define a primary key; otherwise delete messages cannot be handled.
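For the original goal of syncing into StarRocks instead of a local MySQL table, the JDBC sink in BinlogParseApp can be swapped for the StarRocks Flink connector. A minimal sketch, assuming flink-connector-starrocks is on the classpath; the FE addresses, ports and credentials are placeholders, and the option names should be checked against the connector version you use:

    // Hypothetical StarRocks sink; replace hosts, ports and credentials with your own
    val starrocksSinkSql =
      """
        |CREATE TABLE user_sink_starrocks (
        |  id BIGINT,
        |  name STRING,
        |  PRIMARY KEY (id) NOT ENFORCED
        |) WITH (
        |  'connector' = 'starrocks',
        |  'jdbc-url' = 'jdbc:mysql://starrocks-fe:9030',
        |  'load-url' = 'starrocks-fe:8030',
        |  'database-name' = 'db_test',
        |  'table-name' = 'stu_test2',
        |  'username' = 'root',
        |  'password' = '123456'
        |)
        |""".stripMargin

    tEnv.executeSql(starrocksSinkSql)
    tEnv.executeSql("insert into user_sink_starrocks select id, name from topic_products")

Note that for the delete messages to take effect on the StarRocks side, the target table generally needs to use the Primary Key model.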
