Implementing Dynamic Table Schema Synchronization with Flink MySQL CDC

Author: Chen Shaolong, Senior Engineer at Tencent CSIG

Using Flink CDC (Change Data Capture) for data synchronization is becoming increasingly popular. This article describes how to propagate schema changes from MySQL to a Flink program in real time during synchronization.

Background

Once the volume of data stored in MySQL grows large, query performance often degrades. A common remedy is to use the MySQL CDC Connector in Flink SQL to synchronize the data to another data store.

For example, CDC to Elasticsearch (ES) for search, CDC to ClickHouse for OLAP analysis, or CDC to Kafka for data distribution. However, the official MySQL CDC Connector currently cannot synchronize table schema changes dynamically: if a column is added, the downstream never receives data for that column; if a column is dropped, the Flink job fails and exits, and the SQL has to be edited before the job can be restarted.
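
For reference, the snippet below is a minimal sketch (not taken from the article) of how such a pipeline is usually declared with the Flink SQL mysql-cdc connector; the column list is deliberately shortened, and the table name matches the test table used later in this article. The point to notice is that the schema is frozen in the DDL string at submission time.

public class MySqlCdcSqlExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // The column list below is fixed when the job is submitted; adding or dropping
        // columns in MySQL later is not reflected without editing this DDL and restarting.
        tEnv.executeSql(
            "CREATE TABLE my_test (" +
            "  f_random_str STRING," +
            "  f_sequence INT," +
            "  PRIMARY KEY (f_random_str) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'mysql-cdc'," +
            "  'hostname' = '127.0.0.1'," +
            "  'port' = '3306'," +
            "  'username' = 'root'," +
            "  'password' = ''," +
            "  'database-name' = 'test'," +
            "  'table-name' = 'my_test'" +
            ")");

        // print the changelog to the console
        tEnv.executeSql("SELECT * FROM my_test").print();
    }
}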

For some businesses, database schema changes are very frequent. If every change requires editing the SQL and restarting the Flink job, the maintenance cost adds up quickly.

Applicable Versions

Flink 1.11

flink-cdc-connector 1.x

Why the Table Schema Cannot Be Synchronized

So why can't Flink SQL pick up schema changes from the binlog? Looking at the source code, Flink converts binlog data based on the schema that was declared up front in the Flink SQL CREATE TABLE statement. The relevant code is as follows:

public ScanRuntimeProvider getScanRuntimeProvider(ScanContext scanContext) {
    // the physical schema comes from the column list declared in the Flink SQL DDL
    RowType rowType = (RowType) physicalSchema.toRowDataType().getLogicalType();
    TypeInformation<RowData> typeInfo =
        (TypeInformation<RowData>) scanContext.createTypeInformation(physicalSchema.toRowDataType());
    // the deserializer that turns binlog records into RowData is built from that fixed schema
    DebeziumDeserializationSchema<RowData> deserializer = new RowDataDebeziumDeserializeSchema(
        rowType,
        typeInfo,
        ((rowData, rowKind) -> {}),
        serverTimeZone);
    ...
}

DebeziumDeserializationSchema is the core interface for converting binlog data into RowData. When it is created, it receives the physical schema defined in Flink SQL (wrapped as a RowType).

public RowDataDebeziumDeserializeSchema(RowType rowType, TypeInformation<RowData> resultTypeInfo, ValueValidator validator, ZoneId serverTimeZone) {
    // the runtime converter is built once, from the SQL-declared RowType
    this.runtimeConverter = createConverter(rowType);
    this.resultTypeInfo = resultTypeInfo;
    this.validator = validator;
    this.serverTimeZone = serverTimeZone;
}

RowDataDebeziumDeserializeSchema is the core implementation of DebeziumDeserializationSchema. As we can see, the createConverter method builds the converters that transform binlog data; for the top-level row type this happens in createRowConverter:

private DeserializationRuntimeConverter createRowConverter(RowType rowType) {
    final DeserializationRuntimeConverter[] fieldConverters = rowType.getFields().stream()
        .map(RowType.RowField::getType)
        .map(this::createConverter)
        .toArray(DeserializationRuntimeConverter[]::new);
    // field names are taken from the SQL-declared schema, not from the binlog record
    final String[] fieldNames = rowType.getFieldNames().toArray(new String[0]);

    return (dbzObj, schema) -> {
        Struct struct = (Struct) dbzObj;
        int arity = fieldNames.length;
        GenericRowData row = new GenericRowData(arity);
        for (int i = 0; i < arity; i++) {
            String fieldName = fieldNames[i];
            // only the declared field names are ever looked up in the binlog Struct
            Object fieldValue = struct.get(fieldName);
            Schema fieldSchema = schema.field(fieldName).schema();
            ...
}

In this core conversion method, Flink obtains the fieldNames declared in SQL via rowType.getFieldNames, and the conversion function then reads the binlog schema and values by those field names. So when the table structure changes, even though the binlog records already carry the new schema and data, the fieldNames are still the old ones and the new columns are never picked up.
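
To make the distinction concrete, here is a minimal hypothetical sketch (the class and method names are made up; Struct and Field are the Kafka Connect classes that Debezium records are built on) contrasting reading a record by the field names frozen in the SQL schema with reading it by the schema carried in the record itself. The latter is what the solution below relies on.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;

public class FieldExtraction {

    // Flink SQL style: only the columns declared in the DDL are ever read.
    static Map<String, Object> byDeclaredNames(Struct struct, String[] fieldNames) {
        Map<String, Object> row = new HashMap<>();
        for (String name : fieldNames) {
            // a newly added column never appears in fieldNames, so it is silently ignored;
            // a dropped column makes this lookup fail, which is why the job errors out
            row.put(name, struct.get(name));
        }
        return row;
    }

    // Schema-driven style: read whatever columns the binlog record actually carries.
    static Map<String, Object> bySchema(Struct struct) {
        Map<String, Object> row = new HashMap<>();
        for (Field field : struct.schema().fields()) {
            row.put(field.name(), struct.get(field));
        }
        return row;
    }
}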

Solution

Since Flink SQL cannot meet this requirement, the natural next step is a JAR job with some custom extensions, which fits this scenario well.

  1. First, implement our own DebeziumDeserializationSchema. The simple example below, JsonStringDebeziumDeserializationSchema, converts binlog data into JSON. In a real business scenario you can do something more tailored, such as sending custom schema-change notifications downstream.

public class JsonStringDebeziumDeserializationSchema
        implements DebeziumDeserializationSchema<Tuple2<Boolean, String>> {

    @Override
    public void deserialize(SourceRecord record, Collector<Tuple2<Boolean, String>> out) throws Exception {
        Envelope.Operation op = Envelope.operationFor(record);
        Struct value = (Struct) record.value();
        Schema valueSchema = record.valueSchema();
        if (op == Envelope.Operation.CREATE || op == Envelope.Operation.READ) {
            // snapshot reads and inserts: emit the "after" image as an upsert
            String insert = extractAfterRow(value, valueSchema);
            out.collect(new Tuple2<>(true, insert));
        } else if (op == Envelope.Operation.DELETE) {
            // deletes: emit the "before" image with the delete flag
            String delete = extractBeforeRow(value, valueSchema);
            out.collect(new Tuple2<>(false, delete));
        } else {
            // updates: emit the "after" image as an upsert
            String after = extractAfterRow(value, valueSchema);
            out.collect(new Tuple2<>(true, after));
        }
    }

    private Map<String, Object> getRowMap(Struct after) {
        // iterate the fields carried by the binlog record itself,
        // so newly added columns show up automatically
        return after.schema().fields().stream()
            .collect(Collectors.toMap(Field::name, f -> after.get(f)));
    }

    private String extractAfterRow(Struct value, Schema valueSchema) throws Exception {
        Struct after = value.getStruct(Envelope.FieldName.AFTER);
        Map<String, Object> rowMap = getRowMap(after);
        ObjectMapper objectMapper = new ObjectMapper();
        return objectMapper.writeValueAsString(rowMap);
    }

    private String extractBeforeRow(Struct value, Schema valueSchema) throws Exception {
        Struct before = value.getStruct(Envelope.FieldName.BEFORE);
        Map<String, Object> rowMap = getRowMap(before);
        ObjectMapper objectMapper = new ObjectMapper();
        return objectMapper.writeValueAsString(rowMap);
    }

    @Override
    public TypeInformation<Tuple2<Boolean, String>> getProducedType() {
        return TypeInformation.of(new TypeHint<Tuple2<Boolean, String>>() {});
    }
}

Implementing DebeziumDeserializationSchema requires two methods: deserialize and getProducedType. deserialize contains the conversion logic, and getProducedType declares the produced type. Here we return a 2-tuple: the first, Boolean, field indicates whether the record is an upsert or a delete, and the second field is the converted JSON string, which contains the columns and values as they are after any schema change.
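
As a hypothetical illustration (not part of the original job; the sink name is made up) of how a downstream operator might consume these pairs, the Boolean flag can be used to route between upserts and deletes:

public class UpsertOrDeleteSink extends RichSinkFunction<Tuple2<Boolean, String>> {

    @Override
    public void invoke(Tuple2<Boolean, String> value, Context context) {
        if (value.f0) {
            // true: an insert or update; f1 is the full row as JSON,
            // including any columns added to the table after the job started
            System.out.println("UPSERT " + value.f1);
        } else {
            // false: a delete; f1 is the JSON of the row image before deletion
            System.out.println("DELETE " + value.f1);
        }
    }
}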

  2. Write the Main method that starts the job, and plug our custom DebeziumDeserializationSchema implementation into the SourceFunction.

public class MySQLCDC {
    public static void main(String[] args) throws Exception {
        ParameterTool params = ParameterTool.fromArgs(args);
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // disable operator chaining so the job graph is easier for beginners to read
        env.disableOperatorChaining();
        env.setParallelism(1);
        // checkpoint configuration
        env.enableCheckpointing(params.getInt("checkpointInterval", 60000));
        env.getCheckpointConfig().setCheckpointTimeout(5000);
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
        SourceFunction source = MySQLSource.builder()
                .hostname(params.get("hostname", "127.0.0.1"))
                .port(params.getInt("port", 3306))
                .username(params.get("username", "root"))
                .password(params.get("password", ""))
                .serverTimeZone("Asia/Shanghai")
                // plug in our own deserializer implementation
                .deserializer(new JsonStringDebeziumDeserializationSchema())
                .databaseList(params.get("databaseList", "test"))
                .tableList(params.get("tableList", "test.my_test"))
                .build();
        // define the data source
        DataStream<Tuple2<Boolean, String>> streamSource =
                env.addSource(source).name("MySQLSource");
        ...
        env.execute(MySQLCDC.class.getSimpleName());
    }
}
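
The elided part of the Main method is kept as in the original; for a quick local test, a simple print sink (an assumption on our part, not the author's omitted code) is enough to see the converted records on the console:

// hypothetical stand-in for the omitted part: print every (flag, json) pair to stdout
streamSource.print().name("PrintSink");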

Create the test table in the database and insert a few rows of data:

CREATE TABLE `my_test` (
 `f_sequence` int(11) DEFAULT NULL,
 `f_random` int(11) DEFAULT NULL,
 `f_random_str` varchar(255) NOT NULL DEFAULT '',
 `Name` varchar(255) DEFAULT '',
 `f_date` date DEFAULT NULL,
 `f_datetime` datetime DEFAULT NULL,
 `f_timestamp` bigint(20) DEFAULT NULL,
 PRIMARY KEY (`f_random_str`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Running the program at this point, we can already see some output.

Output before the schema change:

(true,{"f_date":18545,"f_random_str":"1","f_sequence":1,"f_timestamp":1630486762,"f_datetime":1602328271000,"f_random":1,"Name":"1"})
(true,{"f_date":18545,"f_random_str":"2","f_sequence":2,"f_timestamp":1630486762,"f_datetime":1602328271000,"f_random":2,"Name":"2"})
(true,{"f_date":18545,"f_random_str":"3","f_sequence":3,"f_timestamp":1630486762,"f_datetime":1602328271000,"f_random":3,"Name":"3"})
(true,{"f_date":18545,"f_random_str":"4","f_sequence":33333,"f_timestamp":1630486762,"f_datetime":1602328271000,"f_random":4,"Name":"3"})

Comparing with the database, however, the timestamps are exactly 8 hours off from the values stored in the database:

f_sequence|f_random|f_random_str|Name|f_date    |f_datetime         |f_timestamp|
----------+--------+------------+----+----------+-------------------+-----------+
        1|       1|1           |1   |2020-10-10|2020-10-10 11:11:11| 1630486762|
        2|       2|2           |2   |2020-10-10|2020-10-10 11:11:11| 1630486762|
        3|       3|3           |3   |2020-10-10|2020-10-10 11:11:11| 1630486762|
    33333|       4|4           |3   |2020-10-10|2020-10-10 11:11:11| 1630486762|

This means the .serverTimeZone("Asia/Shanghai") we set at startup did not take effect. Looking at the source code, the underlying Debezium does not implement the serverTimeZone option; the corresponding conversion is done inside RowDataDebeziumDeserializeSchema, as follows:

private TimestampData convertToTimestamp(Object dbzObj, Schema schema) {
  if (dbzObj instanceof Long) {
    switch (schema.name()) {
      case Timestamp.SCHEMA_NAME:
        return TimestampData.fromEpochMillis((Long) dbzObj);
      case MicroTimestamp.SCHEMA_NAME:
        long micro = (long) dbzObj;
        return TimestampData.fromEpochMillis(micro / 1000, (int) (micro % 1000 * 1000));
      case NanoTimestamp.SCHEMA_NAME:
        long nano = (long) dbzObj;
        return TimestampData.fromEpochMillis(nano / 1000_000, (int) (nano % 1000_000));
    }
  }
  // the serverTimeZone here comes from the option passed into the constructor
  LocalDateTime localDateTime = TemporalConversions.toLocalDateTime(dbzObj, serverTimeZone);
  return TimestampData.fromLocalDateTime(localDateTime);
}

So for a complete solution, our own JsonStringDebeziumDeserializationSchema also needs to include the corresponding converters. The final code is as follows:

public class JsonStringDebeziumDeserializationSchema
        implements DebeziumDeserializationSchema<Tuple2<Boolean, String>> {

    // converter interface used for per-field conversion
    private interface DeserializationRuntimeConverter extends Serializable {
        Object convert(Object dbzObj, Schema schema);
    }

    private final DeserializationRuntimeConverter runtimeConverter;

    public JsonStringDebeziumDeserializationSchema(int zoneOffset) {
        // build a converter that renders temporal types with the given zone offset
        this.runtimeConverter = (dbzObj, schema) -> {
            if (schema.name() != null) {
                switch (schema.name()) {
                    case Timestamp.SCHEMA_NAME:
                        return TimestampData.fromEpochMillis((Long) dbzObj)
                                .toLocalDateTime()
                                .atOffset(ZoneOffset.ofHours(zoneOffset))
                                .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
                    case MicroTimestamp.SCHEMA_NAME:
                        long micro = (long) dbzObj;
                        return TimestampData.fromEpochMillis(micro / 1000, (int) (micro % 1000 * 1000))
                                .toLocalDateTime()
                                .atOffset(ZoneOffset.ofHours(zoneOffset))
                                .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
                    case NanoTimestamp.SCHEMA_NAME:
                        long nano = (long) dbzObj;
                        return TimestampData.fromEpochMillis(nano / 1000_000, (int) (nano % 1000_000))
                                .toLocalDateTime()
                                .atOffset(ZoneOffset.ofHours(zoneOffset))
                                .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
                    case Date.SCHEMA_NAME:
                        return TemporalConversions.toLocalDate(dbzObj).format(DateTimeFormatter.ISO_LOCAL_DATE);
                }
            }
            return dbzObj;
        };
    }

    private Map<String, Object> getRowMap(Struct after) {
        // apply the converter to every field carried by the binlog record
        return after.schema().fields().stream()
            .collect(Collectors.toMap(Field::name, f -> runtimeConverter.convert(after.get(f), f.schema())));
    }

    ...
}

At the same time, modify the Main method to pass the corresponding time zone offset when constructing JsonStringDebeziumDeserializationSchema. Running the job again, the output now matches what we expect.
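
Concretely, the only change needed in the Main method is the deserializer line of the builder, shown here as a sketch, where 8 is the hour offset for Asia/Shanghai:

SourceFunction source = MySQLSource.builder()
        .hostname(params.get("hostname", "127.0.0.1"))
        .port(params.getInt("port", 3306))
        .username(params.get("username", "root"))
        .password(params.get("password", ""))
        // pass the UTC+8 offset to our own deserializer instead of relying on serverTimeZone
        .deserializer(new JsonStringDebeziumDeserializationSchema(8))
        .databaseList(params.get("databaseList", "test"))
        .tableList(params.get("tableList", "test.my_test"))
        .build();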

Output after fixing the time zone:

(true,{"f_date":"2020-10-10","f_random_str":"1","f_sequence":1,"f_timestamp":1630486762,"f_datetime":"2020-10-10T11:11:11+08:00","f_random":1,"Name":"1"})
(true,{"f_date":"2020-10-10","f_random_str":"2","f_sequence":2,"f_timestamp":1630486762,"f_datetime":"2020-10-10T11:11:11+08:00","f_random":2,"Name":"2"})
(true,{"f_date":"2020-10-10","f_random_str":"3","f_sequence":3,"f_timestamp":1630486762,"f_datetime":"2020-10-10T11:11:11+08:00","f_random":3,"Name":"3"})
(true,{"f_date":"2020-10-10","f_random_str":"4","f_sequence":33333,"f_timestamp":1630486762,"f_datetime":"2020-10-10T11:11:11+08:00","f_random":4,"Name":"3"})

Finally, let's verify that a schema change is promptly reflected in the output JSON. Add a new column to the table and insert a new row:

ALTER TABLE my_test ADD f_added_string varchar(255) NOT NULL DEFAULT '' COMMENT 'newly added column';
INSERT INTO my_test VALUES(1,1,'new','new','2020-10-10','2020-10-10 10:10:10',1630486762,'new');

The newly added column now appears in the output.

Output after the schema change:

[Screenshot: the emitted JSON now includes the new f_added_string column]

Summary

Oceanus is Tencent Cloud's stream computing platform built on Apache Flink, and the approach in this article was verified on it. Oceanus recently launched a promotion that lets new users use it for one month for 1 CNY, which is a convenient way to practice: https://cloud.tencent.com/product/oceanus.
