Apache Paimon in Practice: Pulsar CDC Explained

Pulsar CDC

a) Preparing Dependencies

flink-connector-pulsar-*.jar
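
A common way to make the connector available is to place the jar under Flink's lib directory and restart the cluster; the paimon-flink-action jar is passed to flink run directly in the commands below. A minimal sketch (the version number and paths are placeholders; adjust them to your Flink and Pulsar versions):

# assumed jar version and paths, for illustration only
cp /path/to/flink-connector-pulsar-4.1.0-1.18.jar $FLINK_HOME/lib/
# restart the cluster so the new jar is picked up
$FLINK_HOME/bin/stop-cluster.sh && $FLINK_HOME/bin/start-cluster.sh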

b) Supported Formats

Flink provides several Pulsar CDC formats: Canal, Debezium, Ogg, and Maxwell JSON.

If the messages in a Pulsar topic are change events captured from another database by a CDC tool, you can use Paimon Pulsar CDC to parse the INSERT, UPDATE, and DELETE messages and write them into Paimon tables.
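
For reference, a canal-json change event for a hypothetical orders table looks roughly like the following (the table, columns, and values are purely illustrative). Paimon reads the row data from "data"/"old", the row kind from "type", and, when present, column types from "mysqlType" and primary keys from "pkNames":

{
  "data": [{"id": "1", "product": "apple", "price": "3.50"}],
  "database": "test_db",
  "table": "orders",
  "type": "INSERT",
  "isDdl": false,
  "es": 1700000000000,
  "ts": 1700000000123,
  "pkNames": ["id"],
  "mysqlType": {"id": "bigint", "product": "varchar(32)", "price": "decimal(10,2)"},
  "sqlType": {"id": -5, "product": 12, "price": 3},
  "old": null
}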

Note

A JSON source may be missing some information. For example, the Ogg and Maxwell formats do not carry field types, and when a JSON source is written through a Flink Pulsar sink, only the data and the row kind are kept while other information is dropped. The synchronization job will do its best to handle such problems, as follows:

  • If a field type is missing, Paimon defaults to the "STRING" type.
  • If the database name or table name is missing, database synchronization is not possible, but table synchronization still is.
  • If primary keys are missing, the job may create a table without primary keys. You can set primary keys when submitting a table synchronization job.

c) Synchronizing Tables

Using PulsarSyncTableAction in a Flink DataStream job, or directly through flink run, you can synchronize one or more tables from a single Pulsar topic into one Paimon table.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_table \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    [--partition_keys <partition_keys>] \
    [--primary_keys <primary-keys>] \
    [--type_mapping to-string] \
    [--computed_column <'column-name=expr-name(args[, ...])'> [--computed_column ...]] \
    [--pulsar_conf <pulsar-source-conf> [--pulsar_conf <pulsar-source-conf> ...]] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]] \
    [--table_conf <paimon-table-sink-conf> [--table_conf <paimon-table-sink-conf> ...]]
Configuration options:

--warehouse: The path to Paimon warehouse.
--database: The database name in Paimon catalog.
--table: The Paimon table name.
--partition_keys: The partition keys for Paimon table. If there are multiple partition keys, connect them with comma, for example "dt,hh,mm".
--primary_keys: The primary keys for Paimon table. If there are multiple primary keys, connect them with comma, for example "buyer_id,seller_id".
--type_mapping: It is used to specify how to map MySQL data type to Paimon type. Supported options:
    "tinyint1-not-bool": maps MySQL TINYINT(1) to TINYINT instead of BOOLEAN.
    "to-nullable": ignores all NOT NULL constraints (except for primary keys). This is used to solve the problem that Flink cannot accept the MySQL 'ALTER TABLE ADD COLUMN column type NOT NULL DEFAULT x' operation.
    "to-string": maps all MySQL types to STRING.
    "char-to-string": maps MySQL CHAR(length)/VARCHAR(length) types to STRING.
    "longtext-to-bytes": maps MySQL LONGTEXT types to BYTES.
    "bigint-unsigned-to-bigint": maps MySQL BIGINT UNSIGNED, BIGINT UNSIGNED ZEROFILL, SERIAL to BIGINT. You should ensure overflow won't occur when using this option.
--computed_column: The definitions of computed columns. The argument field is from the Pulsar topic's table field name. See here for a complete list of configurations.
--pulsar_conf: The configuration for Flink Pulsar sources. Each configuration should be specified in the format "key=value". topic/topic-pattern, value.format, pulsar.client.serviceUrl, pulsar.admin.adminUrl, and pulsar.consumer.subscriptionName are required configurations, others are optional. See its document for a complete list of configurations.
--catalog_conf: The configuration for Paimon catalog. Each configuration should be specified in the format "key=value". See here for a complete list of catalog configurations.
--table_conf: The configuration for Paimon table sink. Each configuration should be specified in the format "key=value". See here for a complete list of table configurations.

If the specified Paimon table does not exist, it will be created automatically. Its schema is derived from the tables of all the specified Pulsar topics: the schema is parsed from the earliest non-DDL data in the topic.

If the Paimon table already exists, its schema will be compared against the schemas of all the specified Pulsar topic tables.
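
As an illustration (a sketch, not actual output), if the earliest message in the topic were the canal-json event shown earlier and the job were submitted with --database test_db --table orders --primary_keys id, the automatically created table would have a schema roughly equivalent to:

CREATE TABLE test_db.orders (
    id BIGINT NOT NULL,
    product VARCHAR(32),
    price DECIMAL(10, 2),
    PRIMARY KEY (id) NOT ENFORCED
);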

Example 1

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_table \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --partition_keys pt \
    --primary_keys pt,uid \
    --computed_column '_year=year(age)' \
    --pulsar_conf topic=order \
    --pulsar_conf value.format=canal-json \
    --pulsar_conf pulsar.client.serviceUrl=pulsar://127.0.0.1:6650 \
    --pulsar_conf pulsar.admin.adminUrl=http://127.0.0.1:8080 \
    --pulsar_conf pulsar.consumer.subscriptionName=paimon-tests \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4

If the Pulsar topic contains no messages when you start the synchronization job, you must create the table manually before submitting the job. Define only the partition keys and primary keys; the remaining columns will be added by the synchronization job.

Note: in this case you should not use --partition_keys or --primary_keys, because those keys are defined when the table is created and cannot be changed afterwards. In addition, if you specify computed columns, you should also define all the argument columns used by the computed columns.

Example 2: to synchronize a table with primary key "id INT" and compute the partition key "part=date_format(create_time,yyyy-MM-dd)", first create a table like the following (other columns can be omitted):

CREATE TABLE test_db.test_table (
    id INT,                 -- primary key
    create_time TIMESTAMP,  -- the argument of computed column part
    part STRING,            -- partition key
    PRIMARY KEY (id, part) NOT ENFORCED
) PARTITIONED BY (part);

Then you can submit the synchronization job:

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_table \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --computed_column 'part=date_format(create_time,yyyy-MM-dd)' \
    ... (other conf)

d) Synchronizing Databases

Using PulsarSyncDatabaseAction in a Flink DataStream job, or directly through flink run, you can synchronize multiple topics or a single topic into one Paimon database.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_database \
    --warehouse <warehouse-path> \
    --database <database-name> \
    [--table_prefix <paimon-table-prefix>] \
    [--table_suffix <paimon-table-suffix>] \
    [--including_tables <table-name|name-regular-expr>] \
    [--excluding_tables <table-name|name-regular-expr>] \
    [--type_mapping to-string] \
    [--pulsar_conf <pulsar-source-conf> [--pulsar_conf <pulsar-source-conf> ...]] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]] \
    [--table_conf <paimon-table-sink-conf> [--table_conf <paimon-table-sink-conf> ...]]
Configuration options:

--warehouse: The path to Paimon warehouse.
--database: The database name in Paimon catalog.
--ignore_incompatible: It is default false; in this case, if a MySQL table name exists in Paimon and their schema is incompatible, an exception will be thrown. You can specify it to true explicitly to ignore the incompatible tables and exception.
--table_prefix: The prefix of all Paimon tables to be synchronized. For example, if you want all synchronized tables to have "ods_" as prefix, you can specify "--table_prefix ods_".
--table_suffix: The suffix of all Paimon tables to be synchronized. The usage is same as "--table_prefix".
--including_tables: It is used to specify which source tables are to be synchronized. You must use '|' to separate multiple tables. Because '|' is a special character, a comma is required, for example: 'a|b|c'. Regular expression is supported, for example, specifying "--including_tables test|paimon.*" means to synchronize table 'test' and all tables start with 'paimon'.
--excluding_tables: It is used to specify which source tables are not to be synchronized. The usage is same as "--including_tables". "--excluding_tables" has higher priority than "--including_tables" if you specified both.
--type_mapping: It is used to specify how to map MySQL data type to Paimon type. Supported options:
    "tinyint1-not-bool": maps MySQL TINYINT(1) to TINYINT instead of BOOLEAN.
    "to-nullable": ignores all NOT NULL constraints (except for primary keys). This is used to solve the problem that Flink cannot accept the MySQL 'ALTER TABLE ADD COLUMN column type NOT NULL DEFAULT x' operation.
    "to-string": maps all MySQL types to STRING.
    "char-to-string": maps MySQL CHAR(length)/VARCHAR(length) types to STRING.
    "longtext-to-bytes": maps MySQL LONGTEXT types to BYTES.
    "bigint-unsigned-to-bigint": maps MySQL BIGINT UNSIGNED, BIGINT UNSIGNED ZEROFILL, SERIAL to BIGINT. You should ensure overflow won't occur when using this option.
--pulsar_conf: The configuration for Flink Pulsar sources. Each configuration should be specified in the format "key=value". topic/topic-pattern, value.format, pulsar.client.serviceUrl, pulsar.admin.adminUrl, and pulsar.consumer.subscriptionName are required configurations, others are optional. See its document for a complete list of configurations.
--catalog_conf: The configuration for Paimon catalog. Each configuration should be specified in the format "key=value". See here for a complete list of catalog configurations.
--table_conf: The configuration for Paimon table sink. Each configuration should be specified in the format "key=value". See here for a complete list of table configurations.

Only tables with primary keys will be synchronized.

This action builds a single combined sink for all the tables. For each table in the Pulsar topics to be synchronized, if the corresponding Paimon table does not exist, the action creates it automatically, and its schema is derived from the tables of all the specified Pulsar topics.

If the Paimon table already exists and its schema differs from the schema parsed from the Pulsar topic data, the action will attempt schema evolution.
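
For example, assuming a new column "remark varchar(64)" starts appearing in the change events of the hypothetical orders table used earlier, schema evolution is conceptually equivalent to the action applying the following DDL to the Paimon table (a sketch of the internal behavior, not a statement you run yourself):

ALTER TABLE test_db.orders ADD COLUMN remark VARCHAR(64);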

Example: synchronizing one Pulsar topic into a Paimon database.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --pulsar_conf topic=order \
    --pulsar_conf value.format=canal-json \
    --pulsar_conf pulsar.client.serviceUrl=pulsar://127.0.0.1:6650 \
    --pulsar_conf pulsar.admin.adminUrl=http://127.0.0.1:8080 \
    --pulsar_conf pulsar.consumer.subscriptionName=paimon-tests \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4

Synchronizing multiple Pulsar topics into a Paimon database.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --pulsar_conf topic=order,logistic_order,user \
    --pulsar_conf value.format=canal-json \
    --pulsar_conf pulsar.client.serviceUrl=pulsar://127.0.0.1:6650 \
    --pulsar_conf pulsar.admin.adminUrl=http://127.0.0.1:8080 \
    --pulsar_conf pulsar.consumer.subscriptionName=paimon-tests \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4

e) Additional pulsar_conf Options

The following options are not covered by the flink-connector-pulsar documentation.

value.format (String, default: none): Defines the format identifier for encoding value data.
topic (String, default: none): Topic name(s) from which the data is read. It also supports a topic list, with topics separated by semicolons, like 'topic-1;topic-2'. Note that only one of "topic-pattern" and "topic" can be specified.
topic-pattern (String, default: none): The regular expression for a pattern of topic names to read from. All topics whose names match the specified regular expression will be subscribed by the consumer when the job starts running. Note that only one of "topic-pattern" and "topic" can be specified.
pulsar.startCursor.fromMessageId (String, default: EARLIEST): Uses the unique identifier of a single message to seek the start position. The common format is a triple 'ledgerId,entryId,partitionIndex'. Specially, you can set it to EARLIEST (-1, -1, -1) or LATEST (Long.MAX_VALUE, Long.MAX_VALUE, -1).
pulsar.startCursor.fromPublishTime (Long, default: none): Uses the message publish time to seek the start position.
pulsar.startCursor.fromMessageIdInclusive (Boolean, default: true): Whether to include the given message id. This option only works when the message id is not EARLIEST or LATEST.
pulsar.stopCursor.atMessageId (String, default: none): Stop consuming when the message id is equal to or greater than the specified message id. The message equal to the specified message id will not be consumed. The common format is a triple 'ledgerId,entryId,partitionIndex'. Specially, you can set it to LATEST (Long.MAX_VALUE, Long.MAX_VALUE, -1).
pulsar.stopCursor.afterMessageId (String, default: none): Stop consuming when the message id is greater than the specified message id. The message equal to the specified message id will be consumed. The common format is a triple 'ledgerId,entryId,partitionIndex'. Specially, you can set it to LATEST (Long.MAX_VALUE, Long.MAX_VALUE, -1).
pulsar.stopCursor.atEventTime (Long, default: none): Stop consuming when the message event time is greater than or equal to the specified timestamp. The message whose event time equals the specified timestamp will not be consumed.
pulsar.stopCursor.afterEventTime (Long, default: none): Stop consuming when the message event time is greater than the specified timestamp. The message whose event time equals the specified timestamp will be consumed.
pulsar.source.unbounded (Boolean, default: true): Specifies the boundedness of the stream.
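
As a sketch of how these options combine with the commands shown earlier (the topic pattern and publish timestamp below are illustrative), a database synchronization job that subscribes by topic pattern and starts reading from a given publish time could look like:

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    pulsar_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --pulsar_conf topic-pattern='order.*' \
    --pulsar_conf value.format=canal-json \
    --pulsar_conf pulsar.client.serviceUrl=pulsar://127.0.0.1:6650 \
    --pulsar_conf pulsar.admin.adminUrl=http://127.0.0.1:8080 \
    --pulsar_conf pulsar.consumer.subscriptionName=paimon-tests \
    --pulsar_conf pulsar.startCursor.fromPublishTime=1700000000000 \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4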