Apache Paimon, a Next-Generation Data Lake Storage Technology: Getting-Started Demo (1)


Note:

(1) multiWrite code

(2) Read latency

III. Problems you may run into

IV. Outlook


Preface

1. What is Apache Paimon

Apache Paimon (incubating) is a streaming data lake storage technology that offers high-throughput, low-latency data ingestion, streaming subscription, and real-time query capabilities.

Paimon embraces open data formats and an open technical philosophy, and integrates with many mainstream compute engines such as Apache Flink, Spark, and Trino, jointly advancing the adoption of the Streaming Lakehouse architecture.

As a lake storage layer, Paimon manages metadata on top of a distributed file system and uses the open ORC, Parquet, and Avro file formats. It supports the major compute engines, including Flink, Spark, Hive, Trino, and Presto, with more engines such as Doris and StarRocks to follow.

Official site: https://paimon.apache.org/

GitHub: https://github.com/apache/incubator-paimon

The following is a quick-start example for getting hands-on with Paimon:

I. Quick Start in a Local Environment

Based on paimon 0.4-SNAPSHOT (Flink 1.14.4). A Flink version that is too old is not supported: Paimon's stated minimum on the 1.14 line is 1.14.6, and in my tests it did not work on Flink 1.14.0!

paimon-flink-1.14-0.4-20230504.002229-50.jar

1. Local Flink standalone cluster

  1. Download the jar and add it to Flink's lib directory.

  2. Following the official demo, start the Flink SQL client, then create a catalog, create a table, create a datagen source (as a view), and insert data into the table (see the SQL sketch after this list).

  3. Check the Flink UI at localhost:8081.

  4. Inspect the data and metadata files on the filesystem.
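
For step 2, a minimal SQL sketch following the official Paimon quickstart (the catalog name and warehouse path below are illustrative assumptions):

-- Create a Paimon catalog backed by the local filesystem
CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'file:///tmp/paimon'
);

USE CATALOG my_catalog;

-- Primary-key table that will hold the word counts
CREATE TABLE word_count (
    word STRING PRIMARY KEY NOT ENFORCED,
    cnt BIGINT
);

-- Unbounded datagen source
CREATE TEMPORARY TABLE word_table (
    word STRING
) WITH (
    'connector' = 'datagen',
    'fields.word.length' = '1'
);

-- Streaming writes to Paimon require checkpointing
SET 'execution.checkpointing.interval' = '10 s';

INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;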

2. Running the Paimon demo in IDEA

pom dependency:

        <dependency>
            <groupId>org.apache.paimon</groupId>
            <artifactId>paimon-flink-1.14</artifactId>
            <version>0.4-SNAPSHOT</version>
        </dependency>

If Maven cannot resolve the snapshot, install the jar into your local repository manually:

mvn install:install-file -DgroupId=org.apache.paimon -DartifactId=paimon-flink-1.14 -Dversion=0.4-SNAPSHOT -Dpackaging=jar -Dfile=D:\software\paimon-flink-1.14-0.4-20230504.002229-50.jar
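
Besides the Paimon connector itself, the IDEA demo assumes the usual Flink 1.14 dependencies are on the classpath; a minimal sketch (the Scala suffix and patch version are my assumptions, adjust to your environment):

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.12</artifactId>
            <version>1.14.6</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-java-bridge_2.12</artifactId>
            <version>1.14.6</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.12</artifactId>
            <version>1.14.6</version>
        </dependency>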

2.1 Code
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-14 15:12
 * @Version: 1.0
 */

// Succeeds locally!
public class OfficeDemoV1 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.setParallelism(1);
        // Streaming writes to Paimon require checkpointing to commit snapshots.
        env.enableCheckpointing(10000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");

        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 0. Create a Catalog and a Table
        tableEnv.executeSql("CREATE CATALOG my_catalog_api WITH (\n" +
                "    'type'='paimon',\n" +               // catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");

        tableEnv.executeSql("USE CATALOG my_catalog_api");

        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS word_count_api (\n" +
                "    word STRING PRIMARY KEY NOT ENFORCED,\n" +
                "    cnt BIGINT\n" +
                ")");

        // 1. Write Data
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS word_table_api (\n" +
                "    word STRING\n" +
                ") WITH (\n" +
                "    'connector' = 'datagen',\n" +
                "    'fields.word.length' = '1'\n" +
                ")");

        // tableEnv.executeSql("SET 'execution.checkpointing.interval' = '10 s'");

        // executeSql() submits the INSERT as its own job; await() blocks so the
        // local mini-cluster keeps running (the datagen source is unbounded).
        tableEnv.executeSql("INSERT INTO word_count_api SELECT word, COUNT(*) FROM word_table_api GROUP BY word")
                .await();

        // Note: env.execute() is not needed (and would fail with "No operators
        // defined in streaming topology", since no DataStream operators exist).
    }
}
2.2 Running successfully in IDEA
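
To verify the result, you can query the table from a separate run that registers the same catalog; a small sketch (this streaming SELECT keeps printing changelog updates until stopped):

        // Print the continuously updated word counts (stop the program to end it).
        tableEnv.executeSql("SELECT * FROM word_count_api").print();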

3. Streaming read/write in IDEA

3.1 Streaming write

Code:

package com.study.flink.table.paimon.demo;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-17 11:11
 * @Version: 1.0
 */

// Succeeds locally!
public class OfficeStreamsWriteV2 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.setParallelism(1);
        env.enableCheckpointing(10000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");

        TableEnvironment tableEnv = StreamTableEnvironment.create(env);


        // 0. Create a Catalog and a Table
        tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
                "    'type'='paimon',\n" +               // catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");

        tableEnv.executeSql("USE CATALOG my_catalog_local");

        tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
        tableEnv.executeSql("USE local_db");

        // Drop the table first so reruns start from a clean state.
        tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
                + " uuid bigint,\n"
                + " name VARCHAR(3),\n"
                + " age int,\n"
                + " ts TIMESTAMP(3),\n"
                + " dt VARCHAR(10), \n"
                + " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
                + ") PARTITIONED BY (dt) \n"
                + " WITH (\n" +
                "    'merge-engine' = 'partial-update',\n" +
                "    'changelog-producer' = 'full-compaction', \n" +
                "    'file.format' = 'orc', \n" +
                "    'scan.mode' = 'compacted-full', \n" +
                "    'bucket' = '5', \n" +
                "    'sink.parallelism' = '5', \n" +
                "    'sequence.field' = 'ts' \n" +   // todo, to check
                ")"
        );

        // datagen ====================================================================
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `name` VARCHAR(3)," +
                " _ts1 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");
        tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `age` int," +
                " _ts2 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");

        // Two separate executeSql() calls would submit two independent jobs,
        // which is exactly what triggers the write conflict described below:
        // tableEnv.executeSql("insert into paimon_tbl_streams(uuid, name, ts, dt) select uuid, name, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A");
        // tableEnv.executeSql("insert into paimon_tbl_streams(uuid, age, dt) select uuid, age, date_format(_ts2,'yyyy-MM-dd') as dt from source_B");
        // A StatementSet bundles both INSERTs into a single Flink job.
        StatementSet statementSet = tableEnv.createStatementSet();
        statementSet
                .addInsertSql("insert into paimon_tbl_streams(uuid, name, ts, dt) select uuid, name, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A")
                .addInsertSql("insert into paimon_tbl_streams(uuid, age, dt) select uuid, age, date_format(_ts2,'yyyy-MM-dd') as dt from source_B")
                ;

        statementSet.execute();
        // env.execute() is not needed: StatementSet.execute() submits the job itself.
    }
}

Result:

If there is only one stream, the code above works without any problem (for a write demo, one stream is enough); with two streams you run into a "write conflict"!

I tried the official remedy, a Dedicated Compaction Job, but it did not seem to help; for the working solution see Section II, "Advanced: Local (IDEA) Multi-Stream Join Test". For reference, the launch command is sketched below.
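
Per the Paimon 0.4 docs, the dedicated compaction job is launched roughly like this (the jar path is an assumption), with the writing jobs additionally configured with 'write-only' = 'true' so that they skip compaction themselves:

<FLINK_HOME>/bin/flink run \
    -c org.apache.paimon.flink.action.FlinkActions \
    /path/to/paimon-flink-1.14-0.4-SNAPSHOT.jar \
    compact \
    --warehouse file:///D:/tmp/paimon \
    --database local_db \
    --table paimon_tbl_streams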

3.2 Streaming read (toChangelogStream)

Code:

package com.study.flink.table.paimon.demo;

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-15 18:50
 * @Version: 1.0
 */

// Streaming read of a single table works!
public class OfficeStreamReadV1  {

    public static final Logger LOGGER = LogManager.getLogger(OfficeStreamReadV1.class);

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.setParallelism(1);
        env.enableCheckpointing(10000L);
        env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");

        TableEnvironment tableEnv = StreamTableEnvironment.create(env);


        // 0. Create a Catalog and a Table
        tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
                "    'type'='paimon',\n" +               // catalog type must be 'paimon'
                "    'warehouse'='file:///D:/tmp/paimon'\n" +
                ")");

        tableEnv.executeSql("USE CATALOG my_catalog_local");

        tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
        tableEnv.executeSql("USE local_db");

        // No need to create the table again; it already exists in the catalog.

        // convert to DataStream
        // Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams");
        Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams WHERE name is not null and age is not null");
        // DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv).toChangelogStream(table);
        // Note: toDataStream(table) fails here with "doesn't support consuming update and delete changes which is produced by node TableSourceScan"
        // DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv).toDataStream(table);
        // Drop -U rows (i.e., the pre-update image does not need to be re-emitted)!
        DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv)
                .toChangelogStream(table, Schema.newBuilder().primaryKey("dt","uuid").build(), ChangelogMode.upsert())
                .filter(new FilterFunction<Row>() {
                    @Override
                    public boolean filter(Row row) throws Exception {
                        boolean isNotUpdateBefore = !(row.getKind().equals(RowKind.UPDATE_BEFORE));
                        if (!isNotUpdateBefore) {
                            LOGGER.info("UPDATE_BEFORE: " + row.toString());
                        }
                        return isNotUpdateBefore;
                    }
                })
                ;

        // Consume the changelog stream; this blocks forever on the unbounded source.
        dataStream.executeAndCollect().forEachRemaining(System.out::println);

        // env.execute() is unnecessary here: executeAndCollect() already triggers execution.
    }
}

Result:

II. Advanced: Local (IDEA) Multi-Stream Join Test

The problem to solve:

Several streams share the same primary key; each stream updates a different subset of the non-key fields, and the rows are stitched together on the primary key.

Note:

If two Flink jobs (or two pipelines) write the same Paimon table, a conflict arises immediately: one of the streams keeps throwing exceptions and restarting.

Instead, you can use UNION ALL to merge the streams into one, so that a single Flink job writes the Paimon table, as sketched below.
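
A minimal sketch against the table defined above (padding the missing columns with NULL is my assumption; by default the partial-update merge engine does not overwrite existing values with NULL, so each stream fills in only its own fields):

INSERT INTO paimon_tbl_streams (uuid, name, age, ts, dt)
SELECT uuid, name, CAST(NULL AS INT) AS age, _ts1 AS ts,
       date_format(_ts1, 'yyyy-MM-dd') AS dt
FROM source_A
UNION ALL
SELECT uuid, CAST(NULL AS VARCHAR(3)) AS name, age, _ts2 AS ts,
       date_format(_ts2, 'yyyy-MM-dd') AS dt
FROM source_B;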

