Getting Started with Apache Paimon, a Next-Generation Data Lake Storage Technology: Demo (1)


package com.study.flink.table.paimon.demo;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

// NOTE: the original listing begins mid-method; the package, imports, and
// class/main declaration here are reconstructed (class name assumed) so that
// the snippet is self-contained.
public class OfficeStreamWriteV1 {

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    env.setParallelism(1);
    env.enableCheckpointing(10000L);
    env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");

    TableEnvironment tableEnv = StreamTableEnvironment.create(env);


    // 0. Create a Catalog and a Table
    tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
            "    'type'='paimon',\n" +                           // todo: !!!
            "    'warehouse'='file:///D:/tmp/paimon'\n" +
            ")");

    tableEnv.executeSql("USE CATALOG my_catalog_local");

    tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
    tableEnv.executeSql("USE local_db");

    // drop tbl
    tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
    tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
            + " uuid bigint,\n"
            + " name VARCHAR(3),\n"
            + " age int,\n"
            + " ts TIMESTAMP(3),\n"
            + " dt VARCHAR(10), \n"
            + " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
            + ") PARTITIONED BY (dt) \n"
            + " WITH (\n" +
            "    'merge-engine' = 'partial-update',\n" +
            "    'changelog-producer' = 'full-compaction', \n" +
            "    'file.format' = 'orc', \n" +
            "    'scan.mode' = 'compacted-full', \n" +
            "    'bucket' = '5', \n" +
            "    'sink.parallelism' = '5', \n" +
            "    'sequence.field' = 'ts' \n" +   // todo, to check
            ")"
    );

    // datagen ====================================================================
    tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
            " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
            " `name` VARCHAR(3)," +
            " _ts1 TIMESTAMP(3)\n" +
            ") WITH (\n" +
            " 'connector' = 'datagen', \n" +
            " 'fields.uuid.kind'='sequence',\n" +
            " 'fields.uuid.start'='0', \n" +
            " 'fields.uuid.end'='1000000', \n" +
            " 'rows-per-second' = '1' \n" +
            ")");
    tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
            " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
            " `age` int," +
            " _ts2 TIMESTAMP(3)\n" +
            ") WITH (\n" +
            " 'connector' = 'datagen', \n" +
            " 'fields.uuid.kind'='sequence',\n" +
            " 'fields.uuid.start'='0', \n" +
            " 'fields.uuid.end'='1000000', \n" +
            " 'rows-per-second' = '1' \n" +
            ")");

    //
    //tableEnv.executeSql("insert into paimon_tbl_streams(uuid, name, _ts1) select uuid, concat(name,'_A') as name, _ts1 from source_A");
    //tableEnv.executeSql("insert into paimon_tbl_streams(uuid, age, _ts1) select uuid, concat(age,'_B') as age, _ts1 from source_B");
    StatementSet statementSet = tableEnv.createStatementSet();
    statementSet
            .addInsertSql("insert into paimon_tbl_streams(uuid, name, ts, dt) select uuid, name, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A")
            .addInsertSql("insert into paimon_tbl_streams(uuid, age, dt) select uuid, age, date_format(_ts2,'yyyy-MM-dd') as dt from source_B")
            ;

    statementSet.execute();
    // env.execute();
}

}


Result:


![](https://img-blog.csdnimg.cn/56d9290f711c47e7bf18c2ee69e15644.png)


With a single stream, the code above works without issues (one stream is enough for a write demo); with two streams, a "write conflict" problem appears!


As shown below:


![](https://img-blog.csdnimg.cn/2cd69062d45f4510b34a85ff71a3d3d5.png)


I tried the approach from the official docs, [Dedicated Compaction Job]( ), but it did not seem to help; for a working solution see "**II. Advanced: Local (IDEA) Multi-Stream Join Test**" below.
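For reference, the docs' recipe has two parts: every writer job skips compaction via the 'write-only' table option, and one separate, dedicated job compacts the table. A minimal writer-side sketch (the ALTER TABLE syntax and the 'write-only' option key follow the Paimon docs; this is the approach that did not resolve the conflict in my test):

    // writers skip compaction; a separate dedicated job compacts the table
    tableEnv.executeSql("ALTER TABLE paimon_tbl_streams SET ('write-only' = 'true')");
    // the dedicated compaction job is then launched separately, e.g. via the
    // paimon-flink-action jar's `compact` action (see the Paimon docs for the
    // exact command line of your version)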


#### 3.2 Streaming Read (toChangelogStream)


Code:



package com.study.flink.table.paimon.demo;

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-15 18:50
 * @Version: 1.0
 */

// Streaming read of a single table works!
public class OfficeStreamReadV1 {

public static final Logger LOGGER = LogManager.getLogger(OfficeStreamReadV1.class);

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    env.setParallelism(1);
    env.enableCheckpointing(10000L);
    env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");

    TableEnvironment tableEnv = StreamTableEnvironment.create(env);


    // 0. Create a Catalog and a Table
    tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
            "    'type'='paimon',\n" +                           // todo: !!!
            "    'warehouse'='file:///D:/tmp/paimon'\n" +
            ")");

    tableEnv.executeSql("USE CATALOG my_catalog_local");

    tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
    tableEnv.executeSql("USE local_db");

    // the table does not need to be created again

    // convert to DataStream
    // Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams");
    Table table = tableEnv.sqlQuery("SELECT * FROM paimon_tbl_streams WHERE name is not null and age is not null");
    // DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv).toChangelogStream(table);
    // todo : doesn't support consuming update and delete changes which is produced by node TableSourceScan
    // DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv).toDataStream(table);
    // Drop -U rows (i.e., UPDATE_BEFORE records do not need to be re-emitted)!!!
    DataStream<Row> dataStream = ((StreamTableEnvironment) tableEnv)
            .toChangelogStream(table, Schema.newBuilder().primaryKey("dt","uuid").build(), ChangelogMode.upsert())
            .filter(new FilterFunction<Row>() {
                @Override
                public boolean filter(Row row) throws Exception {
                    boolean isNotUpdateBefore = !(row.getKind().equals(RowKind.UPDATE_BEFORE));
                    if (!isNotUpdateBefore) {
                        LOGGER.info("UPDATE_BEFORE: " + row.toString());
                    }
                    return isNotUpdateBefore;
                }
            })
            ;

    // use this datastream
    dataStream.executeAndCollect().forEachRemaining(System.out::println);

    // env.execute();  // not needed here: executeAndCollect() already triggers execution (see problem 4 below)
}

}


Result:


![](https://img-blog.csdnimg.cn/69fb1657b5c744189a57725f10513310.png)




## II. Advanced: Local (IDEA) Multi-Stream Join Test


#### The Problem to Solve:


Multiple streams share the same primary key, and each stream updates a different subset of the non-key fields; the streams are stitched together on that primary key.


#### Note:


If two Flink jobs (or two pipelines) write the same Paimon table, a conflict arises immediately: one of the streams keeps throwing exceptions and restarting;


Instead, use "UNION ALL" to merge the streams into a single stream, so that one Flink job writes the Paimon table (see the multiWrite code below);


Use a primary-key table with 'merge-engine' = 'partial-update'.


### 1. 'changelog-producer' = 'full-compaction'


#### (1) multiWrite Code



package com.study.flink.table.paimon.multi;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

/**
 * @Author: YK.Leo
 * @Date: 2023-05-18 10:17
 * @Version: 1.0
 */

// Succeeds locally!
// No conflict: ran for 5 minutes with no exceptions (and for days in production)!
// The data can also be stream-read by another job!
public class MultiStreamsUnionWriteV1 {
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    env.enableCheckpointing(10 * 1000L);
    env.getCheckpointConfig().setCheckpointStorage("file:/D:/tmp/paimon/");
    TableEnvironment tableEnv = StreamTableEnvironment.create(env);

    // 0. Create a Catalog and a Table
    tableEnv.executeSql("CREATE CATALOG my_catalog_local WITH (\n" +
            "    'type'='paimon',\n" +                           // todo: !!!
            "    'warehouse'='file:///D:/tmp/paimon'\n" +
            ")");
    tableEnv.executeSql("USE CATALOG my_catalog_local");

    tableEnv.executeSql("CREATE DATABASE IF NOT EXISTS my_catalog_local.local_db");
    tableEnv.executeSql("USE local_db");

    // drop & create tbl
    tableEnv.executeSql("DROP TABLE IF EXISTS paimon_tbl_streams");
    tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
            + " uuid bigint,\n"
            + " name VARCHAR(3),\n"
            + " age int,\n"
            + " ts TIMESTAMP(3),\n"
            + " dt VARCHAR(10), \n"
            + " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
            + ") PARTITIONED BY (dt) \n"
            + " WITH (\n" +
            "    'merge-engine' = 'partial-update',\n" +
            "    'changelog-producer' = 'full-compaction', \n" +
            "    'file.format' = 'orc', \n" +
            "    'scan.mode' = 'compacted-full', \n" +
            "    'bucket' = '5', \n" +
            "    'sink.parallelism' = '5', \n" +
            // "    'write_only' = 'true', \n" +
            "    'sequence.field' = 'ts' \n" +   // todo, to check
            ")"
    );

    // datagen ====================================================================
    tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_A (\n" +
            " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
            " `name` VARCHAR(3)," +
            " _ts1 TIMESTAMP(3)\n" +
            ") WITH (\n" +
            " 'connector' = 'datagen', \n" +
            " 'fields.uuid.kind'='sequence',\n" +
            " 'fields.uuid.start'='0', \n" +
            " 'fields.uuid.end'='1000000', \n" +
            " 'rows-per-second' = '1' \n" +
            ")");
    tableEnv.executeSql("CREATE TEMPORARY TABLE IF NOT EXISTS source_B (\n" +
            " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
            " `age` int," +
            " _ts2 TIMESTAMP(3)\n" +
            ") WITH (\n" +
            " 'connector' = 'datagen', \n" +
            " 'fields.uuid.kind'='sequence',\n" +
            " 'fields.uuid.start'='0', \n" +
            " 'fields.uuid.end'='1000000', \n" +
            " 'rows-per-second' = '1' \n" +
            ")");

    //
    StatementSet statementSet = tableEnv.createStatementSet();
    String sqlText = "INSERT INTO paimon_tbl_streams(uuid, name, age, ts, dt) \n" +
            "select uuid, name, cast(null as int) as age, _ts1 as ts, date_format(_ts1,'yyyy-MM-dd') as dt from source_A \n" +
            "UNION ALL \n" +
            "select uuid, cast(null as string) as name, age, _ts2 as ts, date_format(_ts2,'yyyy-MM-dd') as dt from source_B"
            ;
    statementSet.addInsertSql(sqlText);

    statementSet.execute();
}

}


The read code is the same as above.


#### (2) Read Latency


That is, the latency from client-side data landing in Paimon, through completion of the join with the server-side stream, to being picked up by the Flink-Paimon streaming read;


**Minute-level latency**!


### 2. 'changelog-producer' = 'lookup'


Reads and writes are the same as above; only the table options change at creation time: 'changelog-producer' = 'lookup', and the matching scan mode must be set to 'scan.mode' = 'latest'.
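A minimal sketch of the modified DDL, assuming everything else stays as in the multiWrite code above (only the two marked options differ):

    tableEnv.executeSql("CREATE TABLE IF NOT EXISTS paimon_tbl_streams(\n"
            + " uuid bigint,\n"
            + " name VARCHAR(3),\n"
            + " age int,\n"
            + " ts TIMESTAMP(3),\n"
            + " dt VARCHAR(10), \n"
            + " PRIMARY KEY (dt, uuid) NOT ENFORCED \n"
            + ") PARTITIONED BY (dt) \n"
            + " WITH (\n"
            + "    'merge-engine' = 'partial-update',\n"
            + "    'changelog-producer' = 'lookup',\n"    // changed
            + "    'scan.mode' = 'latest',\n"             // changed
            + "    'file.format' = 'orc',\n"
            + "    'bucket' = '5',\n"
            + "    'sink.parallelism' = '5',\n"
            + "    'sequence.field' = 'ts'\n"
            + ")");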


Lookup may offer lower latency, but its data quality remains to be verified.



**Note:**


In testing, the full-compaction mode has so far been completely stable in a production environment (two joined streams at about 3K QPS, 2-3 minutes of latency).


99.9% of the data arrives within 2-3 minutes;


(with the multiWrite job's checkpoint interval set to 60 s)


![](https://img-blog.csdnimg.cn/96b644ab0d2f40a0921f9b8cbb268752.png)



## III. Problems You May Encounter


1. Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory


Cause: a dependency conflict on org.codehaus.janino.


Fix: exclude all of it:


<exclude>org.codehaus.janino:\*</exclude>


2. Caused by: java.lang.ClassNotFoundException: org.apache.flink.util.function.SerializableFunction


Cause: the Flink streaming and Flink table versions do not match, or a relevant dependency is missing (here, the Flink version Paimon depends on is at least 1.14.6, which is incompatible with Flink 1.14.0).


Fix: upgrade Flink to 1.14.4 or later.


See the Flink configuration reference: [Configuration | Apache Flink]( )


3. Caused by: java.util.ServiceConfigurationError: org.apache.flink.table.factories.Factory: Provider org.apache.flink.table.store.connector.TableStoreManagedFactory not found


Add a Factory file under the project's META-INF/services path (so that Flink's CatalogFactory can be matched and the catalog can be created).
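Following the Java ServiceLoader convention, the file is named after the interface and lists the implementation class from the error message (a sketch; use the factory class your version actually ships):

    # src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory
    org.apache.flink.table.store.connector.TableStoreManagedFactory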


4. Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: No operators defined in streaming topology. Cannot execute.


If tableEnv.executeSql(...) or statementSet.execute() is already present, there is no need to call env.execute() again!
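In other words (a minimal sketch):

    statementSet.execute();   // this already submits the job
    // env.execute();         // would fail: "No operators defined in streaming topology"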


5. Flink SQL cannot use a bare null AS ...; it must be written as cast(null as data_type), e.g. cast(null as string);


6. When creating a partitioned Paimon table, the partition fields must be included in the primary key! Otherwise, table creation fails with an error:


![](https://img-blog.csdnimg.cn/ce1c4c3a933342ffbc9f01786c05fe36.png)
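A minimal contrast, using the demo table's columns (sketch):

    // Fails: partition field 'dt' is not part of the primary key
    //   PRIMARY KEY (uuid) NOT ENFORCED ... PARTITIONED BY (dt)
    // Works (as used throughout this post): partition field included in the key
    //   PRIMARY KEY (dt, uuid) NOT ENFORCED ... PARTITIONED BY (dt)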



## IV. Outlook


Given data of the form:


| Primary Key | stream_client | stream_server | ts |
| --- | --- | --- | --- |
| 1001 | null | a | 1 |
| 1001 | A | null | 2 |
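With 'merge-engine' = 'partial-update' and 'sequence.field' = 'ts', null fields do not overwrite existing values, so these two rows would be expected to merge into a single row: 1001 | A | a | 2.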



