Flink & Paimon Development, Part 1: Creating a Catalog

Development Environment

  1. IntelliJ IDEA
  2. Flink 1.17.1
  3. Paimon 0.5 (official release)
  4. Local or HDFS storage

References

Paimon Flink API

https://paimon.apache.org/docs/master/api/flink-api/

Code

  1. pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.apache.flink</groupId>
    <artifactId>flink-quickstart-java</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <flink.version>1.17.1</flink.version>
        <hadoop.version>3.3.6</hadoop.version>
    </properties>

    <dependencies>

        <dependency>
            <groupId>org.apache.paimon</groupId>
            <artifactId>paimon-flink-1.17</artifactId>
            <version>0.5.0-incubating</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-java-bridge</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-table-planner -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.12</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-table-runtime -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-runtime</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-sql-client -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-sql-client</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- hadoop begin -->
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.avro</groupId>
                    <artifactId>avro</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>jdk.tools</groupId>
                    <artifactId>jdk.tools</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.avro</groupId>
                    <artifactId>avro</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs-client</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.avro</groupId>
                    <artifactId>avro</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>com.google.protobuf</groupId>
                    <artifactId>protobuf-java</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>ch.qos.reload4j</groupId>
                    <artifactId>reload4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-reload4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>jdk.tools</groupId>
                    <artifactId>jdk.tools</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client-api</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
            <exclusion>
                <groupId>com.google.protobuf</groupId>
                <artifactId>protobuf-java</artifactId>
            </exclusion>
            <exclusion>
                <groupId>ch.qos.reload4j</groupId>
                <artifactId>reload4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-reload4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
                <groupId>jdk.tools</groupId>
                <artifactId>jdk.tools</artifactId>
            </exclusion>
            </exclusions>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.1.1</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <artifactSet>
                                <excludes>
                                    <exclude>com.google.code.findbugs:jsr305</exclude>
                                </excludes>
                            </artifactSet>
                            <filters>
                                <filter>
                                    <!-- Do not copy the signatures in the META-INF folder.
                                    Otherwise, this might cause SecurityExceptions when using the JAR. -->
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <!-- Replace this with the main class of your job -->
                                    <mainClass>my.programs.main.clazz</mainClass>
                                </transformer>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
  2. Java code
package paimon;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;


public class WriteToTable {

    public static void main(String[] args) throws Exception {

        // create environments of both APIs
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // create a changelog DataStream
        DataStream<Row> dataStream =
                env.fromElements(
                        Row.ofKind(RowKind.INSERT, "Alice", 12),
                        Row.ofKind(RowKind.INSERT, "Bob", 5),
                        Row.ofKind(RowKind.UPDATE_BEFORE, "Alice", 12),
                        Row.ofKind(RowKind.UPDATE_AFTER, "Alice", 100))
                        .returns(
                                Types.ROW_NAMED(
                                        new String[] {"name", "age"},
                                        Types.STRING, Types.INT));

        // interpret the DataStream as a Table
        Schema schema = Schema.newBuilder()
                .column("name", DataTypes.STRING())
                .column("age", DataTypes.INT())
                .build();
        Table table = tableEnv.fromChangelogStream(dataStream, schema);

        // create paimon catalog
        tableEnv.executeSql("CREATE CATALOG paimon WITH ('type' = 'paimon', 'warehouse'='file:///Users/xxxxx/share_dir/paimon')");
//        tableEnv.executeSql("CREATE CATALOG paimon WITH ('type' = 'paimon', 'warehouse'='hdfs://xxxxxx:8020/paimon')");
        tableEnv.executeSql("USE CATALOG paimon;");

        // register the changelog table under a name so it can be referenced in SQL
        tableEnv.createTemporaryView("InputTable", table);

        tableEnv.executeSql("CREATE TABLE IF NOT EXISTS sink_paimon_table (\n" +
                "    name STRING PRIMARY KEY NOT ENFORCED,\n" +
                "    age INT\n" +
                ")");

        // insert into paimon table from your data stream table
        tableEnv.executeSql("INSERT INTO sink_paimon_table SELECT * FROM InputTable");

//        env.execute("paimon demo1");
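        // Note: executeSql() on the INSERT statement above already submits the Flink job,
        // so calling env.execute() here is not required.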
    }

}
  3. Verification
    3.1 Start a local cluster: flink/bin/start-cluster.sh
    3.2 Create the init file flink/conf/sql-client-paimon.sql with the content below
    3.3 Launch the SQL client with it: flink/bin/sql-client.sh -i …/conf/sql-client-paimon.sql
CREATE CATALOG my_catalog WITH (
    'type'='paimon',
    'warehouse'='file:///Users/leicq/share_dir/paimon'
);

USE CATALOG my_catalog;

SET 'sql-client.execution.result-mode' = 'tableau';
-- switch to batch mode
--SET 'execution.runtime-mode' = 'batch';
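
Once the client is up with this init file, the data written by the Java job can be checked directly. A minimal check, assuming the job above committed to the same warehouse path and that sink_paimon_table keeps the default deduplicate merge engine (so the UPDATE_BEFORE/UPDATE_AFTER pair for Alice collapses into one row):

-- run inside the SQL client started with the init file above
SET 'execution.runtime-mode' = 'batch';

SHOW TABLES;

-- expected rows: (Alice, 100) and (Bob, 5)
SELECT * FROM sink_paimon_table;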