[Hudi] Compiling Hudi and basic usage with Spark and Flink

This article walks through building and configuring Apache Hudi and integrating it with Spark and Flink. Examples cover creating Hudi tables, inserting, querying, updating, deleting, and overwriting data, as well as the workflow for working with data through the Flink SQL client.


Hudi is generally compiled with Maven, but it can also be used by simply extracting a pre-built package.

Installing Maven

Maven download URLs:

https://archive.apache.org/dist/maven/maven-3/


https://archive.apache.org/dist/maven/maven-3/3.5.4/binaries/
I used version 3.5.4. The build initially failed with 3.2.5; switching to the newer version succeeded right away.
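A minimal sketch of fetching and extracting Maven 3.5.4 under /opt (the install path matches the MAVEN_HOME exported below; the file name is the standard binary tarball at the URL above):

cd /opt
wget https://archive.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
# Extracting yields /opt/apache-maven-3.5.4, which MAVEN_HOME points to below
tar -zxvf apache-maven-3.5.4-bin.tar.gz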

vi /etc/profile

Add the Java and Maven environment variables at the bottom:

export JAVA_HOME=/usr/local/jdk_lin
export GOPATH=/opt/go
export GOROOT=/opt/go
export HADOOP_HOME=/opt/hadoop-3.1.4
export MAVEN_HOME=/opt/apache-maven-3.5.4

export SPARK_HOME=/opt/spark
export SPARK_CONF_DIR=$SPARK_HOME/conf
export PYSPARK_DRIVER_PYTHON=jupyter-notebook
export PYSPARK_DRIVER_PYTHON_OPTS="--ip=0.0.0.0 --port=7070"

CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$MAVEN_HOME/bin:$HADOOP_HOME/bin:$GOROOT/bin:$SPARK_HOME/bin:$SPARK_HOME/jars
source /etc/profile
mvn -v

Go into Maven's conf directory:

vi settings.xml
<!-- Add the Aliyun mirror -->
<mirror>
        <id>nexus-aliyun</id>
        <mirrorOf>central</mirrorOf>
        <name>Nexus aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>


Method 1: install Hudi via git

Install git:

yum install git

git --version

git clone --branch release-0.10.1 https://gitee.com/apache/Hudi.git

After the clone completes, go into the Hudi source directory:

vi pom.xml

Add the following to the repositories section:

 <repository>
        <id>nexus-aliyun</id>
        <name>nexus-aliyun</name>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
 </repository>

Build Hudi (this took seventeen minutes):

mvn clean package -DskipTests -Dspark3 -Dscala-2.12
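After the build finishes, the bundle jars used later in this article sit under packaging/ in the source tree (the exact paths below are the ones passed to spark-shell and the Flink SQL client further down):

# Verify the Spark and Flink bundle jars produced by the build
ls packaging/hudi-spark-bundle/target/hudi-spark3.1.2-bundle_2.12-0.10.1.jar
ls packaging/hudi-flink-bundle/target/hudi-flink-bundle_2.12-0.10.1.jar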


Method 2: install from the hudi-0.10.1.src.tgz source release

Hudi download URL:
https://downloads.apache.org/
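A minimal sketch of fetching and unpacking the source release (the full mirror path below is an assumption; older releases such as 0.10.1 tend to live on archive.apache.org rather than downloads.apache.org, so adjust the URL to whatever the download page actually lists):

cd /opt
# Assumed archive path for the 0.10.1 source release
wget https://archive.apache.org/dist/hudi/0.10.1/hudi-0.10.1.src.tgz
tar -zxvf hudi-0.10.1.src.tgz
cd hudi-0.10.1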

After downloading:
1. Configure the pom file (the same Aliyun repository addition as above).
2. Run the build: mvn clean package -DskipTests -Dspark3 -Dscala-2.12

The rest is the same as method 1; this build took nine minutes.

After a successful build, the Hudi CLI can be launched from the source directory:

(base) [root@yxkj153 Hudi]# ./hudi-cli/hudi-cli.sh


Notes:
1. Mind the Hudi and Maven versions (see the Maven 3.5.4 note above).
2. If tar -zxvf fails on the Hudi package, the download may be incomplete; download the package again.
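One way to check for an incomplete download is to compute the local checksum and compare it with the published value (a sketch, assuming the matching .sha512 file has also been fetched from the download page):

# Compare the printed hash against the contents of hudi-0.10.1.src.tgz.sha512
sha512sum hudi-0.10.1.src.tgz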

Using Hudi with Spark

Launching spark-shell

When launching spark-shell, the spark-avro module must be specified because it is not in the default environment, and the spark-avro version has to match the Spark version. The Hudi bundle jar compiled above is used here.

(base) [root@yxkj153 target]# /opt/spark-3.1.2-bin-hadoop3.2/bin/spark-shell --jars /opt/hudi-0.10.1/packaging/hudi-spark-bundle/target/hudi-spark3.1.2-bundle_2.12-0.10.1.jar --packages org.apache.spark:spark-avro_2.12:3.1.2 --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'


Set the table name

Set the table name, base path, and data generator:

scala> import org.apache.hudi.QuickstartUtils._
import org.apache.hudi.QuickstartUtils._

scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._

scala> import org.apache.spark.sql.SaveMode._
import org.apache.spark.sql.SaveMode._

scala> import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceReadOptions._

scala> import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.DataSourceWriteOptions._

scala> import org.apache.hudi.config.HoodieWriteConfig._
import org.apache.hudi.config.HoodieWriteConfig._

scala> val tableName = "hudi_trips_cow"
tableName: String = hudi_trips_cow

scala> val basePath = "file:///tmp/hudi_trips_cow"
basePath: String = file:///tmp/hudi_trips_cow

scala>  val dataGen = new DataGenerator
dataGen: org.apache.hudi.QuickstartUtils.DataGenerator = org.apache.hudi.QuickstartUtils$DataGenerator@47effce0


Insert data

Generate some new records, load them into a DataFrame, and write the DataFrame to the Hudi table.

scala> val inserts = convertToStringList(dataGen.generateInserts(10))
inserts: java.util.List[String] = [{"ts": 1654951305538, "uuid": "c9571c23-a838-41c0-a3e7-03d0a5b642b8", "rider": "rider-213", "driver": "driver-213", "begin_lat": 0.4726905879569653, "begin_lon": 0.46157858450465483, "end_lat": 0.754803407008858, "end_lon": 0.9671159942018241, "fare": 34.158284716382845, "partitionpath": "americas/brazil/sao_paulo"}, {"ts": 1655083361273, "uuid": "db91da5a-fb40-4600-8907-a0898b3d925c", "rider": "rider-213", "driver": "driver-213", "begin_lat": 0.6100070562136587, "begin_lon": 0.8779402295427752, "end_lat": 0.3407870505929602, "end_lon": 0.5030798142293655, "fare": 43.4923811219014, "partitionpath": "americas/brazil/sao_paulo"}, {"ts": 1654575413398, "uuid": "2f71860d-4269-4140-9b01-9c2003b5faef", "rider": "rider-213", "driver"...

scala> val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
warning: there was one deprecation warning (since 2.12.0)
warning: there was one deprecation warning (since 2.2.0)
warning: there were two deprecation warnings in total; for details, enable `:setting -deprecation' or `:replay -deprecation'
df: org.apache.spark.sql.DataFrame = [begin_lat: double, begin_lon: double ... 8 more fields]

scala>  df.write.format("hudi").
     |         options(getQuickstartWriteConfigs).
     |         option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |         option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |         option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |         option(TABLE_NAME, tableName).
     |         mode(Overwrite).
     |         save(basePath)
warning: there was one deprecation warning; for details, enable `:setting -deprecation' or `:replay -deprecation'
22/06/13 10:00:20 WARN DFSPropertiesConfiguration: Cannot find HUDI_CONF_DIR, please set it as the dir of hudi-defaults.conf
22/06/13 10:00:20 WARN DFSPropertiesConfiguration: Properties file file:/etc/hudi/conf/hudi-defaults.conf not found. Ignoring to load props file

scala>

mode(Overwrite) drops and recreates the table if it already exists. You can check whether data was generated under /tmp/hudi_trips_cow:

(base) [root@yxkj153 target]# cd /tmp/hudi_trips_cow/
(base) [root@yxkj153 hudi_trips_cow]# ll
total 0
drwxr-xr-x 4 root root 41 Jun 13 10:00 americas
drwxr-xr-x 3 root root 19 Jun 13 10:00 asia

Query data

Because the test data is partitioned by region/country/city, the load path is basePath + "/*/*/*/*":

scala> val tripsSnapshotDF = spark.read.format("hudi").load(basePath + "/*/*/*/*")
tripsSnapshotDF: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 13 more fields]

scala> tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")

scala> spark.sql("select fare, begin_lon, begin_lat, ts from  hudi_trips_snapshot where fare > 20.0").show()
+------------------+-------------------+-------------------+-------------+
|              fare|          begin_lon|          begin_lat|           ts|
+------------------+-------------------+-------------------+-------------+
| 93.56018115236618|0.14285051259466197|0.21624150367601136|1654878001872|
| 27.79478688582596| 0.6273212202489661|0.11488393157088261|1654621765040|
| 64.27696295884016| 0.4923479652912024| 0.5731835407930634|1654575413398|
| 33.92216483948643| 0.9694586417848392| 0.1856488085068272|1654878418589|
| 66.62084366450246|0.03844104444445928| 0.0750588760043035|1654776491416|
|  43.4923811219014| 0.8779402295427752| 0.6100070562136587|1655083361273|
|34.158284716382845|0.46157858450465483| 0.4726905879569653|1654951305538|
| 41.06290929046368| 0.8192868687714224|  0.651058505660742|1654903333199|
+------------------+-------------------+-------------------+-------------+


scala> spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_trips_snapshot").show()
+-------------------+--------------------+----------------------+---------+----------+------------------+
|_hoodie_commit_time|  _hoodie_record_key|_hoodie_partition_path|    rider|    driver|              fare|
+-------------------+--------------------+----------------------+---------+----------+------------------+
|  20220613100020241|c3edc5cc-5baa-4be...|  americas/united_s...|rider-213|driver-213|19.179139106643607|
|  20220613100020241|4223d535-afec-43f...|  americas/united_s...|rider-213|driver-213| 93.56018115236618|
|  20220613100020241|5cacc412-0016-467...|  americas/united_s...|rider-213|driver-213| 27.79478688582596|
|  20220613100020241|2f71860d-4269-414...|  americas/united_s...|rider-213|driver-213| 64.27696295884016|
|  20220613100020241|7c9a2bdc-dc96-4b3...|  americas/united_s...|rider-213|driver-213| 33.92216483948643|
|  20220613100020241|2cd98443-5896-488...|  americas/brazil/s...|rider-213|driver-213| 66.62084366450246|
|  20220613100020241|db91da5a-fb40-460...|  americas/brazil/s...|rider-213|driver-213|  43.4923811219014|
|  20220613100020241|c9571c23-a838-41c...|  americas/brazil/s...|rider-213|driver-213|34.158284716382845|
|  20220613100020241|1ccf9ef2-0b2d-454...|    asia/india/chennai|rider-213|driver-213|17.851135255091155|
|  20220613100020241|66bd4bbb-728c-479...|    asia/india/chennai|rider-213|driver-213| 41.06290929046368|
+-------------------+--------------------+----------------------+---------+----------+------------------+

Update data

Similar to inserting new data: use the data generator to produce updates to existing records, load them into a DataFrame, and write the DataFrame to the Hudi table.

scala> val updates = convertToStringList(dataGen.generateUpdates(10))
updates: java.util.List[String] = [{"ts": 1654674819621, "uuid": "2f71860d-4269-4140-9b01-9c2003b5faef", "rider": "rider-284", "driver": "driver-284", "begin_lat": 0.7340133901254792, "begin_lon": 0.5142184937933181, "end_lat": 0.7814655558162802, "end_lon": 0.6592596683641996, "fare": 49.527694252432056, "partitionpath": "americas/united_states/san_francisco"}, {"ts": 1654685722692, "uuid": "c9571c23-a838-41c0-a3e7-03d0a5b642b8", "rider": "rider-284", "driver": "driver-284", "begin_lat": 0.1593867607188556, "begin_lon": 0.010872312870502165, "end_lat": 0.9808530350038475, "end_lon": 0.7963756520507014, "fare": 29.47661370147079, "partitionpath": "americas/brazil/sao_paulo"}, {"ts": 1654709754125, "uuid": "c9571c23-a838-41c0-a3e7-03d0a5b642b8", "rider": "rider-...

scala> val df = spark.read.json(spark.sparkContext.parallelize(updates, 2))
warning: there was one deprecation warning (since 2.12.0)
warning: there was one deprecation warning (since 2.2.0)
warning: there were two deprecation warnings in total; for details, enable `:setting -deprecation' or `:replay -deprecation'
df: org.apache.spark.sql.DataFrame = [begin_lat: double, begin_lon: double ... 8 more fields]

scala> df.write.format("hudi").
     |   options(getQuickstartWriteConfigs).
     |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |   option(TABLE_NAME, tableName).
     |   mode(Append).
     |   save(basePath)
warning: there was one deprecation warning; for details, enable `:setting -deprecation' or `:replay -deprecation'

scala>

Querying the changed data (incremental query)

Hudi also provides the ability to obtain the stream of records that changed since a given commit timestamp. This is done with Hudi's incremental query, supplying a begin time from which changes should be streamed.

scala> spark.read.format("hudi").load(basePath+"/*/*/*/*").createOrReplaceTempView("hudi_trips_snapshot")

scala> val commits = spark.sql("select distinct(_hoodie_commit_time) as commitTime from  hudi_trips_snapshot order by commitTime").map(k => k.getString(0)).take(20)
commits: Array[String] = Array(20220613100020241, 20220613100452464)

scala> val beginTime = commits(commits.length - 2)
beginTime: String = 20220613100020241

scala> val tripsIncrementalDF = spark.read.format("hudi").
     |   option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
     |   option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
     |   load(basePath)
tripsIncrementalDF: org.apache.spark.sql.DataFrame = [_hoodie_commit_time: string, _hoodie_commit_seqno: string ... 13 more fields]

scala> tripsIncrementalDF.createOrReplaceTempView("hudi_trips_incremental")

scala> spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_trips_incremental where fare > 20.0").show()
+-------------------+------------------+-------------------+-------------------+-------------+
|_hoodie_commit_time|              fare|          begin_lon|          begin_lat|           ts|
+-------------------+------------------+-------------------+-------------------+-------------+
|  20220613100452464|  98.3428192817987| 0.3349917833248327| 0.4777395067707303|1654565700292|
|  20220613100452464|49.527694252432056| 0.5142184937933181| 0.7340133901254792|1654674819621|
|  20220613100452464|  90.9053809533154|0.19949323322922063|0.18294079059016366|1654643182621|
|  20220613100452464| 90.25710109008239| 0.4006983139989222|0.08528650347654165|1654510136846|
|  20220613100452464| 63.72504913279929|  0.888493603696927| 0.6570857443423376|1654776945091|
|  20220613100452464| 91.99515909032544| 0.2783086084578943| 0.2110206104048945|1654852288124|
+-------------------+------------------+-------------------+-------------------+-------------+


scala>

Point-in-time query

To query as of a specific point in time, point endTime at that commit time and set beginTime to "000" (meaning the earliest commit time).

scala> val beginTime = "000"

scala> val endTime = commits(commits.length - 2)

scala> val tripsPointInTimeDF = spark.read.format("hudi").
     |   option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
     |   option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
     |   option(END_INSTANTTIME_OPT_KEY, endTime).
     |   load(basePath)
     
scala> tripsPointInTimeDF.createOrReplaceTempView("hudi_trips_point_in_time")

scala> spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_point_in_time where fare > 20.0").show()

Delete data

The delete operation is only supported with the Append save mode.

scala> spark.sql("select uuid, partitionPath from hudi_trips_snapshot").count()

scala> val ds = spark.sql("select uuid, partitionPath from hudi_trips_snapshot").limit(2)

scala> val deletes = dataGen.generateDeletes(ds.collectAsList())

scala> val df = spark.read.json(spark.sparkContext.parallelize(deletes, 2));

scala> df.write.format("hudi").
     |   options(getQuickstartWriteConfigs).
     |   option(OPERATION_OPT_KEY,"delete").
     |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |   option(TABLE_NAME, tableName).
     |   mode(Append).
     |   save(basePath)
     
scala> val roAfterDeleteViewDF = spark.read.format("hudi").load(basePath + "/*/*/*/*")

scala> roAfterDeleteViewDF.registerTempTable("hudi_trips_snapshot")

scala> spark.sql("select uuid, partitionPath from hudi_trips_snapshot").count()

Overwrite data

(1) For some batch ETL jobs, overwriting the data in a partition, i.e. recomputing the whole target partition in one pass, can be more efficient than an upsert, because the overwrite operation can skip the indexing and precombine steps that an upsert always needs.

scala> spark.read.format("hudi").load(basePath + "/*/*/*/*").select("uuid","partitionpath").sort("partitionpath","uuid").show(100, false)

scala> val inserts = convertToStringList(dataGen.generateInserts(10))
scala> val df = spark.
     |   read.json(spark.sparkContext.parallelize(inserts, 2)).
     |   filter("partitionpath = 'americas/united_states/san_francisco'")

scala> df.write.format("hudi").
     |   options(getQuickstartWriteConfigs).
     |   option(OPERATION_OPT_KEY,"insert_overwrite").
     |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |   option(TABLE_NAME, tableName).
     |   mode(Append).
     |   save(basePath)

scala> spark.
     |   read.format("hudi").
     |   load(basePath + "/*/*/*/*").
     |   select("uuid","partitionpath").
     |   sort("partitionpath","uuid").
     |   show(100, false)

Using Hudi with the Flink SQL Client

Note that Flink jobs should be submitted from the master node; otherwise you may hit errors.
For configuring the Flink SQL environment, see:
https://nightlies.apache.org/flink/flink-docs-release-1.12/zh/dev/table/sqlClient.html#environment-files
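A minimal sketch of such an environment file, written from the Flink 1.12 format linked above (Flink 1.13 deprecates this YAML environment file in favor of SQL init scripts, so treat it as illustrative; the two keys mirror the SET statements used below):

# Write a minimal sql-client environment file (keys per the Flink 1.12 sqlClient docs)
cat > /opt/sql-client-defaults.yaml <<'EOF'
execution:
  type: batch            # same effect as "SET execution.type = batch"
  result-mode: tableau   # same effect as "set execution.result-mode=tableau"
EOF
# It can then be passed to the client with the -d/--defaults option, e.g.:
#   sql-client.sh embedded -d /opt/sql-client-defaults.yaml -j <hudi-flink-bundle jar>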

Launch command

(base) [root@yxkj153 opt]# /opt/flink-1.13.6/bin/sql-client.sh embedded -j  /opt/hudi-0.10.1/packaging/hudi-flink-bundle/target/hudi-flink-bundle_2.12-0.10.1.jar

Insert data

(1) Set the result mode to tableau so results print directly, and set the execution mode to batch:

Flink SQL> set execution.result-mode=tableau;
Flink SQL> SET execution.type = batch;

(2) Create a Merge on Read table; if table.type is not specified, it defaults to a Copy on Write table.
Connectors available in this setup:

Available factory identifiers are:

blackhole
datagen
filesystem
hudi
print

CREATE TABLE t1 (
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
) PARTITIONED BY (`partition`)
WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://mycluster/flink-hudi/t1',
  'table.type' = 'MERGE_ON_READ'
);

(3) Insert data:

INSERT INTO t1 VALUES
  ('id1','Danny',23,TIMESTAMP '1970-01-01 00:00:01','par1'),
  ('id2','Stephen',33,TIMESTAMP '1970-01-01 00:00:02','par1'),
  ('id3','Julian',53,TIMESTAMP '1970-01-01 00:00:03','par2'),
  ('id4','Fabian',31,TIMESTAMP '1970-01-01 00:00:04','par2'),
  ('id5','Sophia',18,TIMESTAMP '1970-01-01 00:00:05','par3'),
  ('id6','Emma',20,TIMESTAMP '1970-01-01 00:00:06','par3'),
  ('id7','Bob',44,TIMESTAMP '1970-01-01 00:00:07','par4'),
  ('id8','Han',56,TIMESTAMP '1970-01-01 00:00:08','par4');

(4) The client reports success. On the HDFS web UI (port 9870), data files have appeared under the configured path, and the Flink web UI shows the running Flink job. The query below was then run, and its execution shows up in the Flink UI:

Flink SQL> select * from t1;

Official docs on Flink's Hive integration:
https://nightlies.apache.org/flink/flink-docs-release-1.12/zh/dev/table/connectors/hive/

Official docs on the Flink SQL client:
https://nightlies.apache.org/flink/flink-docs-release-1.12/zh/dev/table/sqlClient.html
