Read a CSV file from HDFS, convert it into a Hudi table, and sync the metadata to the Hive Metastore

1. hadoop-3.2.2, spark-3.2.1-bin-hadoop3.2, and apache-hive-3.1.2 are already installed, and the Hive Metastore and Thrift services are running.
2. Launch spark-shell

spark-shell \
  --packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.12.0 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
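
If the same job is packaged as a standalone application rather than typed into the interactive shell, the flags above map to SparkSession builder configs. The following is a minimal, hypothetical sketch (the application name is made up, and it assumes the hudi-spark3.2-bundle jar is supplied separately, e.g. via spark-submit --packages):

import org.apache.spark.sql.SparkSession

// Hypothetical standalone equivalent of the spark-shell flags above.
// enableHiveSupport() assumes hive-site.xml is on the classpath so the synced table can be found later.
val spark = SparkSession.builder().
        appName("hudi-csv-import").
        config("spark.serializer", "org.apache.spark.serializer.KryoSerializer").
        config("spark.sql.extensions", "org.apache.spark.sql.hudi.HoodieSparkSessionExtension").
        config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.hudi.catalog.HoodieCatalog").
        enableHiveSupport().
        getOrCreate()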

3. Read the CSV file from HDFS and convert it into a Hudi table

import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
import spark.implicits._

// Each line of the source file contains five space-separated fields: id name ts dt hh
case class Peoplest(id: Int, name: String, ts: Long, dt: String, hh: String)
val rdd = sc.textFile("/tmp/huditestdata2/Huditestdata.txt", 2)
val df = rdd.map { x => val par = x.split(" "); Peoplest(par(0).toInt, par(1), par(2).toLong, par(3), par(4)) }.toDF
val tableName = "hudiimporttb1"

val basePath = "hdfs://XXX:9000/huditestdata3"

df.write.format("hudi").
        option("hoodie.table.name", tableName).
        // record key, precombine field, and the two partition columns (dt, hh)
        option("hoodie.datasource.write.precombine.field", "ts").
        option("hoodie.datasource.write.recordkey.field", "id").
        option("hoodie.datasource.write.partitionpath.field", "dt,hh").
        option("hoodie.upsert.shuffle.parallelism", "4").
        // copy-on-write table written via upsert
        option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").
        option("hoodie.datasource.write.operation", "upsert").
        option("hoodie.parquet.max.file.size", "10485760").
        // hive-style partition paths (dt=.../hh=...); ComplexKeyGenerator is required for multiple partition columns
        option("hoodie.datasource.write.hive_style_partitioning", "true").
        option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.keygen.ComplexKeyGenerator").
        // sync the table metadata to the Hive Metastore after the write
        option("hoodie.datasource.hive_sync.enable", "true").
        // disable small-file handling at write time and let inline clustering (every 2 commits) rewrite files sorted by id
        option("hoodie.parquet.small.file.limit", "0").
        option("hoodie.clustering.inline", "true").
        option("hoodie.clustering.inline.max.commits", "2").
        option("hoodie.clustering.plan.strategy.max.num.groups", "10000").
        option("hoodie.clustering.plan.strategy.target.file.max.bytes", "1073741824").
        option("hoodie.clustering.plan.strategy.small.file.limit", "629145600").
        option("hoodie.clustering.plan.strategy.sort.columns", "id").
        mode(Overwrite).
        save(basePath)

4. Log in with spark-sql; the table (hudiimporttb1) is now visible in the default database

spark-sql --packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.12.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
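
The same check can also be done from the spark-shell session of step 2, since Hive sync registers the table in the Metastore. A minimal sketch, assuming hive-site.xml is on Spark's classpath and the sync in step 3 succeeded:

// Hypothetical check from spark-shell: the Hive-synced table is visible through the catalog
spark.sql("show tables in default").show(false)
spark.sql("select id, name, dt, hh from default.hudiimporttb1 order by id").show(false)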

[Troubleshooting:]
The final step of syncing metadata to Hive failed with the following error:

Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException Cannot find class 'org.apache.hudi.hadoop.HoodieParquetInputFormat'

Solution:
Copy hudi-sync-common-0.12.0.jar and hudi-hadoop-mr-0.12.0.jar into Hive's lib directory, then restart the Hive services.
