1. hadoop-3.2.2, spark-3.2.1-bin-hadoop3.2, and apache-hive-3.1.2 are already installed, and the Hive Metastore and Thrift services have been started.
2. Launch spark-shell with the Hudi bundle:
spark-shell \
--packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.12.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
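Once the shell is up, a quick sanity check that the flags took effect (a minimal sketch; both calls should echo back the values passed on the command line):

// Should return org.apache.spark.serializer.KryoSerializer
sc.getConf.get("spark.serializer")
// Should return org.apache.spark.sql.hudi.HoodieSparkSessionExtension
sc.getConf.get("spark.sql.extensions")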
3. In spark-shell, read the source data file from HDFS (space-delimited text) and write it out as a Hudi table:
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
import spark.implicits._
// Schema of one input record: id, name, event-time (ts), and the two partition columns (dt, hh)
case class Peoplest(id: Int, name: String, ts: Long, dt: String, hh: String)
val rdd = sc.textFile("/tmp/huditestdata2/Huditestdata.txt", 2)
// Each line is space-delimited; parse the five fields and convert to a DataFrame
val df = rdd.map { x => val par = x.split(" "); Peoplest(par(0).toInt, par(1), par(2).toLong, par(3), par(4)) }.toDF
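// A hypothetical sample input line (space-delimited: id name ts dt hh):
//   1 zhangsan 1660000000 2022-08-15 10
// Optional sanity check on the parsed DataFrame before writing:
df.printSchema()
df.show(3, false)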
val tableName = "hudiimporttb1"
val basePath = "hdfs://XXX:9000/huditestdata3"
df.write.format("hudi").
  option("hoodie.table.name", tableName).
  // Upsert keys: dedup on ts, record key id, two-level partitioning on dt,hh
  option("hoodie.datasource.write.precombine.field", "ts").
  option("hoodie.datasource.write.recordkey.field", "id").
  option("hoodie.datasource.write.partitionpath.field", "dt,hh").
  option("hoodie.upsert.shuffle.parallelism", "4").
  option("hoodie.datasource.write.table.type", "COPY_ON_WRITE").
  option("hoodie.datasource.write.operation", "upsert").
  // Cap base parquet files at 10 MB
  option("hoodie.parquet.max.file.size", "10485760").
  // Write partitions as dt=.../hh=... so Hive can read them
  option("hoodie.datasource.write.hive_style_partitioning", "true").
  // ComplexKeyGenerator is required for a multi-field partition path
  option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.keygen.ComplexKeyGenerator").
  // Sync the table metadata to the Hive metastore
  option("hoodie.datasource.hive_sync.enable", "true").
  // Disable upsert small-file handling and let inline clustering do the file sizing:
  // every 2 commits, files under 600 MB are rewritten into ~1 GB files sorted by id
  option("hoodie.parquet.small.file.limit", "0").
  option("hoodie.clustering.inline", "true").
  option("hoodie.clustering.inline.max.commits", "2").
  option("hoodie.clustering.plan.strategy.max.num.groups", "10000").
  option("hoodie.clustering.plan.strategy.target.file.max.bytes", "1073741824").
  option("hoodie.clustering.plan.strategy.small.file.limit", "629145600").
  option("hoodie.clustering.plan.strategy.sort.columns", "id").
  mode(Overwrite).
  save(basePath)
4. Launch spark-sql; the table (hudiimporttb1) is now visible in the default database:
spark-sql --packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.12.0 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'
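Running show tables; at the spark-sql prompt should list hudiimporttb1. The same check can also be scripted from the spark-shell session (a sketch, assuming the sync registered the table under the default database):

spark.sql("show tables in default").show(false)
spark.sql("select id, name, dt, hh from default.hudiimporttb1 limit 10").show(false)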
【Troubleshooting】
The final step, syncing metadata to Hive, failed with the following error:
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException Cannot find class 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
Solution:
Copy hudi-sync-common-0.12.0.jar and hudi-hadoop-mr-0.12.0.jar into Hive's lib directory and restart the Hive services. Hive sync registers the table with the input format org.apache.hudi.hadoop.HoodieParquetInputFormat, which ships in hudi-hadoop-mr, so Hive cannot compile queries against the synced table until that class is on its classpath.