Spark fails when reading data from Hudi: "Unable to infer schema for Parquet. It must be specified manually."

This post covers an AnalysisException thrown when reading Hudi data with Spark because the Parquet schema could not be inferred automatically. The fix is to spell out the Hudi storage path in the code, in particular appending the '/*/*' wildcard so that data sitting in the multi-level (partitioned) directories is picked up. After the fix the data is displayed correctly.

When reading back the data that had been written into Hudi with Spark, the job failed with this error:

10131 [dispatcher-event-loop-4] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7722 bytes)
10140 [Executor task launch worker for task 0] INFO  org.apache.spark.executor.Executor  - Running task 0.0 in stage 0.0 (TID 0)
10247 [Executor task launch worker for task 0] INFO  org.apache.spark.executor.Executor  - Finished task 0.0 in stage 0.0 (TID 0). 708 bytes result sent to driver
10257 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 0.0 (TID 0) in 143 ms on localhost (executor driver) (1/1)
10260 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSchedulerImpl  - Removed TaskSet 0.0, whose tasks have all completed, from pool 
10266 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 0 (resolveRelation at DefaultSource.scala:78) finished in 0.487 s
10271 [main] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 0 finished: resolveRelation at DefaultSource.scala:78, took 0.525224 s
Exception in thread "main" org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:185)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:185)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:184)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:78)
	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:47)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
	at sparkAndHudiToHive.SparkOnHudiToHiveExample$.SelectHudi(SparkOnHudiToHiveExample.scala:152)
	at sparkAndHudiToHive.SparkOnHudiToHiveExample$.main(SparkOnHudiToHiveExample.scala:36)
	at sparkAndHudiToHive.SparkOnHudiToHiveExample.main(SparkOnHudiToHiveExample.scala)
10279 [Thread-1] INFO  org.apache.spark.SparkContext  - Invoking stop() from shutdown hook
10286 [Thread-1] INFO  org.spark_project.jetty.server.AbstractConnector  - Stopped Spark@62417a16{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
10288 [Thread-1] INFO  org.apache.spark.ui.SparkUI  - Stopped Spark web UI at http://windows:4040
10298 [dispatcher-event-loop-5] INFO  org.apache.spark.MapOutputTrackerMasterEndpoint  - MapOutputTrackerMasterEndpoint stopped!
10317 [Thread-1] INFO  org.apache.spark.storage.memory.MemoryStore  - MemoryStore cleared
10318 [Thread-1] INFO  org.apache.spark.storage.BlockManager  - BlockManager stopped
10324 [Thread-1] INFO  org.apache.spark.storage.BlockManagerMaster  - BlockManagerMaster stopped
10326 [dispatcher-event-loop-0] INFO  org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint  - OutputCommitCoordinator stopped!
10335 [Thread-1] INFO  org.apache.spark.SparkContext  - Successfully stopped SparkContext
10336 [Thread-1] INFO  org.apache.spark.util.ShutdownHookManager  - Shutdown hook called
10336 [Thread-1] INFO  org.apache.spark.util.ShutdownHookManager  - Deleting directory C:\Users\10437\AppData\Local\Temp\spark-834fcdc9-2e63-4918-bee5-dcc9e6793015

Process finished with exit code 1
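
The stack trace shows that Hudi's DefaultSource hands the given path to Spark's Parquet data source, and schema inference fails because no Parquet files are found at that path. A quick way to confirm this is to list what sits directly under the base path. The sketch below is only a diagnostic suggestion, not code from this post; it uses the plain Hadoop FileSystem API (already on the classpath with Spark), and only the path is taken from the post:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ListHudiBasePath {
  def main(args: Array[String]): Unit = {
    // Use the cluster's default FileSystem (fs.defaultFS); adjust the Configuration as needed.
    val fs = FileSystem.get(new Configuration())
    // List what sits directly under the Hudi base path.
    fs.listStatus(new Path("/hudi/insertHDFS/")).foreach { status =>
      val kind = if (status.isDirectory) "dir " else "file"
      println(s"$kind ${status.getPath}")
    }
  }
}

At that level there is typically only the .hoodie metadata directory plus one directory per partition value, with the actual Parquet files one level further down, which would explain why Spark cannot infer a schema from the base path alone.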

It turned out that my code did not specify the Hudi storage path precisely enough. Remember to add the wildcards:

The load path has to end with "/*/*" so it reaches the Parquet files under the partition directories.

package sparkAndHudiToHive

import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.{HoodieHBaseIndexConfig, HoodieIndexConfig, HoodieWriteConfig}

import org.apache.hudi.index.HoodieIndex
import org.apache.spark.sql.{DataFrame, Dataset, Row, SaveMode, SparkSession}
import org.apache.spark.{SparkConf, SparkContext}


/**
 * @ClassName SparkOnHudiToHiveExample
 * @Description:
 * @Author 庄
 * @Date 2021/1/18
 * @Version V1.0
 **/
object SparkOnHudiToHiveExample {


def main(args: Array[String]): Unit = {

    val sc: SparkConf = new SparkConf().setMaster("local[*]")
      .setAppName(this.getClass.getName)
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sparkSession: SparkSession = SparkSession.builder().config(sc).getOrCreate()

    // Insert the initial data into Hudi
    //InsertHudi(sparkSession)

    // Read plain local data and write it into Hive
    //InsertHive(sparkSession)

    // Query the data that was written to Hudi
    SelectHudi(sparkSession)

  }

  def SelectHudi(sparkSession: SparkSession): Unit = {

    val df: DataFrame = sparkSession.read.format("org.apache.hudi")
      // This was the original call, which triggers the exception:
      //.load("/hudi/insertHDFS/")
      // Adding the wildcards for the partition directories fixes it:
      .load("/hudi/insertHDFS/*/*")

    df.show()
  }
}
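
For context, the InsertHudi method is not shown in this post. Below is a minimal sketch (an assumption for illustration, not the post's actual code) of what a Hudi write with a single partition column could look like, using the pre-0.9 option constants that the imports above suggest; the table name, column names, and sample rows are made up:

import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.spark.sql.{SaveMode, SparkSession}

object InsertHudiSketch {
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder()
      .master("local[*]")
      .appName("InsertHudiSketch")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    import sparkSession.implicits._

    // Hypothetical rows: uid is the record key, dt is the partition column.
    val df = Seq(
      ("1", "zhangsan", "2021-01-18"),
      ("2", "lisi",     "2021-01-19")
    ).toDF("uid", "name", "dt")

    df.write.format("org.apache.hudi")
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uid")
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "dt")
      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "dt")
      .option(HoodieWriteConfig.TABLE_NAME, "insertHDFS")
      .mode(SaveMode.Overwrite)
      .save("/hudi/insertHDFS/")

    sparkSession.stop()
  }
}

A write like this leaves the layout /hudi/insertHDFS/<dt>/<parquet files> plus the .hoodie metadata folder, so on this Hudi version reading the bare base path finds no Parquet files, while "/*/*" reaches one partition level down to the data files. Later Hudi releases reportedly allow loading the base path directly without wildcards.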

 
