Spark throws "Unable to infer schema for Parquet. It must be specified manually." when reading data from Hudi

This post describes an AnalysisException hit while reading Hudi data with Spark: the Parquet schema could not be inferred automatically. The fix is to spell out the Hudi storage path in the code, in particular appending the "/*/*" glob so data under the multi-level directory layout is actually loaded. With the corrected path, the data displays as expected.


Spark threw this error when reading back the data that had been written into Hudi:

10131 [dispatcher-event-loop-4] INFO  org.apache.spark.scheduler.TaskSetManager  - Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7722 bytes)
10140 [Executor task launch worker for task 0] INFO  org.apache.spark.executor.Executor  - Running task 0.0 in stage 0.0 (TID 0)
10247 [Executor task launch worker for task 0] INFO  org.apache.spark.executor.Executor  - Finished task 0.0 in stage 0.0 (TID 0). 708 bytes result sent to driver
10257 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSetManager  - Finished task 0.0 in stage 0.0 (TID 0) in 143 ms on localhost (executor driver) (1/1)
10260 [task-result-getter-0] INFO  org.apache.spark.scheduler.TaskSchedulerImpl  - Removed TaskSet 0.0, whose tasks have all completed, from pool 
10266 [dag-scheduler-event-loop] INFO  org.apache.spark.scheduler.DAGScheduler  - ResultStage 0 (resolveRelation at DefaultSource.scala:78) finished in 0.487 s
10271 [main] INFO  org.apache.spark.scheduler.DAGScheduler  - Job 0 finished: resolveRelation at DefaultSource.scala:78, took 0.525224 s
Exception in thread "main" org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:185)
	at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:185)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:184)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:78)
	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:47)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
	at sparkAndHudiToHive.SparkOnHudiToHiveExample$.SelectHudi(SparkOnHudiToHiveExample.scala:152)
	at sparkAndHudiToHive.SparkOnHudiToHiveExample$.main(SparkOnHudiToHiveExample.scala:36)
	at sparkAndHudiToHive.SparkOnHudiToHiveExample.main(SparkOnHudiToHiveExample.scala)
10279 [Thread-1] INFO  org.apache.spark.SparkContext  - Invoking stop() from shutdown hook
10286 [Thread-1] INFO  org.spark_project.jetty.server.AbstractConnector  - Stopped Spark@62417a16{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
10288 [Thread-1] INFO  org.apache.spark.ui.SparkUI  - Stopped Spark web UI at http://windows:4040
10298 [dispatcher-event-loop-5] INFO  org.apache.spark.MapOutputTrackerMasterEndpoint  - MapOutputTrackerMasterEndpoint stopped!
10317 [Thread-1] INFO  org.apache.spark.storage.memory.MemoryStore  - MemoryStore cleared
10318 [Thread-1] INFO  org.apache.spark.storage.BlockManager  - BlockManager stopped
10324 [Thread-1] INFO  org.apache.spark.storage.BlockManagerMaster  - BlockManagerMaster stopped
10326 [dispatcher-event-loop-0] INFO  org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint  - OutputCommitCoordinator stopped!
10335 [Thread-1] INFO  org.apache.spark.SparkContext  - Successfully stopped SparkContext
10336 [Thread-1] INFO  org.apache.spark.util.ShutdownHookManager  - Shutdown hook called
10336 [Thread-1] INFO  org.apache.spark.util.ShutdownHookManager  - Deleting directory C:\Users\10437\AppData\Local\Temp\spark-834fcdc9-2e63-4918-bee5-dcc9e6793015

Process finished with exit code 1
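What the error means: the Hudi DefaultSource in this stack trace hands the path over to Spark's generic file-based DataSource, and getOrInferFileFormatSchema fails because the schema inference step found no parquet files it could use under the supplied path. For a partitioned Hudi table, the base directory only contains the .hoodie metadata folder and one sub-directory per partition value; the parquet files sit one level deeper. A quick way to see this is to list the base path. This is a minimal diagnostic sketch (not from the original post) that reuses the sparkSession defined in the code further down:

import org.apache.hadoop.fs.{FileSystem, Path}

// List what sits directly under the Hudi base path: for a partitioned table
// this prints the .hoodie metadata directory and one directory per partition
// value, but no parquet files at this level.
val fs = FileSystem.get(sparkSession.sparkContext.hadoopConfiguration)
fs.listStatus(new Path("/hudi/insertHDFS/")).foreach(status => println(status.getPath))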

The root cause: my code did not spell out the Hudi file storage path precisely enough. Remember to append the glob.

The directory path passed to load() has to end with "/*/*".

package sparkAndHudiToHive

import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.{HoodieHBaseIndexConfig, HoodieIndexConfig, HoodieWriteConfig}

import org.apache.hudi.index.HoodieIndex
import org.apache.spark.sql.{DataFrame, Dataset, Row, SaveMode, SparkSession}
import org.apache.spark.{SparkConf, SparkContext}


/**
 * @ClassName SparkOnHudiToHiveExample
 * @Description:
 * @Author 庄
 * @Date 2021/1/18
 * @Version V1.0
 **/
object SparkOnHudiToHiveExample {


  def main(args: Array[String]): Unit = {

    val sc: SparkConf = new SparkConf().setMaster("local[*]")
      .setAppName(this.getClass.getName)
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sparkSession: SparkSession = SparkSession.builder().config(sc).getOrCreate()

    // Initial load of the data into Hudi
    //InsertHudi(sparkSession)

    // Read plain local data and write it into Hive
    //InsertHive(sparkSession)

    // Query the data that was written into Hudi
    SelectHudi(sparkSession)
  }

  def SelectHudi(sparkSession: SparkSession): Unit = {

    val df: DataFrame = sparkSession.read.format("org.apache.hudi")
      // This was the original path, and it failed:
      //.load("/hudi/insertHDFS/")
      // Adding the partition/file glob makes it work:
      .load("/hudi/insertHDFS/*/*")
    df.show()
  }
}
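For context, the write side (the InsertHudi method referenced above but not shown in the post) is roughly shaped like the sketch below. This is a hedged sketch, not the author's original code: the column names "uuid", "ts" and "dt" are made-up illustrations, and the option constants come from the older Hudi 0.x DataSourceWriteOptions API that the imports and stack trace suggest. Because a partition-path field is set, every distinct partition value becomes a sub-directory under /hudi/insertHDFS/, which is why the read side needs one "*" for the partition directory and a second "*" for the parquet files inside it.

import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.spark.sql.{DataFrame, SaveMode}

// Hypothetical write path; df is assumed to carry "uuid", "ts" and "dt" columns.
def insertHudiSketch(df: DataFrame): Unit = {
  df.write.format("org.apache.hudi")
    // assumed record key column
    .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uuid")
    // assumed precombine column
    .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "ts")
    // assumed partition column: each dt value becomes a sub-directory
    // under /hudi/insertHDFS/, i.e. the first "*" in the read-side glob
    .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "dt")
    .option(HoodieWriteConfig.TABLE_NAME, "insertHDFS")
    .mode(SaveMode.Overwrite)
    .save("/hudi/insertHDFS/")
}

On newer Hudi releases the snapshot reader can reportedly resolve partitions from the bare base path without globs, but with the 0.x DefaultSource shown in this stack trace the glob depth has to match the partition depth plus one extra level for the files.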

 
