Spark wordCount error: Exception in thread "main" java.lang.UnsatisfiedLinkError

Exception in thread "main" java.lang.UnsatisfiedLinkError: 'org.apache.hadoop.io.nativeio.NativeIO$POSIX$Stat org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(java.lang.String)'
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:460)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:821)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:735)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:703)
	at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:52)
	at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2091)
	at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2071)
	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:271)
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:230)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:316)
	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:273)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:273)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
	at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:273)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
	at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4(Partitioner.scala:78)
	at org.apache.spark.Partitioner$.$anonfun$defaultPartitioner$4$adapted(Partitioner.scala:78)
	at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at scala.collection.TraversableLike.map(TraversableLike.scala:273)
	at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
	at scala.collection.immutable.List.map(List.scala:298)
	at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:78)
	at org.apache.spark.rdd.RDD.$anonfun$groupBy$1(RDD.scala:714)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
	at org.apache.spark.rdd.RDD.groupBy(RDD.scala:714)
	at wordcount.Spark01_WordCount$.main(Spark01_WordCount.scala:20)
	at wordcount.Spark01_WordCount.main(Spark01_WordCount.scala)

The trace shows Hadoop failing while it lists the input directory and reads file permissions through its native layer (NativeIO$POSIX.stat); on Windows this usually points to a missing or version-mismatched hadoop.dll. Fix: make the input path point to a concrete file, or simply use a wildcard (*), which avoids the directory-listing code path that failed above:

val lines = sc.textFile("datas/*")
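For context, here is a minimal, runnable sketch of the WordCount program with the workaround applied. The datas directory, the local[*] master, and the groupBy-based counting (chosen to match the RDD.groupBy call visible in the stack trace) are assumptions; adapt the path and master to your setup.

import org.apache.spark.{SparkConf, SparkContext}

object Spark01_WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode for a quick test; adjust the master for your environment.
    val conf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
    val sc = new SparkContext(conf)

    // Workaround: a wildcard (or a concrete file such as "datas/word.txt",
    // an assumed name) instead of the bare directory "datas", whose listing
    // triggered the native permission lookup that threw UnsatisfiedLinkError.
    val lines = sc.textFile("datas/*")

    val wordCounts = lines
      .flatMap(_.split(" "))   // split each line into words
      .groupBy(word => word)   // group identical words together
      .map { case (word, occurrences) => (word, occurrences.size) } // count per word

    wordCounts.collect().foreach(println)

    sc.stop()
  }
}

The same glob syntax also accepts an extension filter, e.g. sc.textFile("datas/*.txt"), if the directory contains files you do not want to read.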
