Create word.txt under /opt/soft/spark-local/data, type a few words into it, and start the Spark shell with bin/spark-shell.
Then run the following command:
scala> sc.textFile("data/word.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
It fails with:
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://lxm148:9000/user/root/data/word.txt
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:297)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:239)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:205)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
Clearly, Spark went looking for the file on HDFS: the relative path data/word.txt was resolved against hdfs://lxm148:9000/user/root/. Since the file is local, the obvious fix is to give the absolute path, but that still fails:
scala> sc.textFile("/opt/soft/spark-local/data/word.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://lxm148:9000/opt/soft/spark-local/data/word.txt
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:297)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:239)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:205)
at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:276)
at scala.Option.getOrElse(Option.scala:189)
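Both attempts fail the same way because a path with no explicit scheme is resolved against Hadoop's default filesystem, not the local disk. You can confirm what that default is from the same shell (a quick check assuming the setup above; on older Hadoop versions the property is named fs.default.name instead of fs.defaultFS):

scala> sc.hadoopConfiguration.get("fs.defaultFS")

With this cluster's configuration it should return hdfs://lxm148:9000, which matches the prefix in both error messages.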
The solution, found online:
1. To read a local file, prefix the absolute path with the file:// scheme (hence the three slashes in the URI below):
scala> sc.textFile("file:///opt/soft/spark-local/data/word.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
res2: Array[(String, Int)] = Array((hello,3), (java,1), (world,1), (spark,1))
2. To read a file on HDFS, use the file's full path on HDFS:
scala> sc.textFile("/data/word.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
res3: Array[(String, Int)] = Array((scala,1), (Hello,1), (hello,3), (java,1), (world,1), (spark,1))
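As a side note, the flatMap → map → reduceByKey chain itself has nothing to do with where the file lives; the same logic can be mimicked on a plain Scala collection without Spark (a local sketch with made-up input, where groupBy plus a per-group sum plays the role of reduceByKey):

```scala
// Word count on an ordinary Scala collection, mirroring the RDD pipeline above.
val lines = Seq("hello world", "hello spark", "hello java")

val counts = lines
  .flatMap(_.split(" "))   // split each line into words
  .map((_, 1))             // pair every word with a count of 1
  .groupBy(_._1)           // local stand-in for reduceByKey: group pairs by word
  .map { case (word, pairs) => (word, pairs.map(_._2).sum) }  // sum the 1s per word

println(counts)  // e.g. Map(hello -> 3, world -> 1, spark -> 1, java -> 1) (order not guaranteed)
```

The only Spark-specific part of the original command is sc.textFile, which is why fixing the path URI was enough to make it work.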