Spark cluster: Lost task 0.0 in stage 10.0 (TID 17, 10.28.23.202): java.io.FileNotFoundException

When Spark loads a file from the current directory in a cluster environment, the job fails with `Lost task 0.0 in stage 10.0 (TID 17, 10.28.23.202): java.io.FileNotFoundException`. The error clearly says a local file cannot be found, yet the file does exist locally (on the driver machine).
scala> val file = sc.textFile("test.txt")
15/12/09 13:22:36 INFO MemoryStore: ensureFreeSpace(191856) called with curMem=717340, maxMem=277877882
15/12/09 13:22:36 INFO MemoryStore: Block broadcast_14 stored as values in memory (estimated size 187.4 KB, free 264.1 MB)
15/12/09 13:22:36 INFO MemoryStore: ensureFreeSpace(19750) called with curMem=909196, maxMem=277877882
15/12/09 13:22:36 INFO MemoryStore: Block broadcast_14_piece0 stored as bytes in memory (estimated size 19.3 KB, free 264.1 MB)
15/12/09 13:22:36 INFO BlockManagerInfo: Added broadcast_14_piece0 in memory on 10.28.23.201:60179 (size: 19.3 KB, free: 264.9 MB)
15/12/09 13:22:36 INFO SparkContext: Created broadcast 14 from textFile at <console>:21
file: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[10] at textFile at <console>:21


scala> file foreach println
15/12/09 13:22:38 INFO FileInputFormat: Total input paths to process : 1
15/12/09 13:22:38 INFO SparkContext: Starting job: foreach at <console>:24
15/12/09 13:22:38 INFO DAGScheduler: Got job 10 (foreach at <console>:24) with 2 output partitions (allowLocal=false)
15/12/09 13:22:38 INFO DAGScheduler: Final stage: ResultStage 10(foreach at <console>:24)
15/12/09 13:22:38 INFO DAGScheduler: Parents of final stage: List()
15/12/09 13:22:38 INFO DAGScheduler: Missing parents: List()
15/12/09 13:22:38 INFO DAGScheduler: Submitting ResultStage 10 (MapPartitionsRDD[10] at textFile at <console>:21), which has no missing parents
15/12/09 13:22:38 INFO MemoryStore: ensureFreeSpace(3080) called with curMem=928946, maxMem=277877882
15/12/09 13:22:38 INFO MemoryStore: Block broadcast_15 stored as values in memory (estimated size 3.0 KB, free 264.1 MB)
15/12/09 13:22:38 INFO MemoryStore: ensureFreeSpace(1795) called with curMem=932026, maxMem=277877882
15/12/09 13:22:38 INFO MemoryStore: Block broadcast_15_piece0 stored as bytes in memory (estimated size 1795.0 B, free 264.1 MB)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_15_piece0 in memory on 10.28.23.201:60179 (size: 1795.0 B, free: 264.9 MB)
15/12/09 13:22:38 INFO SparkContext: Created broadcast 15 from broadcast at DAGScheduler.scala:874
15/12/09 13:22:38 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 10 (MapPartitionsRDD[10] at textFile at <console>:21)
15/12/09 13:22:38 INFO TaskSchedulerImpl: Adding task set 10.0 with 2 tasks
15/12/09 13:22:38 INFO TaskSetManager: Starting task 0.0 in stage 10.0 (TID 17, 10.28.23.202, PROCESS_LOCAL, 1397 bytes)
15/12/09 13:22:38 INFO TaskSetManager: Starting task 1.0 in stage 10.0 (TID 18, 10.28.23.203, PROCESS_LOCAL, 1397 bytes)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_15_piece0 in memory on 10.28.23.203:57813 (size: 1795.0 B, free: 265.0 MB)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_15_piece0 in memory on 10.28.23.202:50706 (size: 1795.0 B, free: 265.0 MB)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_14_piece0 in memory on 10.28.23.202:50706 (size: 19.3 KB, free: 264.9 MB)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_14_piece0 in memory on 10.28.23.203:57813 (size: 19.3 KB, free: 265.0 MB)
15/12/09 13:22:38 INFO TaskSetManager: Finished task 1.0 in stage 10.0 (TID 18) in 156 ms on 10.28.23.203 (1/2)
15/12/09 13:22:38 WARN TaskSetManager: Lost task 0.0 in stage 10.0 (TID 17, 10.28.23.202): java.io.FileNotFoundException: File file:/usr/spark/test.txt does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:108)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:239)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


15/12/09 13:22:38 INFO TaskSetManager: Starting task 0.1 in stage 10.0 (TID 19, 10.28.23.201, PROCESS_LOCAL, 1397 bytes)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_15_piece0 in memory on 10.28.23.201:51294 (size: 1795.0 B, free: 264.9 MB)
15/12/09 13:22:38 INFO BlockManagerInfo: Added broadcast_14_piece0 in memory on 10.28.23.201:51294 (size: 19.3 KB, free: 264.9 MB)
15/12/09 13:22:39 INFO TaskSetManager: Finished task 0.1 in stage 10.0 (TID 19) in 304 ms on 10.28.23.201 (2/2)
15/12/09 13:22:39 INFO TaskSchedulerImpl: Removed TaskSet 10.0, whose tasks have all completed, from pool 
15/12/09 13:22:39 INFO DAGScheduler: ResultStage 10 (foreach at <console>:24) finished in 0.613 s
15/12/09 13:22:39 INFO DAGScheduler: Job 10 finished: foreach at <console>:24, took 0.620210 s


Note that in the log above the job still finished: the failed task 0.0 was retried as task 0.1 on 10.28.23.201 (the node that actually has the file) and succeeded, so this failure can be intermittent and easy to miss.

Solution
1. Check the file's permissions.
2. If you are running in a cluster environment, you must make sure the file exists at the same path on every node (this was my problem), or put the file on HDFS, which avoids the issue entirely.
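A minimal sketch of the two fixes for the spark-shell session above. The paths match the log (`/usr/spark/test.txt`); the worker hostnames and the HDFS namenode address are placeholders for illustration, so substitute your own. This assumes a running cluster, hence it is meant to be pasted into spark-shell rather than run standalone:

```scala
// Fix 1: copy the file to the same path on every worker node first, e.g.
//   $ for h in 10.28.23.202 10.28.23.203; do scp /usr/spark/test.txt $h:/usr/spark/; done
// then the original call works unchanged, regardless of which node runs the task:
val local = sc.textFile("file:///usr/spark/test.txt")
local.foreach(println)

// Fix 2: put the file on HDFS so every executor reads the same copy
// (the namenode address below is a placeholder; use your cluster's):
//   $ hdfs dfs -put /usr/spark/test.txt /data/test.txt
val fromHdfs = sc.textFile("hdfs://10.28.23.201:9000/data/test.txt")
fromHdfs.foreach(println)
```

One more caveat: in cluster mode `foreach(println)` prints on the executors' stdout, not in your shell. To see the contents at the driver, use `collect().foreach(println)` instead, and only for files small enough to fit in driver memory.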



