Spark job runs in spark-shell but fails under spark-submit

Could someone please help me take a look at this problem?

1. My hand-written WordCount runs locally and runs in spark-shell, but fails when submitted with spark-submit.

2. A Scala version mismatch has been ruled out: both the local machine and the cluster run Scala 2.10.4.

3. Running the official example fails with exactly the same error as my own WordCount.

4. This is how I submit: ./spark-submit --master spark://10.1.0.141:7077 --class org.apache.spark.examples.SparkPi --name Spark-Pi --executor-memory 500M --driver-memory 512M lib/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar 1000 (so a jar conflict should not be possible either, yet the error is the same). The full log follows the diagnostic sketch below.
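The log below ends with "All masters are unresponsive! Giving up." after repeated "Connecting to master spark://10.1.0.141:7077..." attempts. In Spark 1.x standalone mode this pattern is often caused by the --master URL not matching the URL the master actually advertises (hostname vs. raw IP matters), or by the spark-submit client being a different Spark build than the cluster's. A minimal check, assuming the master's web UI is on the default port 8080 and a typical CDH log location (both are assumptions; adjust to your layout):

# Compare the URL the master advertises (shown at the top of its web UI,
# and in its own log as "Starting Spark master at spark://...") against
# the value passed to --master; in Spark 1.x standalone mode they must
# match exactly, including hostname vs. raw IP.
curl -s http://10.1.0.141:8080 | grep -o 'spark://[^<" ]*'
grep 'Starting Spark master at' /var/log/spark/*master*.out   # log path is an assumption

If the master advertises a hostname form such as spark://<master-hostname>:7077 but the job is submitted with the raw IP, registration can silently fail and the driver keeps retrying until it gives up.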

15/01/08 15:19:58 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@n5.china.com:20545]

15/01/08 15:19:58 INFO Utils: Successfully started service 'sparkDriver' on port 20545.

15/01/08 15:19:58 INFO SparkEnv: Registering MapOutputTracker

15/01/08 15:19:58 INFO SparkEnv: Registering BlockManagerMaster

15/01/08 15:19:58 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150108151958-cadb

15/01/08 15:19:58 INFO Utils: Successfully started service 'Connection manager for block manager' on port 63094.

15/01/08 15:19:58 INFO ConnectionManager: Bound socket to port 63094 with id = ConnectionManagerId(n5.china.com,63094)

15/01/08 15:19:58 INFO MemoryStore: MemoryStore started with capacity 265.4 MB

15/01/08 15:19:58 INFO BlockManagerMaster: Trying to register BlockManager

15/01/08 15:19:58 INFO BlockManagerMasterActor: Registering block manager n5.china.com:63094 with 265.4 MB RAM

15/01/08 15:19:58 INFO BlockManagerMaster: Registered BlockManager

15/01/08 15:19:58 INFO HttpFileServer: HTTP File server directory is /tmp/spark-3b796d21-db73-4aad-84bc-23cd0ce3fab7

15/01/08 15:19:58 INFO HttpServer: Starting HTTP Server

15/01/08 15:19:58 INFO Utils: Successfully started service 'HTTP file server' on port 38860.

15/01/08 15:19:58 INFO Utils: Successfully started service 'SparkUI' on port 4040.

15/01/08 15:19:58 INFO SparkUI: Started SparkUI at http://n5.china.com:4040

15/01/08 15:19:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

15/01/08 15:19:59 INFO SparkContext: Added JAR file:/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/jars/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar at http://10.1.0.182:38860/jars/spa ... p2.5.0-cdh5.2.0.jar with timestamp 1420701599601

15/01/08 15:19:59 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...

15/01/08 15:19:59 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0

15/01/08 15:19:59 INFO SparkContext: Starting job: reduce at SparkPi.scala:35

15/01/08 15:19:59 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 1000 output partitions (allowLocal=false)

15/01/08 15:19:59 INFO DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)

15/01/08 15:19:59 INFO DAGScheduler: Parents of final stage: List()

15/01/08 15:19:59 INFO DAGScheduler: Missing parents: List()

15/01/08 15:19:59 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:31), which has no missing parents

15/01/08 15:20:00 INFO MemoryStore: ensureFreeSpace(1728) called with curMem=0, maxMem=278302556

15/01/08 15:20:00 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1728.0 B, free 265.4 MB)

15/01/08 15:20:00 INFO MemoryStore: ensureFreeSpace(1125) called with curMem=1728, maxMem=278302556

15/01/08 15:20:00 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1125.0 B, free 265.4 MB)

15/01/08 15:20:00 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on n5.china.com:63094 (size: 1125.0 B, free: 265.4 MB)

15/01/08 15:20:00 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0

15/01/08 15:20:00 INFO DAGScheduler: Submitting 1000 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:31)

15/01/08 15:20:00 INFO TaskSchedulerImpl: Adding task set 0.0 with 1000 tasks

15/01/08 15:20:15 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

15/01/08 15:20:19 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...

15/01/08 15:20:30 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

15/01/08 15:20:39 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...

15/01/08 15:20:45 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

15/01/08 15:20:59 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.

15/01/08 15:20:59 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool

15/01/08 15:20:59 INFO TaskSchedulerImpl: Cancelling stage 0

15/01/08 15:20:59 INFO DAGScheduler: Failed to run reduce at SparkPi.scala:35

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.

at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)

at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)

at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)

at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)

at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)

at scala.Option.foreach(Option.scala:236)

at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)

at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)

at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)

at akka.actor.ActorCell.invoke(ActorCell.scala:456)

at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)

at akka.dispatch.Mailbox.run(Mailbox.scala:219)

at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)

at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)

at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)

at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)

at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[root@n5 bin]#
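One more datapoint that may help isolate this: spark-shell started without --master typically comes up in local mode, so "it works in spark-shell" does not by itself prove the standalone master is reachable. A quick comparison from the same node, using the same master URL as the submit command:

./spark-shell --master spark://10.1.0.141:7077
# Inside the shell, run a trivial job:
#   sc.parallelize(1 to 100).count()
# If the same "Initial job has not accepted any resources" warning shows
# up here too, the problem is master/worker registration (URL mismatch,
# no live workers, or no worker with enough free memory for the
# requested 500M executors), not the application jar.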
