Errors encountered when running a Spark project

error
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.conforms()Lscala/Predef$$less$colon$less;
	at org.apache.spark.streaming.kafka.KafkaUtils$.createStream(KafkaUtils.scala:156)
	at org.apache.spark.streaming.kafka.KafkaUtils.createStream(KafkaUtils.scala)
	at com.geo.testSpark.CollectorStreamV2.main(CollectorStreamV2.java:51)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/09/23 11:47:40 INFO spark.SparkContext: Invoking stop() from shutdown hook

Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 73, datanode3): ExecutorLostFailure (executor 3 lost)

15/09/23 15:00:39 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - java.lang.NoSuchMethodError: org.apache.spark.util.Utils$.newDaemonFixedThreadPool(ILjava/lang/String;)Ljava/util/concurrent/ThreadPoolExecutor;
	at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:114)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:125)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:109)
	at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:308)
	at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:300)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
15/09/23 15:00:39 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 70, datanode1): java.lang.NoSuchMethodError: org.apache.spark.util.Utils$.newDaemonFixedThreadPool(ILjava/lang/String;)Ljava/util/concurrent/ThreadPoolExecutor;
	at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:114)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:125)
	at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:109)
	at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:308)
	at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:300)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Fix:
The Spark, Kafka, and Scala versions used by the code must be consistent with one another (if Scala is involved: Spark 1.4.1 pairs with Scala 2.10.4), and ideally should also match the versions deployed on the production servers.
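As a sketch of how the versions can be pinned to one consistent set (assuming a Maven build, which the `CollectorStreamV2.java` entry point in the trace above suggests; the exact artifacts in your project may differ):

```xml
<!-- Hypothetical pom.xml fragment: keep Spark, its Kafka connector,
     and the Scala runtime on one matching version set.
     The _2.10 suffix must agree with the Scala version. -->
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>1.4.1</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka_2.10</artifactId>
    <version>1.4.1</version>
  </dependency>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.4</version>
  </dependency>
</dependencies>
```

A mismatch anywhere in this set (for example, a `_2.11` artifact next to Scala 2.10, or a connector built against a different Spark release) is a typical cause of `NoSuchMethodError` at runtime even though compilation succeeds.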

error
org.apache.spark.SparkException: Couldn't find leader offsets for Set()
Fix:
Create a new topic and use a new group id.
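For illustration, creating a fresh topic on a Kafka 0.8-era cluster looks roughly like this (the ZooKeeper address, topic name, and counts below are placeholders, not values from this post):

```shell
# Hypothetical: create a new topic on the broker cluster.
bin/kafka-topics.sh --create \
  --zookeeper zk-host:2181 \
  --replication-factor 1 \
  --partitions 1 \
  --topic my-new-topic

# The consumer side should then also switch to a fresh group id,
# e.g. by passing a new groupId string to KafkaUtils.createStream.
```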
error
15/09/23 17:15:00 ERROR kafka.DirectKafkaInputDStream: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([rtb-collector-time-v04,0]))
15/09/23 17:15:00 ERROR scheduler.JobScheduler: Error generating jobs for time 1442999700000 ms
org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([rtb-collector-time-v04,0]))
	at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.latestLeaderOffsets(DirectKafkaInputDStream.scala:94)
	at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:116)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
	at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
Fix:
Add entries for the IPs of all Kafka nodes to /etc/hosts on the machines of the Spark cluster, so that executors can resolve the broker hostnames that Kafka advertises.
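A sketch of the resulting /etc/hosts additions on each Spark node (the IPs and hostnames here are made-up examples; use your actual broker addresses and the hostnames the brokers advertise):

```
# Hypothetical /etc/hosts entries mapping Kafka broker hostnames to IPs
192.168.1.11  kafka-broker1
192.168.1.12  kafka-broker2
192.168.1.13  kafka-broker3
```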
