A small question that may hide a bigger one; hoping an expert can help. Single-threaded vs. multi-threaded behavior in Spark local mode: setMaster("local") runs, but setMaster("local[3]") or setMaster("local[*]") fails

Tags: Spark, threading

A small question that may hide a bigger one; hoping an expert can help answer it.

I am asking for help with the following: with exactly the same code,

setMaster("local") runs fine, but setMaster("local[3]") or setMaster("local[*]") throws an error.

1. Spark local run modes

Spark has three local run modes (a minimal sketch of all three follows the list):

(1) local: run locally with a single thread;
(2) local[k]: run locally with k threads;
(3) local[*]: run locally with as many threads as there are logical cores on the machine.
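
The sketch below is illustrative only (the app name and the surrounding object are mine, not the author's code); it shows that the three modes differ only in the master string passed to SparkConf:

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: only the master string changes between the three local modes.
object LocalModeSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("local mode sketch")
      .setMaster("local")        // one worker thread
      // .setMaster("local[3]")  // three worker threads
      // .setMaster("local[*]")  // one worker thread per logical core
    val sc = new SparkContext(conf)
    println(s"defaultParallelism = ${sc.defaultParallelism}")
    sc.stop()
  }
}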

2. The data being read (shown as a screenshot in the original post)


3. The code involved in the array out-of-bounds error:

// Strip quote characters, then take the first three comma-separated fields:
// vehicle id, longitude string, latitude string.
val pieces = line.replaceAll("\"", "")
val carid  = pieces.split(',')(0)
val lngstr = pieces.split(',')(1)
val latstr = pieces.split(',')(2)

var lng = BigDecimal(0)
var lat = BigDecimal(0)

try {
  // myround is the author's rounding helper, defined elsewhere in count.scala.
  lng = myround(BigDecimal(lngstr), 3)
  lat = myround(BigDecimal(latstr), 3)
} catch {
  case e: NumberFormatException =>
    println(".....help......" + lngstr + "....." + latstr)
}
If the input file contains dirty data, i.e. raw lines that are missing the first three columns, then splitting on the comma and indexing into the result will go out of bounds. But if that is the cause, why does the job run when the master is set to local? (A defensive version of the parsing code is sketched below.)
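
As an aside rather than an answer, a common defensive pattern is to check the field count before indexing, which rules out the out-of-bounds caused by short or malformed lines. The sketch assumes the same line and myround as in the snippet above:

// Defensive variant: skip lines that do not have at least 3 comma-separated fields,
// so a dirty line can never trigger an ArrayIndexOutOfBoundsException here.
val fields = line.replaceAll("\"", "").split(',')
if (fields.length >= 3) {
  val carid  = fields(0)
  val lngstr = fields(1)
  val latstr = fields(2)
  try {
    val lng = myround(BigDecimal(lngstr), 3)
    val lat = myround(BigDecimal(latstr), 3)
    // ... continue processing (carid, lng, lat) ...
  } catch {
    case e: NumberFormatException =>
      println("could not parse coordinates: " + lngstr + " / " + latstr)
  }
} else {
  println("skipping malformed line: " + line)
}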

4. The problem with local, local[k], and local[*]

The master is initialized in the code as follows:

val sparkConf = new SparkConf().setMaster("local").setAppName("count test")

The problem is as follows:

No other part of the code was changed; only the argument to setMaster() was varied, with three different outcomes:

(1) with setMaster("local"), the code runs correctly;

(2) with setMaster("local[3]"), it throws an error;

(3) with setMaster("local[*]"), it throws the same error as in (2).

The error output for setMaster("local[3]"), identical to that for setMaster("local[*]"), is:

17/07/31 13:39:01 INFO HadoopRDD: Input split: file:/E:/data/gps201608.csv:0+7683460
17/07/31 13:39:02 INFO BlockManagerInfo: Removed broadcast_1_piece0 on localhost:50541 in memory (size: 1848.0 B, free: 133.6 MB)
17/07/31 13:39:05 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 2)
java.lang.ArrayIndexOutOfBoundsException
	at java.lang.System.arraycopy(Native Method)
	at scala.collection.mutable.ResizableArray$class.ensureSize(ResizableArray.scala:100)
	at scala.collection.mutable.ArrayBuffer.ensureSize(ArrayBuffer.scala:47)
	at scala.collection.mutable.ArrayBuffer.$plus$eq(ArrayBuffer.scala:83)
	at count$.count$$mystatistics$1(count.scala:76)
	at count$$anonfun$2.apply(count.scala:87)
	at count$$anonfun$2.apply(count.scala:87)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:277)
	at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
17/07/31 13:39:05 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2, localhost): java.lang.ArrayIndexOutOfBoundsException
	at java.lang.System.arraycopy(Native Method)
	at scala.collection.mutable.ResizableArray$class.ensureSize(ResizableArray.scala:100)
	at scala.collection.mutable.ArrayBuffer.ensureSize(ArrayBuffer.scala:47)
	at scala.collection.mutable.ArrayBuffer.$plus$eq(ArrayBuffer.scala:83)
	at count$.count$$mystatistics$1(count.scala:76)
	at count$$anonfun$2.apply(count.scala:87)
	at count$$anonfun$2.apply(count.scala:87)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:277)
	at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

17/07/31 13:39:05 ERROR TaskSetManager: Task 0 in stage 1.0 failed 1 times; aborting job
17/07/31 13:39:05 INFO TaskSchedulerImpl: Cancelling stage 1
17/07/31 13:39:05 INFO TaskSchedulerImpl: Stage 1 was cancelled
17/07/31 13:39:05 INFO Executor: Executor is trying to kill task 1.0 in stage 1.0 (TID 3)
17/07/31 13:39:05 INFO DAGScheduler: ResultStage 1 (saveAsTextFile at count.scala:90) failed in 3.252 s
17/07/31 13:39:05 INFO Executor: Executor killed task 1.0 in stage 1.0 (TID 3)
17/07/31 13:39:05 INFO DAGScheduler: Job 1 failed: saveAsTextFile at count.scala:90, took 3.278665 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 2, localhost): java.lang.ArrayIndexOutOfBoundsException
	at java.lang.System.arraycopy(Native Method)
	at scala.collection.mutable.ResizableArray$class.ensureSize(ResizableArray.scala:100)
	at scala.collection.mutable.ArrayBuffer.ensureSize(ArrayBuffer.scala:47)
	at scala.collection.mutable.ArrayBuffer.$plus$eq(ArrayBuffer.scala:83)
	at count$.count$$mystatistics$1(count.scala:76)
	at count$$anonfun$2.apply(count.scala:87)
	at count$$anonfun$2.apply(count.scala:87)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
	at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:277)
	at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
	at org.apache.spark.scheduler.Task.run(Task.scala:70)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
17/07/31 13:39:05 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3, localhost): TaskKilled (killed intentionally)
17/07/31 13:39:05 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/07/31 13:39:05 INFO SparkContext: Invoking stop() from shutdown hook

5. A strange follow-up problem

After some time, without changing anything, I found that setting the master to local[3] would run and produce two result files, but both of them were empty.
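
One point that may be worth noting here, separate from the question of why the files were empty: saveAsTextFile writes one part-NNNNN file per partition of the RDD being saved, so the number of output files tracks the partitioning rather than the master setting directly. A minimal sketch with made-up paths and data:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: saveAsTextFile emits one part file per partition of the saved RDD.
object OutputPartsSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[3]").setAppName("output parts sketch"))

    val data = sc.parallelize(1 to 10, numSlices = 2)  // 2 partitions
    data.saveAsTextFile("E:/data/out_two_parts")       // writes part-00000 and part-00001

    // Coalescing to a single partition yields a single part file instead.
    data.coalesce(1).saveAsTextFile("E:/data/out_one_part")

    sc.stop()
  }
}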








