Spark Source Code Series, Spark Internals: Task Execution

Before diving into the details, let's first walk through the overall flow of task execution to get a feel for the big picture:


When the Executor receives a serialized Task, it first deserializes it to recover the Task object, then runs the task to obtain the execution result directResult, which must be sent back to the Driver. However, the payload sent over the wire should not be too large, so the Executor branches on the size of directResult. If directResult is large, it is stored locally on "Memory+Disk" under the management of the BlockManager, and only the storage location (an IndirectTaskResult) is sent to the Driver. If directResult is small enough, it is sent to the Driver directly.
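The two size thresholds that drive this decision come from configuration. As a minimal sketch (assuming the Spark 1.x-era Executor, whose field names are used below; defaults differ across versions):

// Sketch (Spark 1.x era): the two thresholds used in the code below.
// spark.driver.maxResultSize caps the total result size the driver accepts;
// spark.akka.frameSize (in MB) bounds a single Akka message.
val maxResultSize = Utils.getMaxResultSize(conf)
val akkaFrameSize = AkkaUtils.maxFrameSizeBytes(conf)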

override def run(): Unit = {
  // ...
  val (value, accumUpdates) = try {
    // Run the task; this dispatches to ShuffleMapTask.runTask() or
    // ResultTask.runTask() depending on the task type.
    val res = task.run(
      taskAttemptId = taskId,
      attemptNumber = attemptNumber,
      metricsSystem = env.metricsSystem)
    threwException = false  // threwException (declared above, elided) starts out true
    res
  } finally {
    // After the task finishes, check for leaks of managed (unsafe) memory.
    val freedMemory = taskMemoryManager.cleanUpAllAllocatedMemory()
    if (freedMemory > 0) {
      val errMsg = s"Managed memory leak detected; size = $freedMemory bytes, TID = $taskId"
      if (conf.getBoolean("spark.unsafe.exceptionOnMemoryLeak", false) && !threwException) {
        throw new SparkException(errMsg)
      } else {
        logError(errMsg)
      }
    }
  }
  // ...
  val directResult = new DirectTaskResult(valueBytes, accumUpdates, task.metrics.orNull)
  val serializedDirectResult = ser.serialize(directResult)
  val resultSize = serializedDirectResult.limit

  // directSend = sending directly back to the driver
  val serializedResult: ByteBuffer = {
    if (maxResultSize > 0 && resultSize > maxResultSize) {
      // Case 1: the result exceeds spark.driver.maxResultSize; drop it and
      // send back only an IndirectTaskResult recording its size.
      logWarning(s"Finished $taskName (TID $taskId). Result is larger than maxResultSize " +
        s"(${Utils.bytesToString(resultSize)} > ${Utils.bytesToString(maxResultSize)}), " +
        s"dropping it.")
      ser.serialize(new IndirectTaskResult[Any](TaskResultBlockId(taskId), resultSize))
    } else if (resultSize >= akkaFrameSize - AkkaUtils.reservedSizeBytes) {
      // Case 2: the result does not fit in a single Akka frame; store it in
      // the local BlockManager ("Memory+Disk") and send back only the block id.
      val blockId = TaskResultBlockId(taskId)
      env.blockManager.putBytes(
        blockId, serializedDirectResult, StorageLevel.MEMORY_AND_DISK_SER)
      logInfo(
        s"Finished $taskName (TID $taskId). $resultSize bytes result sent via BlockManager)")
      ser.serialize(new IndirectTaskResult[Any](blockId, resultSize))
    } else {
      // Case 3: the result is small; send it to the driver directly.
      logInfo(s"Finished $taskName (TID $taskId). $resultSize bytes result sent to driver")
      serializedDirectResult
    }
  }

  execBackend.statusUpdate(taskId, TaskState.FINISHED, serializedResult)
  // ...
}

One detail worth noting: before directResult is computed, Task.run() is invoked. Task is an abstract base class with two concrete implementations, ShuffleMapTask and ResultTask, and the two produce different results. A ShuffleMapTask produces a MapStatus, which carries two pieces of information: the BlockManagerId of the BlockManager where the task ran, and the sizes of the compressed files the task produced (i.e., the size of each output FileSegment). Because a ShuffleMapTask must write its FileSegments to disk, it needs output writers; these ShuffleWriters are created and managed by the ShuffleManager, obtained through SparkEnv.
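To make the shape of MapStatus concrete, here is a trimmed sketch of the trait as it appears in the Spark 1.x source (block sizes are actually stored in a compressed form; that detail is omitted):

// Sketch of MapStatus: the result type returned by every ShuffleMapTask.
private[spark] trait MapStatus {
  def location: BlockManagerId             // the BlockManager holding this task's shuffle output
  def getSizeForBlock(reduceId: Int): Long // estimated size of the FileSegment for a given reducer
}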

override def runTask(context: TaskContext): MapStatus = {
  // Deserialize the RDD and its ShuffleDependency using the broadcast variable.
  val deserializeStartTime = System.currentTimeMillis()
  val ser = SparkEnv.get.closureSerializer.newInstance()
  val (rdd, dep) = ser.deserialize[(RDD[_], ShuffleDependency[_, _, _])](
    ByteBuffer.wrap(taskBinary.value), Thread.currentThread.getContextClassLoader)
  _executorDeserializeTime = System.currentTimeMillis() - deserializeStartTime

  metrics = Some(context.taskMetrics)
  var writer: ShuffleWriter[Any, Any] = null
  try {
    val manager = SparkEnv.get.shuffleManager
    writer = manager.getWriter[Any, Any](dep.shuffleHandle, partitionId, context)
    // Write every record of this partition through the shuffle writer ...
    writer.write(rdd.iterator(partition, context).asInstanceOf[Iterator[_ <: Product2[Any, Any]]])
    // ... and on success, stop() returns the MapStatus for this task's output.
    writer.stop(success = true).get
  } catch {
    case e: Exception =>
      try {
        if (writer != null) {
          writer.stop(success = false)
        }
      } catch {
        case e: Exception =>
          log.debug("Could not stop writer", e)
      }
      throw e
  }
}
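The writer obtained from the ShuffleManager follows a small interface; note that it is stop(success = true) that actually produces the MapStatus. A trimmed sketch of the Spark 1.x abstraction (the exact signature varies slightly across versions):

// Sketch of ShuffleWriter: write the partition's records, then stop()
// returns Some(MapStatus) on success.
private[spark] trait ShuffleWriter[K, V] {
  def write(records: Iterator[_ <: Product2[K, V]]): Unit
  def stop(success: Boolean): Option[MapStatus]
}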

A ResultTask, by contrast, produces the result of applying func to its partition.

override def runTask(context: TaskContext): U = {
  // Deserialize the RDD and the func using the broadcast variables.
  val deserializeStartTime = System.currentTimeMillis()
  val ser = SparkEnv.get.closureSerializer.newInstance()
  val (rdd, func) = ser.deserialize[(RDD[T], (TaskContext, Iterator[T]) => U)](
    ByteBuffer.wrap(taskBinary.value), Thread.currentThread.getContextClassLoader)
  _executorDeserializeTime = System.currentTimeMillis() - deserializeStartTime

  metrics = Some(context.taskMetrics)
  // Apply func to this partition's iterator; the return value is the task's result.
  func(context, rdd.iterator(partition, context))
}
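For example, RDD.count() submits a job whose func simply measures each partition's iterator; every ResultTask returns a Long, and the driver sums them. This one-liner is essentially the Spark 1.x implementation:

// Each ResultTask runs Utils.getIteratorSize on its partition; the driver
// aggregates the per-partition counts by summing.
def count(): Long = sc.runJob(this, Utils.getIteratorSize _).sum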

Once the above processing finishes, execBackend.statusUpdate() is called to report the task's new state back to the driver.

override def statusUpdate(taskId: Long, state: TaskState, data: ByteBuffer) {
  // Wrap the state change (and serialized result) in a StatusUpdate message
  // and send it to the driver endpoint, if connected.
  val msg = StatusUpdate(executorId, taskId, state, data)
  driver match {
    case Some(driverRef) => driverRef.send(msg)
    case None => logWarning(s"Drop $msg because has not yet connected to driver")
  }
}
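For reference, StatusUpdate is a plain case class defined in CoarseGrainedClusterMessages; a trimmed sketch of its Spark 1.x shape (SerializableBuffer is a serializable wrapper around the ByteBuffer):

// Sketch: the message carrying a task's new state and its serialized
// result from the executor to the driver.
case class StatusUpdate(executorId: String, taskId: Long, state: TaskState,
    data: SerializableBuffer) extends CoarseGrainedClusterMessage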

CoarseGrainedExecutorBackend.statusUpdate() thus wraps the information into a StatusUpdate message and sends it to the Driver.
When the Driver receives the StatusUpdate and the task has finished, it calls TaskResultGetter.enqueueSuccessfulTask() to process the task's execution result.
def statusUpdate(tid: Long, state: TaskState, serializedData: ByteBuffer) {
  // ...
  var failedExecutor: Option[String] = None
  synchronized {
    if (state == TaskState.FINISHED) {
      taskSet.removeRunningTask(tid)
      // Hand the serialized result off to the TaskResultGetter's thread pool.
      taskResultGetter.enqueueSuccessfulTask(taskSet, tid, serializedData)
    }
    // ...
  }
}
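The elided branch handles terminal failure states symmetrically: in the Spark 1.x source, failed, killed, or lost tasks are routed to TaskResultGetter.enqueueFailedTask() instead. Roughly:

// Sketch of the elided failure branch in statusUpdate().
else if (Set(TaskState.FAILED, TaskState.KILLED, TaskState.LOST).contains(state)) {
  taskSet.removeRunningTask(tid)
  taskResultGetter.enqueueFailedTask(taskSet, tid, state, serializedData)
}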

TaskResultGetter's enqueueSuccessfulTask() then analyzes the result. If it is a DirectTaskResult, the value can be deserialized and used directly; if it is an IndirectTaskResult, the actual result must first be fetched from the remote BlockManager via BlockManager.getRemoteBytes().

def enqueueSuccessfulTask(
    taskSetManager: TaskSetManager, tid: Long, serializedData: ByteBuffer) {
  getTaskResultExecutor.execute(new Runnable {
    override def run(): Unit = Utils.logUncaughtExceptions {
      try {
        val (result, size) = serializer.get().deserialize[TaskResult[_]](serializedData) match {
          case directResult: DirectTaskResult[_] =>
            if (!taskSetManager.canFetchMoreResults(serializedData.limit())) {
              return
            }
            // Deserialize the value eagerly here, so that later handling of the
            // successful task does not have to pay the cost again.
            directResult.value()
            (directResult, serializedData.limit())
          case IndirectTaskResult(blockId, size) =>
            if (!taskSetManager.canFetchMoreResults(size)) {
              sparkEnv.blockManager.master.removeBlock(blockId)
              return
            }
            logDebug("Fetching indirect task result for TID %s".format(tid))
            scheduler.handleTaskGettingResult(taskSetManager, tid)
            // Fetch the real result from the remote BlockManager.
            val serializedTaskResult = sparkEnv.blockManager.getRemoteBytes(blockId)
            if (!serializedTaskResult.isDefined) {
              // The block was lost, e.g. the executor died before the driver
              // could fetch the result.
              scheduler.handleFailedTask(
                taskSetManager, tid, TaskState.FINISHED, TaskResultLost)
              return
            }
            val deserializedResult = serializer.get().deserialize[DirectTaskResult[_]](
              serializedTaskResult.get)
            // The result has been copied to the driver; drop the remote block.
            sparkEnv.blockManager.master.removeBlock(blockId)
            (deserializedResult, size)
        }

        result.metrics.setResultSize(size)
        scheduler.handleSuccessfulTask(taskSetManager, tid, result)
      } catch {
        case cnf: ClassNotFoundException =>
          val loader = Thread.currentThread.getContextClassLoader
          taskSetManager.abort("ClassNotFound with classloader: " + loader)
        case NonFatal(ex) =>
          logError("Exception while getting task result", ex)
          taskSetManager.abort("Exception while getting task result: %s".format(ex))
      }
    }
  })
}
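Note that enqueueSuccessfulTask() runs on TaskResultGetter's own thread pool, so deserialization and remote fetches never block the scheduler thread. In the Spark 1.x source the pool is created roughly like this (sketch; helper names are assumed to match that source):

// Sketch: result handling runs on dedicated "task-result-getter" daemon
// threads, sized by spark.resultGetter.threads (default 4).
private val THREADS = sparkEnv.conf.getInt("spark.resultGetter.threads", 4)
private val getTaskResultExecutor = ThreadUtils.newDaemonFixedThreadPool(
  THREADS, "task-result-getter")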

After the result has been analyzed, TaskSchedulerImpl (the TaskScheduler implementation) calls its handleSuccessfulTask() function, which in turn delegates to TaskSetManager.handleSuccessfulTask() to mark the task as successful and to notify the DAGScheduler that the task has completed.

def handleSuccessfulTask(tid: Long, result: DirectTaskResult[_]): Unit = {
  val info = taskInfos(tid)
  val index = info.index
  info.markSuccessful()
  removeRunningTask(tid)
  // ...
  // Notify the DAGScheduler that the task (and its result) is done.
  sched.dagScheduler.taskEnded(
    tasks(index), Success, result.value(), result.accumUpdates, info, result.metrics)
  // ...
}

DAGScheduler.taskEnded() does no processing itself; it simply posts a CompletionEvent (recording the completion or failure) onto the scheduler's event loop.
def taskEnded(
    task: Task[_],
    reason: TaskEndReason,
    result: Any,
    accumUpdates: Map[Long, Any],
    taskInfo: TaskInfo,
    taskMetrics: TaskMetrics): Unit = {
  eventProcessLoop.post(
    CompletionEvent(task, reason, result, accumUpdates, taskInfo, taskMetrics))
}
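On the receiving side, DAGSchedulerEventProcessLoop pattern-matches the posted event and dispatches it; the relevant case is essentially this (trimmed sketch of the Spark 1.x source):

// Sketch of the event-loop dispatch: CompletionEvents are routed to
// DAGScheduler.handleTaskCompletion().
override def onReceive(event: DAGSchedulerEvent): Unit = event match {
  case completion: CompletionEvent =>
    dagScheduler.handleTaskCompletion(completion)
  // ... (JobSubmitted, ExecutorLost, and other events elided)
}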

If the event pulled off the loop is a CompletionEvent, DAGScheduler.handleTaskCompletion() is invoked to analyze the actual result. If the result came from a ResultTask, it can be consumed on the driver side by the job's result handler (for example, count() sums the per-partition results of all ResultTasks). If it came from a ShuffleMapTask, its MapStatus (the location and sizes of the FileSegments the ShuffleMapTask wrote) is recorded in the mapStatuses structure of MapOutputTrackerMaster, so that reducers can look the outputs up later during the shuffle. If the completed task is the last one of its stage, the next stage can be submitted; and if that stage was the final stage, the DAGScheduler is told that the job has finished.

private[scheduler] def handleTaskCompletion(event: CompletionEvent) {
    val task = event.task
    val stageId = task.stageId
    val taskType = Utils.getFormattedClassName(task)

    outputCommitCoordinator.taskCompleted(stageId, task.partitionId,
      event.taskInfo.attempt, event.reason)

    if (event.reason != Success) {
      val attemptId = task.stageAttemptId
      listenerBus.post(SparkListenerTaskEnd(stageId, attemptId, taskType, event.reason,
        event.taskInfo, event.taskMetrics))
    }

    if (!stageIdToStage.contains(task.stageId)) {
      return
    }

    val stage = stageIdToStage(task.stageId)
    event.reason match {
      case Success =>
        listenerBus.post(SparkListenerTaskEnd(stageId, stage.latestInfo.attemptId, taskType,
          event.reason, event.taskInfo, event.taskMetrics))
        stage.pendingTasks -= task
        task match {
          case rt: ResultTask[_, _] =>
            val resultStage = stage.asInstanceOf[ResultStage]
            resultStage.resultOfJob match {
              case Some(job) =>
                if (!job.finished(rt.outputId)) {
                  updateAccumulators(event)
                  job.finished(rt.outputId) = true
                  job.numFinished += 1
                  if (job.numFinished == job.numPartitions) {
                    markStageAsFinished(resultStage)
                    cleanupStateForJobAndIndependentStages(job)
                    listenerBus.post(
                      SparkListenerJobEnd(job.jobId, clock.getTimeMillis(), JobSucceeded))
                  }

                  try {
                    job.listener.taskSucceeded(rt.outputId, event.result)
                  } catch {
                    case e: Exception =>
                      job.listener.jobFailed(new SparkDriverExecutionException(e))
                  }
                }
              case None =>
                logInfo("Ignoring result from " + rt + " because its job has finished")
            }

          case smt: ShuffleMapTask =>
            val shuffleStage = stage.asInstanceOf[ShuffleMapStage]
            updateAccumulators(event)
            val status = event.result.asInstanceOf[MapStatus]
            val execId = status.location.executorId
            logDebug("ShuffleMapTask finished on " + execId)
            if (failedEpoch.contains(execId) && smt.epoch <= failedEpoch(execId)) {
              logInfo(s"Ignoring possibly bogus $smt completion from executor $execId")
            } else {
              shuffleStage.addOutputLoc(smt.partitionId, status)
            }

            if (runningStages.contains(shuffleStage) && shuffleStage.pendingTasks.isEmpty) {
              markStageAsFinished(shuffleStage)

              // Publish this stage's map outputs to the MapOutputTrackerMaster so
              // reducers can locate them; changeEpoch invalidates cached locations.
              mapOutputTracker.registerMapOutputs(
                shuffleStage.shuffleDep.shuffleId,
                shuffleStage.outputLocs.map(_.headOption.orNull),
                changeEpoch = true)

              clearCacheLocs()

              if (shuffleStage.outputLocs.contains(Nil)) {
                logInfo("Resubmitting " + shuffleStage + " (" + shuffleStage.name +
                  ") because some of its tasks had failed: " +
                  shuffleStage.outputLocs.zipWithIndex.filter(_._1.isEmpty)
                      .map(_._2).mkString(", "))
                submitStage(shuffleStage)
              }

            }
        }
      // ... (other TaskEndReason cases elided)
    }
}
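On the MapOutputTrackerMaster side, registerMapOutputs() is a thin wrapper that stores the MapStatus array into the mapStatuses map, keyed by shuffleId (trimmed sketch of the Spark 1.x source):

// Sketch: MapStatuses are kept per shuffleId; changing the epoch forces
// executors to refetch cached output locations.
def registerMapOutputs(shuffleId: Int, statuses: Array[MapStatus], changeEpoch: Boolean = false) {
  mapStatuses.put(shuffleId, Array[MapStatus]() ++ statuses)
  if (changeEpoch) {
    incrementEpoch()
  }
}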


