Outline of this post:
- Dividing scheduling stages
- A worked example
1. Dividing Scheduling Stages
Stage division in Spark is implemented by the DAGScheduler: starting from the final RDD, it walks the entire dependency graph and splits it into scheduling stages, using wide (shuffle) dependencies as the dividing line. That is, whenever an RDD operation involves a shuffle, that shuffle becomes a boundary that separates the computation into a stage before it and a stage after it.
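As a concrete illustration, here is a minimal word-count job (a sketch assuming a local deployment; the object name and master setting are illustrative). The narrow flatMap and map operations are pipelined into a single stage, while reduceByKey introduces a ShuffleDependency and thus a stage boundary, giving one ShuffleMapStage followed by one ResultStage:
import org.apache.spark.{SparkConf, SparkContext}

object StageBoundaryExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("stage-boundary").setMaster("local[2]"))
    // flatMap and map are narrow dependencies: each output partition depends on
    // a single parent partition, so they run pipelined within one stage.
    val pairs = sc.parallelize(Seq("a b", "b c"))
      .flatMap(_.split(" "))
      .map(word => (word, 1))
    // reduceByKey requires a shuffle: DAGScheduler cuts the DAG here, so the
    // part above becomes a ShuffleMapStage and the part below a ResultStage.
    val counts = pairs.reduceByKey(_ + _)
    counts.collect().foreach(println) // the collect() action submits the job
    sc.stop()
  }
}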
The stage-division logic lives in the handleJobSubmitted method of DAGScheduler:
private[scheduler] def handleJobSubmitted(jobId: Int,
    finalRDD: RDD[_],
    func: (TaskContext, Iterator[_]) => _,
    partitions: Array[Int],
    callSite: CallSite,
    listener: JobListener,
    properties: Properties) {
  var finalStage: ResultStage = null
  try {
    // New stage creation may throw an exception if, for example, jobs are run on a
    // HadoopRDD whose underlying HDFS files have been deleted.
    // createResultStage also builds all missing parent stages recursively,
    // which is where the stage division actually happens.
    finalStage = createResultStage(finalRDD, func, partitions, jobId, callSite)
  } catch {
    case e: Exception =>
      logWarning("Creating new stage failed due to exception - job: " + jobId, e)
      listener.jobFailed(e)
      return
  }
  val job = new ActiveJob(jobId, finalStage, callSite, listener, properties)
  clearCacheLocs()
  logInfo("Got job %s (%s) with %d output partitions".format(
    job.jobId, callSite.shortForm, partitions.length))
  logInfo("Final stage: " + finalStage + " (" + finalStage.name + ")")
  logInfo("Parents of final stage: " + finalStage.parents)
  logInfo("Missing parents: " + getMissingParentStages(finalStage))
  val jobSubmissionTime = clock.getTimeMillis()
  jobIdToActiveJob(jobId) = job
  activeJobs += job
  finalStage.setActiveJob(job)
  val stageIds = jobIdToStageIds(jobId).toArray
  val stageInfos = stageIds.flatMap(id => stageIdToStage.get(id).map(_.latestInfo))
  listenerBus.post(
    SparkListenerJobStart(job.jobId, jobSubmissionTime, stageInfos, properties))
  // Submit the final stage; its missing parent stages get submitted first.
  submitStage(finalStage)
}
handleJobSubmitted handles a job submission in two main parts: dividing the scheduling stages and then submitting them. Here we focus on stage division; stage submission is covered in the next post.
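For the word-count example above, the logInfo calls in this method would print driver log lines of roughly the following shape (the ids and call site are illustrative values plugged into the format strings visible in the method, not captured output):
INFO DAGScheduler: Got job 0 (collect at StageBoundaryExample.scala:16) with 2 output partitions
INFO DAGScheduler: Final stage: ResultStage 1 (collect at StageBoundaryExample.scala:16)
INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)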
The stage-division work itself is concentrated in the createResultStage method.
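Before walking through it, the core idea can be shown with a self-contained toy model (the Node and Dep types below are hypothetical stand-ins for RDD and Dependency, not the Spark source): walk backwards from the final node, keep following narrow dependencies inside the current stage, and cut wherever a shuffle dependency appears.
import scala.collection.mutable

object StageCutSketch {
  // Hypothetical stand-ins for RDD and Dependency (not Spark types).
  sealed trait Dep { def parent: Node }
  case class NarrowDep(parent: Node) extends Dep
  case class ShuffleDep(parent: Node) extends Dep
  case class Node(name: String, deps: Seq[Dep])

  // Walk back from the final node. Narrow parents stay in the current stage,
  // so the walk continues through them; each shuffle parent marks a boundary
  // where a parent (ShuffleMap-like) stage begins, and the walk stops there.
  def shuffleBoundaries(finalNode: Node): Set[Node] = {
    val boundaries = mutable.Set.empty[Node]
    val visited = mutable.Set.empty[Node]
    val stack = mutable.Stack(finalNode)
    while (stack.nonEmpty) {
      val node = stack.pop()
      if (visited.add(node)) {
        node.deps.foreach {
          case ShuffleDep(p) => boundaries += p // cut: p heads a parent stage
          case NarrowDep(p)  => stack.push(p)   // same stage: keep walking
        }
      }
    }
    boundaries.toSet
  }

  def main(args: Array[String]): Unit = {
    // c <-shuffle- b <-narrow- a: exactly one boundary, at b.
    val a = Node("a", Nil)
    val b = Node("b", Seq(NarrowDep(a)))
    val c = Node("c", Seq(ShuffleDep(b)))
    println(shuffleBoundaries(c).map(_.name)) // Set(b)
  }
}
Spark's real traversal follows the same pattern: each shuffle dependency found this way becomes a parent ShuffleMapStage, built recursively by the helpers that createResultStage calls.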