Spark Learning [4]: Core Source Code - Initializing the TaskScheduler

Applications written with Spark usually start by initializing a SparkContext from a SparkConf, and then move on to processing RDDs.

A rough translation of the doc comment on the SparkContext class in the Spark source:

Main entry point for Spark functionality. A SparkContext represents the connection to a Spark cluster, and can be used to create RDDs, accumulators, and broadcast variables on that cluster.
Only one SparkContext may be active per JVM; the active SparkContext must be stopped before a new one is created. This limitation may eventually be removed, but it still holds in Spark 1.6.
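
This one-active-context-per-JVM rule can be seen in a minimal sketch (the master URL and app names below are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object ContextLifecycle {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("first").setMaster("local[2]")
    val sc1 = new SparkContext(conf)
    // ... use sc1 to build and run RDDs ...
    sc1.stop()  // the active context must be stopped first

    // only after stop() may this JVM create another SparkContext
    val sc2 = new SparkContext(conf.setAppName("second"))
    sc2.stop()
  }
}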

While a SparkContext is being instantiated, it creates many of the core runtime instances:

  1. DAGScheduler: implements the high-level, stage-oriented scheduling layer
  2. TaskScheduler: an interface that schedules tasks on different types of clusters through a SchedulerBackend
  3. SchedulerBackend: an interface that, among other things, registers the Application with the cluster manager (a simplified paraphrase of how these three pieces relate is sketched below)
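
The following is a hypothetical, heavily simplified paraphrase of the Spark 1.6 interfaces (the real traits carry many more methods); it only shows the direction of the calls:

// placeholder; the real TaskSet carries the tasks of one stage
case class TaskSet(stageId: Int)

trait SchedulerBackend {
  def start(): Unit  // e.g. registers the application with the cluster manager
}

trait TaskScheduler {
  def start(): Unit                        // typically delegates to its SchedulerBackend
  def submitTasks(taskSet: TaskSet): Unit  // the DAGScheduler submits one TaskSet per stage
}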

This article focuses on the instantiation of the TaskScheduler. Below is the TaskScheduler instantiation performed by SparkContext in Spark 1.6.

TaskScheduler

Instantiating the TaskScheduler

// Create and start the scheduler
val (sched, ts) = SparkContext.createTaskScheduler(this, master)
_schedulerBackend = sched
_taskScheduler = ts
_dagScheduler = new DAGScheduler(this)
_heartbeatReceiver.ask[Boolean](TaskSchedulerIsSet)

// start TaskScheduler after taskScheduler sets DAGScheduler reference in DAGScheduler's
// constructor
_taskScheduler.start()
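
The comment about the start ordering matters: DAGScheduler's constructor calls taskScheduler.setDAGScheduler(this), so the TaskScheduler must not be started until the DAGScheduler exists. A toy illustration of that constraint (the Toy* names are made up for this sketch):

class ToyTaskScheduler {
  private var dag: ToyDAGScheduler = _
  def setDAGScheduler(d: ToyDAGScheduler): Unit = dag = d
  def start(): Unit = require(dag != null, "DAGScheduler must be set before start()")
}

class ToyDAGScheduler(ts: ToyTaskScheduler) {
  ts.setDAGScheduler(this)  // mirrors what DAGScheduler's constructor does in Spark 1.6
}

object StartOrderDemo extends App {
  val ts = new ToyTaskScheduler
  new ToyDAGScheduler(ts)  // constructor installs the back-reference
  ts.start()               // safe: the DAGScheduler reference is already in place
}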

SparkContext calls the createTaskScheduler method to create the TaskScheduler and the SchedulerBackend.
The SchedulerBackend implementation differs by cluster type, but note that regardless of the cluster, the TaskScheduler is always instantiated as TaskSchedulerImpl. In Standalone mode the backend is SparkDeploySchedulerBackend; in yarn-cluster mode it is loaded reflectively via Utils.classForName("org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend"). Taking the Standalone cluster as the example:

      case SPARK_REGEX(sparkUrl) =>
        val scheduler = new TaskSchedulerImpl(sc)
        val masterUrls = sparkUrl.split(",").map("spark://" + _)
        val backend = new SparkDeploySchedulerBackend(scheduler, sc, masterUrls)
        scheduler.initialize(backend)
        (backend, scheduler)
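
The dispatch itself is a pattern match on the master URL. A minimal, self-contained sketch of it (the regex mirrors Spark's SPARK_REGEX; the other cases are illustrative):

object MasterUrlDemo {
  val SPARK_REGEX = """spark://(.*)""".r

  def describe(master: String): String = master match {
    case "local"           => "local mode, single thread"
    case SPARK_REGEX(urls) => s"standalone mode, masters: ${urls.split(',').mkString(", ")}"
    case "yarn-cluster"    => "YARN cluster mode"
    case _                 => "unknown master URL"
  }

  def main(args: Array[String]): Unit = {
    // prints: standalone mode, masters: host1:7077, host2:7077
    println(describe("spark://host1:7077,host2:7077"))
  }
}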

Once the TaskScheduler has been created and initialized, SparkContext calls _taskScheduler.start(); TaskSchedulerImpl.start() in turn calls the SchedulerBackend's start method:

override def start() {
  backend.start()

  if (!isLocal && conf.getBoolean("spark.speculation", false)) {
    logInfo("Starting speculative execution thread")
    speculationScheduler.scheduleAtFixedRate(new Runnable {
      override def run(): Unit = Utils.tryOrStopSparkContext(sc) {
        checkSpeculatableTasks()
      }
    }, SPECULATION_INTERVAL_MS, SPECULATION_INTERVAL_MS, TimeUnit.MILLISECONDS)
  }
}
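
The speculation branch above only runs when speculative execution is enabled. Assuming the standard Spark 1.6 configuration keys, enabling it looks like this (values are illustrative):

import org.apache.spark.SparkConf

object SpeculationConfDemo extends App {
  val conf = new SparkConf()
    .setAppName("speculation-demo")
    .set("spark.speculation", "true")            // enables the checkSpeculatableTasks loop above
    .set("spark.speculation.interval", "100ms")  // backs SPECULATION_INTERVAL_MS
  println(conf.get("spark.speculation"))
}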

Inside SparkDeploySchedulerBackend's start method, an AppClient is created; this object is responsible for communication with the Master:

override def start() {
  super.start()
  launcherBackend.connect()

  // ...
  // Start executors with a few necessary configs for registering with the scheduler
  val sparkJavaOpts = Utils.sparkJavaOpts(conf, SparkConf.isExecutorStartupConf)
  val javaOpts = sparkJavaOpts ++ extraJavaOpts
  val command = Command("org.apache.spark.executor.CoarseGrainedExecutorBackend",
    args, sc.executorEnvs, classPathEntries ++ testingClassPath, libraryPathEntries, javaOpts)
  val appUIAddress = sc.ui.map(_.appUIAddress).getOrElse("")
  val coresPerExecutor = conf.getOption("spark.executor.cores").map(_.toInt)
  val appDesc = new ApplicationDescription(sc.appName, maxCores, sc.executorMemory,
    command, appUIAddress, sc.eventLogDir, sc.eventLogCodec, coresPerExecutor)
  client = new AppClient(sc.env.rpcEnv, masters, appDesc, this, conf)
  client.start()
  launcherBackend.setState(SparkAppHandle.State.SUBMITTED)
  waitForRegistration()
  launcherBackend.setState(SparkAppHandle.State.RUNNING)
}
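
The ApplicationDescription built above is essentially a bundle of everything the Master needs in order to launch executors for this app. A simplified stand-in (AppDescSketch is an illustrative type, not Spark's):

case class AppDescSketch(
  name: String,               // sc.appName
  maxCores: Option[Int],      // optional cap on the app's total cores
  memoryPerExecutorMB: Int,   // sc.executorMemory
  executorMainClass: String,  // the class each Worker launches
  appUiAddress: String)       // lets the Master link back to the app UI

object AppDescDemo extends App {
  println(AppDescSketch(
    name = "demo",
    maxCores = Some(8),
    memoryPerExecutorMB = 1024,
    executorMainClass = "org.apache.spark.executor.CoarseGrainedExecutorBackend",
    appUiAddress = "http://driver-host:4040"))
}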

After the AppClient is created, its start method is called; it sets up an RpcEndpoint by creating a ClientEndpoint:

def start() {
  // Just launch an rpcEndpoint; it will call back into the listener.
  endpoint.set(rpcEnv.setupEndpoint("AppClient", new ClientEndpoint(rpcEnv)))
}

In the implementation class AkkaRpcEnv, the setupEndpoint method ends up invoking the ClientEndpoint's onStart method (from the actor's preStart hook):

override def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef = {
  @volatile var endpointRef: AkkaRpcEndpointRef = null
  // Use a deferred function because the Actor needs to use `endpointRef`.
  // So `actorRef` should be created after assigning `endpointRef`.
  val actorRef = () => actorSystem.actorOf(Props(new Actor with ActorLogReceive with Logging {

    assert(endpointRef != null)

    override def preStart(): Unit = {
      // Listen for remote client network events
      context.system.eventStream.subscribe(self, classOf[AssociationEvent])
      safelyCall(endpoint) {
        endpoint.onStart()
      }
    }
    // ... (rest of the actor definition omitted)
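
The pattern is worth isolating: registering an endpoint wires it into the transport and fires its onStart lifecycle hook. A toy, non-Spark illustration of the same callback shape:

trait Endpoint { def onStart(): Unit }

class ToyRpcEnv {
  def setupEndpoint(name: String, endpoint: Endpoint): Unit = {
    // ... allocate transport resources for `name` here ...
    endpoint.onStart()  // the lifecycle hook fires as part of registration
  }
}

object EndpointDemo extends App {
  new ToyRpcEnv().setupEndpoint("AppClient", new Endpoint {
    override def onStart(): Unit = println("registering with master...")
  })
}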

In ClientEndpoint's onStart, we can see that registration with the Master takes place; during registration a RegisterApplication message is sent to the Master:

override def onStart(): Unit = {
  try {
    registerWithMaster(1)
  } catch {
    case e: Exception =>
      logWarning("Failed to connect to master", e)
      markDisconnected()
      stop()
  }
}
/**
 *  Register with all masters asynchronously and return an array of `Future`s for cancellation.
 */
private def tryRegisterAllMasters(): Array[JFuture[_]] = {
  for (masterAddress <- masterRpcAddresses) yield {
    registerMasterThreadPool.submit(new Runnable {
      override def run(): Unit = try {
        if (registered.get) {
          return
        }
        logInfo("Connecting to master " + masterAddress.toSparkURL + "...")
        val masterRef =
          rpcEnv.setupEndpointRef(Master.SYSTEM_NAME, masterAddress, Master.ENDPOINT_NAME)
        masterRef.send(RegisterApplication(appDescription, self))
      } catch {
        case ie: InterruptedException => // Cancelled
        case NonFatal(e) => logWarning(s"Failed to connect to master $masterAddress", e)
      }
    })
  }
}
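
The shape of tryRegisterAllMasters is a classic parallel fan-out with a shared success flag. A self-contained sketch of the same pattern (names here are illustrative, not Spark's):

import java.util.concurrent.{Executors, Future => JFuture}
import java.util.concurrent.atomic.AtomicBoolean

object RegisterAllMastersSketch {
  private val registered = new AtomicBoolean(false)
  private val pool = Executors.newFixedThreadPool(4)

  def tryRegisterAll(masters: Seq[String]): Seq[JFuture[_]] =
    for (master <- masters) yield pool.submit(new Runnable {
      override def run(): Unit = {
        if (registered.get) return  // another attempt already succeeded
        println(s"Connecting to master $master...")
        // ... send RegisterApplication here; on success: registered.set(true)
      }
    })

  def main(args: Array[String]): Unit = {
    tryRegisterAll(Seq("spark://host1:7077", "spark://host2:7077"))
    pool.shutdown()
  }
}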

Registering the Application with the Master

After the Master receives the RegisterApplication message from the driver program (actually sent by the ClientEndpoint), it executes the following code:

case RegisterApplication(description, driver) => {
  // TODO Prevent repeated registrations from some driver
  if (state == RecoveryState.STANDBY) {
    // ignore, don't send response
  } else {
    logInfo("Registering app " + description.name)
    val app = createApplication(description, driver)
    registerApplication(app)
    logInfo("Registered app " + description.name + " with ID " + app.id)
    persistenceEngine.addApplication(app)
    driver.send(RegisteredApplication(app.id, self))
    schedule()
  }
}

First the application is created; createApplication takes care of initializing the appId:

private def createApplication(desc: ApplicationDescription, driver: RpcEndpointRef):
    ApplicationInfo = {
  val now = System.currentTimeMillis()
  val date = new Date(now)
  val appId = newApplicationId(date)
  new ApplicationInfo(now, appId, desc, date, driver, defaultCores)
}
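
newApplicationId is what produces the familiar standalone application IDs of the form app-yyyyMMddHHmmss-0000. A sketch of that formatting (the counter handling here is simplified):

import java.text.SimpleDateFormat
import java.util.Date

object AppIdSketch {
  private val createDateFormat = new SimpleDateFormat("yyyyMMddHHmmss")
  private var nextAppNumber = 0

  def newApplicationId(submitDate: Date): String = {
    val appId = "app-%s-%04d".format(createDateFormat.format(submitDate), nextAppNumber)
    nextAppNumber += 1
    appId
  }

  def main(args: Array[String]): Unit =
    println(newApplicationId(new Date()))  // e.g. app-20160101120000-0000
}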

Then the application is registered. The registerApplication method adds the application to the Master's bookkeeping structures and to the waiting queue:

private def registerApplication(app: ApplicationInfo): Unit = {
  val appAddress = app.driver.address
  if (addressToApp.contains(appAddress)) {
    logInfo("Attempted to re-register application at same address: " + appAddress)
    return
  }

  applicationMetricsSystem.registerSource(app.appSource)
  apps += app
  idToApp(app.id) = app
  endpointToApp(app.driver) = app
  addressToApp(appAddress) = app
  waitingApps += app
}

Next, a RegisteredApplication message is sent back to the ClientEndpoint, and schedule() is called.
schedule() launches any waiting drivers and starts executors on the workers:

/**
 * Schedule the currently available resources among waiting apps. This method will be called
 * every time a new app joins or resource availability changes.
 */
private def schedule(): Unit = {
  if (state != RecoveryState.ALIVE) { return }
  // Drivers take strict precedence over executors
  val shuffledWorkers = Random.shuffle(workers) // Randomization helps balance drivers
  for (worker <- shuffledWorkers if worker.state == WorkerState.ALIVE) {
    for (driver <- waitingDrivers) {
      if (worker.memoryFree >= driver.desc.mem && worker.coresFree >= driver.desc.cores) {
        launchDriver(worker, driver)
        waitingDrivers -= driver
      }
    }
  }
  startExecutorsOnWorkers()
}
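
The driver-placement part of schedule() boils down to: shuffle the workers for balance, then give each waiting driver to the first ALIVE worker with enough free memory and cores. A self-contained sketch of that rule (the WorkerInfo/DriverDesc types and values are illustrative):

import scala.util.Random

case class WorkerInfo(host: String, var memoryFree: Int, var coresFree: Int)
case class DriverDesc(mem: Int, cores: Int)

object DriverPlacementSketch extends App {
  val workers = Seq(WorkerInfo("w1", 4096, 8), WorkerInfo("w2", 2048, 4))
  val pending = Seq(DriverDesc(1024, 2), DriverDesc(2048, 4)).toBuffer

  // Shuffle for balance, then place each waiting driver on the first worker
  // that still has enough free memory and cores (mirrors schedule() above).
  for (worker <- Random.shuffle(workers); driver <- pending.toList) {
    if (worker.memoryFree >= driver.mem && worker.coresFree >= driver.cores) {
      println(s"launching driver (${driver.mem} MB, ${driver.cores} cores) on ${worker.host}")
      worker.memoryFree -= driver.mem  // reserve the resources
      worker.coresFree -= driver.cores
      pending -= driver                // this driver is no longer waiting
    }
  }
}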