Lesson 8: Spark Streaming Source Code Analysis: A Thorough Study of the Full Lifecycle of RDD Generation

Author: Xie Biao

1. What is the relationship between DStream and RDD?
2. How are RDDs generated in Spark Streaming?


I. The hands-on WordCount source code:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCount {

  def main(args: Array[String]): Unit = {

    val sparkConf = new SparkConf().setMaster("spark://Master:7077").setAppName("WordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(1))

    val lines = ssc.socketTextStream("Master", 9999)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

1. DStreams depend on one another. For example, the map operation produces a MappedDStream:

def map[U: ClassTag](mapFunc: T => U): DStream[U] = ssc.withScope {
  new MappedDStream(this, context.sparkContext.clean(mapFunc))
}

2. The compute method of MappedDStream first obtains the RDD of the parent DStream and then applies map to that result, where mapFunc is the business logic we passed in.

private[streaming]
class MappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    mapFunc: T => U
  ) extends DStream[U](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[U]] = {
    parent.getOrCompute(validTime).map(_.map[U](mapFunc))
  }
}

3. A closer look at DStream

 

First, DStreams form a dependency chain: except for the first DStream, which is produced directly from the data source, every DStream depends on the DStream before it. Second, a DStream generates RDDs based on time.
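To illustrate both points with the WordCount program above, the chain of DStreams it builds looks roughly like this (a sketch; the class names follow the Spark Streaming source):

// ssc.socketTextStream(...)  ->  SocketInputDStream (a ReceiverInputDStream)
//   .flatMap(_.split(" "))   ->  FlatMappedDStream
//   .map(x => (x, 1))        ->  MappedDStream
//   .reduceByKey(_ + _)      ->  ShuffledDStream
//   .print()                 ->  ForEachDStream (registered as an output stream)
// At every batch interval (Seconds(1) here), this chain is asked, starting from the last
// DStream and walking backwards, to produce one RDD per DStream for that batch time.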

 

abstract class DStream[T: ClassTag] (
    @transient private[streaming] var ssc: StreamingContext
  ) extends Serializable with Logging {

1. generatedRDDs in DStream is a HashMap that holds one RDD per Time, and each such RDD corresponds to one Job, because it is the last RDD produced by the whole chain of DStream operations for that batch interval. This last RDD depends on the RDDs before it; since DStream computation works backwards, each generated RDD traces its dependencies back through the preceding ones.

 

private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()

In other words, at runtime we essentially hold on to the handle of the last DStream and backtrack through the chain from there.

2. getOrCompute in DStream generates the RDD for a given time.

private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  generatedRDDs.get(time).orElse {
    if (isTimeValid(time)) {

      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          // compute generates the RDD for this batch time, driven by the timer
          compute(time)
        }
      }

      // rddOption carries the generated RDD, which is then put into generatedRDDs
      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}
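The persist and checkpoint branches above only fire if the user has switched them on. A minimal sketch of the calls that enable them for the WordCount example (the HDFS path is a hypothetical placeholder):

// Cache each generated RDD, so storageLevel != StorageLevel.NONE in getOrCompute
wordCounts.persist(StorageLevel.MEMORY_ONLY_SER)
// A checkpoint directory is required before any DStream can be checkpointed
ssc.checkpoint("hdfs://Master:9000/checkpoint")   // hypothetical path
// Sets checkpointDuration; must be a multiple of the batch (slide) duration
wordCounts.checkpoint(Seconds(10))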

3. The compute source code in ReceiverInputDStream is as follows. ReceiverInputDStream generates the first RDD in the computation chain; all later RDDs depend on this RDD.

/**
 * Generates RDDs with blocks received by the receiver of this stream. */
override def compute(validTime: Time): Option[RDD[T]] = {
  val blockRDD = {

    if (validTime < graph.startTime) {
      // If this is called for any time before the start time of the context,
      // then this returns an empty RDD. This may happen when recovering from a
      // driver failure without any write ahead log to recover pre-failure data.
      // If there is no input data, a series of empty RDDs is produced.
      new BlockRDD[T](ssc.sc, Array.empty)
    } else {
      // Otherwise, ask the tracker for all the blocks that have been allocated to this stream
      // for this batch
      // receiverTracker keeps track of the received data
      val receiverTracker = ssc.scheduler.receiverTracker
      // blockInfos: metadata of the blocks allocated to this batch
      val blockInfos = receiverTracker.getBlocksOfBatch(validTime).getOrElse(id, Seq.empty)

      // Register the input blocks information into InputInfoTracker
      val inputInfo = StreamInputInfo(id, blockInfos.flatMap(_.numRecords).sum)
      ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)
      createBlockRDD(validTime, blockInfos)
    }
  }
  Some(blockRDD)
}

4. The createBlockRDD source code is as follows:

private[streaming] def createBlockRDD(time: Time, blockInfos: Seq[ReceivedBlockInfo]): RDD[T] = {

  if (blockInfos.nonEmpty) {
    val blockIds = blockInfos.map { _.blockId.asInstanceOf[BlockId] }.toArray

    // Are WAL record handles present with all the blocks
    val areWALRecordHandlesPresent = blockInfos.forall { _.walRecordHandleOption.nonEmpty }

    if (areWALRecordHandlesPresent) {
      // If all the blocks have WAL record handle, then create a WALBackedBlockRDD
      val isBlockIdValid = blockInfos.map { _.isBlockIdValid() }.toArray
      val walRecordHandles = blockInfos.map { _.walRecordHandleOption.get }.toArray
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, blockIds, walRecordHandles, isBlockIdValid)
    } else {
      // Else, create a BlockRDD. However, if there are some blocks with WAL info but not
      // others then that is unexpected and log a warning accordingly.
      if (blockInfos.find(_.walRecordHandleOption.nonEmpty).nonEmpty) {
        if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
          logError("Some blocks do not have Write Ahead Log information; " +
            "this is unexpected and data may not be recoverable after driver failures")
        } else {
          logWarning("Some blocks have Write Ahead Log information; this is unexpected")
        }
      }
      // Check whether the blocks still exist and filter out the ones that do not;
      // the master here is the BlockManager master.
      val validBlockIds = blockIds.filter { id =>
        ssc.sparkContext.env.blockManager.master.contains(id)
      }
      if (validBlockIds.size != blockIds.size) {
        logWarning("Some blocks could not be recovered as they were not found in memory. " +
          "To prevent such data loss, enabled Write Ahead Log (see programming guide " +
          "for more details.")
      }
      new BlockRDD[T](ssc.sc, validBlockIds)
    }
  } else {
    // If no block is ready now, creating WriteAheadLogBackedBlockRDD or BlockRDD
    // according to the configuration
    if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, Array.empty, Array.empty, Array.empty)
    } else {
      new BlockRDD[T](ssc.sc, Array.empty)
    }
  }
}
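Which of the two RDD types createBlockRDD returns is governed by the receiver write-ahead-log setting. A minimal sketch of enabling it (the checkpoint path is a hypothetical placeholder; WAL data is written under the checkpoint directory):

val sparkConf = new SparkConf()
  .setMaster("spark://Master:7077")
  .setAppName("WordCount")
  // makes WriteAheadLogUtils.enableReceiverLog(ssc.conf) return true
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(sparkConf, Seconds(1))
ssc.checkpoint("hdfs://Master:9000/checkpoint")   // hypothetical path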

5. The map operator produces a MappedDStream:

/** Return a new DStream by applying a function to all elements of this DStream. */
def map[U: ClassTag](mapFunc: T => U): DStream[U] = ssc.withScope {
  new MappedDStream(this, context.sparkContext.clean(mapFunc))
}

6. The MappedDStream source code is as follows. Except for the first DStream, which produces its RDD from the data source, every other DStream starts its computation from the RDD produced by the previous DStream and then returns an RDD; therefore, transformations on DStreams are really transformations on RDDs. The same pattern holds for the other transformed DStreams, as the sketch after this listing shows.

private[streaming]
class MappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    mapFunc: T => U
  ) extends DStream[U](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  // parent is the parent DStream
  override def compute(validTime: Time): Option[RDD[U]] = {
    // getOrCompute returns the parent's RDD; the map that follows operates on that RDD.
    // The computation inside a DStream is really a computation on RDDs, and mapFunc is
    // the concrete business logic we want to apply.
    parent.getOrCompute(validTime).map(_.map[U](mapFunc))
  }
}
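Every other transformed DStream follows the same pattern. For example, a sketch of FlatMappedDStream, paraphrased from the Spark Streaming source, differs from MappedDStream only in the function it applies to the parent's RDD:

private[streaming]
class FlatMappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    flatMapFunc: T => TraversableOnce[U]
  ) extends DStream[U](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  // Same structure: get (or compute) the parent's RDD for this batch time, then flatMap it.
  override def compute(validTime: Time): Option[RDD[U]] = {
    parent.getOrCompute(validTime).map(_.flatMap(flatMapFunc))
  }
}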

7. The ForEachDStream source code is as follows:

/**
 * An internal DStream used to represent output operations like DStream.foreachRDD.
 * @param parent        Parent DStream
 * @param foreachFunc   Function to apply on each RDD generated by the parent DStream
 * @param displayInnerRDDOps Whether the detailed callsites and scopes of the RDDs generated
 *                           by `foreachFunc` will be displayed in the UI; only the scope and
 *                           callsite of `DStream.foreachRDD` will be displayed.
 */
private[streaming]
class ForEachDStream[T: ClassTag] (
    parent: DStream[T],
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean
  ) extends DStream[Unit](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[Unit]] = None

  override def generateJob(time: Time): Option[Job] = {
    parent.getOrCompute(time) match {
      case Some(rdd) =>
        val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
          foreachFunc(rdd, time)
        }
        // jobFunc is expected to contain an action,
        // so invoking jobFunc is what actually triggers that action.
        Some(new Job(time, jobFunc))
      case None => None
    }
  }
}

8. The source code of the print function used in the example above is as follows; the foreachFunc function operates directly on the RDD.

/**
 * Print the first num elements of each RDD generated in this DStream. This is an output
 * operator, so this DStream will be registered as an output stream and there materialized.
 */
def print(num: Int): Unit = ssc.withScope {
  def foreachFunc: (RDD[T], Time) => Unit = {
    (rdd: RDD[T], time: Time) => {
      // take is an action
      val firstNum = rdd.take(num + 1)
      // scalastyle:off println
      println("-------------------------------------------")
      println("Time: " + time)
      println("-------------------------------------------")
      firstNum.take(num).foreach(println)
      if (firstNum.length > num) println("...")
      println()
      // scalastyle:on println
    }
  }
  foreachRDD(context.sparkContext.clean(foreachFunc), displayInnerRDDOps = false)
}
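print is only one output operator; every output operation ultimately goes through foreachRDD, which creates a ForEachDStream. A minimal sketch of hypothetical user code that makes the action explicit:

wordCounts.foreachRDD { (rdd, time) =>
  // take is an action, so calling this function triggers a real Spark job for the batch
  val top = rdd.take(10)
  println(s"Batch $time: ${top.mkString(", ")}")
}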

So far we have walked through the RDD generation flow from the logical side; next let us see where it is actually triggered.

1. The generateJobs source code in JobGenerator is as follows:

/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    // generate the jobs
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
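generateJobs itself is driven by the batch-interval timer inside JobGenerator. A sketch, paraphrased from memory of the JobGenerator source, of how a GenerateJobs event is posted once per batch interval:

// Fires once per batch interval and posts GenerateJobs(time) to the event loop,
// whose handler then calls generateJobs(time) shown above.
private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")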

2. In DStreamGraph, the RDD generation we analyzed above is actually triggered:

def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    // Here each outputStream is a ForEachDStream
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}
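Putting the pieces together, the RDD generation path for each batch time can be summarized by the following call chain (a sketch based on the code walked through above):

// JobGenerator.generateJobs(time)
//   -> DStreamGraph.generateJobs(time)
//     -> ForEachDStream.generateJob(time)
//       -> parent.getOrCompute(time)                    // ShuffledDStream, MappedDStream, FlatMappedDStream, ...
//         -> ... -> ReceiverInputDStream.compute(time)  // BlockRDD / WriteAheadLogBackedBlockRDD
//   -> JobScheduler.submitJobSet(...)                   // the generated jobs are then submitted for execution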

Author: Xie Biao, big data R&D engineer

  • Source: DT Big Data Dream Factory (Spark Distribution Customization)
  • DT Big Data Dream Factory WeChat official account: DT_Spark
  • Sina Weibo: http://www.weibo.com/ilovepains
  • Teacher Wang Jialin gives free big data hands-on sessions every evening at 20:00

YY live channel: 68917580



 
