Spark Streaming — Source Code Analysis of Data Reception with BlockGenerator

Source Code Analysis of Data Reception

  In the previous post we saw that a Receiver's data reception and storage is driven by the BlockGenerator. In this post we walk through the source code alongside that flow.
  First, let's look at the important components that are initialized when a BlockGenerator is created:

  // blockInterval defaults to 200ms; it is the interval at which buffered data is packaged into blocks
  private val blockIntervalMs = conf.getTimeAsMs("spark.streaming.blockInterval", "200ms")
  require(blockIntervalMs > 0, s"'spark.streaming.blockInterval' should be a positive value")

  // Equivalent to invoking the function updateCurrentBuffer every 200ms
  private val blockIntervalTimer =
    new RecurringTimer(clock, blockIntervalMs, updateCurrentBuffer, "BlockGenerator")
  // The capacity of the blocksForPushing queue is tunable; the default is 10
  private val blockQueueSize = conf.getInt("spark.streaming.blockQueueSize", 10)
  // The blocksForPushing queue
  private val blocksForPushing = new ArrayBlockingQueue[Block](blockQueueSize)
  // blockPushingThread is a background thread; once started, it calls keepPushingBlocks(),
  // which keeps taking blocks out of the blocksForPushing queue
  private val blockPushingThread = new Thread() { override def run() { keepPushingBlocks() } }

  // currentBuffer holds the raw incoming records
  @volatile private var currentBuffer = new ArrayBuffer[Any]

The important fields above are explained as follows (a short configuration sketch follows the list):
blockInterval: defaults to 200ms; it controls the interval at which blocks are generated.
blockIntervalTimer: a timer that periodically packages the data in currentBuffer into a Block.
blockQueueSize: the capacity of the queue that holds generated blocks.
blocksForPushing: the queue that holds generated blocks.
blockPushingThread: a thread that pushes generated Blocks so that they are stored in the BlockManager.
currentBuffer: a buffer that holds the individual records arriving from the data source.
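Both knobs are ordinary Spark configuration entries, so they can be tuned when building the SparkConf. A minimal sketch using the two keys that appear in the source above (the app name and values here are illustrative):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("BlockGeneratorTuning")  // hypothetical app name
  // Generate a block every 100ms instead of the 200ms default
  .set("spark.streaming.blockInterval", "100ms")
  // Let up to 20 blocks queue up before put() blocks the timer thread
  .set("spark.streaming.blockQueueSize", "20")

Note that a smaller blockInterval yields more blocks (and hence more tasks) per batch, while a larger one reduces parallelism.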

The start() method of BlockGenerator
// Starting the BlockGenerator really just starts its two key background components:
  // blockIntervalTimer, which packages the raw records in currentBuffer into blocks,
  // and blockPushingThread, which takes blocks from blocksForPushing and pushes them via pushArrayBuffer()
  def start(): Unit = synchronized {
    if (state == Initialized) {
      state = Active
      blockIntervalTimer.start()
      blockPushingThread.start()
      logInfo("Started BlockGenerator")
    } else {
      throw new SparkException(
        s"Cannot start BlockGenerator as its not in the Initialized state [state = $state]")
    }
  }

  As the code above clearly shows, start() merely starts the timer and the thread. Once the timer is running, every 200ms it takes whatever has accumulated in currentBuffer and turns it into a Block, while the blockPushingThread pushes the queued blocks to the BlockManager.
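RecurringTimer is a Spark-internal utility class. For intuition only, here is a minimal, self-contained sketch of what such a timer does; this is an illustration, not Spark's actual RecurringTimer implementation:

import java.util.concurrent.{Executors, TimeUnit}

// Invokes `callback` with the trigger time every `periodMs` milliseconds,
// mirroring how blockIntervalTimer invokes updateCurrentBuffer.
class SimpleRecurringTimer(periodMs: Long, callback: Long => Unit) {
  private val executor = Executors.newSingleThreadScheduledExecutor()

  def start(): Unit =
    executor.scheduleAtFixedRate(new Runnable {
      override def run(): Unit = callback(System.currentTimeMillis())
    }, periodMs, periodMs, TimeUnit.MILLISECONDS)

  def stop(): Unit = executor.shutdown()
}

With that picture in mind, let's look at updateCurrentBuffer, the callback the timer invokes every interval.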

updateCurrentBuffer, the blockIntervalTimer callback
/** Change the buffer to which single records are added to. */
  private def updateCurrentBuffer(time: Long): Unit = {
    try {
      var newBlock: Block = null
      synchronized {
        if (currentBuffer.nonEmpty) {
          // Hand currentBuffer off to newBlockBuffer by reference, then point currentBuffer at a fresh buffer
          val newBlockBuffer = currentBuffer
          currentBuffer = new ArrayBuffer[Any]
          // Generate a unique blockId derived from the trigger time
          val blockId = StreamBlockId(receiverId, time - blockIntervalMs)
          // This listener callback is currently a no-op
          listener.onGenerateBlock(blockId)
          // Create the block
          newBlock = new Block(blockId, newBlockBuffer)
        }
      }

      // Put the block into the blocksForPushing queue
      if (newBlock != null) {
        blocksForPushing.put(newBlock)  // put is blocking when queue is full
      }
    } catch {
      case ie: InterruptedException =>
        logInfo("Block updating timer thread was interrupted")
      case e: Exception =>
        reportError("Error in block updating thread", e)
    }
  }

  Note the synchronized block in the code above, which guards against concurrent writes. The method does not actually copy the data in currentBuffer: it hands the buffer off by reference to newBlockBuffer and then reassigns currentBuffer to a fresh ArrayBuffer, which has the same effect as clearing it. It then derives a unique blockId from the trigger time, wraps the buffer in a Block, and puts that Block into the blocksForPushing queue.
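For clarity, here is the hand-off pattern in isolation; a minimal sketch, not the original source:

import scala.collection.mutable.ArrayBuffer

object BufferSwapSketch {
  @volatile private var currentBuffer = new ArrayBuffer[Any]

  // Hand the full buffer off by reference and point the field at a fresh,
  // empty buffer; no element-by-element copy takes place.
  def swapBuffer(): ArrayBuffer[Any] = synchronized {
    val old = currentBuffer
    currentBuffer = new ArrayBuffer[Any]
    old
  }
}

Because the swap runs under the same lock as the code that appends records, no record can be lost between the hand-off and the reset.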
  Next, let's look at the logic executed by the blockPushingThread, namely keepPushingBlocks.

keepPushingBlocks, run by blockPushingThread
private def keepPushingBlocks() {
    logInfo("Started block pushing thread")
    // Helper: is this BlockGenerator still generating blocks?
    def areBlocksBeingGenerated: Boolean = synchronized {
      state != StoppedGeneratingBlocks
    }

    try {
      // As long as blocks are still being generated, keep pulling blocks from the blocksForPushing queue
      while (areBlocksBeingGenerated) {
        // Poll the block at the head of the blocksForPushing queue,
        // using the blocking queue's poll with a 10ms timeout
        Option(blocksForPushing.poll(10, TimeUnit.MILLISECONDS)) match {
          // If a block was retrieved, push it
          case Some(block) => pushBlock(block)
          case None =>
        }
      }

      // At this point, state is StoppedGeneratingBlock. So drain the queue of to-be-pushed blocks.
      logInfo("Pushing out the last " + blocksForPushing.size() + " blocks")
      while (!blocksForPushing.isEmpty) {
        val block = blocksForPushing.take()
        logDebug(s"Pushing block $block")
        pushBlock(block)
        logInfo("Blocks left to push " + blocksForPushing.size())
      }
      logInfo("Stopped block pushing thread")
    } catch {
      case ie: InterruptedException =>
        logInfo("Block pushing thread was interrupted")
      case e: Exception =>
        reportError("Error in block pushing thread", e)
    }
  }

  As the code above shows, as long as the BlockGenerator is running and blocks are still being generated, the loop keeps taking Blocks from the blocksForPushing queue and pushing them. blocksForPushing is a blocking queue, and each poll blocks for at most 10ms by default.
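The reason for polling with a short timeout rather than a fully blocking take() is that the loop must periodically re-check the state flag so it can exit cleanly. A minimal sketch of this consumer pattern (names here are illustrative):

import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

object PollLoopSketch {
  val queue = new ArrayBlockingQueue[String](10)
  @volatile var running = true

  def consume(handle: String => Unit): Unit = {
    while (running) {
      // poll() returns null on timeout, hence the Option wrapper
      Option(queue.poll(10, TimeUnit.MILLISECONDS)) match {
        case Some(item) => handle(item)
        case None       => // timed out; loop around and re-check `running`
      }
    }
  }
}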
  Each Block taken from blocksForPushing is pushed through the BlockGeneratorListener's onPushBlock() callback. In ReceiverSupervisorImpl, onPushBlock() calls pushArrayBuffer() with the Block's data, and that call ultimately lands in pushAndReportBlock(), which we analyze next:

pushAndReportBlock in ReceiverSupervisorImpl
def pushAndReportBlock(
      receivedBlock: ReceivedBlock,
      metadataOption: Option[Any],
      blockIdOption: Option[StreamBlockId]
    ) {
    // Resolve the blockId
    val blockId = blockIdOption.getOrElse(nextBlockId)
    // Record the current system time
    val time = System.currentTimeMillis
    // Call storeBlock on the receivedBlockHandler to store the block in the BlockManager;
    // this is where the write-ahead log mechanism appears in the source
    val blockStoreResult = receivedBlockHandler.storeBlock(blockId, receivedBlock)
    logDebug(s"Pushed block $blockId in ${(System.currentTimeMillis - time)} ms")
    // Get the number of records stored in the block
    val numRecords = blockStoreResult.numRecords
    // Wrap the streamId and the block store result into a ReceivedBlockInfo
    val blockInfo = ReceivedBlockInfo(streamId, numRecords, metadataOption, blockStoreResult)
    // Send an AddBlock message to the ReceiverTracker via its RPC endpoint
    trackerEndpoint.askWithRetry[Boolean](AddBlock(blockInfo))
    logDebug(s"Reported block $blockId")
  }

  This method does two things: it calls receivedBlockHandler.storeBlock to save the Block to the BlockManager (and, if enabled, write it to the write-ahead log), and it wraps the stored Block's information into a ReceivedBlockInfo and sends it to the ReceiverTracker. Let's analyze the first part.
  The receivedBlockHandler component that stores blocks is created differently depending on whether the write-ahead log is enabled, as shown below:

private val receivedBlockHandler: ReceivedBlockHandler = {
    // The write-ahead log is controlled by spark.streaming.receiver.writeAheadLog.enable
    // and is disabled by default. If it is set to true, the ReceivedBlockHandler is a
    // WriteAheadLogBasedBlockHandler; otherwise a BlockManagerBasedBlockHandler is created
    if (WriteAheadLogUtils.enableReceiverLog(env.conf)) {
      if (checkpointDirOption.isEmpty) {
        throw new SparkException(
          "Cannot enable receiver write-ahead log without checkpoint directory set. " +
            "Please use streamingContext.checkpoint() to set the checkpoint directory. " +
            "See documentation for more details.")
      }
      new WriteAheadLogBasedBlockHandler(env.blockManager, receiver.streamId,
        receiver.storageLevel, env.conf, hadoopConf, checkpointDirOption.get)
    } else {
      new BlockManagerBasedBlockHandler(env.blockManager, receiver.storageLevel)
    }
  }

  It decides by reading the spark.streaming.receiver.writeAheadLog.enable parameter: if it is set to true, a WriteAheadLogBasedBlockHandler is created; otherwise a BlockManagerBasedBlockHandler is created.
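Enabling this path from an application is straightforward. A minimal sketch using the standard StreamingContext API; the app name and checkpoint path are placeholders:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("WALEnabledReceiver")  // hypothetical app name
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(conf, Seconds(1))
// Required: without a checkpoint directory, the code above throws a SparkException
ssc.checkpoint("hdfs://namenode:8020/spark/checkpoint")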
  Next, let's analyze the storeBlock method of WriteAheadLogBasedBlockHandler:

def storeBlock(blockId: StreamBlockId, block: ReceivedBlock): ReceivedBlockStoreResult = {
    var numRecords = None: Option[Long]
    // First serialize the block's data
    val serializedBlock = block match {
      case ArrayBufferBlock(arrayBuffer) =>
        numRecords = Some(arrayBuffer.size.toLong)
        blockManager.dataSerialize(blockId, arrayBuffer.iterator)
      case IteratorBlock(iterator) =>
        val countIterator = new CountingIterator(iterator)
        val serializedBlock = blockManager.dataSerialize(blockId, countIterator)
        numRecords = countIterator.count
        serializedBlock
      case ByteBufferBlock(byteBuffer) =>
        byteBuffer
      case _ =>
        throw new Exception(s"Could not push $blockId to block manager, unexpected block type")
    }

    // Store the data in the BlockManager. Note that the default persistence level here
    // carries _SER and _2: the data is serialized, and a replica is placed on another
    // executor's BlockManager for fault tolerance
    val storeInBlockManagerFuture = Future {
      val putResult =
        blockManager.putBytes(blockId, serializedBlock, effectiveStorageLevel, tellMaster = true)
      if (!putResult.map { _._1 }.contains(blockId)) {
        throw new SparkException(
          s"Could not store $blockId to block manager with storage level $storageLevel")
      }
    }

    // Write the block to the write-ahead log, using a Future to capture the write result
    val storeInWriteAheadLogFuture = Future {
      writeAheadLog.write(serializedBlock, clock.getTimeMillis())
    }

    // Wait for both writes to complete, combine the results, and return the WAL record handle
    val combinedFuture = storeInBlockManagerFuture.zip(storeInWriteAheadLogFuture).map(_._2)
    val walRecordHandle = Await.result(combinedFuture, blockStoreTimeout)
    WriteAheadLogBasedStoreResult(blockId, numRecords, walRecordHandle)
  }

  As the code above shows, storeBlock proceeds in two steps. First, the Block's data is serialized and stored in the BlockManager; the default persistence level here carries _SER and _2 semantics, so the data is serialized and a replica is placed on the BlockManager of an executor on another Worker node. Second, the Block's data is written to the write-ahead log (typically files on HDFS).
  The write-ahead log path therefore provides two fault-tolerance measures: the data is replicated to an executor on another Worker node (the default persistence level carrying _SER and _2), and it is also written to the write-ahead log. This double protection yields strong fault tolerance, at some cost in performance.
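The two writes above run in parallel, and the method blocks until both succeed. Here is the Future pattern in isolation; a minimal sketch with illustrative stand-in functions:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ParallelWriteSketch {
  def writeToBlockManager(): Unit = { /* store serialized bytes */ }
  def writeToWAL(): String = "wal-record-handle"  // stand-in for the WAL handle

  def storeBoth(): String = {
    val bmFuture  = Future { writeToBlockManager() }
    val walFuture = Future { writeToWAL() }
    // zip fails if either write fails; map(_._2) keeps only the WAL handle,
    // which is what storeBlock() wraps in its result
    Await.result(bmFuture.zip(walFuture).map(_._2), 30.seconds)
  }
}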
  Now for the second part: sending the ReceivedBlockInfo to the ReceiverTracker. Briefly, when the ReceiverTracker receives the AddBlock message, it checks whether the write-ahead log is enabled; if so, it also writes the Block's information to the write-ahead log, and otherwise it only keeps the information in memory.
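A simplified, self-contained sketch of that idea (not the exact Spark source; names are illustrative):

import scala.collection.mutable

case class BlockInfo(streamId: Int, blockId: String)

class SimpleBlockTracker(walEnabled: Boolean) {
  // Blocks reported by receivers but not yet allocated to a batch
  private val unallocated = mutable.Map[Int, mutable.Queue[BlockInfo]]()

  // Stand-in for appending a block-addition event to the tracker's own WAL;
  // when the WAL is disabled this is effectively a no-op that reports success
  private def writeToLog(info: BlockInfo): Boolean = {
    if (walEnabled) { /* append the event to the WAL, e.g. on HDFS */ }
    true
  }

  def addBlock(info: BlockInfo): Boolean = {
    val loggedOk = writeToLog(info)
    if (loggedOk) synchronized {
      unallocated.getOrElseUpdate(info.streamId, mutable.Queue.empty[BlockInfo]) += info
    }
    loggedOk
  }
}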
  To summarize: the data reception and storage path relies on the BlockGenerator component to buffer, package, and push the received data, ultimately delivering it to the BlockManager (and to the write-ahead log). The timer blockIntervalTimer fires every 200ms, drains currentBuffer, packages the records into a block, and puts it into the blocksForPushing queue; the blockPushingThread then keeps taking blocks from blocksForPushing (a blocking queue polled with a default 10ms timeout) and pushes them via the BlockGeneratorListener's onPushBlock() (which ultimately calls pushArrayBuffer). This stores the data in the BlockManager (and, if the write-ahead log is enabled, writes a copy there as well) and sends an AddBlock message to the ReceiverTracker to register the Block.
