Kafka Server-Side Source Code Analysis, Part 2 -- Starting the Acceptor Thread

Let's start with Kafka's main method:

def main(args: Array[String]): Unit = {
    try {
      val serverProps = getPropsFromArgs(args)
      // create the KafkaServerStartable object
      val kafkaServerStartable = KafkaServerStartable.fromProps(serverProps)

      // attach shutdown handler to catch control-c
      Runtime.getRuntime().addShutdownHook(new Thread() {
        override def run() = {
          kafkaServerStartable.shutdown
        }
      })

      // start the Kafka server
      kafkaServerStartable.startup
      // block until shutdown
      kafkaServerStartable.awaitShutdown
    }
    catch {
      case e: Throwable =>
        fatal(e)
        System.exit(1)
    }
    System.exit(0)
  }
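Notice that main does not return after startup: it parks in awaitShutdown until the shutdown hook (triggered by Ctrl-C or a kill) runs. Below is a minimal, self-contained sketch of that startup / awaitShutdown / shutdown contract; ToyServer and its latch are illustrative stand-ins, not Kafka's actual implementation.

import java.util.concurrent.CountDownLatch

// Simplified stand-in for the startup/awaitShutdown/shutdown contract that
// main() relies on: awaitShutdown blocks until shutdown releases a latch,
// so the JVM stays alive after startup returns.
class ToyServer {
  private val shutdownLatch = new CountDownLatch(1)

  def startup(): Unit = {
    // start background threads here; this method returns immediately
    println("server started")
  }

  def shutdown(): Unit = {
    // stop background threads, then release whoever is blocked in awaitShutdown
    println("server stopped")
    shutdownLatch.countDown()
  }

  // blocks the calling thread (main) until shutdown() has been called
  def awaitShutdown(): Unit = shutdownLatch.await()
}

object ToyServerMain {
  def main(args: Array[String]): Unit = {
    val server = new ToyServer
    Runtime.getRuntime.addShutdownHook(new Thread() {
      override def run(): Unit = server.shutdown()
    })
    server.startup()
    server.awaitShutdown() // main thread parks here until Ctrl-C / kill
  }
}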

Next, let's look at the fromProps and startup methods of KafkaServerStartable:

object KafkaServerStartable {
  // build a KafkaServerStartable object from the supplied properties
  def fromProps(serverProps: Properties) = {
    val reporters = KafkaMetricsReporter.startReporters(new VerifiableProperties(serverProps))
    new KafkaServerStartable(KafkaConfig.fromProps(serverProps), reporters)
  }
}

class KafkaServerStartable(val serverConfig: KafkaConfig, reporters: Seq[KafkaMetricsReporter]) extends Logging {

  // when a KafkaServerStartable is constructed, it creates the underlying KafkaServer
  private val server = new KafkaServer(serverConfig, kafkaMetricsReporters = reporters)

  def this(serverConfig: KafkaConfig) = this(serverConfig, Seq.empty)
  def startup() {
    try {
      // simply delegate to KafkaServer's startup() method
      server.startup()
    }
    catch {
      case e: Throwable =>
        fatal("Fatal error during KafkaServerStartable startup. Prepare to shutdown", e)
        // KafkaServer already calls shutdown() internally, so this is purely for logging & the exit code
        System.exit(1)
    }
  }
}

Next, we trace into KafkaServer's startup() method:

def startup() {
   try {

     .....

     // If you read through this whole method, the names alone tell you that it
     // does a lot of important work. Since we are analyzing the source in a
     // scenario-driven way, what we care about right now is how the broker
     // handles requests coming from clients, so the lines that matter here
     // are these two:

       socketServer = new SocketServer(config, metrics, kafkaMetricsTime)
       socketServer.startup()

     .....

 }

Then look at SocketServer's startup method:

 /**
   * Start the socket server
   */
  def startup() {
    this.synchronized {

      connectionQuotas = new ConnectionQuotas(maxConnectionsPerIp, maxConnectionsPerIpOverrides)

      val sendBufferSize = config.socketSendBufferBytes
      val recvBufferSize = config.socketReceiveBufferBytes
      val brokerId = config.brokerId

      var processorBeginIndex = 0
      endpoints.values.foreach { endpoint =>
        val protocol = endpoint.protocolType
        val processorEndIndex = processorBeginIndex + numProcessorThreads

        for (i <- processorBeginIndex until processorEndIndex)
          processors(i) = newProcessor(i, connectionQuotas, protocol)
          
        // create the Acceptor for this endpoint
        val acceptor = new Acceptor(endpoint, sendBufferSize, recvBufferSize, brokerId,
          processors.slice(processorBeginIndex, processorEndIndex), connectionQuotas)
        acceptors.put(endpoint, acceptor)
        
        // create the acceptor thread and start it
        Utils.newThread("kafka-socket-acceptor-%s-%d".format(protocol.toString, endpoint.port), acceptor, false).start()
        
        acceptor.awaitStartup()

        processorBeginIndex = processorEndIndex
      }
    }

    newGauge("NetworkProcessorAvgIdlePercent",
      new Gauge[Double] {
        def value = allMetricNames.map( metricName =>
          metrics.metrics().get(metricName).value()).sum / totalProcessorThreads
      }
    )

    info("Started " + acceptors.size + " acceptor threads")
  }
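One detail worth pausing on is how the shared processors array is divided up: for each endpoint, processorBeginIndex and processorEndIndex mark out a contiguous slice of numProcessorThreads processors, and only that slice is handed to the endpoint's Acceptor. The little demo below (the endpoint names and thread count are made up for illustration; this is not Kafka code) shows the arithmetic:

// Quick, self-contained illustration of how processorBeginIndex /
// processorEndIndex carve the shared processors array into one contiguous
// slice per endpoint.
object ProcessorSlicingDemo {
  def main(args: Array[String]): Unit = {
    val endpoints = Seq("PLAINTEXT://:9092", "SSL://:9093")
    val numProcessorThreads = 3                          // e.g. num.network.threads
    val totalProcessorThreads = endpoints.size * numProcessorThreads
    val processors = new Array[String](totalProcessorThreads)

    var processorBeginIndex = 0
    endpoints.foreach { endpoint =>
      val processorEndIndex = processorBeginIndex + numProcessorThreads
      for (i <- processorBeginIndex until processorEndIndex)
        processors(i) = s"processor-$i for $endpoint"
      // the Acceptor for this endpoint only ever sees its own slice
      val slice = processors.slice(processorBeginIndex, processorEndIndex)
      println(s"$endpoint -> ${slice.mkString(", ")}")
      processorBeginIndex = processorEndIndex
    }
  }
}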

Since the Acceptor thread's start method has been called, the next thing to look at is the Acceptor's run method:

 /**
   * Accept loop that checks for new connection attempts
   */
  def run() {

    // register for the OP_ACCEPT event on the NIO selector
    serverChannel.register(nioSelector, SelectionKey.OP_ACCEPT)
    // count down the startup latch to record that the thread has finished starting up
    startupComplete()
    try {
      var currentProcessor = 0
      while (isRunning) {
        try {
          val ready = nioSelector.select(500)
          if (ready > 0) {
            val keys = nioSelector.selectedKeys()
            val iter = keys.iterator()
            // iterate over all the selected keys
            while (iter.hasNext && isRunning) {
              try {
                val key = iter.next
                iter.remove()
                // a new connection has arrived
                if (key.isAcceptable)
                  // hand the connection to accept() for unified handling;
                  // exactly how it is handled is analyzed in the next article.
                  accept(key, processors(currentProcessor))
                else
                  throw new IllegalStateException("Unrecognized key state for acceptor thread.")

                // round robin to the next processor thread
                currentProcessor = (currentProcessor + 1) % processors.length
              } catch {
                case e: Throwable => error("Error while accepting connection", e)
              }
            }
          }
        }
        catch {
          // We catch all the throwables to prevent the acceptor thread from exiting on exceptions due
          // to a select operation on a specific channel or a bad request. We don't want
          // the broker to stop responding to requests from other clients in these scenarios.
          case e: ControlThrowable => throw e
          case e: Throwable => error("Error occurred", e)
        }
      }
    } finally {
      debug("Closing server socket and selector.")
      swallowError(serverChannel.close())
      swallowError(nioSelector.close())
      shutdownComplete()
    }
  }
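Note that startupComplete() at the top of run pairs with the acceptor.awaitStartup() call we saw in SocketServer.startup: the thread that starts the Acceptor blocks until the Acceptor thread signals that it is ready. Here is a minimal sketch of that handshake, with simplified names rather than the actual AbstractServerThread code:

import java.util.concurrent.CountDownLatch

// Simplified version of the start-up handshake between the thread that creates
// the acceptor and the acceptor thread itself.
class ToyAcceptor extends Runnable {
  private val startupLatch = new CountDownLatch(1)

  // called by the creating thread right after Thread.start()
  def awaitStartup(): Unit = startupLatch.await()

  // called by the acceptor thread once its selector is set up
  private def startupComplete(): Unit = startupLatch.countDown()

  override def run(): Unit = {
    // ... register OP_ACCEPT with the selector here ...
    startupComplete()   // unblock whoever is waiting in awaitStartup()
    // ... enter the select loop ...
  }
}

object ToyAcceptorDemo {
  def main(args: Array[String]): Unit = {
    val acceptor = new ToyAcceptor
    new Thread(acceptor, "toy-acceptor").start()
    acceptor.awaitStartup()   // returns only after run() has signalled readiness
    println("acceptor is up")
  }
}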

At this point, the Acceptor's job is essentially done.
So far we have seen that when Kafka starts up, it launches an Acceptor thread, and that Acceptor thread initializes a ServerSocketChannel.

That ServerSocketChannel is then registered with a Selector, and an endless loop keeps checking the Selector for ready events. When a connection request comes in, the Acceptor calls accept to handle it; exactly how that connection is handled is what we will dig into in the next article.
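To make the whole flow concrete, here is a stripped-down, standalone sketch of the same NIO pattern: a non-blocking ServerSocketChannel registered for OP_ACCEPT, a select loop, and round-robin hand-off of accepted connections. The port and the "processors" are placeholders for illustration, not Kafka's actual code.

import java.net.InetSocketAddress
import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel}

// Minimal acceptor-style select loop: register OP_ACCEPT, poll the selector,
// accept new connections, and assign them to processors in round-robin order.
object MiniAcceptor {
  def main(args: Array[String]): Unit = {
    val processors = Array("processor-0", "processor-1", "processor-2")
    val nioSelector = Selector.open()
    val serverChannel = ServerSocketChannel.open()
    serverChannel.configureBlocking(false)
    serverChannel.socket().bind(new InetSocketAddress(9999))
    serverChannel.register(nioSelector, SelectionKey.OP_ACCEPT)

    var currentProcessor = 0
    while (true) {
      val ready = nioSelector.select(500)
      if (ready > 0) {
        val iter = nioSelector.selectedKeys().iterator()
        while (iter.hasNext) {
          val key = iter.next()
          iter.remove()
          if (key.isAcceptable) {
            // accept the new connection on this thread...
            val socketChannel = serverChannel.accept()
            socketChannel.configureBlocking(false)
            // ...then hand it to a processor; here we just print the assignment
            println(s"connection ${socketChannel.getRemoteAddress} -> ${processors(currentProcessor)}")
            currentProcessor = (currentProcessor + 1) % processors.length
          }
        }
      }
    }
  }
}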
