Reading the Kafka Source Code (4): Network Connections

Broker

Kafka implements its own socket server on top of Java NIO, following the standard multi-threaded reactor pattern.
The startup flow is as follows:

      // for each configured listener (endpoint)
      config.listeners.foreach { endpoint =>
        val listenerName = endpoint.listenerName
        val securityProtocol = endpoint.securityProtocol
        val processorEndIndex = processorBeginIndex + numProcessorThreads
        // each endpoint is served by several processors plus one acceptor
        for (i <- processorBeginIndex until processorEndIndex)
          processors(i) = newProcessor(i, connectionQuotas, listenerName, securityProtocol, memoryPool)

        val acceptor = new Acceptor(endpoint, sendBufferSize, recvBufferSize, brokerId,
          processors.slice(processorBeginIndex, processorEndIndex), connectionQuotas)
        acceptors.put(endpoint, acceptor)
        KafkaThread.nonDaemon(s"kafka-socket-acceptor-$listenerName-$securityProtocol-${endpoint.port}", acceptor).start()
        acceptor.awaitStartup()

        processorBeginIndex = processorEndIndex
      }
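The index arithmetic above — each endpoint takes a contiguous slice of the shared processors array, and processorBeginIndex then advances past it — can be sketched in plain Java. This is an illustration only; the endpoint names and thread count below are invented:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProcessorAssignment {
    // Mimics the SocketServer startup: each endpoint gets a contiguous
    // half-open slice [begin, end) of the shared processors array,
    // plus one acceptor (not modeled here).
    public static Map<String, int[]> assign(List<String> endpoints, int numProcessorThreads) {
        Map<String, int[]> slices = new LinkedHashMap<>();
        int begin = 0;
        for (String endpoint : endpoints) {
            int end = begin + numProcessorThreads;          // processorEndIndex
            slices.put(endpoint, new int[]{begin, end});    // like processors.slice(begin, end)
            begin = end;                                    // processorBeginIndex = processorEndIndex
        }
        return slices;
    }

    public static void main(String[] args) {
        Map<String, int[]> s = assign(Arrays.asList("PLAINTEXT://:9092", "SSL://:9093"), 3);
        System.out.println(Arrays.toString(s.get("PLAINTEXT://:9092"))); // prints [0, 3]
        System.out.println(Arrays.toString(s.get("SSL://:9093")));       // prints [3, 6]
    }
}
```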

The core loop of the acceptor thread is shown below. The acceptor is responsible for handling newly established connections.

   while (isRunning) {
        try {
          // block (up to 500 ms) until there are connection events to handle
          val ready = nioSelector.select(500)
          if (ready > 0) {
            val keys = nioSelector.selectedKeys()
            val iter = keys.iterator()
            while (iter.hasNext && isRunning) {
              try {
                val key = iter.next
                iter.remove()
                if (key.isAcceptable)
                  // essentially calls processor.accept(socketChannel)
                  accept(key, processors(currentProcessor))
                else
                  throw new IllegalStateException("Unrecognized key state for acceptor thread.")

                // round robin to the next processor thread
                currentProcessor = (currentProcessor + 1) % processors.length
              } catch {
                // error handling elided
              }
            }
          }
        }
        catch {
          // error handling elided
        }
      }
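The accept-and-round-robin flow above can be reproduced with plain Java NIO. The following is a minimal sketch, not Kafka's code: a selector handling only OP_ACCEPT hands each accepted channel to one of several per-processor queues in round-robin order (the queues and method names are invented for the example):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Queue;

public class MiniAcceptor {
    // Accepts numClients connections and distributes them round-robin
    // over numProcessors queues; returns the size of each queue.
    public static List<Integer> acceptClients(int numClients, int numProcessors) throws IOException {
        List<Queue<SocketChannel>> processors = new ArrayList<>();
        for (int i = 0; i < numProcessors; i++)
            processors.add(new ArrayDeque<>());

        Selector nioSelector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(nioSelector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // Connect the clients up front so the accept loop has work to do.
        List<SocketChannel> clients = new ArrayList<>();
        for (int i = 0; i < numClients; i++)
            clients.add(SocketChannel.open(new InetSocketAddress("127.0.0.1", port)));

        int currentProcessor = 0;
        int accepted = 0;
        while (accepted < numClients) {
            if (nioSelector.select(500) == 0)
                continue;
            Iterator<SelectionKey> iter = nioSelector.selectedKeys().iterator();
            while (iter.hasNext()) {
                SelectionKey key = iter.next();
                iter.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    if (ch == null)
                        continue;
                    ch.configureBlocking(false);
                    processors.get(currentProcessor).add(ch); // hand off, like processor.accept
                    // round robin to the next processor, as in the loop above
                    currentProcessor = (currentProcessor + 1) % numProcessors;
                    accepted++;
                }
            }
        }
        List<Integer> sizes = new ArrayList<>();
        for (Queue<SocketChannel> q : processors)
            sizes.add(q.size());
        for (SocketChannel c : clients) c.close();
        for (Queue<SocketChannel> q : processors)
            for (SocketChannel ch : q) ch.close();
        nioSelector.close();
        server.close();
        return sizes;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(acceptClients(3, 2)); // prints [2, 1]
    }
}
```

With 3 clients and 2 processors, the counter visits 0, 1, 0, so the first queue ends up with two channels and the second with one.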

A processor plays the role of a worker-group thread in Netty: once a connection has been established, the channel is handed over to a processor, which then services all subsequent reads and writes on it. The core loop of the processor thread is:

      while (isRunning) {
        try {
          // setup any new connections that have been queued up
          configureNewConnections()
          // register any new responses for writing
          processNewResponses()
          poll()
          processCompletedReceives()
          processCompletedSends()
          processDisconnected()
        } catch {
          // error handling elided
        }
      }
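configureNewConnections() is how the acceptor and processor threads hand a channel over: the acceptor only enqueues it, and the processor registers it with its own private selector at the start of its next loop iteration, so each selector is only ever touched by its owning thread. A minimal Java sketch of that handoff (invented names, with a Pipe standing in for a socket):

```java
import java.io.IOException;
import java.nio.channels.Pipe;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniProcessor {
    private final BlockingQueue<SelectableChannel> newConnections = new LinkedBlockingQueue<>();
    private final Selector selector; // private to this processor

    public MiniProcessor() throws IOException {
        this.selector = Selector.open();
    }

    // Called from the acceptor thread, like Processor.accept(socketChannel);
    // it only enqueues and never touches the selector directly.
    public void accept(SelectableChannel channel) {
        newConnections.add(channel);
    }

    // First phase of the processor loop: drain the queue and register
    // each queued channel for reads on this processor's own selector.
    public int configureNewConnections() throws IOException {
        int registered = 0;
        SelectableChannel channel;
        while ((channel = newConnections.poll()) != null) {
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ);
            registered++;
        }
        return registered;
    }

    public int registeredChannels() {
        return selector.keys().size();
    }

    public static void main(String[] args) throws Exception {
        MiniProcessor processor = new MiniProcessor();
        Pipe pipe = Pipe.open(); // a selectable read end stands in for a socket
        processor.accept(pipe.source());
        System.out.println(processor.configureNewConnections()); // prints 1
        System.out.println(processor.registeredChannels());      // prints 1
    }
}
```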

Completed receives are processed as follows:

  private def processCompletedReceives() {
    selector.completedReceives.asScala.foreach { receive =>
      try {
        openOrClosingChannel(receive.source) match {
          case Some(channel) =>
            val header = RequestHeader.parse(receive.payload)
            val context = new RequestContext(header, receive.source, channel.socketAddress,
              channel.principal, listenerName, securityProtocol)
            // build a RequestChannel.Request object
            val req = new RequestChannel.Request(processor = id, context = context,
              startTimeNanos = time.nanoseconds, memoryPool, receive.payload, requestChannel.metrics)
            // enqueue it on the requestChannel's internal queue
            // (there is a single requestChannel per socket server)
            requestChannel.sendRequest(req)
            // mute the channel so no further events are processed on it
            // until this request has been handled
            selector.mute(receive.source)
          case None =>
            // channel already closed; handling elided
        }
      } catch {
        // error handling elided
      }
    }
  }
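requestChannel.sendRequest puts the parsed request on a bounded queue that decouples the network (processor) threads from the request-handler threads. A minimal sketch of that handoff, with a String standing in for the request object and a capacity playing the role Kafka's queued.max.requests setting plays:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class MiniRequestChannel {
    // Bounded queue shared by all processors; handler threads drain it.
    private final BlockingQueue<String> requestQueue;

    public MiniRequestChannel(int queueSize) {
        this.requestQueue = new ArrayBlockingQueue<>(queueSize);
    }

    // Processor side: blocks when the handlers fall behind (back-pressure).
    public void sendRequest(String request) throws InterruptedException {
        requestQueue.put(request);
    }

    // Handler side: what a request-handler thread would call in its loop.
    public String receiveRequest(long timeoutMs) throws InterruptedException {
        return requestQueue.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        MiniRequestChannel channel = new MiniRequestChannel(500);
        channel.sendRequest("request-1");
        System.out.println(channel.receiveRequest(100)); // prints request-1
    }
}
```

The bounded capacity is what gives the broker back-pressure: when handler threads cannot keep up, sendRequest blocks and the network threads stop reading new requests.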

Completed sends (at this point the response bytes have already gone out on the wire) are handled as follows:

  private def processCompletedSends() {
    selector.completedSends.asScala.foreach { send =>
      try {
        val resp = inflightResponses.remove(send.destination).getOrElse {
          throw new IllegalStateException(s"Send for ${send.destination} completed, but not in `inflightResponses`")
        }
        updateRequestMetrics(resp)
        // unmute the channel so it can be read from again
        selector.unmute(send.destination)
      } catch {
        // error handling elided
      }
    }
  }
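The mute in processCompletedReceives and the unmute here come down to clearing and restoring the channel's OP_READ interest, so that at most one request per connection is in flight at a time and per-connection ordering is preserved. A minimal sketch (not Kafka's implementation) using a Pipe as the readable channel:

```java
import java.io.IOException;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class MuteDemo {
    // mute: stop delivering read events for this channel.
    public static void mute(SelectionKey key) {
        key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
    }

    // unmute: resume delivering read events.
    public static void unmute(SelectionKey key) {
        key.interestOps(key.interestOps() | SelectionKey.OP_READ);
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        SelectionKey key = pipe.source().register(selector, SelectionKey.OP_READ);

        mute(key);   // request handed off: stop reading from this channel
        System.out.println((key.interestOps() & SelectionKey.OP_READ) != 0); // prints false
        unmute(key); // response sent: resume reading
        System.out.println((key.interestOps() & SelectionKey.OP_READ) != 0); // prints true
        selector.close();
    }
}
```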

Java client

Under the hood the client uses SocketChannel from java.nio.channels, wrapped as PlaintextTransportLayer and SslTransportLayer respectively, together with the NIO Selector (wrapped in Kafka's own org.apache.kafka.common.network.Selector). The client does not use Netty.
Step one: collect all the SelectionKeys that are ready:

 Set<SelectionKey> readyKeys = this.nioSelector.selectedKeys();

Step two: handle each ready key according to its state:

for (SelectionKey key : determineHandlingOrder(selectionKeys)) {
    KafkaChannel channel = channel(key);
    // handle completion of the TCP connect
    if (key.isConnectable()) {
        channel.finishConnect();
    }
    // handle writes
    if (channel.ready() && key.isWritable()) {
        send = channel.write();
    }
    // handle reads
    if (channel.ready() && key.isReadable()) {
        NetworkReceive networkReceive;
        while ((networkReceive = channel.read()) != null) {
            addToStagedReceives(channel, networkReceive);
        }
    }
}
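The connect/write/read phases of this loop can be exercised end to end with a small self-contained example. The sketch below is not Kafka's code: the names are invented and a trivial echo server stands in for a broker, but the client side drives one channel through isConnectable, isWritable, and isReadable exactly as the loop above does:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class MiniNioClient {
    public static String roundTrip(String msg) throws Exception {
        // Blocking echo server on an ephemeral port, run in a helper thread.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        Thread echo = new Thread(() -> {
            try (SocketChannel ch = server.accept()) {
                ByteBuffer buf = ByteBuffer.allocate(256);
                ch.read(buf);
                buf.flip();
                ch.write(buf); // echo the bytes back
            } catch (IOException ignored) { }
        });
        echo.start();

        Selector selector = Selector.open();
        SocketChannel client = SocketChannel.open();
        client.configureBlocking(false);
        // On loopback the non-blocking connect may complete immediately,
        // in which case we skip straight to the write phase.
        boolean connected = client.connect(new InetSocketAddress("127.0.0.1", port));
        client.register(selector, connected ? SelectionKey.OP_WRITE : SelectionKey.OP_CONNECT);

        ByteBuffer out = ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8));
        ByteBuffer in = ByteBuffer.allocate(256);
        boolean done = false;
        while (!done) {
            selector.select(500);
            Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
            while (iter.hasNext()) {
                SelectionKey key = iter.next();
                iter.remove();
                if (key.isConnectable() && client.finishConnect()) {
                    key.interestOps(SelectionKey.OP_WRITE);    // connected: start writing
                } else if (key.isWritable()) {
                    client.write(out);
                    if (!out.hasRemaining())
                        key.interestOps(SelectionKey.OP_READ); // sent: wait for the reply
                } else if (key.isReadable()) {
                    if (client.read(in) > 0)
                        done = true;                           // got the echo back
                }
            }
        }
        echo.join();
        client.close();
        selector.close();
        server.close();
        in.flip();
        byte[] bytes = new byte[in.remaining()];
        in.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping")); // prints ping
    }
}
```

Kafka's real loop differs in that it multiplexes many channels, layers a TransportLayer (plaintext or SSL) over the raw SocketChannel, and frames messages with a size prefix, but the key-dispatch structure is the same.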

The consumer keeps performing network exchanges like the following (DEBUG log excerpt):


15:37:27.278 [kafka_spout:5-SingleThreadSpoutExecutors] DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending coordinator request for group GDL to broker 10.237.64.46:9092 (id: 0 rack: null)
15:37:27.280 [kafka_spout:5-SingleThreadSpoutExecutors] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received group coordinator response ClientResponse(receivedTimeMs=1532936247280, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@3e6f968c, request=RequestSend(header={api_key=10,api_version=0,correlation_id=346,client_id=consumer-1}, body={group_id=GDL}), createdTimeMs=1532936247278, sendTimeMs=1532936247278), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})

That is, the consumer sends a coordinator-lookup request (api_key=10) for its group to a broker and receives the ClientResponse. Note the error_code=15 (COORDINATOR_NOT_AVAILABLE) in this response: no coordinator was available for the group yet, so the client will back off and retry.
