4.2.10 Kafka Source Code Analysis: Reading Environment Setup, Broker Startup Flow, Topic Creation Flow, Producer Flow, and Consumer Flow

Table of Contents

4.1 Kafka Source Code Analysis: Setting Up a Source Reading Environment

4.1.1 Installing and Configuring Gradle

4.1.2 Installing and Configuring Scala

4.1.3 IDEA Configuration

4.1.4 Preparing the Source Code

4.2 Kafka Source Code Analysis: Broker Startup Flow

4.2.1 Starting Kafka

4.2.2 Reading the kafka.Kafka Source

4.3 Kafka Source Code Analysis: Topic Creation Flow

4.3.1 Topic Creation

4.3.2 Manual Creation

4.3.3 The Topic Command Entry Point

4.3.4 Creating a Topic

4.4 Kafka Source Code Analysis: Producer Flow

4.4.1 Producer Example

4.4.1.1 Synchronous Send

4.4.1.2 Asynchronous Send

4.4.2 KafkaProducer Instantiation

4.4.3 The Message Send Process

4.4.3.1 Interceptors

4.4.3.2 Interceptor Core Logic

4.4.3.3 The Five Steps of Sending

4.4.3.4 Metadata Update Mechanism

4.5 Kafka Source Code Analysis: Consumer Flow

4.5.1 Consumer Example

4.5.2 KafkaConsumer Instantiation

4.5.3 Subscribing to Topics

4.5.4 The Message Consumption Process

4.5.4.1 poll

4.5.4.2 pollOnce

4.5.5 Automatic Commit

4.5.6 Manual Commit

4.5.6.1 Synchronous Commit

4.5.6.2 Asynchronous Commit


 

4.1 Kafka Source Code Analysis: Setting Up a Source Reading Environment

First download the source code: http://archive.apache.org/dist/kafka/1.0.2/kafka-1.0.2-src.tgz
gradle-4.8.1 download: https://services.gradle.org/distributions/gradle-4.8.1-bin.zip
Scala 2.12.12 download: https://downloads.lightbend.com/scala/2.12.12/scala-2.12.12.msi

 

4.1.1 Installing and Configuring Gradle

Extract gradle-4.8.1-bin.zip to a directory.
Configure the environment variables: GRADLE_HOME points to the root of the extracted Gradle directory, and GRADLE_USER_HOME points to the location of the local Gradle repository.

Add the Gradle bin directory to the PATH environment variable.

Go to the GRADLE_USER_HOME directory and add an init.gradle file that configures Gradle's repository mirrors.
Contents of init.gradle:

allprojects {
    repositories {
        maven {
            url 'https://maven.aliyun.com/repository/public/'
        }
        maven {
            url
            'https://maven.aliyun.com/nexus/content/repositories/google'
        }
        maven {
            url 'https://maven.aliyun.com/nexus/content/groups/public/'
        }
        maven {
            url
            'https://maven.aliyun.com/nexus/content/repositories/jcenter'
        }
        all {
            ArtifactRepository repo ->
            if (repo instanceof MavenArtifactRepository) {
                def url = repo.url.toString()
                if (url.startsWith('https://repo.maven.apache.org/maven2/')
                        || url.startsWith('https://repo.maven.org/maven2') 
                        || url.startsWith('https://repo1.maven.org/maven2') 
                        || url.startsWith('https://jcenter.bintray.com/')) {
                    //project.logger.lifecycle "Repository ${repo.url} replaced by $REPOSITORY_URL. "
                    remove repo
                }
            }
        }
    }
    buildscript {
        repositories {
            maven {
                url 'https://maven.aliyun.com/repository/public/'
            }
            maven {
                url
                'https://maven.aliyun.com/nexus/content/repositories/google'
            }
            maven {
                url
                'https://maven.aliyun.com/nexus/content/groups/public/'
            }
            maven {
                url
                'https://maven.aliyun.com/nexus/content/repositories/jcenter'
            }
            all {
                ArtifactRepository repo ->
                if (repo instanceof MavenArtifactRepository) {
                    def url = repo.url.toString()
                    if (url.startsWith('https://repo1.maven.org/maven2') 
                            || url.startsWith('https://jcenter.bintray.com/')) {
                        //project.logger.lifecycle "Repository ${repo.url} replaced by $REPOSITORY_URL. "
                        remove repo
                    }
                }
            }
        }
    }
}

Save and exit, open a command prompt, and run a Gradle command such as gradle -v; if the version information is printed, the configuration succeeded.

 

4.1.2 Installing and Configuring Scala

Run the installer (double-click the .msi).

Add the Scala bin directory to PATH.

Open a command prompt and type scala to verify the installation.

Type :quit to leave the Scala interactive shell.

 

 

4.1.3 IDEA Configuration

Install the Scala plugin in IDEA:

 

 

 

4.1.4 Preparing the Source Code

Extract the source archive.

Open a command prompt, change into the kafka-1.0.2-src directory, and run: gradle
When it finishes, run gradle idea (note: do not use the generated gradlew.bat for these steps).
Import the source into IDEA:

Choose Gradle.

 

 

4.2 Kafka Source Code Analysis: Broker Startup Flow

4.2.1 Starting Kafka

The command is: kafka-server-start.sh /opt/kafka_2.12-1.0.2/config/server.properties
The content of kafka-server-start.sh is as follows:

if [ $# -lt 1 ];
then
	echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
	exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"

 

 

4.2.2 Reading the kafka.Kafka Source

def main(args: Array[String]): Unit = {
    try {
        // 读取启动配置
        val serverProps = getPropsFromArgs(args)
        // 封装KafkaServer
        val kafkaServerStartable = KafkaServerStartable.fromProps(serverProps)
        // register signal handler to log termination due to SIGTERM, SIGHUP and SIGINT (control-c)
        registerLoggingSignalHandler()
        // attach shutdown handler to catch terminating signals as well as normal termination
        // 增加回调监听
        Runtime.getRuntime().addShutdownHook(new Thread("kafka-shutdown-hook") {
            override def run(): Unit = kafkaServerStartable.shutdown()
        })
        // 启动服务
        kafkaServerStartable.startup()
        // 等待
        kafkaServerStartable.awaitShutdown()
    }
    catch {
        case e: Throwable =>
          fatal(e)
          Exit.exit(1)
    }
    Exit.exit(0)
}

The kafkaServerStartable above wraps a KafkaServer; it is ultimately the KafkaServer that executes startup:

class KafkaServerStartable(val serverConfig: KafkaConfig, reporters:
Seq[KafkaMetricsReporter]) extends Logging {
 private val server = new KafkaServer(serverConfig, kafkaMetricsReporters = reporters)
 def this(serverConfig: KafkaConfig) = this(serverConfig, Seq.empty)
 // 启动
 def startup() {
  try server.startup()
  catch {
   case _: Throwable =>
    // KafkaServer.startup() calls shutdown() in case of exceptions, so we invoke `exit` to set the status code
    fatal("Exiting Kafka.")
    Exit.exit(1)
 }
}
 // 关闭
 def shutdown() {
  try server.shutdown()
  catch {
   case _: Throwable =>
    fatal("Halting Kafka.")
    Exit.halt(1)
 }
}
 def setServerState(newState: Byte) {
  server.brokerState.newState(newState)
}
 def awaitShutdown(): Unit = server.awaitShutdown()
}

Next, look at KafkaServer's startup method. It starts a lot of components that will all be used later; comments have been added to the code as well.

def startup() {
  try {
   info("starting")
   // 是否关闭
   if (isShuttingDown.get)
    throw new IllegalStateException("Kafka server is still shutting down, cannot re start!")
   // 是否已启动完成
   if (startupComplete.get)
    return
   // 开始启动,并设置已启动变量
   val canStartup = isStartingUp.compareAndSet(false, true)
   if (canStartup) {
    // 设置broker为启动状态
    brokerState.newState(Starting)
    /* start scheduler */
    // 启动定时器,  
    kafkaScheduler.startup()
    /* setup zookeeper */
    // 初始化zookeeper配置
    zkUtils = initZk()
    /* Get or create cluster_id */
    // 在zookeeper上生成集群Id
    _clusterId = getOrGenerateClusterId(zkUtils)
    info(s"Cluster ID = $clusterId")
    /* generate brokerId */
    // 从配置文件获取brokerId
    val (brokerId, initialOfflineDirs) = getBrokerIdAndOfflineDirs
    config.brokerId = brokerId
    // 日志上下文
    logContext = new LogContext(s"[KafkaServer id=${config.brokerId}] ")
    this.logIdent = logContext.logPrefix
    /* create and configure metrics */
    // 通过配置文件中的MetricsReporter的实现类来创建实例
    val reporters = config.getConfiguredInstances(KafkaConfig.MetricReporterClassesProp, classOf[MetricsReporter],
     Map[String, AnyRef](KafkaConfig.BrokerIdProp -> (config.brokerId.toString)).asJava)
    // 默认监控会增加jmx
    reporters.add(new JmxReporter(jmxPrefix))
    val metricConfig = KafkaServer.metricConfig(config)
    // 创建metric对象
    metrics = new Metrics(metricConfig, reporters, time, true)
    /* register broker metrics */
    _brokerTopicStats = new BrokerTopicStats
    // Initialize the quota manager, which can throttle the produce or consume rate of each producer or consumer
    quotaManagers = QuotaFactory.instantiate(config, metrics, time, threadNamePrefix.getOrElse(""))
    // 增加监听器
    notifyClusterListeners(kafkaMetricsReporters ++ reporters.asScala)
    logDirFailureChannel = new LogDirFailureChannel(config.logDirs.size)
    // Create the log manager. During creation it checks whether a .kafka_cleanshutdown file exists under the log directories; if not, the broker enters the RecoveringFromUncleanShutdown state
    /* start log manager */
    logManager = LogManager(config, initialOfflineDirs, zkUtils, brokerState, kafkaScheduler, time, brokerTopicStats, logDirFailureChannel)
    logManager.startup()
    // 创建元数据管理组件
    metadataCache = new MetadataCache(config.brokerId)
    // 创建凭证提供者组件
    credentialProvider = new CredentialProvider(config.saslEnabledMechanisms)
    // Create and start the socket server acceptor threads so that the bound port is known.
    // Delay starting processors until the end of the initialization sequence to ensure
    // that credentials have been loaded before processing authentications.
    // Create and start the SocketServer component; once started, it begins accepting requests
    socketServer = new SocketServer(config, metrics, time, credentialProvider)
    socketServer.startup(startupProcessors = false)
    // 创建一个副本管理组件,并启动该组件
    /* start replica manager */
    replicaManager = createReplicaManager(isShuttingDown)
    replicaManager.startup()
    // 创建kafka控制器,并启动。该控制器启动后broker会尝试去zk创建节点竞争成为 controller
    /* start kafka controller */
    kafkaController = new KafkaController(config, zkUtils, time, metrics, threadNamePrefix)
    kafkaController.startup()
    // 创建一个集群管理组件
    adminManager = new AdminManager(config, metrics, metadataCache, zkUtils)
    // 创建群组协调器,并且启动
    /* start group coordinator */
    // Hardcode Time.SYSTEM for now as some Streams tests fail otherwise, it would be good to fix the underlying issue
    groupCoordinator = GroupCoordinator(config, zkUtils, replicaManager, Time.SYSTEM)
    groupCoordinator.startup()
    // 启动事务协调器,带有单独的后台线程调度程序,用于事务到期和日志加载
    /* start transaction coordinator, with a separate background thread scheduler for transaction expiration and log loading */
    // Hardcode Time.SYSTEM for now as some Streams tests fail otherwise, it would be good to fix the underlying issue
    transactionCoordinator = TransactionCoordinator(config, replicaManager, new KafkaScheduler(threads = 1, threadNamePrefix = "transaction-log-manager-"), zkUtils, metrics, metadataCache, Time.SYSTEM)
    transactionCoordinator.startup()
    // 构造授权器
    /* Get the authorizer and initialize it if one is specified.*/
    authorizer = Option(config.authorizerClassName).filter(_.nonEmpty).map { authorizerClassName =>
     val authZ = CoreUtils.createObject[Authorizer] (authorizerClassName)
     authZ.configure(config.originals())
     authZ
    }
    // 构造api组件,针对各个接口会处理不同的业务
    /* start processing requests */
    apis = new KafkaApis(socketServer.requestChannel, replicaManager, adminManager, groupCoordinator, transactionCoordinator,
     kafkaController, zkUtils, config.brokerId, config, metadataCache, metrics, authorizer, quotaManagers,
     brokerTopicStats, clusterId, time)
    // 请求处理池
    requestHandlerPool = new KafkaRequestHandlerPool(config.brokerId, socketServer.requestChannel, apis, time,
     config.numIoThreads)
    Mx4jLoader.maybeLoad()
    // 动态配置处理器的相关配置
    /* start dynamic config manager */
    dynamicConfigHandlers = Map[String, ConfigHandler] (ConfigType.Topic -> new TopicConfigHandler(logManager, config, quotaManagers),
     ConfigType.Client -> new ClientIdConfigHandler(quotaManagers),
     ConfigType.User -> new UserConfigHandler(quotaManagers, credentialProvider),
     ConfigType.Broker -> new BrokerConfigHandler(config, quotaManagers))
    // 初始化动态配置管理器,并启动
    // Create the config manager. start listening to notifications
    dynamicConfigManager = new DynamicConfigManager(zkUtils, dynamicConfigHandlers)
    dynamicConfigManager.startup()
    // 通知监听者
    /* tell everyone we are alive */
    val listeners = config.advertisedListeners.map { endpoint =>
     if (endpoint.port == 0)
      endpoint.copy(port = socketServer.boundPort(endpoint.listenerName))
     else
      endpoint
    }
    // kafka健康检查组件
    kafkaHealthcheck = new KafkaHealthcheck(config.brokerId, listeners, zkUtils, config.rack,
     config.interBrokerProtocolVersion)
    kafkaHealthcheck.startup()
    // 记录一下恢复点
    // Now that the broker id is successfully registered via KafkaHealthcheck, checkpoint it
    checkpointBrokerId(config.brokerId)
    // 修改broker状态
    socketServer.startProcessors()
    brokerState.newState(RunningAsBroker)
    shutdownLatch = new CountDownLatch(1)
    startupComplete.set(true)
    isStartingUp.set(false)
    AppInfoParser.registerAppInfo(jmxPrefix, config.brokerId.toString, metrics)
    info("started")
  }
 }
  catch {
   case e: Throwable =>
    fatal("Fatal error during KafkaServer startup. Prepare to shutdown", e)
    isStartingUp.set(false)
    shutdown()
    throw e
 }
}

 

 

4.3 Kafka Source Code Analysis: Topic Creation Flow

4.3.1 Topic Creation

There are two ways to create a topic: automatic and manual. When auto.create.topics.enable=true is set in server.properties, Kafka automatically creates a topic with the default configuration whenever it finds that the topic does not exist. Automatic creation is triggered in two situations:

1. A producer writes messages to a topic that does not exist
2. A consumer reads messages from a topic that does not exist

 

4.3.2 Manual Creation

When auto.create.topics.enable=false, the topic must be created manually, otherwise sending messages to it will fail. A topic is created manually as follows:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 10 --topic kafka_test

--replication-factor: number of replicas
--partitions: number of partitions
--topic: topic name

 

 

4.3.3 The Topic Command Entry Point

Look at the script kafka-topics.sh:

exec $(dirname $0)/kafka-run-class.sh kafka.admin.TopicCommand "$@"

Ultimately the TopicCommand class is invoked. It first checks that arguments were supplied and that exactly one of create, list, alter, describe, delete is present, validates the arguments, and creates a ZooKeeper connection. If the arguments include create, topic creation begins; the other actions are handled similarly.

def main(args: Array[String]): Unit = { 
  // 解析传入的参数
  val opts = new TopicCommandOptions(args)
  // 判断参数长度
  if(args.length == 0)
   CommandLineUtils.printUsageAndDie(opts.parser, "Create, delete, describe, or change a topic.")
  // create、list、alter、descibe、delete只允许存在一个
  // should have exactly one action
  val actions = Seq(opts.createOpt, opts.listOpt, opts.alterOpt, opts.describeOpt, opts.deleteOpt).count(opts.options.has _)
  if(actions != 1)
   CommandLineUtils.printUsageAndDie(opts.parser, "Command must include exactly one action: --list, --describe, --create, --alter or --delete")
  // 参数验证
  opts.checkArgs()
  // 初始化zookeeper链接
  val zkUtils = ZkUtils(opts.options.valueOf(opts.zkConnectOpt),
             30000,
             30000,
             JaasUtils.isZkSecurityEnabled())
  var exitCode = 0
  try {
   if(opts.options.has(opts.createOpt))
    // 创建topic
    createTopic(zkUtils, opts)
   else if(opts.options.has(opts.alterOpt))
    // 修改topic
    alterTopic(zkUtils, opts)
   else if(opts.options.has(opts.listOpt))
    // 列出所有的topic,bin/kafka-topics.sh --list --zookeeper localhost:2181
    listTopics(zkUtils, opts)
   else if(opts.options.has(opts.describeOpt))
    // 查看topic描述,bin/kafka-topics.sh --describe --zookeeper localhost:2181
    describeTopic(zkUtils, opts)
   else if(opts.options.has(opts.deleteOpt))
    // 删除topic
    deleteTopic(zkUtils, opts)
 } catch {
   case e: Throwable =>
    println("Error while executing topic command : " + e.getMessage)
    error(Utils.stackTrace(e))
    exitCode = 1
 } finally {
   zkUtils.close()
   Exit.exit(exitCode)
 }
}

 

 

4.3.4 Creating a Topic

Now let's look at how createTopic executes:

def createTopic(zkUtils: ZkUtils, opts: TopicCommandOptions) {
  // 获取参数中指定的topic名称
  val topic = opts.options.valueOf(opts.topicOpt)
  // 获取参数中 给当前要创建的主题指定的参数 --config max.message.bytes=1048576
  val configs = parseTopicConfigsToBeAdded(opts)
  // --if-not-exists选项, 看有没有该选项
  val ifNotExists = opts.options.has(opts.ifNotExistsOpt)
  if (Topic.hasCollisionChars(topic))
   println("WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.")
  try {
   // 看有没有指定副本分区的分配, 

   if (opts.options.has(opts.replicaAssignmentOpt)) {
    // 如果客户端指定了topic的partition的replicas分配情况,则直接把所有topic的元数据信息持久化写入到zk,
    // topic的properties写入到/config/topics/{topic}目录,
    // topic的PartitionAssignment写入到/brokers/topics/{topic}目录
    val assignment = parseReplicaAssignment(opts.options.valueOf(opts.replicaAssignmentOpt))
    AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkUtils, topic, assignment, configs, update = false)
  } else {
    // 否则需要自动生成topic的PartitionAssignment
    // 检查有没有指定必要的参数,  解析器, 选项, 分区个数, 副本因子
    CommandLineUtils.checkRequiredArgs(opts.parser, opts.options, opts.partitionsOpt, opts.replicationFactorOpt)
    // 获取分区数
    val partitions = opts.options.valueOf(opts.partitionsOpt).intValue
    // 获取副本因子
    val replicas = opts.options.valueOf(opts.replicationFactorOpt).intValue
     // Since version 0.10.x, Kafka supports specifying rack information for brokers; when racks are specified, replica assignment tries to spread each partition's replicas across different racks.
     // Rack information is configured with the broker.rack parameter in config/server.properties
    val rackAwareMode = if (opts.options.has(opts.disableRackAware)) RackAwareMode.Disabled
    else RackAwareMode.Enforced
    // 创建主题
    AdminUtils.createTopic(zkUtils, topic, partitions, replicas, configs, rackAwareMode)
  }
   println("Created topic \"%s\".".format(topic))
 } catch {
   case e: TopicExistsException => if (!ifNotExists) throw e
 }
}

 

1. If the client specified the replica assignment of the topic's partitions, all of the topic's metadata is persisted directly to ZooKeeper: the topic's properties are written under /config/topics/{topic}, and the topic's partition assignment is written under /brokers/topics/{topic}
2. Otherwise the topic's partition assignment is generated automatically from the partition count, the replication factor, and whether rack information is specified

3. Next, look at the AdminUtils.createTopic method

def createTopic(zkUtils: ZkUtils,
         topic: String,
         partitions: Int,
         replicationFactor: Int,
         topicConfig: Properties = new Properties,
         rackAwareMode: RackAwareMode = RackAwareMode.Enforced) {
  // Get the brokerId and rack information of every broker in the cluster; the assignment below needs it
  val brokerMetadatas = getBrokerMetadatas(zkUtils, rackAwareMode)
  // 根据分配策略, 将副本分区分配给broker
  val replicaAssignment = AdminUtils.assignReplicasToBrokers(brokerMetadatas, partitions, replicationFactor)
  // 在zookeeper中创建或更新主题分区分配路径
 AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK(zkUtils, topic, replicaAssignment, topicConfig)
 // 到此, 创建主题的过程结束
}

4. Next, look at the AdminUtils.assignReplicasToBrokers method

def assignReplicasToBrokers(brokerMetadatas: Seq[BrokerMetadata],
               nPartitions: Int,
               replicationFactor: Int,
               fixedStartIndex: Int = -1,
               startPartitionId: Int = -1): Map[Int, Seq[Int]] = {
  if (nPartitions <= 0)
   // 分区个数partitions不能小于等于0
   throw new InvalidPartitionsException("Number of partitions must be larger than 0.")
  if (replicationFactor <= 0)
   // 副本个数replicationFactor不能小于等于0
   throw new InvalidReplicationFactorException("Replication factor must be larger than 0.")
  if (replicationFactor > brokerMetadatas.size)
   // 副本个数replicationFactor不能大于broker的节点个数
   throw new InvalidReplicationFactorException(s"Replication factor: $replicationFactor larger than available brokers: ${brokerMetadatas.size}.")
  if (brokerMetadatas.forall(_.rack.isEmpty))
   // 没有指定机架信息的情况
   assignReplicasToBrokersRackUnaware(nPartitions, replicationFactor, brokerMetadatas.map(_.id), fixedStartIndex,
    startPartitionId)
  else {
   // 针对指定机架信息的情况,更加复杂一点
   if (brokerMetadatas.exists(_.rack.isEmpty))
    throw new AdminOperationException("Not all brokers have rack information for replica rack aware assignment.")
   assignReplicasToBrokersRackAware(nPartitions, replicationFactor, brokerMetadatas, fixedStartIndex,
    startPartitionId)
 }
}

 

1. Assignment without rack information

private def assignReplicasToBrokersRackUnaware(nPartitions: Int,
                        replicationFactor: Int,
                        brokerList: Seq[Int],
                        fixedStartIndex: Int,
                        startPartitionId: Int): Map[Int, Seq[Int]] = {
  val ret = mutable.Map[Int, Seq[Int]]()
  val brokerArray = brokerList.toArray
  val startIndex = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(brokerArray.length)
  var currentPartitionId = math.max(0, startPartitionId)
  var nextReplicaShift = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(brokerArray.length)
  for (_ <- 0 until nPartitions) {
   if (currentPartitionId > 0 && (currentPartitionId % brokerArray.length == 0))
    nextReplicaShift += 1
   val firstReplicaIndex = (currentPartitionId + startIndex) % brokerArray.length
   val replicaBuffer = mutable.ArrayBuffer(brokerArray(firstReplicaIndex))
   for (j <- 0 until replicationFactor - 1)
    replicaBuffer += brokerArray(replicaIndex(firstReplicaIndex, nextReplicaShift, j, brokerArray.length))
   ret.put(currentPartitionId, replicaBuffer)
   currentPartitionId += 1
 }
  ret
}

The method iterates over every partition and, for each one, picks replicationFactor brokerIds from brokerArray (the list of brokerIds) to assign to that partition.

 

A mutable Map is created to hold the result the method returns, i.e. the mapping from each partition to its assigned replicas. Because fixedStartIndex is -1, startIndex is a random number used to compute the broker where assignment starts; and because startPartitionId is -1, currentPartitionId is 0, which shows that by default topic creation always assigns partitions in round-robin order starting from partition 0.

 

nextReplicaShift is the shift of the next replica assignment relative to the previous one, which is easier to understand with an example: suppose the cluster has 3 broker nodes (the brokerArray in the code) and a topic is created with 3 replicas and 6 partitions. Assignment starts from the partition with partitionId 0. Suppose the first computation (randomized by rand.nextInt(brokerArray.length)) gives nextReplicaShift = 1 and the first random startIndex is 2. Then for partitionId 0 the position of the first replica (here an index into brokerArray) is firstReplicaIndex = (currentPartitionId + startIndex) % brokerArray.length = (0 + 2) % 3 = 2, and the position of the second replica is replicaIndex(firstReplicaIndex, nextReplicaShift, j, brokerArray.length) = replicaIndex(2, nextReplicaShift + 1, 0, 3) = ?

 

Continuing the computation: replicaIndex(2, nextReplicaShift + 1, 0, 3) = replicaIndex(2, 2, 0, 3) = (2 + (1 + (2 + 0) % (3 - 1))) % 3 = 0. The next replica's position is replicaIndex(2, 2, 1, 3) = (2 + (1 + (2 + 1) % (3 - 1))) % 3 = 1. So the replica positions for partitionId 0 are [2, 0, 1]. If brokerArray happens to be numbered from 0 and contiguous, i.e. brokerArray is [0, 1, 2], then the replica assignment for partition 0 is brokers [2, 0, 1]. If the brokerIds do not start from zero and are not contiguous (some brokers of the cluster may have gone offline earlier), say brokerArray is [2, 5, 8], then the replica assignment for partition 0 is [8, 2, 5]. For simplicity, assume below that brokerArray is just [0, 1, 2].

 

The next partition, partitionId 1, is computed the same way. nextReplicaShift is still 2, since the condition for incrementing it has not been met. For this partition, firstReplicaIndex = (1 + 2) % 3 = 0, the second replica's position is replicaIndex(0, 2, 0, 3) = (0 + (1 + (2 + 0) % (3 - 1))) % 3 = 1, and the third replica's position is replicaIndex(0, 2, 1, 3) = 2, so the final assignment for partitionId 1 is [0, 1, 2]. A runnable sketch of this computation follows.
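To make the arithmetic above concrete, here is a small self-contained Java sketch of the same rack-unaware logic (a paraphrase of assignReplicasToBrokersRackUnaware and replicaIndex; startIndex and nextReplicaShift are fixed to the values used in the example instead of being drawn at random):

import java.util.*;

public class RackUnawareAssignmentDemo {

    // Same formula as the replicaIndex helper discussed above
    static int replicaIndex(int firstReplicaIndex, int secondReplicaShift, int replicaIndex, int nBrokers) {
        int shift = 1 + (secondReplicaShift + replicaIndex) % (nBrokers - 1);
        return (firstReplicaIndex + shift) % nBrokers;
    }

    // Simplified rack-unaware assignment with fixed startIndex / nextReplicaShift
    static Map<Integer, List<Integer>> assign(int nPartitions, int replicationFactor,
                                              int[] brokers, int startIndex, int nextReplicaShift) {
        Map<Integer, List<Integer>> ret = new LinkedHashMap<>();
        for (int p = 0; p < nPartitions; p++) {
            if (p > 0 && p % brokers.length == 0)
                nextReplicaShift++;                       // shift grows after a full round over the brokers
            int firstReplicaIndex = (p + startIndex) % brokers.length;
            List<Integer> replicas = new ArrayList<>();
            replicas.add(brokers[firstReplicaIndex]);
            for (int j = 0; j < replicationFactor - 1; j++)
                replicas.add(brokers[replicaIndex(firstReplicaIndex, nextReplicaShift, j, brokers.length)]);
            ret.put(p, replicas);
        }
        return ret;
    }

    public static void main(String[] args) {
        // 3 brokers, 6 partitions, 3 replicas, startIndex = 2, shift = 2 (the values used in the text)
        assign(6, 3, new int[]{0, 1, 2}, 2, 2)
                .forEach((p, r) -> System.out.println("partition " + p + " -> " + r));
    }
}

Running it prints partition 0 -> [2, 0, 1] and partition 1 -> [0, 1, 2], matching the hand computation above.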

 

2. Rack-aware assignment

private def assignReplicasToBrokersRackAware(nPartitions:Int,
                       replicationFactor: Int,
                       brokerMetadatas: Seq[BrokerMetadata],
                       fixedStartIndex: Int,
                       startPartitionId: Int): Map[Int, Seq[Int]] = {
  val brokerRackMap = brokerMetadatas.collect { case BrokerMetadata(id, Some(rack)) =>    id -> rack  }.toMap   
  val numRacks = brokerRackMap.values.toSet.size
  val arrangedBrokerList = getRackAlternatedBrokerList(brokerRackMap)
  val numBrokers = arrangedBrokerList.size
  val ret = mutable.Map[Int, Seq[Int]]()
  val startIndex = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(arrangedBrokerList.size)
  var currentPartitionId = math.max(0, startPartitionId)
  var nextReplicaShift = if (fixedStartIndex >= 0) fixedStartIndex else rand.nextInt(arrangedBrokerList.size)
  for (_ <- 0 until nPartitions) {
   if (currentPartitionId > 0 && (currentPartitionId % arrangedBrokerList.size == 0))
    nextReplicaShift += 1
   val firstReplicaIndex = (currentPartitionId + startIndex) % arrangedBrokerList.size
   val leader = arrangedBrokerList(firstReplicaIndex)
   val replicaBuffer = mutable.ArrayBuffer(leader)
   val racksWithReplicas = mutable.Set(brokerRackMap(leader))
   val brokersWithReplicas = mutable.Set(leader)
   var k = 0
   for (_ <- 0 until replicationFactor - 1) {
    var done = false
    while (!done) {
     val broker = arrangedBrokerList(replicaIndex(firstReplicaIndex, nextReplicaShift * numRacks, k, arrangedBrokerList.size))
     val rack = brokerRackMap(broker)
     // Skip this broker if
     // 1. there is already a broker in the same rack that has assigned a replica AND there is one or more racks
     //  that do not have any replica, or
     // 2. the broker has already assigned a replica AND there is one or more brokers that do not have replica assigned
     if ((!racksWithReplicas.contains(rack) || racksWithReplicas.size == numRacks)
       && (!brokersWithReplicas.contains(broker) || brokersWithReplicas.size == numBrokers)) {
      replicaBuffer += broker
      racksWithReplicas += rack
      brokersWithReplicas += broker
      done = true
    }
     k += 1
   }
  }
   ret.put(currentPartitionId, replicaBuffer)
   currentPartitionId += 1
 }
  ret
}

1. assignReplicasToBrokersRackUnaware runs only under the premise that none of the brokers have rack information configured, while assignReplicasToBrokersRackAware requires that every broker has rack information configured. If some brokers have rack information and others do not, an AdminOperationException is thrown; if you still want the topic to be created in that case, add the --disable-rack-aware option.

 

2. The first step obtains brokerRackMap, the mapping from brokerId to rack, and then calls getRackAlternatedBrokerList() to process brokerRackMap into a list of brokerIds. For example, suppose there are three racks, rack1, rack2 and rack3, and nine brokers, mapped as follows:

rack1: 0, 1, 2
rack2: 3, 4, 5
rack3: 6, 7, 8

After getRackAlternatedBrokerList() this becomes the list [0, 3, 6, 1, 4, 7, 2, 5, 8], obviously produced by round-robining over the brokers of each rack. From here on this list can simply be treated as the brokerId list, corresponding to brokerArray in assignReplicasToBrokersRackUnaware(), except that it now encodes a simple rack layout. The following steps resemble the rack-unaware algorithm: the same startIndex, currentPartitionId and nextReplicaShift concepts apply, and replicas are assigned partition by partition. Apart from the first replica, the remaining replicas are also chosen with the replicaIndex method, but unlike assignReplicasToBrokersRackUnaware(), the chosen broker is not simply appended to the current partition's replica list; it must pass an extra filter, and a broker that meets either of the following conditions is not added to the current partition's replica list:

1. A broker in the same rack already holds a replica of this partition, while one or more other racks still have no replica of it at all; this corresponds to (!racksWithReplicas.contains(rack) || racksWithReplicas.size == numRacks) in the code

2. The broker itself already holds a replica of this partition, while one or more other brokers still have no replica of it; this corresponds to (!brokersWithReplicas.contains(broker) || brokersWithReplicas.size == numBrokers) in the code

5. Whether or not rack information is used, the calling method AdminUtils.assignReplicasToBrokers() ends up with a replica assignment of type Map[Int, Seq[Int]], which is finally written as the data of the ZooKeeper node /brokers/topics/{topic-name}. That completes topic creation. Some readers may wonder why the whole discussion (including the previous part) is about how to assign replicas, and what we end up with is just an assignment plan, with no step that actually creates those replicas. That observation is correct: creating a topic with the kafka-topics.sh script only produces a replica assignment plan and creates the corresponding nodes in ZooKeeper. The broker side registers a watcher on the /brokers/topics/ path; when a new node appears it is notified and then creates the actual replicas according to the data in that node (the topic's partition replica assignment plan). An example of what gets written to ZooKeeper is sketched below.
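As a rough illustration of what ends up in ZooKeeper (the JSON layout is paraphrased and the topic name and assignment values are made up), the sketch below only builds and prints the payloads; it does not talk to ZooKeeper:

import java.util.*;
import java.util.stream.Collectors;

public class TopicZkPayloadDemo {
    public static void main(String[] args) {
        // Assignment plan of the kind returned by AdminUtils.assignReplicasToBrokers (made-up values)
        Map<Integer, List<Integer>> assignment = new LinkedHashMap<>();
        assignment.put(0, Arrays.asList(2, 0, 1));
        assignment.put(1, Arrays.asList(0, 1, 2));

        // Roughly the JSON written to /brokers/topics/{topic}
        String partitionsJson = assignment.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":" + e.getValue().toString().replace(" ", ""))
                .collect(Collectors.joining(",", "{\"version\":1,\"partitions\":{", "}}"));
        System.out.println("/brokers/topics/kafka_test -> " + partitionsJson);

        // Topic-level overrides (e.g. --config max.message.bytes=...) go to /config/topics/{topic}
        System.out.println("/config/topics/kafka_test  -> {\"version\":1,\"config\":{}}");
    }
}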

 

 

4.4 Kafka Source Code Analysis: Producer Flow

4.4.1 Producer Example

Let's first look at a piece of code that shows how KafkaProducer is used. In the example below we use KafkaProducer to send messages to Kafka. The configuration used by the KafkaProducer is first written into a Properties object; the meaning of each setting is explained in the comments. A KafkaProducer is then constructed from this Properties object, and messages are finally sent with the send method; the code covers both synchronous and asynchronous sending.

public static void main(String[] args) throws ExecutionException,
InterruptedException {
    Properties props = new Properties();
    // 客户端id
    props.put("client.id", "KafkaProducerDemo");
    // kafka地址,列表格式为host1:port1,host2:port2,…,无需添加所有的集群地址, kafka会根据提供的地址发现其他的地址(建议多提供几个,以防提供的服务器关闭)
    props.put("bootstrap.servers", "localhost:9092");
    // 发送返回应答方式
    // 0:Producer 往集群发送数据不需要等到集群的返回,不确保消息发送成功。安全性最低但是效率最高。
    // 1:Producer 往集群发送数据只要 Leader 应答就可以发送下一条,只确保Leader接收成功。
    // -1或者all:Producer 往集群发送数据需要所有的ISR Follower都完成从Leader的同步才会发送下一条,确保Leader发送成功和所有的副本都成功接收。安全性最高,但是效率最低。
    props.put("acks", "all");
    // 重试次数
    props.put("retries", 0);
    // Retry backoff interval in ms
    props.put("retry.backoff.ms", 100);
    // 批量发送的大小
    props.put("batch.size", 16384);
    // 一个Batch被创建之后,最多过多久,不管这个Batch有没有写满,都必须发送出去
    props.put("linger.ms", 10);
    // 缓冲区大小
    props.put("buffer.memory", 33554432);
    // key序列化方式
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // value序列化方式
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // topic
    String topic = "lagou_edu";
    Producer<String, String> producer = new KafkaProducer<>(props);
    AtomicInteger count = new AtomicInteger();
    while (true) {
      int num = count.get();
      String key = Integer.toString(num);
      String value = Integer.toString(num);
      ProducerRecord<String, String> record = new ProducerRecord<> (topic, key, value);
      if (num % 2 == 0) {
        // 偶数异步发送
        // 第一个参数record封装了topic、key、value
        // 第二个参数是一个callback对象,当生产者接收到kafka发来的ACK确认消息时,会调用此CallBack对象的onComplete方法
        producer.send(record, (recordMetadata, e) -> {
          System.out.println("num:" + num + " topic:" + recordMetadata.topic() + " offset:" + recordMetadata.offset());
       });
     } else {
        // 同步发送
        // KafkaProducer.send方法返回的类型是Future<RecordMetadata>,通过get方法阻塞当前线程,等待kafka服务端ACK响应
        producer.send(record).get();
     }
      count.incrementAndGet();
      TimeUnit.MILLISECONDS.sleep(100);
   }
 }

 

 

4.4.1.1 Synchronous Send

1. KafkaProducer.send returns a Future<RecordMetadata>; calling get blocks the current thread until the ACK response from the Kafka server arrives

producer.send(record).get()

4.4.1.2 Asynchronous Send

1. The first argument, record, wraps the topic, key and value
2. The second argument is a Callback object; when the producer receives the ACK from Kafka it invokes the callback's onCompletion method

producer.send(record, (recordMetadata, e) -> {
          System.out.println("num:" + num + " topic:" + recordMetadata.topic() + " offset:" + recordMetadata.offset());
       });

 

 

4.4.2 KafkaProducer Instantiation

Now that we know the basic usage of KafkaProducer, let's dig into how KafkaProducer works, starting with the core logic of its constructor:

private KafkaProducer(ProducerConfig config, Serializer<K> keySerializer,
Serializer<V> valueSerializer) {
    try {
      // 获取用户的配置
      Map<String, Object> userProvidedConfigs = config.originals();
      this.producerConfig = config;
      // 系统时间
      this.time = Time.SYSTEM;
      // 获取client.id配置
      String clientId = config.getString(ProducerConfig.CLIENT_ID_CONFIG);
      // 如果client.id为空,设置默认值:producer-num num递增
      if (clientId.length() <= 0)
        clientId = "producer-" + PRODUCER_CLIENT_ID_SEQUENCE.getAndIncrement();
      this.clientId = clientId;
      // 获取事务id,如果没有配置则为null
      String transactionalId = userProvidedConfigs.containsKey(ProducerConfig.TRANSACTIONAL_ID_CONFIG) ?
         (String) userProvidedConfigs.get(ProducerConfig.TRANSACTIONAL_ID_CONFIG) : null;
      LogContext logContext;
      if (transactionalId == null)
        logContext = new LogContext(String.format("[Producer clientId=%s] ", clientId));
      else
        logContext = new LogContext(String.format("[Producer clientId=%s, transactionalId=%s] ", clientId, transactionalId));
      log = logContext.logger(KafkaProducer.class);
      log.trace("Starting the Kafka producer");
      // 创建client-id的监控map
      Map<String, String> metricTags = Collections.singletonMap("client-id", clientId);
      // 设置监控配置,包含样本量、取样时间窗口、记录级别
      MetricConfig metricConfig = new MetricConfig().samples(config.getInt(ProducerConfig.METRICS_NUM_SAMPLES_CONFIG))
         .timeWindow(config.getLong(ProducerConfig.METRICS_SAMPLE_WINDOW_MS_CONFIG), TimeUnit.MILLISECONDS).recordLevel(Sensor.RecordingLevel.forName(config.getString(ProducerConfig.METRICS_RECORDING_LEVEL_CONFIG)))
         .tags(metricTags);
      // 监控数据上报类
      List<MetricsReporter> reporters = config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
          MetricsReporter.class);
      reporters.add(new JmxReporter(JMX_PREFIX));
      this.metrics = new Metrics(metricConfig, reporters, time);
      // 生成生产者监控
      ProducerMetrics metricsRegistry = new ProducerMetrics(this.metrics);
      // 获取用户设置的分区器
      this.partitioner = config.getConfiguredInstance(ProducerConfig.PARTITIONER_CLASS_CONFIG, Partitioner.class);
      // 重试时间 retry.backoff.ms 默认100ms
      long retryBackoffMs = config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG);
      if (keySerializer == null) {
        // 反射生成key序列化方式
        this.keySerializer = ensureExtended(config.getConfiguredInstance(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            Serializer.class));
        this.keySerializer.configure(config.originals(), true);
     } else {
        config.ignore(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG);
        this.keySerializer = ensureExtended(keySerializer);
     }
      if (valueSerializer == null) {
        // 反射生成value序列化方式
        this.valueSerializer = ensureExtended(config.getConfiguredInstance(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            Serializer.class));
        this.valueSerializer.configure(config.originals(), false);
     } else {
        config.ignore(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG);
        this.valueSerializer = ensureExtended(valueSerializer);
     }
      // load interceptors and make sure they get clientId
      // 确认client.id添加到用户的配置里面
      userProvidedConfigs.put(ProducerConfig.CLIENT_ID_CONFIG, clientId);
      // 获取用户设置的多个拦截器,为空则不处理
      List<ProducerInterceptor<K, V>> interceptorList = (List) (new ProducerConfig(userProvidedConfigs, false)).getConfiguredInstances(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
          ProducerInterceptor.class);
      this.interceptors = interceptorList.isEmpty() ? null : new ProducerInterceptors<>(interceptorList);
      // 集群资源监听器,在元数据变更时会有通知
      ClusterResourceListeners clusterResourceListeners = configureClusterResourceListeners(keySerializer, valueSerializer, interceptorList, reporters);
      // 生产者每隔一段时间都要去更新一下集群的元数据,默认5分钟
      this.metadata = new Metadata(retryBackoffMs, config.getLong(ProducerConfig.METADATA_MAX_AGE_CONFIG),
          true, true, clusterResourceListeners);
      // 生产者往服务端发送消息的时候,规定一条消息最大多大?
      // 如果你超过了这个规定消息的大小,你的消息就不能发送过去。
      // 默认是1M,这个值偏小,在生产环境中,我们需要修改这个值。
      // 经验值是10M。但是大家也可以根据自己公司的情况来。
      this.maxRequestSize = config.getInt(ProducerConfig.MAX_REQUEST_SIZE_CONFIG);
      //指的是缓存总大小
      //默认值是32M,这个值一般是够用,如果有特殊情况的时候,我们可以去修改这个值。
      this.totalMemorySize = config.getLong(ProducerConfig.BUFFER_MEMORY_CONFIG);
      // kafka是支持压缩数据的,可以设置压缩格式,默认是不压缩,支持gzip、 snappy、lz4
      // 一次发送出去的消息就更多。生产者这儿会消耗更多的cpu.
      this.compressionType = CompressionType.forName(config.getString(ProducerConfig.COMPRESSION_TYPE_CONFIG));
      // max.block.ms controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() may block, e.g. because the buffer is full or metadata is unavailable
      this.maxBlockTimeMs = config.getLong(ProducerConfig.MAX_BLOCK_MS_CONFIG);
      // 控制客户端等待请求响应的最长时间。如果在超时过去之前未收到响应,客户端将在必要时重新发送请求,或者如果重试耗尽,请求失败
      this.requestTimeoutMs = config.getInt(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG);
      // 事务管理器
      this.transactionManager = configureTransactionState(config, logContext, log);
      // 重试次数,
      int retries = configureRetries(config, transactionManager != null, log);
      // 使用幂等性,需要将 enable.idempotence 配置项设置为true。并且它对单个分区的发送,一次性最多发送5条
      // 正在等待请求结果的请求数
      int maxInflightRequests = configureInflightRequests(config, transactionManager != null);
      // 如果开启了幂等性,但是用户指定的ack不为 -1,则会抛出异常
      short acks = configureAcks(config, transactionManager != null, log);
      this.apiVersions = new ApiVersions();
      // 创建核心组件:记录累加器
      // 用户消息记录
      this.accumulator = new RecordAccumulator(logContext,
          config.getInt(ProducerConfig.BATCH_SIZE_CONFIG),
          this.totalMemorySize,
          this.compressionType,
          config.getLong(ProducerConfig.LINGER_MS_CONFIG),
          retryBackoffMs,
          metrics,
          time,
          apiVersions,
          transactionManager);
      // 获取broker地址列表
      List<InetSocketAddress> addresses = ClientUtils.parseAndValidateAddresses(config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
      // 更新元数据
      this.metadata.update(Cluster.bootstrap(addresses), Collections.<String>emptySet(), time.milliseconds());
      // 创建通道,是否需要加密
      ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(config);
      Sensor throttleTimeSensor = Sender.throttleTimeSensor(metricsRegistry.senderMetrics);
      // 初始化了一个重要的管理网路的组件, 真正发送消息的网络客户端
      // connections.max.idle.ms: 默认值是9分钟, 一个网络连接最多空闲多久,超过这个空闲时间,就关闭这个网络连接。
      // max.in.flight.requests.per.connection:默认是5, producer向 broker发送数据的时候,其实是有多个网络连接。每个网络连接可以忍受 producer端发送给 broker 消息然后消息没有响应的个数
      NetworkClient client = new NetworkClient(
          new Selector(config.getLong(ProducerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG),
              this.metrics, time, "producer", channelBuilder, logContext),
          this.metadata,
          clientId,
          maxInflightRequests,
          config.getLong(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG),
          config.getLong(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG),
          config.getInt(ProducerConfig.SEND_BUFFER_CONFIG),
          config.getInt(ProducerConfig.RECEIVE_BUFFER_CONFIG),
          this.requestTimeoutMs,
          time,
          true,
          apiVersions,
          throttleTimeSensor,
          logContext);
      // 发送线程
      this.sender = new Sender(logContext,
          client,
          this.metadata,
          this.accumulator,
          maxInflightRequests == 1,
          config.getInt(ProducerConfig.MAX_REQUEST_SIZE_CONFIG),
          acks,
          retries,
          metricsRegistry.senderMetrics,
          Time.SYSTEM,
          this.requestTimeoutMs,
          config.getLong(ProducerConfig.RETRY_BACKOFF_MS_CONFIG),
          this.transactionManager,
          apiVersions);
      // 线程名称
      String ioThreadName = NETWORK_THREAD_PREFIX + " | " + clientId;
      // 启动守护线程
      this.ioThread = new KafkaThread(ioThreadName, this.sender, true);
      // 启动发送消息的线程
      this.ioThread.start();
      this.errors = this.metrics.sensor("errors");
      // 把用户配置的参数,但是没有用到的打印出来
      config.logUnused();
      AppInfoParser.registerAppInfo(JMX_PREFIX, clientId, metrics);
      log.debug("Kafka producer started");
   } catch (Throwable t) {
      // call close methods if internal objects are already constructed this is to prevent resource leak. see KAFKA-2121
      close(0, TimeUnit.MILLISECONDS, true);
      // now propagate the exception
      throw new KafkaException("Failed to construct kafka producer", t);
   }
 }

 

4.4.3 The Message Send Process

The actual sending of a Kafka message starts from the send method:

@Override
  public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // intercept the record, which can be potentially modified; this method does not throw exceptions
    // If interceptors are configured, invoke them first
    // Interceptors are invoked in the order in which they were configured
    ProducerRecord<K, V> interceptedRecord = this.interceptors == null ? record : this.interceptors.onSend(record);
    // doSend
    return doSend(interceptedRecord, callback);
 }

 

4.4.3.1 Interceptors

The send call first goes through the interceptor collection ProducerInterceptors, whose onSend method iterates over each interceptor's onSend. Interceptors exist to process and transform the record; Kafka itself ships no default interceptor implementation. To use this feature you must implement the ProducerInterceptor interface yourself.

public ProducerRecord<K, V> onSend(ProducerRecord<K, V> record) {
    ProducerRecord<K, V> interceptRecord = record;
    // 遍历所有拦截器,顺序执行,如果有异常只打印日志,不会向上抛出
    for (ProducerInterceptor<K, V> interceptor : this.interceptors) {
      try {
        interceptRecord = interceptor.onSend(interceptRecord);
     } catch (Exception e) {
        // do not propagate interceptor exception, log and continue calling other interceptors
        // be careful not to throw exception from here
        if (record != null)
          log.warn("Error executing interceptor onSend callback for topic: {}, partition: {}", record.topic(), record.partition(), e);
        else
          log.warn("Error executing interceptor onSend callback", e);
     }
   }
    return interceptRecord;
 }

 

4.4.3.2 Interceptor Core Logic

The ProducerInterceptor interface has three methods:

1. onSend(ProducerRecord): this method is wrapped into KafkaProducer.send, i.e. it runs in the user's main thread. The producer guarantees it is called before the message is serialized and its partition computed. The user may do anything to the message here, but it is best not to modify the topic and partition the message belongs to, otherwise the computation of the target partition is affected.

2. onAcknowledgement(RecordMetadata, Exception): called when the message has been acknowledged or when the send fails, and normally before the producer's own callback logic is triggered. onAcknowledgement runs in the producer's I/O thread, so do not put heavy logic in it, or the producer's send throughput will suffer.

3. close: closes the interceptor, mainly used for resource cleanup.

4. Interceptors may be run from multiple threads, so the implementation must take care of thread safety itself. If multiple interceptors are specified, the producer calls them in the configured order and merely catches each interceptor's exceptions and writes them to the error log instead of propagating them upwards. A minimal custom interceptor is sketched below.
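A minimal sketch of such a custom interceptor (illustrative only; the value prefix and the counters are arbitrary choices): it prepends a marker to every value in onSend and counts successes and failures in onAcknowledgement.

import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class PrefixInterceptor implements ProducerInterceptor<String, String> {

    private final AtomicLong success = new AtomicLong();
    private final AtomicLong failure = new AtomicLong();

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Runs in the user thread before serialization/partitioning; keep topic and partition unchanged
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(),
                record.key(), "demo-" + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Runs in the producer I/O thread; keep this lightweight
        if (exception == null) success.incrementAndGet(); else failure.incrementAndGet();
    }

    @Override
    public void close() {
        System.out.println("sent ok=" + success.get() + ", failed=" + failure.get());
    }

    @Override
    public void configure(Map<String, ?> configs) { }
}

It would be registered through the interceptor.classes setting, e.g. props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, PrefixInterceptor.class.getName()).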

 

4.4.3.3 The Five Steps of Sending

Now let's walk through how the doSend method runs:

private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback
callback) {
    // 首先创建一个主题分区类
    TopicPartition tp = null;
    try {
      // first make sure the metadata for the topic is available
      // 首先确保该topic的元数据可用
      ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
      long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
      Cluster cluster = clusterAndWaitTime.cluster;
      // 序列化 record 的 key 和 value
      byte[] serializedKey;
      try {
        serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
     } catch (ClassCastException cce) {
        throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
            " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
            " specified in key.serializer", cce);
     }
      byte[] serializedValue;
      try {
        serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
     } catch (ClassCastException cce) {
        throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
            " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
            " specified in value.serializer", cce);
     }
      // 获取该 record 要发送到的 partition
      int partition = partition(record, serializedKey, serializedValue, cluster);
      // 将消息发送到哪个主题, 哪个分区
      tp = new TopicPartition(record.topic(), partition);
      // 给header设置只读
      setReadOnly(record.headers());
      Header[] headers = record.headers().toArray();
      int serializedSize = AbstractRecords.estimateSizeInBytesUpperBound(apiVersions.maxUsableProduceMagic(),
          compressionType, serializedKey, serializedValue, headers);
      ensureValidRecordSize(serializedSize);
      long timestamp = record.timestamp() == null ? time.milliseconds() : record.timestamp();
      log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
      // producer callback will make sure to call both 'callback' and interceptor callback
      // Callback to invoke once the broker has acknowledged the message; interceptors are called in their configured order
      Callback interceptCallback = this.interceptors == null ? callback : new InterceptorCallback<>(callback, this.interceptors, tp);
      // If this is a transactional send, register the partition with the transaction manager
      if (transactionManager != null && transactionManager.isTransactional())
        transactionManager.maybeAddPartitionToTransaction(tp);
      // 向 accumulator 中追加 record 数据,数据会先进行缓存
      RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
          serializedValue, headers, interceptCallback, remainingWaitMs);
      // 如果追加完数据后,对应的 RecordBatch 已经达到了 batch.size 的大小 (或者batch 的剩余空间不足以添加下一条 Record),则唤醒 sender 线程发送数据。
      // 如果linger.ms 时间到了, 则创建新的批次, 唤醒发送线程
      if (result.batchIsFull || result.newBatchCreated) {
        log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
        this.sender.wakeup();
     }   
      // 返回future对象
      return result.future;
      // handling exceptions and record the errors;
      // for API exceptions return them in the future,
      // for other exceptions throw directly
   } catch (ApiException e) {
      log.debug("Exception occurred during message send:", e);
      if (callback != null)
        callback.onCompletion(null, e);
      this.errors.record();
      if (this.interceptors != null)
        this.interceptors.onSendError(record, tp, e);
      return new FutureFailure(e);
   } catch (InterruptedException e) {
      this.errors.record();
      if (this.interceptors != null)
        this.interceptors.onSendError(record, tp, e);
      throw new InterruptException(e);
   } catch (BufferExhaustedException e) {
      this.errors.record();
      this.metrics.sensor("buffer-exhausted-records").record();
      if (this.interceptors != null)
        this.interceptors.onSendError(record, tp, e);
      throw e;
   } catch (KafkaException e) {
      this.errors.record();
      if (this.interceptors != null)
        this.interceptors.onSendError(record, tp, e);
      throw e;
   } catch (Exception e) {
      // we notify interceptor about all exceptions, since onSend is called before anything else in this method
      if (this.interceptors != null)
        this.interceptors.onSendError(record, tp, e);
      throw e;
   }
 }

1. The producer obtains the metadata of the target topic through waitOnMetadata(), which first makes sure the topic is available
2. The producer serializes the record's key and value; the consumer performs the corresponding deserialization on its side
3. The partition value is determined, in one of three ways:

1. If a partition is specified, the specified value is used directly as the partition

2. If no partition is specified but a key is present, the partition is the hash of the key modulo the topic's partition count

3. If neither a partition nor a key is present, an integer is generated randomly on the first call (and incremented on every subsequent call), and this value modulo the number of available partitions of the topic gives the partition; this is the well-known round-robin algorithm

 

The default partitioner used by the producer is org.apache.kafka.clients.producer.internals.DefaultPartitioner (a simplified sketch of this logic follows this list)

4. Writing to the accumulator: the record is first written into a buffer, and once a batch.size worth of data has accumulated, the sender thread is woken up to send the RecordBatch. Let's look more closely at how the producer writes data into the buffer:

1. Get the queue for the topic-partition, creating an empty queue if none exists
2. Append to the queue: take the RecordBatch most recently added to the queue; if it does not exist, or exists but its remaining space is not enough for this record, return null; if the write succeeds, return the result directly and the write is done
3. Otherwise create a new RecordBatch, whose initial size is max(batch.size, Records.LOG_OVERHEAD + Record.recordSize(key, value)) (to guard against a single record larger than batch.size)
4. Write the record into the newly created RecordBatch, add the RecordBatch to the queue, and return the result; the write is done

5. Sending the RecordBatch: after the record has been written, if some RecordBatch already meets the sending conditions (usually the queue contains several batches, so the ones added earliest are certainly ready to send), the sender thread is woken up to send the RecordBatch. The sender thread handles RecordBatches in its run() method, which does roughly the following:

1. Find the nodes whose RecordBatches are ready to send
2. If there is no connection to a node (a connection is initiated if possible), the node cannot receive data for now and is temporarily removed
3. Collect all sendable RecordBatches of each node into batches (keyed by node.id) and remove those RecordBatches from their queues
4. Remove RecordBatches that have timed out because metadata was unavailable
5. Send the RecordBatches
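To illustrate step 3 above, here is a hedged sketch of the default partitioning logic (a simplification of what DefaultPartitioner does, using Arrays.hashCode in place of Kafka's murmur2 hash and ignoring the available/all-partitions distinction for keyless records):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class SimplePartitioner {

    // One round-robin counter per topic for records without a key
    private final ConcurrentHashMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    public int partition(String topic, byte[] keyBytes, Integer specifiedPartition, int numPartitions) {
        // 1. An explicitly specified partition wins
        if (specifiedPartition != null)
            return specifiedPartition;
        // 2. With a key: hash of the key modulo the partition count
        if (keyBytes != null)
            return toPositive(java.util.Arrays.hashCode(keyBytes)) % numPartitions;
        // 3. Without a key: round-robin starting from a random value
        AtomicInteger counter = counters.computeIfAbsent(topic,
                t -> new AtomicInteger(ThreadLocalRandom.current().nextInt()));
        return toPositive(counter.getAndIncrement()) % numPartitions;
    }

    private static int toPositive(int number) {
        return number & 0x7fffffff;   // avoids the Math.abs(Integer.MIN_VALUE) corner case
    }
}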

 

4.4.3.4 Metadata Update Mechanism

1. metadata.requestUpdate() sets the metadata's needUpdate flag to true (forcing an update) and returns the current version; the version number is used to decide whether the metadata update has completed
2. sender.wakeup() wakes up the sender thread, which in turn wakes up the NetworkClient to perform the update
3. metadata.awaitUpdate(version, remainingWaitMs) waits for the metadata update
4. Therefore, each time the producer requests a metadata update, one of the following happens:

1. If the node can accept requests, the request is sent directly
2. If the node is in the process of establishing a connection, return directly
3. If the node has no connection yet, initiate a connection to the broker
5. NetworkClient's poll method decides whether the metadata needs updating; handleCompletedReceives handles the metadata update, which is ultimately processed by handleCompletedMetadataResponse of the DefaultMetadataUpdater. The overall wait pattern is sketched below.
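The version-based wait described in steps 1-3 can be modelled with the following self-contained sketch (these are not Kafka's classes; the names merely mirror Metadata.requestUpdate()/awaitUpdate() to show the mechanism):

import java.util.concurrent.TimeoutException;

// A self-contained model of the version-based metadata refresh described above
public class MetadataUpdateModel {

    static class Metadata {
        private int version = 0;
        private boolean needUpdate = false;

        synchronized int requestUpdate() { needUpdate = true; return version; }

        synchronized void awaitUpdate(int lastVersion, long maxWaitMs) throws InterruptedException, TimeoutException {
            long deadline = System.currentTimeMillis() + maxWaitMs;
            while (version <= lastVersion) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) throw new TimeoutException("metadata not updated in time");
                wait(remaining);
            }
        }

        // Called from the I/O thread once a metadata response arrives
        synchronized void update() { version++; needUpdate = false; notifyAll(); }
    }

    public static void main(String[] args) throws Exception {
        Metadata metadata = new Metadata();
        // The I/O ("sender") thread eventually refreshes the metadata
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            metadata.update();
        }).start();

        // Caller side: request an update, wake the sender, then wait on the version
        int version = metadata.requestUpdate();
        // sender.wakeup() would be called here in the real producer
        metadata.awaitUpdate(version, 1000);
        System.out.println("metadata refreshed");
    }
}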

 

 

4.5 Kafka Source Code Analysis: Consumer Flow

4.5.1 Consumer Example

KafkaConsumer
The fundamental purpose of a consumer is to pull messages from the Kafka server and hand them over to the business logic for processing.
Developers do not have to care about low-level work such as managing the network connections to the Kafka servers, heartbeat detection, or request timeouts and retries, nor about details such as the number of partitions of the subscribed topics, the network topology of the partition leader replicas, or consumer group rebalancing; automatic offset committing is provided as well.

Example:

public static void main(String[] args) throws InterruptedException {
    // 是否自动提交
    Boolean autoCommit = false;
    // Whether to commit synchronously (when autoCommit is false)
    Boolean isSync = true;
    Properties props = new Properties();
    // kafka地址,列表格式为host1:port1,host2:port2,…,无需添加所有的集群地址, kafka会根据提供的地址发现其他的地址(建议多提供几个,以防提供的服务器关闭)
    props.put("bootstrap.servers", "localhost:9092");
    // 消费组
    props.put("group.id", "test");
    // 开启自动提交offset
    props.put("enable.auto.commit", autoCommit.toString());
    // 1s自动提交
    props.put("auto.commit.interval.ms", "1000");
    // 消费者和群组协调器的最大心跳时间,如果超过该时间则认为该消费者已经死亡或者故 障,需要踢出消费者组
    props.put("session.timeout.ms", "60000");
    // 一次poll间隔最大时间
    props.put("max.poll.interval.ms", "1000");
    // 当消费者读取偏移量无效的情况下,需要重置消费起始位置,默认为latest(从消费者 启动后生成的记录),另外一个选项值是 earliest,将从有效的最小位移位置开始消费
    props.put("auto.offset.reset", "latest");
    // consumer端一次拉取数据的最大字节数
    props.put("fetch.max.bytes", "1024000");
    // key序列化方式
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    // value序列化方式
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<> (props);
    String topic = "lagou_edu";
    // 订阅topic列表
    consumer.subscribe(Arrays.asList(topic));
    while (true) {
      // 消息拉取
      ConsumerRecords<String, String> records = consumer.poll(100);
      for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
     }
      if (!autoCommit) {
        if (isSync) {
          // 处理完成单次消息以后,提交当前的offset,如果失败会一直重试直至成功
          consumer.commitSync();
       } else {
          // 异步提交
          consumer.commitAsync((offsets, exception) -> {
            if (exception != null)
              exception.printStackTrace();
            System.out.println(offsets.size());
         });
       }
     }
      TimeUnit.SECONDS.sleep(3);
   }
 }

 

The Kafka server does not record consumers' consumption positions; each consumer decides for itself how to save and record its consumed offsets. The server maintains an internal topic named "__consumer_offsets" to store the offsets committed by consumers. When consumers come online or go offline, the consumer group performs a rebalance and partitions are reassigned; once the rebalance completes, a consumer can read the offset recorded in this topic and continue consuming from that position. Of course, using this topic to record consumer offsets is only the default; developers can store offsets in other storage according to their business needs.

 

During consumption, the moment at which offsets are committed matters a lot, because it determines where consumption resumes after a consumer failure and restart. In the example above, setting enable.auto.commit to true enables automatic offset committing, and auto.commit.interval.ms sets the auto-commit interval. Every call to KafkaConsumer.poll() checks whether an auto-commit is due and commits the offset of the last message returned by the previous poll(). To avoid message loss, it is recommended to finish processing all messages returned by the previous poll() before calling poll() again. KafkaConsumer also provides two manual commit methods, commitSync() and commitAsync(); both can be given explicit offsets to commit, and the difference is that the former commits synchronously while the latter commits asynchronously.
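For example, a hedged sketch of committing explicit per-partition offsets after processing a batch (the class and method names here are made up; the +1 reflects that the committed offset is the position of the next message to consume):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitDemo {
    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();
        for (ConsumerRecord<String, String> record : records) {
            // ... process the record ...
            // Remember the next offset to consume for this partition (hence the +1)
            toCommit.put(new TopicPartition(record.topic(), record.partition()),
                         new OffsetAndMetadata(record.offset() + 1));
        }
        if (!toCommit.isEmpty())
            consumer.commitSync(toCommit);   // or consumer.commitAsync(toCommit, callback)
    }
}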

 

4.5.2 KafkaConsumer Instantiation

Now that we know the basic usage of KafkaConsumer, let's dig into how KafkaConsumer works, starting with the core logic of its constructor:

private KafkaConsumer(ConsumerConfig config,
             Deserializer<K> keyDeserializer,
             Deserializer<V> valueDeserializer) {
    try {
      // 获取client.id,如果为空则默认生成一个,默认:consumer-1
      String clientId = config.getString(ConsumerConfig.CLIENT_ID_CONFIG);
      if (clientId.isEmpty())
        clientId = "consumer-" + CONSUMER_CLIENT_ID_SEQUENCE.getAndIncrement();
      this.clientId = clientId;
      // 获取消费组名
      String groupId = config.getString(ConsumerConfig.GROUP_ID_CONFIG);
      LogContext logContext = new LogContext("[Consumer clientId=" + clientId + ", groupId=" + groupId + "] ");
      this.log = logContext.logger(getClass());
      log.debug("Initializing the Kafka consumer");
      this.requestTimeoutMs = config.getInt(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG);
      int sessionTimeOutMs = config.getInt(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG);
      int fetchMaxWaitMs = config.getInt(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG);
      if (this.requestTimeoutMs <= sessionTimeOutMs || this.requestTimeoutMs <= fetchMaxWaitMs)
        throw new ConfigException(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG + " should be greater than " + ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG + " and " + ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG);
      this.time = Time.SYSTEM;
      // 与生产者逻辑相同
      Map<String, String> metricsTags = Collections.singletonMap("client-id", clientId);
      MetricConfig metricConfig = new MetricConfig().samples(config.getInt(ConsumerConfig.METRICS_NUM_SAMPLES_CONFIG))
         .timeWindow(config.getLong(ConsumerConfig.METRICS_SAMPLE_WINDOW_MS_CONFIG), TimeUnit.MILLISECONDS)
         .recordLevel(Sensor.RecordingLevel.forName(config.getString(ConsumerConfig.METRICS_RECORDING_LEVEL_CONFIG)))
         .tags(metricsTags);
      List<MetricsReporter> reporters = config.getConfiguredInstances(ConsumerConfig.METRIC_REPORTER_CLASSES_CONFIG,
          MetricsReporter.class);
      reporters.add(new JmxReporter(JMX_PREFIX));
      this.metrics = new Metrics(metricConfig, reporters, time);
      this.retryBackoffMs = config.getLong(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG);
      // 消费者拦截器
      // load interceptors and make sure they get clientId
      Map<String, Object> userProvidedConfigs = config.originals();
      userProvidedConfigs.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId);
      List<ConsumerInterceptor<K, V>> interceptorList = (List) (new ConsumerConfig(userProvidedConfigs, false)).getConfiguredInstances(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
          ConsumerInterceptor.class);
      this.interceptors = interceptorList.isEmpty() ? null : new ConsumerInterceptors<>(interceptorList);
      // key反序列化
      if (keyDeserializer == null) {
        this.keyDeserializer = config.getConfiguredInstance(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            Deserializer.class);
        this.keyDeserializer.configure(config.originals(), true);
     } else { config.ignore(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG);
        this.keyDeserializer = keyDeserializer;
     }
      // value反序列化
      if (valueDeserializer == null) {
        this.valueDeserializer = config.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            Deserializer.class);
        this.valueDeserializer.configure(config.originals(), false);
     } else {
       config.ignore(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG);
        this.valueDeserializer = valueDeserializer;
     }
      ClusterResourceListeners clusterResourceListeners = configureClusterResourceListeners(keyDeserializer, valueDeserializer, reporters, interceptorList);

      this.metadata = new Metadata(retryBackoffMs, config.getLong(ConsumerConfig.METADATA_MAX_AGE_CONFIG),
          true, false, clusterResourceListeners);
      List<InetSocketAddress> addresses = ClientUtils.parseAndValidateAddresses(config.getList(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG));
      // 更新集群元数据
      this.metadata.update(Cluster.bootstrap(addresses), Collections.<String>emptySet(), 0);
      String metricGrpPrefix = "consumer";
      ConsumerMetrics metricsRegistry = new ConsumerMetrics(metricsTags.keySet(), "consumer");
      ChannelBuilder channelBuilder = ClientUtils.createChannelBuilder(config);
      // 事务隔离级别
      IsolationLevel isolationLevel = IsolationLevel.valueOf(
         config.getString(ConsumerConfig.ISOLATION_LEVEL_CONFIG).toUpperCase(Locale.ROOT));
      Sensor throttleTimeSensor = Fetcher.throttleTimeSensor(metrics, metricsRegistry.fetcherMetrics);
      // 网络组件
      NetworkClient netClient = new NetworkClient(
          new Selector(config.getLong(ConsumerConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG), metrics, time, metricGrpPrefix, channelBuilder, logContext),
          this.metadata,
          clientId,
          100, // a fixed large enough value will suffice for max in-flight requests
          config.getLong(ConsumerConfig.RECONNECT_BACKOFF_MS_CONFIG),
          config.getLong(ConsumerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG),
          config.getInt(ConsumerConfig.SEND_BUFFER_CONFIG),
          config.getInt(ConsumerConfig.RECEIVE_BUFFER_CONFIG), config.getInt(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG),
          time,
          true,
          new ApiVersions(),
          throttleTimeSensor,
          logContext);
      // 实例化消费者网络客户端
      this.client = new ConsumerNetworkClient(
          logContext,
          netClient,
          metadata,
          time,
          retryBackoffMs,
          config.getInt(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG));
      // Offset reset strategy (auto.offset.reset); the default is latest
      OffsetResetStrategy offsetResetStrategy = OffsetResetStrategy.valueOf(config.getString(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toUpperCase(Locale.ROOT));
      // 创建订阅的对象, 封装订阅信息
      this.subscriptions = new SubscriptionState(offsetResetStrategy);
      // 消费组和主题分区分配器
      this.assignors = config.getConfiguredInstances(
          ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
          PartitionAssignor.class);
      // offset协调者
      this.coordinator = new ConsumerCoordinator(logContext,
          this.client,
          groupId,
          config.getInt(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG),
          config.getInt(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG),
          config.getInt(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG),
          assignors,
          this.metadata,
          this.subscriptions,
          metrics,
          metricGrpPrefix,
          this.time,
          retryBackoffMs,
          config.getBoolean(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG),
          config.getInt(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG),
          this.interceptors,
          config.getBoolean(ConsumerConfig.EXCLUDE_INTERNAL_TOPICS_CONFIG),
          config.getBoolean(ConsumerConfig.LEAVE_GROUP_ON_CLOSE_CONFIG));
      // 拉取器, 通过网络获取消息的对象
      this.fetcher = new Fetcher<>(
          logContext,
          this.client,
          config.getInt(ConsumerConfig.FETCH_MIN_BYTES_CONFIG),
          config.getInt(ConsumerConfig.FETCH_MAX_BYTES_CONFIG),
          config.getInt(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG),
          config.getInt(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG),
          config.getInt(ConsumerConfig.MAX_POLL_RECORDS_CONFIG),
          config.getBoolean(ConsumerConfig.CHECK_CRCS_CONFIG),
          this.keyDeserializer,
          this.valueDeserializer,
          this.metadata,
          this.subscriptions,
          metrics,
          metricsRegistry.fetcherMetrics,
          this.time,
          this.retryBackoffMs,
          isolationLevel);
      // 打印用户设置,但是没有使用的配置项
      config.logUnused();
      AppInfoParser.registerAppInfo(JMX_PREFIX, clientId, metrics);
      log.debug("Kafka consumer initialized");
   } catch (Throwable t) {
      // call close methods if internal objects are already constructed
      // this is to prevent resource leak. see KAFKA-2121
      close(0, true);
      // now propagate the exception
      throw new KafkaException("Failed to construct kafka consumer", t);
   }
 }

1. Initialize configuration parameters

  • client.id, group.id, consumer interceptors, key/value deserializers, transaction isolation level

2. Initialize the network client NetworkClient
3. Initialize the consumer network client ConsumerNetworkClient
4. Initialize the offset reset strategy (auto.offset.reset, default latest)
5. Initialize the consumer coordinator ConsumerCoordinator
6. Initialize the fetcher Fetcher (a construction sketch from the application side follows)
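
To connect these internals back to the public API, here is a minimal, hedged sketch of constructing a consumer whose configuration feeds the components above; the bootstrap servers, group id and other values are placeholders, not taken from the source.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
// Placeholder addresses; these seed the NetworkClient / metadata bootstrap
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// group.id is used by the ConsumerCoordinator
props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
// Deserializers are wired into the Fetcher
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// auto.offset.reset becomes the OffsetResetStrategy (latest if not set)
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

The later sketches in this section reuse this props object and consumer instance.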

 

 

4.5.3 Subscribing to a Topic

Let's first look at the logic inside the subscribe method:

public void subscribe(Collection<String> topics, ConsumerRebalanceListener listener) {
 // Lightweight lock: ensure single-threaded access and that the consumer is not closed
 acquireAndEnsureOpen();
 try {
  if (topics == null) {
   throw new IllegalArgumentException("Topic collection to subscribe to cannot be null");
  } else if (topics.isEmpty()) {
    // An empty topic collection is treated as unsubscribe
   this.unsubscribe();
  } else {
    // Validate topics: a null or blank topic name causes an exception
   for (String topic : topics) {
    if (topic == null || topic.trim().isEmpty())
     throw new IllegalArgumentException("Topic collection to subscribe to cannot contain null or empty topic");
   }
    // Throw if no partition assignors are configured
   throwIfNoAssignorsConfigured();
   log.debug("Subscribed to topic(s): {}", Utils.join(topics, ", "));
    // Perform the subscription
   this.subscriptions.subscribe(new HashSet<>(topics), listener);
    // Update metadata: if it does not yet contain all subscribed topics, a forced update is requested
   metadata.setTopics(subscriptions.groupSubscription());
  }
 } finally {
  release();
 }
}

public void subscribe(Set<String> topics, ConsumerRebalanceListener listener) {
 if (listener == null)
  throw new IllegalArgumentException("RebalanceListener cannot be null");
 // Subscribe by topic name; partitions are assigned automatically
 setSubscriptionType(SubscriptionType.AUTO_TOPICS);
 // Rebalance listener
 this.listener = listener;
 // Update the subscription set
 changeSubscription(topics);
}
 
private void changeSubscription(Set<String> topicsToSubscribe) {
 if (!this.subscription.equals(topicsToSubscribe)) {
  // In AUTO_TOPICS or AUTO_PATTERN mode this set records all subscribed topics
  this.subscription = topicsToSubscribe;
  // The consumer group elects a leader; the leader uses this set to record the topics subscribed by every member of the group, while followers only keep their own subscribed topics here
  this.groupSubscription.addAll(topicsToSubscribe);
 }
}

1. KafkaConsumer is not a thread-safe class. It first acquires the lightweight lock; a null topics argument throws an exception, an empty collection triggers unsubscription, each topic name is validated, and an exception is thrown if no partition assignors are configured. Only then does it subscribe to the given topics. The listener defaults to NoOpConsumerRebalanceListener, a no-op.

Lightweight lock: acquire() and release() record the id of the thread currently using the KafkaConsumer and a reentrancy count. They implement a "lightweight lock" that is not a real lock; it merely detects concurrent use of the KafkaConsumer by multiple threads.

2. Every KafkaConsumer instance holds a SubscriptionState object. subscribe internally calls SubscriptionState.subscribe, which records the subscription information; subscribing again overwrites the previous data.
3. Update the metadata: if the metadata does not contain the current groupSubscription, mark it for a forced update (the update logic comes later), and consumer-side topics do not expire in the metadata (an application-side subscription sketch follows below).
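
A hedged sketch of subscribing from application code, with an explicit ConsumerRebalanceListener instead of the default NoOpConsumerRebalanceListener; the topic names are placeholders.

import java.util.Arrays;
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Overwrites any previous subscription, just as SubscriptionState.subscribe does internally
consumer.subscribe(Arrays.asList("topic_a", "topic_b"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Called before a rebalance takes partitions away; a common place to commit offsets
        System.out.println("Revoked: " + partitions);
    }
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Called once the new assignment is known
        System.out.println("Assigned: " + partitions);
    }
});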

 

4.5.4 The Message Consumption Process

Next we look at how KafkaConsumer's core poll method pulls messages. Consider the following code:

4.5.4.1 poll

public ConsumerRecords<K, V> poll(long timeout) {
    // Use the lightweight lock to detect whether the KafkaConsumer is being used by another thread
    acquireAndEnsureOpen();
    try {
      // A negative timeout is an error
      if (timeout < 0)
        throw new IllegalArgumentException("Timeout must not be negative");
      // Subscription type NONE means no topics subscribed and no partitions assigned; throw
      if (this.subscriptions.hasNoSubscriptionOrUserAssignment())
        throw new IllegalStateException("Consumer is not subscribed to any topics or assigned any partitions");
      // poll for new data until the timeout expires
      long start = time.milliseconds();
      long remaining = timeout;
      do {

        // Core call: pull messages
        Map<TopicPartition, List<ConsumerRecord<K, V>>> records = pollOnce(remaining);
        if (!records.isEmpty()) {
          // before returning the fetched records, we can send off the next round of fetches
          // and avoid block waiting for their responses to enable pipelining while the user
          // is handling the fetched records.
          //
          // NOTE: since the consumed position has already been updated, we must not allow
          // wakeups or any other errors to be triggered prior to returning the fetched records.
          // If records were fetched, fire off the next fetch request without blocking or being interrupted
          // Sending the next fetch before returning avoids blocking the user thread on the next poll
          if (fetcher.sendFetches() > 0 || client.hasPendingRequests())
            client.pollNoWakeup();
          // Return after running the interceptors
          if (this.interceptors == null)
            return new ConsumerRecords<>(records);
          else
            // Pass the consumed records through the interceptor chain
            return this.interceptors.onConsume(new ConsumerRecords<>(records));
       }
        long elapsed = time.milliseconds() - start;
        // Stop when the timeout is exhausted
        remaining = timeout - elapsed;
     } while (remaining > 0);
      return ConsumerRecords.empty();
   } finally {
      release();
   }
 }

1. Use the lightweight lock to detect whether the KafkaConsumer is being used by another thread
2. Check that the timeout is not negative; a negative value throws an exception and consumption stops
3. Check that this consumer has subscribed to topics or been assigned partitions
4. Call pollOnce() to obtain the records
5. Before returning the records, send the next fetch request so the user thread does not block inside pollOnce() on the next call
6. If no records become available within the given timeout, return an empty result

 

As we can see, the real work of poll is done in pollOnce; poll obtains the available data through pollOnce.
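
Because poll() can block for up to the full timeout while waiting for data, a common usage pattern (a sketch under an assumed thread structure, not code from the Kafka source) is to interrupt a long-running poll from another thread with wakeup(); this continues the consumer from the sketch in 4.5.2:

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.WakeupException;

// From another thread (here a shutdown hook): wakeup() is the only thread-safe call
Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        // ... process records ...
    }
} catch (WakeupException e) {
    // Expected on shutdown; poll() throws this after wakeup() is called
} finally {
    consumer.close();
}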

 

4.5.4.2 pollOnce

// Besides fetching new data, this also performs any necessary offset-commit and reset-offset operations
private Map<TopicPartition, List<ConsumerRecord<K, V>>> pollOnce(long timeout) {
    client.maybeTriggerWakeup();
    // 1. Locate and connect to the GroupCoordinator, join the group, sync the group, and auto-commit; the group rebalances during join and sync
    coordinator.poll(time.milliseconds(), timeout);
    // 2. Update offsets of the subscribed topic-partitions (only if some of them have no valid offset yet)
    if (!subscriptions.hasAllFetchPositions())
     updateFetchPositions(this.subscriptions.missingFetchPositions());
    // 3. Retrieve data the fetcher has already pulled
    Map<TopicPartition, List<ConsumerRecord<K, V>>> records = fetcher.fetchedRecords();
    if (!records.isEmpty())
      return records;
    // 4. Send fetch requests; data is pulled from multiple topic-partitions (as long as the partition has no in-flight request)
    fetcher.sendFetches();
    long now = time.milliseconds();
    long pollTimeout = Math.min(coordinator.timeToNextPoll(now), timeout);
    // 5. Call poll to actually send the requests (the low-level network interface)
    client.poll(pollTimeout, now, new PollCondition() {
      @Override
      public boolean shouldBlock() {
        // since a fetch might be completed by the background thread, we need this poll condition
        // to ensure that we do not block unnecessarily in poll()
        return !fetcher.hasCompletedFetches();
     }
   });
    // 6. If the group needs to rebalance, return empty data immediately so the group reaches a stable state sooner
    if (coordinator.needRejoin())
      return Collections.emptyMap();
    // Return the results of the request
    return fetcher.fetchedRecords();
 }

pollOnce can be broken down into six steps:

1 coordinator.poll()

Obtain the GroupCoordinator's address and establish the TCP connection, then send join-group and sync-group requests; only after that has the consumer truly joined a group, at which point it learns the list of topic-partitions it should consume. If auto commit is enabled, the commit also happens in this step. In short, for a newly created group the state goes Empty -> PreparingRebalance -> AwaitingSync -> Stable;

1. Obtain the GroupCoordinator's address and establish the TCP connection;
2. Send the join-group request, after which the group rebalances;
3. Send the sync-group request; only then has the consumer truly joined the group, and the response tells it which topic-partitions to consume;
4. If auto commit is enabled, offsets are also committed in this step (the group-related settings this step relies on are sketched below)
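
The group-membership settings consumed by the ConsumerCoordinator constructor shown in 4.5.2 are ordinary consumer configs. A hedged sketch, reusing the props object from that earlier sketch; the values are illustrative defaults, not taken from the source:

// Group membership and liveness settings used by coordinator.poll()
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");     // evict a member that sends no heartbeats for this long
props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");   // heartbeat interval, must be well below the session timeout
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000");  // max gap between poll() calls before leaving the group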

 

2 updateFetchPositions()

This method updates the fetch-offset information for the topic-partitions this consumer instance is subscribed to. The goal is to obtain a position for each subscribed topic-partition so that the Fetcher knows from which offset to start pulling data for it.

private void updateFetchPositions(Set<TopicPartition> partitions) {
    // First reset the partitions whose offsets were requested via seekToBeginning/seekToEnd, setting their fetch position
    fetcher.resetOffsetsIfNeeded(partitions);
    if (!subscriptions.hasAllFetchPositions(partitions)) {
      // Fetch the committed offsets of all assigned partitions and store them in each TopicPartitionState
      coordinator.refreshCommittedOffsetsIfNeeded();
      // If the fetch position is invalid, use the committed offset obtained above as the fetch position
      fetcher.updateFetchPositions(partitions);
   }
 }
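
At the public API level, the partitions that resetOffsetsIfNeeded handles are those a caller marked with seek-style calls. A hedged usage sketch (topic name and partition number are placeholders; the partition must already be assigned to this consumer):

import java.util.Collections;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("topic_a", 0);
// Mark the partition for a reset to the beginning; the actual position is
// resolved lazily via the reset logic shown above
consumer.seekToBeginning(Collections.singletonList(tp));
// Or jump to an explicit offset, which sets the fetch position directly
consumer.seek(tp, 42L);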

Inside the Fetcher, each topic-partition subscribed by this consumer instance has a corresponding TopicPartitionState object, which records the following:

private static class TopicPartitionState {
    // The offset the Fetcher will use for the next fetch; the Fetcher needs this value when pulling
    private Long position; // last consumed position
    // The high watermark from the last fetch
    private Long highWatermark; // the high watermark from last fetch
    private Long lastStableOffset;
    // The offset of the latest message the consumer has finished processing; updated when the consumer commits offsets
    private OffsetAndMetadata committed;  // last committed position
    // Whether the partition has been paused
    private boolean paused;  // whether this partition has been paused by the user
    // The reset strategy for this topic-partition's offset; set back to null after the reset to prevent repeating it
    private OffsetResetStrategy resetStrategy;  // the strategy to use if the offset needs resetting
}
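
The position and committed fields above correspond to what the public API exposes as position() and committed(). A hedged sketch of reading both for an assigned partition (topic name and partition number are placeholders):

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("topic_a", 0);
// The fetch position: the offset of the next record that will be fetched
long position = consumer.position(tp);
// The last committed offset for this group, or null if nothing was committed yet
OffsetAndMetadata committed = consumer.committed(tp);
System.out.println("position=" + position + ", committed=" + (committed == null ? "none" : committed.offset()));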

 

3 fetcher.fetchedRecords()

Returns the fetched records and updates the fetch-position offset; the committed offset is only updated on offset commit (with auto commit, that happens in step 1);

public Map<TopicPartition, List<ConsumerRecord<K, V>>> fetchedRecords() {
    Map<TopicPartition, List<ConsumerRecord<K, V>>> fetched = new HashMap<>();
    // max.poll.records sets the maximum number of records returned per poll
    int recordsRemaining = maxPollRecords;
    try {
      while (recordsRemaining > 0) {
        if (nextInLineRecords == null || nextInLineRecords.isFetched) {
          // Peek at the head of the queue without removing it; returns null if the queue is empty
          CompletedFetch completedFetch = completedFetches.peek();
          if (completedFetch == null) break;
          // Parse the next completed fetch into nextInLineRecords
          nextInLineRecords = parseCompletedFetch(completedFetch);
          completedFetches.poll();
       } else {
          // Drain records and update the position
          List<ConsumerRecord<K, V>> records = fetchRecords(nextInLineRecords, recordsRemaining);
          TopicPartition partition = nextInLineRecords.partition;
          if (!records.isEmpty()) {
            List<ConsumerRecord<K, V>> currentRecords = fetched.get(partition);
            if (currentRecords == null) {
              fetched.put(partition, records);
           } else {
              List<ConsumerRecord<K, V>> newRecords = new ArrayList<>(records.size() + currentRecords.size());
              newRecords.addAll(currentRecords);
              newRecords.addAll(records);
              fetched.put(partition, newRecords);
           }
            recordsRemaining -= records.size();
         }
       }
     }
   } catch (KafkaException e) {
      if (fetched.isEmpty())
        throw e;
   }
    return fetched;
 }
  private List<ConsumerRecord<K, V>> fetchRecords(PartitionRecords partitionRecords, int maxRecords) {
    if (!subscriptions.isAssigned(partitionRecords.partition)) {
      log.debug("Not returning fetched records for partition {} since it is no longer assigned",
          partitionRecords.partition);
   } else {
      long position = subscriptions.position(partitionRecords.partition);
      // This partition is not currently fetchable, e.g. consumption was paused via pause()
      if (!subscriptions.isFetchable(partitionRecords.partition)) {
        log.debug("Not returning fetched records for assigned partition {} since it is no longer fetchable",
            partitionRecords.partition);
     } else if (partitionRecords.nextFetchOffset == position) {
        // Get the records for this partition and advance partitionRecords' fetch offset (used to check ordering)
        List<ConsumerRecord<K, V>> partRecords = partitionRecords.fetchRecords(maxRecords);
        long nextOffset = partitionRecords.nextFetchOffset;
        log.trace("Returning fetched records at offset {} for assigned partition {} and update " +
            "position to {}", position, partitionRecords.partition, nextOffset);
        // Update the consumed offset (the fetch position)
        subscriptions.position(partitionRecords.partition, nextOffset);
        // Compute the lag (the gap between position and the high watermark); returns null only when the high watermark is null
        Long partitionLag = subscriptions.partitionLag(partitionRecords.partition, isolationLevel);
        if (partitionLag != null)
          this.sensors.recordPartitionLag(partitionRecords.partition, partitionLag);
        return partRecords;
     } else {
        log.debug("Ignoring fetched records for {} at offset {} since the current position is {}",
            partitionRecords.partition, partitionRecords.nextFetchOffset, position);
     }
   }
    partitionRecords.drain();
    return emptyList();
 }
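
The isFetchable check above is what makes pause() effective: a paused partition keeps its assignment but poll() returns no records for it. A hedged sketch (topic name and partition number are placeholders):

import java.util.Collections;
import org.apache.kafka.common.TopicPartition;

TopicPartition tp = new TopicPartition("topic_a", 0);
// Stop returning records for this partition without giving up its assignment
consumer.pause(Collections.singletonList(tp));
// ... poll() keeps the group membership alive but yields nothing for tp ...
consumer.resume(Collections.singletonList(tp));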

 

4 fetcher.sendFetches()

As long as a subscribed topic-partition has no outstanding fetch request, a fetch request is sent for it. Requests are actually sent per node: topic-partitions whose leader is the same node are merged into a single request;

// Send fetch requests to the leaders of all subscribed partitions (as long as that leader has no outstanding fetch request)
  public int sendFetches() {
    // 1. Build the fetch requests
    Map<Node, FetchRequest.Builder> fetchRequestMap = createFetchRequests();
    for (Map.Entry<Node, FetchRequest.Builder> fetchEntry : fetchRequestMap.entrySet()) {
      final FetchRequest.Builder request = fetchEntry.getValue();
      final Node fetchTarget = fetchEntry.getKey();
      log.debug("Sending {} fetch for partitions {} to broker {}", isolationLevel, request.fetchData().keySet(), fetchTarget);
      // 2. Send the fetch requests
      client.send(fetchTarget, request)
        .addListener(new RequestFutureListener<ClientResponse>() {
            @Override
            public void onSuccess(ClientResponse resp) {
              FetchResponse response = (FetchResponse)resp.responseBody();
              if (!matchesRequestedPartitions(request,response)) {
                log.warn("Ignoring fetch response containing partitions {} since it does not match "
                     + "the requested partitions {}", response.responseData().keySet(), request.fetchData().keySet());
                return;
              }
              Set<TopicPartition> partitions = new HashSet<>(response.responseData().keySet());
              FetchResponseMetricAggregator metricAggregator = new FetchResponseMetricAggregator(sensors, partitions);
              for (Map.Entry<TopicPartition, FetchResponse.PartitionData> entry : response.responseData().entrySet()) {
                TopicPartition partition = entry.getKey();
                long fetchOffset = request.fetchData().get(partition).fetchOffset;
                FetchResponse.PartitionData fetchData = entry.getValue();
                log.debug("Fetch {} at offset {} for partition {} returned fetch data {}",
                    isolationLevel, fetchOffset, partition, fetchData);
                completedFetches.add(new CompletedFetch(partition, fetchOffset, fetchData, metricAggregator,
                    resp.requestHeader().apiVersion()));
             }
            
             sensors.fetchLatency.record(resp.requestLatencyMs());
            }
            @Override
            public void onFailure(RuntimeException e) {
              log.debug("Fetch request {} to {} failed", request.fetchData(), fetchTarget, e);
           }
         });
   }
    return fetchRequestMap.size();
 }

1. createFetchRequests(): build fetch requests for all subscribed topic-partitions (skipping any topic-partition that still has a request in flight); the requests are built at the node level;
2. client.send(): send the fetch requests and register the corresponding listener; on success the response data is added to completedFetches, keyed per topic-partition, which simplifies subsequent processing.

As this shows, each call to fetcher.sendFetches sends fetch requests to every sendable topic-partition, and the data pulled by a single call may take several pollOnce iterations to drain. Because fetch responses are completed in the background by the network client, the user's processing thread is blocked as little as possible: it only blocks inside poll when the Fetcher has no data ready to hand out.
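
The fetch sizing knobs the Fetcher was constructed with in 4.5.2 are plain consumer configs. A hedged sketch, reusing the props object from the earlier construction sketch; the values shown are illustrative defaults, not taken from the source:

// Fetch sizing settings consumed by the Fetcher constructor shown earlier
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");                  // broker responds as soon as any data is available
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");              // or after this wait, whichever comes first
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "1048576");  // cap per partition per request
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");               // cap on records returned by one poll()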

 

5 client.poll()

Calls the low-level interface provided by NetworkClient to send the pending requests;

6 coordinator.needRejoin()

If the set of topic-partitions assigned to this instance has changed, the consumer group needs to rebalance.

 

 

4.5.5 Automatic Commit

The simplest way to commit is to let the consumer commit offsets automatically. If enable.auto.commit is set to true, the consumer automatically commits the largest offsets received from poll(). The commit interval is controlled by auto.commit.interval.ms and defaults to 5s. Like everything else in the consumer, automatic commits happen inside the poll loop: on each poll the consumer checks whether it is time to commit, and if so it commits the offsets returned by the previous poll.

However, this convenience comes with problems. Suppose we keep the default 5s commit interval and a rebalance happens 3s after the most recent commit. After the rebalance, consumers start reading from the last committed offset, which is now 3s behind, so messages that arrived during those 3s are processed twice. Committing more frequently narrows the window for duplicates, but duplicates cannot be avoided entirely.
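
Enabling automatic commits is just configuration. A hedged sketch, reusing the props object from the sketch in 4.5.2, with the default interval spelled out:

// Auto commit happens inside coordinator.poll() on the consumer's poll loop
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
// Default is 5000 ms; shortening it narrows the window for duplicates after a rebalance
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");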

 

 

4.5.6 Manual Commit

4.5.6.1 Synchronous Commit

To take control of commits, disable automatic committing by setting enable.auto.commit to false and let the application decide when to commit offsets. Using commitSync() is the simplest and most reliable option: it commits the latest offsets returned by poll(), returns as soon as the commit succeeds, and throws an exception if the commit fails.

while (true) {
  // Pull messages
  ConsumerRecords<String, String> records = consumer.poll(100);
  for (ConsumerRecord<String, String> record : records) {
    System.out.printf("offset = %d, key = %s, value = %s%n",
        record.offset(), record.key(), record.value());
  }
  // After processing this batch, commit the current offsets; commitSync throws if the commit fails
  consumer.commitSync();
}
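
commitSync() also has an overload that takes explicit offsets, which is useful when you want to commit after each record rather than per batch. A hedged sketch that would replace the loop body above; note the +1, since the committed offset is the offset of the next message to read:

import java.util.Collections;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

for (ConsumerRecord<String, String> record : records) {
    // ... process the record ...
    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
    // Commit just this partition's progress, pointing at the next offset to read
    consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(record.offset() + 1)));
}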

 

 

4.5.6.2 Asynchronous Commit

One drawback of synchronous commits is that the application blocks until the broker responds to the commit request, which limits throughput. Committing less frequently improves throughput but increases the number of duplicates if a rebalance occurs. This is where the asynchronous commit API helps: we just send the commit request and do not wait for the broker's response.

while (true) {
  // Pull messages
  ConsumerRecords<String, String> records = consumer.poll(100);
  for (ConsumerRecord<String, String> record : records) {
    System.out.printf("offset = %d, key = %s, value = %s%n",
        record.offset(), record.key(), record.value());
  }
  // Commit asynchronously; the callback receives a null exception on success, so guard against it
  consumer.commitAsync((offsets, exception) -> {
    if (exception != null) {
      exception.printStackTrace();
    } else {
      System.out.println(offsets.size());
    }
  });
}
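
A common compromise (a sketch of the usual pattern, not code from the source; the running flag is an assumed shutdown signal) is to commit asynchronously inside the loop for throughput and make one final synchronous commit before closing, so the last offsets are not lost:

try {
    while (running) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            // ... process the record ...
        }
        // Fast, non-blocking commit on every iteration
        consumer.commitAsync();
    }
} finally {
    try {
        // One last blocking commit before shutting down
        consumer.commitSync();
    } finally {
        consumer.close();
    }
}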

 
