Kafka Source Code Debugging (Part 1): How to Start Debugging the Kafka Source Code

References

  1. Kafka Source Code Series, Part 1 | A quick-start guide to your Kafka source code journey
  2. Setting up a Scala environment quickly on Windows 10
  3. Compiling and starting ZooKeeper 3.5.9 from source
  4. Several ways to fix the error: Failed to load class "org.slf4j.impl.StaticLoggerBinder"
  5. Debugging the Kafka source code (2)
  6. Fixing the error: Process 'command 'E:/java16/bin/java.exe'' finished with non-zero exit value 1
  7. About Kafka: compiling the Kafka source and setting up a source-reading environment

Download links

  1. Zookeeper GitHub repository
  2. Kafka GitHub repository
  3. Scala GitHub release downloads
  4. Scala official installer downloads
  5. IDEA Scala plugin manual install download page
  6. Gradle distribution downloads
  7. Ant official download page
  8. Azul Zulu builds of OpenJDK

1. Environment preparation

1.1. Prepare JDK 1.8

Windows users can consider using the Azul Zulu builds of OpenJDK instead of the Oracle JDK.

Download zulu8.76.0.17-ca-jdk8.0.402-win_x64.zip and extract it to C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64.

Set the system environment variable JAVA_HOME=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64

Set the system environment variable CLASSPATH=.;%JAVA_HOME%\lib;%JAVA_HOME%\lib\tools.jar

Append to the system Path variable: %JAVA_HOME%\bin\;%JAVA_HOME%\jre\bin\
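
To confirm the JDK is picked up, open a new cmd window and check the version. A sketch of the expected output follows; the exact build strings depend on the Zulu package you installed:

C:\Users\XXXX>java -version
openjdk version "1.8.0_402"
OpenJDK Runtime Environment (Zulu 8.76.0.17-CA-win64) (build 1.8.0_402-b06)
OpenJDK 64-Bit Server VM (Zulu 8.76.0.17-CA-win64) (build 25.402-b06, mixed mode)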

1.2. Prepare Maven 3

Download the current apache-maven-3.6.3 or a newer release from the Maven website; the newer the better.

Extract it to a directory of your choice, for example F:\software\apache-maven-3.6.3,

and set the system environment variable MAVEN_HOME=F:\software\apache-maven-3.6.3

Append to the system Path variable: %MAVEN_HOME%\bin\
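
You can verify Maven the same way; a rough sketch of the output (the details will match the package you extracted):

C:\Users\XXXX>mvn -v
Apache Maven 3.6.3
Maven home: F:\software\apache-maven-3.6.3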

1.3. Prepare Scala 2.12.10

Kafka needs Scala to run. The default Scala version is configured in gradle.properties at the Kafka project root: scalaVersion=2.12.10

Next, download the scala-2.12.10.zip package from the Scala official installer downloads, extract it to a suitable directory, and configure the system environment:

  1. Add the system environment variable SCALA_HOME=F:\software\scala-2.12.10
  2. Append to the system Path variable: %SCALA_HOME%\bin\

Verify Scala in a cmd terminal; a prompt like the following indicates the installation succeeded:

C:\Users\XXXX>scala
Welcome to Scala 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_402).
Type in expressions for evaluation. Or try :help.

scala>

Scala is now installed.

1.3.1. Configure Scala in IDEA

Now configure Scala in IDEA: install the Scala plugin. If the in-IDE installation fails, you can download the plugin manually from the IDEA Scala plugin manual install download page and install it from disk.

1.4. Prepare Gradle 5.6.2

Open ${basedir}/gradle/wrapper/gradle-wrapper.properties in the Kafka project to find the required version:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-5.6.2-all.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

and download gradle-5.6.2-all.zip as that file specifies.

Gradle download packages:

  1. Binary-only package: gradle-5.6.2-bin.zip
  2. Source package: gradle-5.6.2-src.zip
  3. Complete package containing all of the above: gradle-5.6.2-all.zip

Extract it to a directory (for example F:\software\gradle-5.6.2-all\gradle-5.6.2).

Add the system environment variable GRADLE_HOME=F:\software\gradle-5.6.2-all\gradle-5.6.2

Append to the Path variable: %GRADLE_HOME%\bin

After confirming the settings, verify in cmd:

C:\Users\Kitman>gradle -v

------------------------------------------------------------
Gradle 5.6.2
------------------------------------------------------------

Build time:   2019-09-05 16:13:54 UTC
Revision:     55a5e53d855db8fc7b0e494412fc624051a8e781

Kotlin:       1.3.41
Groovy:       2.5.4
Ant:          Apache Ant(TM) version 1.9.14 compiled on March 12 2019
JVM:          1.8.0_402 (Azul Systems, Inc. 25.402-b06)
OS:           Windows 10 10.0 amd64

3. Getting the source code

  1. Kafka GitHub repository
  2. Zookeeper GitHub repository

# clone kafka
git clone https://github.com/apache/kafka.git
# clone zookeeper
git clone https://github.com/apache/zookeeper.git

3.1. Confirm the Kafka source version

After getting the source we still need to decide which version to debug.

Tencent Cloud currently offers several Kafka versions to choose from:

  1. 0.10.2
  2. 1.1.1
  3. 2.4.1 (recommended)
  4. 2.8.1
  5. 3.2.3

Tencent Cloud recommends 2.4.1 by default, so we will take version 2.4.1 as our study target.

# find the tags for this version
git tag | grep 2.4.1
## 2.4.1
## 2.4.1-rc0

# create a local branch from the tag
git branch myloc-2.4.1 2.4.1

# switch to that branch
git checkout myloc-2.4.1
## Switched to branch 'myloc-2.4.1'
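
If you prefer, the two git steps above can be collapsed into a single command that creates the branch and switches to it:

# create and switch to the local branch in one step
git checkout -b myloc-2.4.1 2.4.1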

3.2. Confirm the ZooKeeper version that matches Kafka

The Kafka source records which ZooKeeper version each Kafka version depends on. Open the file kafka/gradle/dependencies.gradle in the Kafka repository and search for the keyword zookeeper to find it. For example:

On Kafka's 2.4.1 branch, the corresponding ZooKeeper version is:

versions += [
	zookeeper: "3.5.7"
]
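
If you would rather search from the command line, something like the following should locate the entry (findstr is the Windows counterpart of grep; run it from the kafka repository root):

# print matching lines together with their line numbers
findstr /n "zookeeper" gradle\dependencies.gradle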

Likewise, Tencent Cloud's ZooKeeper service offers two versions to choose from:

  1. 3.5.9
  2. 3.6.3

So the closest relatively stable version is 3.5.9.

List the tags for that version:

git tag | grep 3.5.9
## release-3.5.9
## release-3.5.9-rc0
## release-3.5.9-rc1
## release-3.5.9-rc2

# create a local branch from the tag
git branch myloc-release-3.5.9 release-3.5.9

# switch to that branch
git checkout myloc-release-3.5.9
## Switched to branch 'myloc-release-3.5.9'

4. Compiling and starting ZooKeeper

The steps below only cover starting ZooKeeper from the cmd command line. For details, or to start the ZooKeeper service from IntelliJ IDEA, see: Compiling and starting ZooKeeper 3.5.9 from source.

4.1. Install Ant

According to README_packaging.txt in the ZooKeeper project root, Ant 1.9.4 or newer is recommended.

Download the latest binary package from the Ant official download page, for example apache-ant-1.10.14-bin.zip.

Note: Ant provides both binary and source downloads. A source package has to be built before it can actually be used, so download the binary package, which can be configured and used directly.

Extract the binary package, add an environment variable named ANT_HOME pointing to the parent directory of the extracted bin directory, and append %ANT_HOME%\bin to Path.

Run ant -version in a newly opened cmd window to verify the installation.
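
A successful installation prints the Ant version, roughly like this (the compile date depends on the release you downloaded):

C:\Users\XXXX>ant -version
Apache Ant(TM) version 1.10.14 compiled on ...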

4.2. Compile with Ant

Note that there is no need to specify an Ant target here; just run a single command in the ZooKeeper project root:

ant

Looking at build.xml in the ZooKeeper project root, the project's default Ant target is configured as jar:

<project name="ZooKeeper" default="jar" 
xmlns:ivy="antlib:org.apache.ivy.ant"
xmlns:artifact="antlib:org.apache.maven.artifact.ant"
xmlns:maven="antlib:org.apache.maven.artifact.ant"
xmlns:cs="antlib:com.puppycrawl.tools.checkstyle.ant">

So the bare ant command is effectively equivalent to ant jar.
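
Either invocation should finish with Ant's usual success banner, sketched below (build times will differ):

ant jar
...
BUILD SUCCESSFUL
Total time: ...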

4.3. Prepare a zoo.cfg configuration file

Go to ${basedir}/conf/, make a copy of zoo_sample.cfg in the same directory, and rename the copy to zoo.cfg.

Open it and add two properties:

# absolute path on Windows; this directory stores the ZooKeeper data
dataDir=F:\\privateBox\\kafkaCodeReadProject\\zookeeper\\tmp\\zookeeper
# change the admin server port to avoid the commonly used 8080
admin.serverPort=8083

This configuration file is passed as an argument when the java command is run, for example:

java org.apache.zookeeper.server.quorum.QuorumPeerMain zoo.cfg
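
In practice the class also has to be on the classpath; zkServer.cmd assembles it via zkEnv.cmd, so a hand-written equivalent would look roughly like the following (the build\classes and build\lib paths are assumptions based on the Ant build layout):

java -cp "build\classes;build\lib\*;conf" org.apache.zookeeper.server.quorum.QuorumPeerMain conf\zoo.cfg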

4.4. Start the service with the zkServer.cmd script

Run the following command in the Windows command prompt (cmd) to start the ZooKeeper service:

cmd /k bin\zkServer.cmd

The ZooKeeper service is now up and running.
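
To double-check that it is listening, you can attach the bundled CLI client or query the AdminServer port configured above (a sketch of typical usage):

# connect with the client script from the source tree
cmd /k bin\zkCli.cmd -server 127.0.0.1:2181
# or query the AdminServer over HTTP
http://localhost:8083/commands/stat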

5. Compiling and starting Kafka

5.1. Open the project in IntelliJ IDEA

After getting the source, switch to the 2.4.1 tag or branch, then open the project in IDEA.

The first time the project is opened, IDEA may show a balloon saying Gradle build scripts found; just click Load Gradle Project and wait for the project to load.

It may also automatically download the Gradle version specified in ${basedir}/gradle/wrapper/gradle-wrapper.properties; for example, kafka-2.4.1 requires gradle-5.6.2:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-5.6.2-all.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

After the specified Gradle version has been downloaded, IDEA uses it to build the project, download dependencies, and so on.

However, if Gradle is already installed externally there is no need to wait for the download: click the x next to the progress bar at the bottom to cancel it, then apply the IDEA settings below.

  1. Open File > Settings... > Build, Execution, Deployment >

    1. Open Build Tools >
      1. Open Gradle and select the current kafka project under Gradle projects
        1. Set Use Gradle from to Specified location and point it at the external Gradle installation directory (for example F:\software\gradle-5.6.2-all\gradle-5.6.2)
        2. Set Gradle JVM to JAVA_HOME
        3. Set Build and run using to IntelliJ IDEA; this avoids the finished with non-zero exit value -1 error when Kafka is stopped
    2. Open Compiler >
      1. Open Java Compiler and set Project bytecode version to 8

    After finishing these settings, click OK to exit. IDEA will immediately rebuild the project with the configuration above; just wait for the dependencies to download and the project to build.

    Tip:
    IntelliJ IDEA can be pointed at a Gradle installation directory of its own instead of taking Gradle from Path, so Path does not have to be pinned to Gradle 5 just for the Kafka project; a newer Gradle can still be configured on Path.

    Note: why not Gradle 8?

    Because this build.gradle cannot be compiled with Gradle 8; it fails with the error below (MavenDeployment belongs to the legacy maven plugin, which was removed in Gradle 7, so Gradle 7 and later cannot compile this build script):

    Build file 'F:\privateBox\kafkaCodeReadProject\kafka\build.gradle' line: 192
    
    Could not compile build file 'F:\privateBox\kafkaCodeReadProject\kafka\build.gradle'.
    > startup failed:
      build file 'F:\privateBox\kafkaCodeReadProject\kafka\build.gradle': 192: unable to resolve class MavenDeployment 
       @ line 192, column 32.
                       beforeDeployment { MavenDeployment deployment -> signing.signPom(deployment) }
    
  2. Open File > Project Structure... > Project Settings >

    1. Under Project Settings > Project: set the SDK to the JDK 1.8 that is configured on the system Path
    2. Under Platform Settings > SDKs: keep only one JDK 1.8 entry and remove the JDK bundled with IDEA from the list
  3. In IDEA's Project panel, right-click the kafka project root, choose Add Framework Support... from the context menu, select Scala, and point it at the Scala installation path (the parent directory of bin).

5.2. Adjust the project configuration files

  1. config/server.properties
    1. Change zookeeper.connect=localhost:2181 to the host name and port of the local ZooKeeper; it corresponds to this setting in ZooKeeper's zoo.cfg:
      # the port at which the clients will connect
      clientPort=2181
      
    2. Change log.dirs=/tmp/kafka-logs to an absolute Windows path, for example F:\\privateBox\\kafkaCodeReadProject\\kafka\\tmp\\kafka-logs
  2. build.gradle
    1. Inside project(':core') { dependencies { ... } }, change testCompile libs.slf4jlog4j to compile libs.slf4jlog4j (see the sketch after this note)

After modifying build.gradle, remember to click Reload All Gradle Projects in the top-left corner of the Gradle panel.
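
For reference, the edited fragments end up looking roughly like this (the paths are examples; adjust them to your machine):

# config/server.properties
zookeeper.connect=localhost:2181
log.dirs=F:\\privateBox\\kafkaCodeReadProject\\kafka\\tmp\\kafka-logs

// build.gradle, inside project(':core') { dependencies { ... } }
compile libs.slf4jlog4j   // was: testCompile libs.slf4jlog4j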

5.3. Compile with Gradle

Run the command:

gradle classes

Or, in IntelliJ IDEA's Gradle panel, double-click the kafka > build > classes task.
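
gradle classes only compiles the main classes, which is enough for launching kafka.Kafka from the IDE. If you also want jars, the build exposes further tasks; the names below are taken from the Kafka build of that era, so treat the exact list as an assumption:

# compile the main classes only
gradle classes
# build the jars for all modules
gradle jar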

5.4. Start the Kafka service

Create a logs directory in the project root to hold the log4j output.

The Kafka service entry point is core/src/main/scala/kafka/Kafka.scala. Right-click it and choose Modify Run Configuration... from the context menu.

  • SDK: choose JDK 1.8
  • VM Options:
    -Dlog4j.configuration=file:config/log4j.properties
    -Dkafka.logs.dir=logs
    
  • Program arguments:
    config/server.properties
    

Note:
Besides setting VM Options as above, there is one more way to get log4j.properties onto the classpath.

  1. Create the ${basedir}/core/src/main/resources directory and mark it as a resources root in IDEA
  2. Copy the ${basedir}/config/log4j.properties file into the newly created resources directory
  3. After copying, you can additionally consider adding a property kafka.logs.dir=logs

If you do this, the two -D options in VM Options above can be dropped.

The startup log looks like this:

"C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin\java.exe" -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:51139,suspend=y,server=n -Dlog4j.configuration=file:config/log4j.properties -Dkafka.logs.dir=logs -javaagent:C:\Users\Kitman\AppData\Local\JetBrains\IdeaIC2022.1\captureAgent\debugger-agent.jar -Dfile.encoding=UTF-8 -classpath "...太长省略..." kafka.Kafka config/server.properties
Connected to the target VM, address: '127.0.0.1:51139', transport: 'socket'
[2024-03-14 04:16:23,824] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-03-14 04:16:24,572] INFO starting (kafka.server.KafkaServer)
[2024-03-14 04:16:24,573] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2024-03-14 04:16:24,610] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2024-03-14 04:16:40,293] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,293] INFO Client environment:host.name=DESKTOP-S0UTLJU (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,294] INFO Client environment:java.version=1.8.0_402 (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,294] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,295] INFO Client environment:java.home=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,295] INFO Client environment:java.class.path=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\cat.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\charsets.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\crs-agent.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\dnsns.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\jaccess.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\localedata.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\nashorn.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunec.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\zipfs.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\jce.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\jfr.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\jsse.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\management-agent.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\resources.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\rt.jar;F:\privateBox\kafkaCodeReadProject\kafka\core\out\production\classes;F:\privateBox\kafkaCodeReadProject\kafka\clients\out\production\classes;F:\privateBox\kafkaCodeReadProject\kafka\clients\out\production\resources;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.module\jackson-module-scala_2.12\2.10.0\343a5406ea085a42d14997c1f0ce3ca8af6e74d9\jackson-module-scala_2.12-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.dataformat\jackson-dataformat-csv\2.10.0\fdbc401c60e2343a05b6842b21edf1fc5ec8ec79\jackson-dataformat-csv-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.datatype\jackson-datatype-jdk8\2.10.0\ac9b5e4ec02f243c580113c0c58564d90432afaa\jackson-datatype-jdk8-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.core\jackson-databind\2.10.0\1127c9cf62f2bb3121a3a2a0a1351d251a602117\jackson-databind-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\net.sf.jopt-simple\jopt-simple\5.0.4\4fdac2fbe92dfad86aa6e9301736f6b4342a3f5c\jopt-simple-5.0.4.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.yammer.metrics\metrics-core\2.2.0\f82c035cfa786d3cbec362c38c22a5f5b1bc8724\metrics-core-2.2.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang.modules\scala-collection-compat_2.12\2.1.2\7a96dbe1dc17a92d46a52b6f84a36bdee1936548\scala-collection-compat_2.12-2.1.2.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang.modules\scala-java8-compat_2.12\0.9.0\9525fb6bbf54a9caf0f7e1b65b261215b02fe939\scala-java8-compat_2.12-0.9.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.typesafe.scala-logg
ing\scala-logging_2.12\3.9.2\b1f19bc6774e01debf09bf5f564ad3613687bf49\scala-logging_2.12-3.9.2.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang\scala-reflect\2.12.10\14cb7beb516cd8e07716133668c427792122c926\scala-reflect-2.12.10.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang\scala-library\2.12.10\3509860bc2e5b3da001ed45aca94ffbe5694dbda\scala-library-2.12.10.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.apache.zookeeper\zookeeper\3.5.7\12bdf55ba8be7fc891996319d37f35eaad7e63ea\zookeeper-3.5.7.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.slf4j\slf4j-log4j12\1.7.28\9c45c87557628d1c06d770e1382932dc781e3d5d\slf4j-log4j12-1.7.28.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.slf4j\slf4j-api\1.7.28\2cd9b264f76e3d087ee21bfc99305928e1bdb443\slf4j-api-1.7.28.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\commons-cli\commons-cli\1.4\c51c00206bb913cd8612b24abd9fa98ae89719b1\commons-cli-1.4.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\log4j\log4j\1.2.17\5af35056b4d257e4b64b9e8069c0746e8b08629f\log4j-1.2.17.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.github.luben\zstd-jni\1.4.3-1\c3c8278c6a02b332a21fcd2c22434d0bc928126d\zstd-jni-1.4.3-1.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.lz4\lz4-java\1.6.0\b49e2b422a5b7145ba7aa0c3f60c13664a5c5acf\lz4-java-1.6.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.xerial.snappy\snappy-java\1.1.7.3\241bb74a1eb37d68a4e324a4bc3865427de0a62d\snappy-java-1.1.7.3.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.module\jackson-module-paranamer\2.10.0\4fc4ba10b328a53ac5653cee15504621c6b66083\jackson-module-paranamer-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.core\jackson-annotations\2.10.0\e01cfd93b80d6773b3f757c78e756c9755b47b81\jackson-annotations-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.core\jackson-core\2.10.0\4e2c5fa04648ec9772c63e2101c53af6504e624e\jackson-core-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.apache.zookeeper\zookeeper-jute\3.5.7\1270f80b08904499a6839a2ee1800da687ad96b4\zookeeper-jute-3.5.7.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.apache.yetus\audience-annotations\0.5.0\55762d3191a8d6610ef46d11e8cb70c7667342a3\audience-annotations-0.5.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-handler\4.1.45.Final\51071ba9977cce64e3a58e6f2f6326bbb7e5bc7f\netty-handler-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-transport-native-epoll\4.1.45.Final\cf153257db449b6a74adb64fbd2903542af55892\netty-transport-native-epoll-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.thoughtworks.paranamer\paranamer\2.8\619eba74c19ccf1da8ebec97a2d7f8ba05773dd6\paranamer-2.8.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-codec\4.1.45.Final\8c768728a3e82c3cef62a7a2c8f52ae8d777bac9\netty-codec-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-transport\4.1.45.Final\b7d8f2645e330bd66cd4f28f155e
ba605e0c8758\netty-transport-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-buffer\4.1.45.Final\bac54338074540c4f3241a3d92358fad5df89ba\netty-buffer-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-common\4.1.45.Final\5cf5e448d44ddf53d00f2fc4047c2a7aceaa7087\netty-common-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-transport-native-unix-common\4.1.45.Final\49f9fa4b7fe7d3e562666d050049541b86822549\netty-transport-native-unix-common-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-resolver\4.1.45.Final\9e77bdc045d33a570dabf9d53192ea954bb195d7\netty-resolver-4.1.45.Final.jar;D:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2021.3.2\lib\idea_rt.jar;C:\Users\Kitman\AppData\Local\JetBrains\IdeaIC2022.1\captureAgent\debugger-agent.jar (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,296] INFO Client environment:java.library.path=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin\;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\bin\;F:\software\apache-maven-3.6.3\bin\;F:\software\gradle-5.6.2-all\gradle-5.6.2\bin;C:\Program Files (x86)\VMware\VMware Workstation\bin\;C:\Program Files\Python310\Scripts\;C:\Program Files\Python310\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;D:\Program Files (x86)\NetSarang\Xftp 7\;D:\Program Files (x86)\NetSarang\Xshell 7\;C:\Android;C:\Windows\System32;C:\Program Files\dotnet\;F:\software\apache-ant-1.10.14-bin\apache-ant-1.10.14\bin\;F:\software\scala-2.12.10\bin\;D:\Program Files\Git\cmd;C:\Users\Kitman\AppData\Local\Microsoft\WindowsApps;;C:\Users\Kitman\AppData\Local\Programs\Microsoft VS Code\bin;. (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,296] INFO Client environment:java.io.tmpdir=C:\Users\Kitman\AppData\Local\Temp\ (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,296] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,296] INFO Client environment:os.name=Windows 10 (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,296] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,296] INFO Client environment:os.version=10.0 (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,297] INFO Client environment:user.name=Kitman (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,297] INFO Client environment:user.home=C:\Users\Kitman (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,297] INFO Client environment:user.dir=F:\privateBox\kafkaCodeReadProject\kafka (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,297] INFO Client environment:os.memory.free=188MB (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,297] INFO Client environment:os.memory.max=3614MB (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,297] INFO Client environment:os.memory.total=245MB (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,307] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@62e136d3 (org.apache.zookeeper.ZooKeeper)
[2024-03-14 04:16:40,334] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-03-14 04:16:40,671] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2024-03-14 04:16:40,680] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2024-03-14 04:16:40,681] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2024-03-14 04:16:40,688] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2024-03-14 04:16:40,689] INFO Socket connection established, initiating session, client: /127.0.0.1:51160, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2024-03-14 04:16:41,167] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000003a3540000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2024-03-14 04:16:41,176] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2024-03-14 04:16:42,378] INFO Cluster ID = 99SF2P8sQqyy9jqduosvHQ (kafka.server.KafkaServer)
[2024-03-14 04:16:42,383] WARN No meta.properties file under dir F:\tmp\kafka-logs\meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2024-03-14 04:16:42,457] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = null
	inter.broker.protocol.version = 2.4-IV1
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /tmp/kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.4-IV1
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 6000
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2024-03-14 04:16:42,468] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = null
	inter.broker.protocol.version = 2.4-IV1
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /tmp/kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.4-IV1
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 6000
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2024-03-14 04:16:42,506] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-14 04:16:42,506] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-14 04:16:42,508] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-14 04:16:42,564] INFO Log directory F:\tmp\kafka-logs not found, creating it. (kafka.log.LogManager)
[2024-03-14 04:16:42,582] INFO Loading logs. (kafka.log.LogManager)
[2024-03-14 04:16:42,595] INFO Logs loading complete in 12 ms. (kafka.log.LogManager)
[2024-03-14 04:16:42,628] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-03-14 04:16:42,633] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-03-14 04:16:43,161] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2024-03-14 04:16:43,230] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2024-03-14 04:16:43,233] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2024-03-14 04:16:43,268] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:43,269] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:43,270] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:43,271] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:43,301] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-14 04:16:59,028] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2024-03-14 04:16:59,390] INFO Stat of the created znode at /brokers/ids/0 is: 388,388,1710361019251,1710361019251,1,0,0,72057609663021056,200,0,388
 (kafka.zk.KafkaZkClient)
[2024-03-14 04:16:59,391] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(DESKTOP-S0UTLJU,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 388 (kafka.zk.KafkaZkClient)
[2024-03-14 04:16:59,827] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:59,839] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:59,840] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:16:59,922] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-03-14 04:16:59,923] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-03-14 04:16:59,934] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 8 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2024-03-14 04:17:00,061] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:19000,blockEndProducerId:19999) by writing to Zk with path version 20 (kafka.coordinator.transaction.ProducerIdManager)
[2024-03-14 04:17:00,143] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-14 04:17:00,145] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-14 04:17:00,214] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-14 04:17:00,240] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-14 04:17:00,288] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-14 04:17:00,348] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2024-03-14 04:17:00,350] WARN Error while loading kafka-version.properties: null (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-14 04:17:00,351] INFO Kafka version: unknown (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-14 04:17:00,351] INFO Kafka commitId: unknown (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-14 04:17:00,352] INFO Kafka startTimeMs: 1710361020349 (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-14 04:17:00,354] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

6. Trying to debug Kafka transactional message sending

6.1. Suggested breakpoints

kafka.server.KafkaApis#handle is the top-level method that handles all requests and multiplexes them to the right API: every request from a client to the broker goes through it, is matched on its API key, and is then forwarded to the corresponding handler method.

When debugging, it is easiest to put breakpoints on the relevant case branches. For example, to debug transactional message sending we can set breakpoints at the following places:

  1. case ApiKeys.INIT_PRODUCER_ID => handleInitProducerIdRequest(request)
  2. case ApiKeys.OFFSET_FOR_LEADER_EPOCH => handleOffsetForLeaderEpochRequest(request)
  3. case ApiKeys.ADD_PARTITIONS_TO_TXN => handleAddPartitionToTxnRequest(request)
  4. case ApiKeys.ADD_OFFSETS_TO_TXN => handleAddOffsetsToTxnRequest(request)
  5. case ApiKeys.END_TXN => handleEndTxnRequest(request)
  6. case ApiKeys.WRITE_TXN_MARKERS => handleWriteTxnMarkersRequest(request)
  7. case ApiKeys.TXN_OFFSET_COMMIT => handleTxnOffsetCommitRequest(request)

The full code of the method is shown below:

  /**
   * Top-level method that handles all requests and multiplexes to the right api
   */
  def handle(request: RequestChannel.Request): Unit = {
    try {
      trace(s"Handling request:${request.requestDesc(true)} from connection ${request.context.connectionId};" +
        s"securityProtocol:${request.context.securityProtocol},principal:${request.context.principal}")
      request.header.apiKey match {
        case ApiKeys.PRODUCE => handleProduceRequest(request)
        case ApiKeys.FETCH => handleFetchRequest(request)
        case ApiKeys.LIST_OFFSETS => handleListOffsetRequest(request)
        case ApiKeys.METADATA => handleTopicMetadataRequest(request)
        case ApiKeys.LEADER_AND_ISR => handleLeaderAndIsrRequest(request)
        case ApiKeys.STOP_REPLICA => handleStopReplicaRequest(request)
        case ApiKeys.UPDATE_METADATA => handleUpdateMetadataRequest(request)
        case ApiKeys.CONTROLLED_SHUTDOWN => handleControlledShutdownRequest(request)
        case ApiKeys.OFFSET_COMMIT => handleOffsetCommitRequest(request)
        case ApiKeys.OFFSET_FETCH => handleOffsetFetchRequest(request)
        case ApiKeys.FIND_COORDINATOR => handleFindCoordinatorRequest(request)
        case ApiKeys.JOIN_GROUP => handleJoinGroupRequest(request)
        case ApiKeys.HEARTBEAT => handleHeartbeatRequest(request)
        case ApiKeys.LEAVE_GROUP => handleLeaveGroupRequest(request)
        case ApiKeys.SYNC_GROUP => handleSyncGroupRequest(request)
        case ApiKeys.DESCRIBE_GROUPS => handleDescribeGroupRequest(request)
        case ApiKeys.LIST_GROUPS => handleListGroupsRequest(request)
        case ApiKeys.SASL_HANDSHAKE => handleSaslHandshakeRequest(request)
        case ApiKeys.API_VERSIONS => handleApiVersionsRequest(request)
        case ApiKeys.CREATE_TOPICS => handleCreateTopicsRequest(request)
        case ApiKeys.DELETE_TOPICS => handleDeleteTopicsRequest(request)
        case ApiKeys.DELETE_RECORDS => handleDeleteRecordsRequest(request)
        case ApiKeys.INIT_PRODUCER_ID => handleInitProducerIdRequest(request)
        case ApiKeys.OFFSET_FOR_LEADER_EPOCH => handleOffsetForLeaderEpochRequest(request)
        case ApiKeys.ADD_PARTITIONS_TO_TXN => handleAddPartitionToTxnRequest(request)
        case ApiKeys.ADD_OFFSETS_TO_TXN => handleAddOffsetsToTxnRequest(request)
        case ApiKeys.END_TXN => handleEndTxnRequest(request)
        case ApiKeys.WRITE_TXN_MARKERS => handleWriteTxnMarkersRequest(request)
        case ApiKeys.TXN_OFFSET_COMMIT => handleTxnOffsetCommitRequest(request)
        case ApiKeys.DESCRIBE_ACLS => handleDescribeAcls(request)
        case ApiKeys.CREATE_ACLS => handleCreateAcls(request)
        case ApiKeys.DELETE_ACLS => handleDeleteAcls(request)
        case ApiKeys.ALTER_CONFIGS => handleAlterConfigsRequest(request)
        case ApiKeys.DESCRIBE_CONFIGS => handleDescribeConfigsRequest(request)
        case ApiKeys.ALTER_REPLICA_LOG_DIRS => handleAlterReplicaLogDirsRequest(request)
        case ApiKeys.DESCRIBE_LOG_DIRS => handleDescribeLogDirsRequest(request)
        case ApiKeys.SASL_AUTHENTICATE => handleSaslAuthenticateRequest(request)
        case ApiKeys.CREATE_PARTITIONS => handleCreatePartitionsRequest(request)
        case ApiKeys.CREATE_DELEGATION_TOKEN => handleCreateTokenRequest(request)
        case ApiKeys.RENEW_DELEGATION_TOKEN => handleRenewTokenRequest(request)
        case ApiKeys.EXPIRE_DELEGATION_TOKEN => handleExpireTokenRequest(request)
        case ApiKeys.DESCRIBE_DELEGATION_TOKEN => handleDescribeTokensRequest(request)
        case ApiKeys.DELETE_GROUPS => handleDeleteGroupsRequest(request)
        case ApiKeys.ELECT_LEADERS => handleElectReplicaLeader(request)
        case ApiKeys.INCREMENTAL_ALTER_CONFIGS => handleIncrementalAlterConfigsRequest(request)
        case ApiKeys.ALTER_PARTITION_REASSIGNMENTS => handleAlterPartitionReassignmentsRequest(request)
        case ApiKeys.LIST_PARTITION_REASSIGNMENTS => handleListPartitionReassignmentsRequest(request)
        case ApiKeys.OFFSET_DELETE => handleOffsetDeleteRequest(request)
      }
    } catch {
      case e: FatalExitError => throw e
      case e: Throwable => handleError(request, e)
    } finally {
      request.apiLocalCompleteTimeNanos = time.nanoseconds
    }
  }

6.2. Write a test service to send Kafka transactional messages

Controller (excerpt; the enclosing controller class and its imports are omitted)

    @GetMapping("/tx-two")
    @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
    public String sendTransactionTwo(@RequestParam("message") String message) throws InterruptedException {
        log.info("发送消息:{}", message);
        senderService.sendTransactionTwo(message);
        return "send transaction-one doing...";
    }

Service

import com.leekitman.pangea.evolution.kafka.consumer.group.TransactionEventOne;
import com.leekitman.pangea.evolution.kafka.consumer.group.TransactionEventTwo;
import com.leekitman.pangea.evolution.kafka.controller.callback.TransactionOneCallback;
import com.leekitman.pangea.evolution.kafka.dao.ProcessEventRepository;
import com.leekitman.pangea.evolution.kafka.entity.ProcessEventEntity;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.util.concurrent.ListenableFuture;

/**
 * @author Leekitman Li
 * @version 2.0
 * @date 2023/2/8 12:26
 */
@Service
@Transactional(rollbackFor = Exception.class)
@Slf4j
public class SenderService {

    @Autowired
    private KafkaTemplate<String, String> template;
    @Autowired
    private TransactionOneCallback transactionOneCallback;
    @Autowired
    private ProcessEventRepository processEventRepository;

    /**
     * Publish the first event.
     *
     * @param message the message
     */
    public void sendTransactionTwo(String message) {

        final Iterable<ProcessEventEntity> all = processEventRepository.findAll();
        log.info("1-尝试开启数据库事务,查询数据库:{}", all);

		// 发起第一个 TOPIC 的事务消息
        final ListenableFuture<SendResult<String, String>> result = this.template.send(
            TransactionEventOne.TOPIC, message);
        result.addCallback(transactionOneCallback);
    }

    /**
     * Publish the second event while consuming the first event.
     *
     * @param message the message
     */
    public void doEventV1(String message) {
        final Iterable<ProcessEventEntity> all = processEventRepository.findAll();
        log.info("2-尝试开启数据库事务,查询数据库:{}", all);

		// 在消费者发起另一个 TOPIC 的事务消息
        final ListenableFuture<SendResult<String, String>> result = this.template.send(
            TransactionEventTwo.TOPIC, message);
        result.addCallback(transactionOneCallback);

    }

    /**
     * Consume the second event.
     *
     * @param message the message
     */
    public void doEventV2(String message) {
        final Iterable<ProcessEventEntity> all = processEventRepository.findAll();
        log.info("3-尝试开启数据库事务,查询数据库,:{}", all);
    }
}

KafkaListener

import com.leekitman.pangea.evolution.kafka.consumer.group.TransactionEventOne;
import com.leekitman.pangea.evolution.kafka.consumer.group.TransactionEventTwo;
import com.leekitman.pangea.evolution.kafka.service.SenderService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

/**
 * @author Leekitman Li
 * @version 2.0
 * @date 2022/12/20 15:06
 */
@Component
@Slf4j
public class TransactionV2TopicKafkaConsumerListener {

    @Autowired
    private SenderService senderService;

    @KafkaListener(topicPartitions = {
        @TopicPartition(topic = TransactionEventOne.TOPIC, partitions = TransactionEventOne.PARTITION_ID_0)
    })
    @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
    public void eventOne(String value,
                           @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                           @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                           @Header(KafkaHeaders.OFFSET) long offset) {
        log.info("eventOne:接收kafka消息:[{}],from {} @ {}@ {}", value, topic, partition, offset);
        senderService.doEventV1(value);
    }


    @KafkaListener(topicPartitions = {
        @TopicPartition(topic = TransactionEventTwo.TOPIC, partitions = TransactionEventTwo.PARTITION_ID_0)
    })
    @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
    public void eventTwo(String value,
                           @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                           @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                           @Header(KafkaHeaders.OFFSET) long offset) {
        log.info("eventTwo:接收kafka消息:[{}],from {} @ {}@ {}", value, topic, partition, offset);
        senderService.doEventV2(value);
    }
}
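
The snippets above assume that Kafka transactions are enabled on the producer side and that a kafkaTransactionManager bean exists. With Spring Boot this usually comes down to configuration along the following lines; this is a hypothetical application.properties sketch, tx-demo- and tx-demo-group are placeholder names, and setting a transaction-id-prefix is what makes Spring Boot create a transactional producer factory plus a KafkaTransactionManager bean:

# application.properties (hypothetical sketch)
spring.kafka.bootstrap-servers=localhost:9092
# a transaction-id-prefix switches the producer factory into transactional mode
spring.kafka.producer.transaction-id-prefix=tx-demo-
spring.kafka.producer.acks=all
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.group-id=tx-demo-group
# only read messages from committed transactions
spring.kafka.consumer.isolation-level=read-committed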

The full test source and a detailed log archive are covered in the next article. Readers who want to study the logs can refer to "Kafka Source Code Debugging (Part 2): Writing a simple test client program, and archiving the logs of sending transactional messages".
