Kafka Source Code Debugging (2): Writing a Simple Test Client, and Archiving the Logs of Sending Transactional Messages

Continuing from the previous post: "Kafka Source Code Debugging (1): How to Start Debugging the Kafka Source Code".

1. Write a test client that follows the typical "consume-transform-produce" pattern of streaming applications

Config

spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      # Transactional ID prefix; setting any value enables Kafka transactions
      transaction-id-prefix: tx-kafka-
      value-serializer: org.springframework.kafka.support.serializer.ToStringSerializer
    consumer:
      group-id: spring-kafka-evo-consumer-004
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Consumer isolation level: read committed
      isolation-level: READ_COMMITTED

# Log transaction activity in more detail
logging:
  level:
    root: info
    org.springframework.transaction: trace
    org.springframework.kafka.transaction: debug
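
To make explicit what transaction-id-prefix switches on, below is a minimal raw-client sketch of a transactional producer (plain kafka-clients, no Spring; the class name and the fixed transactional id are made up for illustration). Setting transactional.id is what enables transactions; Spring Kafka derives one per producer from the prefix configured above.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RawTransactionalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The raw-client counterpart of transaction-id-prefix: a transactional.id
        // switches the producer into transactional (and idempotent) mode
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "tx-kafka-demo-0");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();  // registers the transactional.id with the TransactionCoordinator
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("TRANSACTION-TOPIC-1", "hello"));
            producer.commitTransaction(); // or abortTransaction() on failure
        }
    }
}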

Notes on the configuration properties:

  • spring.kafka.bootstrap-servers: specifies the list of broker addresses the producer client uses to connect to the Kafka cluster, in the format host1:port1,host2:port2. One or more addresses may be given, separated by commas; the default value is "". Note that this does not need to list every broker, because the producer discovers the other brokers from the ones given. Still, it is advisable to configure at least two broker addresses, so that the producer can continue to reach the cluster if any one of them goes down.

KafkaConfig

@Configuration
@EnableKafka
public class KafkaConfig {

    // NewTopic beans are picked up by Spring's KafkaAdmin, which creates the
    // topics on the broker at startup (see "Creating topic" in the logs below)
    @Bean
    public NewTopic transactionTopic1() {
        return TopicBuilder.name("TRANSACTION-TOPIC-1").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic transactionTopic2() {
        return TopicBuilder.name("TRANSACTION-TOPIC-2").partitions(1).replicas(1).build();
    }
}

@Configuration
public class TransactionManagerConfig {

    // Mark the JPA transaction manager as @Primary so that plain @Transactional
    // methods use it; the Kafka transaction manager must then be selected
    // explicitly via @Transactional(transactionManager = "kafkaTransactionManager")
    @Bean
    @Primary
    public PlatformTransactionManager transactionManager(
        ObjectProvider<TransactionManagerCustomizers> transactionManagerCustomizers) {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManagerCustomizers.ifAvailable((customizers) -> customizers.customize(transactionManager));
        return transactionManager;
    }
}
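
Note that no kafkaTransactionManager bean is declared anywhere in the demo: when spring.kafka.producer.transaction-id-prefix is set, Spring Boot auto-configures one. A sketch of roughly what that auto-configured bean looks like (shown only to make the "kafkaTransactionManager" qualifier used below explicit; the demo itself relies on the auto-configuration):

    // Roughly equivalent to the bean Spring Boot auto-configures when
    // transaction-id-prefix is set; not part of the demo's own code
    @Bean
    public KafkaTransactionManager<String, String> kafkaTransactionManager(
            ProducerFactory<String, String> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }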

Controller

    @GetMapping("/tx-two")
    @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
    public String sendTransactionTwo(@RequestParam("message") String message) {
        log.info("Sending message: {}", message);
        senderService.sendTransactionTwo(message);
        return "send transaction-two success...";
    }
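
With the application running (Tomcat on port 8080, as in the startup log below), the whole consume-transform-produce chain can be triggered by a single request; hello-kafka-tx is an arbitrary example value:

GET http://localhost:8080/tx-two?message=hello-kafka-tx

Note that the controller opens a Kafka transaction explicitly via transactionManager = "kafkaTransactionManager", while the service method it calls uses the @Primary (JPA) transaction manager.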

Service

@Service
@Slf4j
public class SenderService {

    @Autowired
    private KafkaTemplate<String, String> template;
    @Autowired
    private ProcessEventRepository processEventRepository;

    /**
     * Publish the first event.
     *
     * @param message the message
     */
    @Transactional(rollbackFor = Exception.class)
    public void sendTransactionTwo(String message) {

        final Iterable<ProcessEventEntity> all = processEventRepository.findAll();
        log.info("1-Verifying the database transaction; query result: {}", all);

        // Send a transactional message to the first topic
        final ListenableFuture<SendResult<String, String>> result = this.template.send(
                "TRANSACTION-TOPIC-1", message);

        // Callback for the send result
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable ex) {
                log.error("2-事务发送kafka消息异常回调", ex);
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                log.info("2-事务发送kafka消息成功回调: {}", result);
            }
        });
    }

    /**
     * Publish the second event while consuming the first.
     *
     * @param message the message
     */
    @Transactional(rollbackFor = Exception.class)
    public void doEventV1(String message) {
        final Iterable<ProcessEventEntity> all = processEventRepository.findAll();
        log.info("3-Verifying the database transaction; query result: {}", all);

        // From the consumer, send a transactional message to the second topic
        final ListenableFuture<SendResult<String, String>> result = this.template.send(
                "TRANSACTION-TOPIC-2", message);

        // Callback for the send result
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onFailure(Throwable ex) {
                log.error("4-事务发送kafka消息异常回调", ex);
            }

            @Override
            public void onSuccess(SendResult<String, String> result) {
                log.info("4-事务发送kafka消息成功回调: {}", result);
            }
        });

    }
}
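
For comparison, the same send could also run in a purely local Kafka transaction, without @Transactional, via KafkaTemplate.executeInTransaction. A minimal sketch (hypothetical method, not part of the demo):

    // executeInTransaction() begins and commits (or aborts) a producer
    // transaction around the callback, independent of any @Transactional
    // context on the caller
    public void sendInLocalTransaction(String message) {
        template.executeInTransaction(t -> t.send("TRANSACTION-TOPIC-1", message));
    }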

KafkaListener

Consumer for the first event

@Component
@Slf4j
public class TransactionOneEventListener {

    @Autowired
    private SenderService senderService;

    @KafkaListener(topicPartitions = {
        @TopicPartition(topic = "TRANSACTION-TOPIC-1", partitions = "0")
    })
    @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
    public void listenEvent1(String value,
                           @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                           @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                           @Header(KafkaHeaders.OFFSET) long offset) {
        log.info("listenEvent1:接收kafka消息:[{}],from {} @ {}@ {}", value, topic, partition, offset);
        senderService.doEventV1(value);
    }
}

Consumer for the second event

@Component
@Slf4j
public class TransactionTwoEventListener {

    @Autowired
    private SenderService senderService;

    @KafkaListener(topicPartitions = {
        @TopicPartition(topic = "TRANSACTION-TOPIC-2", partitions = "0")
    })
    @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
    public void listenEvent2(String value,
                           @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                           @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                           @Header(KafkaHeaders.OFFSET) long offset) {
        log.info("listenEvent2:接收kafka消息:[{}],from {} @ {}@ {}", value, topic, partition, offset);
    }
}
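
One detail worth noting before reading the logs: topicPartitions on @KafkaListener results in manual partition assignment rather than a group subscription. The raw-client equivalent is consumer.assign(...) instead of consumer.subscribe(...), which is why the client startup log below prints "Subscribed to partition(s)" rather than "Subscribed to topic(s)":

// Raw-client equivalent of @TopicPartition(topic = "TRANSACTION-TOPIC-1", partitions = "0")
consumer.assign(Collections.singletonList(new TopicPartition("TRANSACTION-TOPIC-1", 0)));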

2. Archived logs

2.1. Startup logs

2.1.1. Kafka broker startup log

The key point is that the broker prints its full configuration (KafkaConfig values), which is archived here for reference and comparison during later source-code reading.

Log keywords:

  1. Connecting to zookeeper: connecting to ZooKeeper
  2. Client environment: the system environment properties of the Kafka broker acting as a ZooKeeper client
  3. KafkaConfig values: the Kafka broker configuration
  4. [GroupCoordinator: the consumer group coordinator starting up; every broker in the cluster has one, so it starts automatically at startup
  5. [TransactionCoordinator: the transaction coordinator starting up; every broker in the cluster has one, so it starts automatically at startup

"C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin\java.exe" -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:57668,suspend=y,server=n -Dlog4j.configuration=file:config/log4j.properties -Dkafka.logs.dir=logs -javaagent:C:\Users\Kitman\AppData\Local\JetBrains\IdeaIC2022.1\captureAgent\debugger-agent.jar -Dfile.encoding=UTF-8 -classpath "...太长省略..." kafka.Kafka config/server.properties
Connected to the target VM, address: '127.0.0.1:57668', transport: 'socket'
[2024-03-16 12:47:17,810] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-03-16 12:47:18,778] INFO starting (kafka.server.KafkaServer)
[2024-03-16 12:47:18,794] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2024-03-16 12:47:18,841] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2024-03-16 12:47:34,460] INFO Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,460] INFO Client environment:host.name=DESKTOP-S0UTLJU (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,460] INFO Client environment:java.version=1.8.0_402 (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,461] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,461] INFO Client environment:java.home=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,461] INFO Client environment:java.class.path=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\cat.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\charsets.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\crs-agent.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\dnsns.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\jaccess.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\localedata.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\nashorn.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunec.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\ext\zipfs.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\jce.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\jfr.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\jsse.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\management-agent.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\resources.jar;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\lib\rt.jar;F:\privateBox\kafkaCodeReadProject\kafka\core\out\production\classes;F:\privateBox\kafkaCodeReadProject\kafka\clients\out\production\classes;F:\privateBox\kafkaCodeReadProject\kafka\clients\out\production\resources;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.module\jackson-module-scala_2.12\2.10.0\343a5406ea085a42d14997c1f0ce3ca8af6e74d9\jackson-module-scala_2.12-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.dataformat\jackson-dataformat-csv\2.10.0\fdbc401c60e2343a05b6842b21edf1fc5ec8ec79\jackson-dataformat-csv-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.datatype\jackson-datatype-jdk8\2.10.0\ac9b5e4ec02f243c580113c0c58564d90432afaa\jackson-datatype-jdk8-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.core\jackson-databind\2.10.0\1127c9cf62f2bb3121a3a2a0a1351d251a602117\jackson-databind-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\net.sf.jopt-simple\jopt-simple\5.0.4\4fdac2fbe92dfad86aa6e9301736f6b4342a3f5c\jopt-simple-5.0.4.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.yammer.metrics\metrics-core\2.2.0\f82c035cfa786d3cbec362c38c22a5f5b1bc8724\metrics-core-2.2.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang.modules\scala-collection-compat_2.12\2.1.2\7a96dbe1dc17a92d46a52b6f84a36bdee1936548\scala-collection-compat_2.12-2.1.2.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang.modules\scala-java8-compat_2.12\0.9.0\9525fb6bbf54a9caf0f7e1b65b261215b02fe939\scala-java8-compat_2.12-0.9.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.typesafe.scala-logg
ing\scala-logging_2.12\3.9.2\b1f19bc6774e01debf09bf5f564ad3613687bf49\scala-logging_2.12-3.9.2.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang\scala-reflect\2.12.10\14cb7beb516cd8e07716133668c427792122c926\scala-reflect-2.12.10.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.scala-lang\scala-library\2.12.10\3509860bc2e5b3da001ed45aca94ffbe5694dbda\scala-library-2.12.10.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.apache.zookeeper\zookeeper\3.5.7\12bdf55ba8be7fc891996319d37f35eaad7e63ea\zookeeper-3.5.7.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.slf4j\slf4j-log4j12\1.7.28\9c45c87557628d1c06d770e1382932dc781e3d5d\slf4j-log4j12-1.7.28.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.slf4j\slf4j-api\1.7.28\2cd9b264f76e3d087ee21bfc99305928e1bdb443\slf4j-api-1.7.28.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\commons-cli\commons-cli\1.4\c51c00206bb913cd8612b24abd9fa98ae89719b1\commons-cli-1.4.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\log4j\log4j\1.2.17\5af35056b4d257e4b64b9e8069c0746e8b08629f\log4j-1.2.17.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.github.luben\zstd-jni\1.4.3-1\c3c8278c6a02b332a21fcd2c22434d0bc928126d\zstd-jni-1.4.3-1.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.lz4\lz4-java\1.6.0\b49e2b422a5b7145ba7aa0c3f60c13664a5c5acf\lz4-java-1.6.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.xerial.snappy\snappy-java\1.1.7.3\241bb74a1eb37d68a4e324a4bc3865427de0a62d\snappy-java-1.1.7.3.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.module\jackson-module-paranamer\2.10.0\4fc4ba10b328a53ac5653cee15504621c6b66083\jackson-module-paranamer-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.core\jackson-annotations\2.10.0\e01cfd93b80d6773b3f757c78e756c9755b47b81\jackson-annotations-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.fasterxml.jackson.core\jackson-core\2.10.0\4e2c5fa04648ec9772c63e2101c53af6504e624e\jackson-core-2.10.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.apache.zookeeper\zookeeper-jute\3.5.7\1270f80b08904499a6839a2ee1800da687ad96b4\zookeeper-jute-3.5.7.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\org.apache.yetus\audience-annotations\0.5.0\55762d3191a8d6610ef46d11e8cb70c7667342a3\audience-annotations-0.5.0.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-handler\4.1.45.Final\51071ba9977cce64e3a58e6f2f6326bbb7e5bc7f\netty-handler-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-transport-native-epoll\4.1.45.Final\cf153257db449b6a74adb64fbd2903542af55892\netty-transport-native-epoll-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\com.thoughtworks.paranamer\paranamer\2.8\619eba74c19ccf1da8ebec97a2d7f8ba05773dd6\paranamer-2.8.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-codec\4.1.45.Final\8c768728a3e82c3cef62a7a2c8f52ae8d777bac9\netty-codec-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-transport\4.1.45.Final\b7d8f2645e330bd66cd4f28f155e
ba605e0c8758\netty-transport-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-buffer\4.1.45.Final\bac54338074540c4f3241a3d92358fad5df89ba\netty-buffer-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-common\4.1.45.Final\5cf5e448d44ddf53d00f2fc4047c2a7aceaa7087\netty-common-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-transport-native-unix-common\4.1.45.Final\49f9fa4b7fe7d3e562666d050049541b86822549\netty-transport-native-unix-common-4.1.45.Final.jar;F:\software\gradle-8.6-bin\gradle-8.6\repo\caches\modules-2\files-2.1\io.netty\netty-resolver\4.1.45.Final\9e77bdc045d33a570dabf9d53192ea954bb195d7\netty-resolver-4.1.45.Final.jar;D:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2021.3.2\lib\idea_rt.jar;C:\Users\Kitman\AppData\Local\JetBrains\IdeaIC2022.1\captureAgent\debugger-agent.jar (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:java.library.path=C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin\;C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\jre\bin\;F:\software\apache-maven-3.6.3\bin\;F:\software\gradle-8.6-bin\gradle-8.6\bin;C:\Program Files (x86)\VMware\VMware Workstation\bin\;C:\Program Files\Python310\Scripts\;C:\Program Files\Python310\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;D:\Program Files (x86)\NetSarang\Xftp 7\;D:\Program Files (x86)\NetSarang\Xshell 7\;C:\Android;C:\Windows\System32;C:\Program Files\dotnet\;F:\software\apache-ant-1.10.14-bin\apache-ant-1.10.14\bin\;F:\software\scala-2.12.10\bin\;D:\Program Files\Git\cmd;C:\Users\Kitman\AppData\Local\Microsoft\WindowsApps;;C:\Users\Kitman\AppData\Local\Programs\Microsoft VS Code\bin;. (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:java.io.tmpdir=C:\Users\Kitman\AppData\Local\Temp\ (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:os.name=Windows 10 (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:os.version=10.0 (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:user.name=Kitman (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:user.home=C:\Users\Kitman (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:user.dir=F:\privateBox\kafkaCodeReadProject\kafka (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:os.memory.free=188MB (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:os.memory.max=3614MB (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,462] INFO Client environment:os.memory.total=245MB (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,470] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@62e136d3 (org.apache.zookeeper.ZooKeeper)
[2024-03-16 12:47:34,480] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-03-16 12:47:34,848] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2024-03-16 12:47:34,857] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2024-03-16 12:47:34,861] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2024-03-16 12:47:34,867] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2024-03-16 12:47:34,869] INFO Socket connection established, initiating session, client: /127.0.0.1:57681, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2024-03-16 12:47:35,218] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000c374eef0000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2024-03-16 12:47:35,225] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2024-03-16 12:47:37,029] INFO Cluster ID = -783ZTK6RamKKevY_g15Mw (kafka.server.KafkaServer)
[2024-03-16 12:47:37,035] WARN No meta.properties file under dir F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2024-03-16 12:47:37,156] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = null
	inter.broker.protocol.version = 2.4-IV1
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.4-IV1
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 6000
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2024-03-16 12:47:37,174] INFO KafkaConfig values: 
	...(the broker prints the same KafkaConfig values a second time here; identical to the block above, omitted for length)...
 (kafka.server.KafkaConfig)
[2024-03-16 12:47:37,248] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-16 12:47:37,249] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-16 12:47:37,253] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-03-16 12:47:37,333] INFO Loading logs. (kafka.log.LogManager)
[2024-03-16 12:47:37,364] INFO Logs loading complete in 31 ms. (kafka.log.LogManager)
[2024-03-16 12:47:37,395] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2024-03-16 12:47:37,402] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2024-03-16 12:47:38,324] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2024-03-16 12:47:38,387] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2024-03-16 12:47:38,389] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2024-03-16 12:47:38,453] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:38,453] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:38,453] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:38,453] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:38,504] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2024-03-16 12:47:54,161] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2024-03-16 12:47:54,694] INFO Stat of the created znode at /brokers/ids/0 is: 24,24,1710564474482,1710564474482,1,0,0,72071025724948480,200,0,24
 (kafka.zk.KafkaZkClient)
[2024-03-16 12:47:54,698] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(DESKTOP-S0UTLJU,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 24 (kafka.zk.KafkaZkClient)
[2024-03-16 12:47:55,217] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:55,357] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:55,357] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:55,404] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2024-03-16 12:47:55,404] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2024-03-16 12:47:55,420] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 16 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2024-03-16 12:47:55,560] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2024-03-16 12:47:55,592] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2024-03-16 12:47:55,685] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-16 12:47:55,685] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-16 12:47:55,716] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2024-03-16 12:47:55,795] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-03-16 12:47:55,841] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2024-03-16 12:47:55,857] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2024-03-16 12:47:55,857] WARN Error while loading kafka-version.properties: null (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-16 12:47:55,857] INFO Kafka version: unknown (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-16 12:47:55,857] INFO Kafka commitId: unknown (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-16 12:47:55,857] INFO Kafka startTimeMs: 1710564475857 (org.apache.kafka.common.utils.AppInfoParser)
[2024-03-16 12:47:55,857] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

2.1.2. Client startup log

When the client starts, it automatically connects to Kafka and registers (initializes) its topics, partitions, consumer group, and other configuration.

Log keywords:

  1. Adding transactional: a transaction handler is attached, via dynamic proxies, to each method annotated with @Transactional
  2. AdminClientConfig values: the admin client configuration. The key parameter is:
    • bootstrap.servers = [localhost:9092]: the list of broker addresses the client uses to connect to the Kafka cluster.
  3. ConsumerConfig values: the consumer configurations. There are two, presumably corresponding to the two @KafkaListener annotations; the key difference between them is client.id
    1. First consumer
      • bootstrap.servers = [localhost:9092]: the list of broker addresses the consumer client uses to connect to the Kafka cluster.
      • group.id = spring-kafka-evo-consumer-004: the name of the consumer group this consumer belongs to; the default is "", and an empty value raises an exception. In general this should be given a business-meaningful name.
      • client.id = consumer-spring-kafka-evo-consumer-004-1: client id 1
    2. Second consumer
      • bootstrap.servers = [localhost:9092]: the list of broker addresses the consumer client uses to connect to the Kafka cluster.
      • group.id = spring-kafka-evo-consumer-004: the name of the consumer group this consumer belongs to; the default is "", and an empty value raises an exception. In general this should be given a business-meaningful name.
      • client.id = consumer-spring-kafka-evo-consumer-004-2: client id 2
  4. Subscribed to partition: which consumer client was assigned which topic-partition
    • consumer consumer-spring-kafka-evo-consumer-004-1 is assigned topic-partition TRANSACTION-TOPIC-1-0
    • consumer consumer-spring-kafka-evo-consumer-004-2 is assigned topic-partition TRANSACTION-TOPIC-2-0
  5. Discovered group coordinator: the group coordinator was discovered; at this point the client issues a FindCoordinatorRequest
  6. Cluster ID: the cluster ID, a unique and immutable identifier of the Kafka cluster
  7. Found no committed offset for partition: on first startup, the consumer has no committed offset recorded on the Kafka broker (see the probe sketch after this list)
  8. Resetting offset for partition: on first startup, the consumer's offset for the partition is reset to 0
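
Keywords 7 and 8 can be reproduced against the same broker with a small probe; a minimal sketch (plain kafka-clients; the class and group names are hypothetical), assuming the demo topics already exist:

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommittedOffsetProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "probe-group"); // fresh group => no committed offsets
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        TopicPartition tp = new TopicPartition("TRANSACTION-TOPIC-1", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp)); // manual assignment, like @TopicPartition
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(Collections.singleton(tp));
            // Prints "null" for a group that has never committed ("Found no committed offset")
            System.out.println("committed = " + committed.get(tp));
            // Resolving the position triggers the auto.offset.reset logic
            // ("Resetting offset for partition ...")
            consumer.poll(Duration.ofSeconds(1));
            System.out.println("position = " + consumer.position(tp));
        }
    }
}
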
"C:\Program Files\Java\zulu8.76.0.17-ca-jdk8.0.402-win_x64\bin\java.exe" -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:57774,suspend=y,server=n -javaagent:C:\Users\Kitman\AppData\Local\JetBrains\IdeaIC2022.1\captureAgent\debugger-agent.jar -Dfile.encoding=UTF-8 -classpath "...太长省略..." com.kitman.kafka.simple.demo.MyApplication
Connected to the target VM, address: '127.0.0.1:57774', transport: 'socket'

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::               (v2.7.18)

2024-03-16 12:52:25.952  INFO 5276 --- [           main] c.k.kafka.simple.demo.MyApplication      : Starting MyApplication using Java 1.8.0_402 on DESKTOP-S0UTLJU with PID 5276 (F:\privateBox\kafkaCodeReadProject\kafka-client-simple-demo\target\classes started by Kitman in F:\privateBox\kafkaCodeReadProject\kafka-client-simple-demo)
2024-03-16 12:52:25.959  INFO 5276 --- [           main] c.k.kafka.simple.demo.MyApplication      : No active profile set, falling back to 1 default profile: "default"
2024-03-16 12:52:26.999  INFO 5276 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2024-03-16 12:52:27.075  INFO 5276 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 62 ms. Found 1 JPA repository interfaces.
2024-03-16 12:52:28.262  INFO 5276 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2024-03-16 12:52:28.279  INFO 5276 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2024-03-16 12:52:28.279  INFO 5276 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.83]
2024-03-16 12:52:28.420  INFO 5276 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2024-03-16 12:52:28.420  INFO 5276 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2360 ms
2024-03-16 12:52:28.701  INFO 5276 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2024-03-16 12:52:29.072  INFO 5276 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.
2024-03-16 12:52:29.152  INFO 5276 --- [           main] o.hibernate.jpa.internal.util.LogHelper  : HHH000204: Processing PersistenceUnitInfo [name: default]
2024-03-16 12:52:29.234  INFO 5276 --- [           main] org.hibernate.Version                    : HHH000412: Hibernate ORM core version 5.6.15.Final
2024-03-16 12:52:29.466  INFO 5276 --- [           main] o.hibernate.annotations.common.Version   : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
2024-03-16 12:52:29.659  INFO 5276 --- [           main] org.hibernate.dialect.Dialect            : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
2024-03-16 12:52:30.384  INFO 5276 --- [           main] o.h.e.t.j.p.i.JtaPlatformInitiator       : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2024-03-16 12:52:30.395  INFO 5276 --- [           main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2024-03-16 12:52:31.203 TRACE 5276 --- [           main] t.a.AnnotationTransactionAttributeSource : Adding transactional method 'com.kitman.kafka.simple.demo.service.SenderService.sendTransactionTwo' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,-java.lang.Exception
2024-03-16 12:52:31.213 TRACE 5276 --- [           main] t.a.AnnotationTransactionAttributeSource : Adding transactional method 'com.kitman.kafka.simple.demo.service.SenderService.doEventV2' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,-java.lang.Exception
2024-03-16 12:52:31.213 TRACE 5276 --- [           main] t.a.AnnotationTransactionAttributeSource : Adding transactional method 'com.kitman.kafka.simple.demo.service.SenderService.doEventV1' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,-java.lang.Exception
2024-03-16 12:52:31.226 TRACE 5276 --- [           main] t.a.AnnotationTransactionAttributeSource : Adding transactional method 'com.kitman.kafka.simple.demo.consumer.TransactionOneEventListener.listenEvent1' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'kafkaTransactionManager',-java.lang.Exception
2024-03-16 12:52:31.248 TRACE 5276 --- [           main] t.a.AnnotationTransactionAttributeSource : Adding transactional method 'com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo' with attribute: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'kafkaTransactionManager',-java.lang.Exception
2024-03-16 12:52:31.309  WARN 5276 --- [           main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2024-03-16 12:52:31.524  INFO 5276 --- [           main] o.s.b.a.w.s.WelcomePageHandlerMapping    : Adding welcome page template: index
2024-03-16 12:52:31.923  INFO 5276 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
	bootstrap.servers = [localhost:9092]
	client.dns.lookup = use_all_dns_ips
	client.id = 
	connections.max.idle.ms = 300000
	default.api.timeout.ms = 60000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS

2024-03-16 12:52:32.075  WARN 5276 --- [           main] o.a.k.clients.admin.AdminClientConfig    : The configuration 'max.poll.interval.ms' was supplied but isn't a known config.
2024-03-16 12:52:32.077  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.1.2
2024-03-16 12:52:32.077  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: f8c67dc3ae0a3265
2024-03-16 12:52:32.077  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1710564752075
2024-03-16 12:52:33.402  INFO 5276 --- [| adminclient-1] o.a.kafka.common.utils.AppInfoParser     : App info kafka.admin.client for adminclient-1 unregistered
2024-03-16 12:52:33.409  INFO 5276 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Metrics scheduler closed
2024-03-16 12:52:33.409  INFO 5276 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2024-03-16 12:52:33.409  INFO 5276 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Metrics reporters closed
2024-03-16 12:52:33.452  INFO 5276 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
	allow.auto.create.topics = true
	auto.commit.interval.ms = 5000
	auto.offset.reset = latest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.dns.lookup = use_all_dns_ips
	client.id = consumer-spring-kafka-evo-consumer-004-1
	client.rack = 
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = spring-kafka-evo-consumer-004
	group.instance.id = null
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	internal.throw.on.fetch.stable.offset.unsupported = false
	isolation.level = read_committed
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 10000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	session.timeout.ms = 45000
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2024-03-16 12:52:33.504  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.1.2
2024-03-16 12:52:33.504  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: f8c67dc3ae0a3265
2024-03-16 12:52:33.505  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1710564753504
2024-03-16 12:52:33.507  INFO 5276 --- [           main] o.a.k.clients.consumer.KafkaConsumer     : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-1, groupId=spring-kafka-evo-consumer-004] Subscribed to partition(s): TRANSACTION-TOPIC-1-0
2024-03-16 12:52:33.517  INFO 5276 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
	allow.auto.create.topics = true
	auto.commit.interval.ms = 5000
	auto.offset.reset = latest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.dns.lookup = use_all_dns_ips
	client.id = consumer-spring-kafka-evo-consumer-004-2
	client.rack = 
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = spring-kafka-evo-consumer-004
	group.instance.id = null
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	internal.throw.on.fetch.stable.offset.unsupported = false
	isolation.level = read_committed
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 10000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	session.timeout.ms = 45000
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

2024-03-16 12:52:33.528  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.1.2
2024-03-16 12:52:33.528  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: f8c67dc3ae0a3265
2024-03-16 12:52:33.528  INFO 5276 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1710564753528
2024-03-16 12:52:33.528  INFO 5276 --- [           main] o.a.k.clients.consumer.KafkaConsumer     : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-2, groupId=spring-kafka-evo-consumer-004] Subscribed to partition(s): TRANSACTION-TOPIC-2-0
2024-03-16 12:52:33.552  INFO 5276 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2024-03-16 12:52:33.560  INFO 5276 --- [ntainer#0-0-C-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-1, groupId=spring-kafka-evo-consumer-004] Cluster ID: -783ZTK6RamKKevY_g15Mw
2024-03-16 12:52:33.560  INFO 5276 --- [ntainer#1-0-C-1] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-2, groupId=spring-kafka-evo-consumer-004] Cluster ID: -783ZTK6RamKKevY_g15Mw
2024-03-16 12:52:33.567  INFO 5276 --- [           main] c.k.kafka.simple.demo.MyApplication      : Started MyApplication in 8.63 seconds (JVM running for 9.409)
2024-03-16 12:52:37.755  INFO 5276 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-1, groupId=spring-kafka-evo-consumer-004] Discovered group coordinator DESKTOP-S0UTLJU:9092 (id: 2147483647 rack: null)
2024-03-16 12:52:37.769  INFO 5276 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-1, groupId=spring-kafka-evo-consumer-004] Found no committed offset for partition TRANSACTION-TOPIC-1-0
2024-03-16 12:52:37.785  INFO 5276 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-2, groupId=spring-kafka-evo-consumer-004] Discovered group coordinator DESKTOP-S0UTLJU:9092 (id: 2147483647 rack: null)
2024-03-16 12:52:37.785  INFO 5276 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-1, groupId=spring-kafka-evo-consumer-004] Resetting offset for partition TRANSACTION-TOPIC-1-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[DESKTOP-S0UTLJU:9092 (id: 0 rack: null)], epoch=0}}.
2024-03-16 12:52:37.785  INFO 5276 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-2, groupId=spring-kafka-evo-consumer-004] Found no committed offset for partition TRANSACTION-TOPIC-2-0
2024-03-16 12:52:37.785  INFO 5276 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.SubscriptionState    : [Consumer clientId=consumer-spring-kafka-evo-consumer-004-2, groupId=spring-kafka-evo-consumer-004] Resetting offset for partition TRANSACTION-TOPIC-2-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[DESKTOP-S0UTLJU:9092 (id: 0 rack: null)], epoch=0}}.

2.1.3. Kafka broker logs at client startup

Log keywords:

  1. Creating topic: a topic and its partitions are created, covering both user-defined topics and Kafka internal topics
  2. starts at leader epoch: the leader node for a partition's replicas is established
  3. Created log for partition: the partition log is created; the physical directory reported in the log entry (~\tmp\kafka-logs) can be opened to inspect the created partition log files, and the kafka.tools.DumpLogSegments tool can decode these physical files to view their contents (a sample invocation follows this list)
  4. loading of offsets and group metadata from: consumer offsets and consumer group metadata are loaded from the internal topic partitions whose names start with __consumer_offsets
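
To illustrate keyword 3, here is a minimal sketch of decoding the first segment of TRANSACTION-TOPIC-1-0 with kafka.tools.DumpLogSegments. It assumes a Windows Kafka distribution whose scripts live under bin\windows, and it reuses the log directory reported by the broker logs below; 00000000000000000000.log is the conventional name of a partition's first segment file, so adjust both paths to your own environment:

rem Decode the first log segment of partition TRANSACTION-TOPIC-1-0 and print the record payloads.
rem The file path is taken from the broker log lines below; adjust it to your own log.dirs value.
bin\windows\kafka-run-class.bat kafka.tools.DumpLogSegments ^
  --files F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\TRANSACTION-TOPIC-1-0\00000000000000000000.log ^
  --print-data-log

With --print-data-log, each record's offset, timestamp, and payload are printed alongside the batch metadata, which also makes transaction control records (commit/abort markers) visible when inspecting transactional topics.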
[2024-03-16 12:52:32,601] INFO Creating topic TRANSACTION-TOPIC-2 with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2024-03-16 12:52:32,831] INFO Creating topic TRANSACTION-TOPIC-1 with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2024-03-16 12:52:33,068] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(TRANSACTION-TOPIC-2-0) (kafka.server.ReplicaFetcherManager)
[2024-03-16 12:52:33,175] INFO [Log partition=TRANSACTION-TOPIC-2-0, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:33,183] INFO [Log partition=TRANSACTION-TOPIC-2-0, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 64 ms (kafka.log.Log)
[2024-03-16 12:52:33,185] INFO Created log for partition TRANSACTION-TOPIC-2-0 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\TRANSACTION-TOPIC-2-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:33,186] INFO [Partition TRANSACTION-TOPIC-2-0 broker=0] No checkpointed highwatermark is found for partition TRANSACTION-TOPIC-2-0 (kafka.cluster.Partition)
[2024-03-16 12:52:33,187] INFO [Partition TRANSACTION-TOPIC-2-0 broker=0] Log loaded for partition TRANSACTION-TOPIC-2-0 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:33,187] INFO [Partition TRANSACTION-TOPIC-2-0 broker=0] TRANSACTION-TOPIC-2-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:33,279] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(TRANSACTION-TOPIC-1-0) (kafka.server.ReplicaFetcherManager)
[2024-03-16 12:52:33,294] INFO [Log partition=TRANSACTION-TOPIC-1-0, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:33,295] INFO [Log partition=TRANSACTION-TOPIC-1-0, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 13 ms (kafka.log.Log)
[2024-03-16 12:52:33,296] INFO Created log for partition TRANSACTION-TOPIC-1-0 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\TRANSACTION-TOPIC-1-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:33,320] INFO [Partition TRANSACTION-TOPIC-1-0 broker=0] No checkpointed highwatermark is found for partition TRANSACTION-TOPIC-1-0 (kafka.cluster.Partition)
[2024-03-16 12:52:33,320] INFO [Partition TRANSACTION-TOPIC-1-0 broker=0] Log loaded for partition TRANSACTION-TOPIC-1-0 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:33,320] INFO [Partition TRANSACTION-TOPIC-1-0 broker=0] TRANSACTION-TOPIC-1-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:33,574] INFO Creating topic __consumer_offsets with configuration {segment.bytes=104857600, compression.type=producer, cleanup.policy=compact} and initial partition assignment Map(23 -> ArrayBuffer(0), 32 -> ArrayBuffer(0), 41 -> ArrayBuffer(0), 17 -> ArrayBuffer(0), 8 -> ArrayBuffer(0), 35 -> ArrayBuffer(0), 44 -> ArrayBuffer(0), 26 -> ArrayBuffer(0), 11 -> ArrayBuffer(0), 29 -> ArrayBuffer(0), 38 -> ArrayBuffer(0), 47 -> ArrayBuffer(0), 20 -> ArrayBuffer(0), 2 -> ArrayBuffer(0), 5 -> ArrayBuffer(0), 14 -> ArrayBuffer(0), 46 -> ArrayBuffer(0), 49 -> ArrayBuffer(0), 40 -> ArrayBuffer(0), 13 -> ArrayBuffer(0), 4 -> ArrayBuffer(0), 22 -> ArrayBuffer(0), 31 -> ArrayBuffer(0), 16 -> ArrayBuffer(0), 7 -> ArrayBuffer(0), 43 -> ArrayBuffer(0), 25 -> ArrayBuffer(0), 34 -> ArrayBuffer(0), 10 -> ArrayBuffer(0), 37 -> ArrayBuffer(0), 1 -> ArrayBuffer(0), 19 -> ArrayBuffer(0), 28 -> ArrayBuffer(0), 45 -> ArrayBuffer(0), 27 -> ArrayBuffer(0), 36 -> ArrayBuffer(0), 18 -> ArrayBuffer(0), 9 -> ArrayBuffer(0), 21 -> ArrayBuffer(0), 48 -> ArrayBuffer(0), 3 -> ArrayBuffer(0), 12 -> ArrayBuffer(0), 30 -> ArrayBuffer(0), 39 -> ArrayBuffer(0), 15 -> ArrayBuffer(0), 42 -> ArrayBuffer(0), 24 -> ArrayBuffer(0), 6 -> ArrayBuffer(0), 33 -> ArrayBuffer(0), 0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2024-03-16 12:52:33,678] INFO [KafkaApi-0] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2024-03-16 12:52:34,430] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-13, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
[2024-03-16 12:52:34,437] INFO [Log partition=__consumer_offsets-0, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,438] INFO [Log partition=__consumer_offsets-0, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:34,438] INFO Created log for partition __consumer_offsets-0 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,439] INFO [Partition __consumer_offsets-0 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,439] INFO [Partition __consumer_offsets-0 broker=0] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,439] INFO [Partition __consumer_offsets-0 broker=0] __consumer_offsets-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,503] INFO [Log partition=__consumer_offsets-29, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,504] INFO [Log partition=__consumer_offsets-29, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:34,504] INFO Created log for partition __consumer_offsets-29 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-29 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,504] INFO [Partition __consumer_offsets-29 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
[2024-03-16 12:52:34,505] INFO [Partition __consumer_offsets-29 broker=0] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,505] INFO [Partition __consumer_offsets-29 broker=0] __consumer_offsets-29 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,563] INFO [Log partition=__consumer_offsets-48, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,564] INFO [Log partition=__consumer_offsets-48, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:34,564] INFO Created log for partition __consumer_offsets-48 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-48 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,564] INFO [Partition __consumer_offsets-48 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
[2024-03-16 12:52:34,564] INFO [Partition __consumer_offsets-48 broker=0] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,565] INFO [Partition __consumer_offsets-48 broker=0] __consumer_offsets-48 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,623] INFO [Log partition=__consumer_offsets-10, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,623] INFO [Log partition=__consumer_offsets-10, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:34,624] INFO Created log for partition __consumer_offsets-10 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-10 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,624] INFO [Partition __consumer_offsets-10 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
[2024-03-16 12:52:34,624] INFO [Partition __consumer_offsets-10 broker=0] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,624] INFO [Partition __consumer_offsets-10 broker=0] __consumer_offsets-10 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,684] INFO [Log partition=__consumer_offsets-45, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,685] INFO [Log partition=__consumer_offsets-45, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:34,685] INFO Created log for partition __consumer_offsets-45 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-45 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,686] INFO [Partition __consumer_offsets-45 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
[2024-03-16 12:52:34,686] INFO [Partition __consumer_offsets-45 broker=0] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,686] INFO [Partition __consumer_offsets-45 broker=0] __consumer_offsets-45 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,747] INFO [Log partition=__consumer_offsets-26, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,747] INFO [Log partition=__consumer_offsets-26, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:34,748] INFO Created log for partition __consumer_offsets-26 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-26 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,748] INFO [Partition __consumer_offsets-26 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
[2024-03-16 12:52:34,748] INFO [Partition __consumer_offsets-26 broker=0] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,748] INFO [Partition __consumer_offsets-26 broker=0] __consumer_offsets-26 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,805] INFO [Log partition=__consumer_offsets-7, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,806] INFO [Log partition=__consumer_offsets-7, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:34,806] INFO Created log for partition __consumer_offsets-7 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-7 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,806] INFO [Partition __consumer_offsets-7 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
[2024-03-16 12:52:34,806] INFO [Partition __consumer_offsets-7 broker=0] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,806] INFO [Partition __consumer_offsets-7 broker=0] __consumer_offsets-7 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,866] INFO [Log partition=__consumer_offsets-42, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,867] INFO [Log partition=__consumer_offsets-42, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:34,868] INFO Created log for partition __consumer_offsets-42 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-42 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,868] INFO [Partition __consumer_offsets-42 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
[2024-03-16 12:52:34,868] INFO [Partition __consumer_offsets-42 broker=0] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,868] INFO [Partition __consumer_offsets-42 broker=0] __consumer_offsets-42 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,925] INFO [Log partition=__consumer_offsets-4, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,926] INFO [Log partition=__consumer_offsets-4, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:34,926] INFO Created log for partition __consumer_offsets-4 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-4 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,926] INFO [Partition __consumer_offsets-4 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
[2024-03-16 12:52:34,927] INFO [Partition __consumer_offsets-4 broker=0] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,927] INFO [Partition __consumer_offsets-4 broker=0] __consumer_offsets-4 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:34,987] INFO [Log partition=__consumer_offsets-23, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:34,987] INFO [Log partition=__consumer_offsets-23, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:34,988] INFO Created log for partition __consumer_offsets-23 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-23 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:34,988] INFO [Partition __consumer_offsets-23 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
[2024-03-16 12:52:34,989] INFO [Partition __consumer_offsets-23 broker=0] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:34,989] INFO [Partition __consumer_offsets-23 broker=0] __consumer_offsets-23 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,048] INFO [Log partition=__consumer_offsets-1, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,049] INFO [Log partition=__consumer_offsets-1, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:35,050] INFO Created log for partition __consumer_offsets-1 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-1 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,050] INFO [Partition __consumer_offsets-1 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
[2024-03-16 12:52:35,051] INFO [Partition __consumer_offsets-1 broker=0] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,051] INFO [Partition __consumer_offsets-1 broker=0] __consumer_offsets-1 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,106] INFO [Log partition=__consumer_offsets-20, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,107] INFO [Log partition=__consumer_offsets-20, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:35,108] INFO Created log for partition __consumer_offsets-20 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-20 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,108] INFO [Partition __consumer_offsets-20 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
[2024-03-16 12:52:35,108] INFO [Partition __consumer_offsets-20 broker=0] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,108] INFO [Partition __consumer_offsets-20 broker=0] __consumer_offsets-20 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,168] INFO [Log partition=__consumer_offsets-39, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,169] INFO [Log partition=__consumer_offsets-39, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:35,169] INFO Created log for partition __consumer_offsets-39 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-39 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,169] INFO [Partition __consumer_offsets-39 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
[2024-03-16 12:52:35,169] INFO [Partition __consumer_offsets-39 broker=0] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,169] INFO [Partition __consumer_offsets-39 broker=0] __consumer_offsets-39 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,233] INFO [Log partition=__consumer_offsets-17, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,235] INFO [Log partition=__consumer_offsets-17, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
[2024-03-16 12:52:35,236] INFO Created log for partition __consumer_offsets-17 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-17 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,236] INFO [Partition __consumer_offsets-17 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
[2024-03-16 12:52:35,236] INFO [Partition __consumer_offsets-17 broker=0] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,237] INFO [Partition __consumer_offsets-17 broker=0] __consumer_offsets-17 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,303] INFO [Log partition=__consumer_offsets-36, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,305] INFO [Log partition=__consumer_offsets-36, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
[2024-03-16 12:52:35,306] INFO Created log for partition __consumer_offsets-36 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-36 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,306] INFO [Partition __consumer_offsets-36 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
[2024-03-16 12:52:35,306] INFO [Partition __consumer_offsets-36 broker=0] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,306] INFO [Partition __consumer_offsets-36 broker=0] __consumer_offsets-36 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,371] INFO [Log partition=__consumer_offsets-14, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,372] INFO [Log partition=__consumer_offsets-14, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:35,373] INFO Created log for partition __consumer_offsets-14 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-14 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,373] INFO [Partition __consumer_offsets-14 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
[2024-03-16 12:52:35,373] INFO [Partition __consumer_offsets-14 broker=0] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,373] INFO [Partition __consumer_offsets-14 broker=0] __consumer_offsets-14 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,443] INFO [Log partition=__consumer_offsets-33, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,444] INFO [Log partition=__consumer_offsets-33, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:35,444] INFO Created log for partition __consumer_offsets-33 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-33 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,445] INFO [Partition __consumer_offsets-33 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
[2024-03-16 12:52:35,445] INFO [Partition __consumer_offsets-33 broker=0] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,445] INFO [Partition __consumer_offsets-33 broker=0] __consumer_offsets-33 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,586] INFO [Log partition=__consumer_offsets-49, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,587] INFO [Log partition=__consumer_offsets-49, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
[2024-03-16 12:52:35,589] INFO Created log for partition __consumer_offsets-49 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-49 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,589] INFO [Partition __consumer_offsets-49 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
[2024-03-16 12:52:35,589] INFO [Partition __consumer_offsets-49 broker=0] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,589] INFO [Partition __consumer_offsets-49 broker=0] __consumer_offsets-49 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,658] INFO [Log partition=__consumer_offsets-11, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,659] INFO [Log partition=__consumer_offsets-11, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2024-03-16 12:52:35,661] INFO Created log for partition __consumer_offsets-11 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-11 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,661] INFO [Partition __consumer_offsets-11 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
[2024-03-16 12:52:35,661] INFO [Partition __consumer_offsets-11 broker=0] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,661] INFO [Partition __consumer_offsets-11 broker=0] __consumer_offsets-11 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,774] INFO [Log partition=__consumer_offsets-30, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,776] INFO [Log partition=__consumer_offsets-30, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 52 ms (kafka.log.Log)
[2024-03-16 12:52:35,777] INFO Created log for partition __consumer_offsets-30 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-30 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,777] INFO [Partition __consumer_offsets-30 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
[2024-03-16 12:52:35,777] INFO [Partition __consumer_offsets-30 broker=0] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,778] INFO [Partition __consumer_offsets-30 broker=0] __consumer_offsets-30 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,843] INFO [Log partition=__consumer_offsets-46, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,844] INFO [Log partition=__consumer_offsets-46, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:35,845] INFO Created log for partition __consumer_offsets-46 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-46 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,845] INFO [Partition __consumer_offsets-46 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
[2024-03-16 12:52:35,845] INFO [Partition __consumer_offsets-46 broker=0] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,845] INFO [Partition __consumer_offsets-46 broker=0] __consumer_offsets-46 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,906] INFO [Log partition=__consumer_offsets-27, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,908] INFO [Log partition=__consumer_offsets-27, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2024-03-16 12:52:35,909] INFO Created log for partition __consumer_offsets-27 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-27 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,909] INFO [Partition __consumer_offsets-27 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
[2024-03-16 12:52:35,909] INFO [Partition __consumer_offsets-27 broker=0] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,909] INFO [Partition __consumer_offsets-27 broker=0] __consumer_offsets-27 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:35,971] INFO [Log partition=__consumer_offsets-8, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:35,972] INFO [Log partition=__consumer_offsets-8, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
[2024-03-16 12:52:35,974] INFO Created log for partition __consumer_offsets-8 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-8 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:35,974] INFO [Partition __consumer_offsets-8 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
[2024-03-16 12:52:35,974] INFO [Partition __consumer_offsets-8 broker=0] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:35,974] INFO [Partition __consumer_offsets-8 broker=0] __consumer_offsets-8 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,046] INFO [Log partition=__consumer_offsets-24, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,046] INFO [Log partition=__consumer_offsets-24, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:36,047] INFO Created log for partition __consumer_offsets-24 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-24 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,047] INFO [Partition __consumer_offsets-24 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
[2024-03-16 12:52:36,047] INFO [Partition __consumer_offsets-24 broker=0] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,047] INFO [Partition __consumer_offsets-24 broker=0] __consumer_offsets-24 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,109] INFO [Log partition=__consumer_offsets-43, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,109] INFO [Log partition=__consumer_offsets-43, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:36,110] INFO Created log for partition __consumer_offsets-43 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-43 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,110] INFO [Partition __consumer_offsets-43 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
[2024-03-16 12:52:36,110] INFO [Partition __consumer_offsets-43 broker=0] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,110] INFO [Partition __consumer_offsets-43 broker=0] __consumer_offsets-43 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,168] INFO [Log partition=__consumer_offsets-5, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,169] INFO [Log partition=__consumer_offsets-5, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:36,169] INFO Created log for partition __consumer_offsets-5 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-5 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,169] INFO [Partition __consumer_offsets-5 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
[2024-03-16 12:52:36,170] INFO [Partition __consumer_offsets-5 broker=0] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,170] INFO [Partition __consumer_offsets-5 broker=0] __consumer_offsets-5 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,230] INFO [Log partition=__consumer_offsets-21, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,232] INFO [Log partition=__consumer_offsets-21, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:36,233] INFO Created log for partition __consumer_offsets-21 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-21 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,233] INFO [Partition __consumer_offsets-21 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
[2024-03-16 12:52:36,233] INFO [Partition __consumer_offsets-21 broker=0] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,233] INFO [Partition __consumer_offsets-21 broker=0] __consumer_offsets-21 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,287] INFO [Log partition=__consumer_offsets-40, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,288] INFO [Log partition=__consumer_offsets-40, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:36,289] INFO Created log for partition __consumer_offsets-40 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-40 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,289] INFO [Partition __consumer_offsets-40 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
[2024-03-16 12:52:36,289] INFO [Partition __consumer_offsets-40 broker=0] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,289] INFO [Partition __consumer_offsets-40 broker=0] __consumer_offsets-40 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,352] INFO [Log partition=__consumer_offsets-2, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,353] INFO [Log partition=__consumer_offsets-2, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2024-03-16 12:52:36,354] INFO Created log for partition __consumer_offsets-2 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,354] INFO [Partition __consumer_offsets-2 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
[2024-03-16 12:52:36,354] INFO [Partition __consumer_offsets-2 broker=0] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,354] INFO [Partition __consumer_offsets-2 broker=0] __consumer_offsets-2 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,409] INFO [Log partition=__consumer_offsets-37, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,411] INFO [Log partition=__consumer_offsets-37, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:36,412] INFO Created log for partition __consumer_offsets-37 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-37 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,412] INFO [Partition __consumer_offsets-37 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
[2024-03-16 12:52:36,412] INFO [Partition __consumer_offsets-37 broker=0] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,412] INFO [Partition __consumer_offsets-37 broker=0] __consumer_offsets-37 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,473] INFO [Log partition=__consumer_offsets-18, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,474] INFO [Log partition=__consumer_offsets-18, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:36,475] INFO Created log for partition __consumer_offsets-18 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-18 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,475] INFO [Partition __consumer_offsets-18 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
[2024-03-16 12:52:36,475] INFO [Partition __consumer_offsets-18 broker=0] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,475] INFO [Partition __consumer_offsets-18 broker=0] __consumer_offsets-18 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,532] INFO [Log partition=__consumer_offsets-34, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,533] INFO [Log partition=__consumer_offsets-34, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:36,534] INFO Created log for partition __consumer_offsets-34 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-34 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,534] INFO [Partition __consumer_offsets-34 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
[2024-03-16 12:52:36,534] INFO [Partition __consumer_offsets-34 broker=0] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,534] INFO [Partition __consumer_offsets-34 broker=0] __consumer_offsets-34 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,593] INFO [Log partition=__consumer_offsets-15, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,595] INFO [Log partition=__consumer_offsets-15, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 6 ms (kafka.log.Log)
[2024-03-16 12:52:36,596] INFO Created log for partition __consumer_offsets-15 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-15 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,596] INFO [Partition __consumer_offsets-15 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
[2024-03-16 12:52:36,596] INFO [Partition __consumer_offsets-15 broker=0] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,596] INFO [Partition __consumer_offsets-15 broker=0] __consumer_offsets-15 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,652] INFO [Log partition=__consumer_offsets-12, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,652] INFO [Log partition=__consumer_offsets-12, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:36,653] INFO Created log for partition __consumer_offsets-12 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-12 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,653] INFO [Partition __consumer_offsets-12 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
[2024-03-16 12:52:36,653] INFO [Partition __consumer_offsets-12 broker=0] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,653] INFO [Partition __consumer_offsets-12 broker=0] __consumer_offsets-12 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,713] INFO [Log partition=__consumer_offsets-31, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,714] INFO [Log partition=__consumer_offsets-31, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:36,715] INFO Created log for partition __consumer_offsets-31 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-31 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,715] INFO [Partition __consumer_offsets-31 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
[2024-03-16 12:52:36,715] INFO [Partition __consumer_offsets-31 broker=0] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,715] INFO [Partition __consumer_offsets-31 broker=0] __consumer_offsets-31 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,775] INFO [Log partition=__consumer_offsets-9, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,776] INFO [Log partition=__consumer_offsets-9, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 7 ms (kafka.log.Log)
[2024-03-16 12:52:36,777] INFO Created log for partition __consumer_offsets-9 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-9 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,777] INFO [Partition __consumer_offsets-9 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
[2024-03-16 12:52:36,778] INFO [Partition __consumer_offsets-9 broker=0] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,778] INFO [Partition __consumer_offsets-9 broker=0] __consumer_offsets-9 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,836] INFO [Log partition=__consumer_offsets-47, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,837] INFO [Log partition=__consumer_offsets-47, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2024-03-16 12:52:36,838] INFO Created log for partition __consumer_offsets-47 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-47 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,838] INFO [Partition __consumer_offsets-47 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
[2024-03-16 12:52:36,838] INFO [Partition __consumer_offsets-47 broker=0] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,839] INFO [Partition __consumer_offsets-47 broker=0] __consumer_offsets-47 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,892] INFO [Log partition=__consumer_offsets-19, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,892] INFO [Log partition=__consumer_offsets-19, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2024-03-16 12:52:36,893] INFO Created log for partition __consumer_offsets-19 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-19 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,893] INFO [Partition __consumer_offsets-19 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
[2024-03-16 12:52:36,893] INFO [Partition __consumer_offsets-19 broker=0] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,893] INFO [Partition __consumer_offsets-19 broker=0] __consumer_offsets-19 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:36,973] INFO [Log partition=__consumer_offsets-28, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:36,974] INFO [Log partition=__consumer_offsets-28, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:36,974] INFO Created log for partition __consumer_offsets-28 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-28 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:36,974] INFO [Partition __consumer_offsets-28 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
[2024-03-16 12:52:36,974] INFO [Partition __consumer_offsets-28 broker=0] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:36,974] INFO [Partition __consumer_offsets-28 broker=0] __consumer_offsets-28 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,033] INFO [Log partition=__consumer_offsets-38, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,034] INFO [Log partition=__consumer_offsets-38, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2024-03-16 12:52:37,035] INFO Created log for partition __consumer_offsets-38 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-38 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,035] INFO [Partition __consumer_offsets-38 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
[2024-03-16 12:52:37,035] INFO [Partition __consumer_offsets-38 broker=0] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,035] INFO [Partition __consumer_offsets-38 broker=0] __consumer_offsets-38 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,083] INFO [Log partition=__consumer_offsets-35, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,083] INFO [Log partition=__consumer_offsets-35, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,083] INFO Created log for partition __consumer_offsets-35 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-35 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,083] INFO [Partition __consumer_offsets-35 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
[2024-03-16 12:52:37,083] INFO [Partition __consumer_offsets-35 broker=0] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,083] INFO [Partition __consumer_offsets-35 broker=0] __consumer_offsets-35 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,146] INFO [Log partition=__consumer_offsets-6, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,146] INFO [Log partition=__consumer_offsets-6, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,146] INFO Created log for partition __consumer_offsets-6 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-6 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,146] INFO [Partition __consumer_offsets-6 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
[2024-03-16 12:52:37,146] INFO [Partition __consumer_offsets-6 broker=0] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,146] INFO [Partition __consumer_offsets-6 broker=0] __consumer_offsets-6 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,208] INFO [Log partition=__consumer_offsets-44, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,208] INFO [Log partition=__consumer_offsets-44, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,208] INFO Created log for partition __consumer_offsets-44 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-44 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,208] INFO [Partition __consumer_offsets-44 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
[2024-03-16 12:52:37,208] INFO [Partition __consumer_offsets-44 broker=0] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,208] INFO [Partition __consumer_offsets-44 broker=0] __consumer_offsets-44 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,270] INFO [Log partition=__consumer_offsets-25, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,270] INFO [Log partition=__consumer_offsets-25, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,270] INFO Created log for partition __consumer_offsets-25 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-25 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,270] INFO [Partition __consumer_offsets-25 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
[2024-03-16 12:52:37,270] INFO [Partition __consumer_offsets-25 broker=0] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,270] INFO [Partition __consumer_offsets-25 broker=0] __consumer_offsets-25 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,333] INFO [Log partition=__consumer_offsets-16, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,333] INFO [Log partition=__consumer_offsets-16, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,333] INFO Created log for partition __consumer_offsets-16 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-16 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,333] INFO [Partition __consumer_offsets-16 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
[2024-03-16 12:52:37,333] INFO [Partition __consumer_offsets-16 broker=0] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,333] INFO [Partition __consumer_offsets-16 broker=0] __consumer_offsets-16 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,395] INFO [Log partition=__consumer_offsets-22, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,395] INFO [Log partition=__consumer_offsets-22, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,395] INFO Created log for partition __consumer_offsets-22 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-22 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,395] INFO [Partition __consumer_offsets-22 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
[2024-03-16 12:52:37,395] INFO [Partition __consumer_offsets-22 broker=0] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,395] INFO [Partition __consumer_offsets-22 broker=0] __consumer_offsets-22 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,458] INFO [Log partition=__consumer_offsets-41, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,458] INFO [Log partition=__consumer_offsets-41, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
[2024-03-16 12:52:37,458] INFO Created log for partition __consumer_offsets-41 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-41 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,458] INFO [Partition __consumer_offsets-41 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
[2024-03-16 12:52:37,458] INFO [Partition __consumer_offsets-41 broker=0] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,458] INFO [Partition __consumer_offsets-41 broker=0] __consumer_offsets-41 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,520] INFO [Log partition=__consumer_offsets-32, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,520] INFO [Log partition=__consumer_offsets-32, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 15 ms (kafka.log.Log)
[2024-03-16 12:52:37,520] INFO Created log for partition __consumer_offsets-32 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-32 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,520] INFO [Partition __consumer_offsets-32 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
[2024-03-16 12:52:37,520] INFO [Partition __consumer_offsets-32 broker=0] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,520] INFO [Partition __consumer_offsets-32 broker=0] __consumer_offsets-32 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,567] INFO [Log partition=__consumer_offsets-3, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,567] INFO [Log partition=__consumer_offsets-3, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,567] INFO Created log for partition __consumer_offsets-3 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-3 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,567] INFO [Partition __consumer_offsets-3 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
[2024-03-16 12:52:37,567] INFO [Partition __consumer_offsets-3 broker=0] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,567] INFO [Partition __consumer_offsets-3 broker=0] __consumer_offsets-3 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 12:52:37,630] INFO [Log partition=__consumer_offsets-13, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 12:52:37,630] INFO [Log partition=__consumer_offsets-13, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
[2024-03-16 12:52:37,630] INFO Created log for partition __consumer_offsets-13 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__consumer_offsets-13 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 12:52:37,630] INFO [Partition __consumer_offsets-13 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
[2024-03-16 12:52:37,630] INFO [Partition __consumer_offsets-13 broker=0] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 12:52:37,630] INFO [Partition __consumer_offsets-13 broker=0] __consumer_offsets-13 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
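The burst of "Scheduling loading of offsets and group metadata" lines below is the group coordinator taking over: for every `__consumer_offsets` partition this broker now leads, it schedules a background read of that partition to rebuild committed offsets and group metadata in memory. Which of the 50 partitions a given consumer group lands on is determined by a simple hash of the group id; a minimal Java sketch of that mapping (mirroring `GroupMetadataManager.partitionFor` in the Kafka source, and assuming the default `offsets.topic.num.partitions = 50`):

```java
public class GroupCoordinatorPartition {

    /**
     * Maps a consumer group id to a partition of __consumer_offsets.
     * The broker leading that partition acts as the group's coordinator.
     */
    static int partitionFor(String groupId, int offsetsTopicPartitionCount) {
        int hash = groupId.hashCode();
        // Guard against Integer.MIN_VALUE, whose Math.abs is still negative
        int abs = (hash == Integer.MIN_VALUE) ? 0 : Math.abs(hash);
        return abs % offsetsTopicPartitionCount;
    }

    public static void main(String[] args) {
        // Using the group id configured for the test client in this post
        System.out.println(partitionFor("spring-kafka-evo-consumer-004", 50));
    }
}
```

Once a partition's load completes, the committed offsets and group state for every group hashed to it become queryable on this broker.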
[2024-03-16 12:52:37,692] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
[2024-03-16 12:52:37,692] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
[2024-03-16 12:52:37,692] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)

…the same "Scheduling loading of offsets and group metadata" entry repeats for the remaining __consumer_offsets partitions (50 in total)…

[2024-03-16 12:52:37,692] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2024-03-16 12:52:37,692] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2024-03-16 12:52:37,692] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

…the corresponding "Finished loading offsets and group metadata" entry follows for each of the 50 __consumer_offsets partitions, every load completing within 0–16 milliseconds…

2.2. Logs from sending a transactional message for the first time

The first transactional send involves extra work: looking up the node that hosts the transaction coordinator, initializing the internal Kafka topic __transaction_state and its partitions, and initializing a producer ID (PID) with epoch = 0.

Comparing these logs with those of a later send (section 2.3) makes it clear that this initialization happens only on the first transactional send, not on every one.
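
To reproduce these first-send steps outside of Spring, here is a minimal sketch using the plain Kafka Java client (the transactional.id and message here are illustrative, not the values Spring derives). Calling initTransactions() is what drives the FindCoordinator and InitProducerId round trips recorded in the logs below:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FirstTransactionalSend {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Illustrative id; spring-kafka builds its ids from transaction-id-prefix
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "tx-kafka-demo-0");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // First-ever call for this id: FindCoordinator + InitProducerId; on a fresh
            // broker this also auto-creates __transaction_state and assigns a PID with epoch 0
            producer.initTransactions();

            producer.beginTransaction();
            producer.send(new ProducerRecord<>("TRANSACTION-TOPIC-1", "hello"));
            producer.commitTransaction();
        }
    }
}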

2.2.1. Client logs

Key log lines:

  1. Creating new transaction: a Kafka transaction is created for the controller method's transaction annotation @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
  2. ProducerConfig values: the producer configuration used to register each producer with the Kafka broker, including its client id and transactional id (the naming rule is sketched after this list)
    1. First producer
      • client.id = producer-tx-kafka-0: the producer client id, in the format [producer]-[transactional.id]
      • transactional.id = tx-kafka-0: the producer's transactional id
    2. Second producer
      • client.id = producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0: the second producer's client id, again in the format [producer]-[transactional.id]
      • transactional.id = tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0: the second producer's transactional id; this transaction is started while the first listener is consuming, so the consumer group name and topic are appended
    3. Third producer
      • client.id = producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0: the third producer's client id, in the same format
      • transactional.id = tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0: the third producer's transactional id; started while the second listener is consuming, so its consumer group name and topic are appended
  3. Instantiated a transactional producer.: a producer with transactional semantics is instantiated
  4. Invoking InitProducerId for the first time in order to acquire a producer ID: InitProducerId is invoked for the first time to obtain a producer ID
  5. Cluster ID: the cluster ID, a unique and immutable identifier of the Kafka cluster
  6. Discovered transaction coordinator: the transaction coordinator on the Kafka broker has been discovered
  7. Created Kafka transaction on producer: a Kafka transaction is created on the producer
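
The consumer-derived ids above follow a visible naming rule. Below is a hypothetical helper (not a spring-kafka API) that sketches how the transactional.id of a consumer-initiated transaction is assembled in the classic EOS mode, matching the ids seen in these logs:

public class TxIdNaming {

    // Observed pattern: <transaction-id-prefix><group.id>.<topic>.<partition>
    static String consumerDerivedTxId(String prefix, String groupId, String topic, int partition) {
        return prefix + groupId + "." + topic + "." + partition;
    }

    public static void main(String[] args) {
        // -> tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0
        System.out.println(consumerDerivedTxId(
                "tx-kafka-", "spring-kafka-evo-consumer-004", "TRANSACTION-TOPIC-1", 0));
    }
}
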
2024-03-16 17:06:58.492  INFO 7008 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2024-03-16 17:06:58.492  INFO 7008 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2024-03-16 17:06:58.494  INFO 7008 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 2 ms
2024-03-16 17:06:58.593 DEBUG 7008 --- [nio-8080-exec-1] o.s.k.t.KafkaTransactionManager          : Creating new transaction with name [com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'kafkaTransactionManager',-java.lang.Exception
2024-03-16 17:06:58.608  INFO 7008 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
	acks = -1
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.dns.lookup = use_all_dns_ips
	client.id = producer-tx-kafka-0
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = true
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.connect.timeout.ms = null
	sasl.login.read.timeout.ms = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.login.retry.backoff.max.ms = 10000
	sasl.login.retry.backoff.ms = 100
	sasl.mechanism = GSSAPI
	sasl.oauthbearer.clock.skew.seconds = 30
	sasl.oauthbearer.expected.audience = null
	sasl.oauthbearer.expected.issuer = null
	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
	sasl.oauthbearer.jwks.endpoint.url = null
	sasl.oauthbearer.scope.claim.name = scope
	sasl.oauthbearer.sub.claim.name = sub
	sasl.oauthbearer.token.endpoint.url = null
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = tx-kafka-0
	value.serializer = class org.springframework.kafka.support.serializer.ToStringSerializer

2024-03-16 17:06:58.638  INFO 7008 --- [nio-8080-exec-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-tx-kafka-0, transactionalId=tx-kafka-0] Instantiated a transactional producer.
2024-03-16 17:06:58.666  WARN 7008 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig    : The configuration 'max.poll.interval.ms' was supplied but isn't a known config.
2024-03-16 17:06:58.667  INFO 7008 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.1.2
2024-03-16 17:06:58.667  INFO 7008 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: f8c67dc3ae0a3265
2024-03-16 17:06:58.667  INFO 7008 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1710580018667
2024-03-16 17:06:58.671  INFO 7008 --- [nio-8080-exec-1] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-0, transactionalId=tx-kafka-0] Invoking InitProducerId for the first time in order to acquire a producer ID
2024-03-16 17:06:58.679  INFO 7008 --- [ucer-tx-kafka-0] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-tx-kafka-0, transactionalId=tx-kafka-0] Cluster ID: TFg-ceVuSdugKwIQ_Wyijg
2024-03-16 17:07:05.114  INFO 7008 --- [ucer-tx-kafka-0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-0, transactionalId=tx-kafka-0] Discovered transaction coordinator DESKTOP-S0UTLJU:9092 (id: 0 rack: null)
2024-03-16 17:07:06.540  INFO 7008 --- [ucer-tx-kafka-0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-0, transactionalId=tx-kafka-0] ProducerId set to 0 with epoch 0
2024-03-16 17:07:06.555 DEBUG 7008 --- [nio-8080-exec-1] o.s.k.t.KafkaTransactionManager          : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@7d662a23]]
2024-03-16 17:07:06.555 TRACE 7008 --- [nio-8080-exec-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo]
2024-03-16 17:07:06.571  INFO 7008 --- [nio-8080-exec-1] c.k.k.s.d.controller.SenderController    : 发送消息:这是发送kafka消息体
2024-03-16 17:07:06.571 TRACE 7008 --- [nio-8080-exec-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.service.SenderService.sendTransactionTwo]
2024-03-16 17:07:06.633 TRACE 7008 --- [nio-8080-exec-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:07:06.928 TRACE 7008 --- [nio-8080-exec-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:07:06.928  INFO 7008 --- [nio-8080-exec-1] c.k.k.simple.demo.service.SenderService  : 1-验证数据库事务,查询数据库:[ProcessEventEntity(id=111, topic=PLAN_NODE_FINISHED, name=计划节点结束111, status=0, exception=null, tenantId=111111111111), ProcessEventEntity(id=222, topic=PLAN_NODE_FINISHED, name=计划节点结束222, status=0, exception=null, tenantId=222222222222), ProcessEventEntity(id=333, topic=PLAN_NODE_FINISHED, name=计划节点结束333, status=0, exception=null, tenantId=333333333333)]
2024-03-16 17:07:06.975 TRACE 7008 --- [nio-8080-exec-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.service.SenderService.sendTransactionTwo]
2024-03-16 17:07:07.173 TRACE 7008 --- [nio-8080-exec-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo]
2024-03-16 17:07:07.173 DEBUG 7008 --- [nio-8080-exec-1] o.s.k.t.KafkaTransactionManager          : Initiating transaction commit
2024-03-16 17:07:07.327  INFO 7008 --- [ucer-tx-kafka-0] c.k.k.simple.demo.service.SenderService  : 2-事务发送kafka消息成功回调: SendResult [producerRecord=ProducerRecord(topic=TRANSACTION-TOPIC-1, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = spring.message.value.type, value = [106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103])], isReadOnly = true), key=null, value=这是发送kafka消息体, timestamp=null), recordMetadata=TRANSACTION-TOPIC-1-0@0]
2024-03-16 17:07:07.562 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2024-03-16 17:07:07.563  INFO 7008 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
	client.id = producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0
	transactional.id = tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0
	…(all other values are identical to the first producer's ProducerConfig dump above)

2024-03-16 17:07:07.564  INFO 7008 --- [ntainer#0-0-C-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0] Instantiated a transactional producer.
2024-03-16 17:07:07.570  WARN 7008 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig    : The configuration 'max.poll.interval.ms' was supplied but isn't a known config.
2024-03-16 17:07:07.570  INFO 7008 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.1.2
2024-03-16 17:07:07.570  INFO 7008 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: f8c67dc3ae0a3265
2024-03-16 17:07:07.570  INFO 7008 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1710580027570
2024-03-16 17:07:07.571  INFO 7008 --- [ntainer#0-0-C-1] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0] Invoking InitProducerId for the first time in order to acquire a producer ID
2024-03-16 17:07:07.577  INFO 7008 --- [CTION-TOPIC-1.0] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0] Cluster ID: TFg-ceVuSdugKwIQ_Wyijg
2024-03-16 17:07:07.578  INFO 7008 --- [CTION-TOPIC-1.0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0] Discovered transaction coordinator DESKTOP-S0UTLJU:9092 (id: 0 rack: null)
2024-03-16 17:07:07.698  INFO 7008 --- [CTION-TOPIC-1.0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0] ProducerId set to 1 with epoch 0
2024-03-16 17:07:07.709 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@4d47de30]]
2024-03-16 17:07:07.719 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Participating in existing transaction
2024-03-16 17:07:07.719 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.consumer.TransactionOneEventListener.listenEvent1]
2024-03-16 17:07:07.753  INFO 7008 --- [ntainer#0-0-C-1] c.k.k.s.d.c.TransactionOneEventListener  : listenEvent1:接收kafka消息:[这是发送kafka消息体],from TRANSACTION-TOPIC-1 @ 0@ 0
2024-03-16 17:07:07.754 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.service.SenderService.doEventV1]
2024-03-16 17:07:07.754 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:07:07.756 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:07:07.756  INFO 7008 --- [ntainer#0-0-C-1] c.k.k.simple.demo.service.SenderService  : 3-验证数据库事务,查询数据库:[ProcessEventEntity(id=111, topic=PLAN_NODE_FINISHED, name=计划节点结束111, status=0, exception=null, tenantId=111111111111), ProcessEventEntity(id=222, topic=PLAN_NODE_FINISHED, name=计划节点结束222, status=0, exception=null, tenantId=222222222222), ProcessEventEntity(id=333, topic=PLAN_NODE_FINISHED, name=计划节点结束333, status=0, exception=null, tenantId=333333333333)]
2024-03-16 17:07:07.761 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.service.SenderService.doEventV1]
2024-03-16 17:07:07.762 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.consumer.TransactionOneEventListener.listenEvent1]
2024-03-16 17:07:07.767  INFO 7008 --- [CTION-TOPIC-1.0] c.k.k.simple.demo.service.SenderService  : 4-事务发送kafka消息成功回调: SendResult [producerRecord=ProducerRecord(topic=TRANSACTION-TOPIC-2, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = spring.message.value.type, value = [106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103])], isReadOnly = true), key=null, value=这是发送kafka消息体, timestamp=null), recordMetadata=TRANSACTION-TOPIC-2-0@0]
2024-03-16 17:07:07.778  INFO 7008 --- [CTION-TOPIC-1.0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0] Discovered group coordinator DESKTOP-S0UTLJU:9092 (id: 0 rack: null)
2024-03-16 17:07:08.094 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Initiating transaction commit
2024-03-16 17:07:08.102 DEBUG 7008 --- [ntainer#1-0-C-1] o.s.k.t.KafkaTransactionManager          : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2024-03-16 17:07:08.104  INFO 7008 --- [ntainer#1-0-C-1] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
	client.id = producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0
	transactional.id = tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0
	…(all other values are identical to the first producer's ProducerConfig dump above)

2024-03-16 17:07:08.105  INFO 7008 --- [ntainer#1-0-C-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0] Instantiated a transactional producer.
2024-03-16 17:07:08.114  WARN 7008 --- [ntainer#1-0-C-1] o.a.k.clients.producer.ProducerConfig    : The configuration 'max.poll.interval.ms' was supplied but isn't a known config.
2024-03-16 17:07:08.114  INFO 7008 --- [ntainer#1-0-C-1] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 3.1.2
2024-03-16 17:07:08.115  INFO 7008 --- [ntainer#1-0-C-1] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: f8c67dc3ae0a3265
2024-03-16 17:07:08.115  INFO 7008 --- [ntainer#1-0-C-1] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1710580028114
2024-03-16 17:07:08.115  INFO 7008 --- [ntainer#1-0-C-1] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0] Invoking InitProducerId for the first time in order to acquire a producer ID
2024-03-16 17:07:08.128  INFO 7008 --- [CTION-TOPIC-2.0] org.apache.kafka.clients.Metadata        : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0] Cluster ID: TFg-ceVuSdugKwIQ_Wyijg
2024-03-16 17:07:08.129  INFO 7008 --- [CTION-TOPIC-2.0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0] Discovered transaction coordinator DESKTOP-S0UTLJU:9092 (id: 0 rack: null)
2024-03-16 17:07:08.334  INFO 7008 --- [CTION-TOPIC-2.0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0] ProducerId set to 2 with epoch 0
2024-03-16 17:07:08.334 DEBUG 7008 --- [ntainer#1-0-C-1] o.s.k.t.KafkaTransactionManager          : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@36ccd58e]]
2024-03-16 17:07:08.336  INFO 7008 --- [ntainer#1-0-C-1] c.k.k.s.d.c.TransactionTwoEventListener  : listenEvent2:接收kafka消息:[这是发送kafka消息体],from TRANSACTION-TOPIC-2 @ 0@ 0
2024-03-16 17:07:08.341  INFO 7008 --- [CTION-TOPIC-2.0] o.a.k.c.p.internals.TransactionManager   : [Producer clientId=producer-tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0, transactionalId=tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0] Discovered group coordinator DESKTOP-S0UTLJU:9092 (id: 0 rack: null)
2024-03-16 17:07:08.527 DEBUG 7008 --- [ntainer#1-0-C-1] o.s.k.t.KafkaTransactionManager          : Initiating transaction commit

2.2.2. Kafka logs

Key log lines:

  1. Creating topic __transaction_state with configuration: the internal Kafka transaction-state topic __transaction_state is created automatically
  2. Auto creation of topic __transaction_state with 50 partitions and replication factor 1 is successful: the transaction-state topic is auto-created with 50 partitions and a replication factor of 1 (I am running a single Kafka broker, not a cluster, so there is no replica on another node)
  3. Created log for partition __transaction_state: the partition logs of the transaction-state topic are created
  4. Loading transaction metadata from: transaction metadata is loaded from the transaction-state partition log files
  5. Initialized transactionalId: a transactional id is initialized for each client request; there are three transactions in total (how each id maps to a __transaction_state partition is sketched after this list)
    1. Initialized transactionalId tx-kafka-0 with producerId 0: the transactional id is initialized for the first producer
    2. Initialized transactionalId tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0 with producerId 1: the transactional id is initialized for the second producer
    3. Initialized transactionalId tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0 with producerId 2: the transactional id is initialized for the third producer
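
Which of the 50 __transaction_state partitions hosts a given transactional id is decided by hashing the id over the partition count (broker setting transaction.state.log.num.partitions, default 50). The sketch below mirrors the broker's TransactionStateManager#partitionFor logic and reproduces the partition numbers seen in the logs that follow:

public class TxStatePartition {

    // Mirrors TransactionStateManager#partitionFor:
    // Utils.abs(transactionalId.hashCode) % transactionTopicPartitionCount
    static int partitionFor(String transactionalId, int numPartitions) {
        // Kafka's Utils.abs clears the sign bit instead of using Math.abs
        return (transactionalId.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Matches "Initialized transactionalId tx-kafka-0 ... on partition __transaction_state-28"
        System.out.println(partitionFor("tx-kafka-0", 50)); // prints 28
    }
}
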
[2024-03-16 17:06:58,685] INFO Creating topic __transaction_state with configuration {segment.bytes=104857600, unclean.leader.election.enable=false, min.insync.replicas=1, cleanup.policy=compact, compression.type=uncompressed} and initial partition assignment Map(23 -> ArrayBuffer(0), 32 -> ArrayBuffer(0), 41 -> ArrayBuffer(0), 17 -> ArrayBuffer(0), 8 -> ArrayBuffer(0), 35 -> ArrayBuffer(0), 44 -> ArrayBuffer(0), 26 -> ArrayBuffer(0), 11 -> ArrayBuffer(0), 29 -> ArrayBuffer(0), 38 -> ArrayBuffer(0), 47 -> ArrayBuffer(0), 20 -> ArrayBuffer(0), 2 -> ArrayBuffer(0), 5 -> ArrayBuffer(0), 14 -> ArrayBuffer(0), 46 -> ArrayBuffer(0), 49 -> ArrayBuffer(0), 40 -> ArrayBuffer(0), 13 -> ArrayBuffer(0), 4 -> ArrayBuffer(0), 22 -> ArrayBuffer(0), 31 -> ArrayBuffer(0), 16 -> ArrayBuffer(0), 7 -> ArrayBuffer(0), 43 -> ArrayBuffer(0), 25 -> ArrayBuffer(0), 34 -> ArrayBuffer(0), 10 -> ArrayBuffer(0), 37 -> ArrayBuffer(0), 1 -> ArrayBuffer(0), 19 -> ArrayBuffer(0), 28 -> ArrayBuffer(0), 45 -> ArrayBuffer(0), 27 -> ArrayBuffer(0), 36 -> ArrayBuffer(0), 18 -> ArrayBuffer(0), 9 -> ArrayBuffer(0), 21 -> ArrayBuffer(0), 48 -> ArrayBuffer(0), 3 -> ArrayBuffer(0), 12 -> ArrayBuffer(0), 30 -> ArrayBuffer(0), 39 -> ArrayBuffer(0), 15 -> ArrayBuffer(0), 42 -> ArrayBuffer(0), 24 -> ArrayBuffer(0), 6 -> ArrayBuffer(0), 33 -> ArrayBuffer(0), 0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2024-03-16 17:06:58,897] INFO [KafkaApi-0] Auto creation of topic __transaction_state with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis)
[2024-03-16 17:07:00,308] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(__transaction_state-42, __transaction_state-31, __transaction_state-45, __transaction_state-15, __transaction_state-12, __transaction_state-7, __transaction_state-46, __transaction_state-48, __transaction_state-49, __transaction_state-28, __transaction_state-2, __transaction_state-20, __transaction_state-24, __transaction_state-13, __transaction_state-0, __transaction_state-37, __transaction_state-3, __transaction_state-21, __transaction_state-29, __transaction_state-39, __transaction_state-38, __transaction_state-6, __transaction_state-14, __transaction_state-10, __transaction_state-44, __transaction_state-9, __transaction_state-22, __transaction_state-43, __transaction_state-4, __transaction_state-30, __transaction_state-33, __transaction_state-32, __transaction_state-25, __transaction_state-17, __transaction_state-23, __transaction_state-47, __transaction_state-18, __transaction_state-26, __transaction_state-36, __transaction_state-5, __transaction_state-8, __transaction_state-16, __transaction_state-11, __transaction_state-40, __transaction_state-19, __transaction_state-27, __transaction_state-41, __transaction_state-1, __transaction_state-34, __transaction_state-35) (kafka.server.ReplicaFetcherManager)
[2024-03-16 17:07:00,338] INFO [Log partition=__transaction_state-25, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 17:07:00,339] INFO [Log partition=__transaction_state-25, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 27 ms (kafka.log.Log)
[2024-03-16 17:07:00,340] INFO Created log for partition __transaction_state-25 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__transaction_state-25 with properties {compression.type -> uncompressed, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 17:07:00,345] INFO [Partition __transaction_state-25 broker=0] No checkpointed highwatermark is found for partition __transaction_state-25 (kafka.cluster.Partition)
[2024-03-16 17:07:00,345] INFO [Partition __transaction_state-25 broker=0] Log loaded for partition __transaction_state-25 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 17:07:00,345] INFO [Partition __transaction_state-25 broker=0] __transaction_state-25 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 17:07:00,412] INFO [Log partition=__transaction_state-6, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 17:07:00,414] INFO [Log partition=__transaction_state-6, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 11 ms (kafka.log.Log)
[2024-03-16 17:07:00,414] INFO Created log for partition __transaction_state-6 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__transaction_state-6 with properties {compression.type -> uncompressed, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 17:07:00,414] INFO [Partition __transaction_state-6 broker=0] No checkpointed highwatermark is found for partition __transaction_state-6 (kafka.cluster.Partition)
[2024-03-16 17:07:00,415] INFO [Partition __transaction_state-6 broker=0] Log loaded for partition __transaction_state-6 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 17:07:00,415] INFO [Partition __transaction_state-6 broker=0] __transaction_state-6 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 17:07:00,489] INFO [Log partition=__transaction_state-44, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2024-03-16 17:07:00,490] INFO [Log partition=__transaction_state-44, dir=F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)

…partition logs are created this way for each of the 50 transaction-state partitions…

[2024-03-16 17:07:04,890] INFO Created log for partition __transaction_state-9 in F:\privateBox\kafkaCodeReadProject\kafka\tmp\kafka-logs\__transaction_state-9 with properties {compression.type -> uncompressed, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.4-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2024-03-16 17:07:04,890] INFO [Partition __transaction_state-9 broker=0] No checkpointed highwatermark is found for partition __transaction_state-9 (kafka.cluster.Partition)
[2024-03-16 17:07:04,891] INFO [Partition __transaction_state-9 broker=0] Log loaded for partition __transaction_state-9 with initial high watermark 0 (kafka.cluster.Partition)
[2024-03-16 17:07:04,891] INFO [Partition __transaction_state-9 broker=0] __transaction_state-9 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
[2024-03-16 17:07:05,031] INFO [Transaction State Manager 0]: Loading transaction metadata from __transaction_state-1 at epoch 0 (kafka.coordinator.transaction.TransactionStateManager)
[2024-03-16 17:07:05,040] INFO [Transaction State Manager 0]: Completed loading transaction metadata from __transaction_state-1 for coordinator epoch 0 (kafka.coordinator.transaction.TransactionStateManager)

…transaction metadata is loaded this way from each of the 50 transaction-state partitions…

[2024-03-16 17:07:05,254] INFO [Transaction State Manager 0]: Loading transaction metadata from __transaction_state-18 at epoch 0 (kafka.coordinator.transaction.TransactionStateManager)
[2024-03-16 17:07:05,254] INFO [Transaction State Manager 0]: Completed loading transaction metadata from __transaction_state-18 for coordinator epoch 0 (kafka.coordinator.transaction.TransactionStateManager)
[2024-03-16 17:07:06,462] INFO [TransactionCoordinator id=0] Initialized transactionalId tx-kafka-0 with producerId 0 and producer epoch 0 on partition __transaction_state-28 (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-16 17:07:07,697] INFO [TransactionCoordinator id=0] Initialized transactionalId tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-1.0 with producerId 1 and producer epoch 0 on partition __transaction_state-29 (kafka.coordinator.transaction.TransactionCoordinator)
[2024-03-16 17:07:08,333] INFO [TransactionCoordinator id=0] Initialized transactionalId tx-kafka-spring-kafka-evo-consumer-004.TRANSACTION-TOPIC-2.0 with producerId 2 and producer epoch 0 on partition __transaction_state-18 (kafka.coordinator.transaction.TransactionCoordinator)

2.3. Logs from sending a transactional message again

Sending a transactional message again and comparing the logs with the first send, it is obvious that most of the initialization steps are gone.
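
The initialization disappears because spring-kafka's DefaultKafkaProducerFactory caches transactional producers: the second run reuses the very same KafkaProducer instances as the first (note the identical @7d662a23, @4d47de30 and @36ccd58e hashes in both runs). A minimal sketch of that reuse, with illustrative configuration values:

import java.util.Map;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;

public class CachedTxProducerDemo {

    public static void main(String[] args) {
        Map<String, Object> config = Map.of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        DefaultKafkaProducerFactory<String, String> factory = new DefaultKafkaProducerFactory<>(config);
        factory.setTransactionIdPrefix("tx-kafka-"); // same prefix as the application config

        Producer<String, String> first = factory.createProducer();  // first use: full initialization
        first.close();  // the CloseSafeProducer returns its delegate to the factory cache

        Producer<String, String> second = factory.createProducer(); // reuses the cached producer
        second.close();
        factory.destroy(); // physically closes the cached producers
    }
}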

2.3.1. Client logs

Key log lines:

  1. Creating new transaction: a Kafka transaction is created for the controller method's transaction annotation @Transactional(rollbackFor = Exception.class, transactionManager = "kafkaTransactionManager")
  2. Created Kafka transaction on producer: a Kafka transaction is created on the (already cached) producer
  3. Getting transaction for: Spring obtains the transaction
  4. Completing transaction for: Spring completes (commits) the transaction
  5. Initiating transaction commit: the Kafka transaction manager initiates the transaction commit
2024-03-16 17:12:57.209 DEBUG 7008 --- [nio-8080-exec-2] o.s.k.t.KafkaTransactionManager          : Creating new transaction with name [com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; 'kafkaTransactionManager',-java.lang.Exception
2024-03-16 17:12:57.209 DEBUG 7008 --- [nio-8080-exec-2] o.s.k.t.KafkaTransactionManager          : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@7d662a23]]
2024-03-16 17:12:57.209 TRACE 7008 --- [nio-8080-exec-2] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo]
2024-03-16 17:12:57.210  INFO 7008 --- [nio-8080-exec-2] c.k.k.s.d.controller.SenderController    : 发送消息:这是发送kafka消息体
2024-03-16 17:12:57.210 TRACE 7008 --- [nio-8080-exec-2] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.service.SenderService.sendTransactionTwo]
2024-03-16 17:12:57.210 TRACE 7008 --- [nio-8080-exec-2] o.s.t.i.TransactionInterceptor           : Getting transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:12:57.212 TRACE 7008 --- [nio-8080-exec-2] o.s.t.i.TransactionInterceptor           : Completing transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:12:57.212  INFO 7008 --- [nio-8080-exec-2] c.k.k.simple.demo.service.SenderService  : 1-验证数据库事务,查询数据库:[ProcessEventEntity(id=111, topic=PLAN_NODE_FINISHED, name=计划节点结束111, status=0, exception=null, tenantId=111111111111), ProcessEventEntity(id=222, topic=PLAN_NODE_FINISHED, name=计划节点结束222, status=0, exception=null, tenantId=222222222222), ProcessEventEntity(id=333, topic=PLAN_NODE_FINISHED, name=计划节点结束333, status=0, exception=null, tenantId=333333333333)]
2024-03-16 17:12:57.213 TRACE 7008 --- [nio-8080-exec-2] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.service.SenderService.sendTransactionTwo]
2024-03-16 17:12:57.213 TRACE 7008 --- [nio-8080-exec-2] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.controller.SenderController.sendTransactionTwo]
2024-03-16 17:12:57.213 DEBUG 7008 --- [nio-8080-exec-2] o.s.k.t.KafkaTransactionManager          : Initiating transaction commit
2024-03-16 17:12:57.254  INFO 7008 --- [ucer-tx-kafka-0] c.k.k.simple.demo.service.SenderService  : 2-事务发送kafka消息成功回调: SendResult [producerRecord=ProducerRecord(topic=TRANSACTION-TOPIC-1, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = spring.message.value.type, value = [106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103])], isReadOnly = true), key=null, value=这是发送kafka消息体, timestamp=null), recordMetadata=TRANSACTION-TOPIC-1-0@2]
2024-03-16 17:12:57.260 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2024-03-16 17:12:57.261 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@4d47de30]]
2024-03-16 17:12:57.261 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Participating in existing transaction
2024-03-16 17:12:57.261 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.consumer.TransactionOneEventListener.listenEvent1]
2024-03-16 17:12:57.261  INFO 7008 --- [ntainer#0-0-C-1] c.k.k.s.d.c.TransactionOneEventListener  : listenEvent1:接收kafka消息:[这是发送kafka消息体],from TRANSACTION-TOPIC-1 @ 0@ 2
2024-03-16 17:12:57.262 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [com.kitman.kafka.simple.demo.service.SenderService.doEventV1]
2024-03-16 17:12:57.262 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Getting transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:12:57.264 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll]
2024-03-16 17:12:57.264  INFO 7008 --- [ntainer#0-0-C-1] c.k.k.simple.demo.service.SenderService  : 3-验证数据库事务,查询数据库:[ProcessEventEntity(id=111, topic=PLAN_NODE_FINISHED, name=计划节点结束111, status=0, exception=null, tenantId=111111111111), ProcessEventEntity(id=222, topic=PLAN_NODE_FINISHED, name=计划节点结束222, status=0, exception=null, tenantId=222222222222), ProcessEventEntity(id=333, topic=PLAN_NODE_FINISHED, name=计划节点结束333, status=0, exception=null, tenantId=333333333333)]
2024-03-16 17:12:57.265 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.service.SenderService.doEventV1]
2024-03-16 17:12:57.266 TRACE 7008 --- [ntainer#0-0-C-1] o.s.t.i.TransactionInterceptor           : Completing transaction for [com.kitman.kafka.simple.demo.consumer.TransactionOneEventListener.listenEvent1]
2024-03-16 17:12:57.271 DEBUG 7008 --- [ntainer#0-0-C-1] o.s.k.t.KafkaTransactionManager          : Initiating transaction commit
2024-03-16 17:12:57.274  INFO 7008 --- [CTION-TOPIC-1.0] c.k.k.simple.demo.service.SenderService  : 4-事务发送kafka消息成功回调: SendResult [producerRecord=ProducerRecord(topic=TRANSACTION-TOPIC-2, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = spring.message.value.type, value = [106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103])], isReadOnly = true), key=null, value=这是发送kafka消息体, timestamp=null), recordMetadata=TRANSACTION-TOPIC-2-0@2]
2024-03-16 17:12:57.279 DEBUG 7008 --- [ntainer#1-0-C-1] o.s.k.t.KafkaTransactionManager          : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2024-03-16 17:12:57.279 DEBUG 7008 --- [ntainer#1-0-C-1] o.s.k.t.KafkaTransactionManager          : Created Kafka transaction on producer [CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@36ccd58e]]
2024-03-16 17:12:57.280  INFO 7008 --- [ntainer#1-0-C-1] c.k.k.s.d.c.TransactionTwoEventListener  : listenEvent2:接收kafka消息:[这是发送kafka消息体],from TRANSACTION-TOPIC-2 @ 0@ 2
2024-03-16 17:12:57.283 DEBUG 7008 --- [ntainer#1-0-C-1] o.s.k.t.KafkaTransactionManager          : Initiating transaction commit
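
Reading the log above, three chained Kafka transactions are visible: the controller/service transaction that publishes to TRANSACTION-TOPIC-1 and commits; the listener-container transaction in which TransactionOneEventListener.listenEvent1 consumes that record and, via SenderService.doEventV1, republishes to TRANSACTION-TOPIC-2; and finally the container transaction in which TransactionTwoEventListener.listenEvent2 consumes the second record. The two listener classes are referenced by the log output but were not shown earlier, so here is a minimal sketch of what they might look like. Only the class names, method names, and log messages are taken from the output above; the signatures and wiring are assumptions, not the article's original code.

// TransactionOneEventListener.java : hypothetical reconstruction based on the log output
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
@Slf4j
public class TransactionOneEventListener {

    private final SenderService senderService;

    public TransactionOneEventListener(SenderService senderService) {
        this.senderService = senderService;
    }

    // The listener container has already opened a Kafka transaction ("Created Kafka
    // transaction on producer ..."), so this @Transactional method joins it, hence
    // the "Participating in existing transaction" line in the log.
    @Transactional(rollbackFor = Exception.class)
    @KafkaListener(topics = "TRANSACTION-TOPIC-1")
    public void listenEvent1(ConsumerRecord<String, String> record) {
        log.info("listenEvent1:接收kafka消息:[{}],from {} @ {}@ {}",
                record.value(), record.topic(), record.partition(), record.offset());
        // "transform-produce": republish to TRANSACTION-TOPIC-2 within the same transaction
        senderService.doEventV1(record.value());
    }
}

// TransactionTwoEventListener.java : terminal consumer, it only logs the record
@Component
@Slf4j
public class TransactionTwoEventListener {

    @KafkaListener(topics = "TRANSACTION-TOPIC-2")
    public void listenEvent2(ConsumerRecord<String, String> record) {
        log.info("listenEvent2:接收kafka消息:[{}],from {} @ {}@ {}",
                record.value(), record.topic(), record.partition(), record.offset());
    }
}

Note that listenEvent2 carries no @Transactional of its own, which matches the log: for TRANSACTION-TOPIC-2 the only activity is the container-managed Kafka transaction, with no TransactionInterceptor lines around it.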

2.3.2. Kafka logs

When the transactional message is sent a second time, the Kafka broker prints no INFO-level logs at all: every initialization log seen on the first send is skipped. This is expected, since by then the transaction coordinator has already been located, the producer id has already been initialized, and the topic partitions already exist, so the repeat transaction only involves AddPartitionsToTxn, Produce, and EndTxn requests, which the stock broker configuration does not log at INFO level.
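
If you want the broker side to say more while stepping through the source, one option is to raise the log level for the transaction coordinator package, so that the coordinator's handling of InitProducerId, AddPartitionsToTxn, and EndTxn becomes visible even on repeat sends. A sketch against the stock config/log4j.properties is below; the logger name simply mirrors the kafka.coordinator.transaction package, and this assumes a log4j-style broker configuration (which holds for the Kafka 2.x/3.x distributions).

# config/log4j.properties (broker side): surface transaction coordinator activity.
# Assumption: a log4j-based broker config; the logger name follows the package name.
log4j.logger.kafka.coordinator.transaction=DEBUG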
