Quick Development Environment Setup: Kafka and Kafka Eagle Monitoring (Latest Version)

Keywords: Kafka, Kafka environment setup, stream processing framework, Kafka Eagle, Kafka monitoring

Preface

Many people wonder: what is there to learn from setting up an environment?

  • First, it helps us understand the technology better;
  • Second, it builds hands-on skills that pay off later when troubleshooting issues in production;
  • Third, it helps explain why so many people go on to study a technology's source code.

Kafka released version 2.8.0 on April 19, 2021. The release notes highlight one major change: Kafka no longer has to depend on ZooKeeper and can instead run on its own self-managed metadata quorum (KRaft, shipped as an early-access feature).
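
As a side note on 2.8.0: a minimal sketch of starting a single ZooKeeper-free broker in KRaft mode looks roughly like the snippet below. It assumes the config/kraft/server.properties file bundled with the 2.8.0 tarball, so verify the exact steps against the 2.8.0 README. The rest of this article sets up Kafka 2.7.1, which still uses ZooKeeper.

# Kafka 2.8.0, KRaft early access: generate a cluster ID and format the storage directory
$ bin/kafka-storage.sh random-uuid
$ bin/kafka-storage.sh format -t <uuid-from-previous-step> -c config/kraft/server.properties

# Start a combined broker/controller without ZooKeeper
$ bin/kafka-server-start.sh config/kraft/server.properties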


1. Kafka Overview

Apache Kafka is an open-source distributed event streaming platform that is horizontally scalable, fault-tolerant, and fast. It is used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.


2. Kafka Core Strengths

  • Core capabilities

    • High throughput

    Deliver messages at network-limited throughput using a cluster of machines with latencies as low as 2 ms.

    • Scalable

    Scale production clusters up to a thousand brokers, trillions of messages per day, petabytes of data, and hundreds of thousands of partitions. Elastically expand and contract storage and processing.

    • Permanent storage

    Store streams of data safely in a distributed, durable, fault-tolerant cluster.

    • High availability

    Stretch clusters efficiently over availability zones or connect separate clusters across geographic regions.

  • Ecosystem

    • Built-in stream processing

    Process streams of events with joins, aggregations, filters, transformations, and more, using event-time semantics and exactly-once processing.

    • Connect to almost anything

    Kafka's out-of-the-box Connect interface integrates with hundreds of event sources and event sinks, including Postgres, JMS, Elasticsearch, AWS S3, and more.

    • Client libraries

    Read, write, and process streams of events in a vast array of programming languages.

    • Large ecosystem of open-source tools

    Large ecosystem of open-source tools: leverage a vast array of community-driven tooling.

  • Trust and ease of use

    • Mission critical

    Support mission-critical use cases with guaranteed ordering, zero message loss, and efficient exactly-once processing.

    • Trusted by thousands of organizations

    Thousands of organizations use Kafka, from internet giants to car manufacturers to stock exchanges. More than 5 million unique lifetime downloads.

    • Vast user community

    Kafka is one of the five most active projects of the Apache Software Foundation, with hundreds of meetups around the world.

    • Rich online resources

    Rich documentation, online training, guided tutorials, videos, sample projects, Stack Overflow, and more.


3. Kafka Use Cases

  • Messaging

    Kafka works well as a replacement for more traditional message brokers. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, and so on). Compared with most messaging systems, Kafka has better throughput, built-in partitioning, replication, and fault tolerance, which makes it a good solution for large-scale message processing applications.
    In practice, messaging workloads tend to have relatively low throughput but may require low end-to-end latency, and they often depend on the strong durability guarantees Kafka provides.

    In this domain, Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

  • Website activity tracking

    Kafka's original use case was rebuilding a user activity tracking pipeline as a set of real-time publish-subscribe feeds. Site activity (page views, searches, or other actions users may take) is published to central topics, with one topic per activity type. These feeds can be subscribed to for a range of use cases, including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.
    Activity tracking is often very high volume, since every user page view generates many activity messages.

  • Metrics

    Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

  • Log aggregation

    Many people use Kafka as a replacement for a log aggregation solution. Log aggregation typically collects physical log files from servers and puts them in a central place (a file server or HDFS, perhaps) for processing. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption. Compared with log-centric systems such as Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees thanks to replication, and much lower end-to-end latency.

  • Stream processing

    Many Kafka users process data in pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an "articles" topic; further processing might normalize or deduplicate this content and publish the cleaned-up article content to a new topic; a final stage might attempt to recommend this content to users. Such pipelines create graphs of real-time data flows based on the individual topics. Starting with 0.10.0.0, a lightweight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform this kind of processing. Apart from Kafka Streams, alternative open-source stream processing tools include Apache Storm and Apache Samza.

  • Event sourcing

    Event sourcing is a style of application design in which state changes are logged as a time-ordered sequence of records. Kafka's support for very large stored log data makes it an excellent backend for applications built in this style.

  • Commit log

    Kafka can serve as a kind of external commit log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The log compaction feature in Kafka helps support this usage. In this usage Kafka is similar to the Apache BookKeeper project.


4. Quick Kafka Setup

Preparation

Hardware requirements

Resource    Minimum    Recommended
CPU         2 CPU      4 CPU
Mem         4 GB       8 GB
Disk        40 GB      160 GB

Software requirements

Software     Version                Description
JDK          Version 8 or higher    JDK
Zookeeper    Version 3.5 or higher  Used to manage Kafka metadata

Note: Kafka ships with a bundled Zookeeper; an external Zookeeper cluster can also be used.
Note: Kafka requires the environment above to run.
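
If you plan to use the Zookeeper bundled with Kafka, it is worth glancing at its defaults before starting it. A minimal check, assuming the stock config/zookeeper.properties that ships in the Kafka tarball:

# The bundled defaults normally keep data under /tmp and listen on client port 2181;
# point dataDir somewhere persistent for anything beyond a throwaway test.
$ grep -E "dataDir|clientPort" config/zookeeper.properties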


Install the JDK

  • Step 1: Download the installation package

    jdk-8u192-linux-x64.rpm

  • Step 2: Install

    $ rpm -ivh jdk-8u192-linux-x64.rpm
    
  • Step 3: Configure PATH

    Create the file jdk_env.sh under /etc/profile.d/, write the configuration below into it, then press Esc and type :x to save and exit.

    $ vi /etc/profile.d/jdk_env.sh
    

    Configuration content

    export JAVA_HOME=/usr/java/jdk1.8.0_192-amd64
    export JRE_HOME=${JAVA_HOME}/jre
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
    export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
    export PATH=$PATH:${JAVA_PATH}
    

    Reload the configuration

    $ source /etc/profile
    
  • Step 4: Verify

    $ java -version
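
    Optionally, also confirm that JAVA_HOME resolved as expected (a quick check based on the exports above; javac is only present when the full JDK, not just a JRE, is installed):

    $ echo $JAVA_HOME
    /usr/java/jdk1.8.0_192-amd64
    $ javac -version
    javac 1.8.0_192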
    

Offline Kafka Installation

  • Step 1: Download the installation package

    kafka_2.12-2.7.1.tgz

  • Step 2: Extract

    $ tar zxf kafka_2.12-2.7.1.tgz
    

    Use tree -L 2 kafka_2.12-2.7.1 to look at the extracted directory:

    $ tree -L 2 kafka_2.12-2.7.1
    kafka_2.12-2.7.1
    ├── bin											# scripts
    │   ├── connect-distributed.sh
    │   ├── connect-mirror-maker.sh
    │   ├── connect-standalone.sh
    │   ├── kafka-acls.sh
    │   ├── kafka-broker-api-versions.sh
    │   ├── kafka-configs.sh
    │   ├── kafka-console-consumer.sh
    │   ├── kafka-console-producer.sh
    │   ├── kafka-consumer-groups.sh
    │   ├── kafka-consumer-perf-test.sh
    │   ├── kafka-delegation-tokens.sh
    │   ├── kafka-delete-records.sh
    │   ├── kafka-dump-log.sh
    │   ├── kafka-features.sh
    │   ├── kafka-leader-election.sh
    │   ├── kafka-log-dirs.sh
    │   ├── kafka-mirror-maker.sh
    │   ├── kafka-preferred-replica-election.sh
    │   ├── kafka-producer-perf-test.sh
    │   ├── kafka-reassign-partitions.sh
    │   ├── kafka-replica-verification.sh
    │   ├── kafka-run-class.sh
    │   ├── kafka-server-start.sh
    │   ├── kafka-server-stop.sh
    │   ├── kafka-streams-application-reset.sh
    │   ├── kafka-topics.sh
    │   ├── kafka-verifiable-consumer.sh
    │   ├── kafka-verifiable-producer.sh
    │   ├── trogdor.sh
    │   ├── windows
    │   ├── zookeeper-security-migration.sh
    │   ├── zookeeper-server-start.sh
    │   ├── zookeeper-server-stop.sh
    │   └── zookeeper-shell.sh
    ├── config										# configuration files
    │   ├── connect-console-sink.properties
    │   ├── connect-console-source.properties
    │   ├── connect-distributed.properties
    │   ├── connect-file-sink.properties
    │   ├── connect-file-source.properties
    │   ├── connect-log4j.properties
    │   ├── connect-mirror-maker.properties
    │   ├── connect-standalone.properties
    │   ├── consumer.properties
    │   ├── log4j.properties
    │   ├── producer.properties
    │   ├── server.properties
    │   ├── tools-log4j.properties
    │   ├── trogdor.conf
    │   └── zookeeper.properties
    ├── libs										# jar files
    │   ├── activation-1.1.1.jar
    │   ├── aopalliance-repackaged-2.6.1.jar
    │   ├── argparse4j-0.7.0.jar
    │   ├── audience-annotations-0.5.0.jar
    │   ├── commons-cli-1.4.jar
    │   ├── commons-lang3-3.8.1.jar
    │   ├── connect-api-2.7.1.jar
    │   ├── connect-basic-auth-extension-2.7.1.jar
    │   ├── connect-file-2.7.1.jar
    │   ├── connect-json-2.7.1.jar
    │   ├── connect-mirror-2.7.1.jar
    │   ├── connect-mirror-client-2.7.1.jar
    │   ├── connect-runtime-2.7.1.jar
    │   ├── connect-transforms-2.7.1.jar
    │   ├── hk2-api-2.6.1.jar
    │   ├── hk2-locator-2.6.1.jar
    │   ├── hk2-utils-2.6.1.jar
    │   ├── jackson-annotations-2.10.5.jar
    │   ├── jackson-core-2.10.5.jar
    │   ├── jackson-databind-2.10.5.1.jar
    │   ├── jackson-dataformat-csv-2.10.5.jar
    │   ├── jackson-datatype-jdk8-2.10.5.jar
    │   ├── jackson-jaxrs-base-2.10.5.jar
    │   ├── jackson-jaxrs-json-provider-2.10.5.jar
    │   ├── jackson-module-jaxb-annotations-2.10.5.jar
    │   ├── jackson-module-paranamer-2.10.5.jar
    │   ├── jackson-module-scala_2.12-2.10.5.jar
    │   ├── jakarta.activation-api-1.2.1.jar
    │   ├── jakarta.annotation-api-1.3.5.jar
    │   ├── jakarta.inject-2.6.1.jar
    │   ├── jakarta.validation-api-2.0.2.jar
    │   ├── jakarta.ws.rs-api-2.1.6.jar
    │   ├── jakarta.xml.bind-api-2.3.2.jar
    │   ├── javassist-3.25.0-GA.jar
    │   ├── javassist-3.26.0-GA.jar
    │   ├── javax.servlet-api-3.1.0.jar
    │   ├── javax.ws.rs-api-2.1.1.jar
    │   ├── jaxb-api-2.3.0.jar
    │   ├── jersey-client-2.31.jar
    │   ├── jersey-common-2.31.jar
    │   ├── jersey-container-servlet-2.31.jar
    │   ├── jersey-container-servlet-core-2.31.jar
    │   ├── jersey-hk2-2.31.jar
    │   ├── jersey-media-jaxb-2.31.jar
    │   ├── jersey-server-2.31.jar
    │   ├── jetty-client-9.4.38.v20210224.jar
    │   ├── jetty-continuation-9.4.38.v20210224.jar
    │   ├── jetty-http-9.4.38.v20210224.jar
    │   ├── jetty-io-9.4.38.v20210224.jar
    │   ├── jetty-security-9.4.38.v20210224.jar
    │   ├── jetty-server-9.4.38.v20210224.jar
    │   ├── jetty-servlet-9.4.38.v20210224.jar
    │   ├── jetty-servlets-9.4.38.v20210224.jar
    │   ├── jetty-util-9.4.38.v20210224.jar
    │   ├── jetty-util-ajax-9.4.38.v20210224.jar
    │   ├── jopt-simple-5.0.4.jar
    │   ├── kafka_2.12-2.7.1.jar
    │   ├── kafka_2.12-2.7.1.jar.asc
    │   ├── kafka_2.12-2.7.1-javadoc.jar
    │   ├── kafka_2.12-2.7.1-javadoc.jar.asc
    │   ├── kafka_2.12-2.7.1-sources.jar
    │   ├── kafka_2.12-2.7.1-sources.jar.asc
    │   ├── kafka_2.12-2.7.1-test.jar
    │   ├── kafka_2.12-2.7.1-test.jar.asc
    │   ├── kafka_2.12-2.7.1-test-sources.jar
    │   ├── kafka_2.12-2.7.1-test-sources.jar.asc
    │   ├── kafka-clients-2.7.1.jar
    │   ├── kafka-log4j-appender-2.7.1.jar
    │   ├── kafka-raft-2.7.1.jar
    │   ├── kafka-streams-2.7.1.jar
    │   ├── kafka-streams-examples-2.7.1.jar
    │   ├── kafka-streams-scala_2.12-2.7.1.jar
    │   ├── kafka-streams-test-utils-2.7.1.jar
    │   ├── kafka-tools-2.7.1.jar
    │   ├── log4j-1.2.17.jar
    │   ├── lz4-java-1.7.1.jar
    │   ├── maven-artifact-3.6.3.jar
    │   ├── metrics-core-2.2.0.jar
    │   ├── netty-buffer-4.1.59.Final.jar
    │   ├── netty-codec-4.1.59.Final.jar
    │   ├── netty-common-4.1.59.Final.jar
    │   ├── netty-handler-4.1.59.Final.jar
    │   ├── netty-resolver-4.1.59.Final.jar
    │   ├── netty-transport-4.1.59.Final.jar
    │   ├── netty-transport-native-epoll-4.1.59.Final.jar
    │   ├── netty-transport-native-unix-common-4.1.59.Final.jar
    │   ├── osgi-resource-locator-1.0.3.jar
    │   ├── paranamer-2.8.jar
    │   ├── plexus-utils-3.2.1.jar
    │   ├── reflections-0.9.12.jar
    │   ├── rocksdbjni-5.18.4.jar
    │   ├── scala-collection-compat_2.12-2.2.0.jar
    │   ├── scala-java8-compat_2.12-0.9.1.jar
    │   ├── scala-library-2.12.12.jar
    │   ├── scala-logging_2.12-3.9.2.jar
    │   ├── scala-reflect-2.12.12.jar
    │   ├── slf4j-api-1.7.30.jar
    │   ├── slf4j-log4j12-1.7.30.jar
    │   ├── snappy-java-1.1.7.7.jar
    │   ├── zookeeper-3.5.9.jar
    │   ├── zookeeper-jute-3.5.9.jar
    │   └── zstd-jni-1.4.5-6.jar
    ├── LICENSE
    ├── licenses
    │   ├── argparse-MIT
    │   ├── CDDL+GPL-1.1
    │   ├── DWTFYWTPL
    │   ├── eclipse-distribution-license-1.0
    │   ├── eclipse-public-license-2.0
    │   ├── jopt-simple-MIT
    │   ├── paranamer-BSD-3-clause
    │   ├── slf4j-MIT
    │   └── zstd-jni-BSD-2-clause
    ├── NOTICE
    └── site-docs
        └── kafka_2.12-2.7.1-site-docs.tgz
    
    

    kafka_2.12-2.7.1-site-docs.tgz contains the official documentation.


5. Using Kafka

Start Zookeeper

$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start Kafka

$ bin/kafka-server-start.sh config/server.properties
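
For an unattended setup, both start scripts also accept the -daemon option, which detaches the process and sends its console output under the logs/ directory:

# Run Zookeeper and the Kafka broker in the background
$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
$ bin/kafka-server-start.sh -daemon config/server.properties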

Check whether the Zookeeper and Kafka services started successfully

$ jps -l

Output like the following indicates that both services are running

1248 org.apache.zookeeper.server.quorum.QuorumPeerMain
2647 sun.tools.jps.Jps
1822 kafka.Kafka

At this point, applications can connect to the Kafka environment we just set up.
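
Before wiring up an application, you can also smoke-test the broker end to end with the console tools shipped in bin/ (the topic name quickstart-events below is just an example):

# Create a test topic
$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

# Produce a few messages (Ctrl-C to exit)
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

# Read them back from the beginning (Ctrl-C to exit)
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092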


6. Kafka Monitoring

Hardware requirements

Resource    Minimum    Recommended
CPU         1 CPU      1 CPU
Mem         2 GB       4 GB
Disk        10 GB      10 GB

Software requirements

Software    Version                Description
JDK         Version 8 or higher    JDK
SQLite      —                      Used by default
MySQL       Version 5.7 or higher  Optional alternative to SQLite

Note: Kafka Eagle requires the environment above to run; choose either MySQL or SQLite.


Install Kafka Eagle

  • Step 1: Download the installation package

    kafka-eagle-bin-2.0.6.tar.gz

  • Step 2: Extract

    $ tar zxf kafka-eagle-bin-2.0.6.tar.gz
    # the extracted kafka-eagle-bin-2.0.6 directory contains kafka-eagle-web-2.0.6-bin.tar.gz
    $ tar zxf kafka-eagle-web-2.0.6-bin.tar.gz
    

    Directory layout after extraction

    $ tree -L 1 kafka-eagle-web-2.0.6
    kafka-eagle-web-2.0.6
    ├── bin
    ├── conf
    ├── db
    ├── font
    ├── kms
    └── logs
    
  • Step 3: Set PATH

    Create the file kafka-eagle_env.sh under /etc/profile.d/, add the environment variables below, then press Esc and type :x to save and exit.

    $ vi /etc/profile.d/kafka-eagle_env.sh
    

    Environment variables

    export KE_HOME=/data/kafka-eagle-web-2.0.6
    export PATH=$PATH:$KE_HOME/bin
    

    Reload the configuration

    $ source /etc/profile
    

    Verify

    $ echo $KE_HOME
    
    /data/kafka-eagle-web-2.0.6
    
  • Step 4: Edit the Kafka Eagle configuration file (a hedged sketch of the relevant settings follows)
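
    The settings live in conf/system-config.properties under KE_HOME. The snippet below is only a minimal sketch, assuming a single local Kafka cluster and the default SQLite storage; the key names follow the Kafka Eagle 2.0.x conventions, so verify them against the template shipped in conf/ before starting the service.

    # Zookeeper addresses of the monitored Kafka cluster
    kafka.eagle.zk.cluster.alias=cluster1
    cluster1.zk.list=localhost:2181

    # Web console port
    kafka.eagle.webui.port=8048

    # Metadata storage: SQLite by default (database file under the db/ directory)
    kafka.eagle.driver=org.sqlite.JDBC
    kafka.eagle.url=jdbc:sqlite:/data/kafka-eagle-web-2.0.6/db/ke.db
    kafka.eagle.username=root
    kafka.eagle.password=www.kafka-eagle.org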

  • Step 5: Start Kafka Eagle

    $ ${KE_HOME}/bin/ke.sh start
    

    When you see the following output, the service started successfully

    Welcome to
        __ __    ___     ____    __ __    ___            ______    ___    ______    __     ______
       / //_/   /   |   / __/   / //_/   /   |          / ____/   /   |  / ____/   / /    / ____/
      / ,<     / /| |  / /_    / ,<     / /| |         / __/     / /| | / / __    / /    / __/
     / /| |   / ___ | / __/   / /| |   / ___ |        / /___    / ___ |/ /_/ /   / /___ / /___
    /_/ |_|  /_/  |_|/_/     /_/ |_|  /_/  |_|       /_____/   /_/  |_|\____/   /_____//_____/
    
    
    Version 2.0.6 -- Copyright 2016-2021
    *******************************************************************
    * Kafka Eagle Service has started success.
    * Welcome, Now you can visit 'http://127.0.0.1:8048'
    * Account:admin ,Password:123456
    *******************************************************************
    * <Usage> ke.sh [start|status|stop|restart|stats] </Usage>
    * <Usage> https://www.kafka-eagle.org/ </Usage>
    *******************************************************************
    
    
  • Step 6: Open the Kafka Eagle console

    Visit the console at http://<your ip>:8048

    Default credentials: admin / 123456
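
    If the page does not load, a quick reachability check from the server itself (assuming curl is installed) can help rule out firewall issues:

    # Expect an HTTP status code such as 200, or a redirect to the login page
    $ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8048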

  • Common Kafka Eagle ke.sh commands

    Command          Function
    ke.sh start      Start the Kafka Eagle service
    ke.sh status     Show the service status
    ke.sh stop       Stop the service
    ke.sh restart    Restart the service
    ke.sh stats      Show runtime statistics

  • Kafka Eagle screenshots (images omitted)

    • Login page

    • Dashboard

    • Topic page

    • Kafka Eagle BScreen (big screen)

    • Kafka Performance



Summary

That covers the main content of this article: what Kafka is, how to set up a Kafka environment, and how to set up Kafka monitoring. Hopefully you have picked up all the key points. Later posts in this series will walk through setting up other must-have development environments, so stay tuned (*^▽^*)


Disclaimer

The content above is compiled from online sources; please bear with any errors you may find.


References

Apache Kafka
Kafka-Eagle


Appendix:

Configuration files

server.properties

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
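
The listing above is the stock single-node development configuration. For anything beyond localhost testing, the settings most likely to need changes are sketched below; the host names and paths are placeholders, not values used elsewhere in this article.

# Advertise an address that clients can actually reach (placeholder host name)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka-host.example.com:9092

# Move data off /tmp so it survives reboots (placeholder path)
log.dirs=/data/kafka-logs

# Point at the real Zookeeper ensemble
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# With three or more brokers, raise the internal topic replication factors
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2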

producer.properties

# see org.apache.kafka.clients.producer.ProducerConfig for more details

############################# Producer Basics #############################

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
compression.type=none

# name of the partitioner class for partitioning events; default partition spreads data randomly
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
#request.timeout.ms=

# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
#max.block.ms=

# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
#linger.ms=

# the maximum size of a request in bytes
#max.request.size=

# the default batch size in bytes when batching multiple records sent to a partition
#batch.size=

# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
#buffer.memory=
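
To exercise this producer configuration without writing any code, the console producer can load it via --producer.config (the topic name below is just an example):

# Send messages using the settings from config/producer.properties
$ bin/kafka-console-producer.sh --topic quickstart-events \
    --bootstrap-server localhost:9092 \
    --producer.config config/producer.properties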

consumer.properties

# see org.apache.kafka.clients.consumer.ConsumerConfig for more details

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# consumer group id
group.id=test-consumer-group

# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
#auto.offset.reset=
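
Similarly, the console consumer can load this file via --consumer.config, which is an easy way to see the effect of the group.id and auto.offset.reset settings above (the topic name is an example):

# Consume messages as part of test-consumer-group
$ bin/kafka-console-consumer.sh --topic quickstart-events \
    --bootstrap-server localhost:9092 \
    --consumer.config config/consumer.properties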