Kafka
Kafka has long been popular as a high-performance, scalable message-queue component. This post walks through the configuration typically used for Kafka in real-world deployments:
This setup uses three Kafka instances running version kafka_2.11-1.0.2.
First, install ZooKeeper; see the separate ZooKeeper installation guide.
With ZooKeeper in place, the next step is to set up the Kafka cluster.
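A minimal sketch of fetching and unpacking this release on each node (the Apache archive URL follows the standard pattern for this version; the /usr/service install path is inferred from the log.dirs value used below):
mkdir -p /usr/service
wget https://archive.apache.org/dist/kafka/1.0.2/kafka_2.11-1.0.2.tgz
tar -xzf kafka_2.11-1.0.2.tgz -C /usr/service
cd /usr/service/kafka_2.11-1.0.2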
The broker configuration lives in config/server.properties; the contents are as follows:
# Unique ID of this broker; it must differ on every broker (e.g. 0, 1 and 2 across the three nodes)
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on for client connections. If not configured,
# it defaults to the value returned by java.net.InetAddress.getCanonicalHostName().
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://<this-host-IP>:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# Number of threads handling network requests and responses
num.network.threads=3
# Number of threads processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
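# (104857600 bytes = 100 MB; larger requests are rejected to protect the broker from OOM)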
############################# Log Basics #############################
# Directory (or comma-separated list of directories) where the message log data is stored
log.dirs=/usr/service/kafka_2.11-1.0.2/log
# Default number of partitions per topic. With three brokers here, 3 is a sensible value.
# Note that the partition count may exceed the number of brokers, but the replication factor cannot.
num.partitions=3
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in a RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1, such as 3, is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
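# (For the three-broker cluster described here, setting both replication factors above to 3,
#  with transaction.state.log.min.isr=2, would follow that recommendation; the stock values
#  of 1 are left unchanged in this walkthrough.)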
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
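# (Per-topic overrides can also be applied at runtime via kafka-configs.sh; a sketch, with the
#  topic name "test" and the ZooKeeper host as illustrative values:
#  sh bin/kafka-configs.sh --zookeeper 10.201.83.207:2181 --alter --entity-type topics --entity-name test --add-config flush.messages=10000 )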
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.
# How long to retain log data, in hours; the default is 168 hours (7 days)
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# ZooKeeper connection string: a comma-separated list of host:port pairs
zookeeper.connect=10.201.83.207:2181,10.202.82.49:2181,10.202.43.113:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
# Delay before the initial rebalance of a new, empty consumer group; raised to 3 s here
group.initial.rebalance.delay.ms=3000
# Whether to automatically create a topic when one that does not exist is referenced
auto.create.topics.enable=true
# Default replication factor for automatically created topics
default.replication.factor=1
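The same server.properties is copied to all three brokers; only broker.id and listeners differ per node. A sketch of the per-node values, assuming the brokers run on the same hosts as the ZooKeeper nodes listed above:
# node 1: broker.id=0  listeners=PLAINTEXT://10.201.83.207:9092
# node 2: broker.id=1  listeners=PLAINTEXT://10.202.82.49:9092
# node 3: broker.id=2  listeners=PLAINTEXT://10.202.43.113:9092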
Start Kafka on each node:
sh bin/kafka-server-start.sh -daemon config/server.properties
# or, as a background job:
sh bin/kafka-server-start.sh config/server.properties &
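To verify the cluster, create a test topic and exercise it with the console producer and consumer (a sketch; the topic name "test" is illustrative, and the commands assume the brokers share hosts with the ZooKeeper nodes above):
# create a topic with 3 partitions and 3 replicas, then inspect its partition assignment
sh bin/kafka-topics.sh --create --zookeeper 10.201.83.207:2181 --replication-factor 3 --partitions 3 --topic test
sh bin/kafka-topics.sh --describe --zookeeper 10.201.83.207:2181 --topic test
# type messages in one terminal...
sh bin/kafka-console-producer.sh --broker-list 10.201.83.207:9092 --topic test
# ...and read them back from another
sh bin/kafka-console-consumer.sh --bootstrap-server 10.201.83.207:9092 --topic test --from-beginning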