kafka broker configs list

Background

While maintaining our Kafka cluster, I found that some of the existing settings were not very reasonable, largely because I did not understand the Kafka broker configuration options well. To deepen my understanding of this area and to maintain the cluster better, I reviewed the broker configuration options of kafka-1.0.1.

Configurable options

1. Three parameters must always be configured:

	broker.id
	log.dirs
	zookeeper.connect
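
A minimal `server.properties` sketch containing just these three settings (the broker id, data directory, and ZooKeeper addresses below are placeholder values, not recommendations):

    # minimal broker configuration -- illustrative values only
    broker.id=0                                   # unique id of this broker in the cluster
    log.dirs=/data/kafka-logs                     # comma-separated list of data directories
    zookeeper.connect=zk1:2181,zk2:2181,zk3:2181  # ZooKeeper connection string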

2. The broker configuration parameters and their default values are listed below:

The three values that appear in the Dynamic Update Mode column mean:

> read-only: the broker must be restarted for a new value to take effect.
> per-broker: may be updated dynamically for each broker.
> cluster-wide: may be updated dynamically as a cluster-wide default, or as a per-broker value (for testing).

**The broker configs are listed below:**

| Name | Description | Type | Default | Valid Values | Importance | Dynamic Update Mode |
|---|---|---|---|---|---|---|
| `zookeeper.connect` | The ZooKeeper connection string (a comma-separated list of host:port pairs). | String | – | – | high | read-only |
| `advertised.host.name` | DEPRECATED: use `advertised.listeners` instead. The hostname to publish to ZooKeeper for clients to use. | String | null | – | high | read-only |
| `advertised.listeners` | Listeners to publish to ZooKeeper for clients to use, if different from `listeners`. In IaaS environments this may need to differ from the interface the broker binds to. If not set, the value of `listeners` is used. | String | null | – | high | per-broker |
| `advertised.port` | DEPRECATED: used only when `advertised.listeners` or `listeners` are not set; use `advertised.listeners` instead. The port to publish to ZooKeeper for clients to use. | int | null | – | high | read-only |
| `auto.create.topics.enable` | Whether to enable automatic creation of topics on the broker. | boolean | true | – | high | read-only |
| `auto.leader.rebalance.enable` | Whether to enable automatic leader rebalancing (automatic election). If true, a background thread checks for and triggers leader rebalances. | boolean | true | – | high | read-only |
| `background.threads` | The number of threads to use for various background processing tasks. | int | 10 | [1,...] | high | cluster-wide |
| `broker.id` | The broker id for this server. If not set, a unique broker id is generated automatically. To avoid conflicts between ZooKeeper-generated ids and user-configured ids, generated ids start from `reserved.broker.max.id` + 1. | int | -1 | – | high | read-only |
| `compression.type` | The final compression type for a given topic. Accepts 'gzip', 'snappy' and 'lz4'; 'uncompressed' means no compression; 'producer' means the compression codec set by the producer is retained. | String | producer | – | high | cluster-wide |
| `delete.topic.enable` | Whether to allow deleting topics. If switched off, deleting a topic through the admin tool has no effect. | boolean | true | – | high | read-only |
| `host.name` | DEPRECATED: used only when `listeners` is not set; use `listeners` instead. | String | "" | – | high | read-only |
| `leader.imbalance.check.interval.seconds` | The frequency with which the controller triggers the partition rebalance check. | long | 300 | – | high | read-only |
| `leader.imbalance.per.broker.percentage` | The leader imbalance ratio allowed per broker; the controller triggers a leader rebalance when a broker exceeds it. Expressed as a percentage (10 = 10%) and computed as (partitions whose leader is not the preferred leader) / (total partitions in the AR list). | int | 10 | – | high | read-only |
| `listeners` | Listener list: a comma-separated list of URIs to listen on, with listener names. If a listener name is not a security protocol, `listener.security.protocol.map` must also be set. Specify the hostname as 0.0.0.0 to bind to all interfaces; leave the hostname empty to bind to the default interface. Examples: `PLAINTEXT://myhost:9092,SSL://:9091` or `CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093` | String | null | – | high | per-broker |
| `log.dir` | The directory in which the log data is kept (supplemental to `log.dirs`). | string | /tmp/kafka-logs | – | high | read-only |
| `log.dirs` | The directories in which the log data is kept. If not set, the value of `log.dir` is used. | String | null | – | high | read-only |
| `log.flush.interval.messages` | The maximum number of messages accumulated on a log partition before the messages are flushed to disk. | long | 9223372036854775807 | – | high | cluster-wide |
| `log.flush.interval.ms` | The maximum time (ms) a message is kept in memory before being flushed to disk. If not set, the value of `log.flush.scheduler.interval.ms` is used. | long | null | – | high | cluster-wide |
| `log.flush.offset.checkpoint.interval.ms` | The frequency with which the checkpoint file of the last flush is updated; it acts as the log recovery point. | int | 60000 | [0,...] | high | read-only |
| `log.flush.scheduler.interval.ms` | The frequency with which the log flusher checks whether any log needs to be flushed to disk. | long | 9223372036854775807 | – | high | read-only |
| `log.flush.start.offset.checkpoint.interval.ms` | The frequency with which the persistent record of the log start offset is updated. | int | 60000 | [0,...] | high | read-only |
| `log.retention.bytes` | The maximum size of the log before it is deleted. | long | -1 | – | high | cluster-wide |
| `log.retention.hours` | Log retention time in hours; lower priority than `log.retention.ms`. | int | 168 | – | high | read-only |
| `log.retention.minutes` | Log retention time in minutes; lower priority than `log.retention.ms`. If not set, the value of `log.retention.hours` is used. | int | null | – | high | read-only |
| `log.retention.ms` | Log retention time in milliseconds. If not set, the value of `log.retention.minutes` is used. | long | null | – | high | cluster-wide |
| `log.roll.hours` | The maximum time before a new log segment is rolled out, in hours; lower priority than `log.roll.ms`. | int | 168 | [1,...] | high | read-only |
| `log.roll.ms` | The maximum time before a new log segment is rolled out, in ms. If not set, the value of `log.roll.hours` is used. | long | null | – | high | cluster-wide |
| `log.roll.jitter.hours` | The maximum jitter to subtract from logRollTimeMillis, in hours; secondary to `log.roll.jitter.ms`. | int | 0 | [0,...] | high | read-only |
| `log.roll.jitter.ms` | The maximum jitter to subtract from logRollTimeMillis, in ms. | long | null | – | high | cluster-wide |
| `log.segment.bytes` | The maximum size of a single log segment file. | int | 1073741824 (1 GiB) | [14,...] | high | cluster-wide |
| `log.segment.delete.delay.ms` | The amount of time (ms) to wait before deleting a file from the filesystem. | long | 60000 | [0,...] | high | cluster-wide |
| `message.max.bytes` | The largest record batch size allowed by Kafka. If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency; in previous message format versions, uncompressed records were not grouped into batches, and in that case this limit only applies to a single record. Can be overridden per topic. | int | 1000012 | [0,...] | high | cluster-wide |
| `min.insync.replicas` | Works together with the producer `acks` setting (-1/all, 0, 1, ..., k): acks=0 means the producer does not wait for any acknowledgment (maximum throughput, least safe); acks=1 means the leader replica's write suffices; acks=-1/all means the write succeeds only once it is acknowledged by the ISR, and this config sets the minimum number of in-sync replicas required. If that minimum (k) exceeds the number of available replicas, the producer receives NotEnoughReplicas or NotEnoughReplicasAfterAppend. See the worked example after this table. | int | 1 | [1,...] | high | cluster-wide |
| `num.io.threads` | The number of threads the server uses for processing requests, which may include disk I/O. | int | 8 | [1,...] | high | cluster-wide |
| `num.network.threads` | The number of threads the server uses for receiving and sending network requests. | int | 3 | [1,...] | high | cluster-wide |
| `num.recovery.threads.per.data.dir` | The number of threads per data directory used for log recovery at startup and for flushing to disk at shutdown. | int | 1 | [1,...] | high | cluster-wide |
| `num.replica.alter.log.dirs.threads` | The number of threads that can move replicas between log directories, which may include disk I/O. | int | null | – | high | read-only |
| `num.replica.fetchers` | The number of fetcher threads used to replicate messages from a source broker. Increasing this value increases the degree of I/O parallelism in the follower broker. | int | 1 | – | high | cluster-wide |
| `offset.metadata.max.bytes` | The maximum size for a metadata entry associated with an offset commit. | int | 4096 | – | high | read-only |
| `offsets.commit.required.acks` | The acks required before an offset commit can be accepted. In general the default (-1) should not be overridden. | short | -1 | – | high | read-only |
| `offsets.commit.timeout.ms` | The maximum time to wait for an offset commit. The commit is delayed until all replicas of the offsets topic receive it or this timeout is reached; similar to the producer request timeout. | int | 5000 | [1,...] | high | read-only |
| `offsets.load.buffer.size` | Batch size for reading from the offsets segments when loading offsets into the cache. | int | 5242880 | [1,...] | high | read-only |
| `offsets.retention.check.interval.ms` | The frequency with which stale offsets are checked for. | long | 600000 | [1,...] | high | read-only |
| `offsets.retention.minutes` | Offsets older than this retention period are discarded. | int | 1440 | [1,...] | high | read-only |
| `offsets.topic.compression.codec` | Compression codec for the offsets topic; compression may be used to achieve atomic commits. | int | 0 | – | high | read-only |
| `offsets.topic.num.partitions` | The number of partitions for the offsets topic. | int | 50 | [1,...] | high | read-only |
| `offsets.topic.replication.factor` | The replication factor for the offsets topic. | short | 3 | [1,...] | high | read-only |
| `offsets.topic.segment.bytes` | The segment size of the offsets topic. Should be kept relatively small to achieve faster log compaction and cache loads. | int | 104857600 | [1,...] | high | read-only |
| `port` | DEPRECATED: the port to listen and accept connections on; use `listeners` instead. | int | 9092 | – | high | read-only |
| `queued.max.requests` | The number of queued requests allowed before the network threads are blocked. | int | 500 | [1,...] | high | read-only |
| `quota.consumer.default` | DEPRECATED: used only when dynamic default quotas are not configured in ZooKeeper. Any consumer distinguished by client id or consumer group is throttled if it fetches more bytes per second than this value. | long | 9223372036854775807 | [1,...] | high | read-only |
| `quota.producer.default` | DEPRECATED: used only when dynamic default quotas are not configured in ZooKeeper. Any producer distinguished by client id is throttled if it produces more bytes per second than this value. | long | 9223372036854775807 | [1,...] | high | read-only |
| `replica.fetch.min.bytes` | The minimum bytes expected for each fetch response; if not enough bytes are available, wait up to replicaMaxWaitTimeMs. | int | 1 | – | high | read-only |
| `replica.fetch.wait.max.ms` | The maximum wait time for each fetcher request issued by follower brokers. Should always be less than `replica.lag.time.max.ms` to prevent frequent ISR shrinking for low-throughput topics. | int | 500 | – | high | read-only |
| `replica.high.watermark.checkpoint.interval.ms` | The frequency with which the high watermark (HW) is saved to disk. | long | 5000 | – | high | read-only |
| `replica.lag.time.max.ms` | If a follower has not sent any fetch requests, or has not consumed up to the leader's latest log offset, for at least this time, the leader removes that follower from the ISR. | long | 10000 | – | high | read-only |
| `replica.socket.receive.buffer.bytes` | The socket receive buffer for network requests. | int | 65536 | – | high | read-only |
| `replica.socket.timeout.ms` | The socket timeout for network requests. Should be no less than `replica.fetch.wait.max.ms`. | int | 30000 | – | high | read-only |
| `request.timeout.ms` | Controls the maximum time a client waits for the response to a request. If the timeout elapses with no response, the client resends the request if necessary, or fails the request once retries are exhausted. | int | 30000 | – | high | read-only |
| `socket.receive.buffer.bytes` | The socket receive buffer; -1 means the OS default is used. | int | 102400 | – | high | read-only |
| `socket.request.max.bytes` | The maximum number of bytes in a socket request. | int | 104857600 | [1,...] | high | read-only |
| `socket.send.buffer.bytes` | The socket send buffer; -1 means the OS default is used. | int | 102400 | – | high | read-only |
| `transaction.max.timeout.ms` | The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, the broker returns an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which could stall consumers reading from topics included in the transaction. | int | 900000 | [1,...] | high | read-only |
| `transaction.state.log.load.buffer.size` | Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache. | int | 5242880 | [1,...] | high | read-only |
| `transaction.state.log.min.isr` | Overridden `min.insync.replicas` config for the transaction topic. | int | 2 | [1,...] | high | read-only |
| `transaction.state.log.num.partitions` | The number of partitions for the transaction topic (should not change after deployment). | int | 50 | [1,...] | high | read-only |
| `transaction.state.log.replication.factor` | The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. | short | 3 | [1,...] | high | read-only |
| `transaction.state.log.segment.bytes` | The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. | int | 104857600 | [1,...] | high | read-only |
| `transactional.id.expiration.ms` | The maximum time in ms that the transaction coordinator will wait without receiving any transaction status updates for a producer's transactional id before proactively expiring it. | int | 604800000 | [1,...] | high | read-only |
| `unclean.leader.election.enable` | Whether replicas not in the ISR set can be elected as leader as a last resort, even though doing so may result in data loss. | boolean | false | – | high | cluster-wide |
| `zookeeper.connection.timeout.ms` | The maximum time the client waits to establish a connection to ZooKeeper. If not set, the value of `zookeeper.session.timeout.ms` is used. | int | null | – | high | read-only |
| `zookeeper.max.in.flight.requests` | The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. | int | 10 | [1,...] | high | read-only |
| `zookeeper.session.timeout.ms` | ZooKeeper session timeout. | int | 6000 | – | high | read-only |
| `zookeeper.set.acl` | Set the client to use secure ACLs. | boolean | false | – | high | read-only |
| `broker.id.generation.enable` | Enable automatic broker id generation on the server. When enabled, the value configured for `reserved.broker.max.id` should be reviewed. | boolean | true | – | medium | read-only |
| `broker.rack` | Rack of the broker. Used in rack-aware replica assignment for fault tolerance. Examples: RACK1, us-east-1d. | string | null | – | medium | read-only |
| `connections.max.idle.ms` | Idle connection timeout: the server socket processor threads close connections that have been idle longer than this. | long | 600000 | – | medium | read-only |
| `controlled.shutdown.enable` | Enable controlled shutdown of the server. | boolean | true | – | medium | read-only |
| `controlled.shutdown.max.retries` | Controlled shutdown can fail for multiple reasons; this determines the number of retries when such a failure happens. | int | 3 | – | medium | read-only |
| `controlled.shutdown.retry.backoff.ms` | Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica lag, etc.). This determines the amount of time to wait before retrying. | long | 5000 | – | medium | read-only |
| `controller.socket.timeout.ms` | The socket timeout for controller-to-broker channels. | int | 30000 | – | medium | read-only |
| `default.replication.factor` | The default replication factor for automatically created topics. | int | 1 | – | medium | read-only |
| `delegation.token.expiry.time.ms` | The token validity time in milliseconds before the token needs to be renewed. Default: 1 day. | long | 86400000 | [1,...] | medium | read-only |
| `delegation.token.master.key` | Master/secret key used to generate and verify delegation tokens. The same key must be configured across all brokers. If the key is not set, or is set to an empty string, brokers disable delegation token support. | password | null | – | medium | read-only |
| `delegation.token.max.lifetime.ms` | The token has a maximum lifetime beyond which it cannot be renewed anymore. Default: 7 days. | long | 604800000 | [1,...] | medium | read-only |
| `delete.records.purgatory.purge.interval.requests` | The purge interval of the delete records request purgatory, expressed as a number of requests rather than a time interval. | int | 1 | – | medium | read-only |
| `fetch.purgatory.purge.interval.requests` | The purge interval of the fetch request purgatory (in number of requests). | int | 1000 | – | medium | read-only |
| `group.initial.rebalance.delay.ms` | The maximum time the group coordinator waits for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. | int | 3000 | – | medium | read-only |
| `group.max.session.timeout.ms` | The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages between heartbeats, at the cost of a longer time to detect failures. | int | 300000 | – | medium | read-only |
| `group.min.session.timeout.ms` | The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection, at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. | int | 6000 | – | medium | read-only |
| `inter.broker.listener.name` | Name of the listener used for communication between brokers. If unset, the listener name is derived from `security.inter.broker.protocol`. It is an error to set both this and `security.inter.broker.protocol`. | string | null | – | medium | read-only |
| `inter.broker.protocol.version` | Which version of the inter-broker protocol to use. Typically bumped after all brokers have been upgraded to a new version. Examples of valid values: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1; check ApiVersion for the full list. | string | 1.1-IV0 | – | medium | read-only |
| `log.cleaner.backoff.ms` | The amount of time the log cleaner sleeps when there are no logs to clean. | long | 15000 | [0,...] | medium | cluster-wide |
| `log.cleaner.dedupe.buffer.size` | The total memory used for log deduplication across all cleaner threads. | long | 134217728 | – | medium | cluster-wide |
| `log.cleaner.delete.retention.ms` | How long delete records are retained. | long | 86400000 | – | medium | cluster-wide |
| `log.cleaner.enable` | Enable the log cleaner process to run on the server. Should be enabled if using any topics with `cleanup.policy=compact`, including the internal offsets topic; if disabled, those topics will not be compacted and will continually grow in size. | boolean | true | – | medium | read-only |
| `log.cleaner.io.buffer.load.factor` | Log cleaner dedupe buffer load factor: the percentage full the dedupe buffer can become. A higher value allows more log to be cleaned at once, but leads to more hash collisions. | double | 0.9 | – | medium | cluster-wide |
| `log.cleaner.io.buffer.size` | The total memory used for log cleaner I/O buffers across all cleaner threads. | int | 524288 | [0,...] | medium | cluster-wide |
| `log.cleaner.io.max.bytes.per.second` | The log cleaner is throttled so that the sum of its read and write I/O is, on average, less than this value. | double | 1.7976931348623157E308 | – | medium | cluster-wide |
| `log.cleaner.min.cleanable.ratio` | The minimum ratio of dirty log to total log for a log to be eligible for cleaning. | double | 0.5 | – | medium | cluster-wide |
| `log.cleaner.min.compaction.lag.ms` | The minimum time a message remains uncompacted in the log. Only applicable to logs that are being compacted. | long | 0 | – | medium | cluster-wide |
| `log.cleaner.threads` | The number of background threads used for log cleaning. | int | 1 | [0,...] | medium | cluster-wide |
| `log.cleanup.policy` | The default cleanup policy for segments beyond the retention window; a comma-separated list of valid policies. Valid policies: "delete" and "compact". | list | delete | [compact, delete] | medium | cluster-wide |
| `log.index.interval.bytes` | The interval at which an entry is added to the offset index (in bytes, not number of messages). | int | 4096 | [0,...] | medium | cluster-wide |
| `log.index.size.max.bytes` | The maximum size in bytes of the offset index file. | int | 10485760 | [4,...] | medium | cluster-wide |
| `log.message.format.version` | The message format version the broker uses to append messages to the logs. Must be a valid ApiVersion, e.g. 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user certifies that all existing messages on disk are at or below that version. Setting this incorrectly will break consumers on older versions, since they will receive messages in a format they don't understand. | string | 1.1-IV0 | – | medium | read-only |
| `log.message.timestamp.difference.max.ms` | The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. With `log.message.timestamp.type=CreateTime`, a message is rejected if the difference exceeds this threshold; with `log.message.timestamp.type=LogAppendTime`, this config is ignored. Should be no greater than `log.retention.ms` to avoid unnecessarily frequent log rolling. | long | 9223372036854775807 | – | medium | cluster-wide |
| `log.message.timestamp.type` | Whether the timestamp in a message is the message create time or the log append time. Either CreateTime (the default) or LogAppendTime. | string | CreateTime | [CreateTime, LogAppendTime] | medium | cluster-wide |
| `log.preallocate` | Whether to preallocate the file when creating a new segment. Should be set to true when running Kafka on Windows. | boolean | false | – | medium | cluster-wide |
| `log.retention.check.interval.ms` | The frequency in milliseconds with which the log cleaner checks whether any log is eligible for deletion. | long | 300000 | [1,...] | medium | read-only |
| `max.connections.per.ip` | The maximum number of connections allowed from each IP address. | int | 2147483647 | [1,...] | medium | read-only |
| `max.connections.per.ip.overrides` | Per-IP or per-hostname overrides to the default maximum number of connections. | string | "" | – | medium | read-only |
| `max.incremental.fetch.session.cache.slots` | The maximum number of incremental fetch sessions to maintain. | int | 1000 | [0,...] | medium | read-only |
| `num.partitions` | The default number of log partitions per topic. | int | 1 | [1,...] | medium | read-only |
| `password.encoder.old.secret` | The old secret that was used for encoding dynamically configured passwords. Required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using `password.encoder.secret` when the broker starts up. | password | null | – | medium | read-only |
| `password.encoder.secret` | The secret used for encoding dynamically configured passwords for this broker. | password | null | – | medium | read-only |
| `principal.builder.class` | The fully qualified name of a class implementing the KafkaPrincipalBuilder interface, used to build the KafkaPrincipal object used during authorization. Also supports the deprecated PrincipalBuilder interface previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use: for SSL authentication, the principal name is the distinguished name from the client certificate if one is provided, otherwise (if client authentication is not required) ANONYMOUS; for SASL authentication, the principal is derived via the rules in `sasl.kerberos.principal.to.local.rules` if GSSAPI is in use, and from the SASL authentication ID for other mechanisms; for PLAINTEXT, the principal is ANONYMOUS. | class | null | – | medium | per-broker |
| `producer.purgatory.purge.interval.requests` | The purge interval of the producer request purgatory (in number of requests). | int | 1000 | – | medium | read-only |
| `queued.max.request.bytes` | The number of queued bytes allowed before no more requests are read. | long | -1 | – | medium | read-only |
| `replica.fetch.backoff.ms` | The amount of time to sleep when a fetch partition error occurs. | int | 1000 | [0,...] | medium | read-only |
| `replica.fetch.max.bytes` | The number of bytes of messages to attempt to fetch for each partition. Not an absolute maximum: if the first record batch in the first non-empty partition of the fetch is larger than this value, the batch is still returned to ensure progress. The maximum record batch size the broker accepts is defined via `message.max.bytes` (broker config) or `max.message.bytes` (topic config). | int | 1048576 | [0,...] | medium | read-only |
| `replica.fetch.response.max.bytes` | Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the batch is still returned to ensure progress; hence this is not an absolute maximum. The maximum record batch size the broker accepts is defined via `message.max.bytes` (broker config) or `max.message.bytes` (topic config). | int | 10485760 | [0,...] | medium | read-only |
| `reserved.broker.max.id` | The maximum number that can be used for a broker.id. | int | 1000 | [0,...] | medium | read-only |
| `sasl.enabled.mechanisms` | The list of SASL mechanisms enabled in the Kafka server. May contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. | list | GSSAPI | – | medium | per-broker |
| `sasl.jaas.config` | JAAS login context parameters for SASL connections, in the format used by JAAS configuration files: 'loginModuleClass controlFlag (optionName=optionValue)*;'. | password | null | – | medium | per-broker |
| `sasl.kerberos.kinit.cmd` | Kerberos kinit command path. | string | /usr/bin/kinit | – | medium | per-broker |
| `sasl.kerberos.min.time.before.relogin` | Login thread sleep time between refresh attempts. | long | 60000 | – | medium | per-broker |
| `sasl.kerberos.principal.to.local.rules` | A list of rules for mapping from principal names to short names (typically operating system usernames). Rules are evaluated in order, and the first rule that matches a principal name is used; later rules are ignored. By default, principal names of the form {username}/{hostname}@{REALM} map to {username}. For more details on the format, see security authorization and ACLs. Ignored if an extension of KafkaPrincipalBuilder is provided via `principal.builder.class`. | list | DEFAULT | – | medium | per-broker |
| `sasl.kerberos.service.name` | The Kerberos principal name that Kafka runs as. Can be defined either in Kafka's JAAS config or in Kafka's config. | string | null | – | medium | per-broker |
| `sasl.kerberos.ticket.renew.jitter` | Percentage of random jitter added to the renewal time. | double | 0.05 | – | medium | per-broker |
| `sasl.kerberos.ticket.renew.window.factor` | The login thread sleeps until the specified window factor of the time from the last refresh to the ticket's expiry has been reached, at which point it tries to renew the ticket. | double | 0.8 | – | medium | per-broker |
| `sasl.mechanism.inter.broker.protocol` | SASL mechanism used for inter-broker communication. Default is GSSAPI. | string | GSSAPI | – | medium | per-broker |
| `security.inter.broker.protocol` | Security protocol used to communicate between brokers. Valid values: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set both this and `inter.broker.listener.name`. | string | PLAINTEXT | – | medium | read-only |
| `ssl.cipher.suites` | A list of cipher suites: named combinations of authentication, encryption, MAC and key-exchange algorithms used to negotiate the security settings of a TLS/SSL connection. By default all available cipher suites are supported. | list | "" | – | medium | per-broker |
| `ssl.client.auth` | Configures the broker to request client authentication. Common settings: `required` (client authentication is required), `requested` (client authentication is optional; unlike `required`, the client can choose not to provide authentication information about itself), `none` (no client authentication needed; the default). | string | none | [required, requested, none] | medium | per-broker |
| `ssl.enabled.protocols` | The list of protocols enabled for SSL connections. | list | TLSv1.2,TLSv1.1,TLSv1 | – | medium | per-broker |
| `ssl.key.password` | The password of the private key in the key store file. Optional for clients. | password | null | – | medium | per-broker |
| `ssl.keymanager.algorithm` | The algorithm used by the key manager factory for SSL connections. Default is the key manager factory algorithm configured for the Java Virtual Machine. | string | SunX509 | – | medium | per-broker |
| `ssl.keystore.location` | The location of the key store file. Optional for clients; can be used for two-way client authentication. | string | null | – | medium | per-broker |
| `ssl.keystore.password` | The store password for the key store file. Optional for clients; only needed if `ssl.keystore.location` is configured. | password | null | – | medium | per-broker |
| `ssl.keystore.type` | The file format of the key store file. Optional for clients. | string | JKS | – | medium | per-broker |
| `ssl.protocol` | The SSL protocol used to generate the SSLContext. The default, TLS, is fine for most cases. Recent JVMs allow TLS, TLSv1.1 and TLSv1.2; SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their use is discouraged due to known security vulnerabilities. | string | TLS | – | medium | per-broker |
| `ssl.provider` | The name of the security provider used for SSL connections. Default is the default security provider of the JVM. | string | null | – | medium | per-broker |
| `ssl.trustmanager.algorithm` | The algorithm used by the trust manager factory for SSL connections. Default is the trust manager factory algorithm configured for the Java Virtual Machine. | string | PKIX | – | medium | per-broker |
| `ssl.truststore.location` | The location of the trust store file. | string | null | – | medium | per-broker |
| `ssl.truststore.password` | The password for the trust store file. If no password is set, the trust store is still accessible, but integrity checking is disabled. | password | null | – | medium | per-broker |
| `ssl.truststore.type` | The file format of the trust store file. | string | JKS | – | medium | per-broker |
| `alter.config.policy.class.name` | The alter-configs policy class used for validation. Must implement the org.apache.kafka.server.policy.AlterConfigPolicy interface. | class | null | – | low | read-only |
| `alter.log.dirs.replication.quota.window.num` | The number of samples to retain in memory for alter-log-dirs replication quotas. | int | 11 | [1,...] | low | read-only |
| `alter.log.dirs.replication.quota.window.size.seconds` | The time span of each sample for alter-log-dirs replication quotas. | int | 1 | [1,...] | low | read-only |
| `authorizer.class.name` | The authorizer class used for authorization. | string | "" | – | low | read-only |
| `create.topic.policy.class.name` | The create-topic policy class used for validation. Must implement the org.apache.kafka.server.policy.CreateTopicPolicy interface. | class | null | – | low | read-only |
| `delegation.token.expiry.check.interval.ms` | Scan interval for removing expired delegation tokens. | long | 3600000 | [1,...] | low | read-only |
| `listener.security.protocol.map` | Map between listener names and security protocols. Must be defined for the same security protocol to be usable in more than one port or IP; for example, internal and external traffic can be separated even if SSL is required for both. Concretely, one could define listeners named INTERNAL and EXTERNAL and set this property to INTERNAL:SSL,EXTERNAL:SSL. Keys and values are separated by colons, and map entries by commas. Each listener name should appear only once in the map. Different security (SSL and SASL) settings can be configured per listener by adding a normalised prefix (the lowercased listener name) to the config name; e.g., a different keystore for the INTERNAL listener would be set via `listener.name.internal.ssl.keystore.location`. If no listener-specific config is set, the generic config (e.g. `ssl.keystore.location`) is used as a fallback. | string | PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL | – | low | per-broker |
| `metric.reporters` | A list of classes to use as metrics reporters. Classes implementing the org.apache.kafka.common.metrics.MetricsReporter interface can be plugged in to be notified of new metric creation. JmxReporter is always included to register JMX statistics. | list | "" | – | low | cluster-wide |
| `metrics.num.samples` | The number of samples maintained to compute metrics. | int | 2 | [1,...] | low | read-only |
| `metrics.recording.level` | The highest recording level for metrics. | string | INFO | – | low | read-only |
| `metrics.sample.window.ms` | The window of time a metrics sample is computed over. | long | 30000 | [1,...] | low | read-only |
| `password.encoder.cipher.algorithm` | The cipher algorithm used for encoding dynamically configured passwords. | string | AES/CBC/PKCS5Padding | – | low | read-only |
| `password.encoder.iterations` | The iteration count used for encoding dynamically configured passwords. | int | 4096 | [1024,...] | low | read-only |
| `password.encoder.key.length` | The key length used for encoding dynamically configured passwords. | int | 128 | [8,...] | low | read-only |
| `password.encoder.keyfactory.algorithm` | The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available, otherwise PBKDF2WithHmacSHA1. | string | null | – | low | read-only |
| `quota.window.num` | The number of samples to retain in memory for client quotas. | int | 11 | [1,...] | low | read-only |
| `quota.window.size.seconds` | The time span of each sample for client quotas. | int | 1 | [1,...] | low | read-only |
| `replication.quota.window.num` | The number of samples to retain in memory for replication quotas. | int | 11 | [1,...] | low | read-only |
| `replication.quota.window.size.seconds` | The time span of each sample for replication quotas. | int | 1 | [1,...] | low | read-only |
| `ssl.endpoint.identification.algorithm` | The endpoint identification algorithm used to validate the server hostname against the server certificate. | string | null | – | low | per-broker |
| `ssl.secure.random.implementation` | The SecureRandom PRNG implementation to use for SSL cryptography operations. | string | null | – | low | per-broker |
| `transaction.abort.timed.out.transaction.cleanup.interval.ms` | The interval at which to roll back transactions that have timed out. | int | 60000 | [1,...] | low | read-only |
| `transaction.remove.expired.transaction.cleanup.interval.ms` | The interval at which to remove transactions that have expired because `transactional.id.expiration.ms` has passed. | int | 3600000 | [1,...] | low | read-only |
| `zookeeper.sync.time.ms` | How far a ZooKeeper follower can be behind a ZooKeeper leader. | int | 2000 | – | low | read-only |
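
As a hedged illustration of the `min.insync.replicas` row above (the values are examples, not recommendations): a common durability-oriented setup pairs a replication factor of 3 with `min.insync.replicas=2` and producer `acks=all`, so that a write is only acknowledged once at least two in-sync replicas have it:

    # server.properties (or a per-topic override) -- illustrative values only
    default.replication.factor=3   # three replicas per partition
    min.insync.replicas=2          # with acks=all, at least 2 ISR replicas must ack a write

    # producer.properties -- the broker only enforces min.insync.replicas for acks=all
    acks=all
    # If the ISR shrinks below 2 replicas, sends fail with NotEnoughReplicas /
    # NotEnoughReplicasAfterAppend rather than silently reducing durability.

This combination leaves one replica of headroom: the partition keeps accepting writes with one broker down, but never acknowledges a write held by only a single replica.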

3. The dynamic parameters in the table above are changed with the ./kafka-configs.sh tool (Kafka version >= 1.1).

For example, to change the number of log cleaner threads on the current broker 0:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2

To view the current dynamic config overrides of broker 0:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe

To delete a config override on the server with broker id 0 (reverting it to the default value):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads

To update a parameter on all brokers in the cluster at once (cluster-wide type, keeping the value consistent across all brokers):

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2

To list the dynamic cluster-wide defaults currently configured for the cluster:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe

If a parameter is defined at several levels at the same time, the following order of precedence applies (highest priority first); the sketch after this list walks through it:

Dynamic per-broker config stored in ZooKeeper
Dynamic cluster-wide default config stored in ZooKeeper
Static broker config from server.properties
Kafka default, see broker configs
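
A quick way to observe this precedence (a sketch; it assumes broker 0 has `log.cleaner.threads=1` in its static server.properties):

    # 1. Add a dynamic per-broker override; it takes precedence over the static value:
    bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers \
        --entity-name 0 --alter --add-config log.cleaner.threads=2

    # 2. --describe lists only dynamic overrides; broker 0 now runs 2 cleaner threads:
    bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers \
        --entity-name 0 --describe

    # 3. Deleting the override makes the broker fall back to the dynamic cluster-wide
    #    default if one exists, otherwise to the static server.properties value (1 here):
    bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers \
        --entity-name 0 --alter --delete-config log.cleaner.threads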