Kafka configuration files explained

broker.id=0          broker ID; must be >= 0 and unique per broker; registered under /brokers/ids in ZooKeeper
num.network.threads=9    maximum number of threads the broker uses to process network requests; usually no need to change
num.io.threads=24      number of threads the broker uses for disk I/O; should in principle be at least the number of disks
socket.send.buffer.bytes=102400    socket send buffer (the SO_SNDBUF tuning parameter)
listeners=PLAINTEXT://10.11.106.23:9092    socket the broker listens on
port=9092                    broker port (legacy setting, superseded by listeners)
host.name=10.11.106.23            broker host address (legacy); if set, the broker binds to this address, otherwise it binds to all interfaces
socket.receive.buffer.bytes=102400     socket receive buffer (the SO_RCVBUF tuning parameter)
socket.request.max.bytes=104857600     maximum size of a socket request, protecting the broker from OOM; message.max.bytes must be smaller than this value and can be overridden per topic at creation time
log.dirs=/var/kafka/logs/0          where Kafka stores its data; multiple comma-separated directories spread over different disks improve read/write throughput
num.partitions=3                default number of partitions per topic, overridden if specified at topic creation
transaction.state.log.min.isr=1       minimum ISR size for the internal transaction state log
log.retention.hours=72            how long messages are kept on disk; can be changed online
log.retention.check.interval.ms=300000    how often retention is checked; the default is five minutes
zookeeper.connect=10.12.176.3:2181,10.12.172.32:2181    ZooKeeper ensemble addresses
zookeeper.connection.timeout.ms=6000      ZooKeeper connection timeout
group.initial.rebalance.delay.ms=3000      consumers may start one at a time; wait this long for enough consumers to join the group before the first rebalance
log.cleaner.enable=true             enable log cleaning
delete.topic.enable=true            allow topic deletion
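
Since every broker registers its broker.id as an ephemeral node under /brokers/ids, a quick liveness check is to list that path with the zookeeper-shell.sh utility that ships with Kafka (a minimal sketch against this cluster's ZooKeeper):

sh bin/zookeeper-shell.sh 10.12.176.3:2181 ls /brokers/ids

A healthy cluster prints the IDs of all live brokers, e.g. [0, 1, 2].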

Increasing replication for the topic cpd-app-down:

1. Contents of addrep_cpd-app-down.json (two replicas per partition, rotated across brokers 0-6):
{"version":1, "partitions":[ 
{"topic":"cpd-app-down","partition":0,"replicas":[1,2]}, 
{"topic":"cpd-app-down","partition":1,"replicas":[2,3]}, 
{"topic":"cpd-app-down","partition":2,"replicas":[3,4]}, 
{"topic":"cpd-app-down","partition":3,"replicas":[4,5]}, 
{"topic":"cpd-app-down","partition":4,"replicas":[5,6]}, 
{"topic":"cpd-app-down","partition":5,"replicas":[6,0]}, 
{"topic":"cpd-app-down","partition":6,"replicas":[0,1]}, 
{"topic":"cpd-app-down","partition":7,"replicas":[1,2]}, 
{"topic":"cpd-app-down","partition":8,"replicas":[2,3]}, 
{"topic":"cpd-app-down","partition":9,"replicas":[3,4]}, 
{"topic":"cpd-app-down","partition":10,"replicas":[4,5]}, 
{"topic":"cpd-app-down","partition":11,"replicas":[5,6]},
 {"topic":"cpd-app-down","partition":12,"replicas":[6,0]},
 {"topic":"cpd-app-down","partition":13,"replicas":[0,1]}
 ] }

2. Run kafka-reassign-partitions.sh with the JSON file to apply the new assignment:

sh kafka-reassign-partitions.sh --zookeeper 10.6.72.38:2181,10.6.72.8:2181 --reassignment-json-file ../config/addrep_cpd-app-down.json --execute
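
After --execute, the same JSON file with --verify reports whether the reassignment of each partition has completed (standard usage of the same tool):

sh kafka-reassign-partitions.sh --zookeeper 10.6.72.38:2181,10.6.72.8:2181 --reassignment-json-file ../config/addrep_cpd-app-down.json --verify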

A second broker's server.properties (the 10.32.x.x cluster used in the migration example below):

broker.id=1
listeners=PLAINTEXT://10.32.104.37:9092
num.network.threads=3 
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/var/data/kafka
num.partitions=6
num.recovery.threads.per.data.dir=1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=true
zookeeper.connect=10.32.106.42:2181,10.32.114.34:2181,10.32.104.37:2181
zookeeper.connection.timeout.ms=6000
delete.topic.enable=true
transaction.state.log.min.isr=1
log.retention.hours=24
default.replication.factor=3
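
A quick way to exercise this cluster is to create and describe a topic (demo-topic is a hypothetical name; with the ZooKeeper-based tool the partition count and replication factor are passed explicitly, here matching the defaults above):

sh bin/kafka-topics.sh --zookeeper 10.32.106.42:2181 --create --topic demo-topic --partitions 6 --replication-factor 3
sh bin/kafka-topics.sh --zookeeper 10.32.106.42:2181 --describe --topic demo-topic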

1. Create topics-to-move.json listing the topics to migrate:
{"topics":
     [{"topic": "TestSing"}],
     "version":1
}

2. Generate the reassignment plan (in JSON) for moving the topic to the new brokers (3, 4 and 5):
sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --topics-to-move-json-file topics-to-move.json --broker-list "3,4,5" --generate

3. Save the proposed assignment printed by step 2 to config/testsing.json, then run the script with that file to start the migration:
sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --reassignment-json-file config/testsing.json --execute
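
For a large topic the data movement can saturate the network; kafka-reassign-partitions.sh also accepts a replication throttle (in bytes/sec) during --execute, and running --verify afterwards both checks completion and clears the throttle. The throttle value below is illustrative:

sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --reassignment-json-file config/testsing.json --execute --throttle 50000000
sh bin/kafka-reassign-partitions.sh --zookeeper 10.32.106.42:2181 --reassignment-json-file config/testsing.json --verify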

Rotating Kafka's own logs by file size (log4j.properties): comment out the DailyRollingFileAppender lines and switch kafkaAppender to RollingFileAppender with a 500 MB cap and five backups:
#log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=500MB
log4j.appender.kafkaAppender.MaxBackupIndex=5

The complete log4j.properties after the change:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=500MB
log4j.appender.kafkaAppender.MaxBackupIndex=5
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.stateChangeAppender.MaxFileSize=500MB
log4j.appender.stateChangeAppender.MaxBackupIndex=5
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=org.apache.log4j.RollingFileAppender
log4j.appender.requestAppender.MaxFileSize=500MB
log4j.appender.requestAppender.MaxBackupIndex=5
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.cleanerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.cleanerAppender.MaxFileSize=500MB
log4j.appender.cleanerAppender.MaxBackupIndex=5
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.MaxFileSize=500MB
log4j.appender.controllerAppender.MaxBackupIndex=5
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.authorizerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.authorizerAppender.MaxFileSize=500MB
log4j.appender.authorizerAppender.MaxBackupIndex=5
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka=INFO, kafkaAppender
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.controller=INFO, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.logger.state.change.logger=INFO, stateChangeAppender
log4j.additivity.state.change.logger=false
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
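
Once the broker is restarted with this file, each appender rolls its log at 500 MB and keeps at most five numbered backups (RollingFileAppender renames old files to server.log.1 ... server.log.5). This can be confirmed by listing the log directory; the path below assumes kafka.logs.dir points at the logs/ directory under the Kafka installation:

ls -lh logs/server.log*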

Reposted from: https://www.cnblogs.com/lwhctv/p/10749921.html
