Configuration
Go to the directory that contains the installation package and run tar -zxvf <package name>
to extract it.
Then run mv <extracted directory> kafka
to rename the extracted directory.
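For example, assuming the downloaded package is kafka_2.11-2.0.0.tgz under /root/software (substitute your actual file and directory names):
cd /root/software
tar -zxvf kafka_2.11-2.0.0.tgz
mv kafka_2.11-2.0.0 kafka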
Configure the environment variables: run vi /etc/profile
and append the following lines at the end of the file:
export KAFKA_HOME=/root/software/kafka
export PATH=$PATH:$KAFKA_HOME/bin
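After saving, reload the profile and verify the variable:
source /etc/profile
echo $KAFKA_HOME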
Create a directory under the Kafka installation directory to hold the log and data files:
mkdir /root/software/kafka/logs
Go into the config directory under the kafka directory and run
vi server.properties
to modify the following parameters.
Main settings (configure before starting)
Each machine in the cluster must have a different broker id:
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
Fill in your own IP address here:
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.150.100:9092
Point this to the directory you created for the log and data files:
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/root/software/kafka/logs
Use your own IP; if there are multiple machines, separate their IPs with ",":
############################# Zookeeper #############################
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=192.168.150.100:2181
Append the last line below, delete.topic.enable=true, so that topics can actually be deleted. (A producer can push messages to a topic that does not exist yet; Kafka auto-creates the topic by default.)
############################# Group Coordinator Settings #############################
# However, in production environments the default value of 3 seconds is more suitable as this will help
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
Other settings (configure as needed)
############################# Log Retention Policy #############################
Number of hours to retain data:
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=1680
Size-based retention limit: old segments are pruned once the log exceeds this size (the commented-out example value is 1 GB):
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
Maximum size of a single log segment file (1 GB by default); when this size is reached, a new log segment is created:
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
Interval at which log segments are checked to see whether they can be deleted according to the retention policies:
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
Startup
Make sure ZooKeeper is already installed on the machine before starting.
1. Start ZooKeeper first
Go to the ZooKeeper installation directory, enter the bin directory, and run
./zkServer.sh start
to start ZooKeeper.
After it starts successfully, run ./zkServer.sh status
to check the ZooKeeper status.
(Pseudo-distributed setup: if the status is not Mode: standalone,
check /root/software/zkpr/conf/zoo.cfg and confirm that only this machine's IP is configured; comment out the other machines' IPs.)
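A minimal zoo.cfg for this pseudo-distributed setup might look like the sketch below (dataDir is an assumed path and server.2 is a hypothetical second machine shown commented out; adjust to your install):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/software/zkpr/data
clientPort=2181
server.1=192.168.150.100:2888:3888
# server.2=192.168.150.101:2888:3888   (entries for other machines commented out)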
2. Start Kafka
1. Run
kafka-server-start.sh /root/software/kafka/config/server.properties
to start Kafka normally in the foreground.
2. To start in the background, run
nohup kafka-server-start.sh /root/software/kafka/config/server.properties >kafka.log 2>&1 &
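To confirm the broker came up, check the Java processes and the startup log (kafka.log is the file written by the nohup command above):
jps
tail -f kafka.log
jps should list a process named Kafka.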
Usage
1. Connect to ZooKeeper and list the message queues (topics)
kafka-topics.sh --zookeeper 192.168.150.100:2181 --list
2. Create a queue
Create a topic named mydemo with 1 partition and a replication factor of 1 (each replica is a full copy of the partition's data).
(The replication factor depends on how many machines are in the cluster; it cannot exceed the number of brokers.)
kafka-topics.sh --zookeeper 192.168.150.100:2181 --create --topic mydemo --partitions 1 --replication-factor 1
3. Delete a queue
kafka-topics.sh --zookeeper 192.168.150.100:2181 --delete --topic mydemo
4. Describe a queue
kafka-topics.sh --zookeeper 192.168.150.100:2181 --describe --topic mydemo
5. Produce messages to mydemo
kafka-console-producer.sh --topic mydemo --broker-list 192.168.150.100:9092
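The producer shows a > prompt; each line typed is sent as one message, for example (the sample messages are made up):
>hello kafka
>this is message two
Exit with Ctrl+C.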
6. Consume messages (read messages from mydemo)
kafka-console-consumer.sh --topic mydemo --bootstrap-server 192.168.150.100:9092 --from-beginning
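With --from-beginning the consumer replays the topic from the start, so the sample messages typed above would be printed back in order:
hello kafka
this is message two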
7. Check the number of messages in a queue
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 192.168.150.100:9092 --topic mydemo --time -1 --offsets 1
Open the logs directory.
Inside logs there is a subdirectory for each topic partition; it contains the following types of files:
00000000000000000000.index      // offset index: records position info so data can be located in the .log file; a new segment is rolled once the current one reaches 1 GB
00000000000000000000.log        // the actual message data, looked up by position via the index
00000000000000000000.timeindex
leader-epoch-checkpoint
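To peek inside a segment, the bundled DumpLogSegments tool can be used; the path below assumes the log.dirs setting above and partition 0 of mydemo:
kafka-run-class.sh kafka.tools.DumpLogSegments --files /root/software/kafka/logs/mydemo-0/00000000000000000000.log --print-data-log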