Kafka Getting Started Tutorial
1 Install ZooKeeper
Since Kafka depends on ZooKeeper, we need to install ZooKeeper first.
1.1 Choose a directory and extract the ZooKeeper package there, e.g. /usr/yyj/zookeeper
tar -zxvf zookeeper-3.4.5.tar.gz
1.2 Edit the ZooKeeper configuration file
First enter ZooKeeper's conf directory:
cd /usr/yyj/zookeeper/zookeeper-3.4.5/conf
Then copy the sample config file:
cp zoo_sample.cfg zoo.cfg
Next, open zoo.cfg for editing:
vim zoo.cfg
and change the dataDir parameter:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/yyj/zookeeper/zookeeper-3.4.5/data
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
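The settings above run ZooKeeper as a single standalone node, which is all this tutorial needs. For reference, a clustered (ensemble) deployment would add member entries to every node's zoo.cfg; the hostnames below are placeholders, a sketch only:

```properties
# Hypothetical 3-node ensemble: appended to zoo.cfg on every node
# (zk1/zk2/zk3 are placeholder hostnames; 2888 = follower port, 3888 = leader-election port)
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```

Each node would additionally need a file named myid under its dataDir containing just that node's number (1, 2, or 3).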
1.3 Create the data directory
mkdir /usr/yyj/zookeeper/zookeeper-3.4.5/data
1.4 Set environment variables
vim /etc/profile
export ZK_HOME=/usr/yyj/zookeeper/zookeeper-3.4.5
export PATH=.:$ZK_HOME/bin:$PATH
1.5 After editing, apply the changes:
source /etc/profile
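To confirm the change took effect, you can check that ZooKeeper's bin directory actually ended up on PATH; a minimal check using the paths from this tutorial:

```shell
# Reproduce the two export lines from /etc/profile, then verify
# that ZooKeeper's bin directory is really on PATH
export ZK_HOME=/usr/yyj/zookeeper/zookeeper-3.4.5
export PATH=.:$ZK_HOME/bin:$PATH
case ":$PATH:" in
  *":$ZK_HOME/bin:"*) echo "ZK_HOME/bin is on PATH" ;;
  *) echo "ZK_HOME/bin is missing from PATH" ;;
esac
```

If the check prints that the directory is missing, re-check the export lines in /etc/profile and run source again.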
2 Run ZooKeeper
zkServer.sh start
zkServer.sh status
For this single-node installation, status should report Mode: standalone if ZooKeeper started correctly.
3 Install Kafka
Download Kafka and extract it to a directory of your choice:
tar -zxvf kafka_2.11-2.1.0.tgz
3.1 Edit the configuration file
[root@along kafka_2.11-2.1.0]# grep "^[^#]" config/server.properties
broker.id=0
listeners=PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
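The grep "^[^#]" filter in the command above keeps only lines whose first character is not #, i.e. the effective (uncommented, non-blank) settings. A self-contained demonstration on a throwaway file:

```shell
# Create a tiny sample properties file, then apply the same filter
cat > /tmp/sample.properties <<'EOF'
# a comment line
broker.id=0

log.dirs=/tmp/kafka-logs
EOF
grep "^[^#]" /tmp/sample.properties
```

This prints only broker.id=0 and log.dirs=/tmp/kafka-logs; the comment line and the blank line are filtered out.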
Note: adjust this file to your own needs. The key settings:
broker.id: a unique ID for this broker
listeners=PLAINTEXT://localhost:9092: the address and port the Kafka service listens on
log.dirs: the directory where log segments (message data) are stored
zookeeper.connect: the ZooKeeper service to connect to
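To illustrate the keys just described: running a second broker on the same host would only require a copy of server.properties with a unique ID, a different port, and a separate log directory. The file name and values below are illustrative, not part of the stock distribution:

```properties
# Hypothetical config/server-1.properties for a second broker on the same host
# broker.id must be unique; listeners and log.dirs must not clash with broker 0
broker.id=1
listeners=PLAINTEXT://localhost:9093
log.dirs=/tmp/kafka-logs-1
zookeeper.connect=localhost:2181
```

It would be started the same way as the first broker: bin/kafka-server-start.sh config/server-1.properties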
3.2 Configure environment variables
[root@along ~]# vim /etc/profile.d/kafka.sh
export KAFKA_HOME="/usr/yyj/kafka/kafka_2.11-2.1.0"
export PATH="${KAFKA_HOME}/bin:$PATH"
[root@along ~]# source /etc/profile.d/kafka.sh
4 Start Kafka and test it
4.1 Start a broker
bin/kafka-server-start.sh config/server.properties
4.2 Create a topic
A topic is a label on messages: in business terms, once we know a message's topic, we know who produced it, who should consume it, and which part of the business it serves.
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --partitions 1 --replication-factor 1 --create --topic productscanlog
In the command above, --create means create, --zookeeper specifies the ZooKeeper service, --topic gives the topic name, --replication-factor is the number of brokers this topic will be replicated to, and --partitions is the number of partitions in this topic (readers who don't yet understand these two parameters can set them aside for now). Once it succeeds, we can list all of our topics with:
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --list
4.3 Start a consumer
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic productscanlog --from-beginning
4.4 Start a producer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic productscanlog
4.5 Type messages on the producer side and press Enter to send them; they will appear on the consumer side.