Cluster plan
hadoop102 hadoop103 hadoop104
zk zk zk
kafka kafka kafka
Download the release tarball
http://kafka.apache.org/downloads.html
Kafka cluster deployment
1) Extract the tarball
tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
2) Rename the extracted directory (optional)
mv kafka_2.11-0.11.0.0/ kafka
3) In /opt/module/kafka, create a logs directory
mkdir logs
4) Edit the configuration file
cd config/
vi server.properties
Change the following:
# Globally unique broker ID; must be different on every node
broker.id=0
# Enable topic deletion
delete.topic.enable=true
# Directory where Kafka stores its log segments (topic data)
log.dirs=/opt/module/kafka/logs
# ZooKeeper connection string (hostnames must resolve from every broker)
zookeeper.connect=master:2181,server01:2181,server02:2181
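The edits above can also be applied non-interactively. A minimal sketch, assuming GNU sed and the hostnames/paths used in this guide (substitute your own); the `configure_broker` helper name is made up for illustration:

```shell
# configure_broker FILE ID -- set broker.id, log.dirs, zookeeper.connect,
# and delete.topic.enable in a server.properties copy, without opening vi.
configure_broker() {
  local conf=$1 id=$2
  sed -i "s/^broker.id=.*/broker.id=$id/" "$conf"
  # log.dirs may ship commented out; \? makes the leading # optional
  sed -i 's|^#\?log.dirs=.*|log.dirs=/opt/module/kafka/logs|' "$conf"
  sed -i 's/^zookeeper.connect=.*/zookeeper.connect=master:2181,server01:2181,server02:2181/' "$conf"
  # delete.topic.enable is absent by default in 0.11; append it if missing
  grep -q '^delete.topic.enable=' "$conf" || echo 'delete.topic.enable=true' >> "$conf"
}
# Usage on this node: configure_broker config/server.properties 0
```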
5) Configure environment variables
sudo vi /etc/profile
Append:
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile
6) Distribute the installation
scp -r /opt/module/kafka server01:/opt/module/
scp -r /opt/module/kafka server02:/opt/module/
7) Update the configuration on the other hosts
In /opt/module/kafka/config/server.properties, set broker.id=1 and broker.id=2 respectively.
broker.id must be unique across the cluster.
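If passwordless ssh is set up and the install lives at the same path everywhere, the per-host edits can be scripted. A sketch, assuming the hostnames used above (substitute your own); it prints the commands so you can review them before piping the output to sh:

```shell
# broker_id_cmds -- print one ssh command per remote host that assigns
# that host a unique broker.id via sed.
broker_id_cmds() {
  local id=1 host
  for host in server01 server02; do
    echo "ssh $host \"sed -i 's/^broker.id=.*/broker.id=$id/' /opt/module/kafka/config/server.properties\""
    id=$((id + 1))
  done
}
broker_id_cmds
# To execute after reviewing: broker_id_cmds | sh
```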
8) Start the cluster
Run this on every node. The -daemon flag keeps the broker running after the shell exits (with a bare trailing &, the broker may die when you log out):
bin/kafka-server-start.sh -daemon config/server.properties
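To confirm the broker came up on a node, check jps output: a running broker's JVM shows up as "Kafka". A small sketch (the `kafka_up` helper name is made up); it takes the listing as an argument so the check itself is easy to reuse:

```shell
# kafka_up LISTING -- succeed if a jps listing contains a Kafka broker line.
kafka_up() {
  echo "$1" | grep -q ' Kafka$'
}
# Usage on a node: kafka_up "$(jps)" && echo "broker running"
```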
9) Stop the cluster
First patch kafka-server-stop.sh, otherwise you may get "No kafka server to stop".
Change
PIDS=$(ps ax | grep -i 'kafka.Kafka' | grep java | grep -v grep | awk '{print $1}')
to
PIDS=$(jps -lm | grep -i 'kafka.Kafka' | awk '{print $1}')
Run on every node (the script takes no arguments):
bin/kafka-server-stop.sh
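Once all three brokers are up, a quick smoke test is to create a fully replicated topic; in this 0.11 release, kafka-topics.sh still talks to ZooKeeper directly. A sketch, assuming the hostnames from the config above and a made-up topic name (the kafka-topics.sh calls are commented out since they need the live cluster):

```shell
# ZooKeeper connection string matching zookeeper.connect in server.properties
ZK="master:2181,server01:2181,server02:2181"
echo "$ZK"
# Create a topic with one replica on each of the 3 brokers:
# bin/kafka-topics.sh --create --zookeeper "$ZK" --replication-factor 3 --partitions 3 --topic smoke-test
# Confirm it exists:
# bin/kafka-topics.sh --list --zookeeper "$ZK"
```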