Installation and Configuration
1. Install the Kafka cluster on node1, node2, and node3.
2. Upload kafka_2.11-2.0.0.zip to /usr/local on node1 and extract it: unzip kafka_2.11-2.0.0.zip. If extraction fails, unzip is probably not installed; install it with yum install -y unzip zip and extract again. Once extracted successfully, rename the directory: mv kafka_2.11-2.0.0 kafka211200
3. Configure the environment variables: vi /etc/profile
export KAFKA_HOME=/usr/local/kafka211200
export PATH=$PATH:${JAVA_HOME}/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin
source /etc/profile
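After running source /etc/profile it is worth confirming that the variable actually took effect. A minimal check, assuming the install path from step 2 (the exports are shown inline here purely for illustration):

```shell
# On the node these two exports live in /etc/profile; shown inline for illustration
export KAFKA_HOME=/usr/local/kafka211200
export PATH=$PATH:$KAFKA_HOME/bin
# The variable should echo back, and Kafka's bin directory should be on PATH
echo "$KAFKA_HOME"
case ":$PATH:" in
  *":$KAFKA_HOME/bin:"*) echo "PATH OK" ;;
  *) echo "PATH is missing $KAFKA_HOME/bin" ;;
esac
```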
4. Edit the Kafka configuration file: cd kafka211200/config
vi server.properties
Change the following settings:
broker.id=0
log.dirs=/opt/kafka-logs
zookeeper.connect=192.168.76.200:2181,192.168.76.201:2181,192.168.76.202:2181
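The three edits can also be applied non-interactively with sed, which helps when repeating the setup across machines. A sketch, run here against a small stand-in copy of the file (on a real node the target would be $KAFKA_HOME/config/server.properties):

```shell
CONF=server.properties.sample
# Stand-in for the shipped defaults (the real file has many more keys)
printf 'broker.id=0\nlog.dirs=/tmp/kafka-logs\nzookeeper.connect=localhost:2181\n' > "$CONF"
# Rewrite the three settings in place
sed -i 's|^broker.id=.*|broker.id=0|' "$CONF"
sed -i 's|^log.dirs=.*|log.dirs=/opt/kafka-logs|' "$CONF"
sed -i 's|^zookeeper.connect=.*|zookeeper.connect=192.168.76.200:2181,192.168.76.201:2181,192.168.76.202:2181|' "$CONF"
# Show the result
grep -E '^(broker\.id|log\.dirs|zookeeper\.connect)=' "$CONF"
```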
5. Sync the Kafka directory to node2 and node3:
scp -r kafka211200 node2:/usr/local
scp -r kafka211200 node3:/usr/local
Configure the environment variables on node2 and node3 as in step 3.
6. Edit config/server.properties on node2 and node3:
node2:broker.id=1
node3:broker.id=2
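Since broker.id is just each node's position in the host list, the per-node edits can be generated instead of typed by hand. The sketch below only prints the sed command for each node; on a live cluster each printed line would be wrapped in ssh root@<host> "...":

```shell
hosts=(node1 node2 node3)
# The index in the array doubles as the broker.id for that node
for idx in "${!hosts[@]}"; do
  echo "${hosts[$idx]}: sed -i 's|^broker.id=.*|broker.id=$idx|' /usr/local/kafka211200/config/server.properties"
done
```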
Make sure the ZooKeeper cluster is running
cd /usr/local/zk336/bin
Write a ZooKeeper management script: vi zk-all.sh
#!/bin/bash
# ZooKeeper cluster node addresses
hosts=(node1 node2 node3)
# Sub-command to pass to zkServer.sh
cmd=$1
# Run the given command on every node over SSH
function zookeeper()
{
    for i in "${hosts[@]}"
    do
        ssh root@$i "source /etc/profile; zkServer.sh $cmd; echo ZooKeeper node is $i, ran the $cmd command." &
        sleep 1
    done
}
# Check that the ZooKeeper command argument is valid
case "$1" in
    start|stop|status|start-foreground|upgrade|restart|print-cmd)
        zookeeper
        ;;
    *)
        echo "Usage: $0 {start|start-foreground|stop|restart|status|upgrade|print-cmd}"
        exit 1
esac
Make it executable: chmod +x zk-all.sh
Start it: ./zk-all.sh start
Starting the Kafka cluster
cd /usr/local/kafka211200/bin
Write a Kafka management script: vi kafka-all.sh
#!/bin/bash
# Kafka broker node addresses
hosts=(node1 node2 node3)
# Print a timestamped message for the requested operation
mill=$(date "+%N")
tdate=$(date "+%Y-%m-%d %H:%M:%S,${mill:0:3}")
echo "[$tdate] INFO [Kafka Cluster] begins to execute the $1 operation."
# Start every broker over SSH
function start()
{
    for i in "${hosts[@]}"
    do
        smill=$(date "+%N")
        stdate=$(date "+%Y-%m-%d %H:%M:%S,${smill:0:3}")
        # -daemon detaches the broker so it survives the SSH session ending;
        # \$KAFKA_HOME is escaped so it expands on the remote node after
        # /etc/profile has been sourced there
        ssh root@$i "source /etc/profile; echo [$stdate] INFO [Kafka Broker $i] begins to execute the startup operation.; kafka-server-start.sh -daemon \$KAFKA_HOME/config/server.properties" &
        sleep 1
    done
}
# Stop every broker over SSH
function stop()
{
    for i in "${hosts[@]}"
    do
        smill=$(date "+%N")
        stdate=$(date "+%Y-%m-%d %H:%M:%S,${smill:0:3}")
        ssh root@$i "source /etc/profile; echo [$stdate] INFO [Kafka Broker $i] begins to execute the shutdown operation.; kafka-server-stop.sh >/dev/null" &
        sleep 1
    done
}
# Report each broker's status via jps
function status()
{
    for i in "${hosts[@]}"
    do
        smill=$(date "+%N")
        stdate=$(date "+%Y-%m-%d %H:%M:%S,${smill:0:3}")
        ssh root@$i "source /etc/profile; echo [$stdate] INFO [Kafka Broker $i] status message is:; jps | grep Kafka" &
        sleep 1
    done
}
# Check that the Kafka command argument is valid
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
esac
Make it executable: chmod +x kafka-all.sh
You can also copy the script to kafka211200/bin on node2 and node3, so the Kafka cluster can be started from any node:
./kafka-all.sh start
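Once all three brokers are up, a quick smoke test is to create a replicated topic and describe it (the topic name test is arbitrary; with Kafka 2.0, kafka-topics.sh still connects through ZooKeeper rather than the brokers). The commands are echoed rather than executed here so the sketch has no side effects; paste them on any cluster node:

```shell
ZK=192.168.76.200:2181,192.168.76.201:2181,192.168.76.202:2181
# Create a topic spread over all three brokers, then list and describe it
echo "kafka-topics.sh --zookeeper $ZK --create --topic test --partitions 3 --replication-factor 3"
echo "kafka-topics.sh --zookeeper $ZK --list"
echo "kafka-topics.sh --zookeeper $ZK --describe --topic test"
```

In the --describe output, each partition should show a leader on a different broker and an ISR listing all three broker ids.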
This completes the Kafka cluster installation.