Preliminaries:
Edit /etc/hosts on each of the three machines to add the hostname mappings:
10.1.2.17 master
10.1.2.22 slave01
10.1.7.12 slave02
Configure passwordless SSH login:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.1.2.17
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.1.2.22
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.1.7.12
Install the JDK.
Install ZooKeeper
Unpack the tarball:
tar -zxvf zookeeper-3.4.10.tar.gz -C /data
In the /data/zookeeper-3.4.10/conf directory, rename the sample config:
mv zoo_sample.cfg zoo.cfg
Edit zoo.cfg:
dataDir=/data/zookeeper-3.4.10/data
clientPort=12181
server.0=master:2888:3888
server.1=slave01:2888:3888
server.2=slave02:2888:3888
Create the data directory:
mkdir /data/zookeeper-3.4.10/data
In the /data/zookeeper-3.4.10/data directory, create a myid file:
echo 0 > myid
Copy the zookeeper directory to the other two nodes:
scp -r /data/zookeeper-3.4.10 slave01:/data
scp -r /data/zookeeper-3.4.10 slave02:/data
On the other two nodes, change the value in the myid file:
slave01:
echo 1 > myid
slave02:
echo 2 > myid
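The `server.N` index in zoo.cfg must match the number in each node's myid file, or the ensemble will not form. As a local sketch of that mapping (the /tmp/zk-staging path is made up purely for illustration), the per-host myid files could be staged like this before copying them out:

```shell
#!/bin/bash
# Stage one myid file per host; server.N in zoo.cfg must equal that host's myid.
# /tmp/zk-staging is a hypothetical local path, not part of the real layout.
declare -A ids=([master]=0 [slave01]=1 [slave02]=2)
for host in "${!ids[@]}"; do
    mkdir -p "/tmp/zk-staging/$host"
    echo "${ids[$host]}" > "/tmp/zk-staging/$host/myid"
done
```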
Write a ZooKeeper control script:
vi zk.sh
#!/bin/bash
for host in master slave01 slave02
do
echo "$host $1......"
ssh $host "source /etc/profile;/data/zookeeper-3.4.10/bin/zkServer.sh $1" ## argument: start/stop/status
done
./zk.sh start   # start the ZooKeeper cluster
./zk.sh stop    # stop the ZooKeeper cluster
./zk.sh status  # check the status of the ZooKeeper cluster
Install Kafka
Unpack the Kafka tarball:
tar -zxvf kafka_2.10-0.10.0.1.tgz -C /data/
Edit the config file server.properties:
/data/kafka_2.10-0.10.0.1/config/server.properties
broker.id=0
listeners=PLAINTEXT://master:19092
log.dirs=/data/kafka-logs
zookeeper.connect=master:12181,slave01:12181,slave02:12181
Copy the kafka directory to the other two nodes:
scp -r /data/kafka_2.10-0.10.0.1 slave01:/data
scp -r /data/kafka_2.10-0.10.0.1 slave02:/data
Modify server.properties on the other two nodes:
slave01:
broker.id=1
listeners=PLAINTEXT://slave01:19092
slave02:
broker.id=2
listeners=PLAINTEXT://slave02:19092
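Editing broker.id and listeners by hand on each node is easy to get wrong. One hedged alternative is to derive each node's file from the master copy with sed before distributing it; the /tmp/kafka-conf paths below are stand-ins for illustration, the real file lives under /data/kafka_2.10-0.10.0.1/config:

```shell
#!/bin/bash
# Rewrite broker.id and listeners per host, starting from the master config.
# /tmp/kafka-conf is an illustrative staging path, not the real install path.
mkdir -p /tmp/kafka-conf
src=/tmp/kafka-conf/server.properties
printf 'broker.id=0\nlisteners=PLAINTEXT://master:19092\n' > "$src"
i=1
for host in slave01 slave02; do
    sed -e "s/^broker.id=.*/broker.id=$i/" \
        -e "s#^listeners=.*#listeners=PLAINTEXT://$host:19092#" \
        "$src" > "/tmp/kafka-conf/server.properties.$host"
    i=$((i+1))
done
```

Each generated file then only needs to be scp'ed to the matching host.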
Write a Kafka control script:
vi kafka.sh
#!/bin/bash
source /etc/profile
if [ "$1" = "start" ]
then
echo "------------ensure that zookeeper is started------------------"
cat /data/kafka_2.10-0.10.0.1/config/nodes | while read host
do
{
echo "$host is starting..."
ssh $host "source /etc/profile;nohup /data/kafka_2.10-0.10.0.1/bin/kafka-server-start.sh /data/kafka_2.10-0.10.0.1/config/server.properties >/dev/null 2>&1 &"
}&
done
wait
elif [ "$1" = "stop" ]
then
cat /data/kafka_2.10-0.10.0.1/config/nodes | while read host
do
{
echo "$host is stopping..."
ssh $host "source /etc/profile;/data/kafka_2.10-0.10.0.1/bin/kafka-server-stop.sh >/dev/null 2>&1"
}&
done
wait
elif [ "$1" = "producer" ]
then
kafka-console-producer.sh --broker-list master:19092,slave01:19092,slave02:19092 --topic $2 # start a console producer; $2 is the topic
elif [ "$1" = "consumer" ]
then
kafka-console-consumer.sh --zookeeper localhost:12181 --from-beginning --topic $2 # consume a topic from the beginning
elif [ "$1" = "delete" ]
then
kafka-topics.sh --delete --zookeeper localhost:12181 --topic $2 # delete a topic (requires delete.topic.enable=true)
elif [ "$1" = "create" ]
then
kafka-topics.sh --create --zookeeper localhost:12181 --replication-factor 1 --partitions 3 --topic $2 # create a topic
elif [ "$1" = "list" ]
then
kafka-topics.sh --list --zookeeper localhost:12181 # list all topics
else
echo "parameter invalid"
fi
The script requires a nodes file listing the broker hosts:
vi /data/kafka_2.10-0.10.0.1/config/nodes
10.1.2.17
10.1.2.22
10.1.7.12
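Since kafka.sh ssh'es into every line of the nodes file, a quick sanity check that each line looks like a dotted-quad IPv4 address can catch typos before they surface as hangs. A minimal sketch, using a throwaway /tmp/nodes copy for illustration:

```shell
#!/bin/bash
# Count lines in a nodes file that do not look like dotted-quad IPv4 addresses.
# /tmp/nodes is a throwaway copy written here for illustration.
printf '10.1.2.17\n10.1.2.22\n10.1.7.12\n' > /tmp/nodes
bad=0
while read -r host; do
    [[ "$host" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]] || bad=$((bad+1))
done < /tmp/nodes
echo "invalid lines: $bad"
```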
Start ZooKeeper before starting Kafka.
./kafka.sh start                 # start the Kafka cluster
./kafka.sh stop                  # stop the Kafka cluster
./kafka.sh list                  # list all topics
./kafka.sh create [topic_name]   # create a topic
./kafka.sh producer [topic_name] # produce messages to a topic
./kafka.sh consumer [topic_name] # consume messages from a topic