Deploying a Kafka + ZooKeeper Cluster

I. Preparing the environment

This Kafka + ZooKeeper cluster is built on three VMs: 192.168.190.151, 192.168.190.152, and 192.168.190.153.

VM specifications:

IP               Hostname  CPU & RAM  System disk  Notes
192.168.190.151  hadoop1   2C / 4 GB  50 GB        Recommended minimum
192.168.190.152  hadoop2   2C / 4 GB  50 GB        Recommended minimum
192.168.190.153  hadoop3   2C / 4 GB  50 GB        Recommended minimum
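Later steps (Kafka's listeners, zookeeper.connect, and the cluster control script) refer to the nodes by hostname. Assuming the hostnames in the table map to the IPs shown, append the following to /etc/hosts on all three nodes:

```
192.168.190.151 hadoop1
192.168.190.152 hadoop2
192.168.190.153 hadoop3
```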

II. Detailed steps (ZooKeeper cluster)

Using VM 192.168.190.151 as the example:

1. Check that the firewall is stopped

systemctl status firewalld

If it is still running:

① Stop it now: systemctl stop firewalld

② Keep it off after reboot: systemctl disable firewalld

2. Create the base directories (on all three VMs)

mkdir /opt/{java,zookeeper,kafka}

3. Download the Java, ZooKeeper, and Kafka tarballs

ZooKeeper download page:

Apache ZooKeeper

Extract ZooKeeper:

tar -xvf apache-zookeeper-3.7.2-bin.tar.gz

Rename the extracted directory (note: the directory, not the .tar.gz file):

mv apache-zookeeper-3.7.2-bin zk1  ## on VM 192.168.190.151

mv apache-zookeeper-3.7.2-bin zk2  ## on VM 192.168.190.152

mv apache-zookeeper-3.7.2-bin zk3  ## on VM 192.168.190.153

Download Java:

wget https://devops-public.oss-cn-shanghai.aliyuncs.com/moonsin/flink/jdk-8u192-linux-x64.tar.gz

Extract Java:

tar -xvf jdk-8u192-linux-x64.tar.gz

Copy the JDK to the other two nodes:

scp -r /opt/java/jdk1.8.0_192 root@192.168.190.152:/opt/java/

scp -r /opt/java/jdk1.8.0_192 root@192.168.190.153:/opt/java/

Download Kafka:

wget https://archive.apache.org/dist/kafka/2.6.0/kafka_2.13-2.6.0.tgz

Extract Kafka:

tar -xvf kafka_2.13-2.6.0.tgz

Rename the extracted directory:

mv kafka_2.13-2.6.0 kafka

Copy Kafka to the other two nodes:

scp -r /opt/kafka/kafka root@192.168.190.152:/opt/kafka

scp -r /opt/kafka/kafka root@192.168.190.153:/opt/kafka

4. Configure environment variables (append the following to /etc/profile):

export JAVA_HOME=/opt/java/jdk1.8.0_192

export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

After editing, run: source /etc/profile

scp -r /etc/profile root@192.168.190.152:/etc  ## after the scp, run on 192.168.190.152:

source /etc/profile

scp -r /etc/profile root@192.168.190.153:/etc  ## after the scp, run on 192.168.190.153:

source /etc/profile
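A quick sanity check after sourcing /etc/profile saves debugging later. The helper below (hypothetical, not part of the original steps) verifies that JAVA_HOME is set and on the PATH:

```shell
#!/bin/bash
# hypothetical helper: confirm the /etc/profile settings took effect.
# Run on each node after `source /etc/profile`.
check_env() {
  [ -n "$JAVA_HOME" ] || { echo "JAVA_HOME is not set"; return 1; }
  [ -d "$JAVA_HOME" ] || echo "warning: $JAVA_HOME does not exist on this node"
  case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "PATH includes \$JAVA_HOME/bin" ;;
    *) echo "PATH is missing \$JAVA_HOME/bin"; return 1 ;;
  esac
}
```

Running `check_env && java -version` on each node should report JDK 1.8.0_192.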

5. Create the data and log directories

mkdir /opt/zookeeper/zk1/{data,logs}  ## on VM 192.168.190.151

mkdir /opt/zookeeper/zk2/{data,logs}  ## on VM 192.168.190.152

mkdir /opt/zookeeper/zk3/{data,logs}  ## on VM 192.168.190.153

6. Edit the ZooKeeper configuration file

cd /opt/zookeeper/zk1/conf  ## enter the conf directory (adjust the path per node)

cp zoo_sample.cfg zoo.cfg  ## copy the sample config to zoo.cfg

① On VM 192.168.190.151:

vi /opt/zookeeper/zk1/conf/zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/opt/zookeeper/zk1/data

dataLogDir=/opt/zookeeper/zk1/logs

clientPort=2181

server.01=192.168.190.151:2341:2351

server.02=192.168.190.152:2342:2352

server.03=192.168.190.153:2343:2353

(In each server.NN line, the first port carries quorum traffic and the second is used for leader election.)

② On VM 192.168.190.152, change only the lines below (tickTime, initLimit, syncLimit, and the server.* entries stay the same as on node 1):

vi /opt/zookeeper/zk2/conf/zoo.cfg

dataDir=/opt/zookeeper/zk2/data

dataLogDir=/opt/zookeeper/zk2/logs

clientPort=2182

③ On VM 192.168.190.153, change only the lines below (the other settings stay the same as on node 1):

vi /opt/zookeeper/zk3/conf/zoo.cfg

dataDir=/opt/zookeeper/zk3/data

dataLogDir=/opt/zookeeper/zk3/logs

clientPort=2183
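The three zoo.cfg files differ only in dataDir, dataLogDir, and clientPort, so they can be generated from the node number. The helper below is a sketch (gen_zoo_cfg is a hypothetical name; paths and ports follow the steps above):

```shell
#!/bin/bash
# generate zoo.cfg for node n (1..3) under $base/zk$n/conf/
gen_zoo_cfg() {
  local n=$1 base=$2   # n: node number, base: e.g. /opt/zookeeper
  cat > "$base/zk$n/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$base/zk$n/data
dataLogDir=$base/zk$n/logs
clientPort=$((2180 + n))
server.01=192.168.190.151:2341:2351
server.02=192.168.190.152:2342:2352
server.03=192.168.190.153:2343:2353
EOF
}
```

For example, `gen_zoo_cfg 2 /opt/zookeeper` reproduces node 2's config, including the server.* lines that every node must carry.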

7. In the data directory, create a file named myid holding the node's ID (it must match the NN in the corresponding server.NN line; use plain ASCII quotes)

① On VM 192.168.190.151:

cd /opt/zookeeper/zk1/data

echo "01" > myid

② On VM 192.168.190.152:

cd /opt/zookeeper/zk2/data

echo "02" > myid

③ On VM 192.168.190.153:

cd /opt/zookeeper/zk3/data

echo "03" > myid
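A mismatched myid is a common reason a node refuses to join the quorum. The hypothetical check below confirms that a node's myid matches a server.NN entry in its zoo.cfg:

```shell
#!/bin/bash
# check_myid <zoo.cfg path> <data dir>: verify myid matches a server.NN line
check_myid() {
  local cfg=$1 datadir=$2
  local id
  id=$(cat "$datadir/myid") || return 1
  if grep -q "^server\.$id=" "$cfg"; then
    echo "myid $id OK"
  else
    echo "myid $id has no matching server.$id entry in $cfg"
    return 1
  fi
}
```

For example: `check_myid /opt/zookeeper/zk1/conf/zoo.cfg /opt/zookeeper/zk1/data`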

8. Operating ZooKeeper (when starting the cluster, start all three nodes, since a quorum of two is required)

cd /opt/zookeeper/zk1/bin

./zkServer.sh start  ## start ZooKeeper

./zkServer.sh stop  ## stop ZooKeeper

./zkServer.sh status  ## show ZooKeeper status (leader/follower)

TODO: switch to systemd-managed (systemctl) operation later.
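Beyond zkServer.sh status, each node's role can be checked remotely with the "srvr" four-letter command (whitelisted by default in ZooKeeper 3.5+; requires nc). A small sketch:

```shell
#!/bin/bash
# extract the "Mode:" line (leader/follower) from srvr output
parse_mode() { grep '^Mode:'; }

# zk_mode <host> <clientPort>: query a node's role over its client port
zk_mode() {
  echo srvr | nc "$1" "$2" | parse_mode
}

# note the per-node client ports configured above:
# zk_mode 192.168.190.151 2181
# zk_mode 192.168.190.152 2182
# zk_mode 192.168.190.153 2183
```

Exactly one node should report "Mode: leader"; the other two report "Mode: follower".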

III. Detailed steps (Kafka cluster)

1. Create Kafka's message/log directory (on all three VMs)

mkdir /opt/kafka/kafka/kafka-logs

2. Edit the configuration file

① On VM 192.168.190.151:

vi /opt/kafka/kafka/config/server.properties

broker.id=0

listeners=PLAINTEXT://hadoop1:9092

log.dirs=/opt/kafka/kafka/kafka-logs

zookeeper.connect=hadoop1:2181,hadoop2:2182,hadoop3:2183

(log.dirs must point at the directory created in step 1, and the zookeeper.connect ports must match each node's clientPort from the ZooKeeper configs: 2181, 2182, 2183.)

② On VM 192.168.190.152:

vi /opt/kafka/kafka/config/server.properties

broker.id=1

listeners=PLAINTEXT://hadoop2:9092

log.dirs=/opt/kafka/kafka/kafka-logs

zookeeper.connect=hadoop1:2181,hadoop2:2182,hadoop3:2183

③ On VM 192.168.190.153:

vi /opt/kafka/kafka/config/server.properties

broker.id=2

listeners=PLAINTEXT://hadoop3:9092

log.dirs=/opt/kafka/kafka/kafka-logs

zookeeper.connect=hadoop1:2181,hadoop2:2182,hadoop3:2183
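The three server.properties files differ only in broker.id and listeners. A hypothetical helper that patches a stock config for node N (hostnames hadoop1..hadoop3 and the paths above are assumptions; GNU sed assumed):

```shell
#!/bin/bash
# patch_kafka_cfg <n> <file>: adapt server.properties for node n (1..3)
patch_kafka_cfg() {
  local n=$1 f=$2
  sed -i \
    -e "s|^broker.id=.*|broker.id=$((n - 1))|" \
    -e "s|^#\{0,1\}listeners=.*|listeners=PLAINTEXT://hadoop$n:9092|" \
    -e "s|^log.dirs=.*|log.dirs=/opt/kafka/kafka/kafka-logs|" \
    -e "s|^zookeeper.connect=.*|zookeeper.connect=hadoop1:2181,hadoop2:2182,hadoop3:2183|" \
    "$f"
}
```

For example, `patch_kafka_cfg 2 /opt/kafka/kafka/config/server.properties` on hadoop2 produces the settings shown in ②. The `#\{0,1\}` handles the stock file, where the listeners line ships commented out.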

3. Write a cluster control script for Kafka

cd /opt/kafka/kafka/bin

vi kafka-cluster.sh

#!/bin/bash

case $1 in
start)
        for i in hadoop1 hadoop2 hadoop3
        do
                echo "---------------- starting kafka on $i ----------------"
                ssh $i "source /etc/profile; /opt/kafka/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/kafka/config/server.properties"
        done
        ;;
stop)
        for i in hadoop1 hadoop2 hadoop3
        do
                echo "---------------- stopping kafka on $i ----------------"
                ssh $i "/opt/kafka/kafka/bin/kafka-server-stop.sh"
        done
        ;;
*)
        echo "usage: $0 {start|stop}"
        ;;
esac

chmod +x ./kafka-cluster.sh

Start Kafka: ./kafka-cluster.sh start

Stop Kafka: ./kafka-cluster.sh stop

TODO: switch to systemd-managed (systemctl) operation later.
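Once all three brokers are up, a topic with replication-factor 3 exercises the whole cluster. The sketch below writes a small smoke-test script (paths and the hadoop1 bootstrap host follow the assumptions above); run it on any node:

```shell
#!/bin/bash
# write a smoke-test script; execute it on any node after the cluster is up
cat > /tmp/kafka-smoke.sh <<'EOF'
#!/bin/bash
BIN=/opt/kafka/kafka/bin
BOOT=hadoop1:9092
# replication-factor 3 forces all three brokers to hold a replica
$BIN/kafka-topics.sh --create --topic smoke --partitions 3 \
    --replication-factor 3 --bootstrap-server $BOOT
$BIN/kafka-topics.sh --describe --topic smoke --bootstrap-server $BOOT
EOF
chmod +x /tmp/kafka-smoke.sh
```

The describe output should list three partitions, each with an in-sync replica set of three broker IDs.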
