ZooKeeper + Kafka Binary Cluster Installation

1. Environment Preparation

Hostname       IP Address        ZooKeeper Version        Kafka Version       Install Location
zookeeper01    192.168.124.251   apache-zookeeper-3.6.3   kafka_2.13-3.0.2    /opt/zookeeper, /opt/kafka
zookeeper02    192.168.124.252   apache-zookeeper-3.6.3   kafka_2.13-3.0.2    /opt/zookeeper, /opt/kafka
zookeeper03    192.168.124.253   apache-zookeeper-3.6.3   kafka_2.13-3.0.2    /opt/zookeeper, /opt/kafka

1.1. Download Links

ZooKeeper: https://archive.apache.org/dist/zookeeper/
Kafka: https://downloads.apache.org/kafka/

2. Install the JDK

Install on all three machines.

Download: Java Downloads | Oracle

tar -zxf jdk-8u201-linux-x64.tar.gz -C /usr/local/

# Append the following variables to the end of the file
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_201
export JRE_HOME=/usr/local/jdk1.8.0_201/jre
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
# Reload the profile
source /etc/profile
# Check the version
java -version

java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
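When scripting the install across all three machines, the version string can be checked programmatically. A minimal sketch that parses `java -version` output (the sample line from above is hardcoded here so the snippet runs even without a JDK on the PATH):

```shell
# `java -version` prints to stderr; on a real node you would capture it
# with: out=$(java -version 2>&1). The sample output is hardcoded here.
out='java version "1.8.0_201"'

# The version is the quoted field on the "version" line.
ver=$(printf '%s\n' "$out" | awk -F '"' '/version/ {print $2}')
echo "$ver"   # → 1.8.0_201
```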

3. System Initialization

Run on all three machines.

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Set the hostname (run the matching command on its own machine)
hostnamectl set-hostname zookeeper01   # on zookeeper01
hostnamectl set-hostname zookeeper02   # on zookeeper02
hostnamectl set-hostname zookeeper03   # on zookeeper03

# Add hosts entries
cat >> /etc/hosts <<EOF
192.168.124.251 zookeeper01
192.168.124.252 zookeeper02
192.168.124.253 zookeeper03
EOF
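Since the same three name/IP pairs recur throughout this guide, it can help to keep the inventory in one place and generate the hosts entries from it. A sketch (the inventory format is my own; the IPs are from the table above):

```shell
# Node inventory: one "name:ip" pair per line, matching the table above.
nodes='zookeeper01:192.168.124.251
zookeeper02:192.168.124.252
zookeeper03:192.168.124.253'

# Emit /etc/hosts lines; on a real node, append with >> /etc/hosts.
printf '%s\n' "$nodes" | while IFS=: read -r name ip; do
    printf '%s %s\n' "$ip" "$name"
done
```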

4. Deploy the ZooKeeper Cluster

4.1. Create Directories

Create on zookeeper01, zookeeper02, and zookeeper03.

mkdir -p /opt/zookeeper/{data,logs}

4.2. Extract the Archive

Note that apache-zookeeper-3.6.3.tar.gz is the source-only archive; the runnable binary release is apache-zookeeper-3.6.3-bin.tar.gz.

tar -zxf apache-zookeeper-3.6.3-bin.tar.gz -C /opt/zookeeper/

4.3. Rename the Configuration File

cd /opt/zookeeper/apache-zookeeper-3.6.3-bin/conf/
mv zoo_sample.cfg zoo.cfg

4.4. Edit the Configuration File

Identical on zookeeper01, zookeeper02, and zookeeper03.

cat > zoo.cfg <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
clientPort=2181
server.1=192.168.124.251:2888:3888
server.2=192.168.124.252:2888:3888
server.3=192.168.124.253:2888:3888
EOF
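Note that initLimit and syncLimit are measured in ticks, not milliseconds, so the effective timeouts depend on tickTime. With the values above they work out as follows:

```shell
tickTime=2000    # ms per tick
initLimit=10     # ticks a follower may take to connect to and sync with the leader
syncLimit=5      # ticks a follower may lag behind the leader before being dropped

echo "init timeout: $(( initLimit * tickTime )) ms"   # → init timeout: 20000 ms
echo "sync timeout: $(( syncLimit * tickTime )) ms"   # → sync timeout: 10000 ms
```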

4.5. Create the myid File

On zookeeper01:

cd /opt/zookeeper/data/
echo 1 > myid

On zookeeper02:

cd /opt/zookeeper/data/
echo 2 > myid

On zookeeper03:

cd /opt/zookeeper/data/
echo 3 > myid
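The three per-node steps above can be sketched as one parameterized snippet, run on each machine with that node's own id. DATA_DIR defaults to a scratch directory here so the snippet can be tried safely; on the real nodes set DATA_DIR=/opt/zookeeper/data:

```shell
# The id written to dataDir/myid must match this host's server.N line
# in zoo.cfg (1 on zookeeper01, 2 on zookeeper02, 3 on zookeeper03).
MYID=${MYID:-1}
DATA_DIR=${DATA_DIR:-$(mktemp -d)}   # use /opt/zookeeper/data on real nodes

mkdir -p "$DATA_DIR"
echo "$MYID" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"   # → 1
```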

4.6. Start and Stop

cd /opt/zookeeper/apache-zookeeper-3.6.3-bin/bin/
./zkServer.sh start    # start
./zkServer.sh stop     # stop
./zkServer.sh status   # check status

# Check the status on all three nodes; on success, one node is the leader:
Mode: follower    # follower
Mode: leader      # leader
Mode: follower    # follower
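A healthy ensemble has exactly one leader. A sketch that counts leaders in collected status output (the sample text is hardcoded here; on real nodes it would come from running `./zkServer.sh status` on each host):

```shell
# Status lines as collected from the three nodes (sample data).
status='Mode: follower
Mode: leader
Mode: follower'

# grep -c counts matching lines; exactly one leader is expected.
leaders=$(printf '%s\n' "$status" | grep -c '^Mode: leader')
if [ "$leaders" -eq 1 ]; then
    echo "ensemble healthy: 1 leader"
else
    echo "unexpected leader count: $leaders" >&2
fi
```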

5. Deploy the Kafka Cluster

5.1. Create the Log Directory

mkdir -p /opt/kafka/logs

5.2. Extract the Archive

tar -zxf kafka_2.13-3.0.2.tgz -C /opt/kafka/

5.3. Edit the Configuration Files

On zookeeper01:

cd /opt/kafka/kafka_2.13-3.0.2/config
cp server.properties server.properties.bak

cat >server.properties <<EOF
broker.id=1
listeners=PLAINTEXT://192.168.124.251:9092
num.network.threads=12
num.io.threads=24
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=12
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.124.251:2181,192.168.124.252:2181,192.168.124.253:2181
zookeeper.connection.timeout.ms=18000
EOF

On zookeeper02:

cd /opt/kafka/kafka_2.13-3.0.2/config
cp server.properties server.properties.bak

cat >server.properties <<EOF
broker.id=2
listeners=PLAINTEXT://192.168.124.252:9092
num.network.threads=12
num.io.threads=24
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=12
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.124.251:2181,192.168.124.252:2181,192.168.124.253:2181
zookeeper.connection.timeout.ms=18000
EOF

On zookeeper03:

cd /opt/kafka/kafka_2.13-3.0.2/config
cp server.properties server.properties.bak

cat >server.properties <<EOF
broker.id=3
listeners=PLAINTEXT://192.168.124.253:9092
num.network.threads=12
num.io.threads=24
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=12
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.124.251:2181,192.168.124.252:2181,192.168.124.253:2181
zookeeper.connection.timeout.ms=18000
EOF
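The three files above differ only in broker.id and the listener IP, so they can be generated from one template. A sketch (written to a scratch directory here; on each node, point OUT_DIR at the real config directory and add the shared tuning keys from the files above to the template):

```shell
# Per-node values from the table above: "brokerid:listener-ip".
OUT_DIR=${OUT_DIR:-$(mktemp -d)}   # use the real config dir on each node

for node in 1:192.168.124.251 2:192.168.124.252 3:192.168.124.253; do
    id=${node%%:*}   # text before the first ":"
    ip=${node#*:}    # text after the first ":"
    # Only the per-node keys are shown; merge in the shared settings too.
    cat > "$OUT_DIR/server.properties.$id" <<EOF
broker.id=$id
listeners=PLAINTEXT://$ip:9092
log.dirs=/opt/kafka/logs
zookeeper.connect=192.168.124.251:2181,192.168.124.252:2181,192.168.124.253:2181
EOF
done

grep '^broker.id' "$OUT_DIR"/server.properties.*
```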

5.4. Start Kafka

Run on all three nodes:

cd /opt/kafka/kafka_2.13-3.0.2/bin/
./kafka-server-start.sh -daemon ../config/server.properties

5.5. Verify Kafka

# Create a topic named "long"
./kafka-topics.sh --create --bootstrap-server 192.168.124.251:9092,192.168.124.252:9092,192.168.124.253:9092 --replication-factor 3 --partitions 3 --topic long

# List all topics from each broker
./kafka-topics.sh --list --bootstrap-server 192.168.124.251:9092
./kafka-topics.sh --list --bootstrap-server 192.168.124.252:9092
./kafka-topics.sh --list --bootstrap-server 192.168.124.253:9092

5.6. Verify Produce and Consume

# Produce messages on zookeeper01
./kafka-console-producer.sh --broker-list 192.168.124.251:9092 --topic long
>verification
>zoo

# Consume on zookeeper02
./kafka-console-consumer.sh --bootstrap-server 192.168.124.252:9092 --topic long --from-beginning

Output:
verification
zoo

# Consume on zookeeper03
./kafka-console-consumer.sh --bootstrap-server 192.168.124.253:9092 --topic long --from-beginning

Output:
verification
zoo

5.7. Verify ZooKeeper

# Open a ZooKeeper client session
./zkCli.sh
[zk: localhost:2181(CONNECTED) 1] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 2] ls /config 
[brokers, changes, clients, ips, topics, users]
[zk: localhost:2181(CONNECTED) 3] ls /config/brokers
[]
[zk: localhost:2181(CONNECTED) 4] ls /config/topics 
[__consumer_offsets, long]

This completes the installation and verification of the cluster.
