Deploying a Three-Node Kafka 2.8.2 (Scala 2.12) Cluster on CentOS

1. Download Kafka and ZooKeeper

The Kafka and ZooKeeper installation packages can be downloaded directly from the URLs below; an example download-and-extract sequence follows the links.

kafka:

http://mirrors.aliyun.com/apache/kafka/2.8.2/kafka_2.12-2.8.2.tgz?spm=a2c6h.25603864.0.0.796041a4GI8479

zookeeper:

http://mirrors.aliyun.com/apache/zookeeper/zookeeper-3.5.10/apache-zookeeper-3.5.10.tar.gz?spm=a2c6h.25603864.0.0.1ac91366MBEm07
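
As a rough sketch, the archives could be downloaded and unpacked on each node as follows (wget and the /home/kafka and /home/zookeeper install directories are assumptions; the /home paths match those used later in this post). Note that the ZooKeeper paths later in this post reference apache-zookeeper-3.5.5-bin; use whichever directory name your archive actually extracts to.

mkdir -p /home/kafka /home/zookeeper

# Kafka
wget -O /home/kafka/kafka_2.12-2.8.2.tgz "http://mirrors.aliyun.com/apache/kafka/2.8.2/kafka_2.12-2.8.2.tgz"
tar -xzf /home/kafka/kafka_2.12-2.8.2.tgz -C /home/kafka

# ZooKeeper
wget -O /home/zookeeper/apache-zookeeper-3.5.10.tar.gz "http://mirrors.aliyun.com/apache/zookeeper/zookeeper-3.5.10/apache-zookeeper-3.5.10.tar.gz"
tar -xzf /home/zookeeper/apache-zookeeper-3.5.10.tar.gz -C /home/zookeeper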

2. Install ZooKeeper

Create a myid file; it must live in the dataDir directory:

vi /home/zookeeper/apache-zookeeper-3.5.5-bin/data/myid

Each node must have a different number; assign the ids 1, 2, 3 to the nodes in order.
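
For example, assuming the dataDir configured in zoo.cfg below, the directory and file could be created on each node like this (run the matching line on the matching host):

mkdir -p /home/zookeeper/apache-zookeeper-3.5.5-bin/data
echo 1 > /home/zookeeper/apache-zookeeper-3.5.5-bin/data/myid   # on 192.168.3.106
echo 2 > /home/zookeeper/apache-zookeeper-3.5.5-bin/data/myid   # on 192.168.3.107
echo 3 > /home/zookeeper/apache-zookeeper-3.5.5-bin/data/myid   # on 192.168.3.108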

Extract the archive and edit the zoo.cfg file under conf:

vi /home/zookeeper/apache-zookeeper-3.5.5-bin/conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# data storage directory; create it yourself
dataDir=/home/zookeeper/apache-zookeeper-3.5.5-bin/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
# IP addresses of the three cluster nodes, plus the quorum and leader-election ports
server.1=192.168.3.106:2888:3888
server.2=192.168.3.107:2888:3888
server.3=192.168.3.108:2888:3888

Copy (scp) the edited configuration file to the other nodes (example below); the configuration itself needs no changes, just make sure each node's myid is different. Then start ZooKeeper on each node in turn.
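
For example, assuming SSH access between the nodes and the same install path everywhere, the config could be distributed like this (a sketch, not the only way):

scp /home/zookeeper/apache-zookeeper-3.5.5-bin/conf/zoo.cfg 192.168.3.107:/home/zookeeper/apache-zookeeper-3.5.5-bin/conf/
scp /home/zookeeper/apache-zookeeper-3.5.5-bin/conf/zoo.cfg 192.168.3.108:/home/zookeeper/apache-zookeeper-3.5.5-bin/conf/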

Enter the bin directory and run:

./zkServer.sh start

Check the status:

./zkServer.sh status
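
With the quorum formed, zkServer.sh status should report Mode: leader on one node and Mode: follower on the other two. As an additional check, you can connect with the bundled CLI (client port 2181 as configured above):

./zkCli.sh -server 192.168.3.106:2181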

3. Install Kafka

Extract the Kafka archive and edit server.properties under the config directory; the configuration used here is shown below and can be adjusted to your workload. The settings each cluster node needs to set are broker.id, listeners, and zookeeper.connect (broker.id and listeners differ per node; zookeeper.connect is the same on all nodes and lists all three ZooKeeper addresses).

# each node must use a different id
broker.id=0

# this node's host address
listeners=PLAINTEXT://192.168.3.106:9092


num.network.threads=3


num.io.threads=8


socket.send.buffer.bytes=102400

socket.request.max.bytes=104857600

# data directory; set to your own path
log.dirs=/home/kafka/kafka_2.12-2.8.2/logs


num.partitions=1


num.recovery.threads.per.data.dir=1


offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1


log.retention.hours=168


log.segment.bytes=1073741824


log.retention.check.interval.ms=300000

# ZooKeeper ensemble addresses
zookeeper.connect=192.168.3.106:2181,192.168.3.107:2181,192.168.3.108:2181


zookeeper.connection.timeout.ms=18000

group.initial.rebalance.delay.ms=0


Copy (scp) the edited configuration to the other two nodes and change broker.id and listeners on each, as sketched below.
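
A sketch of the per-node differences, using the IP addresses from the ZooKeeper section (the scp target path assumes the install directory /home/kafka/kafka_2.12-2.8.2 used for log.dirs above):

scp config/server.properties 192.168.3.107:/home/kafka/kafka_2.12-2.8.2/config/
scp config/server.properties 192.168.3.108:/home/kafka/kafka_2.12-2.8.2/config/

# on 192.168.3.107
broker.id=1
listeners=PLAINTEXT://192.168.3.107:9092

# on 192.168.3.108
broker.id=2
listeners=PLAINTEXT://192.168.3.108:9092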

Start Kafka on every node (from the bin directory):

./kafka-server-start.sh -daemon ../config/server.properties
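
Once all three brokers are up, a quick smoke test (topic name is arbitrary) is to create a fully replicated topic and describe it; the output should show partitions and replicas spread across broker ids 0, 1 and 2:

./kafka-topics.sh --create --topic cluster-test --partitions 3 --replication-factor 3 --bootstrap-server 192.168.3.106:9092
./kafka-topics.sh --describe --topic cluster-test --bootstrap-server 192.168.3.106:9092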

 
