Kafka + ZooKeeper Pseudo-Cluster Setup

一 ZooKeeper pseudo-cluster setup

1. Create three zoo.cfg files (zoo1.cfg, zoo2.cfg, zoo3.cfg)

zoo1.cfg settings:

# data directory

dataDir=/usr/local/zookeeper/data/data1

# log directory

dataLogDir=/usr/local/zookeeper/logs/logs1

clientPort=2181

server.1=192.168.94.132:2887:3887

server.2=192.168.94.132:2888:3888

server.3=192.168.94.132:2889:3889

zoo2.cfg settings:

# data directory

dataDir=/usr/local/zookeeper/data/data2

# log directory

dataLogDir=/usr/local/zookeeper/logs/logs2

clientPort=2182

server.1=192.168.94.132:2887:3887

server.2=192.168.94.132:2888:3888

server.3=192.168.94.132:2889:3889

zoo3.cfg settings:

# data directory

dataDir=/usr/local/zookeeper/data/data3

# log directory

dataLogDir=/usr/local/zookeeper/logs/logs3

clientPort=2183

server.1=192.168.94.132:2887:3887

server.2=192.168.94.132:2888:3888

server.3=192.168.94.132:2889:3889
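The snippets above show only the settings that differ between the three instances. A complete zooX.cfg also needs the common base settings; the values below are the defaults from zoo_sample.cfg and are only a reasonable starting point, not something mandated by these steps:

# basic time unit in milliseconds
tickTime=2000
# ticks allowed for followers to connect and sync with the leader at startup
initLimit=10
# ticks a follower may lag behind the leader before it is dropped
syncLimit=5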

2. Create three myid files

Under /usr/local/zookeeper/data/dataX, create a myid file whose only content is the instance's identifier, matching the number X in the corresponding server.X line, for example as shown below.
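A minimal way to create the directories and myid files, assuming the dataDir and dataLogDir paths used in the configs above:

mkdir -p /usr/local/zookeeper/data/data{1,2,3} /usr/local/zookeeper/logs/logs{1,2,3}
echo 1 > /usr/local/zookeeper/data/data1/myid
echo 2 > /usr/local/zookeeper/data/data2/myid
echo 3 > /usr/local/zookeeper/data/data3/myid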

3. Start the pseudo-cluster

bin/zkServer.sh start conf/zoo1.cfg

bin/zkServer.sh start conf/zoo2.cfg

bin/zkServer.sh start conf/zoo3.cfg
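Each start command launches a separate JVM. As a quick sanity check (not part of the original steps), jps should now show three QuorumPeerMain processes:

[root@bogon zookeeper-3.4.11]# jps
# expect three QuorumPeerMain entries (plus Jps itself)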

4. Check the status of each ZooKeeper instance

[root@bogon zookeeper-3.4.11]# bin/zkServer.sh status conf/zoo1.cfg

ZooKeeper JMX enabled by default

Using config: conf/zoo1.cfg

Mode: follower

[root@bogon zookeeper-3.4.11]# bin/zkServer.sh status conf/zoo2.cfg

ZooKeeper JMX enabled by default

Using config: conf/zoo2.cfg

Mode: leader

[root@bogon zookeeper-3.4.11]# bin/zkServer.sh status conf/zoo3.cfg

ZooKeeper JMX enabled by default

Using config: conf/zoo3.cfg

Mode: follower

二 Kafka pseudo-cluster setup

1. Make three copies of the configuration file

[root@master kafka_2.12-1.1.0]$ cp config/server.properties config/server-1.properties

[root@master kafka_2.12-1.1.0]$ cp config/server.properties config/server-2.properties

[root@master kafka_2.12-1.1.0]$ cp config/server.properties config/server-3.properties

2. Edit each server-x.properties

server-1.properties settings:

broker.id=1

port=9092

host.name=192.168.94.132

log.dirs=/usr/local/kafka/kafka-logs-1

message.max.bytes=5242880

default.replication.factor=2

replica.fetch.max.bytes=5242880

zookeeper.connect=192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183

# allow topic deletion

delete.topic.enable=true

server-2.properties settings:

broker.id=2

port=9093

host.name=192.168.94.132

log.dirs=/usr/local/kafka/kafka-logs-2

message.max.bytes=5242880

default.replication.factor=2

replica.fetch.max.bytes=5242880

zookeeper.connect=192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183

# allow topic deletion

delete.topic.enable=true

server-3.properties settings:

broker.id=3

port=9094

host.name=192.168.94.132

log.dirs=/usr/local/kafka/kafka-logs-3

message.max.bytes=5242880

default.replication.factor=2

replica.fetch.max.bytes=5242880

zookeeper.connect=192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183

# allow topic deletion

delete.topic.enable=true

Each broker's broker.id must be different; it is the broker's unique identifier in the cluster.
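Note that port and host.name are deprecated in Kafka 1.1.0 in favor of listeners. If you prefer the newer form, the equivalent for server-1 would be the single line below (an alternative, not something the original configs require; use 9093/9094 for the other two brokers):

listeners=PLAINTEXT://192.168.94.132:9092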

三 Start the cluster and test it

Note: when starting, bring up ZooKeeper first and Kafka second; when shutting down, stop Kafka first and ZooKeeper second.

1. Start the three ZooKeeper instances (all on the same host in this pseudo-cluster)

[root@master zookeeper-3.4.11]$ bin/zkServer.sh start conf/zoo1.cfg

[root@master zookeeper-3.4.11]$ bin/zkServer.sh start conf/zoo2.cfg

[root@master zookeeper-3.4.11]$ bin/zkServer.sh start conf/zoo3.cfg

2. Verify the ZooKeeper ensemble

# check each instance by its config file

[root@master zookeeper-3.4.11]$ bin/zkServer.sh status conf/zoo1.cfg

[root@master zookeeper-3.4.11]$ bin/zkServer.sh status conf/zoo2.cfg

[root@master zookeeper-3.4.11]$ bin/zkServer.sh status conf/zoo3.cfg

The result should show one instance as leader and the rest as followers.
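If nc is available, ZooKeeper's four-letter-word commands give the same information without the wrapper script (an extra check, not in the original steps); each reply contains a Mode: line showing leader or follower:

echo stat | nc 192.168.94.132 2181
echo stat | nc 192.168.94.132 2182
echo stat | nc 192.168.94.132 2183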

3. Start the Kafka cluster

Start each broker in the background (daemon mode):

[root@master kafka_2.12-1.1.0]$ bin/kafka-server-start.sh -daemon config/server-1.properties

[root@master kafka_2.12-1.1.0]$ bin/kafka-server-start.sh -daemon config/server-2.properties

[root@master kafka_2.12-1.1.0]$ bin/kafka-server-start.sh -daemon config/server-3.properties

To run a broker in the foreground instead, omit -daemon.
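To confirm that all three brokers registered with ZooKeeper, you can inspect the /brokers/ids znode (a sanity check added here, not part of the original walkthrough):

bin/zkCli.sh -server 192.168.94.132:2181 ls /brokers/ids
# a healthy cluster prints [1, 2, 3]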

4. Test the Kafka cluster

4.1 Create a topic and display topic information

# create a topic

bin/kafka-topics.sh --create --zookeeper 192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183 --replication-factor 3 --partitions 3 --topic test

# describe the topic

bin/kafka-topics.sh --describe --zookeeper 192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183 --topic test

# list topics

bin/kafka-topics.sh --list --zookeeper 192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183
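For a three-partition, replication-factor-3 topic on a healthy cluster, the describe output looks roughly like the following; the exact leader, replica and ISR assignments will differ on your machine:

Topic:test	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: test	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: test	Partition: 1	Leader: 2	Replicas: 2,3,1	Isr: 2,3,1
	Topic: test	Partition: 2	Leader: 3	Replicas: 3,1,2	Isr: 3,1,2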

4.2 Create a producer

bin/kafka-console-producer.sh --broker-list 192.168.94.132:9092 --topic test
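The producer only needs one reachable broker to bootstrap, but listing all three makes the test work even if broker 1 is down (an optional variant):

bin/kafka-console-producer.sh --broker-list 192.168.94.132:9092,192.168.94.132:9093,192.168.94.132:9094 --topic test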

4.3 Create a consumer

# test consumption (can also be run against each broker separately)

bin/kafka-console-consumer.sh --zookeeper 192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183 --topic test --from-beginning

Then type messages into the producer; the same content appears in the consumer, which means the messages were produced and consumed successfully.
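On Kafka 1.1.0 the --zookeeper flag selects the old, deprecated consumer. The same test can be run with the new consumer, which connects to the brokers directly:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.94.132:9092,192.168.94.132:9093,192.168.94.132:9094 --topic test --from-beginning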

4.4 Delete the topic (this works because delete.topic.enable=true was set in each server-x.properties)

bin/kafka-topics.sh --delete --zookeeper 192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183 --topic test

四 Shut down the cluster

# delete the test topic (skip if already deleted in 4.4)

bin/kafka-topics.sh --delete --zookeeper 192.168.94.132:2181,192.168.94.132:2182,192.168.94.132:2183 --topic test

# stop Kafka (kafka-server-stop.sh takes no config file; it stops every broker process on the host)

[root@master kafka_2.12-1.1.0]$ bin/kafka-server-stop.sh

# stop ZooKeeper

[root@master zookeeper-3.4.11]$ bin/zkServer.sh stop conf/zoo1.cfg

[root@master zookeeper-3.4.11]$ bin/zkServer.sh stop conf/zoo2.cfg

[root@master zookeeper-3.4.11]$ bin/zkServer.sh stop conf/zoo3.cfg
