【README】
【1】Set up a Kafka cluster
1. Download the Kafka release tarball (not a jar) from https://kafka.apache.org/downloads
https://archive.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz
2. Edit the config file kafka/config/server.properties:
-- Broker id; must be unique for each broker in the cluster
broker.id=1
-- Allow topics to be actually deleted
delete.topic.enable=true
-- Log (message data) directory
log.dirs=/opt/module/kafka-0.11/logs
-- ZooKeeper cluster connection string
zookeeper.connect=centos201:2181,centos202:2181,centos203:2181
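The same server.properties is copied to every broker; only broker.id must differ (and log.dirs, if the disks differ). A minimal sketch for the second broker, assuming centos201/202/203 map to broker ids 1/2/3:

```properties
# /opt/module/kafka-0.11/config/server.properties on centos202
# (the host-to-id mapping is an assumption; pick any unique ids)
broker.id=2
delete.topic.enable=true
log.dirs=/opt/module/kafka-0.11/logs
zookeeper.connect=centos201:2181,centos202:2181,centos203:2181
```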
3. Put the Kafka (and ZooKeeper) commands on the PATH in /etc/profile:
# kafka conf
export KAFKA_HOME=/opt/module/kafka-0.11
export PATH=$PATH:$KAFKA_HOME/bin
# zk conf
export ZK_HOME=/opt/module/zookeeper-3.4.10
export PATH=$PATH:$ZK_HOME/bin
Then run source /etc/profile to apply the changes.
4. Use rsync to copy the installation directory from machine 1 to machine 2:
[root@localhost module]# rsync -azv /opt/module/zookeeper-3.4.10/ root@192.168.163.202:/opt/module/zookeeper-3.4.10/
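The same sync has to reach every remaining broker, so step 4 can be scripted. A minimal sketch that only prints the rsync commands for review rather than running them (the centos20x hostnames follow the naming above; run the printed commands yourself once they look right):

```shell
# Print (not run) the rsync command that would sync the Kafka install
# to each remaining broker; hostnames are assumptions based on the cluster above.
print_sync_cmds() {
  src=/opt/module/kafka-0.11/
  for host in "$@"; do
    echo "rsync -azv $src root@$host:$src"
  done
}

print_sync_cmds centos202 centos203
```

Printing first and executing by hand avoids clobbering a broker with the wrong path; drop the echo once the commands are verified.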
5. Start the ZooKeeper cluster first (important: Kafka will not start without a running ZooKeeper), then start Kafka on each broker.
-- Start Kafka in the background (daemon mode)
kafka-server-start.sh -daemon /opt/module/kafka-0.11/config/server.properties
-- Start Kafka in the foreground
kafka-server-start.sh /opt/module/kafka-0.11/config/server.properties
-- Foreground start with a relative path (run from the Kafka home directory)
kafka-server-start.sh config/server.properties
-- Stop the broker (the script locates the running process itself; no config argument is needed)
kafka-server-stop.sh
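Across the whole cluster the start order matters: ZooKeeper on all three nodes first, then the Kafka broker on each node. A dry-run sketch that prints the per-host commands in the right order (the ssh invocation and paths are assumptions; adapt them before actually running anything):

```shell
# Print the cluster start sequence without executing anything:
# ZooKeeper on every node first, then the Kafka broker on every node.
print_start_sequence() {
  for host in "$@"; do
    echo "ssh $host zkServer.sh start"
  done
  for host in "$@"; do
    echo "ssh $host kafka-server-start.sh -daemon /opt/module/kafka-0.11/config/server.properties"
  done
}

print_start_sequence centos201 centos202 centos203
```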
【2】Kafka command-line topic operations: create, delete, list, describe
No. | Command | Action
1 | kafka-topics.sh --create | create a topic
2 | kafka-topics.sh --list | list topics
3 | kafka-topics.sh --delete | delete a topic
4 | kafka-topics.sh --describe | describe a topic
1. Create a topic and verify it exists:
[root@centos201 ~]# kafka-topics.sh --list --zookeeper centos201:2181
[root@centos201 ~]#
[root@centos201 ~]# kafka-topics.sh --create --zookeeper centos201:2181 --topic first --partitions 2 --replication-factor 2
Created topic "first".
[root@centos201 ~]#
[root@centos201 ~]# kafka-topics.sh --list --zookeeper centos201:2181
first
2. Delete the topic and verify it is gone:
[root@centos201 logs]# kafka-topics.sh --delete --zookeeper centos201:2181 --topic first
Topic first is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[root@centos201 logs]#
[root@centos201 logs]# kafka-topics.sh --list --zookeeper centos201:2181
[root@centos201 logs]#
3. Describe a topic (note: the output below is from a separate run; its partition count and replication factor differ from the create example above):
[root@centos201 logs]# kafka-topics.sh --describe --topic first --zookeeper centos201:2181
Topic:first PartitionCount:3 ReplicationFactor:1 Configs:
Topic: first Partition: 0 Leader: 1 Replicas: 1 Isr: 1
Topic: first Partition: 1 Leader: 2 Replicas: 2 Isr: 2
Topic: first Partition: 2 Leader: 3 Replicas: 3 Isr: 3
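The describe output is whitespace-separated and easy to post-process. A small sketch that pulls the partition-to-leader mapping out of the sample output above with awk; in practice you would pipe `kafka-topics.sh --describe ...` straight into the awk command:

```shell
# Sample lines copied from the --describe output above.
describe_output='Topic: first Partition: 0 Leader: 1 Replicas: 1 Isr: 1
Topic: first Partition: 1 Leader: 2 Replicas: 2 Isr: 2
Topic: first Partition: 2 Leader: 3 Replicas: 3 Isr: 3'

# Field 4 is the partition number, field 6 is the leader broker id.
printf '%s\n' "$describe_output" | awk '/Partition:/ {print "partition " $4 " -> leader " $6}'
# prints: partition 0 -> leader 1  (and one line each for partitions 1 and 2)
```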
【3】Run a Kafka producer and consumer
1) Start a console producer:
kafka-console-producer.sh --topic first --broker-list centos201:9092
2) Start a consumer connected via ZooKeeper:
kafka-console-consumer.sh --topic first --zookeeper centos201:2181
When the consumer is started with the extra flag --from-beginning, it also receives
the messages that were written before it started; note that ordering is only guaranteed within a single partition, so messages from different partitions may arrive interleaved:
kafka-console-consumer.sh --topic first --zookeeper centos201:2181 --from-beginning
3) Start a consumer connected via bootstrap-server (the newer consumer; offsets are stored in Kafka itself rather than in ZooKeeper):
kafka-console-consumer.sh --topic first --bootstrap-server centos201:9092