Kafka version: 1.0.1
1. Create a topic
[hadoop@node03 bin]$ kafka-topics.sh --create --partitions 3 --replication-factor 2 --topic test --zookeeper node01:2181,node02:2181,node03:2181
Created topic "test".
Parameter notes:
partitions: number of partitions for the topic
replication-factor: number of replicas per partition
topic: name of the topic
zookeeper: ZooKeeper ensemble addresses and ports
2. List all topics
[hadoop@node03 bin]$ kafka-topics.sh --list --zookeeper node01:2181,node02:2181,node03:2181
__consumer_offsets
test
3. Describe a topic
[hadoop@node03 bin]$ kafka-topics.sh --describe --topic test --zookeeper node01:2181,node02:2181,node03:2181
Topic:test PartitionCount:3 ReplicationFactor:2 Configs:
Topic: test Partition: 0 Leader: 2 Replicas: 0,2 Isr: 2,0
Topic: test Partition: 1 Leader: 0 Replicas: 1,0 Isr: 0,1
Topic: test Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1
The fields in the output are explained below:
First line:
Topic: topic name
PartitionCount: number of partitions
ReplicationFactor: number of replicas per partition
Configs: additional topic-level configuration
Lines two through four:
Topic: topic name
Partition: partition number (numbering starts at 0)
Leader: broker.id of the broker hosting this partition's leader replica (each broker's broker.id is set in server.properties)
Replicas: broker.ids of all brokers assigned a replica of this partition
Isr: broker.ids of the in-sync replicas
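When the Isr set shrinks below the full replica list, replication is lagging. The same script can filter for exactly those partitions; a sketch against the same three-node cluster (empty output means every replica is caught up):

```shell
# Show only partitions whose in-sync replica set is smaller
# than the configured replication factor.
kafka-topics.sh --describe \
  --under-replicated-partitions \
  --zookeeper node01:2181,node02:2181,node03:2181
```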
4. Delete a topic
[hadoop@node03 bin]$ kafka-topics.sh --delete --topic test --zookeeper node01:2181,node02:2181,node03:2181
Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
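As the note says, the broker only carries out the deletion when delete.topic.enable is true; otherwise the topic is merely marked for deletion. The setting lives in each broker's server.properties (since Kafka 1.0.0 it defaults to true):

```properties
# server.properties (on every broker)
# Allow topics to actually be deleted rather than only
# marked for deletion.
delete.topic.enable=true
```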
5. Write data to a topic
[hadoop@node03 bin]$ kafka-console-producer.sh --broker-list node01:9092,node02:9092,node03:9092 --topic test
broker-list: broker hosts and ports
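Besides typing messages interactively, the console producer reads one message per line from stdin, so an existing file can be sent in one shot (messages.txt here is a hypothetical input file):

```shell
# Each line of messages.txt becomes one message on topic "test".
kafka-console-producer.sh \
  --broker-list node01:9092,node02:9092,node03:9092 \
  --topic test < messages.txt
```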
6. Consume data from a topic
[hadoop@node02 bin]$ kafka-console-consumer.sh --bootstrap-server node01:9092,node02:9092,node03:9092 --topic test --from-beginning
bootstrap-server: broker hosts and ports
from-beginning: consume the topic from its earliest offset
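The command above runs until interrupted; for a quick spot check, the console consumer can exit after a fixed number of messages. A sketch:

```shell
# Read the first 10 messages of "test" from the earliest offset,
# print them, then exit.
kafka-console-consumer.sh \
  --bootstrap-server node01:9092,node02:9092,node03:9092 \
  --topic test --from-beginning --max-messages 10
```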
Summary
kafka-topics.sh, with different flags, creates, lists, describes, and deletes topics
kafka-console-producer.sh writes data to a topic
kafka-console-consumer.sh consumes data from a topic