Check the Kafka version: find ./libs/ -name \*kafka_\* | head -1 | grep -o 'kafka[^\n]*'
(the matched jar name encodes both versions, e.g. kafka_2.11-1.0.0 = Scala 2.11, Kafka 1.0.0)
Topic
1. Create a Kafka topic
./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181/kafka-test --create --topic topicname --replication-factor 2 --partitions 4
2. Delete a Kafka topic
./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181/kafka-test --delete --topic topicname
This prints:
Topic topicname is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
The topic is only marked for deletion, not actually removed, unless the brokers run with delete.topic.enable=true; for details see: https://blog.csdn.net/wyqwilliam/article/details/84427224
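Deletion only takes effect when every broker runs with delete.topic.enable=true. A minimal sketch of adding the setting (the file name below is a stand-in for the broker's config/server.properties; restart the brokers after changing it):

```shell
# Stand-in for the broker config file; on a real cluster edit
# config/server.properties on every broker, then restart them.
conf=server.properties.example
echo "delete.topic.enable=true" >> "$conf"
grep delete.topic.enable "$conf"
```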
3. List all Kafka topics
./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181/kafka-test --list
4. Show a topic's details
./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181/kafka-test --describe --topic topicname
5. Change the number of partitions (the partition count can only be increased, never decreased)
./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181/kafka-test --alter --topic topicname --partitions 2
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
Consumer
6. Consume messages from a topic (from the beginning, at most N messages)
./kafka-console-consumer.sh --zookeeper node1:2181,node2:2181,node3:2181/kafka-test --topic topicname --from-beginning --max-messages N
Other
7. Where a topic's data lives on disk
For partition 0, log in to the broker that hosts it; the data directory is /kafka-data/topicname-0
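The directory name follows the pattern <log.dirs>/<topic>-<partition>; a small sketch (assuming /kafka-data is this cluster's log.dirs setting, as above):

```shell
# Each partition gets its own directory under the broker's log.dirs,
# named <topic>-<partition>; inside are the log segments and index files.
topic=topicname
partition=0
echo "/kafka-data/${topic}-${partition}"   # prints: /kafka-data/topicname-0
```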
ACL
8. List the cluster's ACLs
./kafka-acls.sh --authorizer-properties zookeeper.connect=node1:2181,node2:2181,node3:2181/kafka-test-new --list
9. Grant a user permissions (ALL, Describe, Write, Read) on a topic
./kafka-acls.sh --add --allow-principal user:CN=x1,OU=x2,O=x3,L=BJ,ST=BJ,C=CN --operation ALL --topic test --authorizer-properties zookeeper.connect=node1:2181,node2:2181,node3:2181/kafka-test-new
10. Allow all users to read a topic, but deny user BadBob when connecting from IP 10.5.10.3
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal 'User:*' --allow-host '*' --deny-principal User:BadBob --deny-host 10.5.10.3 --operation Read --topic topicname
(quote the * arguments so the shell does not glob-expand them)
11. Produce to a topic over the ACL-secured listener (--producer.config takes the path to a client properties file)
kafka-console-producer.sh --broker-list node1:9094,node2:9094 --topic topicname --producer.config
12. Consume from a topic over the ACL-secured listener (--consumer.config takes the path to a client properties file)
kafka-console-consumer.sh --bootstrap-server node1:9094,node2:9094 --topic topicname --new-consumer --consumer.config
(--new-consumer is redundant once --bootstrap-server is given, and the flag was removed in Kafka 2.0)
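Both --producer.config and --consumer.config expect the path to a client properties file. A hedged sketch of what such a file might look like, assuming the 9094 listener uses SSL (every path and password below is a placeholder to replace with your own):

```shell
# Placeholder client config for --producer.config / --consumer.config;
# all locations and passwords must be replaced with real values.
cat > client-ssl.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
EOF
# Then, e.g.: kafka-console-producer.sh --broker-list node1:9094 \
#   --topic topicname --producer.config client-ssl.properties
```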