Basic Kafka Deployment and Operations
Starting Kafka requires ZooKeeper to be running first. It is recommended to start ZooKeeper manually as a separate step rather than relying on Kafka to bring it up.
Configuration file: zookeeper.properties
Configuration options:
dataDir: directory configured as ZooKeeper's dataDir
clientPort: port ZooKeeper listens on
bin/zookeeper-server-start.sh config/zookeeper.properties
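A minimal zookeeper.properties covering the two options above might look like the following sketch; the dataDir path is an illustrative assumption, while the port matches the localhost:2343 address used in the commands below.

```properties
# Directory where ZooKeeper stores snapshots and transaction logs (illustrative path)
dataDir=/tmp/zookeeper
# Port ZooKeeper listens on for client connections (matches localhost:2343 below)
clientPort=2343
```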
Starting Kafka
Broker configuration file: server.properties
Configuration options:
broker.id: unique integer identifier for the broker
port: socket listening port; must not already be in use
log.dirs: path(s) where the message log data is stored
zookeeper.connect: ZooKeeper IP:PORT(/path), where /path is an optional chroot
bin/kafka-server-start.sh config/server.properties
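A minimal server.properties covering the options above might look like this sketch; the log.dirs path is an illustrative assumption, while the port and ZooKeeper address match the values used in the commands below.

```properties
# Unique integer ID for this broker
broker.id=0
# Socket port the broker listens on (matches the producer's localhost:9090 below)
port=9090
# Directory for the message logs (illustrative path)
log.dirs=/tmp/kafka-logs
# ZooKeeper connection string with the /helloKafka chroot used below
zookeeper.connect=localhost:2343/helloKafka
```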
After Kafka starts, it creates the following znodes in ZooKeeper: [admin, consumers, controller, controller_epoch, brokers, config]
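This can be checked from the ZooKeeper shell; the transcript below is illustrative and assumes the /helloKafka chroot used in the commands that follow.

```
[zk: localhost:2343(CONNECTED) 0] ls /helloKafka
[admin, consumers, controller, controller_epoch, brokers, config]
```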
Stopping Kafka
bin/kafka-server-stop.sh
Creating a topic
bin/kafka-topics.sh --create --zookeeper localhost:2343/helloKafka --replication-factor 1 --partitions 1 --topic test
On success: Created topic "test". A corresponding znode is added in ZooKeeper:
[zk: localhost:2343(CONNECTED) 15] ls /helloKafka/brokers/topics
[test]
Topics can also be listed with the Kafka command:
bin/kafka-topics.sh --list --zookeeper localhost:2343/helloKafka
Deleting a topic adds a znode for it under /helloKafka/admin/delete_topics; the topic is only actually deleted if delete.topic.enable is set to true in the broker configuration file.
bin/kafka-topics.sh --zookeeper localhost:2343/helloKafka --delete --topic test
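The corresponding line in server.properties would be the following; note that in the older Kafka releases that use this ZooKeeper-based tooling, this option defaults to false.

```properties
# Without this, the delete command only marks the topic for deletion
delete.topic.enable=true
```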
Producing messages
bin/kafka-console-producer.sh --broker-list localhost:9090 --topic test2
Consuming messages
bin/kafka-console-consumer.sh --zookeeper localhost:2343/helloKafka --topic test2 --from-beginning
Messages typed into the producer console will now appear in the consumer console.
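An illustrative session, with the two commands from above running in separate terminals (the message text is made up):

```
# producer terminal
$ bin/kafka-console-producer.sh --broker-list localhost:9090 --topic test2
hello kafka

# consumer terminal
$ bin/kafka-console-consumer.sh --zookeeper localhost:2343/helloKafka --topic test2 --from-beginning
hello kafka
```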