Kafka Environment Setup (Single Node)
1. Upload kafka_2.11-0.10.0.0.tgz to the software directory
2. Extract kafka_2.11-0.10.0.0.tgz (optionally move the Kafka installation into a separate directory so everything is managed in one place; this can be skipped)
tar -zxvf kafka_2.11-0.10.0.0.tgz
3. Rename kafka_2.11-0.10.0.0 to kafka_2.11-0.10
mv kafka_2.11-0.10.0.0 kafka_2.11-0.10
4. Edit server.properties
cd kafka_2.11-0.10/config/
vi server.properties
log.dirs=/home/hadoop/kafka_2.11-0.10/kafka-logs
num.partitions=2
zookeeper.connect=namenode:2181,datanode1:2181,datanode2:2181
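If you prefer not to edit interactively with vi, the same three settings can be applied with sed. The sketch below works on a throwaway copy of a minimal server.properties so it can run anywhere; the values match step 4 above, but the stock defaults shown here are illustrative — adapt the path to your own layout:

```shell
# demo copy: a minimal server.properties with stock-style defaults
cat > /tmp/server.properties <<'EOF'
log.dirs=/tmp/kafka-logs
num.partitions=1
zookeeper.connect=localhost:2181
EOF

# apply the same three edits made in step 4
sed -i \
  -e 's|^log.dirs=.*|log.dirs=/home/hadoop/kafka_2.11-0.10/kafka-logs|' \
  -e 's|^num.partitions=.*|num.partitions=2|' \
  -e 's|^zookeeper.connect=.*|zookeeper.connect=namenode:2181,datanode1:2181,datanode2:2181|' \
  /tmp/server.properties

# show the result
grep -E '^(log.dirs|num.partitions|zookeeper.connect)=' /tmp/server.properties
```

Note that `sed -i` edits the file in place (GNU sed); to edit the real config, point it at kafka_2.11-0.10/config/server.properties instead of the demo copy.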
5. Start the Kafka server
bin/kafka-server-start.sh config/server.properties &
Seeing INFO [Kafka Server 0], started (kafka.server.KafkaServer) means the broker started successfully
6. Create a topic (open a new terminal, or press Ctrl+C to return to the shell)
bin/kafka-topics.sh --create --zookeeper namenode:2181 --replication-factor 1 --partitions 2 --topic test
Seeing Created topic "test" means the topic was created successfully
7. Inspect the log files
cd kafka-logs/test-0
ll
(you should see the partition's segment files, e.g. 00000000000000000000.log and 00000000000000000000.index)
8. List topics
cd ../..
bin/kafka-topics.sh --list --zookeeper namenode:2181
You should see the topic named test
9. Produce messages to the topic
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
this is a message:1
this is a message:3
this is a message:3
10. Check the size of the Kafka log files (open a new terminal, or press Ctrl+C to return)
ll kafka-logs/test-0
ll kafka-logs/test-1
11. Consume the data (run on datanode2)
bin/kafka-console-consumer.sh --zookeeper namenode:2181 --topic test --from-beginning
If you see the data entered earlier, the Kafka setup is complete (note: Kafka guarantees message order only within a partition, so messages from different partitions may arrive interleaved)
this is a message:1
this is a message:3
this is a message:3
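The interleaving can be pictured with a simple model: with num.partitions=2 and no message key, the producer spreads messages across both partitions, and the consumer merges the two partition streams with no global ordering guarantee. The sketch below round-robins the three sample messages over the two partitions created in step 6 — an illustration of the idea only, not Kafka's exact partitioner:

```shell
# assign each keyless message to a partition in round-robin fashion
printf '%s\n' "this is a message:1" "this is a message:3" "this is a message:3" \
  | { p=0; while IFS= read -r m; do
        echo "test-$p <- $m"
        p=$(( (p + 1) % 2 ))       # alternate between partition 0 and 1
      done; }
```

Each partition receives its messages in order, but a consumer reading test-0 and test-1 together may emit them in either relative order.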
12. Describe the structure of a specific topic
bin/kafka-topics.sh --describe --zookeeper datanode2:2181 --topic test
13. Describe the structure of all topics
bin/kafka-topics.sh --describe --zookeeper datanode2:2181
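--describe prints one summary line per topic plus one detail line per partition. As a sketch, the awk below pulls out the leader broker for each partition; it runs here against a hard-coded sample in the 0.10 output style (illustrative only — your cluster's actual output may differ slightly):

```shell
# illustrative sample of kafka-topics.sh --describe output for the topic above
cat > /tmp/describe.txt <<'EOF'
Topic:test  PartitionCount:2  ReplicationFactor:1  Configs:
  Topic: test  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
  Topic: test  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
EOF

# print "partition -> leader broker id" for every partition detail line
awk 'match($0, /Partition: [0-9]+/) {
  part = substr($0, RSTART + 11, RLENGTH - 11)
  match($0, /Leader: [0-9]+/)
  leader = substr($0, RSTART + 8, RLENGTH - 8)
  print "partition " part " -> leader broker " leader
}' /tmp/describe.txt
```

On a live cluster, replace the sample file with the real command, e.g. `bin/kafka-topics.sh --describe --zookeeper namenode:2181 --topic test | awk '...'`.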