Viewing topic messages
In the previous post we used the describe command to inspect the topic:
./bin/kafka-topics.sh --describe --zookeeper bigdata01:2181 --topic firsttopic
The output looks like this:
Topic:firsttopic PartitionCount:3 ReplicationFactor:2 Configs:
Topic: firsttopic Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: firsttopic Partition: 1 Leader: 2 Replicas: 2,0 Isr: 2,0
Topic: firsttopic Partition: 2 Leader: 0 Replicas: 0,1 Isr: 0,1
In this output, Partition is the partition number (PartitionCount: 3 means the topic has three partitions), and different messages are stored in different partitions. Replicas lists the broker.id of every broker holding a copy of that partition; Leader is the broker that serves all reads and writes for it. When Isr matches Replicas, every replica is fully caught up with the leader; if Isr is shorter, some replicas are still replicating. Next, let's look in detail at where the messages are physically stored.
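As a quick sanity check on this output, here is a small sketch (in Python; the line format is taken from the describe output above) that parses one partition row and flags under-replicated partitions, i.e. rows where Isr no longer matches Replicas:

```python
def parse_partition_line(line):
    """Parse one partition row of `kafka-topics.sh --describe` output."""
    tokens = line.split()
    # Tokens alternate "Key:" / "value", e.g. "Partition:" "0" "Leader:" "1" ...
    info = dict(zip(tokens[0::2], tokens[1::2]))
    return {
        "topic": info["Topic:"],
        "partition": int(info["Partition:"]),
        "leader": int(info["Leader:"]),
        "replicas": [int(b) for b in info["Replicas:"].split(",")],
        "isr": [int(b) for b in info["Isr:"].split(",")],
    }

def under_replicated(p):
    """A partition is under-replicated when some replica is not in the ISR."""
    return set(p["replicas"]) != set(p["isr"])

row = parse_partition_line(
    "Topic: firsttopic Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2")
print(row["leader"], under_replicated(row))  # -> 1 False
```

With all three replicas in sync, every row here parses as healthy; after killing a broker you would see its id drop out of Isr and this check flip to True.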
The physical message path: log.dirs
Messages on disk can be inspected with the kafka-run-class.sh command. In the previous post we sent three lines of messages; let's find where each one is physically stored:
hello
my first
kafka topic
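These three messages were sent without keys, which is why the producer spread them across the three partitions and each partition ended up with exactly one line. When a message does carry a key, Kafka's default partitioner instead hashes the key and takes it modulo the partition count, so all messages with the same key land in the same partition. A rough Python sketch of the keyed case (the real partitioner uses Murmur2; MD5 here is just a deterministic stand-in for illustration):

```python
import hashlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Sketch of keyed partitioning: hash(key) mod partition count.
    Kafka's default partitioner uses Murmur2; MD5 is a stand-in here."""
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return h % num_partitions

# Messages with the same key always map to the same partition:
p1 = choose_partition(b"user-42", 3)
p2 = choose_partition(b"user-42", 3)
assert p1 == p2
```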
When deploying Kafka, we set the storage directory in the server.properties configuration file (adjust it to your own environment):
log.dirs=/tmp/kafka-logs
Looking into that directory on bigdata01, you can see the partition directories firsttopic-1 and firsttopic-2 (bigdata02 holds firsttopic-0 and firsttopic-2, and bigdata03 holds firsttopic-0 and firsttopic-1, matching the replica assignment shown by describe).
Inside each partition directory, the .log file holds the message contents.
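The .log file's name is not arbitrary: each segment is named after the offset of its first record, zero-padded to 20 digits, which is why a fresh partition starts with 00000000000000000000.log. The naming scheme in one line:

```python
def segment_name(base_offset: int) -> str:
    """Kafka names each log segment after the offset of its first record,
    zero-padded to 20 digits."""
    return f"{base_offset:020d}.log"

print(segment_name(0))       # -> 00000000000000000000.log
print(segment_name(368769))  # -> 00000000000000368769.log
```

When a segment fills up and rolls, the next file's name tells you immediately which offsets it starts from.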
The message dump command kafka-run-class.sh
A .log file is binary, so its contents must be dumped with kafka-run-class.sh. Below we inspect partitions 0, 1, and 2 in turn. Partition 0 is stored on the brokers with broker.id 1 and 2, partition 1 on brokers 2 and 0, and partition 2 on brokers 0 and 1; broker.id 0 through 2 correspond to bigdata01, bigdata02, and bigdata03 respectively.
1. On bigdata02, which holds partition 0, run:
cd /opt/kafka_2.11-1.1.0/
./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/firsttopic-0/00000000000000000000.log --print-data-log > 00000000000000000000.txt0
The exported file 00000000000000000000.txt0 contains:
Dumping /tmp/kafka-logs/firsttopic-0/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 CreateTime: 1546363668273 isvalid: true keysize: -1 valuesize: 11 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] payload: kafka topic
So partition 0 holds the message kafka topic.
2. On bigdata03, which holds partition 1, run:
cd /opt/kafka_2.11-1.1.0/
./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/firsttopic-1/00000000000000000000.log --print-data-log > 00000000000000000000.txt1
The exported file 00000000000000000000.txt1 contains:
Dumping /tmp/kafka-logs/firsttopic-1/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 CreateTime: 1546363662797 isvalid: true keysize: -1 valuesize: 8 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] payload: my first
So partition 1 holds the message my first.
3. On bigdata01, which holds partition 2, run:
cd /opt/kafka_2.11-1.1.0/
./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/firsttopic-2/00000000000000000000.log --print-data-log > 00000000000000000000.txt2
The exported file 00000000000000000000.txt2 contains:
Dumping /tmp/kafka-logs/firsttopic-2/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 CreateTime: 1546363651837 isvalid: true keysize: -1 valuesize: 5 magic: 2 compresscodec: NONE producerId: -1 producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] payload: hello
So partition 2 holds the message hello.
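The three dumped record lines all share one layout: space-separated "key: value" pairs followed by the payload. A small sketch (assuming this Kafka 1.1.0 output format) that pulls out the offset, timestamp, and payload; note the payload may itself contain spaces, so we split on " payload: " first:

```python
def parse_record(line: str) -> dict:
    """Parse one record line of DumpLogSegments --print-data-log output
    (format as emitted by Kafka 1.1.0)."""
    meta, _, payload = line.partition(" payload: ")
    tokens = meta.split()
    # The metadata part alternates "key:" / "value" tokens.
    fields = dict(zip(tokens[0::2], tokens[1::2]))
    return {
        "offset": int(fields["offset:"]),
        "create_time": int(fields["CreateTime:"]),
        "payload": payload,
    }

rec = parse_record(
    "offset: 0 position: 0 CreateTime: 1546363651837 isvalid: true "
    "keysize: -1 valuesize: 5 magic: 2 compresscodec: NONE producerId: -1 "
    "producerEpoch: -1 sequence: -1 isTransactional: false headerKeys: [] "
    "payload: hello")
print(rec["payload"])  # -> hello
```

Incidentally, comparing the three CreateTime values confirms the send order: hello (…651837) came first, then my first (…662797), then kafka topic (…668273).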
Topic metadata in ZooKeeper: zkCli.sh
ZooKeeper's directory tree can be browsed with the zkCli.sh command:
cd /opt/zookeeper-3.4.12/bin
./zkCli.sh
Type
ls /
which prints
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]
ls /brokers
[ids, topics, seqid]
ls /brokers/ids
[0, 1, 2]
ls /brokers/ids/0
[]
ls /brokers/topics
[__consumer_offsets, firsttopic]
ls /brokers/topics/firsttopic
[partitions]
ls /brokers/topics/firsttopic/partitions
[0, 1, 2]
ls /brokers/topics/firsttopic/partitions/0
[state]
ls /brokers/topics/firsttopic/partitions/0/state
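The state znode has no children; the interesting part is its data, which you read with get rather than ls (get /brokers/topics/firsttopic/partitions/0/state). It is a small JSON document recording the partition's current leader and ISR, which is how the describe command knows what to print. A hedged sketch of reading such a document; the field names match what Kafka 1.x stores, but the sample value below is illustrative, filled in to match the cluster described above:

```python
import json

# Illustrative example of the JSON stored in a partition state znode
# (values chosen to match partition 0 of firsttopic above).
state_json = '{"controller_epoch":1,"leader":1,"version":1,"leader_epoch":0,"isr":[1,2]}'

state = json.loads(state_json)
print(state["leader"], state["isr"])  # -> 1 [1, 2]
```

When a leader fails and a replica takes over, Kafka's controller rewrites this znode, bumping leader_epoch and shrinking the isr list.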