Kafka Administration Tools

Create a topic named xlucas with 6 partitions and a replication factor of 3:

[hadoop@master bin]$ ./kafka-topics.sh --create --zookeeper 192.168.1.101:2181 --topic xlucas --partitions 6 --replication-factor 3
Created topic "xlucas".
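
To confirm the topic exists, the same script can also list every topic registered in ZooKeeper (a quick check using the same connect string):

[hadoop@master bin]$ ./kafka-topics.sh --list --zookeeper 192.168.1.101:2181

The newly created xlucas should appear in the listing.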

Describing the topic shows the details of each of its partitions:

[hadoop@slave1 bin]$ ./kafka-topics.sh --topic xlucas --describe  --zookeeper 192.168.1.101:2181
Topic:xlucas    PartitionCount:6        ReplicationFactor:3     Configs:
        Topic: xlucas   Partition: 0    Leader: 102     Replicas: 102,101,103   Isr: 102,101,103
        Topic: xlucas   Partition: 1    Leader: 103     Replicas: 103,102,101   Isr: 103,102,101
        Topic: xlucas   Partition: 2    Leader: 101     Replicas: 101,103,102   Isr: 101,103,102
        Topic: xlucas   Partition: 3    Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
        Topic: xlucas   Partition: 4    Leader: 103     Replicas: 103,101,102   Isr: 103,101,102
        Topic: xlucas   Partition: 5    Leader: 101     Replicas: 101,102,103   Isr: 101,102,103

Using get in the ZooKeeper client, we can read the offset a consumer group has committed for a partition; set can then be used to modify that offset, as shown after the output below.

[zk: localhost:2181(CONNECTED) 37] get /consumers/console-consumer-54832/offsets/xlucas/1
191
cZxid = 0x2700000158
ctime = Thu Jun 30 08:51:08 PDT 2016
mZxid = 0x2700000158
mtime = Thu Jun 30 08:51:08 PDT 2016
pZxid = 0x2700000158
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
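With the znode path known, set rewrites the stored offset; for example, to rewind this consumer on partition 1 (a sketch continuing the same ZooKeeper CLI session; the value 100 is only an illustration):

[zk: localhost:2181(CONNECTED) 38] set /consumers/console-consumer-54832/offsets/xlucas/1 100

Stop the consumer before editing its offset, otherwise its next commit may overwrite the new value.
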
[hadoop@slave1 bin]$ ./kafka-run-class.sh kafka.tools.DumpLogSegments
Parse a log file and dump its contents to the console, useful for debugging a seemingly corrupt log segment.
Option                                  Description                            
------                                  -----------                            
--deep-iteration                        if set, uses deep instead of shallow   
                                          iteration                            
--files <file1, file2, ...>             REQUIRED: The comma separated list of  
                                          data and index log files to be dumped
--key-decoder-class                     if set, used to deserialize the keys.  
                                          This class should implement kafka.   
                                          serializer.Decoder trait. Custom jar 
                                          should be available in kafka/libs    
                                          directory. (default: kafka.          
                                          serializer.StringDecoder)            
--max-message-size <Integer: size>      Size of largest message. (default:     
                                          5242880)                             
--print-data-log                        if set, printing the messages content  
                                          when dumping data logs               
--value-decoder-class                   if set, used to deserialize the        
                                          messages. This class should          
                                          implement kafka.serializer.Decoder   
                                          trait. Custom jar should be          
                                          available in kafka/libs directory.   
                                          (default: kafka.serializer.          
                                          StringDecoder)                       
--verify-index-only                     if set, just verify the index log      
                                          without printing its content
[hadoop@slave1 bin]$ ./kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/xlucas-1/00000000000000000000.log
Dumping /tmp/kafka-logs/xlucas-1/00000000000000000000.log
Starting offset: 0
offset: 0 position: 0 isvalid: true payloadsize: 6 magic: 0 compresscodec: NoCompressionCodec crc: 3104223744
offset: 1 position: 32 isvalid: true payloadsize: 7 magic: 0 compresscodec: NoCompressionCodec crc: 1339025819
offset: 2 position: 65 isvalid: true payloadsize: 7 magic: 0 compresscodec: NoCompressionCodec crc: 3603347489
offset: 3 position: 98 isvalid: true payloadsize: 7 magic: 0 compresscodec: NoCompressionCodec crc: 2713815223
offset: 4 position: 131 isvalid: true payloadsize: 5 magic: 0 compresscodec: NoCompressionCodec crc: 2788391197
offset: 5 position: 162 isvalid: true payloadsize: 7 magic: 0 compresscodec: NoCompressionCodec crc: 3146695692
offset: 6 position: 195 isvalid: true payloadsize: 9 magic: 0 compresscodec: NoCompressionCodec crc: 2173962942
offset: 7 position: 230 isvalid: true payloadsize: 7 magic: 0 compresscodec: NoCompressionCodec crc: 2991598982
offset: 8 position: 263 isvalid: true payloadsize: 10 magic: 0 compresscodec: NoCompressionCodec crc: 1603989127
offset: 9 position: 299 isvalid: true payloadsize: 10 magic: 0 compresscodec: NoCompressionCodec crc: 743853991
offset: 10 position: 335 isvalid: true payloadsize: 10 magic: 0 compresscodec: NoCompressionCodec crc: 3104513575
offset: 11 position: 371 isvalid: true payloadsize: 11 magic: 0 compresscodec: NoCompressionCodec crc: 3936904359
offset: 12 position: 408 isvalid: true payloadsize: 9 magic: 0 compresscodec: NoCompressionCodec crc: 1078687857
offset: 13 position: 443 isvalid: true payloadsize: 11 magic: 0 compresscodec: NoCompressionCodec crc: 2111057158
offset: 14 position: 480 isvalid: true payloadsize: 12 magic: 0 compresscodec: NoCompressionCodec crc: 1561700752
offset: 15 position: 518 isvalid: true payloadsize: 9 magic: 0 compresscodec: NoCompressionCodec crc: 925442697
offset: 16 position: 553 isvalid: true payloadsize: 7 magic: 0 compresscodec: NoCompressionCodec crc: 1122047920
offset: 17 position: 586 isvalid: true payloadsize: 8 magic: 0 compresscodec: NoCompressionCodec crc: 2686337610
offset: 18 position: 620 isvalid: true payloadsize: 9 magic: 0 compresscodec: NoCompressionCodec crc: 1543195022
offset: 19 position: 655 isvalid: true payloadsize: 8 magic: 0 compresscodec: NoCompressionCodec crc: 1320836663
offset: 20 position: 689 isvalid: true payloadsize: 8 magic: 0 compresscodec: NoCompressionCodec crc: 2278993749
offset: 21 position: 723 isvalid: true payloadsize: 10 magic: 0 compresscodec: NoCompressionCodec crc: 2902654633
offset: 22 position: 759 isvalid: true payloadsize: 9 magic: 0 compresscodec: NoCompressionCodec crc: 3592875052
offset: 23 position: 794 isvalid: true payloadsize: 8 magic: 0 compresscodec: NoCompressionCodec crc: 4062224363
offset: 24 position: 828 isvalid: true payloadsize: 10 magic: 0 compresscodec: NoCompressionCodec crc: 2043758347
offset: 25 position: 864 isvalid: true payloadsize: 4 magic: 0 compresscodec: NoCompressionCodec crc: 190530983
offset: 26 position: 894 isvalid: true payloadsize: 6 magic: 0 compresscodec: NoCompressionCodec crc: 3790159255
offset: 27 position: 926 isvalid: true payloadsize: 10 magic: 0 compresscodec: NoCompressionCodec crc: 2821376011

As the output shows, this command prints each message's header fields and offset from the Kafka log, but not the message payload itself; that can be enabled with --print-data-log, as in the example below. To inspect multiple log files at once, list them separated by commas.
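
Putting the two together, a run that dumps two segment files and prints their payloads might look like the following (a sketch; the second segment path is only illustrative):

[hadoop@slave1 bin]$ ./kafka-run-class.sh kafka.tools.DumpLogSegments --print-data-log --files /tmp/kafka-logs/xlucas-1/00000000000000000000.log,/tmp/kafka-logs/xlucas-2/00000000000000000000.log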

Consumer Offset Checker
The Consumer Offset Checker runs the kafka.tools.ConsumerOffsetChecker class through the kafka-consumer-offset-checker.sh script. It reports each consumer's Group, Topic, partition ID, the Offset already consumed in each partition, the logSize, the Lag, and the Owner.
If you run kafka-consumer-offset-checker.sh without any arguments, it prints the following usage information:

[hadoop@slave1 bin]$ ./kafka-consumer-offset-checker.sh 
Check the offset of your consumers.
Option                                  Description                            
------                                  -----------                            
--broker-info                           Print broker info                      
--group                                 Consumer group.                        
--help                                  Print this message.                    
--retry.backoff.ms <Integer>            Retry back-off to use for failed       
                                          offset queries. (default: 3000)      
--socket.timeout.ms <Integer>           Socket timeout to use when querying    
                                          for offsets. (default: 6000)         
--topic                                 Comma-separated list of consumer       
                                          topics (all topics if absent).       
--zookeeper                             ZooKeeper connect string. (default:    
                                          localhost:2181)                    
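
To check the offsets of the console consumer seen earlier, the script can be invoked with the ZooKeeper connect string, the group, and the topic (a sketch; the output contains the Group, Topic, partition ID, Offset, logSize, Lag, and Owner columns described above):

[hadoop@slave1 bin]$ ./kafka-consumer-offset-checker.sh --zookeeper 192.168.1.101:2181 --group console-consumer-54832 --topic xlucas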