3. Common Kafka Commands

  • List all topics
     ./kafka-topics.sh --zookeeper 172.17.161.177:2181,172.17.161.178:2181,172.17.161.179:2181/kafka --list
  • Create a topic
    ./kafka-topics.sh --zookeeper 172.17.161.177:2181,172.17.161.178:2181,172.17.161.179:2181/kafka --create --replication-factor 2 --partitions 5 --topic test
  • Describe a topic
    ./kafka-topics.sh --zookeeper 172.17.161.177:2181,172.17.161.178:2181,172.17.161.179:2181/kafka --topic test1 --describe

  • Delete a topic
    ./kafka-topics.sh --zookeeper localhost:2181/kafka --delete --topic test
  • Produce messages from the console
    ./kafka-console-producer.sh --broker-list 172.17.161.177:9092,172.17.161.178:9092,172.17.161.179:9092 --topic test
  • Consume messages from the console
    ./kafka-console-consumer.sh --zookeeper 172.17.161.177:2181,172.17.161.178:2181,172.17.161.179:2181/kafka --from-beginning --topic test

  • Query the latest/earliest offsets (--time -1 returns the latest offset, --time -2 the earliest)
    ./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 172.17.161.177:9092,172.17.161.178:9092,172.17.161.179:9092 --topic test4 --time -1
    ./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 172.17.161.177:9092,172.17.161.178:9092,172.17.161.179:9092 --topic test4 --time -2


  • Check a consumer group's offsets
    ./kafka-consumer-offset-checker.sh --group console-consumer-98499 --zookeeper 172.17.161.177:2181,172.17.161.178:2181,172.17.161.179:2181/kafka --topic test
  • Alter the number of partitions for a topic
    ./kafka-topics.sh --zookeeper 172.17.161.177:2181,172.17.161.178:2181,172.17.161.179:2181/kafka --alter --partitions 2 --topic test4
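In the offset query above, --time -1 returns each partition's latest offset and --time -2 its earliest; the difference, summed over partitions, is the number of messages currently retained. A minimal sketch of how to post-process GetOffsetShell's topic:partition:offset output (the sample numbers below are made up; in practice the two inputs come from the --time -1 and --time -2 invocations):

```shell
# Sum the offset column of GetOffsetShell output ("topic:partition:offset").
sum_offsets() {
    awk -F: '{ s += $3 } END { print s + 0 }'
}

# Stubbed sample output for two partitions of topic test4:
latest=$(printf 'test4:0:120\ntest4:1:80\n' | sum_offsets)
earliest=$(printf 'test4:0:20\ntest4:1:30\n' | sum_offsets)
echo "retained messages: $((latest - earliest))"
# → retained messages: 150
```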
     

A Detailed Look at the Kafka Command-Line Tools

The most commonly used commands are:

  • kafka-server-start.sh
  • kafka-console-consumer.sh
  • kafka-console-producer.sh
  • kafka-topics.sh

Of these, the first is used only to start Kafka, and the two console commands are mostly used for testing; the one that sees the most use is the last, so the discussion below focuses mainly on kafka-topics.sh.

kafka-server-start.sh

Usage: > bin/kafka-server-start.sh [-daemon] server.properties [--override property=value]*

This command accepts several arguments. The first, -daemon, is optional and runs Kafka as a background service. The next must be a Kafka configuration file. It can be followed by any number of --override arguments, where property can be any of the settings listed under Broker Configs; these extra arguments override the values in the configuration file.

For example, the following starts multiple brokers from the same configuration file by overriding parameters.

> bin/kafka-server-start.sh -daemon config/server.properties --override broker.id=0 --override log.dirs=/tmp/kafka-logs-1 --override listeners=PLAINTEXT://:9092 --override advertised.listeners=PLAINTEXT://192.168.16.150:9092

> bin/kafka-server-start.sh -daemon config/server.properties --override broker.id=1 --override log.dirs=/tmp/kafka-logs-2 --override listeners=PLAINTEXT://:9093 --override advertised.listeners=PLAINTEXT://192.168.16.150:9093

The usage above is only for demonstration; to really run multiple brokers, create a separate server.properties for each one.
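That recommendation is easy to script: generate one server-<id>.properties per broker from a shared set of values, then start each broker with its own file. A sketch, with the same host/ports as the demonstration above; the paths, broker count, and listener host are assumptions to adapt to your cluster:

```shell
#!/bin/sh
# Sketch: write per-broker config files instead of stacking --override flags.
dir=$(mktemp -d)
for id in 0 1; do
    port=$((9092 + id))
    cat > "$dir/server-$id.properties" <<EOF
broker.id=$id
log.dirs=/tmp/kafka-logs-$((id + 1))
listeners=PLAINTEXT://:$port
advertised.listeners=PLAINTEXT://192.168.16.150:$port
EOF
done
ls "$dir"
# Then start each broker with its own file, e.g.:
#   bin/kafka-server-start.sh -daemon "$dir/server-0.properties"
```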

kafka-console-consumer.sh

This command simply prints consumed messages to standard output. It supports the following options.

Option                                   Description
------                                   -----------
--blacklist <String: blacklist>          Blacklist of topics to exclude from
                                           consumption.
--bootstrap-server <String: server to    REQUIRED (unless old consumer is
  connect to>                              used): The server to connect to.
--consumer-property <String:             A mechanism to pass user-defined
  consumer_prop>                           properties in the form key=value to
                                           the consumer.
--consumer.config <String: config file>  Consumer config properties file. Note
                                           that [consumer-property] takes
                                           precedence over this config.
--csv-reporter-enabled                   If set, the CSV metrics reporter will
                                           be enabled
--delete-consumer-offsets                If specified, the consumer path in
                                           zookeeper is deleted when starting up
--enable-systest-events                  Log lifecycle events of the consumer
                                           in addition to logging consumed
                                           messages. (This is specific for
                                           system tests.)
--formatter <String: class>              The name of a class to use for
                                           formatting kafka messages for
                                           display. (default: kafka.tools.
                                           DefaultMessageFormatter)
--from-beginning                         If the consumer does not already have
                                           an established offset to consume
                                           from, start with the earliest
                                           message present in the log rather
                                           than the latest message.
--key-deserializer <String:
  deserializer for key>
--max-messages <Integer: num_messages>   The maximum number of messages to
                                           consume before exiting. If not set,
                                           consumption is continual.
--metrics-dir <String: metrics           If csv-reporter-enabled is set, and
  directory>                               this parameter is set, the csv
                                           metrics will be output here
--new-consumer                           Use the new consumer implementation.
                                           This is the default.
--offset <String: consume offset>        The offset id to consume from (a non-
                                           negative number), or 'earliest'
                                           which means from beginning, or
                                           'latest' which means from end
                                           (default: latest)
--partition <Integer: partition>         The partition to consume from.
--property <String: prop>                The properties to initialize the
                                           message formatter.
--skip-message-on-error                  If there is an error when processing a
                                           message, skip it instead of halting.
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is
                                           available for consumption for the
                                           specified interval.
--topic <String: topic>                  The topic id to consume on.
--value-deserializer <String:
  deserializer for values>
--whitelist <String: whitelist>          Whitelist of topics to include for
                                           consumption.
--zookeeper <String: urls>               REQUIRED (only when using old
                                           consumer): The connection string for
                                           the zookeeper connection in the form
                                           host:port. Multiple URLS can be
                                           given to allow fail-over.

--bootstrap-server must be specified, and you will usually also pass --topic to choose the topic to read. To view messages from the beginning, add the --from-beginning flag. A typical invocation looks like this.

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

You can also consume a single partition with the following command:

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --partition 0

kafka-console-producer.sh

This command sends the contents of a file, or of standard input, to a Kafka cluster. Its options are listed below.

Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a single
                                           batch if they are not being sent
                                           synchronously. (default: 200)
--broker-list <String: broker-list>      REQUIRED: The broker list string in
                                           the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String:             The compression codec: either 'none',
  compression-codec]                       'gzip', 'snappy', or 'lz4'. If
                                           specified without value, then it
                                           defaults to 'gzip'
--key-serializer <String:                The class name of the message encoder
  encoder_class>                           implementation to use for
                                           serializing keys. (default: kafka.
                                           serializer.DefaultEncoder)
--line-reader <String: reader_class>     The class name of the class to use for
                                           reading lines from standard in. By
                                           default each line is read as a
                                           separate message. (default: kafka.
                                           tools.
                                           ConsoleProducer$LineMessageReader)
--max-block-ms <Long: max block on       The max time that the producer will
  send>                                    block for during a send request
                                           (default: 60000)
--max-memory-bytes <Long: total memory   The total memory used by the producer
  in bytes>                                to buffer records waiting to be sent
                                           to the server. (default: 33554432)
--max-partition-memory-bytes <Long:      The buffer size allocated for a
  memory in bytes per partition>           partition. When records are received
                                           which are smaller than this size the
                                           producer will attempt to
                                           optimistically group them together
                                           until this size is reached.
                                           (default: 16384)
--message-send-max-retries <Integer>     Brokers can fail receiving the message
                                           for multiple reasons, and being
                                           unavailable transiently is just one
                                           of them. This property specifies the
                                           number of retries before the
                                           producer gives up and drops this
                                           message. (default: 3)
--metadata-expiry-ms <Long: metadata     The period of time in milliseconds
  expiration interval>                     after which we force a refresh of
                                           metadata even if we haven't seen any
                                           leadership changes. (default: 300000)
--old-producer                           Use the old producer implementation.
--producer-property <String:             A mechanism to pass user-defined
  producer_prop>                           properties in the form key=value to
                                           the producer.
--producer.config <String: config file>  Producer config properties file. Note
                                           that [producer-property] takes
                                           precedence over this config.
--property <String: prop>                A mechanism to pass user-defined
                                           properties in the form key=value to
                                           the message reader. This allows
                                           custom configuration for a user-
                                           defined message reader.
--queue-enqueuetimeout-ms <Integer:      Timeout for event enqueue (default:
  queue enqueuetimeout ms>                 2147483647)
--queue-size <Integer: queue_size>       If set and the producer is running in
                                           asynchronous mode, this gives the
                                           maximum amount of messages that will
                                           queue awaiting sufficient batch
                                           size. (default: 10000)
--request-required-acks <String:         The required acks of the producer
  request required acks>                   requests (default: 1)
--request-timeout-ms <Integer: request   The ack timeout of the producer
  timeout ms>                              requests. Value must be non-negative
                                           and non-zero (default: 1500)
--retry-backoff-ms <Integer>             Before each retry, the producer
                                           refreshes the metadata of relevant
                                           topics. Since leader election takes
                                           a bit of time, this property
                                           specifies the amount of time that
                                           the producer waits before refreshing
                                           the metadata. (default: 100)
--socket-buffer-size <Integer: size>     The size of the tcp RECV size.
                                           (default: 102400)
--sync                                   If set, message send requests to the
                                           brokers are sent synchronously, one
                                           at a time as they arrive.
--timeout <Integer: timeout_ms>          If set and the producer is running in
                                           asynchronous mode, this gives the
                                           maximum amount of time a message
                                           will queue awaiting sufficient batch
                                           size. The value is given in ms.
                                           (default: 1000)
--topic <String: topic>                  REQUIRED: The topic id to produce
                                           messages to.
--value-serializer <String:              The class name of the message encoder
  encoder_class>                           implementation to use for
                                           serializing values. (default: kafka.
                                           serializer.DefaultEncoder)

Of these, --broker-list and --topic are the two required options.

Common invocations follow.

Reading from standard input:

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Reading from a file:

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < file-input.txt
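Because the console producer reads one message per line from standard input, any text-generating pipeline can serve as a quick load generator. The helper below is a sketch; the commented kafka-console-producer.sh invocation assumes a broker at localhost:9092, as in the examples above.

```shell
# Emit N numbered test messages, one per line.
gen_messages() { seq 1 "$1" | sed 's/^/msg-/'; }

gen_messages 3
# → msg-1
#   msg-2
#   msg-3

# Pipe into the console producer (requires a running broker):
#   gen_messages 10000 | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
```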

kafka-topics.sh

Compared with the occasionally used commands above, kafka-topics.sh is rather more important. It takes the following options.

Create, delete, describe, or change a topic.
Option                                   Description                            
------                                   -----------                            
--alter                                  Alter the number of partitions,        
                                           replica assignment, and/or           
                                           configuration for the topic.         
--config <String: name=value>            A topic configuration override for the
                                           topic being created or altered. The
                                           following is a list of valid         
                                           configurations:                      
                                            cleanup.policy                        
                                            compression.type                      
                                            delete.retention.ms                   
                                            file.delete.delay.ms                  
                                            flush.messages                        
                                            flush.ms                              
                                            follower.replication.throttled.       
                                           replicas                             
                                            index.interval.bytes                  
                                            leader.replication.throttled.replicas 
                                            max.message.bytes                     
                                            message.format.version                
                                            message.timestamp.difference.max.ms   
                                            message.timestamp.type                
                                            min.cleanable.dirty.ratio             
                                            min.compaction.lag.ms                 
                                            min.insync.replicas                   
                                            preallocate                           
                                            retention.bytes                       
                                            retention.ms                          
                                            segment.bytes                         
                                            segment.index.bytes                   
                                            segment.jitter.ms                     
                                            segment.ms                            
                                            unclean.leader.election.enable        
                                         See the Kafka documentation for full   
                                           details on the topic configs.        
--create                                 Create a new topic.                    
--delete                                 Delete a topic                         
--delete-config <String: name>           A topic configuration override to be
                                           removed for an existing topic (see   
                                           the list of configurations under the 
                                           --config option).                    
--describe                               List details for the given topics.     
--disable-rack-aware                     Disable rack aware replica assignment  
--force                                  Suppress console prompts               
--help                                   Print usage information.               
--if-exists                              if set when altering or deleting       
                                           topics, the action will only execute 
                                           if the topic exists                  
--if-not-exists                          if set when creating topics, the       
                                           action will only execute if the      
                                           topic does not already exist         
--list                                   List all available topics.             
--partitions <Integer: # of partitions>  The number of partitions for the topic
                                           being created or altered (WARNING:
                                           If partitions are increased for a
                                           topic that has a key, the partition
                                           logic or ordering of the messages
                                           will be affected)
--replica-assignment <String:            A list of manual partition-to-broker
  broker_id_for_part1_replica1 :           assignments for the topic being
  broker_id_for_part1_replica2 ,           created or altered.
  broker_id_for_part2_replica1 :
  broker_id_for_part2_replica2 , ...>
--replication-factor <Integer:           The replication factor for each
  replication factor>                      partition in the topic being created.
--topic <String: topic>                  The topic to be created, altered or
                                           described. Can also accept a regular
                                           expression, except with --create.
--topics-with-overrides                  if set when describing topics, only    
                                           show topics that have overridden     
                                           configs                              
--unavailable-partitions                 if set when describing topics, only    
                                           show partitions whose leader is not  
                                           available                            
--under-replicated-partitions            if set when describing topics, only    
                                           show under replicated partitions     
--zookeeper <String: urls>               REQUIRED: The connection string for
                                           the zookeeper connection in the form 
                                           host:port. Multiple URLS can be      
                                           given to allow fail-over.         

Here are a few commonly used topic commands.

Describe a topic's configuration
bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type topics --entity-name test_topic
Set the retention time
# Deprecated way
bin/kafka-topics.sh  --zookeeper localhost:2181 --alter --topic test_topic --config retention.ms=1000

# Modern way
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name test_topic --add-config retention.ms=1000

If you need to delete all messages in a topic, you can exploit the retention time: first set it very low (1000 ms), wait a few seconds, then restore it to its previous value.

Note: the default retention time is 24 hours (86400000 ms).
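This purge trick is easy to get wrong by hand (e.g. forgetting to restore retention), so it may be worth scripting. A sketch, assuming kafka-configs.sh is on PATH and ZooKeeper is at localhost:2181; it removes the override in the last step (falling back to the broker default) rather than restoring a remembered value. RUN defaults to echo so the script only prints the commands it would run; set RUN= (empty) to execute them.

```shell
#!/bin/sh
# Purge a topic by temporarily lowering retention.ms, then removing the
# override so the topic falls back to the broker default.
RUN=${RUN:-echo}                 # dry-run by default; RUN= executes for real
ZK=${ZK:-localhost:2181}         # assumed ZooKeeper address

purge_topic() {
    topic=$1
    # 1. Shrink retention so the broker deletes all existing segments.
    $RUN kafka-configs.sh --zookeeper "$ZK" --alter --entity-type topics \
        --entity-name "$topic" --add-config retention.ms=1000
    # 2. Wait for the retention thread (see log.retention.check.interval.ms).
    $RUN sleep 60
    # 3. Drop the override instead of guessing the previous value.
    $RUN kafka-configs.sh --zookeeper "$ZK" --alter --entity-type topics \
        --entity-name "$topic" --delete-config retention.ms
}

purge_topic test_topic
```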

Delete a topic
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test_topic

Note: deleting a topic requires delete.topic.enable=true in the broker's server.properties.

Show topic details
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test_topic
Add partitions
bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic test_topic --partitions 3
Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic test_topic
List topics
bin/kafka-topics.sh --list --zookeeper localhost:2181

Source for the topic-related content: http://ronnieroller.com/kafka/cheat-sheet

So many commands, how do you remember them all?

Kafka's command-line tools print very rich usage hints, so you only need to remember the rough shape of the commands above; when you need one, let the tool's own prompts guide you step by step.

For example, how do you use kafka-configs.sh to view a topic's configuration?

First, run bin/kafka-configs.sh on the command line, and it will print the usage information below.

Add/Remove entity config for a topic, client, user or broker
Option                      Description                                        
------                      -----------                                        
--add-config <String>       Key Value pairs of configs to add. Square brackets
                              can be used to group values which contain commas:
                              'k1=v1,k2=[v1,v2,v2],k3=v3'. The following is a  
                              list of valid configurations: For entity_type    
                              'topics':                                        
                                cleanup.policy                                    
                                compression.type                                  
                                delete.retention.ms                               
                                file.delete.delay.ms                              
                                flush.messages                                    
                                flush.ms                                          
                                follower.replication.throttled.replicas           
                                index.interval.bytes                              
                                leader.replication.throttled.replicas             
                                max.message.bytes                                 
                                message.format.version                            
                                message.timestamp.difference.max.ms               
                                message.timestamp.type                            
                                min.cleanable.dirty.ratio                         
                                min.compaction.lag.ms                             
                                min.insync.replicas                               
                                preallocate                                       
                                retention.bytes                                   
                                retention.ms                                      
                                segment.bytes                                     
                                segment.index.bytes                               
                                segment.jitter.ms                                 
                                segment.ms                                        
                                unclean.leader.election.enable                    
                            For entity_type 'brokers':                         
                                follower.replication.throttled.rate               
                                leader.replication.throttled.rate                 
                            For entity_type 'users':                           
                                producer_byte_rate                                
                                SCRAM-SHA-256                                     
                                SCRAM-SHA-512                                     
                                consumer_byte_rate                                
                            For entity_type 'clients':                         
                                producer_byte_rate                                
                                consumer_byte_rate                                
                            Entity types 'users' and 'clients' may be specified
                              together to update config for clients of a       
                              specific user.                                   
--alter                     Alter the configuration for the entity.            
--delete-config <String>    config keys to remove 'k1,k2'
--describe                  List configs for the given entity.                 
--entity-default            Default entity name for clients/users (applies to  
                              corresponding entity type in command line)       
--entity-name <String>      Name of entity (topic name/client id/user principal
                              name/broker id)                                  
--entity-type <String>      Type of entity (topics/clients/users/brokers)
--force                     Suppress console prompts                           
--help                      Print usage information.                           
--zookeeper <String: urls>  REQUIRED: The connection string for the zookeeper
                              connection in the form host:port. Multiple URLS  
                              can be given to allow fail-over.

The first line shows that this command can modify the configuration of a topic, client, user, or broker.

To operate on topics, set --entity-type topics and run the following command:

> bin/kafka-configs.sh --entity-type topics
Command must include exactly one action: --describe, --alter

The prompt says the command must include exactly one action (and there are actually more than the two it lists here). Add --describe and try again:

> bin/kafka-configs.sh --entity-type topics --describe
Missing required argument "[zookeeper]"

Continue by adding --zookeeper:

> bin/kafka-configs.sh --entity-type topics --describe --zookeeper localhost:2181
Configs for topic '__consumer_offsets' are segment.bytes=104857600,cleanup.policy=compact,compression.type=producer

Since no topic name was specified, this showed the configuration for __consumer_offsets. Next, specify a topic:

> bin/kafka-configs.sh --entity-type topics --describe --zookeeper localhost:2181 --entity-name test
Configs for topic 'test' are

This time it shows the configuration for the test topic, which happens to be empty here.

Because Kafka's command-line prompts are so thorough, the next step is easy to work out from the hints; with a little practice you can quickly build exactly the command you want.

