Common Kafka Operations

1. Start the cluster:
export KAFKA_HEAP_OPTS=" -Xms3g -Xmx3g -XX:PermSize=48m -XX:MaxPermSize=48m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 " 
./kafka-server-start.sh -daemon ../config/server.properties
nohup ./kafka-server-start.sh ../config/server.properties >/dev/null 2>&1 &
(Either form works: -daemon backgrounds the broker itself, so nohup is only needed with the second form.)
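To confirm the broker process came up, a quick check (assuming the JDK's jps tool is on the PATH; the broker appears under the Kafka main class):
jps | grep -i kafka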

2. Stop the cluster:
./kafka-server-stop.sh 

3. Create a topic:
./kafka-topics.sh --create --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/kafka --replication-factor 2 --partitions 10 --topic testtopic
For --zookeeper it is enough to specify a single node (e.g. 10.129.142.46:2181); see the example below.
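For example, the same topic can be created by pointing at just one ZooKeeper node:
./kafka-topics.sh --create --zookeeper 10.129.142.46:2181/kafka --replication-factor 2 --partitions 10 --topic testtopic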

4. List all topics:
./kafka-topics.sh --list --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/testkafka

View detailed information for a specific topic:
./kafka-topics.sh --describe --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/testkafka --topic itil_topic_4038
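The --describe output lists the partition count, replication factor, and each partition's leader, replica set, and ISR, roughly like the following (broker IDs here are illustrative):
Topic:itil_topic_4038	PartitionCount:10	ReplicationFactor:2	Configs:
	Topic: itil_topic_4038	Partition: 0	Leader: 1	Replicas: 1,2	Isr: 1,2
	Topic: itil_topic_4038	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 2,1
	...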

5. Send messages (start a Producer in one terminal):
./kafka-console-producer.sh --broker-list 10.49.133.77:9092,10.49.133.76:9092,10.49.133.75:9092 --topic itil_topic_4097
Press Ctrl+C to stop sending.

6. Receive messages (start a Consumer in another terminal):
./kafka-console-consumer.sh --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/testkafka --topic itil_topic_4097 --from-beginning

7. Delete a topic:
./kafka-topics.sh  --delete --zookeeper 10.129.142.46:2181/kafka  --topic test_topic
delete.topic.enable=true must be set in the broker configuration; otherwise the topic is only marked for deletion rather than actually deleted.
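The switch lives in server.properties on every broker (a broker restart is required for it to take effect):
delete.topic.enable=true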

8. Modify a topic:
./kafka-topics.sh --alter --zookeeper 10.129.142.46:2181/kafka --topic test_topic --partitions 4
./kafka-topics.sh --alter --zookeeper 10.129.142.46:2181/kafka --topic test_topic --config key=value
./kafka-topics.sh --alter --zookeeper 10.129.142.46:2181/kafka --topic test_topic --deleteConfig key
(http://blog.jobbole.com/99195/)
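For example, to override a topic's retention to one day and later remove the override again (retention.ms is a standard topic-level config; see the topic-config documentation linked in section 11):
./kafka-topics.sh --alter --zookeeper 10.129.142.46:2181/kafka --topic test_topic --config retention.ms=86400000
./kafka-topics.sh --alter --zookeeper 10.129.142.46:2181/kafka --topic test_topic --deleteConfig retention.ms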

9. Kafka partition reassignment:
Generate a reassignment plan (topic-to-move.json lists the topics to move; --broker-list names the target brokers):
./kafka-reassign-partitions.sh --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/appnews_kafka2 --topics-to-move-json-file topic-to-move.json --broker-list "1,2,3,4,5" --generate
Example topic-to-move.json:
{"topics": [{"topic": "itil_topic_2954"},{"topic": "itil_topic_3825"},{"topic": "itil_topic_3295"}],
 "version":1
}
Save the proposed plan as result.json and execute it:
./kafka-reassign-partitions.sh --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/appnews_kafka2 --reassignment-json-file result.json --execute
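The --generate step prints both the current assignment and a proposed reassignment; the proposed JSON (what gets saved as result.json) has the same shape as the addReplicas.json example in section 10, roughly like this (broker IDs are illustrative):
{"version":1,
 "partitions":[{"topic":"itil_topic_2954","partition":0,"replicas":[1,2]},
               {"topic":"itil_topic_3825","partition":0,"replicas":[2,3]}]
}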


10. Adjust partition replica counts

Prepare the JSON file addReplicas.json:

{
    "version": 1,
    "partitions": [
        {
            "topic": "itil_topic_4499",
            "partition": 0,
            "replicas": [
                2,
                1
            ]
        },
        {
            "topic": "itil_topic_4499",
            "partition": 1,
            "replicas": [
                1,
                2
            ]
        },
        {
            "topic": "itil_topic_4499",
            "partition": 6,
            "replicas": [
                2,
                1
            ]
        },
        {
            "topic": "itil_topic_4499",
            "partition": 9,
            "replicas": [
                2,
                1
            ]
        }
    ]
}
Execute:

./kafka-reassign-partitions.sh --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/appnews_kafka3 --reassignment-json-file addReplicas.json --execute
Check the progress:

./kafka-reassign-partitions.sh --zookeeper 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/appnews_kafka3 --reassignment-json-file addReplicas.json --verify

11. Configuration

Related URLs:

http://kafka.apache.org/documentation.html#topic-config
https://m.oschina.net/blog/413649
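On Kafka versions that ship kafka-configs.sh (0.9.0 and later), topic-level overrides can also be inspected and changed with that tool; a sketch:
./kafka-configs.sh --zookeeper 10.129.142.46:2181/kafka --entity-type topics --entity-name test_topic --describe
./kafka-configs.sh --zookeeper 10.129.142.46:2181/kafka --entity-type topics --entity-name test_topic --alter --add-config retention.ms=86400000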

12. Kafka system tools
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools
View a topic's latest offset:
./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.49.133.77:9092,10.49.133.76:9092,10.49.133.75:9092 --topic itil_topic_2954 --time -1
./kafka-run-class.sh kafka.tools.ExportZkOffsets --zkconnect 10.129.142.46:2181/kafka --output-file abc.txt
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper 10.129.142.46:2181/kafka --topic itil_topic_2954 --broker-info  --group videoplay_3295
./kafka-run-class.sh kafka.tools.SimpleConsumerShell --broker-list 10.49.133.77:9092,10.49.133.76:9092,10.49.133.75:9092 --topic itil_topic_3295 --partition 3
./kafka-run-class.sh kafka.tools.VerifyConsumerRebalance  --group videoplay_3295 --zookeeper.connect 10.129.142.46:2181/kafka   
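GetOffsetShell returns the earliest offsets instead of the latest when --time -2 is passed in place of -1:
./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 10.49.133.77:9092,10.49.133.76:9092,10.49.133.75:9092 --topic itil_topic_2954 --time -2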


13. C API:
https://github.com/edenhill/librdkafka
http://docs.confluent.io/2.0.0/clients/librdkafka/classRdKafka_1_1Conf.html
Add the following to the link flags: -lrdkafka -lz -lpthread -lrt
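For example, a librdkafka-based program might be compiled like this (producer.c is a hypothetical source file):
gcc -o producer producer.c -lrdkafka -lz -lpthread -lrt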


14. Monitoring

KafkaOffsetMonitor launch command:

nohup java -cp KafkaOffsetMonitor-assembly-0.2.1.jar com.quantifind.kafka.offsetapp.OffsetGetterWeb --zk 10.129.142.46:2181,10.166.141.46:2181,10.166.141.47:2181/testkafka --port 80 --refresh 10.seconds --retain 2.days &

15. Recommended related articles:

Kafka Cluster Operations Guide: http://blog.jobbole.com/99195/

 