Kafka scripts — the essential shell commands

The Kafka commands are as follows:

Basic Kafka shell commands

Start Kafka on the nodes hadoop-2, hadoop-3, and hadoop-5.

The startup command is as follows:

kafka-server-start.sh /usr/local/kafka_2.11-0.10.0.1/config/server.properties > /usr/local/kafka_2.11-0.10.0.1/logs/logs &

1. Create a topic

kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --create --topic your.topic.name --partitions 30 --replication-factor 1

--partitions specifies the number of partitions for the topic; --replication-factor specifies the number of replicas for each partition (usually equal to the number of brokers).
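For example, on a three-broker cluster like this one, a topic whose replicas span every broker could be created as follows (the topic name demo.topic and the partition count 3 are placeholders, not values from the original setup):

kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --create --topic demo.topic --partitions 3 --replication-factor 3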

2. List topics

kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --list

3. Describe a topic

kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --describe --topic your.topic.name

4. Produce data to a topic

kafka-console-producer.sh --broker-list hadoop-2:9092,hadoop-4:9092,hadoop-5:9092 --topic your.topic.name

5. Consume data

kafka-console-consumer.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --topic your.topic.name --from-beginning

6. View the maximum (or minimum) offset of a topic partition

kafka-run-class.sh kafka.tools.GetOffsetShell --topic kafkademo --time -1 --broker-list hadoop-2:9092,hadoop-4:9092,hadoop-5:9092 --partitions 0
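The --time -1 flag above asks for the latest (maximum) offset; to my knowledge, --time -2 asks for the earliest (minimum) offset instead, so the minimum for the same partition would be queried like this:

kafka-run-class.sh kafka.tools.GetOffsetShell --topic kafkademo --time -2 --broker-list hadoop-2:9092,hadoop-4:9092,hadoop-5:9092 --partitions 0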

7. Increase the number of partitions of a topic (partitions can only be increased, never decreased)

kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --alter --topic your.topic.name --partitions 40

8. Check Kafka consumer progress

kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --group pv
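ConsumerOffsetChecker is deprecated in newer Kafka releases; if your distribution ships kafka-consumer-groups.sh, the same consumer-group progress can usually be inspected with it (the group name pv is taken from the command above):

kafka-consumer-groups.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --describe --group pv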

9. Delete a Kafka topic [note: this requires restarting the Kafka cluster]

kafka-run-class.sh kafka.admin.DeleteTopicCommand --topic test_kafka --zookeeper chenx02:2181
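Depending on the Kafka version, kafka-topics.sh also offers a --delete option; note that the broker normally needs delete.topic.enable=true in server.properties, otherwise the topic is only marked for deletion. A sketch using the same topic and ZooKeeper address:

kafka-topics.sh --zookeeper chenx02:2181 --delete --topic test_kafka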

10. View unavailable partitions

kafka-topics.sh --describe --unavailable-partitions --zookeeper chenx02:2181 --topic test_kafka
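Relatedly, kafka-topics.sh --describe also accepts (to my knowledge) an --under-replicated-partitions flag, which lists partitions whose replicas are not fully in sync:

kafka-topics.sh --describe --under-replicated-partitions --zookeeper chenx02:2181 --topic test_kafka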

11. Send messages

./kafka-console-producer.sh --broker-list chenx02:9092 --topic test

12. View Kafka data offsets

[root@hadoop-5 data]# kafka-run-class.sh kafka.tools.GetOffsetShell --topic guaishou --time -1 --broker-list 192.***:9092 --partitions 0
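If --partitions is omitted, GetOffsetShell should report offsets for every partition of the topic; a sketch using the hadoop brokers from earlier rather than the masked IP:

kafka-run-class.sh kafka.tools.GetOffsetShell --topic guaishou --time -1 --broker-list hadoop-2:9092,hadoop-4:9092,hadoop-5:9092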

2. Managing the cluster with Hadoop and ZooKeeper scripts

The starthadoop.sh script

#!/bin/bash

ssh hadoop-2 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start"
sleep 5s

ssh hadoop-3 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start"
sleep 5s

ssh hadoop-5 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start"
sleep 5s
/usr/local/hadoop/hadoop-2.7.3/sbin/start-dfs.sh

sleep 30s

ssh hadoop-3 "/usr/local/hadoop/hadoop-2.7.3/sbin/start-yarn.sh"
sleep 30s

ssh hadoop-2 "/usr/local/spark/spark-2.2.1-bin-hadoop2.7/sbin/start-all.sh"
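A usage sketch, assuming the script is saved on the node you launch it from and passwordless SSH to hadoop-2/3/5 is already configured (the ssh calls above require it):

chmod +x starthadoop.sh
./starthadoop.sh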

The stophadoop.sh script

#!/bin/bash

ssh hadoop-2 "/usr/local/spark/spark-2.2.1-bin-hadoop2.7/sbin/stop-all.sh"
sleep 10s

ssh hadoop-3 "/usr/local/hadoop/hadoop-2.7.3/sbin/stop-yarn.sh"
sleep 30s
/usr/local/hadoop/hadoop-2.7.3/sbin/stop-dfs.sh

sleep 30s

ssh hadoop-5 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
sleep 3s

ssh hadoop-3 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
sleep 3s

ssh hadoop-2 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
sleep 3s
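Since both scripts repeat the same ssh line per host, the ZooKeeper stop section could equally be written as a loop; a sketch with the same hosts and paths:

#!/bin/bash
# Stop ZooKeeper on each node in turn, pausing briefly between hosts
for host in hadoop-5 hadoop-3 hadoop-2; do
    ssh "$host" "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
    sleep 3s
done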

------------------------------------------------------------------------------------------------------------------------------------------

1) A blog post on simulating Kafka consumption (Spring + Spring MVC + MyBatis + Kafka): http://www.cnblogs.com/jun1019/p/6580371.html
