---- Kafka cluster: basic operation commands
Start ZooKeeper
zkServer.sh start/status/stop
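Before starting Kafka it is worth confirming each ZooKeeper node's role. A minimal sketch that pulls the `Mode:` line out of `zkServer.sh status`-style output; the sample text below is an assumed example of that output, not captured from this cluster:

```shell
# Extract the node role (leader/follower/standalone) from
# `zkServer.sh status` output read on stdin.
zk_mode() {
  grep '^Mode:' | awk '{print $2}'
}

# Assumed sample of `zkServer.sh status` output, for illustration only.
sample_status="ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower"

echo "$sample_status" | zk_mode   # prints: follower
```

Against a live node this would be used as `zkServer.sh status 2>&1 | zk_mode`.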
Start Kafka
./kafka-server-start.sh -daemon ../config/server.properties
List all topics
kafka-topics.sh --list --zookeeper 192.168.123.102:2181
Create a topic
kafka-topics.sh --create --zookeeper 192.168.123.102:2181 --replication-factor 1 --partitions 1 --topic text
Show details of a topic
kafka-topics.sh --describe --zookeeper 192.168.123.102:2181 --topic text
Producer
./kafka-console-producer.sh --broker-list 192.168.123.102:19092,192.168.123.101:19092,192.168.123.100:19092 --topic 1234
Consumer
./kafka-console-consumer.sh --zookeeper 192.168.123.102:2181,192.168.123.101:2181,192.168.123.100:2181 --topic 1234 --from-beginning
Increase the number of partitions (this changes the partition count, not the replica count, and it can only be increased, never decreased)
kafka-topics.sh --zookeeper 192.168.123.102:2181 --alter --topic text --partitions 4
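After the alter, `--describe` confirms the new partition count. A sketch that parses the `PartitionCount` field out of describe output; the sample text is an assumed example of that output format, not taken from this cluster:

```shell
# Extract PartitionCount from `kafka-topics.sh --describe` output on stdin.
partition_count() {
  grep -o 'PartitionCount:[0-9]*' | head -1 | cut -d: -f2
}

# Assumed sample of describe output, for illustration only.
sample_describe="Topic:text PartitionCount:4 ReplicationFactor:1 Configs:
 Topic: text Partition: 0 Leader: 0 Replicas: 0 Isr: 0"

echo "$sample_describe" | partition_count   # prints: 4
```

Against a live cluster: `kafka-topics.sh --describe --zookeeper 192.168.123.102:2181 --topic text | partition_count`.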
Cluster leader balancing: re-elect the preferred replica as leader for each partition
kafka-preferred-replica-election.sh --zookeeper 192.168.123.102:2181
or set the broker config parameter auto.leader.rebalance.enable=true
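A sketch of the relevant broker settings in server.properties, assuming the standard Kafka leader-imbalance knobs (values shown are the usual defaults, not tuned for this cluster):

```
# server.properties -- automatic preferred-leader rebalancing
auto.leader.rebalance.enable=true
# How often the controller checks for leader imbalance (seconds)
leader.imbalance.check.interval.seconds=300
# Trigger a rebalance when imbalance per broker exceeds this percentage
leader.imbalance.per.broker.percentage=10
```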
Cluster status monitoring
Use KafkaOffsetMonitor
java -cp KafkaOffsetMonitor-assembly-0.2.1.jar com.quantifind.kafka.offsetapp.OffsetGetterWeb --zk 192.168.123.100,192.168.123.101,192.168.213.102 --refresh 5.minutes --retain 1.day &
Building a data pipeline cluster with Flume:
Reference: http://www.tuicool.com/articles/V3yeeqU
http://lxw1234.com/archives/2015/09/510.htm
---- Flume source: scan files in real time to collect data
Steps:
1. Start the Kafka cluster first
2. Start Flume
3. Feed in data
Start Flume
./bin/flume-ng agent -n agent -c conf -f conf/hw.conf -Dflume.root.logger=INFO,console
Data generation script
for ((i = 0; i <= 40000; i++)); do
  echo "message-$i" >> /home/admin/data/logs/flume.log
done
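The generator loop can also be written as a small self-contained script that verifies how many lines were written. It targets a temp file here so the sketch runs anywhere; point OUT at /home/admin/data/logs/flume.log for the actual Flume test:

```shell
#!/bin/sh
# Generate N+1 test messages (message-0 .. message-N) and report the count.
# OUT is a temp file for this sketch; replace with the Flume-watched log path.
N=100
OUT=$(mktemp)
i=0
while [ "$i" -le "$N" ]; do
  echo "message-$i" >> "$OUT"
  i=$((i + 1))
done
wc -l < "$OUT"   # 101 lines: message-0 .. message-100
```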