Flume + Kafka + Storm integration

Step 1: Start Storm:

1.1 Start the Storm cluster

master:
        python bin/storm nimbus &
        python bin/storm ui &
        python bin/storm logviewer &

slave:
        python bin/storm supervisor &
        python bin/storm logviewer &
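Once the daemons are up, a quick sanity check is to run jps on each node (in Storm 0.9.x the UI process shows up as "core") and open the Storm UI, which by default listens on port 8080 of the master. Process names and the port are assumptions based on a default 0.9.x install:

        # on the master
        jps
        # expected entries include: nimbus, core (the UI), logviewer
        # the UI itself is served at http://master:8080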

1.2 Develop the Storm + Kafka integration code:

stormKafka.java:

package stormKafkaPackage;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class stormKafka {
    public static void main(String[] args) throws Exception {

        // Kafka topic to consume, the ZooKeeper path where the spout stores its offsets,
        // and an id for this consumer
        String topic = "badou_storm_kafka_test";
        String zkRoot = "/badou_storm_kafka_test";
        String spoutId = "kafkaSpout";

        // ZooKeeper ensemble that the Kafka brokers register with
        BrokerHosts brokerHosts = new ZkHosts("master:2181");
        SpoutConfig kafkaConf = new SpoutConfig(brokerHosts, topic, zkRoot, spoutId);
        // read from the beginning of the topic instead of any previously stored offset
        kafkaConf.forceFromStart = true;
        // emit each Kafka message as a single-field tuple containing the raw string
        kafkaConf.scheme = new SchemeAsMultiScheme(new StringScheme());

        KafkaSpout kafkaSpout = new KafkaSpout(kafkaConf);

        TopologyBuilder builder = new TopologyBuilder();

        // two spout executors reading from Kafka, one bolt that just prints every tuple
        builder.setSpout("spout", kafkaSpout, 2);
        builder.setBolt("printer", new PrinterBolt())
                .shuffleGrouping("spout");

        Config config = new Config();
        config.setDebug(false);

        if(args!=null && args.length > 0) {
            // a topology name was passed on the command line: submit to the cluster
            config.setNumWorkers(3);

            StormSubmitter.submitTopology(args[0], config, builder.createTopology());
        } else {
            // no arguments: run in an in-process LocalCluster for debugging
            config.setMaxTaskParallelism(3);

            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("kafka", config, builder.createTopology());

//            Thread.sleep(10000);

//            cluster.shutdown();
        }
    }
}

PrinterBolt.java:

package stormKafkaPackage;

import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;


public class PrinterBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        // dump every incoming tuple to the worker's stdout / log
        System.out.println(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer ofd) {
        // this bolt is a sink and emits nothing downstream
    }

}
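Both classes are packaged into the jar used in 1.3. A sketch of the Maven dependencies such a project typically needs is below; the Storm version matches the 0.9.3 install referenced in the run script, while the Kafka artifact and version, the provided scope, and the exclusions are assumptions, not taken from the original post:

<!-- illustrative dependency block only -->
<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>0.9.3</version>
        <!-- provided: the cluster already has storm-core on the worker classpath -->
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>0.9.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.1.1</version>
        <exclusions>
            <!-- avoid clashing with the zookeeper jar that storm-core brings in -->
            <exclusion>
                <groupId>org.apache.zookeeper</groupId>
                <artifactId>zookeeper</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>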

1.3 Run the Storm topology:

python /usr/local/src/apache-storm-0.9.3/bin/storm jar \
    /root/IdeaProjects/stormtest/target/stormtest-1.0-SNAPSHOT.jar \
    stormKafkaPackage.stormKafka \
    guoqing_remote
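The last argument (guoqing_remote) is picked up as args[0] in main and becomes the topology name on the cluster. If it is omitted, the else branch runs instead and the topology starts in an in-process LocalCluster, which is handy for debugging; the jar path below is the same as above:

python /usr/local/src/apache-storm-0.9.3/bin/storm jar \
    /root/IdeaProjects/stormtest/target/stormtest-1.0-SNAPSHOT.jar \
    stormKafkaPackage.stormKafka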

Step 2: Start Kafka:
 ./bin/kafka-server-start.sh config/server.properties
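Note that ZooKeeper must already be running, since both the broker and the spout's ZkHosts point at master:2181. The topic consumed by the spout also has to exist (unless auto topic creation is enabled); a minimal creation command, run from the Kafka install directory with illustrative partition and replication counts:

 ./bin/kafka-topics.sh --create --zookeeper master:2181 \
     --replication-factor 1 --partitions 2 --topic badou_storm_kafka_test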

Step 3: Start Flume:

3.1 Write the Flume configuration file (flume_kafka.conf):
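The original post does not include the contents of flume_kafka.conf. The sketch below is a minimal example that matches the agent name a1 used in 3.2, assuming a netcat source for easy testing and the Kafka sink shipped with Flume 1.6 (property names such as brokerList/topic changed to kafka.bootstrap.servers/kafka.topic in later Flume versions); the actual source type and broker list may differ:

# flume_kafka.conf -- illustrative only
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# netcat source: every line sent to the port becomes a Flume event
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Kafka sink: forward each event to the topic the Storm spout reads from
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = badou_storm_kafka_test
a1.sinks.k1.brokerList = master:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1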

3.2 Start command:
./bin/flume-ng agent --conf conf --conf-file ./conf/flume_kafka.conf --name a1 -Dflume.root.logger=INFO,console

3.3 Test:
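With all three components running, push a line through Flume and watch it appear in the PrinterBolt output (the Storm worker log in cluster mode, or the console in local mode). Two ways to test, assuming the netcat source from the config sketch above:

# feed a test line into the (assumed) netcat source of the Flume agent
echo "hello badou_storm_kafka_test" | nc localhost 44444

# or bypass Flume and write directly to the topic to isolate the Kafka -> Storm path
./bin/kafka-console-producer.sh --broker-list master:9092 --topic badou_storm_kafka_test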

 
