csdn问鼎

A big-data beginner

Consuming Kafka data in real time with Storm

  1. Environment: create a topic named data in Kafka, start a console consumer, and prepare some input data.
  2. The project's pom.xml file
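Assuming a Kafka 0.8.x installation and the broker host 192.168.59.132 used in the spout configuration (adjust host, port, and paths for your own setup), the setup in step 1 might look like this; these commands run against a live cluster, and newer Kafka versions replace `--zookeeper` with `--bootstrap-server`:

```shell
# Create the "data" topic (0.8.x-style flags)
bin/kafka-topics.sh --create --zookeeper 192.168.59.132:2181 \
    --replication-factor 1 --partitions 1 --topic data

# Console producer: type lines here to feed data into the topic
bin/kafka-console-producer.sh --broker-list 192.168.59.132:9092 --topic data

# Optional console consumer, to verify messages arrive independently of Storm
bin/kafka-console-consumer.sh --zookeeper 192.168.59.132:2181 --topic data
```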


<dependencies>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-core</artifactId>
        <version>1.0.2</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.storm</groupId>
        <artifactId>storm-kafka</artifactId>
        <version>1.0.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2.0</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>log4j-over-slf4j</artifactId>
        <version>1.7.21</version>
    </dependency>
</dependencies>

3. Spout code


import java.util.ArrayList;
import java.util.List;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class MykafkaSpout {

    public static void main(String[] args) throws AuthorizationException {
        String topic = "data";
        ZkHosts zkHosts = new ZkHosts("192.168.59.132:2181");
        // zkRoot is "" because zookeeper.connect has no chroot (see step 5);
        // "MyTrack" is the consumer id under which offsets are stored in ZooKeeper
        SpoutConfig spoutConfig = new SpoutConfig(zkHosts, topic, "", "MyTrack");
        List<String> zkServers = new ArrayList<String>();
        zkServers.add("192.168.59.132");
        spoutConfig.zkServers = zkServers;
        spoutConfig.zkPort = 2181;
        spoutConfig.socketTimeoutMs = 60 * 1000;
        // Decode each Kafka message as a single UTF-8 string tuple field
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new KafkaSpout(spoutConfig), 1);
        builder.setBolt("bolt1", new MyKafkaBolt(), 1).shuffleGrouping("spout");

        Config conf = new Config();
        conf.setDebug(false);

        if (args.length > 0) {
            // A topology name on the command line means: submit to a cluster
            try {
                StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
            } catch (AlreadyAliveException e) {
                e.printStackTrace();
            } catch (InvalidTopologyException e) {
                e.printStackTrace();
            }
        } else {
            // Otherwise run in local mode for testing
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology("mytopology", conf, builder.createTopology());
        }
    }

}
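The spoutConfig.scheme line above controls how raw Kafka message bytes become tuple values: StringScheme decodes them as UTF-8 into a single string field, which is why the bolt can later read the message with input.getString(0). A minimal stand-alone sketch of that decoding step (this class is illustrative, not storm-kafka's actual implementation):

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch of what storm-kafka's StringScheme does with each
// Kafka message: decode the raw bytes as UTF-8 into one string value.
public class StringSchemeSketch {

    static String deserialize(byte[] rawKafkaMessage) {
        return new String(rawKafkaMessage, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] raw = "hello storm".getBytes(StandardCharsets.UTF_8);
        System.out.println(deserialize(raw));
    }
}
```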

4. Bolt code. For simplicity, it just prints the data.



import java.util.Map;

import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.IBasicBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Tuple;

public class MyKafkaBolt implements IBasicBolt {

    private static final long serialVersionUID = 1L;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        // The KafkaSpout emits each message as a single string field
        String kafkaMsg = input.getString(0);
        System.err.println("bolt:" + kafkaMsg);
    }

    @Override
    public void cleanup() {
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Terminal bolt: nothing is emitted downstream
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }

}

5. How to determine the zkRoot for SpoutConfig: look at Kafka's server.properties file. If the zookeeper.connect value has no chroot suffix such as /bc, then zkRoot is simply ""; otherwise zkRoot is the chroot, e.g. for zookeeper.connect=localhostlei1:2181,localhostlei2:2181,localhostlei3:2181/bc the zkRoot is /bc.

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhostlei1:2181,localhostlei2:2181,localhostlei3:2181
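The rule in step 5 amounts to: the zkRoot is whatever follows the host:port list in zookeeper.connect (the ZooKeeper chroot), or the empty string if there is none. A small stand-alone helper illustrating this (zkRootOf is a hypothetical name, not part of storm-kafka):

```java
// Hypothetical helper: derive the SpoutConfig zkRoot from Kafka's
// zookeeper.connect setting. Host:port entries contain no '/', so the
// first '/' (if any) marks the start of the chroot.
public class ZkRootHelper {

    static String zkRootOf(String zookeeperConnect) {
        int slash = zookeeperConnect.indexOf('/');
        return slash < 0 ? "" : zookeeperConnect.substring(slash);
    }

    public static void main(String[] args) {
        // No chroot suffix -> zkRoot is the empty string
        System.out.println(zkRootOf("localhostlei1:2181,localhostlei2:2181,localhostlei3:2181"));
        // With a /bc chroot -> zkRoot is "/bc"
        System.out.println(zkRootOf("localhostlei1:2181,localhostlei2:2181,localhostlei3:2181/bc"));
    }
}
```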

6. After the topology starts, try writing data into Kafka; the data is consumed by Storm immediately.

Copyright notice: this is an original post by the author and may not be reproduced without permission. https://blog.csdn.net/qq_22222499/article/details/72868587