Apache Storm (Low Level)

Storm

Storm Concepts

Storm is a free, open-source distributed real-time computation system. Before version 2.0.0 the core of the architecture was implemented in Clojure; starting with 2.0.0 the internals were substantially reworked and rewritten in Java 8. Storm is a real-time stream-processing engine that can process records with sub-second latency. It is used for realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. A single compute node can process on the order of one million Tuples per second. Storm also integrates with existing data stores (RDBMS/NoSQL) and message queues (Kafka).

Stream computing: analyzing large volumes of continuously moving data in real time, capturing potentially useful information and sending the results on to the next compute node. Mainstream stream-computing frameworks: Kafka Streaming, Apache Storm, Spark Streaming, Flink DataStream, etc.

  • Kafka Streaming: a stream-computing toolkit based on the Kafka Streams library and shipped as a jar; simple and easy to integrate.
  • Apache Storm/JStorm: stream-processing frameworks that handle data streams and state management.
  • Spark Streaming: a stream-processing framework built on top of Spark batch processing; it uses micro-batching and is therefore criticized for relatively high latency.
  • Flink DataStream/Blink: a third-generation stream-computing framework that absorbed the design experience of Spark and Storm; it improves greatly on latency, usability and performance, and is, so far, regarded as the strongest stream-computing engine.

Architecture Overview

Apache Storm provides a stream-computation abstraction called a Topology, roughly the counterpart of a Hadoop MapReduce job. The key difference is that an MR job eventually terminates, whereas a Topology keeps running until the user stops it with the storm kill command. Storm provides highly reliable, scalable and fault-tolerant stream computation, guaranteeing reliable processing of data/Tuples (at-least-once or exactly-once), and it integrates easily with existing services such as HDFS, Kafka, HBase, Redis, Memcached and YARN. A single Storm stage can process about one million Tuples per second.


nimbus: the master node; it distributes code, assigns tasks, and detects failures of the Supervisors.

supervisor: accepts task assignments from Nimbus and starts Worker processes to execute them.

zookeeper: coordinates Nimbus and the Supervisors. Storm stores the state of the nimbus and supervisor processes in ZooKeeper, which makes Nimbus and Supervisor effectively stateless and enables fast fail-over, giving the stream computation remarkable stability.

Worker: a Java process started by a Supervisor for one specific Topology. A Worker runs Executors (threads) to carry out the work, and each unit of work is packaged as a Task.

Cluster Setup

Install JDK 8+, configure the hostname-to-IP mappings, and disable the firewall.

  1. Synchronize the clocks
[root@CentOSX ~]# yum install -y ntp 
[root@CentOSX ~]# service ntpd start 
[root@CentOSX ~]# ntpdate cn.pool.ntp.org
  2. Install the ZooKeeper cluster
[root@CentOSX ~]# tar -zxf zookeeper-3.4.6.tar.gz -C /usr/ 
[root@CentOSX ~]# mkdir zkdata 
#copy zoo_sample.cfg to zoo.cfg (or simply rename zoo_sample.cfg to zoo.cfg)
[root@CentOSX ~]# cp /usr/zookeeper-3.4.6/conf/zoo_sample.cfg /usr/zookeeper-3.4.6/conf/zoo.cfg  
#edit zoo.cfg
[root@CentOSX ~]# vi /usr/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000 
dataDir=/root/zkdata 
clientPort=2181 
#start zookeeper
[root@CentOSX ~]# /usr/zookeeper-3.4.6/bin/zkServer.sh start zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@CentOSX ~]# /usr/zookeeper-3.4.6/bin/zkServer.sh status zoo.cfg
  3. Install and configure Storm
[root@CentOSX ~]# tar -zxf apache-storm-2.0.0.tar.gz -C /usr/ 
#set the Storm environment variables
[root@CentOSX ~]# vi .bashrc 
STORM_HOME=/usr/apache-storm-2.0.0 
JAVA_HOME=/usr/java/latest 
CLASSPATH=. 
PATH=$PATH:$JAVA_HOME/bin:$STORM_HOME/bin 
export JAVA_HOME 
export CLASSPATH 
export PATH 
export STORM_HOME

#reload the environment
[root@CentOSX ~]# source .bashrc 
#check the Storm version to verify the installation
[root@CentOSX ~]# storm version

Note: with Storm 2.0.0 you additionally need to run yum install -y python-argparse, otherwise the storm command will not work properly.

  4. Edit the storm.yaml configuration file (be careful, this is easy to get wrong)
[root@CentOSX ~]# vi /usr/apache-storm-2.0.0/conf/storm.yaml

########### These MUST be filled in for a storm configuration
#ZooKeeper ensemble
storm.zookeeper.servers: 
	- "CentOSA" 
	- "CentOSB" 
	- "CentOSC" 

#local directory where Storm keeps its working state (jars, confs)
storm.local.dir: "/usr/storm-stage" 

#nimbus seed nodes: distribute code/assign tasks/detect failures; master-slave, leader elected via ZooKeeper
nimbus.seeds: ["CentOSA","CentOSB","CentOSC"] 

#ports for the worker slots
supervisor.slots.ports: 
	- 6700 
	- 6701 
	- 6702 
	- 6703

Note: the YAML format requires the leading space in front of each entry; keep it.

  5. Start the Storm daemons
[root@CentOSX ~]# nohup storm nimbus >/dev/null 2>&1 &      # start the master (run on every node)
[root@CentOSX ~]# nohup storm supervisor >/dev/null 2>&1 &  # start the compute node (run on every node)
[root@CentOSA ~]# nohup storm ui >/dev/null 2>&1 &          # start the web UI (any one node is enough)
  6. After startup, visit http://CentOSA:8080 (CentOSA is the hostname).

Topology Concepts

Topology: a Storm topology wires together the flow of a stream computation. A Storm topology is analogous to a MapReduce job; one key difference is that a MapReduce job eventually finishes, whereas a topology runs forever (until you kill it, of course).

Streams: a stream is an unbounded sequence of Tuples (the equivalent of a Record in Kafka Streaming), processed and created in parallel in a distributed fashion. Streams are defined with a schema that names the fields in the stream's Tuples.

Tuple: a single record in Storm. A Tuple stores an array of elements and is read-only; its elements cannot be modified.

Tuple t = new Tuple(new Object[]{1, "zs", true});    // read-only (illustrative)

Spouts: responsible for producing Tuples; they are the source of Streams. A Spout usually reads data from an external system, wraps it into Tuples, and emits those Tuples into the Topology. See IRichSpout | BaseRichSpout.

Bolts: every Tuple in a Topology is processed by Bolts. Bolts are used for filtering, aggregation, functions, joins, writing data to a DB, and so on.

IRichBolt | BaseRichBolt: at-most-once processing.
IBasicBolt | BaseBasicBolt: at-least-once processing.
IStatefulBolt | BaseStatefulBolt: stateful computation.
See the concepts section of the official Storm documentation.

Quick Start

pom dependencies

<dependency>
	<groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>2.0.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-client</artifactId>
    <version>2.0.0</version>
    <scope>provided</scope>
</dependency>

Writing the Spout

public class WordCountSpout extends BaseRichSpout {
  	 private String[] lines = {"this is a demo", "hello Storm", "ni hao"};
     //the collector used to emit tuples downstream
     private SpoutOutputCollector collector;

     public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
         this.collector = collector;
     }

     //emit a Tuple downstream; its schema is declared in declareOutputFields
     public void nextTuple() {
         Utils.sleep(1000);//sleep for 1 second
         String line = lines[new Random().nextInt(lines.length)];
         collector.emit(new Values(line));
     }

     //declare the fields of the emitted tuples
     public void declareOutputFields(OutputFieldsDeclarer declarer) {
         declarer.declare(new Fields("line"));
     }
}

Writing the Bolts

//1.LineSplitBolt 
public class LineSplitBolt extends BaseRichBolt {
     //the collector used to emit tuples downstream
    private OutputCollector collector;
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector=collector;
    }
	
    public void execute(Tuple input) {
        String line = input.getStringByField("line");//field name declared by the upstream spout's schema
        String[] tokens = line.split("\\W+");
        for (String token : tokens) {
            collector.emit(new Values(token,1));
        }
    }
	//declare the fields of the emitted tuples
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word","count"));
    }
}

//2.WordCountBolt 
public class WordCountBolt extends BaseRichBolt {
    //in-memory state
    private Map<String,Integer> keyValueState;
    //the collector used to emit tuples downstream
    private OutputCollector collector;
    
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector=collector;
        keyValueState=new HashMap<String, Integer>();
    }

    public void execute(Tuple input) {
        String key = input.getStringByField("word");//field name declared by the upstream bolt's schema
        int count=0;
        if(keyValueState.containsKey(key)){
            count=keyValueState.get(key);
        }
        //update the state
        int currentCount=count+1;
        keyValueState.put(key,currentCount);
        //emit the result downstream
        collector.emit(new Values(key,currentCount));
    }
	//declare the fields of the emitted tuples
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("key","result"));
    }
}

//3.WordPrintBolt
public class WordPrintBolt extends BaseRichBolt {
    
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
    }

    public void execute(Tuple input) {
    	//field names declared by the upstream bolt's schema
        String word=input.getStringByField("key");
        Integer result=input.getIntegerByField("result");
        System.out.println(input+"\t"+word+" , "+result);
    }
	//nothing to declare: this is the last step of the pipeline
    public void declareOutputFields(OutputFieldsDeclarer declarer) {

    }
}

Writing the Topology

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class WordCountTopology {
    public static void main(String[] args) throws Exception {
        //1. create the TopologyBuilder
        TopologyBuilder builder = new TopologyBuilder();
        //2. wire up the stream-processing logic (spouts, bolts and groupings)
        builder.setSpout("WordCountSpout", new WordCountSpout(), 1);
        builder.setBolt("LineSplitBolt", new LineSplitBolt(), 3).
                shuffleGrouping("WordCountSpout");//LineSplitBolt receives the spout's tuples via shuffle grouping
        
        //because we count per key, shuffleGrouping cannot be used here; fieldsGrouping is required,
        //and the Fields argument names the field of the incoming tuples to group on
        builder.setBolt("WordCountBolt", new WordCountBolt(), 3)
        		.fieldsGrouping("LineSplitBolt", new Fields("word"));
        		
        builder.setBolt("WordPrintBolt", new WordPrintBolt(), 4)
        		.fieldsGrouping("WordCountBolt", new Fields("key"));
        //3. submit the topology
        Config conf = new Config();
        conf.setNumWorkers(3); //number of Worker (JVM) processes for this Topology
        conf.setNumAckers(0); //disable Storm's acker (reliability) mechanism
        StormSubmitter.submitTopology("worldcount", conf, builder.createTopology());
    }
}

shuffleGrouping: the downstream LineSplitBolt receives the tuples emitted by the upstream Spout at random.
fieldsGrouping: tuples with the same value of the given Fields are always sent to the same Bolt task.

Submitting the Topology

Package the application with mvn package, then upload the jar to any machine in the cluster.

Run the jar with: storm jar <path to jar> <fully-qualified Topology class>
For example: storm jar <jar name> com.test.WordCountTopology, where com.test.WordCountTopology is the Topology class.

After a successful submission you can check the topology in the Storm UI at http://CentOSA:8080/.

Listing topologies

[root@CentOSA ~]# storm list 
... 
Topology_name   Status   Num_tasks   Num_workers   Uptime_secs   Topology_Id               Owner
-------------------------------------------------------------------------------------------------
worldcount      ACTIVE   11          3             66            worldcount-2-1560760048   root

Killing a Topology

[root@CentOSX ~]# storm kill worldcount

Understanding Topology Parallelism

  • Parallelism (executors) corresponds one-to-one with threads.

  • conf.setNumWorkers(3);

This determines the number of Worker processes the Topology needs. Each Worker belongs to exactly one Topology, and each Worker represents one unit of compute resources, called a Slot.


A Supervisor here starts/manages at most 4 Workers/Slots (only four slot ports were configured in storm.yaml). Workers cannot be shared across Topologies; the JVM processes of a stream-computation job are allocated before it starts. Storm isolates compute resources by means of Workers/Slots.

  • Tasks and Executors

A Task is an instance of a Spout or Bolt. By default one thread (Executor) runs exactly one Task (one Spout or Bolt per thread). For example:

builder.setBolt("LineSplitBolt", new LineSplitBolt(), 3)
          .setNumTasks(5)        //number of Task instances
          .shuffleGrouping("WordCountSpout");//LineSplitBolt receives the spout's tuples via shuffle grouping

The LineSplitBolt component uses 3 threads, and the system instantiates 5 LineSplitBolt instances, distributed 2/2/1 across the three threads.

  • Topology parallelism is determined by the Workers (processes), Executors (threads) and Tasks (instances).

Question: do more Workers always mean better throughput?
When the data volume is very large a single Worker is not appropriate either, since it would be under too much pressure;
but the more Workers there are, the higher the communication cost between them.
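As a worked check against the storm list output above: the word-count topology declared a parallelism of 1 (WordCountSpout) + 3 (LineSplitBolt) + 3 (WordCountBolt) + 4 (WordPrintBolt) = 11 executors. Since no setNumTasks calls were made there is one task per executor, and setNumAckers(0) removed the acker tasks, which matches the 11 tasks reported, running in the 3 Workers requested by conf.setNumWorkers(3).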

  • storm rebalance
[root@CentOSA ~]# storm rebalance 
usage: storm rebalance [-h] [-w WAIT_TIME_SECS] [-n NUM_WORKERS] 
					   [-e EXECUTORS] [-r RESOURCES] [-t TOPOLOGY_CONF] 
					   [--config CONFIG] 
					   [-storm_config_opts STORM_CONFIG_OPTS] 
					   topology-name
  • Change the number of Workers
[root@CentOSX ~]# storm rebalance -w 10 -n 6 wordcount02
  • Change the parallelism of a component; it generally must not exceed the number of Tasks (each thread must run at least one task)
[root@CentOSX ~]# storm rebalance -w 10 -n 3 -e LineSplitBolt=5 wordcount02

Reliable Tuple Processing

Storm tracks whether the whole Tuple tree of a message is completely processed using a system bolt called the acker (__ackerBolt). If processing fails or times out, the acker causes the fail method of the Spout that emitted the Tuple to be invoked, asking the Spout to re-emit it. By default the acker parallelism equals the number of Workers; the mechanism can be disabled with config.setNumAckers(0).

How to emit reliably
  • Spout side
  • the Spout must provide a msgId when emitting a tuple
  • and it must override the ack and fail methods
public class WordCountSpout extends BaseRichSpout {
	private String[] lines = {"this is a demo", "hello Storm", "ni hao"};
	private SpoutOutputCollector collector;

	public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
		this.collector = collector;
	}

	public void nextTuple() {
		Utils.sleep(5000);//sleep for 5 seconds
		int msgId = new Random().nextInt(lines.length);
		String line = lines[msgId];
		//emit the Tuple with a msgId
		collector.emit(new Values(line), msgId);
	}

	//declare the fields of the emitted tuples
	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("line"));
	}

	//called by the acker when the tuple tree was processed successfully
	@Override
	public void ack(Object msgId) {
		System.out.println("processed successfully: " + msgId);
	}

	//called by the acker when the tuple tree failed or timed out
	@Override
	public void fail(Object msgId) {
		String line = lines[(Integer) msgId];
		System.out.println("processing failed: " + msgId + "\t" + line);
	}
}
  • Bolt side
  • anchor each child Tuple to its parent Tuple when emitting
  • acknowledge the state of the parent Tuple upstream, either with collector.ack(input) or with collector.fail(input)
public void execute(Tuple input) {
	try {
		//... do something with the input ...
		//anchor the emitted child tuple to the parent tuple
		collector.emit(input, new Values(/* fields */));
		//acknowledge the parent tuple upstream
		collector.ack(input);
	} catch (Exception e) {
		collector.fail(input);
	}
}
How the reliability tracking works


For more details see: http://storm.apache.org/releases/2.0.0/Guaranteeing-message-processing.html

  • BasicBolt|BaseBasicBolt

Many Bolts follow the common pattern of reading an input tuple, emitting tuples based on it (anchoring them, and calling fail on error), and acking the input tuple at the end of the execute method.
Storm therefore provides a shortcut: if you use the ack mechanism, a Bolt only needs to implement the IBasicBolt interface or extend BaseBasicBolt; there is no need to override ack and fail or to acknowledge upstream yourself.

//extends BaseBasicBolt: no need to override ack/fail or to acknowledge upstream
public class WordCountBolt extends BaseBasicBolt {
	//in-memory state
	private Map<String, Integer> keyValueState;

	@Override
	public void prepare(Map<String, Object> topoConf, TopologyContext context) {
		keyValueState = new HashMap<String, Integer>();
	}

	public void execute(Tuple input, BasicOutputCollector collector) {
		String key = input.getStringByField("word");
		int count = 0;
		if (keyValueState.containsKey(key)) {
			count = keyValueState.get(key);
		}
		//update the state
		int currentCount = count + 1;
		keyValueState.put(key, currentCount);
		//emit the result downstream
		collector.emit(new Values(key, currentCount));
	}

	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("key", "result"));
	}
}
Disabling the acker mechanism
  • set the number of ackers to 0
  • do not provide a msgId when the Spout emits
  • do not anchor in the Bolts


Advantage: this improves Storm's processing performance and reduces latency.
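Putting those three points together, a minimal sketch based on the word-count example from the quick start:

// in the Topology: no acker executors
Config conf = new Config();
conf.setNumAckers(0);

// in the Spout: emit without a msgId
collector.emit(new Values(line));

// in the Bolt: emit without anchoring to the input tuple
collector.emit(new Values(token, 1));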

Storm State Management

Storm provides a mechanism for a Bolt to store and query its own state. Storm ships a default in-memory implementation, as well as implementations backed by Redis/Memcached and HBase. Storm provides IStatefulBolt | BaseStatefulBolt for implementing stateful Bolts.

public class WordCountBolt extends BaseStatefulBolt<KeyValueState<String, Integer>> {
	private KeyValueState<String, Integer> state;
	private OutputCollector collector;

	public void initState(KeyValueState<String, Integer> state) {
		this.state = state;
	}

	@Override
	public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
		this.collector = collector;
	}

	public void execute(Tuple input) {
		String key = input.getStringByField("word");
		Integer count = input.getIntegerByField("count");
		Integer historyCount = state.get(key, 0);
		Integer currentCount = historyCount + count;
		//update the state
		state.put(key, currentCount);
		//the emitted tuple must be anchored to the current input
		collector.emit(input, new Values(key, currentCount));
		collector.ack(input);
	}

	@Override
	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("key", "result"));
	}
}

A Topology that contains a stateful Bolt must have the ack mechanism enabled.
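In practice that means keeping at least one acker and emitting anchored tuples with msgIds, as in the reliability section above. A minimal sketch of the relevant configuration:

Config conf = new Config();
// keep the acker enabled; do NOT call conf.setNumAckers(0) here
// (by default there is one acker executor per worker)
conf.setNumAckers(1);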

State Persistence with Redis

  • pom dependency
<dependency> 
	<groupId>org.apache.storm</groupId> 
	<artifactId>storm-redis</artifactId> 
	<version>2.0.0</version> 
</dependency>
  • Install Redis
[root@CentOSA ~]# yum install -y gcc-c++ 
[root@CentOSA ~]# tar -zxf redis-3.2.9.tar.gz 
[root@CentOSA ~]# cd redis-3.2.9 
[root@CentOSA redis-3.2.9]# vi redis.conf 
bind CentOSA 
protected-mode no 
daemonize yes 
[root@CentOSA redis-3.2.9]# ./src/redis-server redis.conf 
[root@CentOSA redis-3.2.9]# ps -aux | grep redis-server 
Warning: bad syntax, perhaps a bogus '-'? See /usr/share/doc/procps-3.2.8/FAQ 
root 41601 0.1 0.5 135648 5676 ? Ssl 14:45 0:00 ./src/redis-server CentOSA:6379 
root 41609 0.0 0.0 103260 888 pts/1 S+ 14:45 0:00 grep redis-server
  • Configure the topology by adding the following settings
//configure the Redis state provider
conf.put(Config.TOPOLOGY_STATE_PROVIDER,"org.apache.storm.redis.state.RedisKeyValueStateProvider");
Map<String, Object> stateConfig = new HashMap<String, Object>();
Map<String, Object> redisConfig = new HashMap<String, Object>();
redisConfig.put("host","CentOSA");
redisConfig.put("port",6379);
stateConfig.put("jedisPoolConfig",redisConfig);
ObjectMapper objectMapper = new ObjectMapper();
System.out.println(objectMapper.writeValueAsString(stateConfig));
conf.put(Config.TOPOLOGY_STATE_PROVIDER_CONFIG,objectMapper.writeValueAsString(stateConfig));
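The System.out.println above shows the provider configuration handed to Storm; with these settings the serialized JSON looks roughly like:

{"jedisPoolConfig":{"host":"CentOSA","port":6379}}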

State Persistence with HBase

  • pom dependency
<dependency> 
	<groupId>org.apache.storm</groupId> 
	<artifactId>storm-hbase</artifactId> 
	<version>2.0.0</version> 
</dependency>
  • Install HBase
  1. Install Hadoop (omitted)
  2. Install the HBase environment (omitted)
  3. Configure the topology by adding the following settings
config.put(Config.TOPOLOGY_STATE_PROVIDER,"org.apache.storm.hbase.state.HBaseKeyValueStateProvider");
Map<String, Object> hbaseConfig = new HashMap<String, Object>();
hbaseConfig.put("hbase.zookeeper.quorum","CentOSA");
//HBase ZooKeeper connection parameters
config.put("hbase.conf",hbaseConfig);
ObjectMapper objectMapper = new ObjectMapper();
Map<String, Object> stateConfig = new HashMap<String, Object>();
stateConfig.put("hbaseConfigKey","hbase.conf");
stateConfig.put("tableName","namespace:tableName");
stateConfig.put("columnFamily","cf1");
config.put(Config.TOPOLOGY_STATE_PROVIDER_CONFIG,objectMapper.writeValueAsString(stateConfig));

Checkpoint Mechanism

Checkpointing is triggered by an internal checkpoint spout at the interval given by topology.state.checkpoint.interval.ms. If the topology contains at least one IStatefulBolt, the topology builder adds the checkpoint spout automatically. For stateful topologies the builder wraps each IStatefulBolt in a StatefulBoltExecutor, which commits the state when it receives checkpoint tuples. Non-stateful bolts are wrapped in a CheckpointTupleForwarder, which simply forwards checkpoint tuples so that they can flow through the topology DAG. Checkpoint tuples flow on a separate internal stream, $checkpoint. The topology builder wires this checkpoint stream across the whole topology, with the checkpoint spout at its root.


Note: the checkpoint interval must not be larger than topology.message.timeout.secs.
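For example, to checkpoint once per second (a minimal sketch using the string keys quoted above; the values are illustrative):

Config conf = new Config();
conf.put("topology.state.checkpoint.interval.ms", 1000); // checkpoint interval
conf.put("topology.message.timeout.secs", 30);           // must not be smaller than the checkpoint interval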

Checkpointing is described further in the official documentation.

Distributed RPC

Storm is a distributed real-time processing framework that also supports invocation via DRPC. Think of Storm as a cluster whose processing capabilities are exposed to callers through the DRPC interface.

Storm DRPC performs truly parallel computation: a Storm Topology receives the caller's arguments, computes, and finally returns the result to the caller as a Tuple.

Edit the storm.yaml configuration file (again, note the leading space in front of each entry):

 storm.zookeeper.servers:
     - "CentOSA"
     - "CentOSB"
     - "CentOSC"
 storm.local.dir: "/usr/apache-storm-1.2.2/storm-stage"
 nimbus.seeds: ["CentOSA","CentOSB","CentOSC"]
 supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
 drpc.servers:
     - "CentOSA"
     - "CentOSB"
     - "CentOSC"
 storm.thrift.transport: "org.apache.storm.security.auth.plain.PlainSaslTransportPlugin"
  • Restart all Storm services
#one extra step: start the drpc server
[root@CentOSX ~]# nohup storm drpc  >/dev/null 2>&1 &
[root@CentOSX ~]# nohup storm nimbus >/dev/null 2>&1 &
[root@CentOSX ~]# nohup storm supervisor >/dev/null 2>&1 &
[root@CentOSA ~]# nohup storm ui >/dev/null 2>&1 &
DRPC Example Walkthrough
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-redis</artifactId>
    <version>2.0.0</version>
</dependency>

WordCountRedisLookupMapper

public class WordCountRedisLookupMapper implements RedisLookupMapper {
    // iTuple is the tuple sent upstream by the DRPC spout; we need it to recover the request id
    public List<Values> toTuple(ITuple iTuple, Object value) {
        Object id = iTuple.getValue(0);
        List<Values> values = Lists.newArrayList();
        if(value == null){
            value = 0;
        }
        values.add(new Values(id, value));

        return values;

    }

    //the first field must be named "id"; the following field names do not matter
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("id", "num"));
    }
    //describes the Redis data type and key to read
    public RedisDataTypeDescription getDataTypeDescription() {
        return new RedisDataTypeDescription(RedisDataTypeDescription.RedisDataType.HASH,"wordcount");
    }

    public String getKeyFromTuple(ITuple iTuple) {
        return iTuple.getString(1);
    }
    //no implementation needed here; this method is used by RedisStoreBolt
    public String getValueFromTuple(ITuple iTuple) {
        return null;
    }
}

TopologyDRPCStreeamTest

public class TopologyDRPCStreeamTest {
    public static void main(String[] args) throws Exception {
        LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("count");
        Config conf = new Config();
        conf.setDebug(false);

        JedisPoolConfig jedisConfig = new JedisPoolConfig.Builder()
                .setHost("CentOSA").setPort(6379).build();

        RedisLookupBolt lookupBolt = new RedisLookupBolt(jedisConfig, new WordCountRedisLookupMapper());
        builder.addBolt(lookupBolt);

        StormSubmitter.submitTopology("drpc-demo", conf, builder.createRemoteTopology());

    }
}
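Once the DRPC topology is running, a client can invoke the "count" function through one of the configured DRPC servers. A minimal client-side sketch (the host, port and the looked-up word are assumptions for illustration, not part of the original):

import java.util.Map;
import org.apache.storm.utils.DRPCClient;
import org.apache.storm.utils.Utils;

public class DrpcClientTest {
    public static void main(String[] args) throws Exception {
        // read storm.yaml/defaults so the thrift transport is configured
        Map<String, Object> conf = Utils.readStormConfig();
        // 3772 is the default DRPC port; CentOSA is one of the drpc.servers configured above
        DRPCClient client = new DRPCClient(conf, "CentOSA", 3772);
        // "count" is the function name registered by LinearDRPCTopologyBuilder("count")
        String result = client.execute("count", "hello");
        System.out.println(result);
        client.close();
    }
}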
  • Package the application
  • Submit the topology
Run storm jar <jar name> <Topology class> --artifacts <maven coordinates of the runtime dependencies>.
Since this example depends on Redis, the storm-redis Maven coordinates are passed:
[root@CentOSA ~]# storm jar storm-lowlevel-1.0-SNAPSHOT.jar  com.test.TopologyDRPCStreeamTest --artifacts 'org.apache.storm:storm-redis:2.0.0'

--artifacts specifies the Maven coordinates of the dependencies the program needs at runtime; the storm script downloads them automatically. Separate multiple dependencies with ^. If a dependency lives in a private repository, also use --artifactRepositories.

[root@CentOSA ~]# storm jar storm-lowlevel-1.0-SNAPSHOT.jar  com.test.TopologyDRPCStreeamTest 
        --artifacts 'org.apache.storm:storm-redis:2.0.0' 
        --artifactRepositories  'local^http://192.168.111.1:8081/nexus/content/groups/public/'

Using --artifactRepositories requires your own private Maven repository manager.
See: installing and configuring a private Maven repository.
The --artifacts options are documented at the very bottom of the referenced page.

Kafka Integration (the official documentation has integration examples for many other systems as well)

  • pom
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka-client</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>
  • Building the KafkaSpout
//in this example the data in a Kafka topic feeds the spout
public class KafkaTopologyDemo {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        String boostrapServers="CentOSA:9092,CentOSB:9092,CentOSC:9092";
        String topic="topic01";

        KafkaSpout<String, String> kafkaSpout = buildKafkaSpout(boostrapServers,topic);

        //default output tuple fields: new Fields(new String[]{"topic", "partition", "offset", "key", "value"});
        builder.setSpout("KafkaSpout",kafkaSpout,3);

        builder.setBolt("KafkaPrintBlot",new KafkaPrintBlot(),1)
                .shuffleGrouping("KafkaSpout");


        Config conf = new Config();
        conf.setNumWorkers(3);
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("kafkaspout",conf,builder.createTopology());
    }
    public static KafkaSpout<String, String> buildKafkaSpout(String boostrapServers,String topic){

        KafkaSpoutConfig<String,String> kafkaspoutConfig=KafkaSpoutConfig.builder(boostrapServers,topic)
                .setProp(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG,"g1")
                .setEmitNullTuples(false)
                .setFirstPollOffsetStrategy(FirstPollOffsetStrategy.LATEST)
                .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.AT_LEAST_ONCE)
                .setMaxUncommittedOffsets(10)//once 10 offsets of a partition are uncommitted the spout stops polling (simple back-pressure)
                .setRecordTranslator(new MyRecordTranslator<String, String>())
                .build();
        return new KafkaSpout<String, String>(kafkaspoutConfig);
    }
}
  • MyRecordTranslator
public class MyRecordTranslator<K, V>  extends DefaultRecordTranslator<K, V> {
    @Override
    public List<Object> apply(ConsumerRecord<K, V> record) {
        return new Values(new Object[]{record.topic(),record.partition(),record.offset(),record.key(),record.value(),record.timestamp()});
    }

    @Override
    public Fields getFieldsFor(String stream) {
        return new Fields("topic","partition","offset","key","value","timestamp");
    }
}
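KafkaTopologyDemo above wires in a KafkaPrintBlot whose source is not included; a minimal sketch of such a print-only bolt (the body is an assumption, only the class name comes from the original):

public class KafkaPrintBlot extends BaseRichBolt {
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        //nothing to prepare; this bolt only prints
    }

    public void execute(Tuple input) {
        //fields produced by MyRecordTranslator: topic, partition, offset, key, value, timestamp
        System.out.println(input.getStringByField("topic") + "\t"
                + input.getValueByField("partition") + "\t"
                + input.getValueByField("offset") + "\t"
                + input.getValueByField("key") + "\t"
                + input.getValueByField("value"));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        //terminal bolt, nothing to declare
    }
}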

Kafka + HBase + Redis Integration

  • pom
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>2.0.0</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-client</artifactId>
    <version>2.0.0</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-redis</artifactId>
    <version>2.0.0</version>
</dependency>

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-hbase</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka-client</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>

WodCountTopology

//in this example Redis stores the state, a Kafka topic feeds the spout, and HBase stores the results
public class WodCountTopology {
    public static void main(String[] args) throws Exception {

        TopologyBuilder builder=new TopologyBuilder();
        Config conf = new Config();

        //Redis state management
        conf.put(Config.TOPOLOGY_STATE_PROVIDER,"org.apache.storm.redis.state.RedisKeyValueStateProvider");
        Map<String,Object> stateConfig=new HashMap<String,Object>();
        Map<String,Object> redisConfig=new HashMap<String,Object>();
        redisConfig.put("host","CentOSA");
        redisConfig.put("port",6379);
        stateConfig.put("jedisPoolConfig",redisConfig);
        ObjectMapper objectMapper=new ObjectMapper();
        System.out.println(objectMapper.writeValueAsString(stateConfig));
        conf.put(Config.TOPOLOGY_STATE_PROVIDER_CONFIG,objectMapper.writeValueAsString(stateConfig));

        //HBase connection parameters
        Map<String, Object> hbaseConfig = new HashMap<String, Object>();
        hbaseConfig.put("hbase.zookeeper.quorum", "CentOSA");
        conf.put("hbase.conf", hbaseConfig);

        //build the KafkaSpout
        KafkaSpout<String, String> kafkaSpout = KafkaSpoutUtils.buildKafkaSpout("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic01");

        builder.setSpout("KafkaSpout",kafkaSpout,3);
        builder.setBolt("LineSplitBolt",new LineSplitBolt(),3)
                .shuffleGrouping("KafkaSpout");
        builder.setBolt("WordCountBolt",new WordCountBolt(),3)
                .fieldsGrouping("LineSplitBolt",new Fields("word"));
		
		//HBase mapping: row key, column family, column qualifiers
        SimpleHBaseMapper mapper = new SimpleHBaseMapper()
                .withRowKeyField("key")
                .withColumnFields(new Fields("key"))
                .withCounterFields(new Fields("result"))//the value of this field must be numeric; it is used as a counter
                .withColumnFamily("cf1");

        HBaseBolt haseBolt = new HBaseBolt("namespace:tablename", mapper)
                .withConfigKey("hbase.conf");
        builder.setBolt("HBaseBolt",haseBolt,3)
                .fieldsGrouping("WordCountBolt",new Fields("key"));

        StormSubmitter.submitTopology("wordcount1",conf,builder.createTopology());
    }
}

WordCountBolt

public class WordCountBolt extends BaseStatefulBolt<KeyValueState<String,Integer>> {
    private KeyValueState<String,Integer> state;
    private OutputCollector collector;
    public void initState(KeyValueState<String,Integer> state) {
        this.state=state;
    }
    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector=collector;
    }
    public void execute(Tuple input) {
        String key = input.getStringByField("word");
        Integer count=input.getIntegerByField("count");
        Integer historyCount = state.get(key, 0);

        Integer currentCount=historyCount+count;
        //update the state
        state.put(key,currentCount);

        //the emitted tuple must be anchored to the current input
        collector.emit(input,new Values(key,currentCount));
        collector.ack(input);

    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("key","result"));
    }
}

LineSplitBolt

public class LineSplitBolt extends BaseBasicBolt {
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word","count"));
    }
    public void execute(Tuple input, BasicOutputCollector collector) {
        String line = input.getStringByField("value");
        String[] tokens = line.split("\\W+");
        for (String token : tokens) {
            //BasicOutputCollector anchors the emitted tuple to the input automatically
            collector.emit(new Values(token,1));
        }
    }
}
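This topology and the windowed examples below call KafkaSpoutUtils.buildKafkaSpout, which is not shown; it can be assumed to be a small utility wrapping the same builder logic as buildKafkaSpout in KafkaTopologyDemo above, for example:

public class KafkaSpoutUtils {
    public static KafkaSpout<String, String> buildKafkaSpout(String bootstrapServers, String topic) {
        //same KafkaSpoutConfig as in KafkaTopologyDemo, extracted into a reusable helper
        KafkaSpoutConfig<String, String> kafkaSpoutConfig = KafkaSpoutConfig.builder(bootstrapServers, topic)
                .setProp(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer")
                .setProp(ConsumerConfig.GROUP_ID_CONFIG, "g1")
                .setFirstPollOffsetStrategy(FirstPollOffsetStrategy.LATEST)
                .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.AT_LEAST_ONCE)
                .setRecordTranslator(new MyRecordTranslator<String, String>())
                .build();
        return new KafkaSpout<String, String>(kafkaSpoutConfig);
    }
}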

Storm Window Functions

Storm has core support for processing the group of tuples that fall within a window. Windows are specified with the following two parameters (similar to Kafka Streaming):

  • Window length - the length or duration of the window
  • Sliding interval - the interval at which the window slides

Sliding Window (the hopping time window in Kafka Streaming)

Tuples are grouped into windows, and a new window is produced every sliding interval. The example below is a time-based sliding window with a window length of 10 seconds and a sliding interval of 5 seconds. As the diagram shows, sliding windows overlap, so a single tuple may belong to one or more windows.

........| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |...
-5       0       5             10        15    -> time
|<------- w1 -->|
         |<---------- w2 ----->|
                 |<------------ w3 ----->|
public class WodCountTopology {
	public static void main(String[] args) throws Exception {
		TopologyBuilder builder = new TopologyBuilder();
		Config conf = new Config();
		//build the KafkaSpout (this example integrates with Kafka; any spout would work)
		KafkaSpout<String, String> kafkaSpout = KafkaSpoutUtils.buildKafkaSpout("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic01");
		builder.setSpout("KafkaSpout", kafkaSpout, 3);
		builder.setBolt("LineSplitBolt", new LineSplitBolt(), 3).shuffleGrouping("KafkaSpout");
		ClickWindowCountBolt clickWindowCountBolt = new ClickWindowCountBolt();
		clickWindowCountBolt.withWindow(BaseWindowedBolt.Duration.seconds(5), BaseWindowedBolt.Duration.seconds(2));
		builder.setBolt("ClickWindowCountBolt", clickWindowCountBolt, 3).fieldsGrouping("LineSplitBolt", new Fields("word"));
		builder.setBolt("WordPrintBolt", new WordPrintBolt(), 3).fieldsGrouping("ClickWindowCountBolt", new Fields("key"));
		//new LocalCluster(): run the Topology locally for easy debugging, no need to deploy it to the Linux cluster
		new LocalCluster().submitTopology("wordcount", conf, builder.createTopology());
	}
}

Tumbling Window (the tumbling time window in Kafka Streaming)

Tuples are grouped into windows whose sliding interval is exactly the window length. The biggest difference from sliding windows is therefore that tumbling windows never overlap: each tuple belongs to exactly one window.

| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |...
0        5             10        15    -> time
    w1         w2           w3
public class WodCountTopology {
	public static void main(String[] args) throws Exception {
		TopologyBuilder builder = new TopologyBuilder();
		Config conf = new Config();
		//build the KafkaSpout (this example integrates with Kafka; any spout would work)
		KafkaSpout<String, String> kafkaSpout = KafkaSpoutUtils.buildKafkaSpout("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic01");
		builder.setSpout("KafkaSpout", kafkaSpout, 3);
		builder.setBolt("LineSplitBolt", new LineSplitBolt(), 3).shuffleGrouping("KafkaSpout");
		ClickWindowCountBolt clickWindowCountBolt = new ClickWindowCountBolt();
		//configure a tumbling window
		clickWindowCountBolt.withTumblingWindow(BaseWindowedBolt.Duration.seconds(5));
		builder.setBolt("ClickWindowCountBolt", clickWindowCountBolt, 3).fieldsGrouping("LineSplitBolt", new Fields("word"));
		builder.setBolt("WordPrintBolt", new WordPrintBolt(), 3).fieldsGrouping("ClickWindowCountBolt", new Fields("key"));
		//new LocalCluster(): run the Topology locally for easy debugging, no need to deploy it to the Linux cluster
		new LocalCluster().submitTopology("wordcount", conf, builder.createTopology());
	}
}

The ClickWindowCountBolt class

public class ClickWindowCountBolt extends BaseWindowedBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(TupleWindow tupleWindow) {
        Long startTimestamp = tupleWindow.getStartTimestamp();
        Long endTimestamp = tupleWindow.getEndTimestamp();
        SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
        System.out.println(sdf.format(startTimestamp) + "\t" + sdf.format(endTimestamp));
        HashMap<String, Integer> hashMap = new HashMap<String, Integer>();
        List<Tuple> tuples = tupleWindow.get();
        for (Tuple tuple : tuples) {
            String key = tuple.getStringByField("word");
            Integer historyCount = 0;
            if (hashMap.containsKey(key)) {
                historyCount = hashMap.get(key);
            }
            int currentCount = historyCount + 1;
            hashMap.put(key, currentCount);
        }
        //emit the aggregated counts to the downstream print bolt
        for (Map.Entry<String, Integer> entry : hashMap.entrySet()) {
            collector.emit(tupleWindow.get(), new Values(entry.getKey(), entry.getValue()));
        }
        for (Tuple tuple : tupleWindow.get()) {
            collector.ack(tuple);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("key", "result"));
    }
}
Time semantics of Storm windows
  • By default, Storm computes windows based on the system time at which a Tuple arrives at the Bolt. This is only meaningful when the gap between the time a record was produced and the time it is processed is very small; this strategy is called Processing Time.
  • In real business scenarios, the time at the compute node is usually later than the time the data was produced, in which case processing-time windows lose their meaning. Storm can instead extract a timestamp carried in the Tuple and compute windows on it; this strategy is called Event Time.
Handling late Tuples
  • Watermark: the latest timestamp seen on incoming Tuples minus the lag. The watermark is what advances event time and triggers window evaluation (see the worked example after the next bullet).


  • lag: the delay interval subtracted when computing the watermark
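A worked example with the settings used below (10-second windows sliding every 5 seconds, lag of 2 seconds): if the newest tuple seen so far carries timestamp 10:00:12, the watermark is 10:00:12 - 2 s = 10:00:10, so the window ending at 10:00:10 can be evaluated. A tuple that arrives later with timestamp 10:00:07 is already behind the watermark; because withLateTupleStream("latestream") is configured, it is routed to the late stream and printed by LateBolt instead of being counted in a window.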
public class WodCountTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        Config conf = new Config();
        //build the KafkaSpout
        KafkaSpout<String, String> kafkaSpout = KafkaSpoutUtils.buildKafkaSpout("CentOSA:9092,CentOSB:9092,CentOSC:9092", "topic02");
        builder.setSpout("KafkaSpout", kafkaSpout, 3);
        builder.setBolt("ExtractTimeBolt", new ExtractTimeBolt(), 3).shuffleGrouping("KafkaSpout");
        builder.setBolt("ClickWindowCountBolt", new ClickWindowCountBolt().withWindow(BaseWindowedBolt.Duration.seconds(10), BaseWindowedBolt.Duration.seconds(5))
                .withTimestampField("timestamp")
                .withLag(BaseWindowedBolt.Duration.seconds(2))
                .withWatermarkInterval(BaseWindowedBolt.Duration.seconds(1))
                .withLateTupleStream("latestream"), 1)
                .fieldsGrouping("ExtractTimeBolt", new Fields("word"));
        builder.setBolt("lateBolt", new LateBolt(), 3).shuffleGrouping("ClickWindowCountBolt", "latestream");
        new LocalCluster().submitTopology("wordcount", conf, builder.createTopology());
    }
}

The ClickWindowCountBolt class (event-time version)

public class ClickWindowCountBolt extends BaseWindowedBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(TupleWindow tupleWindow) {
        Long startTimestamp = tupleWindow.getStartTimestamp();
        Long endTimestamp = tupleWindow.getEndTimestamp();
        SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
        System.out.println(sdf.format(startTimestamp) + "\t" + sdf.format(endTimestamp) + " \t" + this);
        for (Tuple tuple : tupleWindow.get()) {
            collector.ack(tuple);
            String key = tuple.getStringByField("word");
            System.out.println("\t" + key);
        }
    }
}

The ExtractTimeBolt class

public class ExtractTimeBolt extends BaseBasicBolt {
    public void execute(Tuple input, BasicOutputCollector collector) {
        String line = input.getStringByField("value");
        String[] tokens = line.split("\\W+");
        SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");
        Long ts = Long.parseLong(tokens[1]);
        System.out.println("received: " + tokens[0] + "\t" + sdf.format(ts));
        collector.emit(new Values(tokens[0], ts));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "timestamp"));
    }
}

The LateBolt class

public class LateBolt extends BaseBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        System.out.println("late tuple: " + tuple);
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }
}