Window Operations in Spark Streaming

Original post, 2014-04-15 18:16:24

Spark Streaming's window operations can be understood as periodically processing the data that arrived within a certain time span.

Forgive my clumsy wording... let me just show the diagram, since a picture is worth a thousand words:

[Figure: the sliding-window diagram from the official Spark Streaming docs, showing a DStream divided into time units, with a window 3 units long that slides forward 2 units at a time.]

As the figure shows:

1. The red rectangle is a window; what the window holds is the stream data from a span of time.

2. Each "time" here is one time unit. In the official example, the window length is 3 time units, and the window slides once every 2 time units.

So a window-based operation requires two parameters:

  • window length - The duration of the window (3 in the figure).
  • slide interval - The interval at which the window-based operation is performed (2 in the figure).

1. Window length: I think of it as a container for the data of a time span.
2. Slide interval: you can roughly think of it as a cron expression. - -!

Let me give an example:
Take the most famous example, word count: every 10 seconds, count the data that arrived over the last 30 seconds.
// Reduce last 30 seconds of data, every 10 seconds
val windowedWordCounts = pairs.reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10))

Here pairs is a DStream of (word, 1) pairs, similar to a MappedRDD of (word, 1) tuples.
reduceByKeyAndWindow works like reduceByKey on an RDD: it applies the given function,
here aggregating the values by key over the window and adding them up.
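To make this concrete, here is a minimal end-to-end sketch of the whole job. The socket source on localhost:9999, the local[2] master, and the 10-second batch interval are illustrative assumptions of mine, not from the original snippet:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WindowedWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("WindowedWordCount")
    // Batch interval is 10 seconds; the window length and slide interval
    // below must both be multiples of it.
    val ssc = new StreamingContext(conf, Seconds(10))

    // Assumed source: lines of text arriving on a local socket.
    val lines = ssc.socketTextStream("localhost", 9999)
    val pairs = lines.flatMap(_.split(" ")).map(word => (word, 1))

    // Every 10 seconds, sum the (word, 1) pairs seen in the last 30 seconds.
    val windowedWordCounts =
      pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

    windowedWordCounts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

One thing to keep in mind: both Seconds(30) and Seconds(10) must be multiples of the batch interval passed to the StreamingContext, otherwise the job fails at startup.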

Below I paste the relevant API, for reference:
Transformations and their meanings:

  • window(windowLength, slideInterval) - Return a new DStream which is computed based on windowed batches of the source DStream.
  • countByWindow(windowLength, slideInterval) - Return a sliding window count of elements in the stream.
  • reduceByWindow(func, windowLength, slideInterval) - Return a new single-element stream, created by aggregating elements in the stream over a sliding interval using func. The function should be associative so that it can be computed correctly in parallel.
  • reduceByKeyAndWindow(func, windowLength, slideInterval, [numTasks]) - When called on a DStream of (K, V) pairs, returns a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function func over batches in a sliding window. Note: by default, this uses Spark's default number of parallel tasks (2 for local mode, 8 for a cluster) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.
  • reduceByKeyAndWindow(func, invFunc, windowLength, slideInterval, [numTasks]) - A more efficient version of the above reduceByKeyAndWindow() where the reduce value of each window is calculated incrementally using the reduce values of the previous window. This is done by reducing the new data that enters the sliding window, and "inverse reducing" the old data that leaves the window. An example would be "adding" and "subtracting" counts of keys as the window slides. However, it is applicable only to "invertible reduce functions", that is, reduce functions which have a corresponding "inverse reduce" function (taken as parameter invFunc). As in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument.
  • countByValueAndWindow(windowLength, slideInterval, [numTasks]) - When called on a DStream of (K, V) pairs, returns a new DStream of (K, Long) pairs where the value of each key is its frequency within a sliding window. As in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument.
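The invertible variant deserves a quick sketch, since its signature is easy to get wrong. This assumes the same pairs DStream and ssc from the word-count sketch above; the checkpoint directory is a placeholder:

// The incremental variant needs checkpointing, because it keeps the
// previous window's reduced values as state.
ssc.checkpoint("checkpoint-dir") // placeholder path

val incrementalCounts = pairs.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b, // reduce: add counts for batches entering the window
  (a: Int, b: Int) => a - b, // inverse reduce: subtract counts for batches leaving it
  Seconds(30),               // window length
  Seconds(10)                // slide interval
)
incrementalCounts.print()

Instead of recomputing the full 30-second window every 10 seconds, this only adds the newest 10 seconds and subtracts the oldest 10, which matters for long windows.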
   

Output Operations

When an output operator is called, it triggers the computation of a stream. Currently the following output operators are defined:

Output operations and their meanings:

  • print() - Prints the first ten elements of every batch of data in a DStream on the driver.
  • foreachRDD(func) - The fundamental output operator. Applies a function, func, to each RDD generated from the stream. This function should have side effects, such as printing output, saving the RDD to external files, or writing it over the network to an external system.
  • saveAsObjectFiles(prefix, [suffix]) - Save this DStream's contents as SequenceFiles of serialized objects. The file name at each batch interval is generated from prefix and suffix: "prefix-TIME_IN_MS[.suffix]".
  • saveAsTextFiles(prefix, [suffix]) - Save this DStream's contents as text files. The file name at each batch interval is generated from prefix and suffix: "prefix-TIME_IN_MS[.suffix]".
  • saveAsHadoopFiles(prefix, [suffix]) - Save this DStream's contents as Hadoop files. The file name at each batch interval is generated from prefix and suffix: "prefix-TIME_IN_MS[.suffix]".
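Here is a short sketch of the two output operations you will likely reach for most often, continuing from the windowedWordCounts stream above; the "counts" prefix and the println body are just illustrative:

// Write one set of text files per batch, named "counts-TIME_IN_MS.txt".
windowedWordCounts.saveAsTextFiles("counts", "txt")

// foreachRDD hands you the raw RDD of every batch; collecting to the
// driver and printing is fine for a demo, not for large batches.
windowedWordCounts.foreachRDD { rdd =>
  rdd.collect().foreach { case (word, count) => println(s"$word: $count") }
}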
Original content; when reposting, please credit the source: http://blog.csdn.net/oopsoom/article/details/23776477
