Spark Streaming 3: Transformations

Reference: Spark 1.6.2 Streaming Programming Guide: http://spark.apache.org/docs/1.6.2/streaming-programming-guide.html


Transformations on DStreams

Similar to RDDs, DStreams support many transformations. The most commonly used ones are listed below.

  • map(func): Return a new DStream by passing each element of the source DStream through a function func.
  • flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items.
  • filter(func): Return a new DStream by selecting only the records of the source DStream on which func returns true.
  • repartition(numPartitions): Changes the level of parallelism in this DStream by creating more or fewer partitions.
  • union(otherStream): Return a new DStream that contains the union of the elements in the source DStream and otherDStream.
  • count(): Return a new DStream of single-element RDDs by counting the number of elements in each RDD of the source DStream.
  • reduce(func): Return a new DStream of single-element RDDs by aggregating the elements in each RDD of the source DStream using a function func (which takes two arguments and returns one). The function should be associative so that it can be computed in parallel.
  • countByValue(): When called on a DStream of elements of type K, return a new DStream of (K, Long) pairs where the value of each key is its frequency in each RDD of the source DStream.
  • reduceByKey(func, [numTasks]): When called on a DStream of (K, V) pairs, return a new DStream of (K, V) pairs where the values for each key are aggregated using the given reduce function. Note: by default, this uses Spark's default number of parallel tasks (2 for local mode; in cluster mode the number is determined by the config property spark.default.parallelism) to do the grouping. You can pass an optional numTasks argument to set a different number of tasks.
  • join(otherStream, [numTasks]): When called on two DStreams of (K, V) and (K, W) pairs, return a new DStream of (K, (V, W)) pairs with all pairs of elements for each key.
  • cogroup(otherStream, [numTasks]): When called on a DStream of (K, V) and (K, W) pairs, return a new DStream of (K, Seq[V], Seq[W]) tuples.
  • transform(func): Return a new DStream by applying an RDD-to-RDD function to every RDD of the source DStream. This can be used to do arbitrary RDD operations on the DStream.
  • updateStateByKey(func): Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values for the key. This can be used to maintain arbitrary state data for each key.
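
These compose just like their RDD counterparts. A minimal sketch of chaining a few of the stateless ones (the socket source on localhost:9999 and the length-3 filter are made-up illustrations, not from the original post):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "transformationsDemo")
ssc = StreamingContext(sc, 10)  # 10-second batches

# Hypothetical source: one comma-separated record per line
words = ssc.socketTextStream("localhost", 9999) \
           .flatMap(lambda line: line.split(','))

# filter keeps only matching records; countByValue emits (word, count)
# pairs for each batch
long_word_counts = words.filter(lambda w: len(w) > 3).countByValue()
long_word_counts.pprint()

ssc.start()
ssc.awaitTermination()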

  • transform(func)
Lets you apply arbitrary RDD-to-RDD operations to each RDD of the DStream (see the sketch after this list).
  • updateStateByKey(func)

Returns a new "state" DStream in which the running state of each key is updated by applying func to the key's previous state and its new values in the current batch. This is how Spark Streaming accumulates results across batches.
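
As a sketch of transform(func) (the blacklist RDD and socket source below are hypothetical): joining each batch against a static RDD is an ordinary RDD operation that the DStream API itself does not offer, so transform is the natural place for it.

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "transformDemo")
ssc = StreamingContext(sc, 10)

# Static blacklist RDD (hypothetical contents)
blacklist = sc.parallelize([("spam", True), ("ads", True)])

pairs = ssc.socketTextStream("localhost", 9999) \
           .flatMap(lambda line: line.split(',')) \
           .map(lambda w: (w, 1))

def drop_blacklisted(rdd):
    # rdd is one batch's (word, 1) RDD; leftOuterJoin is a plain RDD op
    return (rdd.leftOuterJoin(blacklist)             # (word, (1, True or None))
               .filter(lambda kv: kv[1][1] is None)  # keep non-blacklisted words
               .map(lambda kv: (kv[0], kv[1][0])))

clean_pairs = pairs.transform(drop_blacklisted)
clean_pairs.pprint()

ssc.start()
ssc.awaitTermination()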

Example: cross-batch counting in word count

#encoding=utf8
"""Cross-batch word count with updateStateByKey."""

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# test updateStateByKey: sum this batch's counts for a key and add them
# to the previous state (state is None the first time a key appears)
def updateFunc(newValues, state):
    return sum(newValues) + (state or 0)

sc = SparkContext("local[2]", "streamApp")
ssc = StreamingContext(sc, 30)  # 30-second batch interval

# updateStateByKey requires a checkpoint directory to persist state
ssc.checkpoint('file:///input/checkpoint')

# per-batch word counts from files appearing in the monitored directory
lines = (ssc.textFileStream("file:///input/flume")
            .flatMap(lambda line: line.split(','))
            .map(lambda x: (x, 1))
            .reduceByKey(lambda x, y: x + y))

# merge each batch's counts into the running per-key totals
output = lines.updateStateByKey(updateFunc)
output.pprint()

ssc.start()
ssc.awaitTermination()
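
To try it, submit the script with spark-submit and drop files containing comma-separated words into the file:///input/flume directory hard-coded above; note that textFileStream only picks up files created after the stream has started. Every 30 seconds, pprint shows the accumulated count per word across all batches so far, with state snapshots written under file:///input/checkpoint.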


