Real-time WordCount with Spark Streaming's reduceByKeyAndWindow and updateStateByKey
This post implements a small real-time WordCount example using Spark Streaming's reduceByKeyAndWindow and updateStateByKey operators.
Without further ado, here is the code:
package com.bigdata.wb.spark

import org.apache.spark.{HashPartitioner, SparkConf}
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
 * @author spencer
 * @date 2020/7/15 11:03
 */
object SparkWindowDemo {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkWindowDemo").setMaster("local[*]")
    // Micro-batches every 2 seconds.
    val ssc = new StreamingContext(conf, Seconds(2))
    val sc = ssc.sparkContext

    // Checkpointing is required for stateful operations such as updateStateByKey.
    ssc.checkpoint("file:///D:\\IdeaProjects\\spark-hbase\\chck")

    // Read lines from a socket source, e.g. started with `nc -lk 7777`.
    val wordDStream = ssc.socketTextStream("localhost", 7777)

    // Window-only variant: every 4s, count the words of the last 6s.
    // val windowWordCount = wordDStream.flatMap(_.split(" "))
    //   .map((_, 1))
    //   .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(6), Seconds(4))

    // Count over a 10s window every 6s, then fold each window's counts into
    // global per-key state. Note that adjacent windows overlap by 4s, so the
    // overlapping data is added to the state more than once; this chaining
    // demonstrates both operators rather than producing exact global counts.
    val windowWordCount = wordDStream.flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(10), Seconds(6))
      .updateStateByKey((it: Iterator[(String, Seq[Int], Option[Int])]) => {
        // For each key: this window's sum plus the previous state, if any.
        it.map(x => (x._1, x._2.sum + x._3.getOrElse(0)))
      }, new HashPartitioner(sc.defaultParallelism), true)

    windowWordCount.print()

    ssc.start()
    ssc.awaitTermination()
  }

  // The same update function defined as a named value (unused above).
  val myfunc = (it: Iterator[(String, Seq[Int], Option[Int])]) => {
    it.map(x => {
      (x._1, x._2.sum + x._3.getOrElse(0))
    })
  }
}
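The per-key merge that the update function performs (sum of the new values plus any previous state) can be exercised without a Spark cluster. Below is a minimal, Spark-free sketch of that logic; the object name UpdateFuncDemo and the sample tuples are illustrative, not part of Spark:

```scala
// Spark-free sketch of the update function used above: for each key,
// add the sum of the new values to the previous state, if any.
object UpdateFuncDemo {
  val myfunc = (it: Iterator[(String, Seq[Int], Option[Int])]) => {
    it.map { case (word, newCounts, prevState) =>
      (word, newCounts.sum + prevState.getOrElse(0))
    }
  }

  def main(args: Array[String]): Unit = {
    // "hello" was seen 3 times before and twice in this window; "spark" is new.
    val window = Iterator(
      ("hello", Seq(1, 1), Option(3)),
      ("spark", Seq(1), None: Option[Int])
    )
    myfunc(window).foreach(println)  // (hello,5) then (spark,1)
  }
}
```

This mirrors what updateStateByKey does on each slide: Spark hands the function, per partition, an iterator of (key, new values in this interval, previous state), and the function returns the new state per key.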
Notes on the code: in reduceByKeyAndWindow(func, windowLength, slideInterval), the window length is how much past data each computation covers and the slide interval is how often it runs. The active code uses a 10s window with a 6s slide, so every 6s the words of the previous 10s are counted and consecutive windows overlap by 4s; the commented-out variant uses a 6s window with a 4s slide, overlapping by 2s. Both durations must be multiples of the 2s batch interval.
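To make the overlap concrete, the windowed counting can be simulated on plain Scala collections: treat each element of a Vector as one 2s micro-batch, use a window of 5 batches (10s) and a slide of 3 batches (6s). The object and method names here are illustrative, not Spark API:

```scala
object SlidingWindowDemo {
  // Count words over a sliding window: every `slide` batches, count all
  // words appearing in the most recent `window` batches.
  def windowCounts(batches: Vector[Seq[String]], window: Int, slide: Int): Seq[Map[String, Int]] =
    (slide to batches.length by slide).map { end =>
      batches.slice(math.max(0, end - window), end)
        .flatten
        .groupBy(identity)
        .map { case (w, ws) => (w, ws.size) }
    }

  def main(args: Array[String]): Unit = {
    // Six simulated 2s micro-batches.
    val batches = Vector(Seq("a", "b"), Seq("a"), Seq("b", "b"),
                         Seq("a", "a"), Seq("c"), Seq("a"))
    // 5-batch window (10s), 3-batch slide (6s): batches 2 and 3 fall into
    // both windows, which is exactly the overlap described above.
    windowCounts(batches, window = 5, slide = 3).foreach(println)
    // counts: a->2, b->3, then a->4, b->2, c->1 (map order may vary)
  }
}
```

The overlap is why chaining updateStateByKey after this window produces inflated totals: the words in the shared batches are folded into the state once per window that contains them.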