Notes on a Structured Streaming watermark issue
The problem:
In update mode: per the official docs, once the record 2020-04-11 12:21:00,owl arrives, the watermark advances to 2020-04-11 12:11:00, so 2020-04-11 12:04:00,dnokey should be treated as late data and dropped. Yet any record with an event-time of 2020-04-11 12:05:02 or later is still accepted. (The :02 itself is not the point; see the code, where I set startTime = 2 seconds.) Where 2020-04-11 12:05:02 comes from, I did not know. (I only meant to test whether the watermark boundary is left-inclusive, and instead found that everything at or after 2020-04-11 12:05:02 is still valid data.) Can anyone with a deeper understanding of the docs explain this?
In append mode: only windows that will no longer be updated are output, i.e. windows whose endTime < max event-time minus the late threshold.
In complete mode: everything is output; there is no notion of expiry.
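The watermark rule itself is simple arithmetic: the watermark is the maximum event time the engine has seen minus the delay threshold. A minimal sketch of that rule (my own toy model, not Spark code; the 10-minute threshold matches the `withWatermark` call in the code below):

```scala
import java.sql.Timestamp

// Simplified model of the watermark rule:
// watermark = (max event time seen by the engine) - (delay threshold).
object WatermarkModel {
  val delayMs: Long = 10 * 60 * 1000L // withWatermark("timestamp", "10 minutes")

  def ms(s: String): Long = Timestamp.valueOf(s).getTime

  // Watermark after a trigger whose max event time is maxEventTime.
  def watermarkAfter(maxEventTime: String): Long = ms(maxEventTime) - delayMs

  def main(args: Array[String]): Unit = {
    // Once 2020-04-11 12:21:00,owl arrives, the watermark becomes 12:11:00.
    println(new Timestamp(watermarkAfter("2020-04-11 12:21:00")))
  }
}
```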
Update
The official docs explain:
For a specific window ending at time T, the engine will maintain state and allow late data to update the state until (max event time seen by the engine - late threshold > T). In other words, late data within the threshold will be aggregated, but data later than the threshold will start getting dropped (see later in the section for the exact guarantees).
My rough understanding is as follows:
Problem solved: with a watermark in update mode, a record can be "late" relative to the watermark and still be usable in some windows. For example, with the watermark at 2020-04-11 12:11:00, the windows 2020-04-11 12:05:00 - 2020-04-11 12:15:00 and 2020-04-11 12:10:00 - 2020-04-11 12:20:00 both end after the watermark, so they keep updating: 2020-04-11 12:12:00,dog still updates both of these windows, because it is not late with respect to them. So the data in these windows is not guaranteed to be final; as long as a window has not expired, any record that falls into it will continue to update it. (With startTime = 2 seconds the actual window boundaries shift to :02, so under the 12:11:00 watermark the oldest live window is 12:05:02 - 12:15:02, while 12:00:02 - 12:10:02 ended at 12:10:02 and has expired; that is where the 12:05:02 cutoff comes from.) My own understanding.
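That 12:05:02 cutoff can be reproduced with a back-of-the-envelope model. With startTime = 2 seconds the sliding windows are [12:00:02, 12:10:02), [12:05:02, 12:15:02), ...; under the 12:11:00 watermark the first has already ended and the second has not. A sketch of this reasoning (my own model of the documented rule, not Spark internals; `isKept` and the boundary arithmetic are my assumptions):

```scala
import java.sql.Timestamp

// Why records at or after 12:05:02 survive a 12:11:00 watermark when windows
// are 10 minutes long, slide every 5 minutes, and start at the :02 offset.
object WindowExpiry {
  val windowMs: Long = 10 * 60 * 1000L // windowDuration = 10 minutes
  val slideMs: Long  = 5 * 60 * 1000L  // slideDuration  = 5 minutes
  val startMs: Long  = 2 * 1000L       // startTime      = 2 seconds

  def ms(s: String): Long = Timestamp.valueOf(s).getTime

  // Start of the latest (most recent) window containing the event.
  def latestWindowStart(eventMs: Long): Long =
    eventMs - Math.floorMod(eventMs - startMs, slideMs)

  // Per the docs, a window's state lives while watermark <= window end, so a
  // record is kept iff its latest window has not yet expired.
  def isKept(event: String, watermark: String): Boolean =
    latestWindowStart(ms(event)) + windowMs > ms(watermark)

  def main(args: Array[String]): Unit = {
    val wm = "2020-04-11 12:11:00"
    println(isKept("2020-04-11 12:05:02", wm)) // latest window [12:05:02, 12:15:02) is alive
    println(isKept("2020-04-11 12:05:01", wm)) // latest window [12:00:02, 12:10:02) has expired
  }
}
```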
By the same reasoning, append mode outputs only the data of windows that can no longer be updated; this follows directly.
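That derivation reduces to a one-line predicate: append mode emits a window's aggregate only once the watermark has passed the window's end, i.e. once no late record could still change it. A tiny sketch (my own phrasing of the rule, not Spark code; the toy values stand for minutes):

```scala
// Append mode emits a window only after it can no longer change,
// i.e. once the watermark has moved past the window's end time.
object AppendEmission {
  def isFinal(windowEndMs: Long, watermarkMs: Long): Boolean =
    watermarkMs > windowEndMs

  def main(args: Array[String]): Unit = {
    // Toy units (minutes): window ending 12:10 is final under watermark 12:11,
    // window ending 12:15 is not.
    println(isFinal(windowEndMs = 10, watermarkMs = 11))
    println(isFinal(windowEndMs = 15, watermarkMs = 11))
  }
}
```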
The code is as follows (input lines are fed through the socket source, e.g. with nc -lk 9999 on host vm1):
package com.blueT.spark.oprations

import java.sql.Timestamp

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window
import org.apache.spark.sql.streaming.Trigger

/**
 * @Author: tao76
 * @Description: Windowed word count with a 10-minute watermark, reading
 *               "timestamp,word" lines from a socket.
 * @Date: Create: in 2020/4/11 15:58
 * @Modified By:
 */
object WindedWordCountWithWaterMark {

  def main(args: Array[String]): Unit = {
    val windowDuration = "10 minutes"
    val slideDuration = "5 minutes"

    val spark = SparkSession
      .builder
      .master("local[2]")
      .appName("WindedWordCountWithWaterMark")
      .getOrCreate()

    // Each input line looks like "2020-04-11 12:07:00,dog"
    val lines = spark.readStream
      .format("socket")
      .option("host", "vm1")
      .option("port", 9999)
      .load()

    import spark.implicits._

    val words = lines.as[String].map { line =>
      val splits = line.split(",")
      (Timestamp.valueOf(splits(0)), splits(1))
    }.toDF("timestamp", "word")

    // 10-minute windows sliding every 5 minutes, shifted by startTime = 2 seconds,
    // with a 10-minute watermark on the event-time column
    val windowedCounts = words
      .withWatermark("timestamp", "10 minutes")
      .groupBy(
        window($"timestamp", windowDuration, slideDuration, "2 seconds"), $"word"
      ).count()
    // .orderBy("window") // sorting is not supported in update mode

    val query = windowedCounts.writeStream
      .outputMode("update")
      .format("console")
      .trigger(Trigger.ProcessingTime(200)) // fire every 200 ms
      .option("truncate", "false")
      .start()

    query.awaitTermination()
  }
}
The example from the official docs:
update mode: still to be verified (in any case, what I measured did not match my reading of the docs).
The input data was:
2020-04-11 12:07:00,dog
2020-04-11 12:11:00,dog
2020-04-11 12:00:00,dog
2020-04-11 11:55:00,dog
2020-04-11 12:00:00,dog
2020-04-11 12:07:00,dog
2020-04-11 12:08:00,owl
2020-04-11 12:14:00,dog
2020-04-11 12:09:00,cat
2020-04-11 12:15:00,cat
2020-04-11 12:08:00,dog
2020-04-11 12:13:00,owl
2020-04-11 12:21:00,owl
2020-04-11 12:04:00,dnokey
2020-04-11 12:17:00,owl
2020-04-11 12:04:00,owl
2020-04-11 12:10:00,owl
2020-04-11 12:11:00,owl
2020-04-11 12:10:59,owl
2020-04-11 12:10:01,owl
2020-04-11 12:10:00,owl
2020-04-11 12:09:59,owl
2020-04-11 12:05:03,owl
2020-04-11 12:05:02,owl
2020-04-11 12:05