Trying to understand Spark SQL on the Spark Structured Streaming side.
A SparkSession reads events from a Kafka topic, aggregates the data into counts grouped by different column names, and prints them to the console.
The raw input data is structured as follows:
+--------------+--------------------+----------+----------+-------+-------------------+--------------------+----------+
|   sourceTypes|                Guid|  platform|datacenter| pageId|     eventTimestamp|              Id1234|  Id567890|
+--------------+--------------------+----------+----------+-------+-------------------+--------------------+----------+
|  Notification|....................|   ANDROID|       dev|     aa|2018-09-27 09:41:29|fce81f05-a085-392...| {"id":...|
|  Notification|....................|   ANDROID|       dev|     ab|2018-09-27 09:41:29|fce81f05-a085-392...| {"id":...|
|  Notification|....................|     WEBOS|       dev|     aa|2018-09-27 09:42:46|0ee089c1-d5da-3b3...| {"id":...|
|  Notification|....................|     WEBOS|       dev|     aa|2018-09-27 09:42:48|57c18964-40c9-311...| {"id":...|
|  Notification|....................|     WEBOS|       dev|     aa|2018-09-27 09:42:48|5ecf1d77-321a-379...| {"id":...|
|  Notification|....................|     WEBOS|       dev|     aa|2018-09-27 09:42:48|5ecf1d77-321a-379...| {"id":...|
|  Notification|....................|     WEBOS|       dev|     aa|2018-09-27 09:42:52|d9fc4cfa-0934-3e9...| {"id":...|
+--------------+--------------------+----------+----------+-------+-------------------+--------------------+----------+
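For reference, a minimal sketch of how sourceDataset might be built from the Kafka topic (the topic name, broker address, and column types below are assumptions, not part of the original question):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.from_json;

// Schema matching the columns shown above; the types are assumptions.
StructType schema = new StructType()
        .add("sourceTypes", DataTypes.StringType)
        .add("Guid", DataTypes.StringType)
        .add("platform", DataTypes.StringType)
        .add("datacenter", DataTypes.StringType)
        .add("pageId", DataTypes.StringType)
        .add("eventTimestamp", DataTypes.TimestampType)
        .add("Id1234", DataTypes.StringType)
        .add("Id567890", DataTypes.StringType);

// Parse each Kafka record's value as JSON into the schema above.
// "spark" is an existing SparkSession; the topic and broker are placeholders.
Dataset<Row> sourceDataset = spark
        .readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "events")
        .load()
        .select(from_json(col("value").cast("string"), schema).as("event"))
        .select("event.*");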
Counts are needed per sourceTypes, platform, datacenter, and pageId.
The data is aggregated with the following code:
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.max;
import static org.apache.spark.sql.functions.window;

Dataset<Row> query = sourceDataset
        .withWatermark("eventTimestamp", watermarkInterval)
        .select(
            col("eventTimestamp"),
            col("datacenter"),
            col("platform"),
            col("pageId")
        )
        .groupBy(
            window(col("eventTimestamp"), windowInterval),
            col("datacenter"),
            col("platform"),
            col("pageId")
        )
        .agg(
            max(col("eventTimestamp"))
        );
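Note that the .agg above only computes max(eventTimestamp); nothing in it produces a count column. If a per-group count is the goal, a hedged variant (the column aliases here are illustrative) would look like:

import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.lit;

// Same watermark and groupBy as above, plus an explicit count per group.
Dataset<Row> aggregatedDataset = sourceDataset
        .withWatermark("eventTimestamp", watermarkInterval)
        .groupBy(
            window(col("eventTimestamp"), windowInterval),
            col("datacenter"),
            col("platform"),
            col("pageId")
        )
        .agg(
            max(col("eventTimestamp")).as("lastEventTimestamp"),
            count(lit(1)).as("count")
        );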
Here watermarkInterval = "45 seconds", windowInterval = "15 seconds", and triggerInterval = "15 seconds".
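In Java these would typically be plain duration strings, since withWatermark, window, and Trigger.ProcessingTime all accept that form:

String watermarkInterval = "45 seconds";
String windowInterval = "15 seconds";
String triggerInterval = "15 seconds";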
The new aggregated dataset is then written out with:
aggregatedDataset
.writeStream()
.outputMode(OutputMode.Append())
.format("console")
.trigger(Trigger.ProcessingTime(triggerInterval))
.start();
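One side note: start() returns a StreamingQuery handle, and in a standalone driver it is usually awaited so the JVM does not exit before any batch is printed (a sketch, assuming this runs in main):

import org.apache.spark.sql.streaming.StreamingQuery;

StreamingQuery consoleQuery = aggregatedDataset
        .writeStream()
        .outputMode(OutputMode.Append())
        .format("console")
        .trigger(Trigger.ProcessingTime(triggerInterval))
        .start();
consoleQuery.awaitTermination(); // blocks until stopped; throws StreamingQueryException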
There are a couple of questions:
1. The output does not print a count for each groupBy key (platform, pageId, and so on).
2. How can the output be printed in JSON format? I tried select(to_json(struct("*")).as("value")) when writing the data to the console, but it does not work.
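For reference, the JSON attempt spelled out in Java (a sketch of what was tried, not a confirmed fix); note that the window column produced by groupBy is itself a struct with start and end fields, so it would appear nested inside the resulting JSON:

import static org.apache.spark.sql.functions.struct;
import static org.apache.spark.sql.functions.to_json;

// Wrap all columns of each row into one struct and serialize it as a
// single string column named "value".
Dataset<Row> jsonOutput = aggregatedDataset
        .select(to_json(struct(col("*"))).as("value"));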