1. Goal
When a SQL statement contains both a GROUP BY and a window function, in what order are the two evaluated? This post looks into that question so the behavior is clear the next time it comes up.
2. Worked Example
2.1 Sample Data
/opt/datas/score.json — one JSON record per line with student name, lesson, and score:
{"name":"A","lesson":"Math","score":100}
{"name":"B","lesson":"Math","score":100}
{"name":"C","lesson":"Math","score":99}
{"name":"D","lesson":"Math","score":98}
{"name":"A","lesson":"E","score":100}
{"name":"B","lesson":"E","score":99}
{"name":"C","lesson":"E","score":99}
{"name":"D","lesson":"E","score":98}
Read the data with Spark and register a temporary view:
scala> val dfJson = spark.read.format("json").load("file:///opt/datas/score.json")
dfJson: org.apache.spark.sql.DataFrame = [lesson: string, name: string ... 1 more field]
scala> dfJson.show
+------+----+-----+
|lesson|name|score|
+------+----+-----+
| Math| A| 100|
| Math| B| 100|
| Math| C| 99|
| Math| D| 98|
| E| A| 100|
| E| B| 99|
| E| C| 99|
| E| D| 98|
+------+----+-----+
scala> dfJson.createOrReplaceTempView("score")
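For readers running this outside spark-shell, a minimal standalone setup would look roughly like the following; the app name and local master are assumptions, not part of the original session:

import org.apache.spark.sql.SparkSession

// Hypothetical standalone entry point; in spark-shell the `spark` session already exists.
val spark = SparkSession.builder()
  .appName("groupby-vs-window")   // assumed app name
  .master("local[*]")             // assumed local execution
  .getOrCreate()

val dfJson = spark.read.format("json").load("file:///opt/datas/score.json")
dfJson.createOrReplaceTempView("score")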
2.2 Analysis Examples
(1) GROUP BY only: row count per group
scala> spark.sql("select lesson,count(*) from score group by lesson").show
+------+--------+
|lesson|count(1)|
+------+--------+
| E| 4|
| Math| 4|
+------+--------+
Print the execution plan:
scala> spark.sql("select lesson,count(*) from score group by lesson").explain
== Physical Plan ==
*(2) HashAggregate(keys=[lesson#315], functions=[count(1)])
+- Exchange hashpartitioning(lesson#315, 200)
+- *(1) HashAggregate(keys=[lesson#315], functions=[partial_count(1)])
+- *(1) FileScan json [lesson#315] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/home/mip/chongliang/tmp/score.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<lesson:string>
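For reference, the same aggregation can be expressed with the DataFrame API; this is only a sketch, but it should produce an equivalent plan (partial HashAggregate, shuffle on lesson, final HashAggregate):

// Equivalent DataFrame-API formulation of the GROUP BY count.
val grouped = dfJson.groupBy("lesson").count()
grouped.show()
grouped.explain()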
(2) Window function only: a running count partitioned by lesson
scala> spark.sql("select lesson,count(*) over (partition by lesson order by name) as count from score ").show
+------+-----+
|lesson|count|
+------+-----+
| E| 1|
| E| 2|
| E| 3|
| E| 4|
| Math| 1|
| Math| 2|
| Math| 3|
| Math| 4|
+------+-----+
Print the execution plan:
scala> spark.sql("select lesson,count(*) over (partition by lesson order by name) as count from score ").explain
== Physical Plan ==
*(3) Project [lesson#315, count#2426L]
+- Window [count(1) windowspecdefinition(lesson#315, name#316 ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS count#2426L], [lesson#315], [name#316 ASC NULLS FIRST]
+- *(2) Sort [lesson#315 ASC NULLS FIRST, name#316 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(lesson#315, 200)
+- *(1) Project [lesson#315, name#316]
+- *(1) FileScan json [lesson#315,name#316] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/home/mip/chongliang/tmp/score.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<lesson:string,name:string>
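The same running count can be written with the DataFrame API Window spec; a minimal sketch (with an orderBy and no explicit frame, Spark defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is the RangeFrame visible in the plan above):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, count, lit}

// Running count per lesson, ordered by name, as in the SQL window above.
val w = Window.partitionBy("lesson").orderBy("name")
dfJson.select(col("lesson"), count(lit(1)).over(w).alias("count")).show()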
(3) GROUP BY combined with a window function
scala> spark.sql(" select lesson,count(*) over (partition by lesson) as count from score group by lesson").show
+------+-----+
|lesson|count|
+------+-----+
| E| 1|
| Math| 1|
+------+-----+
Print the execution plan:
scala> spark.sql(" select lesson,count(*) over (partition by lesson) as count from score group by lesson").explain
== Physical Plan ==
Window [count(1) windowspecdefinition(lesson#315, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$())) AS count#2662L], [lesson#315]
+- *(2) Sort [lesson#315 ASC NULLS FIRST], false, 0
+- *(2) HashAggregate(keys=[lesson#315], functions=[])
+- Exchange hashpartitioning(lesson#315, 200)
+- *(1) HashAggregate(keys=[lesson#315], functions=[])
+- *(1) FileScan json [lesson#315] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/home/mip/chongliang/tmp/score.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<lesson:string>
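A DataFrame-API sketch of the same two-step computation: GROUP BY lesson with no aggregate is equivalent to taking the distinct lessons, and the window count is then applied on those already-aggregated rows:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, count, lit}

// Step 1: one row per lesson (GROUP BY with no aggregate behaves like distinct).
val perLesson = dfJson.select("lesson").distinct()
// Step 2: window count over the aggregated rows, so each lesson sees a count of 1.
val w = Window.partitionBy("lesson")
perLesson.select(col("lesson"), count(lit(1)).over(w).alias("count")).show()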
3. Summary
So in Spark SQL the window function is evaluated on top of the result of the GROUP BY aggregation: the GROUP BY runs first, and the window is then opened over the aggregated rows. This matches the physical plan in (3), where the HashAggregate operators sit below the Window operator.
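To make this order explicit, the query from (3) can be rewritten with the GROUP BY in a subquery; this sketch should return the same result (count = 1 per lesson), since the window only ever sees the one-row-per-lesson output of the aggregation:

// Equivalent rewrite of query (3) with the aggregation as an explicit subquery.
spark.sql("""
  select lesson, count(*) over (partition by lesson) as count
  from (select lesson from score group by lesson) t
""").show()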