SparkSQL (13): Execution Order of Window Functions and GROUP BY

This article analyzes the execution order when a Spark SQL query uses both GROUP BY aggregation and a window function. Working through concrete sample data and the physical execution plans, it concludes that Spark executes the GROUP BY first and then applies the window function over the aggregated result. This understanding helps when writing and tuning complex SQL queries.

1. Goal

When a SQL query contains both a GROUP BY and a window function, in what order are the two executed? This post investigates the question so the answer is at hand the next time it comes up in practice.

2. Worked Example

2.1 Sample Data

The file /opt/datas/score.json contains student name, lesson, and score:

{"name":"A","lesson":"Math","score":100}
{"name":"B","lesson":"Math","score":100}
{"name":"C","lesson":"Math","score":99}
{"name":"D","lesson":"Math","score":98}
{"name":"A","lesson":"E","score":100}
{"name":"B","lesson":"E","score":99}
{"name":"C","lesson":"E","score":99}
{"name":"D","lesson":"E","score":98}

Read the data with Spark and register a temporary view:

scala> val dfJson = spark.read.format("json").load("file:///opt/datas/score.json")
dfJson: org.apache.spark.sql.DataFrame = [lesson: string, name: string ... 1 more field]

scala> dfJson.show
+------+----+-----+
|lesson|name|score|
+------+----+-----+
|  Math|   A|  100|
|  Math|   B|  100|
|  Math|   C|   99|
|  Math|   D|   98|
|     E|   A|  100|
|     E|   B|   99|
|     E|   C|   99|
|     E|   D|   98|
+------+----+-----+

scala> dfJson.createOrReplaceTempView("score")
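
If the JSON file isn't handy, an equivalent DataFrame can be built directly in the shell (a minimal sketch; note that the JSON reader above infers the columns in alphabetical order, while toDF takes them in the order given):

scala> val dfJson = Seq(
     |   ("Math","A",100), ("Math","B",100), ("Math","C",99), ("Math","D",98),
     |   ("E","A",100), ("E","B",99), ("E","C",99), ("E","D",98)
     | ).toDF("lesson","name","score")

scala> dfJson.createOrReplaceTempView("score")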


2.2 Analysis

(1) A plain GROUP BY, counting rows per group:
scala>  spark.sql("select  lesson,count(*) from score  group by lesson").show
+------+--------+
|lesson|count(1)|
+------+--------+
|     E|       4|
|  Math|       4|
+------+--------+

Print the execution plan:
scala>  spark.sql("select  lesson,count(*) from score  group by lesson").explain
== Physical Plan ==
*(2) HashAggregate(keys=[lesson#315], functions=[count(1)])
+- Exchange hashpartitioning(lesson#315, 200)
   +- *(1) HashAggregate(keys=[lesson#315], functions=[partial_count(1)])
      +- *(1) FileScan json [lesson#315] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/home/mip/chongliang/tmp/score.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<lesson:string>
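
The same aggregation through the DataFrame API (a sketch equivalent to the SQL above, producing the same two rows):

scala> dfJson.groupBy("lesson").count().show
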
(2) A plain window function, counting over a window partitioned by lesson:
scala> spark.sql("select  lesson,count(*) over (partition by lesson order by name) as count from score ").show
+------+-----+
|lesson|count|
+------+-----+
|     E|    1|
|     E|    2|
|     E|    3|
|     E|    4|
|  Math|    1|
|  Math|    2|
|  Math|    3|
|  Math|    4|
+------+-----+


Print the execution plan:
scala> spark.sql("select  lesson,count(*) over (partition by lesson order by name) as count from score ").explain
== Physical Plan ==
*(3) Project [lesson#315, count#2426L]
+- Window [count(1) windowspecdefinition(lesson#315, name#316 ASC NULLS FIRST, specifiedwindowframe(RangeFrame, unboundedpreceding$(), currentrow$())) AS count#2426L], [lesson#315], [name#316 ASC NULLS FIRST]
   +- *(2) Sort [lesson#315 ASC NULLS FIRST, name#316 ASC NULLS FIRST], false, 0
      +- Exchange hashpartitioning(lesson#315, 200)
         +- *(1) Project [lesson#315, name#316]
            +- *(1) FileScan json [lesson#315,name#316] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/home/mip/chongliang/tmp/score.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<lesson:string,name:string>
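
The same running count via the DataFrame window API (a sketch; Window comes from org.apache.spark.sql.expressions, and count/lit from org.apache.spark.sql.functions):

scala> import org.apache.spark.sql.expressions.Window
scala> import org.apache.spark.sql.functions.{count, lit}
scala> val byLesson = Window.partitionBy("lesson").orderBy("name")
scala> dfJson.select($"lesson", count(lit(1)).over(byLesson).as("count")).show
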
(3) GROUP BY combined with a window function in the same query:
scala> spark.sql(" select lesson,count(*) over (partition by lesson) as count from score  group by lesson").show
+------+-----+
|lesson|count|
+------+-----+
|     E|    1|
|  Math|    1|
+------+-----+

Note that the count is now 1 for each lesson: the GROUP BY has already collapsed the table to one row per lesson, so each window partition contains exactly one row. Print the execution plan:

scala> spark.sql(" select lesson,count(*) over (partition by lesson) as count from score  group by lesson").explain
== Physical Plan ==
Window [count(1) windowspecdefinition(lesson#315, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$())) AS count#2662L], [lesson#315]
+- *(2) Sort [lesson#315 ASC NULLS FIRST], false, 0
   +- *(2) HashAggregate(keys=[lesson#315], functions=[])
      +- Exchange hashpartitioning(lesson#315, 200)
         +- *(1) HashAggregate(keys=[lesson#315], functions=[])
            +- *(1) FileScan json [lesson#315] Batched: false, Format: JSON, Location: InMemoryFileIndex[file:/home/mip/chongliang/tmp/score.json], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<lesson:string>
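
Reading the plan bottom-up, the two HashAggregate operators (the GROUP BY) execute before the Window operator. The query therefore behaves as if the window were applied to the already-aggregated result, which can be written out explicitly with a subquery (a sketch that should return the same two rows as example (3)):

scala> spark.sql("select lesson, count(*) over (partition by lesson) as count from (select lesson from score group by lesson) t").show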

3. Summary

So in Spark SQL, the window function is evaluated on top of the GROUP BY aggregation result: the GROUP BY executes first, and the window is then opened over the aggregated rows. The physical plan in example (3) shows this directly, with the HashAggregate operators sitting below (i.e., executing before) the Window operator.
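
A practical consequence of this ordering: a window function may reference aggregate expressions directly, because the aggregation has already happened by the time the window is evaluated. For example, ranking lessons by total score (a sketch; with the sample data, Math totals 397 and E totals 396, so Math should rank first):

scala> spark.sql("select lesson, sum(score) as total, rank() over (order by sum(score) desc) as rk from score group by lesson").show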
