Hive Window Functions


In SQL there is a class of functions called aggregate functions, such as sum(), avg(), max(), and so on. They collapse multiple rows into a single row according to some rule, so the aggregated output generally has fewer rows than the input. Sometimes, however, we want to show both the original rows and the aggregated values at the same time; this is where window functions come in. Window functions, also called OLAP functions or analytic functions, combine grouping and ordering.

The most important keywords for window functions are partition by and order by.

The basic syntax is: over (partition by xxx order by xxx)

sum, avg, min, max functions

Prepare the data

beeline -e "
drop table tmp.chenyj_win_t1;
create table tmp.chenyj_win_t1(
cookieid string,
createtime string,   --day 
pv int
) row format delimited 
fields terminated by ','
STORED AS TEXTFILE ;
"

Load the data:

cookie1,2018-04-10,1
cookie1,2018-04-11,5
cookie1,2018-04-12,7
cookie1,2018-04-13,3
cookie1,2018-04-14,2
cookie1,2018-04-15,4
cookie1,2018-04-16,4

hadoop fs -put /home/datasrv/chenyj/over/win01.txt /user/datasrv/chenyj/test
beeline -e "load data inpath '/user/datasrv/chenyj/test/win01.txt' into table tmp.chenyj_win_t1"

beeline -e "select * from tmp.chenyj_win_t1"

Test the functions

select 
    cookieid
    ,createtime
    ,pv
    ,sum(pv) over(partition by cookieid order by createtime) as pv1 
    ,sum(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
    ,sum(pv) over(partition by cookieid) as pv3
    ,sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and current row) as pv4
    ,sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and 1 following) as pv5
    ,sum(pv) over(partition by cookieid order by createtime rows between current row and unbounded following) as pv6
from tmp.chenyj_win_t1;

cookie1	2018-04-10	1	1	1	26	1	6	26
cookie1	2018-04-11	5	6	6	26	6	13	25
cookie1	2018-04-12	7	13	13	26	13	16	20
cookie1	2018-04-13	3	16	16	26	16	18	13
cookie1	2018-04-14	2	18	18	26	17	21	10
cookie1	2018-04-15	4	22	22	26	16	20	8
cookie1	2018-04-16	4	26	26	26	13	13	4

Results

  • pv1: cumulative pv from the partition start to the current row, e.g. pv1 on the 11th = pv on the 10th + pv on the 11th; the 12th = 10th + 11th + 12th
  • pv2: same as pv1
  • pv3: sum of all pv within the partition (cookie1)
  • pv4: current row plus the 3 preceding rows within the partition, e.g. the 11th = 10th + 11th; the 12th = 10th + 11th + 12th; the 13th = 10th + 11th + 12th + 13th; the 14th = 11th + 12th + 13th + 14th
  • pv5: current row plus the 3 preceding rows and 1 following row, e.g. the 14th = 11th + 12th + 13th + 14th + 15th = 5+7+3+2+4 = 21
  • pv6: current row plus all following rows in the partition, e.g. the 13th = 13th + 14th + 15th + 16th = 3+2+4+4 = 13; the 14th = 14th + 15th + 16th = 2+4+4 = 10
If rows between is not specified but order by is, the frame defaults to partition start through the current row (strictly a RANGE frame, so rows tied on the order by key are included together);
If order by is not specified either, all rows in the partition are summed.
The key is understanding rows between, also called the **window clause**:
preceding: rows before the current row
following: rows after the current row
current row: the current row
unbounded: no bound
unbounded preceding: from the start of the partition
unbounded following: through the end of the partition
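The frame arithmetic above is easy to sanity-check outside Hive. A minimal Python sketch (an illustration, not Hive itself; `frame_sum` is a helper invented here) that reproduces the pv1/pv4/pv5/pv6 columns for the cookie1 sample rows:

```python
# Simulate Hive's "rows between" frames over the cookie1 sample
# (pv values in createtime order, 2018-04-10 .. 2018-04-16).
pv = [1, 5, 7, 3, 2, 4, 4]

def frame_sum(values, preceding, following):
    """Sum each row's frame: `preceding` rows before it through
    `following` rows after it (None means unbounded)."""
    out = []
    for i in range(len(values)):
        lo = 0 if preceding is None else max(0, i - preceding)
        hi = len(values) if following is None else min(len(values), i + following + 1)
        out.append(sum(values[lo:hi]))
    return out

pv1 = frame_sum(pv, None, 0)   # unbounded preceding .. current row
pv4 = frame_sum(pv, 3, 0)      # 3 preceding .. current row
pv5 = frame_sum(pv, 3, 1)      # 3 preceding .. 1 following
pv6 = frame_sum(pv, 0, None)   # current row .. unbounded following

print(pv1)  # [1, 6, 13, 16, 18, 22, 26]
print(pv4)  # [1, 6, 13, 16, 17, 16, 13]
print(pv5)  # [6, 13, 16, 18, 21, 20, 13]
print(pv6)  # [26, 25, 20, 13, 10, 8, 4]
```

The printed lists match the pv1, pv4, pv5, and pv6 columns of the query output above.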

AVG, MIN, and MAX work the same way as SUM.

row_number, rank, dense_rank, ntile functions

Prepare the data

beeline -e "
drop table tmp.chenyj_win_t2;
create table tmp.chenyj_win_t2(
cookieid string,
createtime string,   --day 
pv int
) row format delimited 
fields terminated by ','
STORED AS TEXTFILE ;
"


cookie1,2018-04-10,1
cookie1,2018-04-11,5
cookie1,2018-04-12,7
cookie1,2018-04-13,3
cookie1,2018-04-14,2
cookie1,2018-04-15,4
cookie1,2018-04-16,4
cookie2,2018-04-10,2
cookie2,2018-04-11,3
cookie2,2018-04-12,5
cookie2,2018-04-13,6
cookie2,2018-04-14,3
cookie2,2018-04-15,9
cookie2,2018-04-16,7

Load the data:

hadoop fs -put /home/datasrv/chenyj/over/win02.txt /user/datasrv/chenyj/test
beeline -e "load data inpath '/user/datasrv/chenyj/test/win02.txt' into table tmp.chenyj_win_t2"

beeline -e "select * from tmp.chenyj_win_t2"

Test the functions:


SELECT 
cookieid
,createtime
,pv
,ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn 
,RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn1
,DENSE_RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn2
,NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn3
,NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn4
FROM tmp.chenyj_win_t2;



cookie1 2018-04-12  7   1   1   1   1   1
cookie1	2018-04-11	5	2	2	2	1	1
cookie1	2018-04-15	4	3	3	3	2	3
cookie1	2018-04-16	4	4	3	3	2	3
cookie1	2018-04-13	3	5	5	4	1	2
cookie1	2018-04-14	2	6	6	5	2	2
cookie1	2018-04-10	1	7	7	6	1	1
cookie2	2018-04-15	9	1	1	1	2	3
cookie2	2018-04-16	7	2	2	2	2	3
cookie2	2018-04-13	6	3	3	3	1	2
cookie2	2018-04-12	5	4	4	4	1	1
cookie2	2018-04-11	3	5	5	5	1	1
cookie2	2018-04-14	3	6	5	5	2	2
cookie2	2018-04-10	2	7	7	6	1	1

Results:

  • ROW_NUMBER() assigns a sequence number to the rows within the partition, starting from 1, in sorted order
  • RANK() gives each row its rank within the partition; tied rows share a rank and leave gaps in the numbering
  • DENSE_RANK() gives each row its rank within the partition; tied rows share a rank but leave no gaps
  • NTILE(num) distributes the ordered rows of the partition as evenly as possible into num buckets and assigns each row its bucket number; when the rows do not divide evenly, lower-numbered buckets get the extra rows, so bucket sizes differ by at most 1
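The ranking and bucketing rules above can be sketched in plain Python (an illustration of the semantics, not Hive; `ranks` and `ntile` are helpers invented here), using cookie1's pv values:

```python
def ntile(n_rows, buckets):
    """Assign each of n_rows ordered rows to a bucket 1..buckets;
    earlier buckets get the extra rows when division is uneven."""
    base, rem = divmod(n_rows, buckets)
    out = []
    for b in range(1, buckets + 1):
        size = base + (1 if b <= rem else 0)
        out.extend([b] * size)
    return out

def ranks(values_desc):
    """RANK and DENSE_RANK over a list already sorted descending."""
    rank, dense = [], []
    for i, v in enumerate(values_desc):
        if i and v == values_desc[i - 1]:
            rank.append(rank[-1])     # tie: repeat previous rank
            dense.append(dense[-1])
        else:
            rank.append(i + 1)        # RANK leaves gaps after ties
            dense.append(dense[-1] + 1 if dense else 1)  # DENSE_RANK does not
    return rank, dense

pv_desc = [7, 5, 4, 4, 3, 2, 1]       # cookie1's pv, sorted descending
print(ranks(pv_desc))                 # ([1, 2, 3, 3, 5, 6, 7], [1, 2, 3, 3, 4, 5, 6])
print(ntile(7, 2))                    # [1, 1, 1, 1, 2, 2, 2]
print(ntile(7, 3))                    # [1, 1, 1, 2, 2, 3, 3]
```

These match the rn1/rn2 columns (rank, dense_rank over pv desc) and the rn3/rn4 columns (ntile over createtime) in the output above.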

lag, lead, first_value, last_value functions

Test the functions:

SELECT 
cookieid
,createtime
,lag(createtime,1,'1970-01-01') OVER(PARTITION BY cookieid ORDER BY createtime) AS lag_1
,lead(createtime,1,'1970-01-01') OVER(PARTITION BY cookieid ORDER BY createtime) AS lead_1
,first_value(createtime) OVER(PARTITION BY cookieid ORDER BY createtime) AS first_time
,LAST_VALUE(createtime) OVER(PARTITION BY cookieid ORDER BY createtime) AS last_time
,first_value(createtime) OVER(PARTITION BY cookieid ORDER BY createtime desc) AS first_time_desc
FROM tmp.chenyj_win_t2;

Results:

  • LAG(col,n,DEFAULT) returns the value of col from the n-th row before the current row in the window. The first argument is the column, the second is the offset n (optional, default 1), and the third is the default returned when the n-th preceding row does not exist (if omitted, NULL)
  • LEAD(col,n,DEFAULT) returns the value of col from the n-th row after the current row in the window; the arguments are as for LAG, with the default used when the n-th following row does not exist
  • FIRST_VALUE() returns the first value within the partition, after sorting, up to the current row
  • LAST_VALUE() returns the last value within the partition, after sorting, up to the current row
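These offset functions are simple enough to sketch in Python (an illustration of the semantics, not Hive; `lag` and `lead` here are helpers invented for this sketch):

```python
def lag(col, n=1, default=None):
    """Value from the n-th preceding row, or `default` off the edge."""
    return [col[i - n] if i - n >= 0 else default for i in range(len(col))]

def lead(col, n=1, default=None):
    """Value from the n-th following row, or `default` off the edge."""
    return [col[i + n] if i + n < len(col) else default for i in range(len(col))]

dates = ['2018-04-10', '2018-04-11', '2018-04-12']
print(lag(dates, 1, '1970-01-01'))   # ['1970-01-01', '2018-04-10', '2018-04-11']
print(lead(dates, 1, '1970-01-01'))  # ['2018-04-11', '2018-04-12', '1970-01-01']

# Under the default frame (partition start .. current row):
# first_value is constant, the partition's first sorted row;
# last_value is the current row itself.
first_value = [dates[0]] * len(dates)
last_value = dates[:]
```

Note the last_value behavior: with the default frame it simply echoes the current row, which often surprises people expecting the partition's final value.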

grouping sets, grouping__id, cube, rollup functions

These constructs are typically used in OLAP work, where metrics cannot simply be accumulated and instead have to be recomputed while drilling up and down across different dimensions, for example UV counts by hour, by day, and by month.

Prepare the data

beeline -e "
drop table tmp.chenyj_win_t3;
CREATE TABLE tmp.chenyj_win_t3 (
month STRING,
day STRING, 
cookieid STRING 
) ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' 
stored as textfile;
"

2018-03,2018-03-10,cookie1
2018-03,2018-03-10,cookie5
2018-03,2018-03-12,cookie7
2018-04,2018-04-12,cookie3
2018-04,2018-04-13,cookie2
2018-04,2018-04-13,cookie4
2018-04,2018-04-16,cookie4
2018-03,2018-03-10,cookie2
2018-03,2018-03-10,cookie3
2018-04,2018-04-12,cookie5
2018-04,2018-04-13,cookie6
2018-04,2018-04-15,cookie3
2018-04,2018-04-15,cookie2
2018-04,2018-04-16,cookie1

Load the data:

hadoop fs -put /home/datasrv/chenyj/over/win03.txt /user/datasrv/chenyj/test
beeline -e "load data inpath '/user/datasrv/chenyj/test/win03.txt' into table tmp.chenyj_win_t3"

beeline -e "select * from tmp.chenyj_win_t3"

GROUPING SETS

grouping sets is a convenient way to put several group by clauses into a single SQL statement.
It is equivalent to a UNION ALL of the GROUP BY result sets for the different dimensions.
GROUPING__ID (note the double underscore) indicates which grouping set each result row belongs to.

SELECT 
 month,
 day,
 COUNT(DISTINCT cookieid) AS uv,
 GROUPING__ID 
FROM tmp.chenyj_win_t3 
GROUP BY month,day 
GROUPING SETS (month,day,(month,day)) 
ORDER BY GROUPING__ID;

Equivalent to:
SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM tmp.chenyj_win_t3 GROUP BY month 
UNION ALL 
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM tmp.chenyj_win_t3 GROUP BY day
UNION ALL 
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,0 AS GROUPING__ID FROM tmp.chenyj_win_t3 GROUP BY month,day;


2018-03 2018-03-12      1       0
2018-04	2018-04-12	2	0
2018-04	2018-04-13	3	0
2018-04	2018-04-15	2	0
2018-04	2018-04-16	2	0
2018-03	2018-03-10	4	0
2018-03	NULL	5	1
2018-04	NULL	6	1
NULL	2018-04-13	3	2
NULL	2018-03-12	1	2
NULL	2018-03-10	4	2
NULL	2018-04-15	2	2
NULL	2018-04-16	2	2
NULL	2018-04-12	2	2
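The UV counts above can be cross-checked outside Hive. A minimal Python sketch (`uv_by` is a helper invented here; the rows are copied from win03.txt) computing COUNT(DISTINCT cookieid) per dimension:

```python
from collections import defaultdict

# (month, day, cookieid) rows from win03.txt
rows = [
    ("2018-03", "2018-03-10", "cookie1"), ("2018-03", "2018-03-10", "cookie5"),
    ("2018-03", "2018-03-12", "cookie7"), ("2018-04", "2018-04-12", "cookie3"),
    ("2018-04", "2018-04-13", "cookie2"), ("2018-04", "2018-04-13", "cookie4"),
    ("2018-04", "2018-04-16", "cookie4"), ("2018-03", "2018-03-10", "cookie2"),
    ("2018-03", "2018-03-10", "cookie3"), ("2018-04", "2018-04-12", "cookie5"),
    ("2018-04", "2018-04-13", "cookie6"), ("2018-04", "2018-04-15", "cookie3"),
    ("2018-04", "2018-04-15", "cookie2"), ("2018-04", "2018-04-16", "cookie1"),
]

def uv_by(keyfunc):
    """COUNT(DISTINCT cookieid) grouped by keyfunc(month, day)."""
    groups = defaultdict(set)
    for month, day, cookie in rows:
        groups[keyfunc(month, day)].add(cookie)
    return {k: len(v) for k, v in groups.items()}

print(uv_by(lambda m, d: m))                 # {'2018-03': 5, '2018-04': 6}
print(uv_by(lambda m, d: d)['2018-03-10'])   # 4
```

Each grouping set is just one of these GROUP BYs; GROUPING SETS computes them in a single pass instead of a UNION ALL.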

CUBE

Aggregates over every combination of the GROUP BY dimensions.

  SELECT 
    month,
    day,
    COUNT(DISTINCT cookieid) AS uv,
    GROUPING__ID 
  FROM tmp.chenyj_win_t3  
  GROUP BY month,day 
  WITH CUBE 
  ORDER BY GROUPING__ID;
  
  Equivalent to:
  SELECT NULL,NULL,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM tmp.chenyj_win_t3 
  UNION ALL 
  SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM tmp.chenyj_win_t3 GROUP BY month 
  UNION ALL 
  SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM tmp.chenyj_win_t3 GROUP BY day
  UNION ALL 
  SELECT month,day,COUNT(DISTINCT cookieid) AS uv,0 AS GROUPING__ID FROM tmp.chenyj_win_t3 GROUP BY month,day;
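The GROUPING__ID values shown in the results follow the bit convention Hive has used since 2.3 (older releases used an inverted encoding, so check your version). A sketch of how the bits are derived, with `grouping_id` a helper invented here:

```python
# GROUPING__ID (Hive 2.3+ convention assumed): one bit per GROUP BY
# column, rightmost column = bit 0; a bit is 1 when that column is
# aggregated away (shown as NULL) in the grouping set.
def grouping_id(group_by_cols, grouping_set):
    gid = 0
    for i, col in enumerate(reversed(group_by_cols)):
        if col not in grouping_set:
            gid |= 1 << i
    return gid

cols = ["month", "day"]
print(grouping_id(cols, {"month", "day"}))  # 0: both columns kept
print(grouping_id(cols, {"month"}))         # 1: day rolled up
print(grouping_id(cols, {"day"}))           # 2: month rolled up
print(grouping_id(cols, set()))             # 3: grand total
```

ROLLUP is simply the hierarchical subset of these combinations: for GROUP BY month, day WITH ROLLUP you get (month, day), (month), and the grand total, but not (day) alone.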