Classic Hive SQL interview questions, implemented in SQL, Spark SQL (DSL), and Spark RDD

1. Case 1

Requirement: find the users who logged in on three or more consecutive days.

1.1 Data preparation

guid01,2018-03-01
guid01,2018-02-28
guid01,2018-03-01
guid01,2018-03-04
guid01,2018-03-02
guid01,2018-03-05
guid01,2018-03-06
guid01,2018-03-07
guid02,2018-03-01
guid02,2018-03-02
guid02,2018-03-03
guid02,2018-03-06

Field description:

uid: user id, e.g. guid01

login_time: user login date, e.g. 2018-03-01

1.2 SQL implementation
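
The queries below read from a view named v_guid. A minimal Hive-style setup sketch (the staging table name t_guid and the file path are assumptions; only the view name v_guid comes from the queries themselves):

-- hedged setup sketch: t_guid and the path are placeholders
CREATE TABLE IF NOT EXISTS t_guid (
  uid        STRING,
  login_time STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA LOCAL INPATH '/path/to/guid.txt' INTO TABLE t_guid;

CREATE VIEW IF NOT EXISTS v_guid AS
SELECT uid, login_time FROM t_guid;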

Step 1:

--Step 1: a user may log in several times on the same day, so deduplicate first (here by grouping on uid + login_time),
--then number the rows within each uid, ordered by login_time, using the row_number window function
select
  uid,
  login_time,
  row_number() over(partition by uid order by login_time) rw
from
  v_guid
group by
  uid,
  login_time;

Step 1 result:

+------+----------+---+
|   uid|login_time| rw|
+------+----------+---+
|guid02|2018-03-01|  1|
|guid02|2018-03-02|  2|
|guid02|2018-03-03|  3|
|guid02|2018-03-06|  4|
|guid01|2018-02-28|  1|
|guid01|2018-03-01|  2|
|guid01|2018-03-02|  3|
|guid01|2018-03-04|  4|
|guid01|2018-03-05|  5|
|guid01|2018-03-06|  6|
|guid01|2018-03-07|  7|
+------+----------+---+
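
The GROUP BY uid, login_time above exists only to deduplicate same-day logins; a hedged equivalent sketch of just that deduplication step:

-- equivalent deduplication using DISTINCT
select distinct
  uid,
  login_time
from
  v_guid;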

Step 2:

--Step 2: subtract the row number rw (in days) from login_time; for consecutive login days the difference is the same date.
--Example: if a user logged in on 2018-03-01, 2018-03-02 and 2018-03-03, the row numbers are 1, 2, 3 and every row yields 2018-02-28, so the three days are consecutive.
--If the login dates are 2018-03-01, 2018-03-02, 2018-03-03 and 2018-03-06 with row numbers 1, 2, 3, 4, the first three rows still yield 2018-02-28, but 2018-03-06 yields 2018-03-02, which shows that day is not consecutive with the previous three.
select          
  t1.uid,      
  t1.login_time,
  t1.rw,
  date_sub(t1.login_time,t1.rw) sub_num
from
  (select
    uid ,
    login_time,
    row_number()over(partition by uid order by login_time asc) rw
  from
    v_guid
  group by
    uid ,login_time)t1

Step 2 result:

+------+----------+---+----------+
|   uid|login_time| rw|   sub_num|
+------+----------+---+----------+
|guid02|2018-03-01|  1|2018-02-28|
|guid02|2018-03-02|  2|2018-02-28|
|guid02|2018-03-03|  3|2018-02-28|
|guid02|2018-03-06|  4|2018-03-02|
|guid01|2018-02-28|  1|2018-02-27|
|guid01|2018-03-01|  2|2018-02-27|
|guid01|2018-03-02|  3|2018-02-27|
|guid01|2018-03-04|  4|2018-02-28|
|guid01|2018-03-05|  5|2018-02-28|
|guid01|2018-03-06|  6|2018-02-28|
|guid01|2018-03-07|  7|2018-02-28|
+------+----------+---+----------+

Final step:

--Final step: group by uid and the date difference sub_num, then aggregate within each group to get the start date, the end date and the number of consecutive login days; keep only groups of three days or more
select
t2.uid,
min(t2.login_time) start_time, --start date
max(t2.login_time) end_time, --end date
count(1) total_day --number of consecutive login days
from
(select
t1.uid,t1.login_time,t1.rw,
date_sub(t1.login_time,t1.rw) sub_num
from
(select
uid ,login_time,
row_number()over(partition by uid order by login_time asc) rw
from
v_guid
group by
uid ,login_time)t1)t2
group by
t2.uid,t2.sub_num
having
total_day>=3;

Final result:

+------+----------+----------+---------+
|   uid|start_time|  end_time|total_day|
+------+----------+----------+---------+
|guid02|2018-03-01|2018-03-03|        3|
|guid01|2018-02-28|2018-03-02|        3|
|guid01|2018-03-04|2018-03-07|        4|
+------+----------+----------+---------+

2. Case 2

Requirement: for each shop, compute the cumulative sales amount of the current month plus all previous months.

2.1 Data preparation

shop1,2019/1/18,500
shop1,2019/2/10,500
shop1,2019/2/10,200
shop1,2019/2/11,600
shop1,2019/2/12,400
shop1,2019/2/13,200
shop1,2019/2/15,100
shop1, 2019/3/5,180
shop1, 2019/4/5,280
shop1, 2019/4/6,220
shop2,2019/2/10,100
shop2,2019/2/11,100
shop2,2019/2/13,100
shop2,2019/3/15,100
shop2,2019/4/15,100

Field description:

shop1 is id, the shop id

2019/1/18 is dt, the transaction date

500 is money, the transaction amount

2.2 SQL implementation
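
As in case 1, the queries read from a view, here named v_shop; a minimal setup sketch (the table name t_shop, the column types and the file path are assumptions):

CREATE TABLE IF NOT EXISTS t_shop (
  id    STRING,
  dt    STRING,
  money DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA LOCAL INPATH '/path/to/shop.txt' INTO TABLE t_shop;

CREATE VIEW IF NOT EXISTS v_shop AS
SELECT id, dt, money FROM t_shop;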

--Step 1: use the string function substr to cut the year and month out of the dt field

select
id,
substr(dt,0,6) sub_dt,
money
from
v_shop

Step 1 result:

+-----+------+-----+
|   id|sub_dt|money|
+-----+------+-----+
|shop1|2019/1|  500|
|shop1|2019/2|  500|
|shop1|2019/2|  200|
|shop1|2019/2|  600|
|shop1|2019/2|  400|
|shop1|2019/2|  200|
|shop1|2019/2|  100|
|shop1|2019/3|  180|
|shop1|2019/4|  280|
|shop1|2019/4|  220|
|shop2|2019/2|  100|
|shop2|2019/2|  100|
|shop2|2019/2|  100|
|shop2|2019/3|  100|
|shop2|2019/4|  100|
+-----+------+-----+
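
Note that string positions in substr are 1-based; position 0 is treated the same as position 1 (as the output above shows), so the following two calls return the same '2019/1' prefix. A quick sanity check, runnable as-is in spark-sql:

select substr('2019/1/18', 0, 6), substr('2019/1/18', 1, 6);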

Step 2: group by id and sub_dt, sum the amount within each group, and sort

select
t1.id,t1.sub_dt,
sum(money) total_money
from
(select
id,
substr(dt,0,6) sub_dt,
money
from
v_shop)t1
group by
t1.id,t1.sub_dt
order by
t1.id,t1.sub_dt

Step 2 result:

+-----+------+-----------+
|   id|sub_dt|total_money|
+-----+------+-----------+
|shop1|2019/1|      500.0|
|shop1|2019/2|     2000.0|
|shop1|2019/3|      180.0|
|shop1|2019/4|      500.0|
|shop2|2019/2|      300.0|
|shop2|2019/3|      100.0|
|shop2|2019/4|      100.0|
+-----+------+-----------+

Final step: accumulate the current month and all previous months with a window function, partitioned by id and ordered by sub_dt

select
t2.id,t2.sub_dt,t2.total_money,
sum(total_money)over(partition by t2.id order by t2.id,t2.sub_dt ) sum_money
from
(select
t1.id,t1.sub_dt,
sum(money) total_money
from
(select
id,
substr(dt,0,6) sub_dt,
money
from
v_shop)t1
group by
t1.id,t1.sub_dt
order by
t1.id,t1.sub_dt)t2

Final result:

+-----+------+-----------+---------+
|   id|sub_dt|total_money|sum_money|
+-----+------+-----------+---------+
|shop2|2019/2|      300.0|    300.0|
|shop2|2019/3|      100.0|    400.0|
|shop2|2019/4|      100.0|    500.0|
|shop1|2019/1|      500.0|    500.0|
|shop1|2019/2|     2000.0|   2500.0|
|shop1|2019/3|      180.0|   2680.0|
|shop1|2019/4|      500.0|   3180.0|
+-----+------+-----------+---------+
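
The running total works because a sum() over(partition by ... order by ...) window defaults to the frame RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. Since every (id, sub_dt) pair appears only once here, spelling the frame out with ROWS gives the same result; a hedged sketch that also folds the two subqueries into one:

select
  id, sub_dt, total_money,
  sum(total_money) over(partition by id order by sub_dt
    rows between unbounded preceding and current row) sum_money
from
  (select
    id,
    substr(dt,0,6) sub_dt,
    sum(money) total_money
  from
    v_shop
  group by
    id, substr(dt,0,6)) t2;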

 

3. Case 3 (the grand finale, a classic among classics)

Requirement: for the same user id, when a row's begin_time minus the previous row's end_time is greater than 10 minutes, start a new group; when the gap is less than 10 minutes, merge the two rows.

3.1 Data preparation

1,2020-2-18 14:20,2020-2-18 14:46,20
1,2020-2-18 14:47,2020-2-18 15:20,30
1,2020-2-18 15:37,2020-2-18 16:05,40
1,2020-2-18 16:06,2020-2-18 17:20,50
1,2020-2-18 17:21,2020-2-18 18:03,60
2,2020-2-18 14:18,2020-2-18 15:01,20
2,2020-2-18 15:20,2020-2-18 15:30,30
2,2020-2-18 16:01,2020-2-18 16:40,40
2,2020-2-18 16:44,2020-2-18 17:40,50
3,2020-2-18 14:39,2020-2-18 15:35,20
3,2020-2-18 15:36,2020-2-18 15:24,30

Field description:

1 is the user id

2020-2-18 14:20 is the start time of the session: start_time

2020-2-18 14:46 is the end time of the session: end_time

20 is the user's downlink traffic: flow

3.2 SQL implementation
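
As before, the queries read from a view, here named v_flow; a minimal setup sketch (the table name t_flow, the column types and the file path are assumptions):

CREATE TABLE IF NOT EXISTS t_flow (
  uid        STRING,
  start_time STRING,
  end_time   STRING,
  flow       DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA LOCAL INPATH '/path/to/data.txt' INTO TABLE t_flow;

CREATE VIEW IF NOT EXISTS v_flow AS
SELECT uid, start_time, end_time, flow FROM t_flow;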

Step 1: use the LAG window function, partitioned by uid and ordered by start_time, to pull the previous row's end_time down one row; when there is no previous row (the value would be NULL), fall back to start_time, producing the new column lag_time

select
  uid,
  start_time,
  end_time,flow,
  LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
from
v_flow

Step 1 result:

+---+---------------+---------------+----+---------------+
|uid|     start_time|       end_time|flow|       lag_time|
+---+---------------+---------------+----+---------------+
|  1|2020-2-18 14:20|2020-2-18 14:46|  20|2020-2-18 14:20|
|  1|2020-2-18 14:47|2020-2-18 15:20|  30|2020-2-18 14:46|
|  1|2020-2-18 15:37|2020-2-18 16:05|  40|2020-2-18 15:20|
|  1|2020-2-18 16:06|2020-2-18 17:20|  50|2020-2-18 16:05|
|  1|2020-2-18 17:21|2020-2-18 18:03|  60|2020-2-18 17:20|
|  3|2020-2-18 14:39|2020-2-18 15:35|  20|2020-2-18 14:39|
|  3|2020-2-18 15:36|2020-2-18 15:24|  30|2020-2-18 15:35|
|  2|2020-2-18 14:18|2020-2-18 15:01|  20|2020-2-18 14:18|
|  2|2020-2-18 15:20|2020-2-18 15:30|  30|2020-2-18 15:01|
|  2|2020-2-18 16:01|2020-2-18 16:40|  40|2020-2-18 15:30|
|  2|2020-2-18 16:44|2020-2-18 17:40|  50|2020-2-18 16:40|
+---+---------------+---------------+----+---------------+

--Step 2: use the to_timestamp function to convert the string columns into proper timestamps

select
  t1.uid,
  to_timestamp(t1.start_time)st,
  to_timestamp(t1.end_time)et,
  to_timestamp(t1.lag_time)lt,
  t1.flow
from
  (select
    uid,start_time,end_time,flow,
    LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
  from
  v_flow)t1

Step 2 result:

+---+-------------------+-------------------+-------------------+----+
|uid|                 st|                 et|                 lt|flow|
+---+-------------------+-------------------+-------------------+----+
|  3|2020-02-18 14:39:00|2020-02-18 15:35:00|2020-02-18 14:39:00|  20|
|  3|2020-02-18 15:36:00|2020-02-18 15:24:00|2020-02-18 15:35:00|  30|
|  1|2020-02-18 14:20:00|2020-02-18 14:46:00|2020-02-18 14:20:00|  20|
|  1|2020-02-18 14:47:00|2020-02-18 15:20:00|2020-02-18 14:46:00|  30|
|  1|2020-02-18 15:37:00|2020-02-18 16:05:00|2020-02-18 15:20:00|  40|
|  1|2020-02-18 16:06:00|2020-02-18 17:20:00|2020-02-18 16:05:00|  50|
|  1|2020-02-18 17:21:00|2020-02-18 18:03:00|2020-02-18 17:20:00|  60|
|  2|2020-02-18 14:18:00|2020-02-18 15:01:00|2020-02-18 14:18:00|  20|
|  2|2020-02-18 15:20:00|2020-02-18 15:30:00|2020-02-18 15:01:00|  30|
|  2|2020-02-18 16:01:00|2020-02-18 16:40:00|2020-02-18 15:30:00|  40|
|  2|2020-02-18 16:44:00|2020-02-18 17:40:00|2020-02-18 16:40:00|  50|
+---+-------------------+-------------------+-------------------+----+

--Step 3: use unix_timestamp to turn the timestamps from the previous step into epoch seconds, then subtract the lagged end time lt from the start time st and divide by 60 to get the gap in minutes

select
t2.uid,t2.st,t2.et,t2.lt,
(unix_timestamp(t2.st) - unix_timestamp(t2.lt))/60 sub_time
from
(select
t1.uid,to_timestamp(t1.start_time)st,to_timestamp(t1.end_time)et,to_timestamp(t1.lag_time)lt,
t1.flow
from
(select
uid,start_time,end_time,flow,
LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
from
v_flow)t1)t2

Step 3 result:

+---+-------------------+-------------------+-------------------+--------+
|uid|                 st|                 et|                 lt|sub_time|
+---+-------------------+-------------------+-------------------+--------+
|  3|2020-02-18 14:39:00|2020-02-18 15:35:00|2020-02-18 14:39:00|     0.0|
|  3|2020-02-18 15:36:00|2020-02-18 15:24:00|2020-02-18 15:35:00|     1.0|
|  1|2020-02-18 14:20:00|2020-02-18 14:46:00|2020-02-18 14:20:00|     0.0|
|  1|2020-02-18 14:47:00|2020-02-18 15:20:00|2020-02-18 14:46:00|     1.0|
|  1|2020-02-18 15:37:00|2020-02-18 16:05:00|2020-02-18 15:20:00|    17.0|
|  1|2020-02-18 16:06:00|2020-02-18 17:20:00|2020-02-18 16:05:00|     1.0|
|  1|2020-02-18 17:21:00|2020-02-18 18:03:00|2020-02-18 17:20:00|     1.0|
|  2|2020-02-18 14:18:00|2020-02-18 15:01:00|2020-02-18 14:18:00|     0.0|
|  2|2020-02-18 15:20:00|2020-02-18 15:30:00|2020-02-18 15:01:00|    19.0|
|  2|2020-02-18 16:01:00|2020-02-18 16:40:00|2020-02-18 15:30:00|    31.0|
|  2|2020-02-18 16:44:00|2020-02-18 17:40:00|2020-02-18 16:40:00|     4.0|
+---+-------------------+-------------------+-------------------+--------+

--Step 4: use case when ... end to flag each row: 1 when the gap is at least 10 minutes, 0 when it is under 10 minutes

select
t3.uid,
t3.st,
t3.et,
t3.lt,
t3.flow,
t3.sub_time,
case
when sub_time>=10 then 1
when sub_time <10 then 0
end as flag
from
(select
t2.uid,t2.st,t2.et,t2.lt,t2.flow,
(unix_timestamp(t2.st) - unix_timestamp(t2.lt))/60 sub_time
from
(select
t1.uid,to_timestamp(t1.start_time)st,to_timestamp(t1.end_time)et,to_timestamp(t1.lag_time)lt,
t1.flow
from
(select
uid,start_time,end_time,flow,
LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
from
v_flow)t1)t2)t3

Step 4 result:

+---+-------------------+-------------------+-------------------+----+--------+----+
|uid|                 st|                 et|                 lt|flow|sub_time|flag|
+---+-------------------+-------------------+-------------------+----+--------+----+
|  3|2020-02-18 14:39:00|2020-02-18 15:35:00|2020-02-18 14:39:00|  20|     0.0|   0|
|  3|2020-02-18 15:36:00|2020-02-18 15:24:00|2020-02-18 15:35:00|  30|     1.0|   0|
|  1|2020-02-18 14:20:00|2020-02-18 14:46:00|2020-02-18 14:20:00|  20|     0.0|   0|
|  1|2020-02-18 14:47:00|2020-02-18 15:20:00|2020-02-18 14:46:00|  30|     1.0|   0|
|  1|2020-02-18 15:37:00|2020-02-18 16:05:00|2020-02-18 15:20:00|  40|    17.0|   1|
|  1|2020-02-18 16:06:00|2020-02-18 17:20:00|2020-02-18 16:05:00|  50|     1.0|   0|
|  1|2020-02-18 17:21:00|2020-02-18 18:03:00|2020-02-18 17:20:00|  60|     1.0|   0|
|  2|2020-02-18 14:18:00|2020-02-18 15:01:00|2020-02-18 14:18:00|  20|     0.0|   0|
|  2|2020-02-18 15:20:00|2020-02-18 15:30:00|2020-02-18 15:01:00|  30|    19.0|   1|
|  2|2020-02-18 16:01:00|2020-02-18 16:40:00|2020-02-18 15:30:00|  40|    31.0|   1|
|  2|2020-02-18 16:44:00|2020-02-18 17:40:00|2020-02-18 16:40:00|  50|     4.0|   0|
+---+-------------------+-------------------+-------------------+----+--------+----+

--Step 5: use a window function to compute the running sum of flag; the accumulated value acts as a group id, separating the segments whose internal gaps are under 10 minutes
--(two consecutive gaps can both exceed 10 minutes, which is exactly why flag must be accumulated rather than used directly)


select
t4.uid,
t4.st,
t4.et,
t4.flow,
t4.flag,
sum(t4.flag)over(partition by t4.uid order by t4.uid,t4.st) sum_flag
from
(select
t3.uid,
t3.st,
t3.et,
t3.lt,
t3.flow,
t3.sub_time,
case
when sub_time>=10 then 1
when sub_time <10 then 0
end as flag
from
(select
t2.uid,t2.st,t2.et,t2.lt,t2.flow,
(unix_timestamp(t2.st) - unix_timestamp(t2.lt))/60 sub_time
from
(select
t1.uid,to_timestamp(t1.start_time)st,to_timestamp(t1.end_time)et,to_timestamp(t1.lag_time)lt,
t1.flow
from
(select
uid,start_time,end_time,flow,
LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
from
v_flow)t1)t2)t3)t4

Step 5 result:

+---+-------------------+-------------------+----+----+--------+
|uid|                 st|                 et|flow|flag|sum_flag|
+---+-------------------+-------------------+----+----+--------+
|  3|2020-02-18 14:39:00|2020-02-18 15:35:00|  20|   0|       0|
|  3|2020-02-18 15:36:00|2020-02-18 15:24:00|  30|   0|       0|
|  1|2020-02-18 14:20:00|2020-02-18 14:46:00|  20|   0|       0|
|  1|2020-02-18 14:47:00|2020-02-18 15:20:00|  30|   0|       0|
|  1|2020-02-18 15:37:00|2020-02-18 16:05:00|  40|   1|       1|
|  1|2020-02-18 16:06:00|2020-02-18 17:20:00|  50|   0|       1|
|  1|2020-02-18 17:21:00|2020-02-18 18:03:00|  60|   0|       1|
|  2|2020-02-18 14:18:00|2020-02-18 15:01:00|  20|   0|       0|
|  2|2020-02-18 15:20:00|2020-02-18 15:30:00|  30|   1|       1|
|  2|2020-02-18 16:01:00|2020-02-18 16:40:00|  40|   1|       2|
|  2|2020-02-18 16:44:00|2020-02-18 17:40:00|  50|   0|       2|
+---+-------------------+-------------------+----+----+--------+

--Final step: group by uid and sum_flag, then within each group take the minimum start time, the maximum end time
--and the sum of the flow, which gives the final result:
--rows whose start time is within 10 minutes of the previous row's end time are merged, and a gap of more than 10 minutes starts a new group

select
t5.uid,
min(t5.st) st_time,
max(t5.et) et_time,
sum(t5.flow) count_flow
from
(select
t4.uid,
t4.st,
t4.et,
t4.flow,
t4.flag,
sum(t4.flag)over(partition by t4.uid order by t4.uid,t4.st) sum_flag
from
(select
t3.uid,
t3.st,
t3.et,
t3.lt,
t3.flow,
t3.sub_time,
case
when sub_time>=10 then 1
when sub_time <10 then 0
end as flag
from
(select
t2.uid,t2.st,t2.et,t2.lt,t2.flow,
(unix_timestamp(t2.st) - unix_timestamp(t2.lt))/60 sub_time
from
(select
t1.uid,to_timestamp(t1.start_time)st,to_timestamp(t1.end_time)et,to_timestamp(t1.lag_time)lt,
t1.flow
from
(select
uid,start_time,end_time,flow,
LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
from
v_flow)t1)t2)t3)t4)t5
group by 
t5.uid,t5.sum_flag

Final result:

+---+-------------------+-------------------+----------+
|uid|            st_time|            et_time|count_flow|
+---+-------------------+-------------------+----------+
|  3|2020-02-18 14:39:00|2020-02-18 15:35:00|      50.0|
|  1|2020-02-18 14:20:00|2020-02-18 15:20:00|      50.0|
|  1|2020-02-18 15:37:00|2020-02-18 18:03:00|     150.0|
|  2|2020-02-18 14:18:00|2020-02-18 15:01:00|      20.0|
|  2|2020-02-18 15:20:00|2020-02-18 15:30:00|      30.0|
|  2|2020-02-18 16:01:00|2020-02-18 17:40:00|      90.0|
+---+-------------------+-------------------+----------+

 

Appendix 1: implementing the three cases above with Spark SQL

4. Case 1: find the users who logged in on three or more consecutive days

Spark SQL implementation:

package com.zxx.spark.day10

import org.apache.spark.sql.{DataFrame, SparkSession}

object SparkSqlLoginDemo1 {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local[*]").getOrCreate()
    val df: DataFrame = spark.read.option("inferSchema", "true").csv("D://guid.txt")
    val df2 = df.toDF("uid", "login_time")
    df2.createTempView("v_guid")
    //run the SQL through Spark SQL; requirement: users who logged in on three or more consecutive days
    import org.apache.spark.sql.functions._
    val df3: DataFrame = spark.sql(
      """
        |select
        |t2.uid,
        |min(t2.login_time) start_time,
        |max(t2.login_time) end_time,
        |count(1) total_day
        |from
        |(select
        |t1.uid,t1.login_time,t1.rw,
        |date_sub(t1.login_time,t1.rw) sub_num
        |from
        |(select
        |uid ,login_time,
        |row_number()over(partition by uid order by login_time asc) rw
        |from
        |v_guid
        |group by
        |uid ,login_time)t1)t2
        |group by
        |t2.uid,t2.sub_num
        |having
        |total_day>=3
        |""".stripMargin)
    df3.printSchema()
    df3.show()
    spark.stop()
    //+------+----------+----------+---------+
    //|   uid|start_time|  end_time|total_day|
    //+------+----------+----------+---------+
    //|guid02|2018-03-01|2018-03-03|        3|
    //|guid01|2018-02-28|2018-03-02|        3|
    //|guid01|2018-03-04|2018-03-07|        4|
    //+------+----------+----------+---------+

  }

}

5. Case 2: for each shop, compute the cumulative sales amount of the current month plus all previous months

Spark SQL implementation:

package com.zxx.spark.day10

import org.apache.spark.sql.{DataFrame, SparkSession}

object SparkSqlShopDemo1 {
  def main(args: Array[String]): Unit = {
    //create the SparkSession first
    val spark: SparkSession = SparkSession.builder().master("local[*]").getOrCreate()
    val df: DataFrame = spark.read.csv("D://shop.txt")
    val df2: DataFrame = df.toDF("id", "dt", "money")
    df2.createTempView("v_shop")
    val df3: DataFrame = spark.sql(
      """
        |select
        |t2.id,t2.sub_dt,t2.total_money,
        |sum(total_money)over(partition by t2.id order by t2.id,t2.sub_dt ) sum_money
        |from
        |(select
        |t1.id,t1.sub_dt,
        |sum(money) total_money
        |from
        |(select
        |id,
        |substr(dt,0,6) sub_dt,
        |money
        |from
        |v_shop)t1
        |group by
        |t1.id,t1.sub_dt
        |order by
        |t1.id,t1.sub_dt)t2
        |""".stripMargin)
    df3.show()
    spark.stop()


  }

}

6. Case 3: for the same user id, when begin_time minus the previous row's end_time is greater than 10 minutes, start a new group; when the gap is less than 10 minutes, merge the two rows

Spark SQL implementation:

package com.zxx.spark.day10

import org.apache.spark.sql.{DataFrame, SparkSession}

object SparkSqlFlowDemo1 {
  def main(args: Array[String]): Unit = {
    //create the SparkSession
    val spark: SparkSession = SparkSession.builder().master("local[*]")
      .getOrCreate()
    //read the source data
    val df: DataFrame = spark.read.csv("D://data.txt")
    val df1: DataFrame = df.toDF("uid", "start_time", "end_time", "flow")
    df1.createTempView("v_flow")

    val df2: DataFrame = spark.sql(
      """
        |select
        |t5.uid,
        |min(t5.st) st_time,
        |max(t5.et) et_time,
        |sum(t5.flow) count_flow
        |from
        |(select
        |t4.uid,
        |t4.st,
        |t4.et,
        |t4.flow,
        |t4.flag,
        |sum(t4.flag)over(partition by t4.uid order by t4.uid,t4.st) sum_flag
        |from
        |(select
        |t3.uid,
        |t3.st,
        |t3.et,
        |t3.lt,
        |t3.flow,
        |t3.sub_time,
        |case
        |when sub_time>=10 then 1
        |when sub_time <10 then 0
        |end as flag
        |from
        |(select
        |t2.uid,t2.st,t2.et,t2.lt,t2.flow,
        |(unix_timestamp(t2.st) - unix_timestamp(t2.lt))/60 sub_time
        |from
        |(select
        |t1.uid,to_timestamp(t1.start_time)st,to_timestamp(t1.end_time)et,to_timestamp(t1.lag_time)lt,
        |t1.flow
        |from
        |(select
        |uid,start_time,end_time,flow,
        |LAG(end_time,1,start_time)over(partition by uid  order by start_time) lag_time
        |from
        |v_flow)t1)t2)t3)t4)t5
        |group by
        |t5.uid,t5.sum_flag
        |order by uid asc ,st_time asc
        |""".stripMargin)
    //    df2.printSchema()
    df2.show()
    spark.stop()
  }


}

Appendix 2: implementing the three cases above with Spark SQL's DSL syntax

Case 1: find the users who logged in on three or more consecutive days

Spark SQL DSL implementation:

package com.zxx.spark.day11

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

object DSLloginDemo1 {
  def main(args: Array[String]): Unit = {
    //create the SparkSession
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName(this.getClass.getSimpleName).getOrCreate()
    //read the source data with the SparkSession
    val df: DataFrame = spark.read.csv("data/data1.txt")
    //import implicit conversions
    import spark.implicits._
    //import Spark's built-in functions
    import org.apache.spark.sql.functions._
    val df1: DataFrame = df.toDF("uid", "login_time")
    df1.distinct().select( //deduplicate first
      $"uid",
      $"login_time",
      row_number().over(Window.partitionBy("uid").orderBy("login_time")).as("rn")
    ).select( //subtract the row number from the login date
      'uid,
      'login_time,
      date_sub('login_time, 'rn).as("date_diff")
    ).groupBy('uid, 'date_diff) //group by uid and the date difference
      .agg( //aggregate each group: number of consecutive login days, first login date and last login date
        count("*").as("count_day"),
        min("login_time").as("start_time"),
        max("login_time").as("end_time")
      ).where($"count_day" >= 3).drop("date_diff").show()

    spark.stop()
  }

}

Case 2: for each shop, compute the cumulative sales amount of the current month plus all previous months

Spark SQL DSL implementation:

package com.zxx.spark.day11


import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.{DataFrame, SparkSession}

object DSLShopDemo1 {
  def main(args: Array[String]): Unit = {

    //create the SparkSession
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName(this.getClass.getSimpleName).getOrCreate()
    //read the source data
    val df: DataFrame = spark.read.csv("D://shop.txt")
    val df2: DataFrame = df.toDF("id", "dt", "money")
    //import Spark's built-in functions
    import org.apache.spark.sql.functions._
    //import implicit conversions
    import spark.implicits._
    df2.select( //extract the year and month from the date string
      $"id",
      $"dt",
      $"money",
      substring($"dt", 0, 7).as("sub_date")
    ).groupBy($"id", $"sub_date").agg(//按照用户id和年份月份进行分组,然后在组里进行聚合,求出同一店铺,同一个月份的总金额
      sum($"money").as("s_money")
    ).select( //use the window function sum() over(), partitioned by shop and ordered by month, to compute the running total
      $"id",
      $"sub_date",
      $"s_money",
      sum($"s_money").over(Window.partitionBy("id").orderBy("sub_date")).as("sum_money")
    ).show()

    spark.stop()
  }

}

Case 3: for the same user id, when begin_time minus the previous row's end_time is greater than 10 minutes, start a new group; when the gap is less than 10 minutes, merge the two rows

Spark SQL DSL implementation:

package com.zxx.spark.day11


import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.{DataFrame, SparkSession}

object DSLFlowDemo1 {
  def main(args: Array[String]): Unit = {
    //create the SparkSession
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName(this.getClass.getSimpleName).getOrCreate()
    //read the source data
    val df: DataFrame = spark.read.option("header", "true").csv("data/data2.csv")
    //step 1: lag the edt column so that sdt can be compared with the previous row's edt
    //import the implicit conversions first, then the built-in functions
    import org.apache.spark.sql.functions._
    import spark.implicits._
    df.select( //first lag the end-time column with a window
      'uid,
      'sdt,
      'edt,
      'flow,
      expr("lag(edt, 1,sdt) over(partition By uid order by sdt ) as lag_time ")
    ).select( //compute the time gap, then flag each row with the if function
      'uid,
      'sdt,
      'edt,
      'flow,
      expr("if ((to_unix_timestamp(sdt,'yyyy-MM-dd HH:mm:ss')-to_unix_timestamp(lag_time,'yyyy-MM-dd HH:mm:ss'))/60>10 ,1, 0 )as flag")
    ).select( //accumulate flag with sum() over() so that the under-10-minute and over-10-minute rows can be told apart in the later grouping
      'uid,
      'sdt,
      'edt,
      'flow,
      sum($"flag").over(Window.partitionBy("uid").orderBy("sdt")).as("s_flag")
    ).groupBy( //group by uid and s_flag
      'uid,
      's_flag
    ).agg( //within each group take the start time, the end time and the total flow, which effectively merges the rows whose gaps are under 10 minutes
      min("sdt").as("start_time"),
      max("edt").as("end_time"),
      sum("flow").as("total_flow")
    ).drop("s_flag").show()

    spark.stop()
  }

}

Appendix 3: implementing the three cases above with Spark RDD operators

Case 1:

package com.zxx.spark.day12

import java.text.SimpleDateFormat
import java.util.{Calendar, Date}

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}


object SparkRddContinueLogin {
  def main(args: Array[String]): Unit = {
    //requirement: find the users who logged in on three or more consecutive days
    //1. create the SparkContext
    val conf: SparkConf = new SparkConf().setAppName(this.getClass.getSimpleName)
    //decide whether to run in local mode
    val flag: Boolean = args(0).toBoolean
    if (flag == true) {
      conf.setMaster("local[*]")
    }
    val sc: SparkContext = new SparkContext(conf)
    //create the RDD
    val lines: RDD[String] = sc.textFile("data/data1.txt")
    val rdd: RDD[(String, String)] = lines.map(e => {
      val fields: Array[String] = e.split(",")
      (fields(0), fields(1))
    })
    //deduplicate the data first
    val disRdd: RDD[(String, String)] = rdd.distinct()
    //group the data by uid and sort within each group
    val grouped: RDD[(String, Iterable[String])] = disRdd.groupByKey()
    val flatMaped: RDD[(String, (String, String, Int))] = grouped.flatMapValues(it => {
      val ls: List[String] = it.toList.sortBy(e => e)
      //assign a row number to every record
      val sdf: SimpleDateFormat = new SimpleDateFormat("yyyy-MM-dd")
      val calendar: Calendar = Calendar.getInstance()
      var i = 0;
      for (e <- 0 to ls.length - 1) yield {
        i += 1
        val dt: Date = sdf.parse(ls(e))
        calendar.setTime(dt)
        calendar.add(Calendar.DAY_OF_MONTH, -i)
        val subTime: String = sdf.format(calendar.getTime)
        (ls(e), subTime, 1)
      }
    })

    //reshape the data into key-value form
    val maped: RDD[((String, String), (String, String, Int))] = flatMaped.map(e => {
      ((e._1, e._2._2), (e._2._1, e._2._1, e._2._3))
    })
    val res: RDD[((String, String), (String, String, Int))] = maped.reduceByKey((a, b) => {
      val min: String = if (a._1 < b._1) a._1 else b._1
      val max = if (a._1 > b._1) a._1 else b._1
      val count = a._3 + b._3
      (min, max, count)
    })
    val res1: RDD[(String, String, String, Int)] = res.filter(e => {
      e._2._3 >= 3
    }).map(e => {
      (e._1._1, e._2._1, e._2._2, e._2._3)
    })
    println(res1.collect().toBuffer)
    sc.stop()
    //result:
//    ArrayBuffer((guid01,2018-02-28,2018-03-02,3), (guid02,2018-03-01,2018-03-03,3), (guid01,2018-03-04,2018-03-07,4))

  }

}

Case 2:

package com.zxx.spark.day12

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object SparkRDDShopDemo {
  def main(args: Array[String]): Unit = {
    //requirement: for each shop, compute the cumulative sales amount of the current month plus all previous months
    //create the SparkContext
    val conf: SparkConf = new SparkConf().setAppName(this.getClass.getSimpleName)
    //decide whether to run in local mode
    val flag: Boolean = args(0).toBoolean
    if (flag == true) {
      conf.setMaster("local[*]")
    }
    val sc: SparkContext = new SparkContext(conf)
    //create the RDD
    val rdd: RDD[String] = sc.textFile("data/shop.txt")
    val maped: RDD[((String, String), Double)] = rdd.map(e => {
      val fields: Array[String] = e.split(",")
      val shopName: String = fields(0)
      val dt: String = fields(1)
      val yearAndmonth: String = dt.substring(0, 6)
      val money: Double = fields(2).toDouble
      ((shopName, yearAndmonth), money)
    })
    //sum the amount per shop per month
    val reduced: RDD[((String, String), Double)] = maped.reduceByKey(_ + _)
    val remap: RDD[(String, (String, Double))] = reduced.map(e => {
      (e._1._1, (e._1._2, e._2))
    })
    //for each shop, accumulate the current month and all previous months
    val grouped: RDD[(String, Iterable[(String, Double)])] = remap.groupByKey()
    val res: RDD[(String, (String, Double, Double))] = grouped.flatMapValues(it => {
      val list: List[(String, Double)] = it.toList.sortBy(e => e._1)
      var temp = 0.0
      for (elem <- list) yield {
        temp += elem._2
        (elem._1, elem._2,temp)
      }
    })
    println(res.collect().toBuffer)
    //result (note: the leading spaces in some source rows turn ' 2019/' into its own key; trimming dt before the substring would avoid this):
    //ArrayBuffer((shop2,(2019/2,300.0,300.0)), (shop2,(2019/3,100.0,400.0)), (shop2,(2019/4,100.0,500.0)), (shop1,( 2019/,680.0,680.0)), (shop1,(2019/1,500.0,1180.0)), (shop1,(2019/2,2000.0,3180.0)))
    sc.stop()
  }
}

Case 3:

package com.zxx.spark.day10

import java.text.SimpleDateFormat

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object FlowRddDemo {
  def main(args: Array[String]): Unit = {
    //create the SparkContext and read the source data into an RDD
    val conf: SparkConf = new SparkConf().setAppName(this.getClass.getSimpleName).setMaster("local[*]")
    val sc: SparkContext = new SparkContext(conf)
    val lines: RDD[String] = sc.textFile("D://data.txt")
    val tp: RDD[(String, (Long, Long, Double))] = lines.mapPartitions(it => {
      val sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm")
      it.map(e => {
        val fields: Array[String] = e.split(",")
        val uid: String = fields(0)
        val startTime: Long = sdf.parse(fields(1)).getTime
        val endTime: Long = sdf.parse(fields(2)).getTime
        val flow: Double = fields(3).toDouble
        (uid, (startTime, endTime, flow))
      })
    })
    val res: RDD[(String, (Long, Long, Long, Int, Double))] = tp.groupByKey().flatMapValues(it => {
      val ls: List[(Long, Long, Double)] = it.toList.sortBy(_._1)
      //mutable variables carrying state from row to row
      var flag = 0 //marks whether the gap to the previous session is at least 10 minutes, either 0 or 1
      var tem = 0L //holds the previous row's endTime
      var sum = 0 //running sum of flag, used as the group id
      ls.map(e => {
        val start = e._1
        val end = e._2
        if (tem != 0) {
          if ((start - tem) / 60000 >= 10) {
            flag = 1
          } else {
            flag = 0
          }
        }
        sum += flag
        tem = e._2
        (start, end, tem, sum, e._3)
      })
    })
    val res2: RDD[((String, Int), (Long, Long, Double))] = res.map(e => {
      ((e._1, e._2._4), (e._2._1, e._2._2, e._2._5))
    })
    val res3: RDD[((String, Int), (Long, Long, Double))] = res2.reduceByKey((a, b) => {
      val min: Long = if (a._1 < b._1) a._1 else b._1
      val max = if (a._2 > b._2) a._2 else b._2
      val sum: Double = a._3 + b._3
      (min, max, sum)
    })
    val res4: RDD[((String, String), (String, Double, Int))] = res3.mapPartitions(it => {
      val sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm")
      it.map(e => {
        ((e._1._1, sdf.format(e._2._1)), (sdf.format(e._2._2), e._2._3, e._1._2))
      })
    })
    val res5: RDD[((String, String), (String, Double, Int))] = res4.sortByKey()
    println(res5.collect().toBuffer)
    //Thread.sleep(10000000000L) //optional: keep the app alive (e.g. to inspect the Spark web UI) before stopping
    sc.stop()

  }

}

 
