Five ways to deduplicate data in Hive

1. distinct

Problem:
Keep only one user row per app (i.e., deduplicate by userid and apptypeid).
Example:

spark-sql> with test1 as
         > (select 122 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid)
         > select 
         >   distinct userid,apptypeid
         > from test1;
122     100024
123     100024
Time taken: 4.781 seconds, Fetched 2 row(s)
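The same deduplication can be sketched procedurally in Python (hypothetical in-memory rows standing in for test1): a set of tuples is the direct equivalent of `select distinct` over the full row.

```python
# Hypothetical in-memory stand-in for the test1 rows above.
rows = [(122, 100024), (123, 100024), (123, 100024)]

# DISTINCT keeps one copy of each identical (userid, apptypeid) row;
# a set does exactly that for hashable tuples.
deduped = sorted(set(rows))
print(deduped)  # [(122, 100024), (123, 100024)]
```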
2. group by

Problem:
Keep only one user row per app (i.e., deduplicate by userid and apptypeid).
Example:

spark-sql> with test1 as
         > (select 122 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid)
         > select 
         >   userid,
         >   apptypeid
         > from 
         > (select 
         >   userid,
         >   apptypeid
         > from test1) t1
         > group by userid,apptypeid;
122     100024
123     100024
Time taken: 10.5 seconds, Fetched 2 row(s)
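A minimal Python sketch of what grouping on the full key does (hypothetical rows standing in for test1): bucket rows by (userid, apptypeid) and emit one row per bucket, which is equivalent to distinct when you group by every column.

```python
# Hypothetical in-memory stand-in for the test1 rows above.
rows = [(122, 100024), (123, 100024), (123, 100024)]

# GROUP BY userid, apptypeid: bucket rows by the full key.
groups = {}
for userid, apptypeid in rows:
    groups.setdefault((userid, apptypeid), []).append((userid, apptypeid))

# Emitting one row per bucket deduplicates, just like the query above.
deduped = sorted(groups)
print(deduped)  # [(122, 100024), (123, 100024)]
```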
3. row_number()

Problem:
For each app, take each user's most recent channel (qid), version (ver), and operating system (os) data.
Analysis:
distinct only removes exact duplicate rows, so it cannot solve this; group by alone is likewise just deduplication by grouping keys; order by merely sorts the whole result set. This is where row_number() comes in: partition the rows first, then sort within each partition and keep the top-ranked row.
Example:

spark-sql> with test1 as
         > (select 122 as userid,100024 as apptypeid,'appstore' as qid,'ios' as os,'1.0.2' as ver,1627440618 as dateline
         > union all
         > select 123 as userid,100024 as apptypeid,'huawei' as qid,'android' as os,'1.0.3' as ver,1627440620 as dateline
         > union all
         > select 123 as userid,100024 as apptypeid,'huawei' as qid,'android' as os,'1.0.4' as ver,1627440621 as dateline)
         > select 
         >   userid,
         >   apptypeid,
         >   qid,
         >   os,
         >   ver
         > from 
         > (select 
         >   userid,
         >   apptypeid,
         >   qid,
         >   os,
         >   ver,
         >   row_number() over(distribute by apptypeid,userid sort by dateline desc) as rank
         > from test1) t1
         > where t1.rank=1;
122     100024  appstore        ios     1.0.2
123     100024  huawei          android 1.0.4
Time taken: 5.286 seconds, Fetched 2 row(s)
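The partition-then-sort mechanics of row_number() can be sketched in plain Python (hypothetical rows mirroring test1): sort by the partition key plus descending dateline, then keep the first row of each partition, which is exactly the rank = 1 filter.

```python
from itertools import groupby

# Hypothetical rows mirroring test1: (userid, apptypeid, qid, os, ver, dateline)
rows = [
    (122, 100024, 'appstore', 'ios',     '1.0.2', 1627440618),
    (123, 100024, 'huawei',   'android', '1.0.3', 1627440620),
    (123, 100024, 'huawei',   'android', '1.0.4', 1627440621),
]

# distribute by apptypeid, userid -> partition on (apptypeid, userid)
# sort by dateline desc           -> order each partition by dateline desc
# where rank = 1                  -> keep only the first row per partition
part_key = lambda r: (r[1], r[0])
rows.sort(key=lambda r: (r[1], r[0], -r[5]))  # partition key, then newest first
latest = [next(part) for _, part in groupby(rows, key=part_key)]
print(latest)  # one most-recent row per (apptypeid, userid)
```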
4. left join

Problem:
Compute each day's new users. We have a daily user table test1 and a historical new-user table test2 (a "new user" appears at most once per app).
Analysis:
1. Deduplicate the daily user table test1 with group by to get the day's distinct users.
2. Left join those users against the historical new-user table; rows with no match (t2.userid is null) are that day's new users.
Example:

spark-sql> with test1 as
         > (select 122 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid),
         > 
         > test2 as
         > (select 122 as userid,100024 as apptypeid
         > union all
         > select 124 as userid,100024 as apptypeid
         > union all
         > select 125 as userid,100024 as apptypeid)
         > select 
         >   t1.userid,
         >   t1.apptypeid
         > from 
         > (select 
         >   userid,
         >   apptypeid
         > from test1
         > group by userid,apptypeid) t1
         > 
         > left join
         > (select 
         >   userid,
         >   apptypeid
         > from test2) t2
         > on t1.apptypeid=t2.apptypeid and t1.userid=t2.userid
         > where t2.userid is null;
123     100024
Time taken: 19.816 seconds, Fetched 1 row(s)
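The left join plus `t2.userid is null` filter is an anti-join; with hypothetical in-memory stand-ins for the two tables, it reduces to a set difference:

```python
# Hypothetical stand-ins for the two tables above.
daily = [(122, 100024), (123, 100024), (123, 100024)]    # test1
history = {(122, 100024), (124, 100024), (125, 100024)}  # test2

# Step 1: dedupe the daily rows (the inner GROUP BY).
daily_users = set(daily)

# Step 2: LEFT JOIN ... WHERE t2.userid IS NULL keeps only daily users
# with no match in the history table, i.e. an anti-join / set difference.
new_users = sorted(daily_users - history)
print(new_users)  # [(123, 100024)]
```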
5. Tag trick (bitwise-style flags): union all + group by

Problem:
Compute each day's new users. We have a daily user table test1 and a historical new-user table test2 (a "new user" appears at most once per app).
Analysis:
1. Deduplicate the daily user table test1 with group by to get the day's distinct users.
2. Tag each daily user row with 10 and each historical new-user row with 1 (flag-style values that cannot be confused once summed).
3. union all the two sets and sum the tags per (userid, apptypeid); rows whose tag sum is exactly 10 appear only in today's data, i.e., they are that day's new users.
Example:

spark-sql> with test1 as
         > (select 122 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid
         > union all
         > select 123 as userid,100024 as apptypeid),
         > 
         > test2 as
         > (select 122 as userid,100024 as apptypeid
         > union all
         > select 124 as userid,100024 as apptypeid
         > union all
         > select 125 as userid,100024 as apptypeid)
         > 
         > select 
         >   userid,
         >   apptypeid
         > from 
         > (select 
         >   sum(tag) as tag,
         >   userid,
         >   apptypeid
         > from 
         > (select 
         >   10 as tag,
         >   t1.userid,
         >   t1.apptypeid
         > from 
         > (select 
         >   userid,
         >   apptypeid
         > from test1
         > group by userid,apptypeid) t1
         > 
         > union all
         > select 
         >   1 as tag,
         >   userid,
         >   apptypeid
         > from test2) t2
         > group by userid,apptypeid) t3
         > where t3.tag=10;
123     100024
Time taken: 10.428 seconds, Fetched 1 row(s)
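The three steps above can be sketched in Python (hypothetical rows standing in for the two tables): tag, union, sum per key, then filter on the tag sum.

```python
from collections import defaultdict

# Hypothetical stand-ins for the two tables above.
daily = [(122, 100024), (123, 100024), (123, 100024)]    # test1
history = [(122, 100024), (124, 100024), (125, 100024)]  # test2

# Tag deduped daily rows with 10 and history rows with 1,
# then "UNION ALL" and sum the tags per (userid, apptypeid).
tag_sum = defaultdict(int)
for key in set(daily):   # GROUP BY dedup of the daily table
    tag_sum[key] += 10
for key in history:      # history already has one row per (user, app)
    tag_sum[key] += 1

# tag == 10 -> only in today's data -> new user
# tag == 11 -> in both              -> existing user
# tag == 1  -> only in history      -> not active today
new_users = sorted(k for k, tag in tag_sum.items() if tag == 10)
print(new_users)  # [(123, 100024)]
```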

Summary:
1. For simple deduplication, prefer group by over distinct: with distinct, all data is funneled into a single reducer, which wastes resources, runs slowly, and risks out-of-memory errors.
2. For computing daily new users on large datasets, the tag-based union all + group by approach is recommended, since it replaces the join with a single aggregation.
