Testing Hive's count(distinct id) -- use with caution

1. Test statements

Statement 1:

select count(distinct order_id), count(1) from d_common_wlt_info

 

Statement 2:

select count(order_id), count(one_num)
from
(
    select order_id, count(1) one_num
    from d_common_wlt_info
    group by order_id
) t
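Note that statement 2 is not an exact semantic match for statement 1: count(one_num) counts the groups produced by group by order_id (one per distinct order_id, including a NULL group if order_id contains NULLs), whereas count(1) in statement 1 counts rows. If the goal is to reproduce count(1) from the rewritten form, summing the per-group counts is the closer variant -- a sketch:

-- count(order_id) skips the NULL group, matching count(distinct order_id);
-- sum(one_num) adds up the per-group row counts, reproducing count(1).
select count(order_id), sum(one_num)
from
(
    select order_id, count(1) one_num
    from d_common_wlt_info
    group by order_id
) t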

2. Execution logs and table information

Table size information:

Partition Parameters:	 	 
	COLUMN_STATS_ACCURATE	true                
	numFiles            	9                   
	numRows             	28176219            
	rawDataSize         	8785300782          
	totalSize           	1820024671          
	transient_lastDdlTime	1551631895  
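These statistics come from Hive's table/partition metadata and can be displayed with DESCRIBE FORMATTED. A minimal sketch -- the partition spec below is a hypothetical placeholder, since the original post does not show which partition was inspected:

-- Prints location, file counts, and the Partition Parameters block above
-- (numRows, rawDataSize, totalSize, ...).
describe formatted d_common_wlt_info partition (dt='2019-03-03');  -- dt value is hypothetical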

(1) Statement 1 log

Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2019-03-04 13:10:24,719 Stage-1 map = 0%,  reduce = 0%
2019-03-04 13:10:39,289 Stage-1 map = 33%,  reduce = 0%, Cumulative CPU 23.87 sec
2019-03-04 13:10:58,984 Stage-1 map = 58%,  reduce = 0%, Cumulative CPU 76.04 sec
2019-03-04 13:11:39,220 Stage-1 map = 68%,  reduce = 0%, Cumulative CPU 191.24 sec
2019-03-04 13:11:50,532 Stage-1 map = 78%,  reduce = 0%, Cumulative CPU 212.97 sec
2019-03-04 13:11:56,726 Stage-1 map = 89%,  reduce = 0%, Cumulative CPU 228.88 sec
2019-03-04 13:12:06,036 Stage-1 map = 91%,  reduce = 0%, Cumulative CPU 247.36 sec
2019-03-04 13:12:09,118 Stage-1 map = 98%,  reduce = 0%, Cumulative CPU 250.66 sec
2019-03-04 13:12:12,205 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 253.8 sec
2019-03-04 13:12:34,851 Stage-1 map = 100%,  reduce = 68%, Cumulative CPU 274.25 sec
2019-03-04 13:12:37,942 Stage-1 map = 100%,  reduce = 70%, Cumulative CPU 277.76 sec
2019-03-04 13:12:41,023 Stage-1 map = 100%,  reduce = 73%, Cumulative CPU 280.93 sec
2019-03-04 13:12:44,103 Stage-1 map = 100%,  reduce = 75%, Cumulative CPU 284.14 sec
2019-03-04 13:12:47,185 Stage-1 map = 100%,  reduce = 77%, Cumulative CPU 287.42 sec
2019-03-04 13:12:50,267 Stage-1 map = 100%,  reduce = 80%, Cumulative CPU 290.65 sec
2019-03-04 13:12:53,353 Stage-1 map = 100%,  reduce = 82%, Cumulative CPU 293.78 sec
2019-03-04 13:12:56,432 Stage-1 map = 100%,  reduce = 84%, Cumulative CPU 296.95 sec
2019-03-04 13:12:59,547 Stage-1 map = 100%,  reduce = 86%, Cumulative CPU 300.13 sec
2019-03-04 13:13:02,639 Stage-1 map = 100%,  reduce = 88%, Cumulative CPU 303.33 sec
2019-03-04 13:13:05,728 Stage-1 map = 100%,  reduce = 91%, Cumulative CPU 306.51 sec
2019-03-04 13:13:08,837 Stage-1 map = 100%,  reduce = 93%, Cumulative CPU 309.67 sec
2019-03-04 13:13:11,917 Stage-1 map = 100%,  reduce = 95%, Cumulative CPU 312.88 sec
2019-03-04 13:13:14,996 Stage-1 map = 100%,  reduce = 97%, Cumulative CPU 315.99 sec
2019-03-04 13:13:17,052 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 318.29 sec
MapReduce Total cumulative CPU time: 5 minutes 18 seconds 290 msec
Ended Job = job_1546585330012_16354054
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 3  Reduce: 1   Cumulative CPU: 318.29 sec   HDFS Read: 805206007 HDFS Write: 68 SUCCESS
Total MapReduce CPU Time Spent: 5 minutes 18 seconds 290 msec
OK
21205531	27866943
Time taken: 218.373 seconds, Fetched: 1 row(s)

(2) Statement 2 log

Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2019-03-04 13:12:08,331 Stage-1 map = 0%,  reduce = 0%
2019-03-04 13:12:19,715 Stage-1 map = 33%,  reduce = 0%, Cumulative CPU 6.48 sec
2019-03-04 13:12:42,875 Stage-1 map = 46%,  reduce = 0%, Cumulative CPU 69.34 sec
2019-03-04 13:12:43,906 Stage-1 map = 58%,  reduce = 0%, Cumulative CPU 72.54 sec
2019-03-04 13:13:16,847 Stage-1 map = 68%,  reduce = 0%, Cumulative CPU 162.12 sec
2019-03-04 13:13:19,930 Stage-1 map = 79%,  reduce = 0%, Cumulative CPU 168.67 sec
2019-03-04 13:13:23,060 Stage-1 map = 89%,  reduce = 0%, Cumulative CPU 174.94 sec
2019-03-04 13:13:32,311 Stage-1 map = 97%,  reduce = 0%, Cumulative CPU 192.25 sec
2019-03-04 13:13:35,397 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 195.41 sec
2019-03-04 13:13:54,977 Stage-1 map = 100%,  reduce = 22%, Cumulative CPU 204.97 sec
2019-03-04 13:13:58,058 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 210.04 sec
2019-03-04 13:14:01,135 Stage-1 map = 100%,  reduce = 69%, Cumulative CPU 213.98 sec
2019-03-04 13:14:04,236 Stage-1 map = 100%,  reduce = 71%, Cumulative CPU 217.42 sec
2019-03-04 13:14:07,345 Stage-1 map = 100%,  reduce = 74%, Cumulative CPU 220.64 sec
2019-03-04 13:14:10,432 Stage-1 map = 100%,  reduce = 77%, Cumulative CPU 223.88 sec
2019-03-04 13:14:13,518 Stage-1 map = 100%,  reduce = 79%, Cumulative CPU 227.09 sec
2019-03-04 13:14:16,602 Stage-1 map = 100%,  reduce = 81%, Cumulative CPU 230.22 sec
2019-03-04 13:14:19,678 Stage-1 map = 100%,  reduce = 83%, Cumulative CPU 233.48 sec
2019-03-04 13:14:22,759 Stage-1 map = 100%,  reduce = 85%, Cumulative CPU 236.69 sec
2019-03-04 13:14:25,841 Stage-1 map = 100%,  reduce = 87%, Cumulative CPU 239.9 sec
2019-03-04 13:14:28,920 Stage-1 map = 100%,  reduce = 89%, Cumulative CPU 242.94 sec
2019-03-04 13:14:32,011 Stage-1 map = 100%,  reduce = 91%, Cumulative CPU 246.05 sec
2019-03-04 13:14:35,103 Stage-1 map = 100%,  reduce = 93%, Cumulative CPU 249.11 sec
2019-03-04 13:14:38,181 Stage-1 map = 100%,  reduce = 95%, Cumulative CPU 252.12 sec
2019-03-04 13:14:41,261 Stage-1 map = 100%,  reduce = 97%, Cumulative CPU 255.26 sec
2019-03-04 13:14:43,312 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 256.88 sec
MapReduce Total cumulative CPU time: 4 minutes 16 seconds 880 msec
Ended Job = job_1546585330012_16354401
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1546585330012_16354794, Tracking URL = http://tjtx-81-187.58os.org:9088/proxy/application_1546585330012_16354794/
Kill Command = /usr/lib/software/hadoop/bin/hadoop job  -kill job_1546585330012_16354794
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2019-03-04 13:15:10,988 Stage-2 map = 0%,  reduce = 0%
2019-03-04 13:15:21,338 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 2.15 sec
2019-03-04 13:15:30,634 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 4.56 sec
MapReduce Total cumulative CPU time: 4 seconds 560 msec
Ended Job = job_1546585330012_16354794
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 3  Reduce: 1   Cumulative CPU: 256.88 sec   HDFS Read: 805205212 HDFS Write: 123 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 4.56 sec   HDFS Read: 5330 HDFS Write: 68 SUCCESS
Total MapReduce CPU Time Spent: 4 minutes 21 seconds 440 msec
OK
21205531	21205532
Time taken: 225.271 seconds, Fetched: 1 row(s)
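The logs show the structural difference directly: statement 1 compiles to a single MapReduce stage whose lone reducer does all the deduplication, while statement 2 compiles to two stages, with the group by handled in Stage-1 and the final counts in Stage-2. The same can be confirmed without running the queries via EXPLAIN -- a sketch (the exact plan text varies by Hive version):

-- Compare the stage plans of the two forms without executing them.
explain select count(distinct order_id), count(1) from d_common_wlt_info;

explain select count(order_id), count(one_num)
from
(
    select order_id, count(1) one_num
    from d_common_wlt_info
    group by order_id
) t;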

3. Summary

On the test cluster, at a data volume of roughly 20 million rows, the two statements show little difference in efficiency. (The second output column differs between the two runs because, as noted under statement 2, count(1) counts rows while count(one_num) counts groups.)

Why use count(distinct) with caution, then? Because Hive's distinct deduplicates with an in-memory HashSet, and in the count(distinct) form that work cannot be spread out: the plan uses a single reducer (the log shows "number of reducers: 1") that must hold every distinct key, so at larger scale it becomes a bottleneck or an out-of-memory risk. The group-by rewrite moves deduplication into a first stage whose reducer count can be scaled up.

Since the efficiency here is about the same anyway, it is better to avoid count(distinct) whenever you can.
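If the rewritten form does hit reducer pressure at larger scale, two commonly used knobs are shown below. This is a hedged sketch: both parameters are real Hive/MapReduce settings (the second is even suggested in the job log above), but the values are illustrative, not tuned for this table.

-- Add an extra randomized-distribution stage to smooth out skewed
-- group-by keys (applies to the group-by rewrite, not to count(distinct)).
set hive.groupby.skewindata=true;

-- Raise the reducer count so the first-stage deduplication is spread
-- across several reducers instead of landing on one.
set mapreduce.job.reduces=10;

select count(order_id), count(one_num)
from
(
    select order_id, count(1) one_num
    from d_common_wlt_info
    group by order_id
) t;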

--

Reposted from: https://www.cnblogs.com/dobbin2018/p/10470172.html
