Hive Parameter Configuration

1. The hive.exec.parallel parameter:

Purpose: controls whether the independent MapReduce jobs generated by a single SQL statement may run concurrently.

Default value: false.
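A minimal sketch of the session-level settings, assuming a Hive of this generation; hive.exec.parallel.thread.number is the companion parameter that caps how many jobs may run at once (default 8):

-- enable parallel execution of independent jobs for this session
set hive.exec.parallel=true;
-- optional: cap the number of jobs allowed to run concurrently (default 8)
set hive.exec.parallel.thread.number=8;
-- print a parameter's current value to verify it
set hive.exec.parallel;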

First, run the test query with the default hive.exec.parallel=false; the three jobs are launched strictly one after another:


hive> set hive.exec.parallel=false;                                                                                                                            
hive> select r1.num from (select t.num from tb1 t join tb2 s on t.num=s.num) r1 join (select s.num from tb2 t join tb1 s on t.num=s.num) r2 on (r1.num=r2.num);
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201307162036_0031, Tracking URL = http://hadoopmaster:50030/jobdetails.jsp?jobid=job_201307162036_0031
Kill Command = /home/hadoop/bigdata/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201307162036_0031
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2013-07-17 21:56:29,862 Stage-1 map = 0%,  reduce = 0%
2013-07-17 21:56:35,969 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:36,981 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:37,993 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:39,003 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:40,014 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:41,024 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:42,035 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:43,063 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:44,073 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.38 sec
2013-07-17 21:56:45,082 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 3.38 sec
2013-07-17 21:56:46,092 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 3.38 sec
2013-07-17 21:56:47,108 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
2013-07-17 21:56:48,115 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
2013-07-17 21:56:49,124 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
2013-07-17 21:56:50,134 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
2013-07-17 21:56:51,143 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
2013-07-17 21:56:52,166 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
2013-07-17 21:56:53,172 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.86 sec
MapReduce Total cumulative CPU time: 4 seconds 860 msec
Ended Job = job_201307162036_0031
Launching Job 2 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201307162036_0032, Tracking URL = http://hadoopmaster:50030/jobdetails.jsp?jobid=job_201307162036_0032
Kill Command = /home/hadoop/bigdata/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201307162036_0032
Hadoop job information for Stage-4: number of mappers: 2; number of reducers: 1
2013-07-17 21:57:00,872 Stage-4 map = 0%,  reduce = 0%
2013-07-17 21:57:06,902 Stage-4 map = 50%,  reduce = 0%, Cumulative CPU 0.6 sec
2013-07-17 21:57:07,911 Stage-4 map = 50%,  reduce = 0%, Cumulative CPU 0.6 sec
2013-07-17 21:57:08,921 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:09,930 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:10,941 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:11,950 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:12,964 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:13,973 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:14,982 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.12 sec
2013-07-17 21:57:15,996 Stage-4 map = 100%,  reduce = 33%, Cumulative CPU 1.12 sec
2013-07-17 21:57:17,005 Stage-4 map = 100%,  reduce = 33%, Cumulative CPU 1.12 sec
2013-07-17 21:57:18,017 Stage-4 map = 100%,  reduce = 33%, Cumulative CPU 1.12 sec
2013-07-17 21:57:19,024 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
2013-07-17 21:57:20,032 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
2013-07-17 21:57:21,042 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
2013-07-17 21:57:22,052 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
2013-07-17 21:57:23,063 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
2013-07-17 21:57:24,073 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
2013-07-17 21:57:25,083 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.5 sec
MapReduce Total cumulative CPU time: 2 seconds 500 msec
Ended Job = job_201307162036_0032
Launching Job 3 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201307162036_0033, Tracking URL = http://hadoopmaster:50030/jobdetails.jsp?jobid=job_201307162036_0033
Kill Command = /home/hadoop/bigdata/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201307162036_0033
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-07-17 21:57:32,543 Stage-2 map = 0%,  reduce = 0%
2013-07-17 21:57:38,566 Stage-2 map = 50%,  reduce = 0%, Cumulative CPU 0.56 sec
2013-07-17 21:57:39,574 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:40,583 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:41,593 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:42,602 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:43,610 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:44,620 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:45,626 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:46,631 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
2013-07-17 21:57:47,638 Stage-2 map = 100%,  reduce = 17%, Cumulative CPU 1.04 sec
2013-07-17 21:57:48,647 Stage-2 map = 100%,  reduce = 17%, Cumulative CPU 1.04 sec
2013-07-17 21:57:49,658 Stage-2 map = 100%,  reduce = 17%, Cumulative CPU 1.04 sec
2013-07-17 21:57:50,665 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
2013-07-17 21:57:51,673 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
2013-07-17 21:57:52,683 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
2013-07-17 21:57:53,693 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
2013-07-17 21:57:54,704 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
2013-07-17 21:57:55,711 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
2013-07-17 21:57:56,719 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.52 sec
MapReduce Total cumulative CPU time: 2 seconds 520 msec
Ended Job = job_201307162036_0033
MapReduce Jobs Launched: 
Job 0: Map: 2  Reduce: 1   Cumulative CPU: 4.86 sec   HDFS Read: 452 HDFS Write: 153 SUCCESS
Job 1: Map: 2  Reduce: 1   Cumulative CPU: 2.5 sec   HDFS Read: 452 HDFS Write: 153 SUCCESS
Job 2: Map: 2  Reduce: 1   Cumulative CPU: 2.52 sec   HDFS Read: 1222 HDFS Write: 12 SUCCESS
Total MapReduce CPU Time Spent: 9 seconds 880 msec
OK
111
222
333
Time taken: 97.98 seconds
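
Only the first two jobs can overlap, and the stage plan explains why: Stage-1 and Stage-4 (the two subquery joins) have no dependency on each other, while Stage-2 (the outer join) consumes both of their outputs and must wait for them. As a quick sketch, the dependency graph can be inspected with EXPLAIN; the STAGE DEPENDENCIES block at the top of its output lists the root stages and what each stage depends on:

hive> explain
    > select r1.num
    > from (select t.num from tb1 t join tb2 s on t.num=s.num) r1
    > join (select s.num from tb2 t join tb1 s on t.num=s.num) r2
    > on (r1.num=r2.num);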
Now set hive.exec.parallel=true; and run the same query. From the output:

Total MapReduce jobs = 3
Launching Job 1 out of 3
Launching Job 2 out of 3

you can see that Job 1 and Job 2 are launched together and run concurrently:

hive> set hive.exec.parallel=true;                                                                                                                        
hive> select r1.num from (select t.num from tb1 t join tb2 s on t.num=s.num) r1 join (select s.num from tb2 t join tb1 s on t.num=s.num) r2 on (r1.num=r2.num);
Total MapReduce jobs = 3
Launching Job 1 out of 3
Launching Job 2 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201307162036_0034, Tracking URL = http://hadoopmaster:50030/jobdetails.jsp?jobid=job_201307162036_0034
Kill Command = /home/hadoop/bigdata/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201307162036_0034
Starting Job = job_201307162036_0035, Tracking URL = http://hadoopmaster:50030/jobdetails.jsp?jobid=job_201307162036_0035
Kill Command = /home/hadoop/bigdata/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201307162036_0035
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2013-07-17 21:58:29,258 Stage-1 map = 0%,  reduce = 0%
Hadoop job information for Stage-4: number of mappers: 2; number of reducers: 1
2013-07-17 21:58:29,324 Stage-4 map = 0%,  reduce = 0%
2013-07-17 21:58:35,284 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:35,345 Stage-4 map = 50%,  reduce = 0%, Cumulative CPU 1.0 sec
2013-07-17 21:58:36,289 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:36,349 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:37,297 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:37,357 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:38,305 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:38,365 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:39,314 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:39,373 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:40,324 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:40,382 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:41,332 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:41,393 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:42,339 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:42,405 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:43,347 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.49 sec
2013-07-17 21:58:43,413 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.61 sec
2013-07-17 21:58:44,356 Stage-1 map = 100%,  reduce = 17%, Cumulative CPU 2.49 sec
2013-07-17 21:58:44,419 Stage-4 map = 100%,  reduce = 33%, Cumulative CPU 1.61 sec
2013-07-17 21:58:45,363 Stage-1 map = 100%,  reduce = 17%, Cumulative CPU 2.49 sec
2013-07-17 21:58:45,424 Stage-4 map = 100%,  reduce = 33%, Cumulative CPU 1.61 sec
2013-07-17 21:58:46,371 Stage-1 map = 100%,  reduce = 17%, Cumulative CPU 2.49 sec
2013-07-17 21:58:46,432 Stage-4 map = 100%,  reduce = 33%, Cumulative CPU 1.61 sec
2013-07-17 21:58:47,377 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
2013-07-17 21:58:47,438 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
2013-07-17 21:58:48,391 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
2013-07-17 21:58:48,442 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
2013-07-17 21:58:49,398 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
2013-07-17 21:58:49,449 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
2013-07-17 21:58:50,405 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
2013-07-17 21:58:50,460 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
2013-07-17 21:58:51,411 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
2013-07-17 21:58:51,476 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
2013-07-17 21:58:52,419 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
2013-07-17 21:58:52,483 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
2013-07-17 21:58:53,425 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.11 sec
MapReduce Total cumulative CPU time: 4 seconds 110 msec
Ended Job = job_201307162036_0034
2013-07-17 21:58:53,489 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 3.17 sec
MapReduce Total cumulative CPU time: 3 seconds 170 msec
Ended Job = job_201307162036_0035
Launching Job 3 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201307162036_0036, Tracking URL = http://hadoopmaster:50030/jobdetails.jsp?jobid=job_201307162036_0036
Kill Command = /home/hadoop/bigdata/hadoop-1.0.4/libexec/../bin/hadoop job  -kill job_201307162036_0036
Hadoop job information for Stage-2: number of mappers: 2; number of reducers: 1
2013-07-17 21:59:02,940 Stage-2 map = 0%,  reduce = 0%
2013-07-17 21:59:08,967 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:09,973 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:10,979 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:11,987 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:12,996 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:14,002 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:15,009 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:16,018 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:17,026 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
2013-07-17 21:59:18,034 Stage-2 map = 100%,  reduce = 33%, Cumulative CPU 1.1 sec
2013-07-17 21:59:19,042 Stage-2 map = 100%,  reduce = 33%, Cumulative CPU 1.1 sec
2013-07-17 21:59:20,050 Stage-2 map = 100%,  reduce = 33%, Cumulative CPU 1.1 sec
2013-07-17 21:59:21,056 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
2013-07-17 21:59:22,060 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
2013-07-17 21:59:23,065 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
2013-07-17 21:59:24,072 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
2013-07-17 21:59:25,078 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
2013-07-17 21:59:26,086 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
2013-07-17 21:59:27,092 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.58 sec
MapReduce Total cumulative CPU time: 2 seconds 580 msec
Ended Job = job_201307162036_0036
MapReduce Jobs Launched: 
Job 0: Map: 2  Reduce: 1   Cumulative CPU: 4.11 sec   HDFS Read: 452 HDFS Write: 153 SUCCESS
Job 1: Map: 2  Reduce: 1   Cumulative CPU: 3.17 sec   HDFS Read: 452 HDFS Write: 153 SUCCESS
Job 2: Map: 2  Reduce: 1   Cumulative CPU: 2.58 sec   HDFS Read: 1222 HDFS Write: 12 SUCCESS
Total MapReduce CPU Time Spent: 9 seconds 860 msec
OK
111
222
333
Time taken: 68.251 seconds
hive> 
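
With parallelism enabled, Stage-1 and Stage-4 run side by side (their progress lines interleave above) and the same query finishes in 68.251 seconds instead of 97.98 seconds, roughly a 30% reduction in wall-clock time. Total MapReduce CPU time is almost identical in both runs (9.86 vs. 9.88 seconds), so the speedup comes entirely from overlapping independent jobs; it assumes the cluster has spare map and reduce slots, and on a heavily loaded cluster the benefit will shrink. To apply the setting to every session instead of per session, hive.exec.parallel can also be set to true in hive-site.xml.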

