Parallel job execution in Hive

The hive.exec.parallel parameter controls whether the different jobs produced by a single SQL query are allowed to run concurrently; it defaults to false.
Below is the test procedure for this parameter.
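For reference, turning it on for a session looks roughly like the sketch below; hive.exec.parallel.thread.number caps how many jobs may run at once, and the value 8 shown here is only an illustrative choice, not something taken from this test.

-- allow independent jobs of one query to run concurrently
set hive.exec.parallel=true;
-- upper bound on concurrently running jobs (8 is an example value, not from the original test)
set hive.exec.parallel.thread.number=8;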

Test SQL:
select r1.a
from (select t.a from sunwg_10 t join sunwg_10000000 s on t.a=s.b) r1 join (select s.b from sunwg_100000 t join sunwg_10 s on t.a=s.b) r2 on (r1.a=r2.b);
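The original post does not show the table definitions. A minimal sketch of schemas that would make this query valid, assuming STRING columns and taking only the column names a and b from the query itself (the row counts suggested by the table names are not reproduced), is:

-- hypothetical table definitions; only columns a and b are implied by the test query
CREATE TABLE sunwg_10       (a STRING, b STRING);
CREATE TABLE sunwg_100000   (a STRING, b STRING);
CREATE TABLE sunwg_10000000 (a STRING, b STRING);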

1. set hive.exec.parallel=false;
When the parameter is false, the three jobs execute sequentially. (The query above compiles into three MapReduce jobs: one for each subquery's join and one for the final join of r1 and r2; with parallel execution disabled they are launched one after another.)

hive> set hive.exec.parallel=false;
hive> select r1.a
> from (select t.a from sunwg_10 t join sunwg_10000000 s on t.a=s.b) r1 join (select s.b from sunwg_100000 t join sunwg_10 s on t.a=s.b) r2 on (r1.a=r2.b);
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=&lt;number&gt;
In order to set a constant number of reducers:
  set mapred.reduce.tasks=&lt;number&gt;
Cannot run job locally: Input Size (= 397778060) is larger than hive.exec.mode.local.auto.inputbytes.max (= -1)
Starting Job = job_201208241319_2001905, Tracking URL = http://hdpjt:50030/jobdetails.jsp?jobid=job_201208241319_2001905
Kill Command = /dhwdata/hadoop/bin/…/bin/hadoop job -Dmapred.job.tracker=hdpjt:9001 -kill job_201208241319_2001905
Hadoop job information for Stage-1: number of mappers: 7; number of reducers: 1
2012-09-07 17:55:40,854 Stage-1 map = 0%, reduce = 0%
2012-09-07 17:55:55,663 Stage-1 map = 14%, reduce = 0%
2012-09-07 17:56:00,506 Stage-1 map = 56%, reduce = 0%
2012-09-07 17:56:10,254 Stage-1 map = 100%, reduce = 0%
2012-09-07 17:56:19,871
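
For the comparison run with parallelism enabled, only the session setting changes; a sketch of the commands (the timings of such a run are not reproduced here):

-- re-run the same query with independent jobs allowed to overlap
set hive.exec.parallel=true;
select r1.a
from (select t.a from sunwg_10 t join sunwg_10000000 s on t.a=s.b) r1 join (select s.b from sunwg_100000 t join sunwg_10 s on t.a=s.b) r2 on (r1.a=r2.b);

With hive.exec.parallel=true, the two subquery jobs have no dependency on each other and can be launched concurrently, while the final join job still has to wait for both of them to finish.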
