Hive: Cascading Sum (accumulate)

Goal: starting from raw access records (username, month, salary), compute each user's monthly salary total together with a running total accumulated across months (a cascading sum), using only GROUP BY and a self-join. The Hive CLI session below builds the result step by step.

hive> create table t_access_times(username string,month string,salary int)
    > row format delimited fields terminated by ',';
OK
Time taken: 0.812 seconds
hive> select  * from  t_access_times ;
OK
Time taken: 0.468 seconds
hive> load data local inpath "/home/wangshumin/hive/t_access_times.data" into table t_access_times;
Loading data to table default.t_access_times
Table default.t_access_times stats: [numFiles=1, totalSize=123]
OK
Time taken: 0.706 seconds
hive> select  * from  t_access_times ;
OK
A 2015-01 5
A 2015-01 15
B 2015-01 5
A 2015-01 8
B 2015-01 25
A 2015-01 5
A 2015-02 4
A 2015-02 6
B 2015-02 10
B 2015-02 5
Time taken: 0.073 seconds, Fetched: 10 row(s)
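
For reference, the comma-delimited file /home/wangshumin/hive/t_access_times.data loaded above looks like this (reconstructed from the SELECT output):

    A,2015-01,5
    A,2015-01,15
    B,2015-01,5
    A,2015-01,8
    B,2015-01,25
    A,2015-01,5
    A,2015-02,4
    A,2015-02,6
    B,2015-02,10
    B,2015-02,5
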
hive> desc t_access_times;
OK
username                string
month                   string
salary                  int
Time taken: 0.083 seconds, Fetched: 3 row(s)
hive> select username , month  from t_access_times groupby username;
FAILED: ParseException line 1:53 extraneous input 'username' expecting EOF near '<EOF>'
hive> select username , month ,sum(salary) as  salary from t_access_times groupby username;
FAILED: ParseException line 1:76 extraneous input 'username' expecting EOF near '<EOF>'
hive> select username , month ,sum(salary) as  salary from t_access_times group by username;
FAILED: SemanticException Line 0:-1 Expression not in GROUP BY key 'month'

The first two attempts fail because groupby is not a keyword: Hive parses it as a table alias, so group by must be written as two words. The third fails because month appears in the SELECT list but not in the GROUP BY clause; every non-aggregated column must be part of the grouping key.
hive> select username , month ,sum(salary) as  salary from t_access_times group by username,month;
Query ID = wangshumin_20180320235349_b8ac6339-2637-4efd-be6f-3cab2fd60675
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0020, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0020/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0020
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-03-20 23:53:59,728 Stage-1 map = 0%,  reduce = 0%
2018-03-20 23:54:19,852 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.39 sec
2018-03-20 23:54:33,628 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.55 sec
MapReduce Total cumulative CPU time: 3 seconds 550 msec
Ended Job = job_1521538874183_0020
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.55 sec   HDFS Read: 7708 HDFS Write: 52 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 550 msec
OK
A 2015-01 33
A 2015-02 10
B 2015-01 30
B 2015-02 15
Time taken: 45.435 seconds, Fetched: 4 row(s)
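
This is step 1 of the cascade: collapse the raw rows into one row per (username, month). As a quick check, A's four 2015-01 rows sum to 5 + 15 + 8 + 5 = 33, matching the first output row.
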
hive> select username ,sum(salary) as  salary from t_access_times group by username;
Query ID = wangshumin_20180320235451_bd2c3acd-7dc7-4741-bd3d-e544dff36070
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0021, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0021/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0021
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-03-20 23:54:58,802 Stage-1 map = 0%,  reduce = 0%
2018-03-20 23:55:08,144 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.61 sec
2018-03-20 23:55:14,531 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.78 sec
MapReduce Total cumulative CPU time: 4 seconds 780 msec
Ended Job = job_1521538874183_0021
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.78 sec   HDFS Read: 7531 HDFS Write: 10 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 780 msec
OK
A 43
B 45
Time taken: 24.693 seconds, Fetched: 2 row(s)
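
Grouping by username alone gives each user's grand total: A = 33 + 10 = 43 and B = 30 + 15 = 45. These are exactly the values the cascading sum must reach in each user's last month, so they serve as a sanity check for the final result.
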
hive> select A.*,B.* FROM
    > (select username,month,sum(salary) as salary from t_access_times group by username,month) A 
    > inner join 
    > (select username,month,sum(salary) as salary from t_access_times group by username,month) B
    > on
    > A.username=B.username;
Query ID = wangshumin_20180320235702_41a56641-8914-44ff-91d3-e078851e5396
Total jobs = 5
Launching Job 1 out of 5
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0023, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0023/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0023
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-03-20 23:57:11,093 Stage-1 map = 0%,  reduce = 0%
2018-03-20 23:57:26,772 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 8.57 sec
2018-03-20 23:57:33,091 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 9.8 sec
MapReduce Total cumulative CPU time: 9 seconds 800 msec
Ended Job = job_1521538874183_0023
Launching Job 2 out of 5
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0024, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0024/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0024
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1
2018-03-20 23:57:42,263 Stage-3 map = 0%,  reduce = 0%
2018-03-20 23:57:51,630 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 2.23 sec
2018-03-20 23:57:57,909 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 3.29 sec
MapReduce Total cumulative CPU time: 3 seconds 290 msec
Ended Job = job_1521538874183_0024
Stage-7 is selected by condition resolver.
Stage-8 is filtered out by condition resolver.
Stage-2 is filtered out by condition resolver.
18/03/20 23:58:05 WARN conf.Configuration: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-57-02_739_365299963402973038-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
18/03/20 23:58:05 WARN conf.Configuration: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-57-02_739_365299963402973038-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
Execution log at: /tmp/wangshumin/wangshumin_20180320235702_41a56641-8914-44ff-91d3-e078851e5396.log
2018-03-20 23:58:06 Starting to launch local task to process map join; maximum memory = 518979584
2018-03-20 23:58:06 Dump the side-table for tag: 1 with group count: 2 into file: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-57-02_739_365299963402973038-1/-local-10005/HashTable-Stage-4/MapJoin-mapfile01--.hashtable
2018-03-20 23:58:07 Uploaded 1 File to: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-57-02_739_365299963402973038-1/-local-10005/HashTable-Stage-4/MapJoin-mapfile01--.hashtable (346 bytes)
2018-03-20 23:58:07 End of local task; Time Taken: 0.995 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 4 out of 5
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1521538874183_0025, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0025/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0025
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
2018-03-20 23:58:15,642 Stage-4 map = 0%,  reduce = 0%
2018-03-20 23:58:20,955 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.1 sec
MapReduce Total cumulative CPU time: 1 seconds 100 msec
Ended Job = job_1521538874183_0025
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 9.8 sec   HDFS Read: 7298 HDFS Write: 208 SUCCESS
Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 3.29 sec   HDFS Read: 7299 HDFS Write: 208 SUCCESS
Stage-Stage-4: Map: 1   Cumulative CPU: 1.1 sec   HDFS Read: 5279 HDFS Write: 208 SUCCESS
Total MapReduce CPU Time Spent: 14 seconds 190 msec
OK
A 2015-01 33 A 2015-01 33
A 2015-01 33 A 2015-02 10
A 2015-02 10 A 2015-01 33
A 2015-02 10 A 2015-02 10
B 2015-01 30 B 2015-01 30
B 2015-01 30 B 2015-02 15
B 2015-02 15 B 2015-01 30
B 2015-02 15 B 2015-02 15
Time taken: 80.33 seconds, Fetched: 8 row(s)
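
Step 2: join the monthly aggregate to itself on username. For a user with n months this produces n × n row pairs (here 2 × 2 = 4 per user), pairing every month of A with every month of B.
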
hive> select A.*,B.* FROM
    > (select username,month,sum(salary) as salary from t_access_times group by username,month) A 
    > inner join 
    > (select username,month,sum(salary) as salary from t_access_times group by username,month) B
    > on
    > A.username=B.username
    > where B.month <= A.month;
Query ID = wangshumin_20180320235952_a81bcf0b-5d0d-41c1-8c81-b5028343387f
Total jobs = 5
Launching Job 1 out of 5
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0026, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0026/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0026
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-03-20 23:59:58,951 Stage-1 map = 0%,  reduce = 0%
2018-03-21 00:00:06,264 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.12 sec
2018-03-21 00:00:12,521 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.31 sec
MapReduce Total cumulative CPU time: 3 seconds 310 msec
Ended Job = job_1521538874183_0026
Launching Job 2 out of 5
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0027, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0027/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0027
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1
2018-03-21 00:00:20,962 Stage-3 map = 0%,  reduce = 0%
2018-03-21 00:00:27,339 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.32 sec
2018-03-21 00:00:33,613 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 2.36 sec
MapReduce Total cumulative CPU time: 2 seconds 360 msec
Ended Job = job_1521538874183_0027
Stage-7 is selected by condition resolver.
Stage-8 is filtered out by condition resolver.
Stage-2 is filtered out by condition resolver.
18/03/21 00:00:38 WARN conf.Configuration: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-59-52_956_412420808084745003-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
18/03/21 00:00:38 WARN conf.Configuration: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-59-52_956_412420808084745003-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
Execution log at: /tmp/wangshumin/wangshumin_20180320235952_a81bcf0b-5d0d-41c1-8c81-b5028343387f.log
2018-03-21 00:00:39 Starting to launch local task to process map join; maximum memory = 518979584
2018-03-21 00:00:40 Dump the side-table for tag: 1 with group count: 2 into file: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-59-52_956_412420808084745003-1/-local-10005/HashTable-Stage-4/MapJoin-mapfile21--.hashtable
2018-03-21 00:00:40 Uploaded 1 File to: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-20_23-59-52_956_412420808084745003-1/-local-10005/HashTable-Stage-4/MapJoin-mapfile21--.hashtable (346 bytes)
2018-03-21 00:00:40 End of local task; Time Taken: 0.982 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 4 out of 5
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1521538874183_0028, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0028/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0028
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
2018-03-21 00:00:48,950 Stage-4 map = 0%,  reduce = 0%
2018-03-21 00:00:55,308 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.4 sec
MapReduce Total cumulative CPU time: 1 seconds 400 msec
Ended Job = job_1521538874183_0028
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.31 sec   HDFS Read: 7298 HDFS Write: 208 SUCCESS
Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 2.36 sec   HDFS Read: 7299 HDFS Write: 208 SUCCESS
Stage-Stage-4: Map: 1   Cumulative CPU: 1.4 sec   HDFS Read: 5649 HDFS Write: 156 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 70 msec
OK
A 2015-01 33 A 2015-01 33
A 2015-02 10 A 2015-01 33
A 2015-02 10 A 2015-02 10
B 2015-01 30 B 2015-01 30
B 2015-02 15 B 2015-01 30
B 2015-02 15 B 2015-02 15
Time taken: 64.556 seconds, Fetched: 6 row(s)
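
Step 3: the where B.month <= A.month filter keeps, for each (username, A.month), only the B rows up to and including that month. A's 2015-01 now pairs only with itself, while 2015-02 pairs with both months; summing B.salary within each group will therefore give the running total.
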
hive> select A.username,A.month,max(A.salary) as salary,sum(B.salary) as accumulate
    > from 
    > (select username,month,sum(salary) as salary from t_access_times group by username,month) A 
    > inner join 
    > (select username,month,sum(salary) as salary from t_access_times group by username,month) B
    > on
    > A.username=B.username
    > where B.month <= A.month
    > group by A.username,A.month
    > order by A.username,A.month;
Query ID = wangshumin_20180321000147_2504028d-aeee-4f3d-9e45-377e533d73ec
Total jobs = 7
Launching Job 1 out of 7
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0029, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0029/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0029
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-03-21 00:01:54,503 Stage-1 map = 0%,  reduce = 0%
2018-03-21 00:02:00,730 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.35 sec
2018-03-21 00:02:07,019 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.41 sec
MapReduce Total cumulative CPU time: 2 seconds 410 msec
Ended Job = job_1521538874183_0029
Launching Job 2 out of 7
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0030, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0030/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0030
Hadoop job information for Stage-5: number of mappers: 1; number of reducers: 1
2018-03-21 00:02:14,377 Stage-5 map = 0%,  reduce = 0%
2018-03-21 00:02:19,701 Stage-5 map = 100%,  reduce = 0%, Cumulative CPU 1.43 sec
2018-03-21 00:02:27,003 Stage-5 map = 100%,  reduce = 100%, Cumulative CPU 2.56 sec
MapReduce Total cumulative CPU time: 2 seconds 560 msec
Ended Job = job_1521538874183_0030
Stage-9 is selected by condition resolver.
Stage-10 is filtered out by condition resolver.
Stage-2 is filtered out by condition resolver.
18/03/21 00:02:30 WARN conf.Configuration: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-21_00-01-47_625_1658391933223343594-1/-local-10016/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
18/03/21 00:02:30 WARN conf.Configuration: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-21_00-01-47_625_1658391933223343594-1/-local-10016/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
Execution log at: /tmp/wangshumin/wangshumin_20180321000147_2504028d-aeee-4f3d-9e45-377e533d73ec.log
2018-03-21 00:02:31 Starting to launch local task to process map join; maximum memory = 518979584
2018-03-21 00:02:32 Dump the side-table for tag: 1 with group count: 2 into file: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-21_00-01-47_625_1658391933223343594-1/-local-10007/HashTable-Stage-6/MapJoin-mapfile41--.hashtable
2018-03-21 00:02:32 Uploaded 1 File to: file:/tmp/wangshumin/76745d24-2239-4efe-af2e-6133240d72f3/hive_2018-03-21_00-01-47_625_1658391933223343594-1/-local-10007/HashTable-Stage-6/MapJoin-mapfile41--.hashtable (346 bytes)
2018-03-21 00:02:32 End of local task; Time Taken: 0.974 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 4 out of 7
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1521538874183_0031, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0031/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0031
Hadoop job information for Stage-6: number of mappers: 1; number of reducers: 0
2018-03-21 00:02:39,605 Stage-6 map = 0%,  reduce = 0%
2018-03-21 00:02:45,889 Stage-6 map = 100%,  reduce = 0%, Cumulative CPU 1.32 sec
MapReduce Total cumulative CPU time: 1 seconds 320 msec
Ended Job = job_1521538874183_0031
Launching Job 5 out of 7
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0032, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0032/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0032
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1
2018-03-21 00:02:53,381 Stage-3 map = 0%,  reduce = 0%
2018-03-21 00:02:59,704 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 0.96 sec
2018-03-21 00:03:05,955 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 1.97 sec
MapReduce Total cumulative CPU time: 1 seconds 970 msec
Ended Job = job_1521538874183_0032
Launching Job 6 out of 7
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1521538874183_0033, Tracking URL = http://centoshostnameKL1:8088/proxy/application_1521538874183_0033/
Kill Command = /home/wangshumin/app/hadoop-2.4.1/bin/hadoop job  -kill job_1521538874183_0033
Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 1
2018-03-21 00:03:13,345 Stage-4 map = 0%,  reduce = 0%
2018-03-21 00:03:21,815 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 0.87 sec
2018-03-21 00:03:31,480 Stage-4 map = 100%,  reduce = 100%, Cumulative CPU 2.36 sec
MapReduce Total cumulative CPU time: 2 seconds 360 msec
Ended Job = job_1521538874183_0033
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.41 sec   HDFS Read: 7299 HDFS Write: 208 SUCCESS
Stage-Stage-5: Map: 1  Reduce: 1   Cumulative CPU: 2.56 sec   HDFS Read: 7300 HDFS Write: 208 SUCCESS
Stage-Stage-6: Map: 1   Cumulative CPU: 1.32 sec   HDFS Read: 5930 HDFS Write: 212 SUCCESS
Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 1.97 sec   HDFS Read: 5441 HDFS Write: 212 SUCCESS
Stage-Stage-4: Map: 1  Reduce: 1   Cumulative CPU: 2.36 sec   HDFS Read: 5627 HDFS Write: 64 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 620 msec
OK
A 2015-01 33 33
A 2015-02 10 43
B 2015-01 30 30
B 2015-02 15 45
Time taken: 104.982 seconds, Fetched: 4 row(s)
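
Step 4: group the filtered pairs by (A.username, A.month). Within each group every copy of A.salary is identical, so max(A.salary) recovers that month's own total, while sum(B.salary) adds up all months through the current one: for A in 2015-02 that is 33 + 10 = 43.

On Hive 0.11 and later the same cascading sum can be written in a single query with a window function, avoiding the self-join entirely. A minimal sketch, assuming the same t_access_times table:

    -- cumulative sum per user via a window function (Hive 0.11+)
    select t.username, t.month, t.salary,
           sum(t.salary) over (partition by t.username
                               order by t.month
                               rows between unbounded preceding and current row) as accumulate
    from (select username, month, sum(salary) as salary
          from t_access_times
          group by username, month) t;

This returns the same four rows as the self-join version, typically with far fewer MapReduce jobs than the seven launched above.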