Hive: monthly access totals and cumulative (running) totals

Compute each user's total per month and the running cumulative total, e.g. the cumulative value for February = January + February, and the cumulative value for March = January + February + March. The approach below works in three steps: (1) sum the records per user per month, (2) self-join those monthly totals on username keeping only pairs where a.month <= b.month, and (3) group by the b side and sum the a-side values.

A,2015-01,1
A,2015-01,11
A,2015-01,12
A,2015-01,13
B,2015-01,10
B,2015-01,20
B,2015-01,30
B,2015-01,40
A,2015-01,1
A,2015-01,11
A,2015-02,12
A,2015-01,13
B,2015-01,10
B,2015-01,20
B,2015-02,30
B,2015-01,40
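
The session below assumes a table t_access_time with columns username, month and salary already exists and already contains the first eight rows. A minimal setup sketch, assuming comma-delimited text files (the column types and the path of the first file are assumptions; only /root/apps/hive1.txt appears later in the transcript):

-- sketch: create the source table as a plain comma-delimited text table
create table t_access_time (
    username string,
    month    string,
    salary   int
)
row format delimited fields terminated by ',';

-- sketch: load the first batch of rows (illustrative path)
load data local inpath '/root/apps/hive.txt' into table t_access_time;
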
[root@hdp-nn-01 apps]# beeline
Beeline version 1.2.2 by Apache Hive
beeline>  !connect jdbc:hive2://hdp-nn-01:10000
Connecting to jdbc:hive2://hdp-nn-01:10000
Enter username for jdbc:hive2://hdp-nn-01:10000: root
Enter password for jdbc:hive2://hdp-nn-01:10000:
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hdp-nn-01:10000> select * from t_access_time;
+-------------------------+----------------------+-----------------------+--+
| t_access_time.username  | t_access_time.month  | t_access_time.salary  |
+-------------------------+----------------------+-----------------------+--+
| A                       | 2015-01              | 1                     |
| A                       | 2015-01              | 11                    |
| A                       | 2015-01              | 12                    |
| A                       | 2015-01              | 13                    |
| B                       | 2015-01              | 10                    |
| B                       | 2015-01              | 20                    |
| B                       | 2015-01              | 30                    |
| B                       | 2015-01              | 40                    |
+-------------------------+----------------------+-----------------------+--+
8 rows selected (1.512 seconds)
0: jdbc:hive2://hdp-nn-01:10000> [root@hdp-nn-01 apps]#
[root@hdp-nn-01 apps]# beeline                      
Beeline version 1.2.2 by Apache Hive
beeline>  !connect jdbc:hive2://hdp-nn-01:10000
Connecting to jdbc:hive2://hdp-nn-01:10000
Enter username for jdbc:hive2://hdp-nn-01:10000: root
Enter password for jdbc:hive2://hdp-nn-01:10000:
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hdp-nn-01:10000> select username ,month ,sum(salary) salary from t_access_time group by username,month;
INFO  : Number of reduce tasks not specified. Estimated from input data size: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0001
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0001/
INFO  : Starting Job = job_1539182373092_0001, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0001/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0001
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2018-10-10 23:45:45,273 Stage-1 map = 0%,  reduce = 0%
INFO  : 2018-10-10 23:45:56,000 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.34 sec
INFO  : 2018-10-10 23:46:10,379 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 10.51 sec
INFO  : MapReduce Total cumulative CPU time: 10 seconds 510 msec
INFO  : Ended Job = job_1539182373092_0001
+-----------+----------+---------+--+
| username  |  month   | salary  |
+-----------+----------+---------+--+
| A         | 2015-01  | 37      |
| B         | 2015-01  | 100     |
+-----------+----------+---------+--+
2 rows selected (46.304 seconds)
0: jdbc:hive2://hdp-nn-01:10000> delete from t_accces_time;
Error: Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations. (state=42000,code=10294)
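
The DELETE fails because t_access_time is an ordinary, non-transactional table: row-level UPDATE/DELETE in Hive require an ACID table backed by a transaction manager that supports these operations (note that the table name is also mistyped as t_accces_time). For a plain managed table, the usual way to empty it would be TRUNCATE, a sketch not executed in this session:

-- sketch: clear a non-ACID managed table instead of DELETE
truncate table t_access_time;
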
0: jdbc:hive2://hdp-nn-01:10000> select * from t_access_time;
+-------------------------+----------------------+-----------------------+--+
| t_access_time.username  | t_access_time.month  | t_access_time.salary  |
+-------------------------+----------------------+-----------------------+--+
| A                       | 2015-01              | 1                     |
| A                       | 2015-01              | 11                    |
| A                       | 2015-01              | 12                    |
| A                       | 2015-01              | 13                    |
| B                       | 2015-01              | 10                    |
| B                       | 2015-01              | 20                    |
| B                       | 2015-01              | 30                    |
| B                       | 2015-01              | 40                    |
+-------------------------+----------------------+-----------------------+--+
8 rows selected (0.187 seconds)
0: jdbc:hive2://hdp-nn-01:10000> load data local inpath '/root/apps/hive1.txt'  into table t_access_time;
INFO  : Loading data to table default.t_access_time from file:/root/apps/hive1.txt
INFO  : Table default.t_access_time stats: [numFiles=2, totalSize=206]
No rows affected (0.741 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select * from t_access_time;
+-------------------------+----------------------+-----------------------+--+
| t_access_time.username  | t_access_time.month  | t_access_time.salary  |
+-------------------------+----------------------+-----------------------+--+
| A                       | 2015-01              | 1                     |
| A                       | 2015-01              | 11                    |
| A                       | 2015-01              | 12                    |
| A                       | 2015-01              | 13                    |
| B                       | 2015-01              | 10                    |
| B                       | 2015-01              | 20                    |
| B                       | 2015-01              | 30                    |
| B                       | 2015-01              | 40                    |
| A                       | 2015-01              | 1                     |
| A                       | 2015-01              | 11                    |
| A                       | 2015-02              | 12                    |
| A                       | 2015-01              | 13                    |
| B                       | 2015-01              | 10                    |
| B                       | 2015-01              | 20                    |
| B                       | 2015-02              | 30                    |
| B                       | 2015-01              | 40                    |
+-------------------------+----------------------+-----------------------+--+
16 rows selected (0.173 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select username ,month ,sum(salary) salary from t_access_time group by username,month;
INFO  : Number of reduce tasks not specified. Estimated from input data size: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0002
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0002/
INFO  : Starting Job = job_1539182373092_0002, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0002/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0002
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2018-10-10 23:48:31,221 Stage-1 map = 0%,  reduce = 0%
INFO  : 2018-10-10 23:48:36,391 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
INFO  : 2018-10-10 23:48:41,516 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.15 sec
INFO  : MapReduce Total cumulative CPU time: 5 seconds 150 msec
INFO  : Ended Job = job_1539182373092_0002
+-----------+----------+---------+--+
| username  |  month   | salary  |
+-----------+----------+---------+--+
| A         | 2015-01  | 62      |
| A         | 2015-02  | 12      |
| B         | 2015-01  | 170     |
| B         | 2015-02  | 30      |
+-----------+----------+---------+--+
4 rows selected (16.206 seconds)
[root@hdp-nn-01 apps]# beeline
Beeline version 1.2.2 by Apache Hive
beeline>  !connect jdbc:hive2://hdp-nn-01:10000
Connecting to jdbc:hive2://hdp-nn-01:10000
Enter username for jdbc:hive2://hdp-nn-01:10000: root
Enter password for jdbc:hive2://hdp-nn-01:10000:
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hdp-nn-01:10000> set hive.exec.mode.local.auto=true;
No rows affected (0.057 seconds)
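
Setting hive.exec.mode.local.auto=true lets Hive run small jobs in-process instead of submitting them to YARN, which is why the next statement finishes in about 2 seconds rather than 13-46. The setting is per-session (later sessions in this transcript fall back to YARN jobs), and a job only qualifies for local mode when its input is small; a sketch of the related knobs with Hive's usual defaults:

set hive.exec.mode.local.auto=true;
-- the job must read at most this many bytes to run locally (default 128 MB)
set hive.exec.mode.local.auto.inputbytes.max=134217728;
-- the job must read at most this many files to run locally (default 4)
set hive.exec.mode.local.auto.input.files.max=4;
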
0: jdbc:hive2://hdp-nn-01:10000> create table t_temp1 as select username ,month ,sum(salary) salary from t_access_time group by username,month;
INFO  : Number of reduce tasks not specified. Estimated from input data size: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_local164879396_0001
INFO  : The url to track the job: http://localhost:8080/
INFO  : Job running in-process (local Hadoop)
INFO  : 2018-10-10 23:52:25,322 Stage-1 map = 100%,  reduce = 100%
INFO  : Ended Job = job_local164879396_0001
INFO  : Moving data to: hdfs://hdp-nn-01:9000/user/hive/warehouse/t_temp1 from hdfs://hdp-nn-01:9000/user/hive/warehouse/.hive-staging_hive_2018-10-10_23-52-23_803_4672116048929793587-5/-ext-10001
INFO  : Table default.t_temp1 stats: [numFiles=1, numRows=4, totalSize=53, rawDataSize=49]
No rows affected (2.014 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select * from t_temp1;
+-------------------+----------------+-----------------+--+
| t_temp1.username  | t_temp1.month  | t_temp1.salary  |
+-------------------+----------------+-----------------+--+
| A                 | 2015-01        | 62              |
| A                 | 2015-02        | 12              |
| B                 | 2015-01        | 170             |
| B                 | 2015-02        | 30              |
+-------------------+----------------+-----------------+--+
4 rows selected (0.179 seconds)
0: jdbc:hive2://hdp-nn-01:10000> [root@hdp-nn-01 apps]# beeline
Beeline version 1.2.2 by Apache Hive
beeline>  !connect jdbc:hive2://hdp-nn-01:10000
Connecting to jdbc:hive2://hdp-nn-01:10000
Enter username for jdbc:hive2://hdp-nn-01:10000: root
Enter password for jdbc:hive2://hdp-nn-01:10000:
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hdp-nn-01:10000> select * from t_temp1 a  join t_temp1 b  on a.username=b.username;
INFO  : Execution completed successfully
INFO  : MapredLocal task succeeded
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0003
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0003/
INFO  : Starting Job = job_1539182373092_0003, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0003/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0003
INFO  : Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
INFO  : 2018-10-10 23:56:53,180 Stage-3 map = 0%,  reduce = 0%
INFO  : 2018-10-10 23:56:58,333 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.17 sec
INFO  : MapReduce Total cumulative CPU time: 1 seconds 170 msec
INFO  : Ended Job = job_1539182373092_0003
+-------------+----------+-----------+-------------+----------+-----------+--+
| a.username  | a.month  | a.salary  | b.username  | b.month  | b.salary  |
+-------------+----------+-----------+-------------+----------+-----------+--+
| A           | 2015-01  | 62        | A           | 2015-01  | 62        |
| A           | 2015-02  | 12        | A           | 2015-01  | 62        |
| A           | 2015-01  | 62        | A           | 2015-02  | 12        |
| A           | 2015-02  | 12        | A           | 2015-02  | 12        |
| B           | 2015-01  | 170       | B           | 2015-01  | 170       |
| B           | 2015-02  | 30        | B           | 2015-01  | 170       |
| B           | 2015-01  | 170       | B           | 2015-02  | 30        |
| B           | 2015-02  | 30        | B           | 2015-02  | 30        |
+-------------+----------+-----------+-------------+----------+-----------+--+
8 rows selected (14.906 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select * from t_temp1 a  join t_temp1 b  on a.username=b.username where a.month <=b.month ;
INFO  : Execution completed successfully
INFO  : MapredLocal task succeeded
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0004
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0004/
INFO  : Starting Job = job_1539182373092_0004, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0004/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0004
INFO  : Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
INFO  : 2018-10-11 00:01:42,367 Stage-3 map = 0%,  reduce = 0%
INFO  : 2018-10-11 00:01:46,472 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.24 sec
INFO  : MapReduce Total cumulative CPU time: 1 seconds 240 msec
INFO  : Ended Job = job_1539182373092_0004
+-------------+----------+-----------+-------------+----------+-----------+--+
| a.username  | a.month  | a.salary  | b.username  | b.month  | b.salary  |
+-------------+----------+-----------+-------------+----------+-----------+--+
| A           | 2015-01  | 62        | A           | 2015-01  | 62        |
| A           | 2015-01  | 62        | A           | 2015-02  | 12        |
| A           | 2015-02  | 12        | A           | 2015-02  | 12        |
| B           | 2015-01  | 170       | B           | 2015-01  | 170       |
| B           | 2015-01  | 170       | B           | 2015-02  | 30        |
| B           | 2015-02  | 30        | B           | 2015-02  | 30        |
+-------------+----------+-----------+-------------+----------+-----------+--+
6 rows selected (13.334 seconds)
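
This filtered self-join is the heart of the method: for each user, every row from alias b (the "current" month) is paired with all rows from alias a whose month is less than or equal to it, so the a-side rows are exactly the months that should count toward b's cumulative total. Summing a.salary per (b.username, b.month) then gives the running total. The steps below materialize the join into t_temp2 before aggregating, but the same result can be computed in one statement straight from t_temp1 (a sketch based on the queries in this walkthrough):

select b.username,
       b.month,
       max(b.salary) as salary,      -- the month's own total (one value per group)
       sum(a.salary) as accumulate   -- running total over all months <= b.month
from t_temp1 a join t_temp1 b on a.username = b.username
where a.month <= b.month
group by b.username, b.month;
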
0: jdbc:hive2://hdp-nn-01:10000> create table t_temp2 as
0: jdbc:hive2://hdp-nn-01:10000> select  
0: jdbc:hive2://hdp-nn-01:10000> a.username as aname,
0: jdbc:hive2://hdp-nn-01:10000> a.month as amonth,
0: jdbc:hive2://hdp-nn-01:10000> a.salary as asalary,
0: jdbc:hive2://hdp-nn-01:10000> b.username as bname,
0: jdbc:hive2://hdp-nn-01:10000> b.month as bmonth,
0: jdbc:hive2://hdp-nn-01:10000> b.salary as bsalary
0: jdbc:hive2://hdp-nn-01:10000>
0: jdbc:hive2://hdp-nn-01:10000> from t_temp1 a  join t_temp1 b  on a.username=b.username where a.month <=b.month ;
INFO  : Execution completed successfully
INFO  : MapredLocal task succeeded
INFO  : Number of reduce tasks is set to 0 since there's no reduce operator
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0005
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0005/
INFO  : Starting Job = job_1539182373092_0005, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0005/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0005
INFO  : Hadoop job information for Stage-4: number of mappers: 1; number of reducers: 0
INFO  : 2018-10-11 00:06:07,100 Stage-4 map = 0%,  reduce = 0%
INFO  : 2018-10-11 00:06:12,213 Stage-4 map = 100%,  reduce = 0%, Cumulative CPU 1.24 sec
INFO  : MapReduce Total cumulative CPU time: 1 seconds 240 msec
INFO  : Ended Job = job_1539182373092_0005
INFO  : Moving data to: hdfs://hdp-nn-01:9000/user/hive/warehouse/t_temp2 from hdfs://hdp-nn-01:9000/user/hive/warehouse/.hive-staging_hive_2018-10-11_00-06-00_196_2056446029980412768-7/-ext-10001
INFO  : Table default.t_temp2 stats: [numFiles=1, numRows=6, totalSize=159, rawDataSize=153]
No rows affected (13.422 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select * from t_temp2;
+----------------+-----------------+------------------+----------------+-----------------+------------------+--+
| t_temp2.aname  | t_temp2.amonth  | t_temp2.asalary  | t_temp2.bname  | t_temp2.bmonth  | t_temp2.bsalary  |
+----------------+-----------------+------------------+----------------+-----------------+------------------+--+
| A              | 2015-01         | 62               | A              | 2015-01         | 62               |
| A              | 2015-01         | 62               | A              | 2015-02         | 12               |
| A              | 2015-02         | 12               | A              | 2015-02         | 12               |
| B              | 2015-01         | 170              | B              | 2015-01         | 170              |
| B              | 2015-01         | 170              | B              | 2015-02         | 30               |
| B              | 2015-02         | 30               | B              | 2015-02         | 30               |
+----------------+-----------------+------------------+----------------+-----------------+------------------+--+
6 rows selected (0.103 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select bname,bmonth,sum(asalary) from t_temp2 group by bname,bmonth, bsalary;
INFO  : Number of reduce tasks not specified. Estimated from input data size: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0006
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0006/
INFO  : Starting Job = job_1539182373092_0006, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0006/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0006
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2018-10-11 00:11:48,163 Stage-1 map = 0%,  reduce = 0%
INFO  : 2018-10-11 00:11:53,351 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.38 sec
INFO  : 2018-10-11 00:11:58,484 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.44 sec
INFO  : MapReduce Total cumulative CPU time: 3 seconds 440 msec
INFO  : Ended Job = job_1539182373092_0006
+--------+----------+------+--+
| bname  |  bmonth  | _c2  |
+--------+----------+------+--+
| A      | 2015-01  | 62   |
| A      | 2015-02  | 74   |
| B      | 2015-01  | 170  |
| B      | 2015-02  | 200  |
+--------+----------+------+--+
4 rows selected (15.245 seconds)
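
The query above includes bsalary in the GROUP BY so that it can also appear in the SELECT list (Hive only allows grouped or aggregated columns there), but it does not actually select it, so the current month's own total is missing from the result; the next query simply adds bsalary to the SELECT list. Since bsalary has a single value per (bname, bmonth), including it does not change the grouping. An equivalent formulation (a sketch) groups only by bname and bmonth and recovers bsalary with max():

select bname, bmonth, max(bsalary) as bsalary, sum(asalary) as tsalary
from t_temp2
group by bname, bmonth;
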
0: jdbc:hive2://hdp-nn-01:10000> select bname,bmonth,bsalary,sum(asalary) as tsalary  from t_temp2 group by bname,bmonth, bsalary;
INFO  : Number of reduce tasks not specified. Estimated from input data size: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0007
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0007/
INFO  : Starting Job = job_1539182373092_0007, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0007/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0007
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2018-10-11 00:12:50,334 Stage-1 map = 0%,  reduce = 0%
INFO  : 2018-10-11 00:12:55,495 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.2 sec
INFO  : 2018-10-11 00:12:59,609 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.22 sec
INFO  : MapReduce Total cumulative CPU time: 3 seconds 220 msec
INFO  : Ended Job = job_1539182373092_0007
+--------+----------+----------+----------+--+
| bname  |  bmonth  | bsalary  | tsalary  |
+--------+----------+----------+----------+--+
| A      | 2015-01  | 62       | 62       |
| A      | 2015-02  | 12       | 74       |
| B      | 2015-01  | 170      | 170      |
| B      | 2015-02  | 30       | 200      |
+--------+----------+----------+----------+--+
4 rows selected (13.917 seconds)
0: jdbc:hive2://hdp-nn-01:10000> create table t_final as select bname,bmonth,bsalary,sum(asalary) as tsalary  from t_temp2 group by bname,bmonth, bsalary;
INFO  : Number of reduce tasks not specified. Estimated from input data size: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1539182373092_0008
INFO  : The url to track the job: http://hdp-nn-01:8088/proxy/application_1539182373092_0008/
INFO  : Starting Job = job_1539182373092_0008, Tracking URL = http://hdp-nn-01:8088/proxy/application_1539182373092_0008/
INFO  : Kill Command = /root/apps/hadoop-2.6.5/bin/hadoop job  -kill job_1539182373092_0008
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2018-10-11 00:20:12,409 Stage-1 map = 0%,  reduce = 0%
INFO  : 2018-10-11 00:20:17,539 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.68 sec
INFO  : 2018-10-11 00:20:23,692 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.75 sec
INFO  : MapReduce Total cumulative CPU time: 3 seconds 750 msec
INFO  : Ended Job = job_1539182373092_0008
INFO  : Moving data to: hdfs://hdp-nn-01:9000/user/hive/warehouse/t_final from hdfs://hdp-nn-01:9000/user/hive/warehouse/.hive-staging_hive_2018-10-11_00-20-08_776_6311242436066492297-7/-ext-10001
INFO  : Table default.t_final stats: [numFiles=1, numRows=4, totalSize=67, rawDataSize=63]
No rows affected (16.612 seconds)
0: jdbc:hive2://hdp-nn-01:10000> select * from t_final;
+----------------+-----------------+------------------+------------------+--+
| t_final.bname  | t_final.bmonth  | t_final.bsalary  | t_final.tsalary  |
+----------------+-----------------+------------------+------------------+--+
| A              | 2015-01         | 62               | 62               |
| A              | 2015-02         | 12               | 74               |
| B              | 2015-01         | 170              | 170              |
| B              | 2015-02         | 30               | 200              |
+----------------+-----------------+------------------+------------------+--+
4 rows selected (0.138 seconds)
0: jdbc:hive2://hdp-nn-01:10000>
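
For reference, the intermediate tables are only needed because this walkthrough restricts itself to joins and GROUP BY. Hive has supported windowing functions since 0.11, so on this 1.2.2 cluster the same final result can be produced in a single query with no t_temp1/t_temp2 (a sketch, not part of the session above):

select username,
       month,
       salary,
       sum(salary) over (partition by username
                         order by month
                         rows between unbounded preceding and current row) as accumulate
from (
    select username, month, sum(salary) as salary
    from t_access_time
    group by username, month
) t;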