-- Create a dual table (use INSERT only for testing)
hive> create table dual(x string);
OK
Time taken: 0.282 seconds
hive> insert into table dual values('');
Query ID = hadoop_20180611233030_645e070e-77f9-4ea4-8b32-ee306424c16b
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1528730871092_0001, Tracking URL = http://hadoop000:8088/proxy/application_1528730871092_0001/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop job -kill job_1528730871092_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2018-06-11 23:31:32,290 Stage-1 map = 0%, reduce = 0%
2018-06-11 23:31:37,712 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.54 sec
MapReduce Total cumulative CPU time: 1 seconds 540 msec
Ended Job = job_1528730871092_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://hadoop000:9000/user/hive/warehouse/hive3.db/dual/.hive-staging_hive_2018-06-11_23-31-19_987_4145860992197710987-1/-ext-10000
Loading data to table hive3.dual
Table hive3.dual stats: [numFiles=1, numRows=1, totalSize=1, rawDataSize=0]
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative
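With a single empty row loaded, `dual` can act as a stand-in FROM clause for evaluating expressions and trying out built-in functions, much like Oracle's DUAL table. A sketch of typical test queries (the functions shown are standard Hive built-ins; the exact output formatting depends on your CLI):

```sql
-- Evaluate a constant expression without touching real data
hive> select 1 + 1 from dual;

-- Try a built-in function before using it in a production query
hive> select concat('hello', '-', 'hive') from dual;

-- Check a UDF's behavior, e.g. date handling
hive> select current_date from dual;
```

Note that on recent Hive versions (0.13+), many of these expressions can also be evaluated with no table at all, e.g. `select 1 + 1;`, so the dual table is mainly a convenience for testing on older deployments.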