A Hands-on Experience with HiveQL

Hive offers a SQL-like experience: syntactically it resembles MySQL, and functionally it is close to SQL-92. Keep in mind, though, that in both respects it is only an approximation.

For the full HiveQL syntax, see: https://cwiki.apache.org/confluence/display/Hive/LanguageManual

Let's start by using Hive to redo the task we previously handled with PIG: analyzing access_log and counting the number of hits per IP.

Create the table as follows:

hive> create table access_log(
   > ip string,
   > other string
   > )
   > row format delimited fields terminated by ' '
   > stored as textfile;
OK
Time taken: 0.106 seconds
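To sanity-check the DDL, DESCRIBE shows the columns Hive registered (this was not run in the original session; the expected output simply mirrors the definition above):

hive> describe access_log;
OK
ip      string
other   string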

Data is likewise loaded with the LOAD DATA command. Here is the file we need to process:

hive> load data local inpath '/data/software/access_log.txt' overwrite into table access_log;
Copying data from file:/data/software/access_log.txt
Copying file: file:/data/software/access_log.txt
Loading data to table default.access_log
Moved to trash: hdfs://hdnode1:9000/user/hive/warehouse/access_log
OK
Time taken: 0.753 seconds
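LOAD DATA LOCAL simply copies the file into the table's warehouse directory on HDFS; the OVERWRITE keyword is why the previous contents were moved to trash in the log above. To see what actually landed there, the Hive CLI can run HDFS commands directly (path taken from the log; not part of the original session):

hive> dfs -ls /user/hive/warehouse/access_log;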

Now query the loaded data, pulling the first 20 rows. Well, what do you know, Hive even provides a LIMIT clause - convenient:

hive> select * from access_log limit 20;
OK
220.181.108.151 -
208.115.113.82  -
220.181.94.221  -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
112.97.24.243   -
Time taken: 0.156 seconds
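The LIMIT query came back almost instantly because a bare SELECT * with LIMIT is served by a simple fetch task and never launches MapReduce. By contrast, even a trivial aggregate over the full table already goes through a MapReduce job, for example (a sketch, not run in the original session):

hive> select count(1) from access_log;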

With SQL, the aggregation is straightforward: a GROUP BY does the job. However, since the data set this time is fairly large, dumping the GROUP BY result straight to the screen is not ideal, so we save the result into another table instead. The SQL statement is as follows:

hive> create table access_result as select ip,count(1) ct from access_log group by ip;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304220923_0007, Tracking URL = http://hdnode1:50030/jobdetails.jsp?jobid=job_201304220923_0007
Kill Command = /usr/local/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=hdnode1:9001 -kill job_201304220923_0007
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-05-02 17:02:12,037 Stage-1 map = 0%,  reduce = 0%
2013-05-02 17:02:18,082 Stage-1 map = 100%,  reduce = 0%
2013-05-02 17:02:27,128 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304220923_0007
Moving data to: hdfs://hdnode1:9000/user/hive/warehouse/access_result
[Warning] could not update stats.
476 Rows loaded to hdfs://hdnode1:9000/tmp/hive-grid/hive_2013-05-02_17-02-05_557_8882025499109535399/-ext-10000
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   HDFS Read: 7118627 HDFS Write: 8051 SUCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 25.529 seconds
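The job banner above also spells out the knobs for reducer parallelism. If the estimated single reducer ever became a bottleneck, you could pin the count yourself before rerunning the aggregation (the value here is purely illustrative, and this was not part of the original session):

hive> set mapred.reduce.tasks=4;
hive> insert overwrite table access_result select ip, count(1) ct from access_log group by ip;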

Query the data (since sorting is required, this again triggers a MapReduce job):

hive> select * from access_result order by ct desc limit 10;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304220923_0009, Tracking URL = http://hdnode1:50030/jobdetails.jsp?jobid=job_201304220923_0009
Kill Command = /usr/local/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=hdnode1:9001 -kill job_201304220923_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-05-02 17:04:45,208 Stage-1 map = 0%,  reduce = 0%
2013-05-02 17:04:48,220 Stage-1 map = 100%,  reduce = 0%
2013-05-02 17:04:57,260 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201304220923_0009
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   HDFS Read: 8051 HDFS Write: 191 SUCESS
Total MapReduce CPU Time Spent: 0 msec
OK
218.20.24.203   4597
221.194.180.166 4576
119.146.220.12  1850
117.136.31.144  1647
121.28.95.48    1597
113.109.183.126 1596
182.48.112.2    870
120.84.24.200   773
61.144.125.162  750
27.115.124.75   470
Time taken: 20.608 seconds
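If the top-N list needs to be used outside Hive, it can also be written to a directory on the local filesystem instead of the screen (a sketch; the target path is made up for illustration):

hive> insert overwrite local directory '/tmp/access_top10'
   > select * from access_result order by ct desc limit 10;

Hive writes the rows as plain text files under that directory, with fields separated by its default ^A delimiter.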

See? As the saying goes, you only know which is better once you put them side by side. No wonder PIG is on the decline; compared with Hive, the PIG approach can basically be ignored.

Source: ITPUB blog, http://blog.itpub.net/7607759/viewspace-761358/
