Hive HQL Operations

Lab Outline

  1. First check whether the Hadoop processes are running. If they are not, switch to the /apps/hadoop/sbin directory and start Hadoop.

  2. Then start MySQL, which is used to store Hive's metadata. (Password: zhangyu)

  3. In the terminal, type the hive command to start the Hive CLI.

  4. Create the local directory /data/hive4 and download the data files goods_visit, order_items, and buyer_favorite into it.

  5. In Hive, create the tables goods_visit, order_items, and buyer_favorite, and load the downloaded files into them. After creating each table, verify it with show tables.

These tables are then used to demonstrate order by, sort by, group by, distribute by, and cluster by.

Detailed Procedure

1. First check whether the Hadoop processes are running. If they are not, switch to the /apps/hadoop/sbin directory and start Hadoop.

jps  
cd /apps/hadoop/sbin  
./start-all.sh  

2. Then start MySQL, which is used to store Hive's metadata. (Password: zhangyu)

sudo service mysql start  

3. Switch to the /data/hive4 directory; if it does not exist, create the hive4 folder first.

mkdir /data/hive4  
cd /data/hive4  

4. Use wget to download the files under http://192.168.1.100:60000/allfiles/hive4.

wget http://192.168.1.100:60000/allfiles/hive4/goods_visit  
wget http://192.168.1.100:60000/allfiles/hive4/order_items  
wget http://192.168.1.100:60000/allfiles/hive4/buyer_favorite  

5. In the terminal, type the hive command to start the Hive CLI.

hive

Order by Demo
1. In Hive, create a table goods_visit with two fields, goods_id of type string and click_num of type int, using '\t' as the field delimiter.

create table goods_visit(goods_id string,click_num int)  
row format delimited fields terminated by '\t'  stored as textfile;

Once the table is created, check it:

show tables;  

2. Load the data file goods_visit under the local directory /data/hive4 into the Hive table goods_visit.

load data local inpath '/data/hive4/goods_visit' into table goods_visit;
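
To confirm the load worked, a quick sanity check can be run (my own addition, not part of the original lab steps):

select count(*) from goods_visit;
select * from goods_visit limit 5;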

3. Use order by to sort the goods by click count in descending order, and take the top 10 rows with limit.

select * from goods_visit order by click_num desc limit 10;  

The output is as follows:

hive> select * from goods_visit order by click_num desc limit 10;
Query ID = zhangyu_20170316035151_cf5ad24a-37a4-448c-8875-1e717af2a8ab
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1489635985320_0001, Tracking URL = http://ab0eecb92928:8088/proxy/application_1489635985320_0001/
Kill Command = /apps/hadoop/bin/hadoop job  -kill job_1489635985320_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-03-16 03:52:21,796 Stage-1 map = 0%,  reduce = 0%
2017-03-16 03:52:28,785 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.36 sec
2017-03-16 03:52:36,138 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.76 sec
MapReduce Total cumulative CPU time: 4 seconds 760 msec
Ended Job = job_1489635985320_0001
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.76 sec   HDFS Read: 88110 HDFS Write: 132 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 760 msec
OK
1015320 22458
1014040 11240
1015580 5487
1013460 5333
1016840 5326
1010199 4321
1010362 4155
1010121 4033
1011153 3746
1015061 3721
Time taken: 41.766 seconds, Fetched: 10 row(s)
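
A side note on why limit matters here: because order by funnels every row through a single reducer, Hive's strict mode rejects an order by that has no limit. This is a hedged aside rather than part of the original lab; hive.mapred.mode is a standard Hive setting but was not used in this experiment.

set hive.mapred.mode=strict;
-- with strict mode on, an order by without limit is refused,
-- so the query keeps its limit clause:
select * from goods_visit order by click_num desc limit 10;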

Sort by Demo
1. To demonstrate the effect of sort by, set the number of reducers to 3:

set mapred.reduce.tasks=3;  
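
As a side note (an observation of mine, not a step in the original lab), mapred.reduce.tasks is the older Hadoop property name; the job output below also mentions the newer equivalent, which can be set the same way:

set mapreduce.job.reduces=3;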

2. Create an order detail table for an e-commerce site named order_items, with seven fields: item_id, order_id, goods_id, goods_number, shop_price, goods_price, and goods_amount, all of type string, using '\t' as the field delimiter.

create table order_items(item_id string,order_id string,goods_id string,goods_number string, shop_price string,goods_price string,goods_amount string)  
row format delimited fields terminated by '\t'  stored as textfile; 

3. Load the data file order_items under the local directory /data/hive4 into the Hive table order_items.

load data local inpath '/data/hive4/order_items' into table order_items;

4. Sort by goods ID (goods_id), taking 20 rows with limit.

select * from order_items sort by goods_id limit 20;

The output is as follows:

hive> select * from order_items sort by goods_id limit 20;
Query ID = zhangyu_20161209030404_1fe429da-7b59-4992-bec0-74c8187a3716
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1481083320872_0042, Tracking URL = http://dc988a3efb65:8088/proxy/application_1481083320872_0042/
Kill Command = /apps/hadoop/bin/hadoop job  -kill job_1481083320872_0042
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 3
2016-12-09 03:04:22,006 Stage-1 map = 0%,  reduce = 0%
2016-12-09 03:04:30,297 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.95 sec
2016-12-09 03:04:37,592 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 8.74 sec
2016-12-09 03:04:38,630 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 11.53 sec
2016-12-09 03:04:39,668 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 14.43 sec
MapReduce Total cumulative CPU time: 14 seconds 430 msec
Ended Job = job_1481083320872_0042
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1481083320872_0043, Tracking URL = http://dc988a3efb65:8088/proxy/application_1481083320872_0043/
Kill Command = /apps/hadoop/bin/hadoop job  -kill job_1481083320872_0043
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2016-12-09 03:04:47,572 Stage-2 map = 0%,  reduce = 0%
2016-12-09 03:04:52,754 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 1.37 sec
2016-12-09 03:04:58,977 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 3.06 sec
MapReduce Total cumulative CPU time: 3 seconds 60 msec
Ended Job = job_1481083320872_0043
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 3   Cumulative CPU: 14.43 sec   HDFS Read: 8811070 HDFS Write: 3297 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 3.06 sec   HDFS Read: 10201 HDFS Write: 658 SUCCESS
Total MapReduce CPU Time Spent: 17 seconds 490 msec
OK
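
Since each of the three reducers sorts only its own share of the rows, the combined output above is not guaranteed to be globally ordered. To inspect the per-reducer ordering, one option (a sketch of my own; the output path /data/hive4/sort_out is assumed) is to write the result to a local directory, as the distribute by demo does later:

insert overwrite local directory '/data/hive4/sort_out'
select * from order_items sort by goods_id;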

Group by Demo
1. Create a buyer favorites table for an e-commerce site named buyer_favorite, with three fields: buyer_id, goods_id, and dt, all of type string, using '\t' as the field delimiter.

create table buyer_favorite(buyer_id string,goods_id string,dt string)  
row format delimited fields terminated by '\t'  stored as textfile;  

2. Load the data file buyer_favorite under the local directory /data/hive4 into the Hive table buyer_favorite.

load data local inpath '/data/hive4/buyer_favorite' into table buyer_favorite;

3. Group by dt and count the number of buyer_id values for each day.

select dt,count(buyer_id) from buyer_favorite group by dt;  
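
If a buyer can favorite several goods on the same day and only unique buyers should be counted, a distinct count can be used instead (an illustrative variant, not part of the original lab):

select dt, count(distinct buyer_id) from buyer_favorite group by dt;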

Distribute by Demo
1. To demonstrate the effect of distribute by, set the number of reducers to 3:

set mapred.reduce.tasks=3; 

2. Using the buyer favorites table, distribute the rows by buyer ID (buyer_id) and write the result to the local directory /data/hive4/out.

insert overwrite local directory '/data/hive4/out' select * from buyer_favorite distribute by buyer_id;  

The output is as follows:

hive> insert overwrite local directory '/data/hive4/out' select * from buyer_favorite distribute by buyer_id;
Query ID = zhangyu_20161209060808_ac823317-6a3e-4328-86ae-e7a742239d1c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1481083320872_0052, Tracking URL = http://dc988a3efb65:8088/proxy/application_1481083320872_0052/
Kill Command = /apps/hadoop/bin/hadoop job  -kill job_1481083320872_0052
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 3
2016-12-09 06:08:17,091 Stage-1 map = 0%,  reduce = 0%
2016-12-09 06:08:22,361 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.14 sec
2016-12-09 06:08:28,558 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 4.49 sec
2016-12-09 06:08:29,591 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 6.67 sec
2016-12-09 06:08:30,628 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 8.73 sec
MapReduce Total cumulative CPU time: 8 seconds 730 msec
Ended Job = job_1481083320872_0052
Copying data to local directory /data/hive4/out
Copying data to local directory /data/hive4/out
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 3   Cumulative CPU: 8.73 sec   HDFS Read: 216881 HDFS Write: 199946 SUCCESS
Total MapReduce CPU Time Spent: 8 seconds 730 msec
OK
Time taken: 21.62 seconds

3. Switch to a local Linux terminal and list the files under /data/hive4/out.

cd /data/hive4/out  	
ls

The data has been distributed into three files by buyer_id.

Cluster by Demo
Cluster by combines the behavior of distribute by with that of sort by; it is equivalent to distribute by plus sort by on the same column, except that the sort order is fixed (ascending) and cannot be specified with ASC or DESC.
1. Set the number of reducers to 3.

set mapred.reduce.tasks=3;  

2. Distribute buyer_favorite into three files by buyer_id, with each file sorted by buyer_id.

select * from buyer_favorite cluster by buyer_id;
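
For comparison, based on the equivalence described above, the cluster by query should behave like an explicit distribute by plus sort by on the same column (a sketch, not an additional lab step):

select * from buyer_favorite distribute by buyer_id sort by buyer_id;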

Order by vs. Sort by
Order by produces a single, globally sorted result over the full data set. It uses only one reducer, so that reducer carries the entire load; on large data sets the query can take a very long time or may never finish, which is why the number of output rows should be bounded with limit.
Sort by sorts within each reducer. There can be multiple reducers, and each reducer's output is ordered, but the concatenation of those outputs is not necessarily globally ordered. If a global order is needed, a further merge over the partially sorted outputs is still required; this is like sorting within each group first and then only having to merge the sorted groups, which can save a considerable amount of time on large data sets.
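
To make the contrast concrete, the two forms on the goods_visit table from this lab would look as follows (a sketch; the reducer count of 3 simply reuses the value set earlier):

-- global order through a single reducer; limit bounds the output:
select * from goods_visit order by click_num desc limit 10;

-- per-reducer order only; rows from different reducers are not globally ordered:
set mapred.reduce.tasks=3;
select * from goods_visit sort by click_num desc;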
Distribute by vs. Group by
Distribute by splits the data on the map side according to the specified condition and sends it to the reducers, partitioning rows into different reduce output files by the given field.
Group by divides a data set into a number of smaller regions according to some rule and then processes each region. For example, if an e-commerce site wants to analyse a year of sales, it can use group by to split the year's data by month and then compute the top ten best-selling goods for each month (sketched below).
Both partition data by key and both involve a reduce step; the difference is that distribute by merely scatters the rows, while group by gathers rows with the same key together and must be followed by an aggregation.
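
As an illustration of the monthly-sales example above (the table sales and its columns are hypothetical, introduced only for this sketch):

-- group by gathers rows with the same key and aggregates them:
select sale_month, goods_id, sum(amount) as total_amount
from sales
group by sale_month, goods_id;

-- distribute by only routes rows with the same sale_month to the same reducer; no aggregation happens:
select * from sales distribute by sale_month;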
