Hive (6): TopN Statistics in Production

There are two tables in MySQL, city_info (city information) and product_info (product information). In addition, user click logs for products are stored as text files on HDFS. The requirement is to compute, for each city area, the names of the top three products by click count. The details are given below.

DROP TABLE city_info;
CREATE TABLE city_info (
  city_id int DEFAULT NULL,
  city_name varchar(255) DEFAULT NULL,
  area varchar(255) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
insert  into `city_info`(`city_id`,`city_name`,`area`) values (1,'BEIJING','NC'),(2,'SHANGHAI','EC'),(3,'NANJING','EC'),(4,'GUANGZHOU','SC'),(5,'SANYA','SC'),(6,'WUHAN','CC'),(7,'CHANGSHA','CC'),(8,'XIAN','NW'),(9,'CHENGDU','SW'),(10,'HAERBIN','NE');

desc city_info;
Field    |Type        |Null|Key|Default|Extra|
---------|------------|----|---|-------|-----|
city_id  |int(11)     |YES |   |       |     |
city_name|varchar(255)|YES |   |       |     |
area     |varchar(255)|YES |   |       |     |

select * from city_info;
city_id|city_name|area|
-------|---------|----|
      1|BEIJING  |NC  |
      2|SHANGHAI |EC  |
      3|NANJING  |EC  |
      4|GUANGZHOU|SC  |
      5|SANYA    |SC  |
      6|WUHAN    |CC  |
      7|CHANGSHA |CC  |
      8|XIAN     |NW  |
      9|CHENGDU  |SW  |
     10|HAERBIN  |NE  |
DROP TABLE product_info;
CREATE TABLE product_info(
  product_id int DEFAULT NULL,
  product_name varchar(50) DEFAULT NULL,
  extend_info varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

desc product_info;
Field       |Type       |Null|Key|Default|Extra|
------------|-----------|----|---|-------|-----|
product_id  |int(11)    |YES |   |       |     |
product_name|varchar(50)|YES |   |       |     |
extend_info |varchar(50)|YES |   |       |     |

select * from product_info limit 10;

product_id|product_name|extend_info         |
----------|------------|--------------------|
         3|product3    |{"product_status":1}|
         1|product1    |{"product_status":1}|
         1|product1    |{"product_status":1}|
         2|product2    |{"product_status":1}|
         3|product3    |{"product_status":1}|
         4|product4    |{"product_status":1}|
         5|product5    |{"product_status":1}|
         6|product6    |{"product_status":1}|
         7|product7    |{"product_status":1}|
         8|product8    |{"product_status":1}|
[root@10-9-15-140 data]# more user_click.txt
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:01:56,1,72
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:52:26,1,68
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:17:03,1,40
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:32:07,1,21
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:26:06,1,63
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:03:11,1,60
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:43:43,1,30
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:09:58,1,96
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:18:45,1,71
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:42:39,1,8
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:24:30,1,6
95,2bf501a7637549c89cf55342331b15db,2016-05-05 21:29:49,1,26
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:24:12,1,83
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:07:50,1,62
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:19:31,1,61
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:40:51,1,46
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:19:07,1,99
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:46:26,1,69
95,5b8cdcb0b18645a19f4e3e34a241012e,2016-05-05 20:16:10,1,4
95,dd01372f92654525a758a9625f728158,2016-05-05 15:37:02,1,67
95,dd01372f92654525a758a9625f728158,2016-05-05 15:35:16,1,59
95,dd01372f92654525a758a9625f728158,2016-05-05 15:03:40,1,65
95,dd01372f92654525a758a9625f728158,2016-05-05 15:45:43,1,52
95,dd01372f92654525a758a9625f728158,2016-05-05 15:32:02,1,54
95,dd01372f92654525a758a9625f728158,2016-05-05 15:25:32,1,89
...
[root@10-9-15-140 data]# wc -l user_click.txt
11448 user_click.txt

Solution Outline

1) Import the city_info and product_info tables from MySQL into Hive with Sqoop.
2) Join user_click with city_info and product_info in Hive.
3) Use a window function to get the top N within each group, then export the result to MySQL with Sqoop.
4) Wrap the whole pipeline in a shell script. One thing worth mentioning: because city_info and product_info rarely change, they are imported by a separate script that the pipeline script does not cover, but the steps are still written out below.
5) Trigger the pipeline with crontab, running at 2:00 a.m. every day.
Notes:
a) Any temporary table must be dropped before it is recreated on each run; use drop table if exists / create table if not exists.
b) Key steps must write log output.
c) The shell script must be idempotent, so re-running it for the same day does not duplicate data; see the sketch after this list.
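
As a minimal sketch of points a) and c), using the table names that appear later in this post: every run first drops and recreates the Hive temporary table, and deletes any rows already exported for the same day from the MySQL result table before exporting again, so re-running the pipeline for a day never duplicates data.

-- Hive: recreate the temporary table on every run
drop table if exists tmp_product_area_clicks;
-- (the create table ... as select ... statement is shown later in this post)

-- MySQL: clear the target day's rows before re-exporting
delete from result_product_area_clicks_top3 where day='2016-05-05';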

Importing from MySQL into Hive

# The structure of the city_info table in Hive
hive (ruoze_d7)> create table city_info(
               >  city_id int,
               >  city_name string,
               >  area string
               > )
               > row format delimited fields terminated by '\t';
OK
Time taken: 0.141 seconds
hive (ruoze_d7)>

# Import city_info
[hadoop@10-9-15-140 ~]$ sqoop import \
--connect "jdbc:mysql://localhost:3306/ruoze_d7" \
--username root \
--password 123456 \
--table city_info \
--split-by 'city_id' \
--fields-terminated-by '\t' \
--hive-import \
--hive-database ruoze_d7 \
--target-dir '/user/hive/warehouse/ruoze_d7.db/city_info' \
--delete-target-dir \
-m 2

# The structure of the product_info table in Hive
hive (ruoze_d7)> create table product_info(
               >  product_id int,
               >  product_name string,
               >  extend_info string
               > )
               > row format delimited fields terminated by '\t';
OK
Time taken: 0.527 seconds
hive (ruoze_d7)>

# Import product_info
[hadoop@10-9-15-140 ~]$ sqoop import \
--connect "jdbc:mysql://localhost:3306/ruoze_d7" \
--username root \
--password 123456 \
--table product_info \
--split-by 'product_id' \
--fields-terminated-by '\t' \
--hive-import \
--hive-database ruoze_d7 \
--target-dir '/user/hive/warehouse/ruoze_d7.db/product_info' \
--delete-target-dir \
-m 2

P.S.: If this is your first time using Sqoop, you are likely to hit three problems; see the following articles for fixes.
sqoop import into a Hive table reports: Could not load org.apache.hadoop.hive.conf.HiveConf
sqoop import into Hive reports: SemanticException [Error 10072]: Database does not exist
sqoop exception: Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject
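
Once both imports succeed, a quick sanity check in Hive confirms that the rows landed (a minimal sketch; the expected counts depend on your source tables):

hive (ruoze_d7)> select count(*) from city_info;
hive (ruoze_d7)> select * from product_info limit 3;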

Loading Data into user_click

In production, the Hive user_click table keeps growing, so it must be a partitioned table. Typically the previous day's cleaned data is written directly to the table's HDFS location; for example, if the partitioned table lives at hdfs://hadoop001:9000/user/hive/warehouse/ruoze_d7.db/user_click, the cleaned 2016-05-05 logs are placed under a partition directory at that path. At that point a query against the partition returns nothing, because the metastore does not yet know about the new partition. So how do you refresh the partition into the table efficiently? See the code below.

# The user_click table structure
hive (ruoze_d7)> create table user_click(
 user_id int,
 session_id string,
 action_time string,
 city_id int,
 product_id int
)
partitioned by(day string)
row format delimited fields terminated by ',';

# Load the data
hive (ruoze_d7)> load data local inpath '/home/hadoop/data/user_click.txt' into table user_click PARTITION (day='2016-05-05');

# Register the partition in the metastore; the other refresh method (see the note below) is not recommended because it is too heavy-handed
hive (ruoze_d7)> alter table user_click add if not exists partition(day='2016-05-05');
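
The heavy-handed alternative alluded to in the comment above is most likely MSCK REPAIR TABLE, which scans the table's entire HDFS directory and registers every partition it finds in the metastore. It works, but on a table with many partitions it is far more expensive than adding the single new partition explicitly:

hive (ruoze_d7)> msck repair table user_click;
hive (ruoze_d7)> show partitions user_click;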

Joining the Three Tables into a Temporary Table

The temporary table has four columns: area, product id, product name, and click count.

hive (ruoze_d7)> drop table if exists tmp_product_area_clicks;
hive (ruoze_d7)> create table tmp_product_area_clicks as
               > select b.area,a.product_id,c.product_name,count(1) as click_count from user_click a
               > left join city_info b on a.city_id=b.city_id
               > left join product_info c on a.product_id=c.product_id 
               > where a.day='2016-05-05'
               > group by b.area,a.product_id,c.product_name;
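
A quick spot check of the intermediate table (a minimal sketch; the actual values depend on your click data):

hive (ruoze_d7)> select * from tmp_product_area_clicks order by area, click_count desc limit 10;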

Getting the TopN Result with a Window Function

Use the row_number() function.

hive (ruoze_d7)> drop table if exists result_product_area_clicks_top3;
hive (ruoze_d7)> create table result_product_area_clicks_top3
               > row format delimited fields terminated by '\t' as
               > select * from ( 
               > select 
               > "2016-05-05" day,product_id,product_name,area,click_count, 
               > row_number() over(partition by area order by click_count desc) rank
               > from tmp_product_area_clicks
               > ) t where t.rank<=3;
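
row_number() numbers rows 1, 2, 3, ... within each area even when click counts tie, so every area gets exactly three rows. If you would rather let ties share a position, rank() (which leaves gaps after ties) or dense_rank() (which does not) can be swapped in; here is a sketch of the same query with dense_rank(), which may return more than three rows per area when counts tie:

select * from (
select "2016-05-05" day,product_id,product_name,area,click_count,
dense_rank() over(partition by area order by click_count desc) rank
from tmp_product_area_clicks
) t where t.rank<=3;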

Exporting from Hive to MySQL

# Create the result table in MySQL ahead of time; its structure is below
create table result_product_area_clicks_top3(
day varchar(15),
product_id int(11),
product_name varchar(50),
area varchar(10),
click_count int(11),
rank int(10)
);
# For idempotency, delete that day's rows from the MySQL result table first
# The date is substituted by the script
delete from result_product_area_clicks_top3 where day='2016-05-05';

[hadoop@10-9-15-140 ~]$ sqoop export \
--connect "jdbc:mysql://localhost:3306/ruoze_d7" \
--username root \
--password 123456 \
--table result_product_area_clicks_top3 \
--export-dir '/user/hive/warehouse/ruoze_d7.db/result_product_area_clicks_top3' \
--columns "day,product_id,product_name,area,click_count,rank" \
--fields-terminated-by '\t' \
-m 2
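
After the export, the result can be checked directly in MySQL (a minimal sketch for the 2016-05-05 run; on MySQL 8.0 the rank column would need backticks, since RANK became a reserved word there):

select * from result_product_area_clicks_top3
where day='2016-05-05'
order by area, rank;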

Writing the Shell Scripts

First, import city_info and product_info:

[hadoop@10-9-15-140 ~]$ vim city_product.sh
#!/bin/bash
CURRENT=`date +%Y-%m-%d_%H:%M:%S`
USE_DAY=`date --date '1 day ago' +%Y-%m-%d`
echo "Processing date: ${USE_DAY}"

echo "${CURRENT}, creating table city_info"
HIVE_CITY_SQL="use ruoze_d7;drop table if exists city_info;create table if not exists city_info(
city_id int,
city_name string,
area string
)
row format delimited fields terminated by '\t';"

hive -e "${HIVE_CITY_SQL}"

echo "${CURRENT}, creating table product_info"
HIVE_PROD_SQL="use ruoze_d7;drop table if exists product_info;create table if not exists product_info(
product_id int,
product_name string,
extend_info string
)
row format delimited fields terminated by '\t';"

hive -e "${HIVE_PROD_SQL}"

echo "${CURRENT}, importing city_info into Hive"
sqoop import \
--connect "jdbc:mysql://localhost:3306/ruoze_d7" \
--username root \
--password 123456 \
--table city_info \
--split-by 'city_id' \
--fields-terminated-by '\t' \
--hive-import \
--hive-database ruoze_d7 \
--target-dir '/user/hive/warehouse/ruoze_d7.db/city_info' \
--delete-target-dir \
-m 2

echo "${CURRENT}, importing product_info into Hive"
sqoop import \
--connect "jdbc:mysql://localhost:3306/ruoze_d7" \
--username root \
--password 123456 \
--table product_info \
--split-by 'product_id' \
--fields-terminated-by '\t' \
--hive-import \
--hive-database ruoze_d7 \
--target-dir '/user/hive/warehouse/ruoze_d7.db/product_info' \
--delete-target-dir \
-m 2

The Hive offline job runs once per day: it runs at some point today over yesterday's data, so the shell script has to compute the previous day's date, which date --date '1 day ago' +%Y-%m-%d provides. The pipeline shell script follows.
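
Incidentally, if you ever need the same value inside a Hive query rather than in the shell, a small sketch (assuming Hive 1.2+, where current_date is available) is:

hive (ruoze_d7)> select date_sub(current_date, 1);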

[hadoop@10-9-15-140 ~]$ vim top3.sh
#!/bin/bash

CURRENT=`date +%Y-%m-%d_%H:%M:%S`
USE_DAY=`date --date '1 day ago' +%Y-%m-%d`
echo "Processing date: ${USE_DAY}"

echo "${CURRENT}, creating table user_click if it does not exist"
HIVE_USER_SQL="use ruoze_d7;
 create table if not exists user_click(
 user_id int,
 session_id string,
 action_time string,
 city_id int,
 product_id int
)
partitioned by(day string)
row format delimited fields terminated by ',';"
hive -e "${HIVE_USER_SQL}"

echo "${CURRENT}, loading data into the ${USE_DAY} partition"
HIVE_LOAD_SQL="use ruoze_d7;load data local inpath '/home/hadoop/data/user_click.txt' overwrite into table user_click PARTITION (day='${USE_DAY}');"
hive -e "${HIVE_LOAD_SQL}"

echo "${CURRENT}, refreshing the partition"
HIVE_PARTITION_SQL="use ruoze_d7;alter table ruoze_d7.user_click add if not exists partition(day='${USE_DAY}');"
hive -e "${HIVE_PARTITION_SQL}"

echo "${CURRENT}, creating the temporary table with per-area, per-product click counts"
HIVE_TMP_SQL="use ruoze_d7;drop table if exists tmp_product_area_clicks;
create table tmp_product_area_clicks as
select b.area,a.product_id,c.product_name,count(1) as click_count from user_click a
left join city_info b on a.city_id=b.city_id
left join product_info c on a.product_id=c.product_id 
where a.day='${USE_DAY}'
group by b.area,a.product_id,c.product_name;"
hive -e "${HIVE_TMP_SQL}"

echo "${CURRENT}, creating the result table with the top three products by clicks in each area"
HIVE_RESULT_SQL="use ruoze_d7;drop table if exists result_product_area_clicks_top3;
create table result_product_area_clicks_top3
row format delimited fields terminated by '\t' as
select * from ( 
select '${USE_DAY}' day,product_id,product_name,area,click_count,
row_number() over(partition by area order by click_count desc) rank
from tmp_product_area_clicks
) t where t.rank<=3;"
hive -e "${HIVE_RESULT_SQL}"

echo "${CURRENT}, for idempotency, deleting existing ${USE_DAY} rows from the MySQL result table"
MYSQL_DELETE_SQL="use ruoze_d7; delete from result_product_area_clicks_top3 where day='${USE_DAY}';"
sudo mysql -uroot -p123456 -e "${MYSQL_DELETE_SQL}"

echo "${CURRENT}, exporting the Hive result table to MySQL"
sqoop export \
--connect jdbc:mysql://localhost:3306/ruoze_d7 \
--password 123456 \
--username root \
--table result_product_area_clicks_top3 \
--export-dir /user/hive/warehouse/ruoze_d7.db/result_product_area_clicks_top3 \
--columns "day,product_id,product_name,area,click_count,rank" \
--fields-terminated-by '\t' \
-m 2
echo "${CURRENT}, pipeline finished; check whether the data arrived in MySQL"

Scheduled Background Execution

Use crontab to schedule the job. The five crontab fields are minute, hour, day of month, month, and day of week, so 0 2 * * * fires at 02:00 every day. See the code below.

[hadoop@10-9-15-140 ~]$ crontab -e
0 2 * * * nohup /home/hadoop/top3.sh >> /tmp/top3_logs.log 2>&1 &