Two reference tables, city_info and product_info, live in MySQL (relatively static data is usually kept in MySQL), while the user-behavior logs live on HDFS. The goal is to compute, for each area, the top 3 most popular products, with popularity measured by click count.
1) Import the city_info and product_info tables into Hive
2) Join user_click against city_info and product_info inside Hive
3) Use a window function to take the top N within each group
Importing from MySQL into Hive
Create the city_info table in Hive:
create table city_info(
city_id int,
city_name string,
area string
)
row format delimited fields terminated by '\t';
Use Sqoop to import city_info from MySQL into Hive:
sqoop import \
--connect jdbc:mysql://localhost:3306/ruozedata \
--username root \
--password root \
--table city_info \
--split-by city_id \
--fields-terminated-by '\t' \
--hive-import \
--hive-database d7_hive \
--hive-table city_info \
--hive-overwrite \
--delete-target-dir \
-m 2
Create the product_info table in Hive:
create table product_info(
product_id int,
product_name string,
extend_info string
)
row format delimited fields terminated by '\t';
Use Sqoop to import product_info from MySQL into Hive:
sqoop import \
--connect jdbc:mysql://localhost:3306/ruozedata \
--password root \
--username root \
--table product_info \
--hive-database d7_hive \
--hive-table product_info \
--hive-import \
--hive-overwrite \
--delete-target-dir \
--fields-terminated-by '\t' \
--split-by product_id \
-m 2
At this point all the tables are in Hive, so the rest of the work is just writing SQL.
user_click
In production, the user_click table in Hive grows continuously, so it must be a partitioned table.
Create the user_click table:
create table user_click(
user_id int,
session_id string,
action_time string,
city_id int,
product_id int
) partitioned by (date string)
row format delimited fields terminated by ',';
load data local inpath '/home/hadoop/data/topn/init/user_click.txt' overwrite into table user_click partition(date='2016-05-05');
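In production, each new day's logs are loaded into their own partition rather than overwriting a single one. A minimal sketch, assuming a per-day file naming convention of user_click_<date>.txt (that convention is an assumption, not from the original):

```shell
# Hypothetical incremental load for a later day; the file name
# user_click_<date>.txt is an assumed convention.
DO_DATE="2016-05-06"
HQL="load data local inpath '/home/hadoop/data/topn/init/user_click_${DO_DATE}.txt'
overwrite into table user_click partition(date='${DO_DATE}');"

# This sketch only prints the statement; in practice run: hive -e "$HQL"
echo "$HQL"
```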
drop table if exists tmp_product_click_basic_info;
create table tmp_product_click_basic_info
as
select u.product_id, u.city_id, c.city_name, c.area
from
(select product_id, city_id from user_click where date='2016-05-05') u
join
(select city_id, city_name, area from city_info) c
on u.city_id = c.city_id;
drop table if exists tmp_area_product_click_count;
create table tmp_area_product_click_count
as
select product_id, area, count(1) click_count
from tmp_product_click_basic_info
group by product_id, area;
drop table if exists tmp_area_product_click_count_full_info;
create table tmp_area_product_click_count_full_info
as
select a.product_id, a.area, a.click_count, b.product_name
from tmp_area_product_click_count a
join product_info b
on a.product_id = b.product_id;
drop table if exists area_product_click_count_top3;
create table area_product_click_count_top3
row format delimited fields terminated by '\t'
as
select * from
(
select
"2016-05-05" day,product_id,product_name,area,click_count,
row_number() over(partition by area order by click_count desc) r
from tmp_area_product_click_count_full_info
) t where t.r<=3;
Exporting from Hive back to MySQL
First create the target table on the MySQL side:
create table area_product_click_count_top3(
day varchar(15),
product_id int(11),
product_name varchar(50),
area varchar(10),
click_count int(11),
r int(10)
);
sqoop export \
--connect jdbc:mysql://localhost:3306/ruozedata \
--password root \
--username root \
--table area_product_click_count_top3 \
--export-dir /user/hive/warehouse/d7_hive.db/area_product_click_count_top3 \
--columns "day,product_id,product_name,area,click_count,r" \
--input-fields-terminated-by '\t' \
-m 2
How to wrap this whole pipeline in a shell script:
1) The offline job runs once a day: early each morning we process the previous day's data, so yesterday's date (e.g. 2016-05-05) is computed with a Linux command such as date -d '-1 day' +%Y-%m-%d
2) city_info and product_info are imported by separate scripts
3) All of the SQL is wrapped in the shell script
Notes:
- Drop every temp table before re-creating it: use drop table if exists
- Log each key step
- When writing the results out, solve the idempotency problem: re-running the job for the same day must not duplicate rows
With all the business logic wrapped in the shell script, it is triggered by crontab, running every day at 2:00 AM.
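Putting those notes together, here is a minimal sketch of such a wrapper. The script name, log path, and the DRY_RUN switch are assumptions for illustration; database names, paths, and credentials are the ones used in the examples above. With DRY_RUN left at its default of 1 the script only logs the commands it would run; set DRY_RUN=0 to actually execute them.

```shell
#!/bin/bash
# area_top3.sh -- hypothetical daily wrapper for the pipeline above.
# DRY_RUN=1 (the default here) only logs commands; DRY_RUN=0 executes them.
set -e

DO_DATE=${1:-$(date -d '-1 day' +%Y-%m-%d)}   # partition to process: yesterday
LOG="/tmp/area_top3_${DO_DATE}.log"

log() { echo "$(date '+%F %T') $*" >> "$LOG"; }   # every key step gets a log line

run() {
  log "RUN: $*"
  if [ "${DRY_RUN:-1}" = "0" ]; then "$@"; fi
}

log "job start, partition date=${DO_DATE}"

# Hive side: drop each temp table before re-creating it, so this whole
# block can safely be re-run for the same day.
run hive -e "
use d7_hive;
drop table if exists tmp_product_click_basic_info;
create table tmp_product_click_basic_info as
select u.product_id, u.city_id, c.city_name, c.area
from (select product_id, city_id from user_click where date='${DO_DATE}') u
join (select city_id, city_name, area from city_info) c
on u.city_id = c.city_id;
-- ... the remaining create-table statements from the section above,
-- with '2016-05-05' replaced by '${DO_DATE}' ...
"

# Idempotent output: delete this day's rows in MySQL before exporting,
# so a re-run replaces the results instead of duplicating them.
run mysql -uroot -proot ruozedata \
    -e "delete from area_product_click_count_top3 where day='${DO_DATE}'"

run sqoop export \
    --connect jdbc:mysql://localhost:3306/ruozedata \
    --username root --password root \
    --table area_product_click_count_top3 \
    --export-dir /user/hive/warehouse/d7_hive.db/area_product_click_count_top3 \
    --columns "day,product_id,product_name,area,click_count,r" \
    --input-fields-terminated-by '\t' \
    -m 2

log "job end"
```

Registered in crontab, the 2 AM schedule would look like `0 2 * * * /home/hadoop/shell/area_top3.sh` (the script path is hypothetical).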