Offline Data Warehouse (Part 4)

This article details how the offline data warehouse is built: creating the database and the startup and event log tables in the ODS layer, parsing and loading data in the DWD layer, and business knowledge such as user-activity metrics. It also covers building the DWS and ADS layers, implementing key business requirements such as active, new, retained, silent, returning, and churned users.

Offline Data Warehouse

1. Warehouse Construction: ODS Layer

1.1 Creating the Database

1) Create the gmall database

  • hive (default)> create database gmall;

2) Use the gmall database

  • hive (default)> use gmall;
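You can confirm the switch took effect before creating tables:

hive (gmall)>
select current_database();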

1.2 Creating the Startup Log Table ods_start_log

1) Create a partitioned table that reads LZO input and writes text output, holding one raw JSON line per row for later parsing

hive (gmall)>
drop table if exists ods_start_log;
CREATE EXTERNAL TABLE ods_start_log (line string)
PARTITIONED BY (dt string)
STORED AS
INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/warehouse/gmall/ods/ods_start_log';
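To verify that the input/output formats and the external location took effect, inspect the table metadata:

hive (gmall)>
desc formatted ods_start_log;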

2) Load data

hive (gmall)>
load data inpath '/origin_data/gmall/log/topic_start/2021-08-01' into table gmall.ods_start_log partition(dt='2021-08-01');

3) Create an index for the LZO-compressed files

hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/gmall/ods/ods_start_log/dt=2021-08-01
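The indexer writes a .index file next to each .lzo file; with the index in place, MapReduce can split a compressed file across several mappers instead of reading it in a single task. You can confirm the index was created by listing the partition directory:

hadoop fs -ls /warehouse/gmall/ods/ods_start_log/dt=2021-08-01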

1.3 Creating the Event Log Table ods_event_log

1) Create a partitioned table that reads LZO input and writes text output, holding one raw JSON line per row for later parsing

hive (gmall)>
drop table if exists ods_event_log;
CREATE EXTERNAL TABLE ods_event_log(line string)
PARTITIONED BY (dt string)
STORED AS
INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/warehouse/gmall/ods/ods_event_log';

2) Load data

hive (gmall)>
load data inpath '/origin_data/gmall/log/topic_event/2021-08-01' into table gmall.ods_event_log partition(dt='2021-08-01');

3) Create an index for the LZO-compressed files

hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/gmall/ods/ods_event_log/dt=2021-08-01

1.4 ODS Layer Data Loading Script

1) Create the script in the /opt/module/hive/myscripts directory on hadoop103

  • [atguigu@hadoop103 myscripts]$ vim ods_log.sh

Write the following into the script:

#!/bin/bash
#Load each day's data into the two ODS tables and build LZO indexes for it.
#Accept the date to load as an argument. [ -n "$1" ] tests the string:
#it returns true when $1 is non-empty and false when $1 is empty.
if [ -n "$1" ]
then
        #use the date passed on the command line
        do_date=$1
else
        #default to yesterday
        do_date=$(date -d yesterday +%F)
fi

echo "=== log date: $do_date ==="

APP=gmall
sql="
load data inpath '/origin_data/gmall/log/topic_start/$do_date' into table $APP.ods_start_log partition(dt='$do_date');

load data inpath '/origin_data/gmall/log/topic_event/$do_date' into table $APP.ods_event_log partition(dt='$do_date');"

hive -e "$sql"

hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/gmall/ods/ods_start_log/dt=$do_date

hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/gmall/ods/ods_event_log/dt=$do_date

Note 1:
[ -n "$var" ] tests whether the variable's value is empty:
– non-empty value: returns true
– empty value: returns false
– always wrap the tested variable in double quotes
Note 2:
For the date command's options, run: [atguigu@hadoop103 ~]$ date --help
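A quick sanity check of both notes in a shell (sample output assumes it is run on 2021-08-03):

[atguigu@hadoop103 ~]$ a=""
[atguigu@hadoop103 ~]$ [ -n "$a" ] && echo non-empty || echo empty
empty
# why the quotes matter: unquoted, an empty $a collapses the test to [ -n ],
# a one-argument test that is always true
[atguigu@hadoop103 ~]$ [ -n $a ] && echo non-empty || echo empty
non-empty
[atguigu@hadoop103 ~]$ date -d yesterday +%F
2021-08-02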

2) Make the script executable

  • [atguigu@hadoop103 myscripts]$ chmod u+x ods_log.sh

3) Run the script

  • [atguigu@hadoop103 myscripts]$ bash ods_log.sh 2021-08-03


2. Warehouse Construction: DWD Layer

Clean the ODS data: remove nulls, dirty records, and values outside valid ranges; switch from row storage to columnar storage; and change the compression format.

2.1 Parsing the DWD Startup Table Data

The get_json_object function:
get_json_object(json_txt, path) - Extract a json object from path.
Extracts a json object from a json string based on the json path specified, and returns the json string of the extracted json object. It returns null if the input json string is invalid.
A limited version of JSONPath is supported:
$ : Root object (the entire JSON object)
. : Child operator (fetches a child element/attribute of a JSON object)
[] : Subscript operator (fetches an element of a JSON array)

To get 18:
select get_json_object('{"name":"jack","age":18,"parents":
[{"name":"oldjack","age":48},{"name":"jackmom","age":49}]}','$.age');

To get 49 (arrays are zero-indexed, so parents[1] is jackmom):
select get_json_object('{"name":"jack","age":18,"parents":
[{"name":"oldjack","age":48},{"name":"jackmom","age":49}]}','$.parents[1].age');

1) Create the table

hive (gmall)> 
drop table if exists dwd_start_log;
CREATE EXTERNAL TABLE dwd_start_log(
`mid_id` string,
`user_id` string, 
`version_code` string, 
`version_name` string, 
`lang` string, 
`source` string, 
`os` string, 
`area` string, 
`model` string,
`brand` string, 
`sdk_version` string, 
`gmail` string, 
`height_width` string,  
`app_time` string,
`network` string, 
`lng` string, 
`lat` string, 
`entry` string, 
`open_ad_type` string, 
`action` string, 
`loading_time` string, 
`detail` string, 
`extend1` string
)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_start_log/'
TBLPROPERTIES('parquet.compression'='lzo');

2) Load data into the startup table

hive (gmall)> 
insert overwrite table dwd_start_log
PARTITION (dt='2021-08-03')
select 
    get_json_object(line,'$.mid') mid_id,
    get_json_object(line,'$.uid') user_id,
    get_json_object(line,'$.vc') version_code,
    get_json_object(line,'$.vn') version_name,
    get_json_object(line,'$.l') lang,
    get_json_object(line,'$.sr') source,
    get_json_object(line,'$.os') os,
    get_json_object(line,'$.ar') area,
    get_json_object(line,'$.md') model,
    get_json_object(line,'$.ba') brand,
    get_json_object(line,'$.sv') sdk_version,
    get_json_object(line,'$.g') gmail,
    get_json_object(line,'$.hw') height_width,
    get_json_object(line,'$.t') app_time,
    get_json_object(line,'$.nw') network,
    get_json_object(line,'$.ln') lng,
    get_json_object(line,'$.la') lat,
    get_json_object(line,'$.entry') entry,
    get_json_object(line,'$.open_ad_type') open_ad_type,
    get_json_object(line,'$.action') action,
    get_json_object(line,'$.loading_time') loading_time,
    get_json_object(line,'$.detail') detail,
    get_json_object(line,'$.extend1') extend1
from ods_start_log 
where dt='2021-08-03';
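A quick check that the partition was written and the JSON fields were extracted (values depend on your log data):

hive (gmall)>
select mid_id, user_id, app_time from dwd_start_log where dt='2021-08-03' limit 3;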

2.2 DWD Startup Table Loading Script

1) Create the script in the /opt/module/myscripts directory on hadoop103

  • [atguigu@hadoop103 myscripts]$ vim dwd_start_log.sh

Write the following into the script:

#!/bin/bash
#Load each day's data from the ODS startup table into the DWD startup table.

#Accept the date to load as an argument. [ -n "$1" ] tests the string:
#it returns true when $1 is non-empty and false when $1 is empty.
if [ -n "$1" ]
then
        #use the date passed on the command line
        do_date=$1
else
        #default to yesterday
        do_date=$(date -d yesterday +%F)
fi

echo "=== log date: $do_date ==="
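
#in the double-quoted SQL below, bash expands $do_date, while JSON paths such
#as $.mid stay literal, because '.' cannot start a bash variable name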

sql="
insert overwrite table gmall.dwd_start_log
PARTITION (dt='$do_date')
select 
    get_json_object(line,'$.mid') mid_id,
    get_json_object(line,'$.uid') user_id,
    get_json_object(line,'$.vc') version_code,
    get_json_object(line,'$.vn') version_name,
    get_json_object(line,'$.l') lang,
    get_json_object(line,'$.sr') source,
    get_json_object(line,'$.os') os,
    get_json_object(line,'$.ar') area,
    get_json_object(line,'$.md') model,
    get_json_object(line,'$.ba') brand,
    get_json_object(line,'$.sv') sdk_version,
    get_json_object(line,'$.g') gmail,
    get_json_object(line,'$.hw') height_width,
    get_json_object(line,'$.t') app_time,
    get_json_object(line,'$.nw') network,
    get_json_object(line,'$.ln') lng,
    get_json_object(line,'$.la') lat,
    get_json_object(line,'$.entry') entry,
    get_json_object(line,'$.open_ad_type') open_ad_type,
    get_json_object(line,'$.action') action,
    get_json_object(line,'$.loading_time') loading_time,
    get_json_object(line,'$.detail') detail,
    get_json_object(line,'$.extend1') extend1
from gmall.ods_start_log 
where dt='$do_date';

"
hive -e "$sql"

2) Make the script executable

  • [atguigu@hadoop103 myscripts]$ chmod u+x dwd_start_log.sh

3) Run the script

  • [atguigu@hadoop103 myscripts]$ ./dwd_start_log.sh 2021-08-03


2.3 Parsing the DWD Event Table Data

1) Create the base event detail table

hive (gmall)> 
drop table if exists dwd_base_event_log;
CREATE EXTERNAL TABLE dwd_base_event_log(
`mid_id` string,
`user_id` string, 
`version_code` string, 
`version_name` string, 
`lang` string, 
`source` string, 
`os` string, 
`area` string, 
`model` string,
`brand` string, 
`sdk_version` string, 
`gmail` string, 
`height_width` string, 
`app_time` string, 
`network` string, 
`lng` string, 
`lat` string, 
`event_name` string, 
`event_json` string, 
`server_time` string)
PARTITIONED BY (`dt` string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_base_event_log/'
TBLPROPERTIES('parquet.compression'='lzo');

2) Write the UDF and UDTF functions (code omitted)

Package them as hive-functions-0.0.1-SNAPSHOT.jar.
In the hive home directory:
mkdir auxlib (create the directory)
Put hive-functions-0.0.1-SNAPSHOT.jar into that directory.
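Jars under auxlib are added to the classpath when the Hive CLI starts, so restart the CLI after copying the jar. To load it into the current session without a restart, add jar also works (path assumed from the step above):

hive (gmall)>
add jar /opt/module/hive/auxlib/hive-functions-0.0.1-SNAPSHOT.jar;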

Create permanent functions bound to the compiled Java classes:

hive (gmall)>
create function base_analizer as 'com.atguigu.udf.MyUDF';
hive (gmall)>
create function flat_analizer as 'com.atguigu.udtf.MyUDTF';
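To confirm the registration and smoke-test the UDF on one raw line (the 'mid' key is taken from the loading script in section 2.4; the exact output depends on the UDF implementation):

hive (gmall)>
describe function base_analizer;
select base_analizer(line,'mid') from ods_event_log where dt='2021-08-03' limit 1;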

2.4 DWD Event Table Loading Script

1) Create the script in the /opt/module/myscripts directory on hadoop103

  • [atguigu@hadoop103 myscripts]$ vim dwd_base_log.sh

Write the following into the script:

#!/bin/bash
#Accept the date to load as an argument. [ -n "$1" ] tests the string:
#it returns true when $1 is non-empty and false when $1 is empty.
if [ -n "$1" ]
then
        #use the date passed on the command line
        do_date=$1
else
        #default to yesterday
        do_date=$(date -d yesterday +%F)
fi

echo "=== log date: $do_date ==="

sql="
use gmall;

insert overwrite TABLE gmall.dwd_base_event_log PARTITION(dt='$do_date')
SELECT
base_analizer(line,'mid') mid_id, 
base_analizer(line,'uid') user_id, 
base_analizer(line,'vc') version_code, 
base_analizer(line,'vn') version_name, 
base_analizer(line,'l') lang,
base_analizer(line,'sr') source, 
base_analizer(line,'os') os, 
base_analizer(line,'ar') area, 
base_analizer(line,'md') model,
base_analizer(line,'ba') brand, 
base_analizer(line,'sv') sdk_version, 
base_analizer(line,'g') gmail, 
base_analizer(line,'hw') height_width,
base_analizer(line,'t') app_time, 
base_analizer(line,'nw') network,
base_analizer(line,'ln') lng, 
base_analizer(line,'la') lat, 
en event_name, 
ej event_json, 
base_analizer(line,'ts') server_time
FROM gmall.ods_event_log 
LATERAL VIEW flat_analizer(base_analizer(line,'et')) tmp as en,ej
WHERE dt='$do_date';
"
hive -e "$sql"
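flat_analizer is the UDTF here: LATERAL VIEW joins each input row to every row the UDTF emits, one (event_name, event_json) pair per event in the line's 'et' array, while base_analizer pulls out the shared header fields. For the LATERAL VIEW mechanics alone, the built-in explode behaves the same way on a literal array:

hive (gmall)>
select t.one, tmp.x
from (select 1 as one) t
lateral view explode(array('a','b')) tmp as x;
-- two output rows: (1, 'a') and (1, 'b'), one per array element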

2) Make the script executable

  • [atguigu@hadoop103 myscripts]$ chmod u+x dwd_base_log.sh

3) Run the script

  • [atguigu@hadoop103 myscripts]$ ./dwd_base_log.sh 2021-08-03


2.5 Deriving the DWD Event Tables

~1. Product click table

1) Create the table

hive (gmall)> 
drop table if exists dwd_display_log;
CREATE EXTERNAL TABLE dwd_display_log(
`mid_id` string,
`user_id` string,
`version_code` string,
`version_name` string,
`lang` string,
`source` string,
`os` string,
`area` string,
`model` string,
`brand` string,
`sdk_version` string,
`gmail` string,
`height_width` string,
`app_time` string,
`network` string,
`lng` string,
`lat` string,
`action` string,
`goodsid` string,
`place` string,
`extend1` string,
`category` string,
`server_time` string
)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_display_log/'
TBLPROPERTIES('parquet.compression'='lzo');
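
2) Load data. The loading statements for these per-event tables are not included in this excerpt. A sketch for the click table, assuming click events are tagged event_name='display' in dwd_base_event_log and their business fields sit under $.kv inside event_json; the other four tables would load the same way, filtering on their own event_name and extracting their own $.kv fields:

hive (gmall)>
insert overwrite table dwd_display_log
PARTITION (dt='2021-08-03')
select
    mid_id,
    user_id,
    version_code,
    version_name,
    lang,
    source,
    os,
    area,
    model,
    brand,
    sdk_version,
    gmail,
    height_width,
    app_time,
    network,
    lng,
    lat,
    get_json_object(event_json,'$.kv.action') action,     -- assumed kv layout
    get_json_object(event_json,'$.kv.goodsid') goodsid,
    get_json_object(event_json,'$.kv.place') place,
    get_json_object(event_json,'$.kv.extend1') extend1,
    get_json_object(event_json,'$.kv.category') category,
    server_time
from dwd_base_event_log
where dt='2021-08-03' and event_name='display';           -- assumed event tag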


~2. Product detail page table

1) Create the table

hive (gmall)> 
drop table if exists dwd_newsdetail_log;
CREATE EXTERNAL TABLE dwd_newsdetail_log(
`mid_id` string,
`user_id` string, 
`version_code` string, 
`version_name` string, 
`lang` string, 
`source` string, 
`os` string, 
`area` string, 
`model` string,
`brand` string, 
`sdk_version` string, 
`gmail` string, 
`height_width` string, 
`app_time` string,  
`network` string, 
`lng` string, 
`lat` string, 
`entry` string,
`action` string,
`goodsid` string,
`showtype` string,
`news_staytime` string,
`loading_time` string,
`type1` string,
`category` string,
`server_time` string)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_newsdetail_log/'
TBLPROPERTIES('parquet.compression'='lzo');

~3. Product list page table

1) Create the table

hive (gmall)> 
drop table if exists dwd_loading_log;
CREATE EXTERNAL TABLE dwd_loading_log(
`mid_id` string,
`user_id` string, 
`version_code` string, 
`version_name` string, 
`lang` string, 
`source` string, 
`os` string, 
`area` string, 
`model` string,
`brand` string, 
`sdk_version` string, 
`gmail` string,
`height_width` string,  
`app_time` string,
`network` string, 
`lng` string, 
`lat` string, 
`action` string,
`loading_time` string,
`loading_way` string,
`extend1` string,
`extend2` string,
`type` string,
`type1` string,
`server_time` string)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_loading_log/'
TBLPROPERTIES('parquet.compression'='lzo');

~4. Ad table

1) Create the table

hive (gmall)> 
drop table if exists dwd_ad_log;
CREATE EXTERNAL TABLE dwd_ad_log(
`mid_id` string,
`user_id` string, 
`version_code` string, 
`version_name` string, 
`lang` string, 
`source` string, 
`os` string, 
`area` string, 
`model` string,
`brand` string, 
`sdk_version` string, 
`gmail` string, 
`height_width` string,  
`app_time` string,
`network` string, 
`lng` string, 
`lat` string, 
`entry` string,
`action` string,
`content` string,
`detail` string,
`ad_source` string,
`behavior` string,
`newstype` string,
`show_style` string,
`server_time` string)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_ad_log/'
TBLPROPERTIES('parquet.compression'='lzo');

~5. Notification table

1) Create the table

hive (gmall)> 
drop table if exists dwd_notification_log;
CREATE EXTERNAL TABLE dwd_notification_log(
`mid_id` string,
`user_id` string, 
`version_code` string, 
`version_name` string, 
`lang` string,
`source` string, 
`os` string, 
`area` string, 
`model` string,
`brand` string, 
`sdk_version` string, 
`gmail` string, 
`height_width` string,  
`app_time` string,
`network` string, 
`lng` string, 
`lat` string, 
`action` string,
`noti_type` string,
`ap_time` string,
`content` string,
`server_time` string
)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/gmall/dwd/dwd_notification_log/'
TBLPROPERTIES('parquet.compression'='lzo');