Processing User-Log JSON Strings

Scenario exercise (a small slice of a data-warehouse project): the user behavior log table (ods_user_logs) stores product-browsing events with the following schema:

    user_id,
    view_params,
    exts,
    ct

## Sample data

    user_id
    view_params: "order_condition - sort condition: 01 default, 02 by price, 03 by sales volume & order_type - 1 ascending, 0 descending & key - search keyword, e.g. Apple phones, Huawei, Three Squirrels, books (we assume product searches only here, i.e. target_type=04)"
    
    exts: {target_type - query type
           target_category - product category queried, e.g. 100 = mobile phones
           target_ids:[] - product IDs returned in the query results
          }
    
    ct - time the browse event was sent (epoch milliseconds)

   

    {
        "user_id":"u0001",
        "view_params":"order_condition=03&order_type=1&key=华为手机",
        "exts":{"target_type":"04","target_category":"100",
                "target_ids":"[\"1\",\"2\",\"3\"]"
               },
        "ct":"1567429965000"
    }
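
The queries below imply the storage layout: method 1 reads ods_user_logs through json_tuple(line, ...), i.e. one raw JSON string per row in a column named line, while methods 2 and 3 read the same JSON from a table named log_json whose column is named b. A minimal sketch of the former (the exact DDL is an assumption, only the line column is implied by the queries):

    -- Hypothetical DDL: one raw JSON record per row, as implied by json_tuple(line, ...)
    create table if not exists ods_user_logs (
        line string  -- the full JSON record shown above
    );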


## Requirements

Functions used (each is exercised on the sample record in the sketch after this list):

split(split(t.view_params,'&')[0],'=')[1]  split, a string-splitting function that can be nested,

from_unixtime(cast(t.ct/1000 as bigint),'yyyyMMddHH')  from_unixtime, which formats a Unix timestamp (in seconds) as a time string, and cast, which forces a type conversion,

lateral view json_tuple(line,"user_id","view_params","exts","ct") tmp
as user_id,view_params,exts,ct  json_tuple, which parses several top-level JSON fields in one pass, used together with the virtual table produced by lateral view,

get_json_object(get_json_object(b,'$.exts'),'$.target_type') as target_type  get_json_object, which extracts one JSON field per call and can be nested for nested JSON,

str_to_map(get_json_object(b,'$.view_params'),"&","=")['order_type'] as order_type  str_to_map(text, delimiter1, delimiter2)['key'], where delimiter1 splits the text into K-V pairs and delimiter2 splits each pair into key and value,

regexp_extract(get_json_object(b,'$.view_params'),'.*=(.*)\\&.*=(.*)\\&.*=(.*)',3) as key  regexp_extract, which returns the value captured by the given regex group,

regexp_replace(target_ids,'[\\[\\]"]','') target_ids  regexp_replace, which replaces every regex match in a string (the character class here strips the [, ] and " characters),

concat_ws(",",collect_set(target_id)) target_ids  concat_ws(separator, str1, str2, ...), which joins strings with the given separator (its first argument), and collect_set, which aggregates a column's values into a deduplicated array,

row_number() over(partition by target_category order by pv desc) rn  row_number, which assigns consecutive, gap-free ranks; partition by restarts the numbering per category, which the Top-N in step 3 requires
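
Before using them in full queries, here is a quick sanity-check sketch that exercises these functions on the sample record. Expected values are shown in trailing comments; the ct hour assumes an Asia/Shanghai (UTC+8) session timezone, since from_unixtime uses the Hive session timezone:

    select
        split(split(t.view_params,'&')[0],'=')[1]                              as order_condition, -- 03
        str_to_map(t.view_params,'&','=')['key']                               as key,             -- 华为手机
        get_json_object(t.exts,'$.target_type')                                as target_type,     -- 04
        regexp_replace(get_json_object(t.exts,'$.target_ids'),'[\\[\\]"]','')  as target_ids,      -- 1,2,3
        from_unixtime(cast(t.ct/1000 as bigint),'yyyyMMddHH')                  as ct               -- 2019090221
    from
    (
        select tmp.*
        from ods_user_logs
        lateral view json_tuple(line,'user_id','view_params','exts','ct') tmp
            as user_id, view_params, exts, ct
    ) t;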

### 1. Parse the fields above and load the result into dw_user_logs

 

insert into table dw_user_logs  -- prepend this line to whichever of the three selects below is used


Method 1 (json_tuple + nested split):
select
    user_id,
    split(split(t1.view_params,'&')[0],'=')[1] as order_condition,
    split(split(t1.view_params,'&')[1],'=')[1] as order_type,
    split(split(t1.view_params,'&')[2],'=')[1] as key,
    target_type,
    target_category,
    target_ids,
    from_unixtime(cast(t1.ct/1000 as bigint),'yyyyMMddHH') as ct
from
(
    select
        user_id,
        view_params,
        exts,
        ct
    from ods_user_logs
    lateral view json_tuple(line,"user_id","view_params","exts","ct") tmp
        as user_id, view_params, exts, ct
) t1
lateral view json_tuple(exts,"target_type","target_category","target_ids") tmp2
    as target_type, target_category, target_ids
;
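
For the sample record this should yield a single row like the one below (the two methods that follow produce the same row). The hour again assumes a UTC+8 session timezone; note that target_ids still carries its brackets and quotes at this stage and is only cleaned up in step 3:

    u0001  03  1  华为手机  04  100  ["1","2","3"]  2019090221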

Method 2 (get_json_object + str_to_map; here the raw table is log_json and its single JSON column is named b):
select
    get_json_object(b,'$.user_id') as user_id,
    str_to_map(get_json_object(b,'$.view_params'),"&","=")['order_condition'] as order_condition,
    str_to_map(get_json_object(b,'$.view_params'),"&","=")['order_type'] as order_type,
    str_to_map(get_json_object(b,'$.view_params'),"&","=")['key'] as key,
    get_json_object(get_json_object(b,'$.exts'),'$.target_type') as target_type,
    get_json_object(get_json_object(b,'$.exts'),'$.target_category') as target_category,
    get_json_object(get_json_object(b,'$.exts'),'$.target_ids') as target_ids,
    from_unixtime(cast(get_json_object(b,'$.ct')/1000 as bigint),'yyyyMMddHH') as ct
from log_json
;

Method 3 (get_json_object + regexp_extract with capture groups):
select
    get_json_object(b,'$.user_id') as user_id,
    regexp_extract(get_json_object(b,'$.view_params'),'.*=(.*)\\&.*=(.*)\\&.*=(.*)',1) as order_condition,
    regexp_extract(get_json_object(b,'$.view_params'),'.*=(.*)\\&.*=(.*)\\&.*=(.*)',2) as order_type,
    regexp_extract(get_json_object(b,'$.view_params'),'.*=(.*)\\&.*=(.*)\\&.*=(.*)',3) as key,
    get_json_object(get_json_object(b,'$.exts'),'$.target_type') as target_type,
    get_json_object(get_json_object(b,'$.exts'),'$.target_category') as target_category,
    get_json_object(get_json_object(b,'$.exts'),'$.target_ids') as target_ids,
    from_unixtime(cast(get_json_object(b,'$.ct')/1000 as bigint),'yyyyMMddHH') as ct
from log_json
;
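
All three methods yield the same columns, but they fail differently: the split and regexp_extract variants are positional and silently return wrong values if the app ever reorders the view_params parameters, while str_to_map looks values up by key. A minimal check of that order-independence (literal strings, no table needed):

    -- both return '1' for order_type, regardless of parameter order
    select str_to_map('order_condition=03&order_type=1&key=x','&','=')['order_type'];
    select str_to_map('key=x&order_type=1&order_condition=03','&','=')['order_type'];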

 


### 2. Hourly user-browsing statistics for each day

  Hour (yyyyMMddHH)    Category    UV (unique viewers)    PV (view count)
  2019091809           100         10000                  12000

  

select
    hour,
    target_category,
    count(distinct user_id) uv,
    count(user_id) pv
from
(
    select
        user_id,
        target_category,
        ct as hour
    from dw_user_logs
) t1
group by hour, target_category
;
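
Applied to the single sample row in dw_user_logs, this produces (with the UTC+8 hour from step 1):

    2019090221    100    1    1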

 

### 3. Top-N popular products per product category (ranked by view count)

  Category    Products
  100         1,2,3
  101         6,7,8
  102         10,11,12

select
    target_category,
    concat_ws(",",collect_set(target_id)) target_ids
from
(
    select
        target_category,
        target_id
    from
    (
        select
            target_category,
            row_number() over(partition by target_category order by pv desc) rn,
            target_id
        from
        (
            select
                target_category,
                target_id,
                count(target_id) pv
            from
            (
                select
                    target_category,
                    target_id
                from
                (
                    select
                        target_category,
                        regexp_replace(target_ids,'[\\[\\]"]','') target_ids
                    from dw_user_logs
                ) t0
                lateral view explode(split(target_ids,",")) tmp as target_id
            ) t1
            group by target_category, target_id
        ) t2
    ) t3
    where rn <= 3
) t4
group by target_category
;
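
Two caveats: collect_set deduplicates and gives no ordering guarantee, so the concatenated IDs are not necessarily in rank order; and with only the sample record every ID has pv = 1, so row_number breaks the ties arbitrarily and the result is simply:

    100    1,2,3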

 
 

### 4. Search keyword (hot-word) statistics

  Keyword    UV (users searching)    PV (search count)
  Mate30     100000                  250000
  apple11    99999                   199999

 
select
    key,
    count(distinct user_id) uv,
    count(key) pv
from dw_user_logs
group by key
;
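
The scenario states that key is only meaningful for product searches (target_type=04), so it may be safer to filter explicitly; a sketch, assuming dw_user_logs kept the target_type column from step 1:

    select
        key,
        count(distinct user_id) uv,
        count(key) pv
    from dw_user_logs
    where target_type = '04'
    group by key;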
 
 
