Using Hive and Sqoop to compute PV and UV for each hour of the day, and Storm to compute website UV (deduplication pattern)

[Case study] Using Hive and Sqoop to compute basic website metrics: PV and UV
1. PV: total page views
2. UV: deduplicated visitor count
-》[Requirement] compute PV and UV for each of the 24 hourly time slots of a day
-》create a partitioned table with multi-level partitions: day first, then hour
-》Step 1: analyze the requirement
-》Step 2: extract the time fields (day and hour)
-》cleanse the time format, e.g. from 2015-08-28 18:10:00 extract the date and the hour
-》keep only the useful fields: id, url, guid, trackTime
-》Step 3: analyze the data
-》write the SELECT statements
-》Step 4: export the result with Sqoop


-》Expected result:
date    hour    PV    UV


Create the database:

create database track_log2;

Create the source table:

create table yhd_source2(
    id              string,
    url             string,
    referer         string,
    keyword         string,
    type            string,
    guid            string,
    pageId          string,
    moduleId        string,
    linkId          string,
    attachedInfo    string,
    sessionId       string,
    trackerU        string,
    trackerType     string,
    ip              string,
    trackerSrc      string,
    cookie          string,
    orderCode       string,
    trackTime       string,
    endUserId       string,
    firstLink       string,
    sessionViewNo   string,
    productId       string,
    curMerchantId   string,
    provinceId      string,
    cityId          string,
    fee             string,
    edmActivity     string,
    edmEmail        string,
    edmJobId        string,
    ieVersion       string,
    platform        string,
    internalKeyword string,
    resultSum       string,
    currentPage     string,
    linkPosition    string,
    buttonPosition  string
)
row format delimited fields terminated by '\t'
stored as textfile;


(Editor tip: hold Shift+Alt and drag to edit the column list in block-selection mode.)


Load the data:

load data local inpath '/opt/datas/2015082818' into table yhd_source2;
load data local inpath '/opt/datas/2015082819' into table yhd_source2;

Partitioning approach: static partitions

create table yhd_part1(
    id string,
    url string,
    guid string
)
partitioned by (date string,hour string)
row format delimited fields terminated by '\t';

Load data into the partitions; the rows come from the cleansing table yhd_qingxi (created in the next step), filtered by day and hour:

insert into table yhd_part1 partition (date='20150828',hour='18') select id,url,guid from yhd_qingxi where date='28' and hour='18';
insert into table yhd_part1 partition (date='20150828',hour='19') select id,url,guid from yhd_qingxi where date='28' and hour='19';


select id,date,hour from yhd_part1 where date='20150828' and hour='18';


Create a cleansing table: clean the time field and keep only the parts we need (day and hour):

create table yhd_qingxi(
id string,
url string,
guid string,
date string,
hour string
)
row format delimited fields terminated by '\t';

Extract the day and hour substrings from trackTime:

insert into table yhd_qingxi select id,url,guid,substring(trackTime,9,2) date,substring(trackTime,12,2) hour from yhd_source2;
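For a trackTime value such as 2015-08-28 18:10:00, substring(trackTime,9,2) returns '28' (day) and substring(trackTime,12,2) returns '18' (hour), since Hive's substring() is 1-based. One way to spot-check the offsets on the loaded data:

select trackTime, substring(trackTime,9,2) date, substring(trackTime,12,2) hour from yhd_source2 limit 3;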

Partitioning approach: dynamic partitions

<property>
  <name>hive.exec.dynamic.partition</name>
  <value>true</value>
  <description>Whether or not to allow dynamic partitions in DML/DDL.</description>
</property>


-》the default is true, which means dynamic partitions are allowed

<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>strict</value>
  <description>In strict mode, the user must specify at least one static partition in case the user accidentally overwrites all partitions.</description>
</property>

-》set hive.exec.dynamic.partition.mode=nonstrict;   use non-strict mode so that every partition column may be dynamic

Create the table:

create table yhd_part2(
    id string,
    url string,
    guid string
)
partitioned by (date string,hour string)
row format delimited fields terminated by '\t';

Run the dynamic-partition insert:

insert into table yhd_part2 partition (date,hour) select * from yhd_qingxi;

-》instead of select * you can also list the columns explicitly, but all of them must be written out (see the equivalent statement below)
-》Hive resolves the table from the select, then matches the trailing date and hour values of each row to the partition columns to decide which partition the row goes into
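The equivalent insert with all columns spelled out; the last two columns of the select (date, hour) feed the partition columns in order:

insert into table yhd_part2 partition (date,hour) select id,url,guid,date,hour from yhd_qingxi;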

Compute PV and UV

PV:

select date,hour,count(url) PV from yhd_part1 group by date,hour;

-》group by the day and hour partition columns
-》result:
+-----------+-------+--------+--+
|   date    | hour  |   pv   |
+-----------+-------+--------+--+
| 20150828  | 18    | 64972  |
| 20150828  | 19    | 61162  |
+-----------+-------+--------+--+

UV:

select date,hour,count(distinct guid) UV from yhd_part1 group by date,hour; 

-》result:
+-----------+-------+--------+--+
|   date    | hour  |   uv   |
+-----------+-------+--------+--+
| 20150828  | 18    | 23938  |
| 20150828  | 19    | 22330  |
+-----------+-------+--------+--+


endUserId vs. guid
Visitor identity:
-》guest
-》member
Both guests and members always carry a guid
endUserId presumably applies only to members, i.e. visitors who logged in with an account, so UV is counted on guid
Combine PV and UV into a single statement:

create table if not exists result as select date,hour,count(url) PV ,count(distinct guid) UV from yhd_part1 group by date,hour; 


-》result:
+--------------+--------------+------------+------------+--+
| result.date  | result.hour  | result.pv  | result.uv  |
+--------------+--------------+------------+------------+--+
| 20150828     | 18           | 64972      | 23938      |
| 20150828     | 19           | 61162      | 22330      |
+--------------+--------------+------------+------------+--+


Export the result to a MySQL table

First create a table in MySQL to hold the result set:

create table if not exists save2(
    date varchar(30) not null,
    hour varchar(30) not null,
    pv varchar(30) not null,
    uv varchar(30) not null,
    primary key(date,hour)
);


Export to MySQL with Sqoop:

bin/sqoop export \
--connect \
jdbc:mysql://bigdata-senior02.ibeifeng.com:3306/sqoop \
--username root \
--password 123456 \
--table save2 \
--export-dir /user/hive/warehouse/track_log2.db/result \
--num-mappers 1 \
--input-fields-terminated-by '\001'

+----------+------+-------+-------+
| date     | hour | pv    | uv    |
+----------+------+-------+-------+
| 20150828 | 18   | 64972 | 23938 |
| 20150828 | 19   | 61162 | 22330 |
+----------+------+-------+-------+

Hive's default field delimiter is \001, which is why the export uses --input-fields-terminated-by '\001'.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Requirement analysis:

UV counting

Design analysis:

1. The traditional approach is to put every session_id into a Set, which deduplicates automatically, and read the UV from Set.size(); but this only works on a single machine (a sketch of this naive approach follows after this list)

2. A workable plan (analogous to a WordCount that counts the number of distinct words):

bolt1 does multi-threaded local aggregation via fieldsGrouping; the next-level bolt2 runs single-threaded, keeps session_id and its count in a Map, and by iterating over that Map can produce:

PV, UV, and visit depth (the number of page views per session_id); the topology wiring for this plan is sketched after the bolt code below

Statistics are produced per day, e.g.:

2014-05-01    UV count
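A minimal sketch of the single-machine approach from point 1 (the class name is hypothetical and is not part of the topology below); it only works while every event passes through this one object, because in a Storm topology each executor would hold its own Set:

import java.util.HashSet;
import java.util.Set;

// Naive single-process UV counter: the Set deduplicates session_ids automatically.
public class NaiveUvCounter {
    private final Set<String> sessions = new HashSet<String>();

    public void record(String sessionId) {
        sessions.add(sessionId);   // duplicates are silently dropped by the Set
    }

    public int uv() {
        return sessions.size();    // deduplicated visitor count
    }
}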

Deduplication requirement analysis:

Since we are deduplicating, the session data must be persisted somewhere:

1. In memory

using a Map data structure (fine for small and mid-sized companies)

2. In a distributed NoSQL database such as HBase (large companies)

Storm applies to a wide range of scenarios,

but the complexity it can handle is limited; most jobs are aggregation-style.

Doing full-blown data analysis in Storm is hard, but it is a good fit for the work that precedes analysis: preprocessing the raw data and writing it into a store.

Spout:

package base;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.IRichSpout;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

import java.util.Map;
import java.util.Queue;
import java.util.Random;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Created by Administrator on 2016/10/7.
 */
public class SourceSpout implements IRichSpout{

    /*
    Data source spout: pre-generates sample log lines (host \t session_id \t time)
     */
    private static final long serialVersionUID = 1L;
    Queue<String> queue = new ConcurrentLinkedQueue<String>();
    SpoutOutputCollector collector = null;
    String str = null;

    @Override
    public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
        try{
            this.collector = spoutOutputCollector;
            Random random = new Random();
            String[] hosts = {"www.taobao.com"};
            String[] session_id = { "ABYH6Y4V4SCVXTG6DPB4VH9U123", "XXYH6YCGFJYERTT834R52FDXV9U34", "BBYH61456FGHHJ7JL89RG5VV9UYU7",
                    "CYYH6Y2345GHI899OFG4V9U567", "VVVYH6Y4V4SFXZ56JIPDPB4V678" };
            String[] time = { "2014-01-07 08:40:50", "2014-01-07 08:40:51", "2014-01-07 08:40:52", "2014-01-07 08:40:53",
                    "2014-01-07 09:40:49", "2014-01-07 10:40:49", "2014-01-07 11:40:49", "2014-01-07 12:40:49" };
            for (int i = 0;i < 100; i++){
                queue.add(hosts[0]+"\t"+session_id[random.nextInt(5)]+"\t"+time[random.nextInt(8)]);

            }
        }catch (Exception e){
            e.printStackTrace();
        }
    }

    @Override
    public void close() {

    }

    @Override
    public void activate() {

    }

    @Override
    public void deactivate() {

    }

    @Override
    public void nextTuple() {
        if(queue.size() > 0){
            collector.emit(new Values(queue.poll()));
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    @Override
    public void ack(Object o) {
        System.out.println("spout ack:"+o.toString());

    }

    @Override
    public void fail(Object o) {
        System.out.println("spout fail:"+o.toString());

    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("log"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}


Formatting bolt:

package user_visit;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.IBasicBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import tools.DataFmt;

import java.util.Map;

/**
 * Created by Administrator on 2016/10/8.
 */
public class FmtLogBolt implements IBasicBolt{
    /*
    Compared with IRichBolt, IBasicBolt spares us from having to call ack and fail explicitly.
     */
    private static final long serialVersionUID = 1L;

    @Override
    public void prepare(Map map, TopologyContext topologyContext) {

    }

    String eachLog = null;
    @Override
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        eachLog = tuple.getString(0);
        if(eachLog != null && eachLog.length() > 0){
            // emit (date, session_id)
            basicOutputCollector.emit(new Values(DataFmt.getCountDate(eachLog.split("\t")[2],DataFmt.date_short),eachLog.split("\t")[1]));

        }
    }

    @Override
    public void cleanup() {

    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        // the downstream bolt retrieves these values by field name
        outputFieldsDeclarer.declare(new Fields("date","session_id"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}


Local aggregation bolt:

package user_visit;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.IBasicBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

import java.util.HashMap;
import java.util.Map;

/**
 * Created by Administrator on 2016/10/8.
 * Counts the PV of each session_id (local aggregation)
 */
public class DeepVisitBolt implements IBasicBolt{
    /*

     */
    private static final long serialVersionUID = 1L;

    @Override
    public void prepare(Map map, TopologyContext topologyContext) {

    }
    // map holding the locally aggregated counts
    Map<String, Integer> counts = new HashMap<String, Integer>();
    @Override
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        String dateString = tuple.getStringByField("date");
        String session_id = tuple.getStringByField("session_id");
        /*
        To deduplicate, the value we deduplicate on (date + session_id) must be part of the map key.
         */
        Integer count = counts.get(dateString+"_"+session_id);
        if(count == null){
            count = 0;
        }
        count++;
        counts.put(dateString+"_"+session_id, count);
        // emit the local subtotal so that the next-level bolt can compute the global total
        basicOutputCollector.emit(new Values(dateString+"_"+session_id, count));


    }

    @Override
    public void cleanup() {

    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("date_session_id", "count"));

    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}


Global aggregation bolt:
 

package user_visit;

import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.FailedException;
import backtype.storm.topology.IBasicBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Tuple;
import tools.DataFmt;

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/**
 * Created by Administrator on 2016/10/6.
 */
public class UVSumBolt implements IBasicBolt{

    private static final long serialVersionUID = 1L;
    String cur_date = null;
    long beginTime = System.currentTimeMillis();
    long endTime = 0;

    Map<String, Integer> counts = new HashMap<String, Integer>();
    @Override
    public void prepare(Map map, TopologyContext topologyContext) {
        cur_date = DataFmt.getCountDate("2014-01-07", DataFmt.date_short);
    }


    @Override
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {

        try{
            endTime = System.currentTimeMillis();
            long PV = 0;// total page views
            long UV = 0;// deduplicated visitor count
            String dateSession_id = tuple.getString(0);
            Integer countInteger = tuple.getInteger(1);

            // if the record belongs to a later day than cur_date, roll over to the new day and clear the map
            if(!dateSession_id.startsWith(cur_date) && DataFmt.parseDate(dateSession_id.split("_")[0]).after(DataFmt.parseDate(cur_date))){
                cur_date = dateSession_id.split("_")[0];
                counts.clear();
            }


            counts.put(dateSession_id, countInteger);


            if(endTime - beginTime >= 2000){

                // iterate over the keys of the current day: number of keys = UV, sum of their counts = PV
                Iterator<String> i2 = counts.keySet().iterator();
                while(i2.hasNext()){
                    String key = i2.next();
                    if(key != null){
                        if(key.startsWith(cur_date)){
                            UV ++;
                            PV += counts.get(key);
                        }
                    }
                }
                System.out.println("PV=" + PV + "; UV=" + UV);


            }

        }catch (Exception e){
            throw new FailedException("SumBolt fail!");
        }


    }

    @Override
    public void cleanup() {

    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {

    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
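The notes stop at the individual bolts, so here is a minimal local-mode wiring sketch that follows the plan above (the driver class name and component IDs are assumptions; the spout and bolt classes are the ones listed earlier). fieldsGrouping routes all tuples of one (date, session_id) to the same DeepVisitBolt task for local aggregation, and a single UVSumBolt executor computes the global totals:

package user_visit;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;
import base.SourceSpout;

public class UVTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // raw log lines: host \t session_id \t time
        builder.setSpout("spout", new SourceSpout(), 1);

        // parse each line into (date, session_id)
        builder.setBolt("fmt-log", new FmtLogBolt(), 4)
               .shuffleGrouping("spout");

        // local aggregation: tuples with the same (date, session_id) go to the same task
        builder.setBolt("deep-visit", new DeepVisitBolt(), 4)
               .fieldsGrouping("fmt-log", new Fields("date", "session_id"));

        // single-threaded global PV/UV aggregation
        builder.setBolt("uv-sum", new UVSumBolt(), 1)
               .shuffleGrouping("deep-visit");

        Config conf = new Config();
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("uv-topology", conf, builder.createTopology());
    }
}

Once tuples start flowing, UVSumBolt prints lines such as PV=...; UV=... to stdout after the first two seconds.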

 
