Telecom Log Processing

Implementation Steps

Overall flow:

  1. Collect the data with Flume
  2. Land it in HDFS
  3. Create a Hive external table over the HDFS data
  4. Implement the project's business logic in HQL
  5. Export the results from HDFS to MySQL with Sqoop
  6. Visualize the data

How the Flume Components Work

Flume collects the logs in daily batches, and Hive partitions by day so that each day's data can be analyzed separately. To obtain the date, one option is to parse it out of the log file name; the other (recommended, and used in the zebra.conf below) is to have Flume record the collection date itself via a timestamp interceptor.

Create and edit the zebra.conf file:
a1.sources=r1
a1.channels=c1
a1.sinks=s1

# Configure the source: read files dropped into the spool directory
a1.sources.r1.type=spooldir
a1.sources.r1.spoolDir=/home/zebra
# The timestamp interceptor stamps each event with the collection time,
# which the sink needs in order to expand %Y-%m-%d in its path
a1.sources.r1.interceptors=i1
a1.sources.r1.interceptors.i1.type=timestamp

# Configure the sink: write plain-text files into a per-day HDFS directory
a1.sinks.s1.type=hdfs
a1.sinks.s1.hdfs.path=hdfs://hadoop01:9000/zebra/reportTime=%Y-%m-%d
a1.sinks.s1.hdfs.fileType=DataStream
# Roll files every 30 seconds; disable size- and count-based rolling
a1.sinks.s1.hdfs.rollInterval=30
a1.sinks.s1.hdfs.rollSize=0
a1.sinks.s1.hdfs.rollCount=0

# Configure the channel: buffer up to 1000 events in memory
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
# Hand the sink up to 100 events per transaction
a1.channels.c1.transactionCapacity=100

# Bind the source to the channel
a1.sources.r1.channels=c1
# Bind the sink to the channel (sinks take exactly one channel, hence the singular key)
a1.sinks.s1.channel=c1
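Note that the spooling-directory source expects /home/zebra to exist before the agent starts, so create it first if necessary:
mkdir -p /home/zebra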
Run the following command to start Flume:
../bin/flume-ng agent --name a1 -c ../conf/ -f ../data/zebra.conf -Dflume.root.logger=INFO,console
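Once the agent is running and log files are dropped into /home/zebra, you can confirm that data is landing in HDFS (assuming the local Hadoop client points at hadoop01):
hdfs dfs -ls /zebra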

Create the Database

Start the Hive metastore service:
hive --service metastore &
Start the Hive CLI and create the zebra database:
create database zebra;
use zebra;

Create Tables and Process the Data

Create the zebra table (an external table). Its location '/zebra' matches the Flume sink path, so each day's reportTime=... directory can be registered as a partition:
create external table zebra(a1 string, a2 string, a3 string, a4 string, a5 string, a6 string, a7 string, a8 string, a9 string, a10 string, a11 string, a12 string, a13 string, a14 string, a15 string, a16 string, a17 string, a18 string, a19 string, a20 string, a21 string, a22 string, a23 string, a24 string, a25 string, a26 string, a27 string, a28 string, a29 string, a30 string, a31 string, a32 string, a33 string, a34 string, a35 string, a36 string, a37 string, a38 string, a39 string, a40 string, a41 string, a42 string, a43 string, a44 string, a45 string, a46 string, a47 string, a48 string, a49 string, a50 string, a51 string, a52 string, a53 string, a54 string, a55 string, a56 string, a57 string, a58 string, a59 string, a60 string, a61 string, a62 string, a63 string, a64 string, a65 string, a66 string, a67 string, a68 string, a69 string, a70 string, a71 string, a72 string, a73 string, a74 string, a75 string, a76 string, a77 string) partitioned by (reportTime string) row format delimited fields terminated by '|' stored as textfile location '/zebra' ;
Inspect the table structure:
hive> desc zebra;
OK
a1                      string
a2                      string
a3                      string
a4                      string
a5                      string
a6                      string
a7                      string
a8                      string
a9                      string
a10                     string
a11                     string
a12                     string
a13                     string
a14                     string
a15                     string
a16                     string
a17                     string
a18                     string
a19                     string
a20                     string
a21                     string
a22                     string
a23                     string
a24                     string
a25                     string
a26                     string
a27                     string
a28                     string
a29                     string
a30                     string
a31                     string
a32                     string
a33                     string
a34                     string
a35                     string
a36                     string
a37                     string
a38                     string
a39                     string
a40                     string
a41                     string
a42                     string
a43                     string
a44                     string
a45                     string
a46                     string
a47                     string
a48                     string
a49                     string
a50                     string
a51                     string
a52                     string
a53                     string
a54                     string
a55                     string
a56                     string
a57                     string
a58                     string
a59                     string
a60                     string
a61                     string
a62                     string
a63                     string
a64                     string
a65                     string
a66                     string
a67                     string
a68                     string
a69                     string
a70                     string
a71                     string
a72                     string
a73                     string
a74                     string
a75                     string
a76                     string
a77                     string
reporttime              string

# Partition Information
# col_name              data_type               comment

reporttime              string
Time taken: 0.472 seconds, Fetched: 83 row(s)
Add a partition for a collected day:
alter table zebra add partition(reportTime='2022-06-10') location '/zebra/reportTime=2022-06-10';
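The reportTime=... directories were written by Flume directly, so the Hive metastore does not know about them until each is registered as above (or, since the directory names match the partition column, with msck repair table zebra;). Verify that the partition is visible:
show partitions zebra;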
Data Cleaning
Create the data-cleaning table:
create table dataclear(reportTime string, appType bigint, appSubType bigint, userIp string, userPort bigint, appServerIp string, appServerPort bigint, host string, cellid string, appTypeCode bigint, interruptType string, transStatus bigint, trafficUL bigint, trafficDL bigint, retranUL bigint, retranDL bigint, procdureStartTime bigint, procdureEndTime bigint) row format delimited fields terminated by '|';
Extract the fields of interest from the zebra table. The relevant fields in the source log format are:
Index  Field                        Type  Description
16     Cell ID                      byte  ECI of the cell the UE is in
18     App Type Code                byte  Service type code; see Appendix D, XDR type code definitions
23     App Sub-type                 byte  Application sub-type, identified per the group-defined rules
                                          in 《中国移动数据流量DPI识别能力规范》; sub-types the group has
                                          not defined are identified by each vendor's own DPI
26     USER_IP                      byte  IPv4 address of the end user; all F's if absent
28     User Port                    byte  Layer-4 port number of the user
30     App Server IP                byte  IPv4 address of the accessed server; all F's if absent
32     App Server Port              byte  Port of the accessed server
58     HOST                         char  Accessed domain name
19     ProcdureStartTime            long  Request start time
20     ProcdureEndTime              long  Request end time
22     App Type                     byte  Application category; see 《中国移动数据流量DPI识别能力规范》
33     UL Data                      byte  Uplink traffic
34     DL Data                      byte  Downlink traffic
39     RetranUL                     byte  Number of uplink TCP retransmitted packets
40     RetranDL                     byte  Number of downlink TCP retransmitted packets
54     HTTP/WAP transaction status  byte  HTTP/WAP 2.0-layer response code; see Appendix A, status codes

The index positions are 0-based while the zebra columns a1..a77 are 1-based, so the field at index n is column a(n+1) in the INSERT below (e.g. Cell ID at index 16 is a17).
insert overwrite table dataclear select concat(reportTime,' ','00:00:00'), a23, a24, a27, a29, a31, a33, a59, a17, a19, a68, a55, a34, a35, a40, a41, a20, a21 from zebra;
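A quick spot check that the field mapping lined up (an illustrative query, not from the original):
select reportTime, appType, host, trafficUL, trafficDL from dataclear limit 5;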
Create the business-logic table:
create table dataproc (reportTime string, appType bigint, appSubtype bigint, userIp string, userPort bigint, appServerIP string, appServerPort bigint,host string, cellid string, attempts bigint, accepts bigint, trafficUL bigint, trafficDL bigint, retranUL bigint, retranDL bigint, failCount bigint, transDelay bigint) row format delimited fields terminated by '|';
Derive its fields from dataclear according to the business rules:
insert overwrite table dataproc select reportTime, appType, appSubtype, userIp, userPort, appServerIP, appServerPort, host, if(cellid == '', "000000000", cellid), if(appTypeCode == 103,1,0), if(appTypeCode == 103 and find_in_set(transStatus,"10,11,12,13,14,15,32,33,34,35,36,37,38,48,49,50,51,52,53,54,55,199,200,201,202,203,204,205,206,302,304,306")!=0 and interruptType == 0,1,0), if(apptypeCode == 103, trafficUL, 0), if(apptypeCode == 103, trafficDL, 0),  if(apptypeCode == 103, retranUL, 0), if(apptypeCode == 103, retranDL, 0), if(appTypeCode == 103 and transStatus == 1 and interruptType == 0, 1,0),if(appTypeCode == 103, procdureEndTime - procdureStartTime, 0) from dataclear; 
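In these rules, appTypeCode 103 evidently marks HTTP records (consistent with the HTTP/WAP transaction-status field above and the D_H_HTTP_APPTYPE table below): attempts flags every HTTP record, accepts flags those that ended with a normal response code and no interruption, and transDelay is the request duration. A quick sanity check on the counters (an illustrative query, not from the original):
select sum(attempts) as total_attempts, sum(accepts) as total_accepts, sum(failCount) as total_fails from dataproc;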
Query the information of interest; the application-popularity table serves as the example:
create table D_H_HTTP_APPTYPE(hourid string, appType int, appSubtype int, attempts bigint, accepts bigint, succRatio double, trafficUL bigint, trafficDL bigint, totalTraffic bigint, retranUL bigint, retranDL bigint, retranTraffic bigint, failCount bigint, transDelay bigint) row format delimited fields terminated by '|';
Aggregate over the dataproc master table, grouping by hour and application type and summing the counters (for groups with no HTTP attempts, sum(attempts) is 0 and Hive yields NULL for succRatio):
insert overwrite table D_H_HTTP_APPTYPE select reporttime, apptype, appsubtype, sum(attempts), sum(accepts), round(sum(accepts) / sum(attempts), 2), sum(trafficUL), sum(trafficDL), sum(trafficUL) + sum(trafficDL), sum(retranUL), sum(retranDL), sum(retranUL) + sum(retranDL), sum(failCount), sum(transDelay) from dataproc group by reporttime, apptype, appsubtype;
Query the five most popular apps by total traffic (order by, unlike sort by, guarantees a single global ranking):
select hourid,apptype,sum(totalTraffic) as tt from D_H_HTTP_APPTYPE group by hourid,apptype order by tt desc limit 5;


Export the Data from HDFS into MySQL

Create the target table in MySQL (hourid is a datetime, which accepts the 'YYYY-MM-DD 00:00:00' strings built in Hive):

create table D_H_HTTP_APPTYPE(hourid datetime, appType int, appSubtype int, attempts bigint, accepts bigint, succRatio double, trafficUL bigint, trafficDL bigint, totalTraffic bigint, retranUL bigint, retranDL bigint, retranTraffic bigint, failCount bigint, transDelay bigint);

From Sqoop's bin directory, run the export:

sh sqoop export --connect jdbc:mysql://hadoop01:3306/zebra --username root --password root --table D_H_HTTP_APPTYPE --export-dir '/home/software/hive-2.3.7/warehouse/zebra.db/d_h_http_apptype/000000_0' -m 1 --fields-terminated-by '|'
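If the export succeeds, the results are queryable from MySQL; for example, mirroring the Hive top-five query (an illustrative query, not from the original):
select hourid, appType, sum(totalTraffic) as tt from D_H_HTTP_APPTYPE group by hourid, appType order by tt desc limit 5;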
