2020.11.18 Class Notes (Spark Data Analysis and Processing)

Use Spark to complete the following log-analysis project requirements:

Log Data Cleaning

Sample of the data file: test.log

2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451540&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0&actionEnd=1536150451668&actionName=startEval&actionTest=0&actionType=3&actionValue=272090&clientType=001_kgc&examType=001&ifEquipment=web&isFromContinue=false&skillIdCount=0&skillLevel=0&testType=jineng&userSID=B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3&userUID=272090&userUIP=1.180.18.157	GET	200	192.168.168.64	-	-	Apache-HttpClient/4.1.2 (java 1.5)
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451617&actionClient=Mozilla%2F5.0+%28Windows+NT+6.1%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F63.0.3239.132+Safari%2F537.36&actionEnd=1536150451705&actionName=viewQuestionAnalysis&actionTest=0&actionType=3&actionValue=272878&clientType=001_bdqn&examType=001&ifEquipment=web&questionId=32415&skillIdCount=0&userSID=EDEC6A9CF8220BE663A22BDD13E428E7.exam-tomcat-node3.exam-tomcat-node3&userUID=272878&userUIP=117.152.82.106	GET	200	192.168.168.63	-	-	Apache-HttpClient/4.1.2 (java 1.5)
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/learn?ifEquipment=005&userSID=sodwly%40163.com&actionValue=23710&actionBegin=1536150451762&actionName=StartLearn&userUIP=192.168.168.14&actionType=3&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F63.0.3239.26+Safari%2F537.36+Core%2F1.63.6726.400+QQBrowser%2F10.2.2265.400&actionEnd=1536150451766&userUID=719786&actionTest=0	GET	200	192.168.168.124	-	-	Apache-HttpClient/4.1.2 (java 1.5)
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?userUID=0&actionClient=Android&actionBegin=&actionEnd=&actionType=3&actionName=MinimizeAPP&actionValue=&actionPrepend=&actionTest=1&ifEquipment=mobile&projectName=ykt	GET	200	112.25.60.17	223.88.187.31	-	Mozilla/5.0 (Linux; U; Android 6.0; zh-cn; Redmi Note 4 Build/MRA58K) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451690&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F63.0.3239.132+Safari%2F537.36&actionEnd=1536150451783&actionName=viewQuestionAnalysis&actionTest=0&actionType=3&actionValue=261533&clientType=001_bdqn&examType=001&ifEquipment=web&questionId=35406&skillIdCount=0&userSID=E02CD8FDB92BD9A3D777B7425D4FE8A0.kgc-tiku-node1.kgc-tiku-node1&userUID=261533&userUIP=115.200.118.135	GET	200	192.168.168.79	-	-	Apache-HttpClient/4.1.2 (java 1.5)
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/downXml?ifEquipment=005&userSID=sodwly%40163.com&actionValue=ACCP8Y2SPSSM005&actionBegin=1536150451854&actionName=core_downXml&userUIP=192.168.168.14&actionType=3&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F63.0.3239.26+Safari%2F537.36+Core%2F1.63.6726.400+QQBrowser%2F10.2.2265.400&actionEnd=1536150451854&userUID=719786&actionTest=0	GET	200	192.168.168.148	-	-	Apache-HttpClient/4.1.2 (java 1.5)
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451711&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F63.0.3239.132+Safari%2F537.36&actionEnd=1536150451851&actionName=answerQuestion&actionTest=0&actionType=3&actionValue=240388&clientType=001_bdqn&examType=001&ifEquipment=web&isCorrect=true&questionIds=[J@6b153ba2&questionType=0&skillIdCount=0&userSID=A8757D6E5E651FC27C444FD43CFC8A60.kgc-tiku-node1.kgc-tiku-node1&userUID=240388&userUIP=118.250.183.100	GET	200	192.168.168.78	-	-	Apache-HttpClient/4.1.2 (java 1.5)
2018-09-04T20:27:31+08:00	http://datacenter.kgc.cn/logs/user?userUID=18157094&userSID=3oqc2v7thdd15ttkq6e7vac2t0&userUIP=61.135.161.35&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+Win64%3B+x64%3B+rv%3A61.0%29+Gecko%2F20100101+Firefox%2F61.0&actionBegin=1536150451&actionEnd=1536150451&actionType=3&actionPrepend=http%3A%2F%2Fwww.kgc.cn%2Fmy%2FjobOE.shtml&actionTest=1&ifEquipment=web&actionName=PlayOnlineTime&pic=32156&pre=31&playtime=60	GET	200	183.136.133.44	118.144.132.209	http://www.kgc.cn/my/jobOE.shtml	Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?userUID=0&actionClient=Android&actionBegin=&actionEnd=&actionType=3&actionName=MinimizeAPP&actionValue=&actionPrepend=&actionTest=1&ifEquipment=mobile&projectName=ykt	GET	200	112.25.60.17	223.88.187.31	-	Mozilla/5.0 (Linux; U; Android 6.0; zh-cn; Redmi Note 4 Build/MRA58K) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451867&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+Win64%3B+x64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F64.0.3282.140+Safari%2F537.36+Edge%2F17.17134&actionEnd=1536150451907&actionName=RegisteredExam&actionTest=0&actionType=11&actionValue=268254&clientType=001&examType=002&ifEquipment=web&userSID=FCB17F95F054758707952F545ECE09C4.kgc-tiku-node2.kgc-tiku-node2&userUID=268254&userUIP=117.136.0.153	GET	200	192.168.168.64	-	-	Apache-HttpClient/4.1.2 (java 1.5)

Read the log file and convert it into an RDD[Row]

Split each line on Tab
Filter out records with fewer than 8 fields

package nj.zb.kb09.project

import java.util.Properties

import org.apache.commons.lang.StringUtils
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

object DataClear {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName("clearDemo").getOrCreate()
    import spark.implicits._
    val sc: SparkContext = spark.sparkContext
    val linesRdd: RDD[String] = sc.textFile("in/test.log")

    // Split on Tab, keep only lines with exactly 8 fields, and build the RowRdd
    val rowRdd: RDD[Row] = linesRdd
      .map(x => x.split("\t"))
      .filter(x => x.length == 8)
      .map(x => Row(x(0).trim, x(1).trim, x(2).trim, x(3).trim, x(4).trim, x(5).trim, x(6).trim, x(7).trim))
    // Define the schema
    val schema = StructType(
      Array(
        StructField("event_time", StringType),
        StructField("url", StringType),
        StructField("method", StringType),
        StructField("status", StringType),
        StructField("sip", StringType),
        StructField("user_uip", StringType),
        StructField("action_prepend", StringType),
        StructField("action_client", StringType)
      )
    )
    // Build the DataFrame holding the raw data
    val orgDF: DataFrame = spark.createDataFrame(rowRdd,schema)
    //orgDF.show(10,false)

Clean the data:

Deduplicate on the first and second columns (event_time and url)
Filter out records whose status code is not 200
Filter out records with an empty event_time

// Deduplicate on the first two columns, keep only status 200, and drop rows with an empty event_time
val ds1: Dataset[Row] = orgDF
  .dropDuplicates("event_time","url")  // drop rows duplicated on the given columns; with no arguments, only fully identical rows are dropped
  .filter(x=>x(3)=="200")  // or: filter(x=>x.getAs[String]("status")=="200")
  .filter(x=>StringUtils.isNotEmpty(x(0).toString)) // or: filter(x=> !x(0).equals(""))
//ds1.show(10,false)   false means column contents are not truncated

Split the url field on "&" and "="

val dfDetail: Dataset[Row] = ds1.map(row => {
  val urlArray: Array[String] = row.getAs[String]("url").split("\\?")
  // row(1).toString.split("\\?")  split the url field on "?" to get the array urlArray
  var map: Map[String, String] = Map("params" -> null)  // define a placeholder Map
  
  if (urlArray.length == 2) {
    map = urlArray(1)  // take the element at index 1, e.g. actionBegin=1536150451540&userUID=272090
      .split("&")  // split the String on "&" into an Array[String], e.g. Array(actionBegin=1536150451540, userUID=272090)
      .map(x => x.split("="))    // split each element on "=", giving Array[Array[String]], e.g. Array(Array(actionBegin, 1536150451540), Array(userUID, 272090))
      .filter(_.length == 2)     // keep only the inner arrays of length 2
      .map(x => (x(0), x(1)))    // convert to key/value tuples, Array[(String, String)], e.g. Array((actionBegin,1536150451540), (userUID,272090))
      .toMap					 // collect into a Map[String, String], e.g. Map(userUID -> 272090, actionBegin -> 1536150451540)
  }
  // return as a tuple; note that Scala tuples are limited to 22 elements, do not exceed that
  (row.getAs[String]("event_time"),    // row.getAs[String]
    row.getAs[String]("method"),       // the field type must be declared explicitly, otherwise it is inferred as Nothing and the job fails
    row.getAs[String]("status"),
    map.getOrElse("actionBegin", ""),   // map.getOrElse(key, "")
    map.getOrElse("actionClient", ""),  // look up the value by key; default to "" when the key is absent
    map.getOrElse("actionEnd", ""),
    map.getOrElse("actionName", ""),
    map.getOrElse("actionTest", ""),
    map.getOrElse("actionType", ""),
    map.getOrElse("actionValue", ""),
    map.getOrElse("clientType", ""),
    map.getOrElse("examType", ""),
    map.getOrElse("ifEquipment", ""),
    map.getOrElse("isFromContinue", ""),
    map.getOrElse("skillIdCount", ""),
    map.getOrElse("skillLevel", ""),
    map.getOrElse("testType", ""),
    map.getOrElse("userSID", ""),
    map.getOrElse("userUID", ""),
    map.getOrElse("userUIP", "")
  )
}).toDF()   // convert to a DataFrame; no schema has been defined yet, column names can be passed to .toDF, otherwise defaults (_1, _2, ...) are used
  dfDetail.printSchema()
  dfDetail.show()
val detailRdd1: RDD[Row] = dfDetail.rdd  // get the RDD[Row] backing the DataFrame
// Define the DataFrame schema; fields must correspond one-to-one with the data and the types must match
val detailSchema: StructType = StructType(
  Array(
    StructField("event_time", StringType),
    StructField("method", StringType),
    StructField("status", StringType),
    StructField("actionBegin", StringType),
    StructField("actionClient", StringType),
    StructField("actionEnd", StringType),
    StructField("actionName", StringType),
    StructField("actionTest", StringType),
    StructField("actionType", StringType),
    StructField("actionValue", StringType),
    StructField("clientType", StringType),
    StructField("examType", StringType),
    StructField("ifEquipment", StringType),
    StructField("isFromContinue", StringType),
    StructField("skillIdCount", StringType),
    StructField("skillLevel", StringType),
    StructField("testType", StringType),
    StructField("userSID", StringType),
    StructField("userUID", StringType),
    StructField("userUIP", StringType)
  )
)
// Create the DataFrame from the data RDD plus the schema metadata
val detailDF: DataFrame = spark.createDataFrame(detailRdd1,detailSchema)
// detailDF.show(3,false)   print the first 3 rows; false means column contents are not truncated

Save the Data

Write the data into MySQL tables.
Add the MySQL dependency to pom.xml:

<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.43</version>
</dependency>

Connect to the database:

//Create the database in MySQL first: create database kb09db;
val url="jdbc:mysql://192.168.237.100:3306/kb09db"
val prop = new Properties()
prop.setProperty("user","root")    // MySQL user
prop.setProperty("password","ok")  // MySQL password
prop.setProperty("driver","com.mysql.jdbc.Driver")   // MySQL JDBC driver class (requires the mysql-connector-java dependency)
println("---------- Start writing to MySQL ----------")
//write.mode sets the save mode: overwrite replaces the table, append adds to it
detailDF.write.mode("overwrite").jdbc(url,"logdetail",prop)
println("---------- Finished writing to MySQL ----------")
orgDF.write.mode("overwrite").jdbc(url,"logorg",prop)

User Retention Analysis

Compute the next-day retention rate

Count the total number of new users for a given day, n
Intersect that day's new user IDs with the user IDs that log in on the following day to get m, the number of new users who log in the next day (the next-day retention count)
Next-day retention rate = m / n * 100%

1. Connect to the database and read the previously cleaned data:

package nj.zb.kb09.project

import java.text.SimpleDateFormat
import java.util.Properties

import org.apache.spark.SparkContext
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

object UserAnalysis {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local[*]").appName("clearDemo").getOrCreate()
    import spark.implicits._
    val sc: SparkContext = spark.sparkContext
    //create database kb09db;
    val url="jdbc:mysql://192.168.237.100:3306/kb09db"
    val prop = new Properties()
    prop.setProperty("user","root")
    prop.setProperty("password","ok")
    prop.setProperty("driver","com.mysql.jdbc.Driver")

    val detailDF: DataFrame = spark.read.jdbc(url,"logdetail",prop)
    //detailDF.show(1,false)
    //println(detailDF.count())
    //detailDF.createTempView("detailDF")  a temporary view could also be used here to run SQL statements; that approach is omitted

2. The event_time column has the form 2018-09-04T20:27:31+08:00; use a user-defined function to change the format

val changeTimeFun: UserDefinedFunction = spark.udf.register("changeTime", (x: String) => {
  //val str: String = x.substring(1,10)  substring takes a half-open interval [begin, end)
  // for x = "2018-09-04T20:27:31+08:00", x.substring(1,10) would yield "018-09-04", so substring(0,10) is used instead
  val time: Long = new SimpleDateFormat("yyyy-MM-dd").parse(x.substring(0, 10)).getTime
  time // return as a timestamp in milliseconds
})  // user-defined function registered as changeTime(x: String)

3. Build the user registration result set, i.e. filter the records whose action is registered

//all registration records ( userID   register_time   registration action )
val regisDF: DataFrame = detailDF
  .filter(detailDF("actionName") === "registered")
  .select("event_time", "actionName", "userUID")
  .withColumnRenamed("event_time","register_time")
  .withColumnRenamed("userUID","regUID")
//convert the time format to keep only the date, rename the columns, and use distinct to drop duplicate rows for the same user, date, and action
val registDF2: Dataset[Row] = regisDF.select($"regUID",
  changeTimeFun($"register_time").alias("register_date"),
  $"actionName").distinct()

4. Apply the same operations to the login records

//all login records ( userID   signin_time   login action )
val signinDF: DataFrame = detailDF
  .filter(detailDF("actionName") === "Signin")
  .select("event_time", "actionName", "userUID")
  .withColumnRenamed("event_time", "signin_time")
  .withColumnRenamed("userUID","sigUID")
val signinDF2: Dataset[Row] = signinDF.select($"sigUID",
  changeTimeFun($"signin_time").alias("signin_date"),
  $"actionName").distinct()

5. Next-day retention count of each day's registered users
Inner-join the registration records with the login records, then count the users whose login date is exactly one day after their registration date

//inner-join signinDF2 and registDF2 to pair each user's registration and login times
val joinDF2: DataFrame = registDF2.join(signinDF2,signinDF2("sigUID")===registDF2("regUID"))
val ds: DataFrame = joinDF2
  .filter(joinDF2("register_date") === joinDF2("signin_date") - 86400000)
  .groupBy("register_date")  //group by registration date to count the users who logged in the next day
  .count().withColumnRenamed("count","sigcount")  //rename to distinguish from the total registrations per day

6. Number of registered users per day

val frame1: DataFrame = registDF2.groupBy("register_date").count().withColumnRenamed("count","regcount")

7. Compute each day's next-day retention rate

val frame2: DataFrame = ds.join(frame1,"register_date")  //joining on a column name shared by both DataFrames uses equal values as the join condition and merges it into a single column
//frame2.show()
frame2.map(x=>(x.getAs[Long]("register_date")
  ,x.getAs[Long]("sigcount").toDouble/x.getAs[Long]("regcount")))
  .show()

Output:

+-------------+-----------------+
|           _1|               _2|
+-------------+-----------------+
|1535990400000|0.931758530183727|
+-------------+-----------------+

Compute the next-week retention rate

Next-week, 15-day, and 30-day retention are computed the same way as next-day retention; only the corresponding login count m changes.
For example, next-week (7-day) retention (see the sketch below):
1. Count the total number of new users for a given day, n
2. Intersect that day's new user IDs with the user IDs that log in 7 days later to get m (the 7-day retention count)
3. 7-day retention rate = m / n * 100%
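
A minimal sketch of the 7-day retention, reusing registDF2, signinDF2, and frame1 from the next-day calculation and changing only the date offset in the filter (7 days = 7 * 86400000 ms); joinWeekDF, weekDS, and sigcount7 are hypothetical names introduced here:

// reuse registDF2, signinDF2, frame1 from above; only the date offset changes
val joinWeekDF: DataFrame = registDF2.join(signinDF2, signinDF2("sigUID") === registDF2("regUID"))
val weekDS: DataFrame = joinWeekDF
  .filter(joinWeekDF("register_date") === joinWeekDF("signin_date") - 7 * 86400000L)  // login exactly 7 days after registration
  .groupBy("register_date")
  .count().withColumnRenamed("count", "sigcount7")
weekDS.join(frame1, "register_date")
  .map(x => (x.getAs[Long]("register_date"),
    x.getAs[Long]("sigcount7").toDouble / x.getAs[Long]("regcount")))  // 7-day retention rate per registration date
  .show()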

Active User Analysis

Statistical requirements (a sketch follows the list):

Read from the database and count the number of active users per day
Rule: only users with a course-viewing or course-buying action count as active
Deduplicate by UID
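
A minimal sketch reusing detailDF and changeTimeFun from the retention code above; "StartLearn" and "BuyCourse" are assumed to be the course-viewing and course-buying actionName values, so check the actual label dictionary before relying on them:

// assumption: "StartLearn" / "BuyCourse" mark course-viewing and course-buying actions
val activeDF: DataFrame = detailDF
  .filter(detailDF("actionName") === "StartLearn" || detailDF("actionName") === "BuyCourse")
  .select(changeTimeFun($"event_time").alias("active_date"), $"userUID")
  .distinct()          // deduplicate the same UID within the same day
val activeCountDF: DataFrame = activeDF
  .groupBy("active_date")
  .count().withColumnRenamed("count", "activeCount")  // active users per day
activeCountDF.show()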

Regional Analysis of Active Users

Statistical requirements (a sketch follows the list):

Read the raw log data
Parse the url to obtain the user's access IP
Use an IP geolocation library to map each IP to its province/city/district (via a UDF)
Compute the percentage of users in each region
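
A minimal sketch: detailDF already carries userUIP parsed out of the url, so it is reused here; ipToRegion is a hypothetical placeholder for whatever IP geolocation library is actually used and must be replaced with the real lookup call:

// hypothetical stub for the IP-to-region lookup; swap in the real IP library here
def ipToRegion(ip: String): String = "unknown"

val regionFun: UserDefinedFunction = spark.udf.register("ipToRegion", (ip: String) => ipToRegion(ip))
val regionDF: DataFrame = detailDF
  .filter($"userUIP" =!= "")
  .select($"userUID", regionFun($"userUIP").alias("region"))
  .distinct()                                           // one row per user and region
val totalUsers: Long = regionDF.count()
regionDF.groupBy("region").count()
  .withColumn("percent", $"count" / totalUsers * 100)   // share of users per region
  .show()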

User Browse-Depth Analysis

Statistical requirements (a sketch follows the list):

Read the log data, take one day as the unit, and use a depth value to describe how deeply a user browses
Count the number of users at each depth level; this also reflects how many people visit each url, so pages can be optimized accordingly to raise the site's conversion rate and user stickiness
Rule: the number of urls visited is used as the depth value
For example, one user views three pages today (depth 3)
One url is visited by 50 people today
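
A minimal sketch, assuming a DataFrame visitDF(visit_date, userUID, url) has been prepared beforehand (hypothetical: keep the raw url column alongside the parsed userUID and date during cleaning), with depth defined as the number of distinct urls a user visits in one day:

import org.apache.spark.sql.functions.countDistinct
// depth per user per day = number of distinct urls visited that day
val depthPerUser: DataFrame = visitDF
  .groupBy("visit_date", "userUID")
  .agg(countDistinct("url").alias("depth"))
// number of users at each depth level per day
depthPerUser.groupBy("visit_date", "depth").count()
  .orderBy("visit_date", "depth")
  .show()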

Handling null values in Spark:

// Example from another exercise: replace a null f_birthyear with the average year
import org.apache.spark.sql.functions.{col, when}
val dfUsers3 = dfUsersBirth.crossJoin(dfAvgAge)
  .withColumn("new_birthyear",
    when(col("f_birthyear").isNull, col("avg_year"))
      .otherwise(col("f_birthyear"))
  )
dfUsers3.show()