Zhuuu_ZZ: Spark Project for Log Data Analysis and Processing

I. Project Preparation

  • The data to be analyzed is shown below
    (screenshot of the raw log data)
  • Log data field dictionary
    (screenshot of the field dictionary)
    Click the link if you need the project materials
    Link: project materials. Extraction code: 599q

II. Project Requirements

  • Use Spark to complete the following log-analysis requirements:
    • Log data cleaning
    • User retention analysis
    • Active user analysis

III. Project Implementation

1. Data Cleaning

  • Read the log file and convert it to RDD[Row]
    • Split each line on the tab character
    • Filter out records with fewer than 8 fields
  • Clean the data
    • Deduplicate on the first and second columns
    • Filter out rows whose status code is not 200
    • Filter out rows with an empty event_time
    • Split the url on "&" and "="
  • Save the data
    • Write the result into MySQL tables

Log Field Splitting Walkthrough

// Log field names:
event_time
url
method
status
sip
user_uip
action_prepend
action_client
  • Splitting a sample log line, field by field
2018-09-04T20:27:31+08:00	http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451540&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0&actionEnd=1536150451668&actionName=startEval&actionTest=0&actionType=3&actionValue=272090&clientType=001_kgc&examType=001&ifEquipment=web&isFromContinue=false&skillIdCount=0&skillLevel=0&testType=jineng&userSID=B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3&userUID=272090&userUIP=1.180.18.157	GET	200	192.168.168.64	-	-   Apache-HttpClient/4.1.2 (java 1.5)

// First split on \t and give each column its field name:
event_time : 2018-09-04T20:27:31+08:00	
url : http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451540&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0&actionEnd=1536150451668&actionName=startEval&actionTest=0&actionType=3&actionValue=272090&clientType=001_kgc&examType=001&ifEquipment=web&isFromContinue=false&skillIdCount=0&skillLevel=0&testType=jineng&userSID=B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3&userUID=272090&userUIP=1.180.18.157	
method : GET	
status : 200	
sip : 192.168.168.64	
user_uip : -	
action_prepend : -   
action_client : Apache-HttpClient/4.1.2 (java 1.5)

// Next, split the url column on ?:
http://datacenter.bdqn.cn/logs/user?actionBegin=1536150451540&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0&actionEnd=1536150451668&actionName=startEval&actionTest=0&actionType=3&actionValue=272090&clientType=001_kgc&examType=001&ifEquipment=web&isFromContinue=false&skillIdCount=0&skillLevel=0&testType=jineng&userSID=B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3&userUID=272090&userUIP=1.180.18.157	

http://datacenter.bdqn.cn/logs/user
actionBegin=1536150451540&actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0&actionEnd=1536150451668&actionName=startEval&actionTest=0&actionType=3&actionValue=272090&clientType=001_kgc&examType=001&ifEquipment=web&isFromContinue=false&skillIdCount=0&skillLevel=0&testType=jineng&userSID=B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3&userUID=272090&userUIP=1.180.18.157	

// Then split the second element of that array on &:
actionBegin=1536150451540
actionClient=Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0
actionEnd=1536150451668
actionName=startEval
actionTest=0
actionType=3
actionValue=272090
clientType=001_kgc
examType=001
ifEquipment=web
isFromContinue=false
skillIdCount=0
skillLevel=0
testType=jineng
userSID=B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3
userUID=272090
userUIP=1.180.18.157	


// Then split each pair on =:
actionBegin   1536150451540
actionClient   Mozilla%2F5.0+%28Windows+NT+10.0%3B+WOW64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F58.0.3029.110+Safari%2F537.36+SE+2.X+MetaSr+1.0
actionEnd   1536150451668
actionName   startEval
actionTest   0
actionType   3
actionValue   272090
clientType   001_kgc
examType   001
ifEquipment   web
isFromContinue   false
skillIdCount   0
skillLevel   0
testType   jineng
userSID   B842B843AE317425D53D0C567A903EF7.exam-tomcat-node3.exam-tomcat-node3
userUID   272090
userUIP   1.180.18.157


// Take the first element of each resulting array as the key:
actionBegin
actionClient
actionEnd
actionName
actionTest
actionType
actionValue
clientType
examType
ifEquipment
isFromContinue
skillIdCount
skillLevel
testType
userSID
userUID
userUIP

// Merge all the fields:
event_time
url
actionBegin
actionClient
actionEnd
actionName
actionTest
actionType
actionValue
clientType
examType
ifEquipment
isFromContinue
skillIdCount
skillLevel
testType
userSID
userUID
userUIP
method
status 
sip
user_uip
action_prepend
action_client
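
As a quick standalone check of the walkthrough above, here is a minimal plain-Scala sketch that parses the query string of one url into a Map. The sample values come from the log line above; the name sampleUrl is only for illustration, and the Spark program below performs the same transformation inside a map over rows.

// Minimal sketch: turn the query string of one url into a Map[String, String]
val sampleUrl = "http://datacenter.bdqn.cn/logs/user?actionName=startEval&userUID=272090&userUIP=1.180.18.157"
val parts = sampleUrl.split("\\?")          // Array(base url, query string)
val params: Map[String, String] =
  if (parts.length == 2)
    parts(1).split("&")                     // "key=value" pairs
      .map(_.split("="))                    // Array(key, value)
      .filter(_.length == 2)                // drop malformed pairs
      .map(kv => kv(0) -> kv(1))
      .toMap
  else Map.empty
println(params.getOrElse("userUID", ""))    // prints 272090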

Developing the Program in IDEA

import java.util.Properties

import org.apache.commons.lang.StringUtils
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

object DataClear {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local[*]")
      .appName("DataClear").getOrCreate()
    val sc: SparkContext = spark.sparkContext
     import spark.implicits._
    // Load the data
    val lineRDD: RDD[String] = sc.textFile("in/project/test.log")

    // Split on tab, keep only records with exactly 8 fields, and give each column a name
    val RowRDD: RDD[Row] = lineRDD.map(x => x.split("\t")).filter(x => x.length == 8).map(x => Row(x(0), x(1).trim,
      x(2).trim, x(3).trim, x(4).trim, x(5).trim, x(6).trim, x(7).trim))
    val schema = StructType(
      Array(
        StructField("event_time", StringType),
        StructField("url", StringType),
        StructField("method", StringType),
        StructField("status", StringType),
        StructField("sip", StringType),
        StructField("user_uip", StringType),
        StructField("action_prepend", StringType),
        StructField("action_client", StringType)
      )
    )
    val orgDF: DataFrame = spark.createDataFrame(RowRDD,schema)
//   orgDF.printSchema()
//    orgDF.show()

    // Deduplicate on the first two columns, drop rows whose status code is not 200, drop rows with an empty event_time
    val ds1: Dataset[Row] = orgDF
      .dropDuplicates("event_time", "url")
      .filter(x => x(3) == "200") // each Row comes in; keep it only if the column at index 3 equals "200"
      // .filter(x => x(0).equals("") == false)  // equivalent to the line below
      .filter(x => StringUtils.isNotEmpty(x(0).toString))

    // Split the url on "&" and "="
    val detailDF: DataFrame = ds1.map(row => {
      val strings: Array[String] = row.getAs[String]("url").split("\\?") // for each Row, take the url column and split it into an array
      var map: Map[String, String] = Map("params" -> "null")
      if (strings.length == 2) {
        // strings(1).split("&").toString.split("=") // wrong: toString on an array yields its address string, so after splitting on "="
        //   .filter(_.length == 2).map(x => (x(0), x(1))).toMap // no element has length 2 and the resulting Map is empty
        val str: Array[String] = strings(1).split("&")
        map = str.map(x => x.split("=")) // this is Array.map, not the Spark operator; it splits each "key=value" pair
          .filter(x => x.length == 2).map(x => (x(0), x(1))).toMap // turn each (key, value) tuple into a Map entry
      }
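      // Note: only 22 fields are carried in the tuple below (Scala tuples are limited to 22 elements),
      // so url, skillIdCount and skillLevel from the field list above are not included here.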
      (row.getAs[String]("event_time"),
        map.getOrElse("actionBegin", ""),
        map.getOrElse("actionClient", ""),
        map.getOrElse("actionEnd", ""),
        map.getOrElse("actionName", ""),
        map.getOrElse("actionTest", ""),
        map.getOrElse("actionType", ""),
        map.getOrElse("actionValue", ""),
        map.getOrElse("clientType", ""),
        map.getOrElse("examType", ""),
        map.getOrElse("ifEquipment", ""),
        map.getOrElse("isFromContinue", ""),
        map.getOrElse("testType", ""),
        map.getOrElse("userSID", ""),
        map.getOrElse("userUID", ""),
        map.getOrElse("userUIP", ""),
        row.getAs[String]("method"),
        row.getAs[String]("status"),
        row.getAs[String]("sip"),
        row.getAs[String]("user_uip"),
        row.getAs[String]("action_prepend"),
        row.getAs[String]("action_client"))
    }).toDF(
      "event_time",
      "actionBegin",
      "actionClient",
      "actionEnd",
      "actionName",
      "actionTest",
      "actionType",
      "actionValue",
      "clientType",
      "examType",
      "ifEquipment",
      "isFromContinue",
      "testType",
      "userSID",
      "userUID",
      "userUIP",
      "method",
      "status",
      "sip",
      "user_uip",
      "action_prepend",
      "action_client"
    )
    detailDF.show(2,false)

    // Write the data into MySQL tables
    val url = "jdbc:mysql://192.168.198.201:3306/test"
    val prop = new Properties()
    prop.setProperty("user", "root")
    prop.setProperty("password", "ok")
    prop.setProperty("driver", "com.mysql.jdbc.Driver")

    detailDF.write.mode("overwrite").jdbc(url,"detailDF",prop)
    orgDF.write.mode("overwrite").jdbc(url,"orgDF",prop)

  }
}

2. User Retention Analysis

  • Compute the next-day retention rate

    • Count the total number of users who registered that day, n
    • Intersect that day's new user IDs with the user IDs that sign in on the following day to get m, the number of new users who sign in the next day (the next-day retained users)
    • Retention rate = m / n * 100% (see the worked example right after this list)
  • Compute the next-week retention rate (same idea, with a seven-day offset)
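
As a concrete example of the next-day formula, using the numbers from the output shown after the code below: 381 users registered on 2018-09-04 and 355 of them signed in on 2018-09-05, so the next-day retention rate is 355 / 381 ≈ 0.9318, about 93.2%.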

package project


import java.text.SimpleDateFormat
import java.util.Properties

import org.apache.spark.SparkContext
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

object UserAnalysis {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().appName("useranalysis").master("local[*]").getOrCreate()
    val sc: SparkContext = spark.sparkContext
    import spark.implicits._

    // Read the data back from MySQL
    val url="jdbc:mysql://192.168.198.201:3306/test"
    val user="root"
    val password="ok"
    val driver="com.mysql.jdbc.Driver"
    val prop=new Properties()
    prop.setProperty("user",user)
    prop.setProperty("password",password)
    prop.setProperty("driver",driver)

    val detailDF: DataFrame = spark.read.jdbc(url,"detailDF",prop)
//   detailDF.printSchema()
//    detailDF.show()

    // UDF that truncates event_time to its date (yyyy-MM-dd) and converts it to an epoch timestamp in milliseconds, which is easier to do arithmetic on
    val TimeFun: UserDefinedFunction = spark.udf.register("time", (x: String) => {
      val time: Long = new SimpleDateFormat("yyyy-MM-dd").parse(x.substring(0, 10)).getTime
      time
    })

    // Select registration events
    val registDS: Dataset[Row] = detailDF
      .filter(detailDF("actionName") === "Registered")
      .withColumnRenamed("event_time", "regist_time")
      .select("userUID", "regist_time")
      .withColumnRenamed("userUID", "registUID")
    // Select sign-in events
    val signDS: Dataset[Row] = detailDF
      .filter("actionName='Signin'")
      .withColumnRenamed("event_time", "sign_time")
      .select("userUID", "sign_time")
      .withColumnRenamed("userUID", "signUID")

    // Apply the UDF and drop duplicate records
    val registDS1: Dataset[Row] = registDS.select(registDS("registUID"), TimeFun(registDS("regist_time")).as("regist_time")).distinct()
    val signDS1: Dataset[Row] = signDS.select(signDS("signUID"), TimeFun(signDS("sign_time")).as("sign_time")).distinct()
    // Join registrations with sign-ins on UID to get their intersection
    val joinDF: DataFrame = registDS1.join(signDS1, registDS1("registUID") === signDS1("signUID")) // use an inner join: a left join would leave nulls in sign_time that distort the counts below
//   joinDF.show(3,false)

    println("Next-day retention rate")
    // Keep pairs where the sign-in is exactly one day after registration, then group by time to count next-day sign-ins per registration day
    val daysignDF: DataFrame = joinDF.filter(joinDF("sign_time") - joinDF("regist_time") === 86400000)
      .groupBy("sign_time")  // grouping by regist_time would work too: the filter above fixes the one-day offset, so each sign_time corresponds to one regist_time
                             // and counts the same next-day sign-ins; the join for joinCountDF below could then use regist_time directly as the join key
      .count().withColumnRenamed("count","sign_count")

    // Group registrations by date to get the number of registrations per day
    val dayregistDF: DataFrame = registDS1.groupBy("regist_time").count().withColumnRenamed("count","regist_count")

    // Join the two DataFrames on a one-day time difference
    val joinCountDF: DataFrame = dayregistDF.join(daysignDF, daysignDF("sign_time") - dayregistDF("regist_time") === 86400000)
    // Compute the next-day retention rate
    val dayKeepDF: DataFrame = joinCountDF.map(x => (x.getAs[Long]("regist_time"),
      x.getAs[Long]("regist_count"),
      x(3).toString.toLong,
      x(3).toString.toDouble / x(1).toString.toDouble
    )
    ).toDF("regist_time", "regist_count", "sign_count", "oneDaykeep")
  dayKeepDF.show()

    println("Next-week retention rate")
    // Keep pairs where the sign-in is exactly seven days after registration, then group by time to count next-week sign-ins per registration day
    val weeksignDF: DataFrame = joinDF.filter(joinDF("sign_time") - joinDF("regist_time") === 86400000 * 7)
      .groupBy("sign_time")  // as above, grouping by regist_time would be equivalent
      .count().withColumnRenamed("count","sign_count")

    // Group registrations by date to get the number of registrations per day
    val dayregistDF1: DataFrame = registDS1.groupBy("regist_time").count().withColumnRenamed("count","regist_count")

    // Join the two DataFrames on a seven-day time difference
    val joinCountDF1: DataFrame = dayregistDF1.join(weeksignDF, weeksignDF("sign_time") - dayregistDF1("regist_time") === 86400000 * 7)
    // Compute the next-week retention rate
    val weekKeepDF: DataFrame = joinCountDF1.map(x => (x.getAs[Long]("regist_time"),
      x.getAs[Long]("regist_count"),
      x(3).toString.toLong,
      x(3).toString.toDouble / x(1).toString.toDouble
    )
    ).toDF("regist_time", "regist_count", "sign_count", "weekkeep")
    weekKeepDF.show()

    // Write the results to the database
    dayKeepDF.write.mode("append").jdbc(url,"dayKeepDF",prop)
    weekKeepDF.write.mode("append").jdbc(url,"weekKeepDF",prop)
  }
}


/*
Next-day retention rate
+-------------+------------+----------+-----------------+
|  regist_time|regist_count|sign_count|       oneDaykeep|
+-------------+------------+----------+-----------------+
|1535990400000|         381|       355|0.931758530183727|
+-------------+------------+----------+-----------------+

Next-week retention rate
+-----------+------------+----------+--------+
|regist_time|regist_count|sign_count|weekkeep|
+-----------+------------+----------+--------+
+-----------+------------+----------+--------+
*/

3. Active User Analysis

  • Analysis requirements
    • Read from the database and count the number of active users per day
    • Rule: only users who watched a course or bought a course count as active
    • Deduplicate by UID
    println("Daily active user count")
    // detailDF.show(3,false)

    val splitTime: UserDefinedFunction = spark.udf.register("splitTime", (x: String) => {
      x.substring(0, 10)
    })
    // Approach 1: dropDuplicates
    val activeDF: DataFrame = detailDF
      .filter("actionName in ('StartLearn','BuyCourse')")
      // .filter(detailDF("actionName").isin("StartLearn","BuyCourse"))
      // .filter($"actionName".isin("StartLearn","BuyCourse"))
      .select($"userUID", splitTime(detailDF("event_time")).as("active_time"), $"actionName")
      .dropDuplicates("userUID", "active_time")
      .groupBy("active_time")
      .count()
      
    activeDF.show()
    // Approach 2: distinct
    detailDF
  .filter($"actionName" === "StartLearn" || $"actionName" === "BuyCourse")
  .map(x => (x.getAs[String]("userUID"), x.getAs[String]("event_time").substring(0, 10)))
  .withColumnRenamed("_2", "date")
  .distinct()
  .groupBy("date")
  .count()
  .orderBy("date")
  .show()

    // Write to MySQL
    activeDF.write.mode("append").jdbc(url,"activeDF",prop)
/*
+-----------+-----+
|active_time|count|
+-----------+-----+
| 2018-09-04|  275|
| 2018-09-05|  255|
+-----------+-----+

+----------+-----+
|      date|count|
+----------+-----+
|2018-09-04|  275|
|2018-09-05|  255|
+----------+-----+
*/
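
For reference, here is a minimal Spark SQL sketch of the same daily-active-user statistic; it assumes detailDF has been registered as a temporary view (the view name detail is only for illustration):

    detailDF.createOrReplaceTempView("detail")
    spark.sql(
      """
        |SELECT substring(event_time, 1, 10) AS active_time,
        |       count(DISTINCT userUID)      AS active_users
        |FROM detail
        |WHERE actionName IN ('StartLearn', 'BuyCourse')
        |GROUP BY substring(event_time, 1, 10)
        |ORDER BY active_time
      """.stripMargin).show()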

IV. Project Extension: Processing and Analyzing Complex JSON Logs

  • Log details
    (screenshot of the raw JSON log; two sample lines are shown as comments below)

Spark-Shell Test Environment

// The data to be analyzed
// 1593136280858|{"cm":{"ln":"-55.0","sv":"V2.9.6","os":"8.0.4","g":"C6816QZ0@gmail.com","mid":"489","nw":"3G","l":"es","vc":"4","hw":"640*960","ar":"MX","uid":"489","t":"1593123253541","la":"5.2","md":"sumsung-18","vn":"1.3.4","ba":"Sumsung","sr":"I"},"ap":"app","et":[{"ett":"1593050051366","en":"loading","kv":{"extend2":"","loading_time":"14","action":"3","extend1":"","type":"2","type1":"201","loading_way":"1"}},{"ett":"1593108791764","en":"ad","kv":{"activityId":"1","displayMills":"78522","entry":"1","action":"1","contentType":"0"}},{"ett":"1593111271266","en":"notification","kv":{"ap_time":"1593097087883","action":"1","type":"1","content":""}},{"ett":"1593066033562","en":"active_background","kv":{"active_source":"3"}},{"ett":"1593135644347","en":"comment","kv":{"p_comment_id":1,"addtime":"1593097573725","praise_count":973,"other_id":5,"comment_id":9,"reply_count":40,"userid":7,"content":"辑赤蹲慰鸽抿肘捎"}}]}
// 1593136280858|{"cm":{"ln":"-114.9","sv":"V2.7.8","os":"8.0.4","g":"NW0S962J@gmail.com","mid":"490","nw":"3G","l":"pt","vc":"8","hw":"640*1136","ar":"MX","uid":"490","t":"1593121224789","la":"-44.4","md":"Huawei-8","vn":"1.0.1","ba":"Huawei","sr":"O"},"ap":"app","et":[{"ett":"1593063223807","en":"loading","kv":{"extend2":"","loading_time":"0","action":"3","extend1":"","type":"1","type1":"102","loading_way":"1"}},{"ett":"1593095105466","en":"ad","kv":{"activityId":"1","displayMills":"1966","entry":"3","action":"2","contentType":"0"}},{"ett":"1593051718208","en":"notification","kv":{"ap_time":"1593095336265","action":"2","type":"3","content":""}},{"ett":"1593100021275","en":"comment","kv":{"p_comment_id":4,"addtime":"1593098946009","praise_count":220,"other_id":4,"comment_id":9,"reply_count":151,"userid":4,"content":"抄应螟皮釉倔掉汉蛋蕾街羡晶"}},{"ett":"1593105344120","en":"praise","kv":{"target_id":9,"id":7,"type":1,"add_time":"1593098545976","userid":8}}]}

// Read the JSON file
val fileRDD=sc.textFile("hdfs://192.168.198.201:9000/zhu/op.log")

// Split each line on | and convert it into a pair
val jsonStrRDD=fileRDD.map(x=>x.split('|')).map(x=>(x(0),x(1)))  // '|' in single quotes is a Char; in double quotes it would be a String, and String-based split treats it as a regex where | is special


// Give the first element of the pair a field name and append it to the end of the JSON in the second element
val jsonRDD=jsonStrRDD.map(x=>{var jsonStr=x._2;jsonStr=jsonStr.substring(0,jsonStr.length-1);jsonStr+",\"id\":\""+x._1+"\"}"})
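// After this step each record looks schematically like: {"cm":{...},"ap":"app","et":[...],"id":"1593136280858"}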

// (alternative: put id at the front instead:)
//val jsonFirstRDD=jsonStrRDD.map(x=>{var jsonStr=x._2;jsonStr=jsonStr.substring(1,jsonStr.length);"{\"id\":\""+x._1+"\","+jsonStr})

// Import the required packages
import spark.implicits._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import org.apache.spark.sql._

// Convert the RDD to a DataFrame
val jsonDF=jsonRDD.toDF

// Use get_json_object to parse fields out of the JSON string; it is needed here because these fields hold nested JSON key-value objects. A field holding a plain value could be addressed directly as $"field.subfield".
val jsonDF2=jsonDF.select(get_json_object($"value","$.cm").alias("cm"),get_json_object($"value","$.ap").alias("ap"),get_json_object($"value","$.et").alias("et"),get_json_object($"value","$.id").alias("id"))


// Parse the fields nested inside cm
val jsonDF3=jsonDF2.select($"id",$"ap",get_json_object($"cm","$.ln").alias("ln"),get_json_object($"cm","$.sv").alias("sv"),get_json_object($"cm","$.os").alias("os"),get_json_object($"cm","$.g").alias("g"),get_json_object($"cm","$.mid").alias("mid"),get_json_object($"cm","$.nw").alias("nw"),get_json_object($"cm","$.l").alias("l"),get_json_object($"cm","$.vc").alias("vc"),get_json_object($"cm","$.hw").alias("hw"),get_json_object($"cm","$.ar").alias("ar"),get_json_object($"cm","$.uid").alias("uid"),get_json_object($"cm","$.t").alias("t"),get_json_object($"cm","$.la").alias("la"),get_json_object($"cm","$.md").alias("md"),get_json_object($"cm","$.vn").alias("vn"),get_json_object($"cm","$.ba").alias("ba"),get_json_object($"cm","$.sr").alias("sr"),$"et")

// et is an array containing several similar JSON objects, so use from_json to parse it
val jsonDF4=jsonDF3.select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
from_json($"et",ArrayType(StructType(StructField("ett",StringType)::StructField("en",StringType)::StructField("kv",StringType)::Nil))).alias("event"))

// event is an array column; use explode to turn each of its elements into a separate row
val jsonDF5=jsonDF4.select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
explode($"event").alias("event"))

// Split event's ett, en and kv into 3 columns
val jsonDF6=jsonDF5.select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",$"event.ett",$"event.en",$"event.kv")

// Parse kv differently for each event type in en
val loadDF=jsonDF6.filter("en='loading'").select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
$"ett",$"en",get_json_object($"kv","$.extend2").alias("extend2"),get_json_object($"kv","$.loading_time").alias("loading_time"),get_json_object($"kv","$.action").alias("action"),get_json_object($"kv","$.extend1").alias("extend1"),
get_json_object($"kv","$.type").alias("type"),get_json_object($"kv","$.type1").alias("type1"),get_json_object($"kv","$.loading_way").alias("loading_way"))

val adDF=jsonDF6.filter($"en"==="ad").select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
$"ett",$"en",get_json_object($"kv","$.activityId").alias("activityId"),get_json_object($"kv","$.displayMills").alias("displayMills"),get_json_object($"kv","$.entry").alias("entry"),get_json_object($"kv","$.action").alias("action"),
get_json_object($"kv","$.contentType").alias("contentType"))

val notificationDF=jsonDF6.filter(jsonDF6("en")==="notification").select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
$"ett",$"en",get_json_object($"kv","$.ap_time").alias("ap_time"),get_json_object($"kv","$.action").alias("action"),get_json_object($"kv","$.type").alias("type"),get_json_object($"kv","$.content").alias("content"))

val activeDF=jsonDF6.filter(jsonDF6("en")==="active_background").select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
$"ett",$"en",get_json_object($"kv","$.active_source").alias("active_source"))

val commentDF=jsonDF6.filter(jsonDF6("en")==="comment").select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
$"ett",$"en",get_json_object($"kv","$.p_comment_id").alias("p_comment_id"),get_json_object($"kv","$.addtime").alias("addtime"),get_json_object($"kv","$.praise_count").alias("praise_count"),get_json_object($"kv","$.other_id").alias("other_id"),
get_json_object($"kv","$.comment_id").alias("comment_id"),get_json_object($"kv","$.reply_count").alias("reply_count"),get_json_object($"kv","$.userid").alias("userid"),
get_json_object($"kv","$.content").alias("content"))

val praiseDF=jsonDF6.filter(jsonDF6("en")==="praise").select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"l",$"vc",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
$"ett",$"en",get_json_object($"kv","$.target_id").alias("target_id"),get_json_object($"kv","$.id").alias("id"),get_json_object($"kv","$.type").alias("type"),get_json_object($"kv","$.add_time").alias("add_time"),
get_json_object($"kv","$.userid").alias("userid"))

// Print the schema and data of the final results
loadDF.printSchema
loadDF.show(false)
adDF.printSchema
adDF.show(false)
notificationDF.printSchema
notificationDF.show(false)
activeDF.printSchema
activeDF.show(false)
commentDF.printSchema
commentDF.show(false)
praiseDF.printSchema
praiseDF.show(false)

// Save the final result sets to Hive
// First register the DataFrames above as temporary views
loadDF.createOrReplaceTempView("loadDF")
adDF.createOrReplaceTempView("adDF")
notificationDF.createOrReplaceTempView("notificationDF")
activeDF.createOrReplaceTempView("activeDF")
commentDF.createOrReplaceTempView("commentDF")
praiseDF.createOrReplaceTempView("praiseDF")

// Then create the database and switch to it
spark.sql("create database if not exists kb09")
spark.sql("use kb09")

// Finally write to Hive
spark.sql("create table if not exists load_DF as select * from loadDF")
spark.sql("create table if not exists ad_DF as select * from adDF")
spark.sql("create table if not exists notification_DF as select * from notificationDF")
spark.sql("create table if not exists active_DF as select * from activeDF")
spark.sql("create table if not exists comment_DF as select * from commentDF")
spark.sql("create table if not exists praise_DF as select * from praiseDF")
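
To double-check the writes from the same spark-shell session, you can list the tables in the current database (an optional sanity check):

spark.sql("show tables").show(false)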

IDEA Development Environment

  • On the Linux VM, run nohup hive --service metastore & to start the Hive metastore (the RunJar service).
  • Because the final data is written to Hive, add the following dependency in IDEA
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.11</artifactId>
      <version>2.1.1</version>
    </dependency>

package project

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.StructType

object jsonDemo {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().appName("json").master("local[*]")
      .config("hive.metastore.uris", "thrift://192.168.198.201:9083")
      .enableHiveSupport()
      .getOrCreate()
    val sc: SparkContext = spark.sparkContext
    import spark.implicits._   //toDF
    import  org.apache.spark.sql._
    import org.apache.spark.sql.types._  //new StructType..
    import org.apache.spark.sql.functions._  //from_json,get_json_object...

    val fileRDD: RDD[String] = sc.textFile("in/project/op.log")
    val jsonRDD: RDD[String] = fileRDD.map(x => x.split('|')).map(x => (x(0), x(1)))
      .map(x => {
        x._2.substring(0, x._2.length - 1) + ",\"id\":\"" + x._1 + "\"}"
      })
    val jsonDF: DataFrame = jsonRDD.toDF()
//     jsonDF.printSchema()
//    jsonDF.show(false)

    // Define the schema to use with from_json
    val ett=new StructType()
      .add($"ett".string)
      .add($"en".string)
      .add($"kv".string)

    val common=new StructType()
      .add($"ln".string).add($"sv".string).add($"os".string).add($"g".string).add($"mid".string).add($"nw".string)
      .add($"hw".string).add($"ar".string).add($"uid".string).add($"t".string).add($"la".string).add($"md".string)
      .add($"vn".string).add($"ba".string).add($"sr".string)

    val schema=new StructType()
      .add($"cm".struct(common))
      .add($"ap".string)
      .add($"et".array(ett))
      .add($"id".string)

    val frame: Dataset[Row] = jsonDF.select(from_json($"value",schema).alias("values"))
//   frame.printSchema()
//   frame.show(false)

    val frame2: DataFrame = frame.select(
      $"values.id".alias("id"),$"values.ap".alias("ap"),$"values.cm.ln".alias("ln"),$"values.cm.sv".alias("sv"),
     $"values.cm.os".alias("os"),$"values.cm.g".alias("g"),$"values.cm.mid".alias("mid"),$"values.cm.nw".alias("nw"),
     $"values.cm.hw".alias("hw"),$"values.cm.ar".alias("ar"),$"values.cm.uid".alias("uid"),$"values.cm.t".alias("t"),
     $"values.cm.la".alias("la"),$"values.cm.md".alias("md"),$"values.cm.vn".alias("vn"),$"values.cm.ba".alias("ba"),$"values.cm.sr".alias("sr"),
     explode($"values.et").alias("event")) 
//    frame2.printSchema()
//   frame2.show(false)

    val frame3: DataFrame = frame2.select(
      $"id",$"ap",$"ln",$"sv",$"os", $"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",$"sr",
      $"event.ett",$"event.en",$"event.kv"
    )

    val loadingDF: DataFrame = frame3.where($"en" === "loading")
      .select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",
        $"sr",$"ett",$"en",
        get_json_object($"kv", "$.extend2").alias("extend2"),
        get_json_object($"kv", "$.loading_time").alias("loading_time"),
        get_json_object($"kv", "$.action").alias("action"),
        get_json_object($"kv", "$.extend1").alias("extend1"),
        get_json_object($"kv", "$.type").alias("type"),
        get_json_object($"kv", "$.loading_way").alias("loading_way")
      )

    val adDF: DataFrame = frame3.where($"en" === "ad")
      .select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",
        $"sr",$"ett",$"en",
        get_json_object($"kv", "$.activityId").alias("activityId"),
        get_json_object($"kv", "$.displayMills").alias("displayMills"),
        get_json_object($"kv", "$.entry").alias("entry"),
        get_json_object($"kv", "$.action").alias("action"),
        get_json_object($"kv", "$.contentType").alias("contentType")
      )

    val notificationDF: DataFrame = frame3.where($"en" === "notification")
      .select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",
        $"sr",$"ett",$"en",
        get_json_object($"kv", "$.ap_time").alias("ap_time"),
        get_json_object($"kv", "$.action").alias("action"),
        get_json_object($"kv", "$.type").alias("type"),
        get_json_object($"kv", "$.content").alias("content")
      )

    val activeBackgroundDF: DataFrame = frame3.where($"en" === "active_background")
      .select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",
        $"sr",$"ett",$"en",
        get_json_object($"kv", "$.active_source").alias("active_source")
      )

    val commentDF: DataFrame = frame3.where($"en" === "comment")
      .select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",
        $"sr",$"ett",$"en",
        get_json_object($"kv", "$.p_comment_id").alias("p_comment_id"),
        get_json_object($"kv", "$.addtime").alias("addtime"),
        get_json_object($"kv", "$.praise_count").alias("praise_count"),
        get_json_object($"kv", "$.other_id").alias("other_id"),
        get_json_object($"kv", "$.comment_id").alias("comment_id"),
        get_json_object($"kv", "$.reply_count").alias("reply_count"),
        get_json_object($"kv", "$.userid").alias("userid"),
        get_json_object($"kv", "$.content").alias("content")
      )

    val praiseDF: DataFrame = frame3.where($"en" === "praise")
      .select($"id",$"ap",$"ln",$"sv",$"os",$"g",$"mid",$"nw",$"hw",$"ar",$"uid",$"t",$"la",$"md",$"vn",$"ba",
        $"sr",$"ett",$"en",
        get_json_object($"kv", "$.target_id").alias("target_id"),
        get_json_object($"kv", "$.id").alias("pid"),
        get_json_object($"kv", "$.type").alias("type"),
        get_json_object($"kv", "$.add_time").alias("add_time"),
        get_json_object($"kv", "$.userid").alias("userid")
      )

     loadingDF.show(false)
    adDF.show(false)
    notificationDF.show(false)
    activeBackgroundDF.show(false)
    commentDF.show(false)
    praiseDF.show(false)

    // First register the DataFrames above as temporary views
    loadingDF.createOrReplaceTempView("loadingDF")
    adDF.createOrReplaceTempView("adDF")
    notificationDF.createOrReplaceTempView("notificationDF")
    activeBackgroundDF.createOrReplaceTempView("activeDF")
    commentDF.createOrReplaceTempView("commentDF")
    praiseDF.createOrReplaceTempView("praiseDF")


    // Creating the database from IDEA runs into permission issues; if they are not resolved, the database is created as a plain folder without the .db suffix. It is better to create the database on the VM in advance.
    //spark.sql("create database if not exists kb09")
    spark.sql("use kb09")

    // Finally write to Hive
    spark.sql("create table if not exists load_DF as select * from loadingDF")
    spark.sql("create table if not exists ad_DF as select * from adDF")
    spark.sql("create table if not exists notification_DF as select * from notificationDF")
    spark.sql("create table if not exists active_DF as select * from activeDF")
    spark.sql("create table if not exists comment_DF as select * from commentDF")
    spark.sql("create table if not exists praise_DF as select * from praiseDF")


  }
}
