Spark Core OLAP Data Cleaning on Files

Case Studies

1. Ad ID Statistics

Data source
Advert.txt
The file is large, so only its first line is reproduced here.

1516609143867 6 7 64 16
  • Data format:
timestamp   province   city   userid   adid
User ID range: 0-99
Province and city IDs: 0-9
adid range: 0-19
  • Requirements:
1. For each province, find the Top 3 ad IDs by click count.
2. For each province and each hour, find the Top 3 ad IDs.

  • Principle
(diagram from the original post omitted)

  • Code
package com.qf

import org.apache.log4j.{Level, Logger}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import org.slf4j.LoggerFactory

// Top 3 ad IDs by click count per province
object Demo10 {
  private val logger = LoggerFactory.getLogger(Demo10.getClass.getSimpleName)

  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.WARN)
    val sc = new SparkContext("local[*]", "Demo10", new SparkConf())
    val logRDD: RDD[String] = sc.textFile("D:\\data\\practice\\Advert.txt")
    val res = logRDD.map(_.split("\\s+"))
      .map(arr => (arr(1) + "_" + arr(4), 1))   // key: province_adid
      .reduceByKey(_ + _)
      .map { case (pa, cnt) =>
        val params: Array[String] = pa.split("_")
        (params(0), (params(1), cnt))           // (province, (adid, cnt))
      }
      .groupByKey()
      .mapValues(_.toList.sortWith(_._2 > _._2).take(3))
      .collectAsMap()
    println(res)
    sc.stop()
  }
}
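The code above covers only the first requirement. For the second (the per-province, per-hour Top 3), the essential change is folding the hour, extracted from the millisecond timestamp, into the key before counting. A minimal sketch on plain Scala collections — the second and third sample lines and the use of `SimpleDateFormat` for hour bucketing are my assumptions; in Spark, the `groupBy`/`map` pairs become `reduceByKey` and `groupByKey` on the RDD:

```scala
import java.text.SimpleDateFormat
import java.util.Date

// Hypothetical sample lines (timestamp province city userid adid);
// only the first one comes from Advert.txt
val lines = List(
  "1516609143867 6 7 64 16",
  "1516609143869 6 7 12 16",
  "1516609143870 6 8 33 5"
)
val hourFmt = new SimpleDateFormat("yyyyMMddHH")
// Count clicks per (province, hour, adid) -- like reduceByKey(_ + _)
val counts = lines
  .map(_.split("\\s+"))
  .map(arr => ((arr(1), hourFmt.format(new Date(arr(0).toLong)), arr(4)), 1))
  .groupBy(_._1)
  .map { case (k, vs) => (k, vs.map(_._2).sum) }
// Re-key by (province, hour) and keep the Top 3 ads per key
val top3 = counts.toList
  .map { case ((province, hour, adid), cnt) => ((province, hour), (adid, cnt)) }
  .groupBy(_._1)
  .map { case (k, vs) => (k, vs.map(_._2).sortBy(-_._2).take(3)) }
top3.foreach(println)
```

With these three lines, province 6 in that hour gets ad 16 twice and ad 5 once.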

2. Top-N Base-Station Dwell Time

Data sources
19735E1C66.log

18688888888,20160327082400,16030401EAFB68F1E3CDF819735E1C66,1
18101056888,20160327082500,16030401EAFB68F1E3CDF819735E1C66,1
18688888888,20160327170000,16030401EAFB68F1E3CDF819735E1C66,0
18101056888,20160327180000,16030401EAFB68F1E3CDF819735E1C66,0

lac_info.txt

9F36407EAD8829FC166F14DDE7970F68,116.304864,40.050645,6
CC0710CC94ECC657A8561DE549D940E0,116.303955,40.041935,6
16030401EAFB68F1E3CDF819735E1C66,116.296302,40.032296,6

  • Data format
19735E1C66.log holds the log records.
 Fields: phone number, timestamp, base-station ID, connection state (1 = connect, 0 = disconnect)
lac_info.txt holds the base-station information.
 Fields: base-station ID, longitude, latitude
  • Requirements
From the log records a user generates, determine at which base station the user stayed the longest.
Within a given time range, find the Top 2 dwell times among all base stations each user passed through.
  • Approach
1. Load the users' log records and split the fields.
2. Compute each user's total dwell time per base station.
3. Load the base-station information.
4. Join the longitude/latitude onto the user data.
5. Take the Top 2 base stations by dwell time per user.
  • Code
package com.qf

import org.apache.log4j.{Level, Logger}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import org.slf4j.LoggerFactory

// Top 2 base stations by dwell time per user
object Demo11 {
  private val logger = LoggerFactory.getLogger(Demo11.getClass.getSimpleName)

  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.WARN)
    val sc = new SparkContext("local[*]", "Demo11", new SparkConf())
    val logRDD: RDD[String] = sc.textFile("D:\\data\\practice\\第二题数据-lacduration\\log")
    // Split the logs; a connect event contributes its timestamp negated,
    // a disconnect contributes it as-is
    val userInfoRDD: RDD[((String, String), Long)] = logRDD.map(line => {
      val fields: Array[String] = line.split(",")
      val phone = fields(0)
      val time = fields(1).toLong
      val lac = fields(2)
      val eventType = fields(3)
      val ts = if (eventType.equals("1")) -time else time
      ((phone, lac), ts)
    })
    // Total dwell time per (phone, station)
    val sumRDD: RDD[((String, String), Long)] = userInfoRDD.reduceByKey(_ + _)
    sumRDD.foreach(println)
    // Re-key by station so we can join against the station info
    val lacAndPTRDD: RDD[(String, (String, Long))] = sumRDD.map {
      case ((phone, lac), ts) => (lac, (phone, ts))
    }
    // Load the base-station data
    val lacInfoRDD: RDD[String] = sc.textFile("D:\\data\\practice\\第二题数据-lacduration\\lac_info.txt")
    // Split the base-station data
    val lacAndXYRDD: RDD[(String, (String, String))] = lacInfoRDD.map(line => {
      val fields: Array[String] = line.split(",")
      val lac = fields(0)
      val x = fields(1)
      val y = fields(2)
      (lac, (x, y))
    })
    // Join the longitude/latitude onto the user data
    val joinRDD: RDD[(String, ((String, Long), (String, String)))] = lacAndPTRDD join lacAndXYRDD
    // Reshape so the data can be grouped by phone number
    val phoneAndTXYRDD: RDD[(String, Long, (String, String))] = joinRDD.map {
      case (lac, ((phone, ts), (x, y))) => (phone, ts, (x, y))
    }
    // Group by phone number
    val groupRDD: RDD[(String, Iterable[(String, Long, (String, String))])] = phoneAndTXYRDD.groupBy(_._1)
    // Sort by dwell time; reverse to get descending order
    val sortRDD: RDD[(String, List[(String, Long, (String, String))])] = groupRDD.mapValues(_.toList.sortBy(_._2).reverse)
    // Drop the redundant phone field inside the values
    val resRDD: RDD[(String, List[(Long, (String, String))])] = sortRDD.map {
      case (phone, info) =>
        val filterList: List[(Long, (String, String))] = info.map {
          case (phone2, ts, xy) => (ts, xy)
        }
        (phone, filterList)
    }
    // Take the Top 2 per user and collect
    val list: List[(String, List[(Long, (String, String))])] = resRDD.mapValues(_.take(2)).collect().toList
    println(list)
    sc.stop()
  }
}
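The sign trick in the first map can be traced by hand on the four sample rows: each connect (state 1) contributes its timestamp negated, each disconnect (state 0) contributes it as-is, so summing per (phone, station) leaves the elapsed amount. The same aggregation on plain Scala collections:

```scala
// The four sample rows from 19735E1C66.log
val log = List(
  "18688888888,20160327082400,16030401EAFB68F1E3CDF819735E1C66,1",
  "18101056888,20160327082500,16030401EAFB68F1E3CDF819735E1C66,1",
  "18688888888,20160327170000,16030401EAFB68F1E3CDF819735E1C66,0",
  "18101056888,20160327180000,16030401EAFB68F1E3CDF819735E1C66,0"
)
// Negate connect timestamps, then sum per (phone, lac) -- like reduceByKey(_ + _)
val dwell = log
  .map(_.split(","))
  .map { f =>
    val ts = if (f(3) == "1") -f(1).toLong else f(1).toLong
    ((f(0), f(2)), ts)
  }
  .groupBy(_._1)
  .map { case (k, vs) => (k, vs.map(_._2).sum) }
dwell.foreach(println)
```

Note that because the code sums the raw `yyyyMMddHHmmss` values (87600 for the first phone, 97500 for the second here), the result is a comparable score rather than a true duration in seconds; that is the approach the original takes, and it still ranks stations correctly as long as each connect is paired with a disconnect.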

3. Access Counts by IP Region

Data source
http.log

20090121000132095572000|125.213.100.123|show.51.com|/shoplist.php?phpfile=shoplist2.php&style=1&sex=137|Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Mozilla/4.0(Compatible Mozilla/4.0(Compatible-EmbeddedWB 14.59 http://bsalsa.com/ EmbeddedWB- 14.59  from: http://bsalsa.com/ )|http://show.51.com/main.php|
20090121000132124542000|117.101.215.133|www.jiayuan.com|/19245971|Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; TencentTraveler 4.0)|http://photo.jiayuan.com/index.php?uidhash=d1c3b69e9b8355a5204474c749fb76ef|__tkist=0; myloc=50%7C5008; myage=2009; PROFILE=14469674%3A%E8%8B%A6%E6%B6%A9%E5%92%96%E5%95%A1%3Am%3Aphotos2.love21cn.com%2F45%2F1b%2F388111afac8195cc5d91ea286cdd%3A1%3A%3Ahttp%3A%2F%2Fimages.love21cn.com%2Fw4%2Fglobal%2Fi%2Fhykj_m.jpg; last_login_time=1232454068; SESSION_HASH=8176b100a84c9a095315f916d7fcbcf10021e3af; RAW_HASH=008a1bc48ff9ebafa3d5b4815edd04e9e7978050; COMMON_HASH=45388111afac8195cc5d91ea286cdd1b; pop_1232093956=1232468896968; pop_time=1232466715734; pop_1232245908=1232469069390; pop_1219903726=1232477601937; LOVESESSID=98b54794575bf547ea4b55e07efa2e9e; main_search:14469674=%7C%7C%7C00; registeruid=14469674; REG_URL_COOKIE=http%3A%2F%2Fphoto.jiayuan.com%2Fshowphoto.php%3Fuid_hash%3D0319bc5e33ba35755c30a9d88aaf46dc%26total%3D6%26p%3D5; click_count=0%2C3363619

ip.txt

1.0.1.0|1.0.3.255|16777472|16778239|亚洲|中国|福建|福州||电信|350100|China|CN|119.306239|26.075302

3.1 Note

	The data set for this case ships with a SQL file; executing it in a database client creates a table. That table can hold the computed results, which a later JdbcRDD example can then read.
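Once that table exists, the per-province counts computed by `province2CntsRDD` below could be written out with `foreachPartition`, opening one JDBC connection per partition instead of one per record. A hypothetical sketch only — the database name, table name, column names, and credentials are all placeholders that must be matched to whatever the shipped SQL file actually creates:

```scala
// Hypothetical: write (province, count) pairs to MySQL, one connection per partition.
// URL, credentials, and table/column names are placeholders.
province2CntsRDD.foreachPartition { iter =>
  val conn = java.sql.DriverManager.getConnection(
    "jdbc:mysql://localhost:3306/spark", "root", "password")
  val ps = conn.prepareStatement(
    "INSERT INTO province_access_cnt (province, cnt) VALUES (?, ?)")
  iter.foreach { case (province, cnt) =>
    ps.setString(1, province)
    ps.setInt(2, cnt)
    ps.executeUpdate()
  }
  ps.close()
  conn.close()
}
```

The connection is created inside `foreachPartition` rather than on the driver because JDBC connections are not serializable and cannot be shipped to executors.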

3.2 Code

package com.qf

import org.apache.log4j.{Level, Logger}
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import org.slf4j.LoggerFactory

object Demo12 {
  private val logger = LoggerFactory.getLogger(Demo12.getClass.getSimpleName)

  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.WARN)
    val sc = new SparkContext("local[*]", "Demo12", new SparkConf())
    val httpRDD: RDD[String] = sc.textFile("D:\\data\\practice\\第三题数据-ipsearch\\http.log")
    val ipRDD: RDD[String] = sc.textFile("D:\\data\\practice\\第三题数据-ipsearch\\ip.txt")
    // Split the IP rules into (province, startIP, endIP)
    val province2IpRDD = ipRDD.map(line => {
      val fields = line.split("\\|")
      val startIP = ip2Long(fields(0))
      val endIP = ip2Long(fields(1))
      val province = fields(6)
      (province, startIP, endIP)
    })
    // Collect and sort the rules by start IP so binary search works
    val province2Ips: Array[(String, Long, Long)] = province2IpRDD.collect.sortWith {
      case ((pro1, sIp1, eIp1), (pro2, sIp2, eIp2)) => sIp1 < sIp2
    }
    // Broadcast the sorted rules to every executor
    val provinceIpBC: Broadcast[Array[(String, Long, Long)]] = sc.broadcast(province2Ips)
    // Process the access log: look up each IP's province, then count per province
    val province2CntsRDD: RDD[(String, Int)] = httpRDD.map(line => {
      val fields: Array[String] = line.split("\\|")
      val ip = ip2Long(fields(1))
      val ipArray = provinceIpBC.value
      val index = binarySearch(ipArray, ip)
      if (index == -1) (null, -1)
      else (ipArray(index)._1, 1)
    }).filter(_._1 != null).reduceByKey(_ + _)
    // The results could be saved to MySQL here; for now just print them
    province2IpRDD.foreach(println)
    println("Provinces and their access counts:")
    province2CntsRDD.foreach(println)

    sc.stop()
  }

  // An IPv4 address is 32 bits, one byte (0-255) per segment, so the whole
  // address can be packed into a single Long
  def ip2Long(ip: String): Long = {
    val fields = ip.split("\\.")
    var ipNum = 0L
    for (field <- fields) ipNum = field.toLong | ipNum << 8 // shifting by 8 multiplies by 256
    ipNum
  }

  // Binary search over the sorted (province, startIP, endIP) ranges:
  // returns the matching index, or -1 if the IP falls in no range
  def binarySearch(ipArray: Array[(String, Long, Long)], ip: Long): Int = {
    var start = 0
    var end = ipArray.length - 1 // last valid index
    while (start <= end) {
      val mid = (start + end) / 2
      val startIp = ipArray(mid)._2
      val endIp = ipArray(mid)._3
      if (ip >= startIp && ip <= endIp) {
        return mid
      } else if (ip < startIp) {
        end = mid - 1
      } else {
        start = mid + 1
      }
    }
    -1
  }
}
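The conversion in `ip2Long` can be sanity-checked against the sample row of ip.txt: its third and fourth fields (16777472 and 16778239) are exactly the precomputed long forms of the range bounds 1.0.1.0 and 1.0.3.255. A standalone check:

```scala
// Same conversion as ip2Long above: fold each dotted segment into a Long,
// shifting the accumulator left by one byte (i.e. multiplying by 256) per step
def ip2Long(ip: String): Long = {
  var ipNum = 0L
  for (field <- ip.split("\\.")) ipNum = field.toLong | ipNum << 8
  ipNum
}

assert(ip2Long("1.0.1.0") == 16777472L)   // start of the sample range
assert(ip2Long("1.0.3.255") == 16778239L) // end of the sample range
println(ip2Long("125.213.100.123"))       // IP from the first http.log line
```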
