Spark Real-Time ETL: Transaction Metrics for China Mobile Recharge Campaigns

1. Recharge Campaign Transaction Metrics


1.1 Real-Time ODS Stream Splitting

1.1.1 Requirement

  • Use Canal to capture MySQL binlog events and use Spark Streaming to split the ODS stream in real time, so that each MySQL business table maps to its own Kafka topic

1.1.2 Canal Configuration


  • Edit the MySQL configuration file to enable row-format binlog (Canal requires binlog_format=row):
server_id=1
log-bin=mysql-bin
binlog_format=row
binlog-do-db=gmall-2020-07
  • Create a Canal account and grant it the required replication privileges:
CREATE USER canal IDENTIFIED BY 'canal';
GRANT SELECT, SHOW VIEW, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
  • Configure canal.properties:
canal.port = 11111
canal.zkServers =
canal.serverMode = kafka
canal.mq.servers = hadoop1:9092
  • Configure instance.properties:
canal.instance.master.address=hadoop1:3306
canal.mq.topic=ODS_DB_GMALL1122_C
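
With canal.serverMode = kafka, Canal publishes each binlog event to the topic as a flat JSON message. The downstream Spark job relies on the table field and the data array; an abridged, illustrative example (the row values are made up):

{
  "data": [{"id": "4909", "user_id": "user_1", "money": "20"}],
  "database": "gmall-2020-07",
  "table": "order_info",
  "type": "INSERT",
  "isDdl": false,
  "ts": 1595383440000
}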

1.1.3 Real-Time Splitting with Spark Streaming

import com.alibaba.fastjson.{JSON, JSONArray, JSONObject}
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.TopicPartition
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object BaseDBCanalApp {
  def main(args: Array[String]): Unit = {

    val sparkConf: SparkConf = new SparkConf().setAppName("base_db_canal_app").setMaster("local[*]")
    val ssc = new StreamingContext(sparkConf, Seconds(3))

    val topic = "ODS_DB_GMALL1122_C"
    val groupId = "base_db_canal_group"

    // Recover offsets from Redis after a crash so consumption resumes where it left off
    val offset: Map[TopicPartition, Long] = OffsetManager.getOffset(groupId, topic)
    var inputDstream: InputDStream[ConsumerRecord[String, String]] = null
    if (offset != null && offset.nonEmpty) {
      // Not the first run: create the stream from the saved offsets
      inputDstream = MyKafkaUtil.getKafkaStream(topic, ssc, offset, groupId)
    } else {
      // First run: create the stream without offsets
      inputDstream = MyKafkaUtil.getKafkaStream(topic, ssc, groupId)
    }

    // Capture each batch's offset ranges before any transformation
    var offsetRanges: Array[OffsetRange] = null
    val inputGetOffsetDstream: DStream[ConsumerRecord[String, String]] = inputDstream.transform { rdd =>
      offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd
    }

    // Parse each record value into a JSON object
    val dbJsonObjDstream: DStream[JSONObject] = inputGetOffsetDstream.map { record =>
      JSON.parseObject(record.value())
    }

    // Consume the data with an action, then commit the offsets
    dbJsonObjDstream.foreachRDD { rdd =>
      rdd.foreachPartition { jsonObjItr =>
        for (jsonObj <- jsonObjItr) {
          val dataArr: JSONArray = jsonObj.getJSONArray("data")
          for (i <- 0 until dataArr.size()) {
            val dataJsonObj: JSONObject = dataArr.getJSONObject(i)
            // Route each row to a per-table topic, e.g. ODS_T_ORDER_INFO
            val topicName = "ODS_T_" + jsonObj.getString("table").toUpperCase
            val id: String = dataJsonObj.getString("id")
            MyKafkaSink.send(topicName, id, dataJsonObj.toJSONString)
          }
        }
      }
      // Commit offsets only after the whole batch has been processed
      OffsetManager.saveOffset(groupId, topic, offsetRanges)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
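
The job above depends on three helpers the post does not show: OffsetManager, MyKafkaUtil, and MyKafkaSink. A minimal sketch of them follows, assuming offsets live in a Redis hash (key offset:<groupId>:<topic>, one field per partition) and that Redis and the Kafka brokers run on hadoop1; the host names and key layout are assumptions, not the original implementation.

import java.util
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies, OffsetRange}
import redis.clients.jedis.Jedis
import scala.collection.JavaConverters._

object OffsetManager {
  // Assumed layout: Redis hash "offset:<groupId>:<topic>", field = partition, value = next offset to read
  def getOffset(groupId: String, topic: String): Map[TopicPartition, Long] = {
    val jedis = new Jedis("hadoop1", 6379)
    val saved: util.Map[String, String] = jedis.hgetAll(s"offset:$groupId:$topic")
    jedis.close()
    saved.asScala.map { case (partition, offset) =>
      new TopicPartition(topic, partition.toInt) -> offset.toLong
    }.toMap
  }

  def saveOffset(groupId: String, topic: String, offsetRanges: Array[OffsetRange]): Unit = {
    if (offsetRanges != null && offsetRanges.nonEmpty) {
      val jedis = new Jedis("hadoop1", 6379)
      val toSave = new util.HashMap[String, String]()
      for (range <- offsetRanges) toSave.put(range.partition.toString, range.untilOffset.toString)
      jedis.hmset(s"offset:$groupId:$topic", toSave)
      jedis.close()
    }
  }
}

object MyKafkaUtil {
  private val kafkaParams = Map[String, Object](
    "bootstrap.servers" -> "hadoop1:9092",
    "key.deserializer" -> classOf[StringDeserializer],
    "value.deserializer" -> classOf[StringDeserializer],
    "auto.offset.reset" -> "latest",
    "enable.auto.commit" -> (false: java.lang.Boolean)) // offsets are committed manually via Redis

  // First run: subscribe without explicit offsets
  def getKafkaStream(topic: String, ssc: StreamingContext,
                     groupId: String): InputDStream[ConsumerRecord[String, String]] =
    KafkaUtils.createDirectStream[String, String](ssc, LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq(topic), kafkaParams + ("group.id" -> groupId)))

  // Restart: resume from the offsets recovered out of Redis
  def getKafkaStream(topic: String, ssc: StreamingContext, offsets: Map[TopicPartition, Long],
                     groupId: String): InputDStream[ConsumerRecord[String, String]] =
    KafkaUtils.createDirectStream[String, String](ssc, LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq(topic), kafkaParams + ("group.id" -> groupId), offsets))
}

object MyKafkaSink {
  // One lazily created producer per executor JVM
  private lazy val producer: KafkaProducer[String, String] = {
    val props = new util.HashMap[String, Object]()
    props.put("bootstrap.servers", "hadoop1:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    new KafkaProducer[String, String](props)
  }

  def send(topic: String, key: String, value: String): Unit =
    producer.send(new ProducerRecord[String, String](topic, key, value))
}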

1.2 Stream-Static Join

1.2.1 Requirement

  • Use Spark Streaming to consume the Kafka data, join it with the campaign-date dimension table in MySQL to build a wide marketing-theme table, and write the result to MySQL for the SMS marketing service

1.2.2 Stream Topic

Field         Description
user_id       User ID
user_level    User level
tag_phone     Marketing phone number
submit_time   Recharge time
money         Recharge amount

1.2.3 Static Dimension Table

Field             Description
id                Auto-increment primary key
active_id         Campaign ID
active_name       Campaign name
user_level        Target user level
create_time       Campaign start time
deadline_time     Campaign end time
active_describe   Campaign description

1.2.4 Stream-Dimension Join with Spark Streaming

package com.bigData.spark

import com.alibaba.fastjson.{JSON, JSONException, JSONObject}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Durations, StreamingContext}

// The bean mirrors the dimension table in 1.2.3 (the original declared only four of the six fields)
case class ActiveBean(active_id: String, active_name: String, user_level: String,
                      create_time: String, deadline_time: String, active_describe: String)
object StreamStaticJoin {
  def main(args: Array[String]): Unit = {

    // Set the log level to cut down on console noise
    Logger.getLogger("org").setLevel(Level.WARN)

    // Kafka parameters
    val kafkaParams= Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean),
      "group.id" -> "ODS_T_charge")

    // Spark context
    val sparkConf = new SparkConf().setAppName(this.getClass.getSimpleName.replace("$","")).setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Durations.seconds(10))

    /** 1) Static data: the MySQL campaign table (hard-coded here for the demo) */
    val activeInfo = ssc.sparkContext.parallelize(Array(
      ActiveBean("DDAEF_20200501", "劳动节充值20减5", "5", "2020/05/01 00:00:00", "2020/05/01 23:59:59", "公司为庆祝五一劳动节,开展充值20减5,大家快来啊"),
      ActiveBean("AABBC_20200721", "三周年店庆全场8.5折", "2", "2020/07/21 00:00:00", "2020/07/28 23:59:59", "公司为成立三周年,全场大酬宾全场8.5折,大家快来买啊")
    )).map(active => (active.user_level, active))


    /** 2) Streaming data: the recharge topic produced by users */
    /** Sample record:
      * submit_time: recharge time, money: recharge amount, tag_phone: marketing phone, userID: user ID, user_level: membership level, id: event ID
      * {"submit_time": "2020-07-22 10:04:00", "money": 20, "tag_phone": 13511001235, "userID": "user_1", "user_level": 2, "id": 4909846540155641457} */

    val kafkaDStream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Set("testTopic3"), kafkaParams)
    ).map(item => parseJson(item.value()))
      .filter(_ != null) // drop records that failed to parse
      .map(item => {
        val user_level = item.getString("user_level")
        val eventTime = item.getString("submit_time")
        val money = item.getInteger("money")
        // Phone numbers overflow Int, so read them as String
        val tag_phone = item.getString("tag_phone")
        (user_level, (eventTime, money, tag_phone))
      })


    /** 3) Join the stream with the static data (RDD join per batch) */
    kafkaDStream.foreachRDD(_.join(activeInfo).foreach(println))

    ssc.start()
    ssc.awaitTermination()

  }

  /** Parse a JSON line, returning null for malformed input */
  def parseJson(log:String):JSONObject={
    var ret:JSONObject=null
    try{
      ret=JSON.parseObject(log)
    }catch {
      // Malformed JSON: print the raw line and fall through with null
      case _: JSONException => println(log)
    }
    ret
  }

}
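
A caveat about this pattern: the dimension RDD is built once with parallelize and never refreshed, so campaign changes in MySQL stay invisible to the running job. Below is a minimal JDBC loader sketch that could be called inside transform to rebuild the dimension data each batch; the table name active_info, the JDBC URL, and the credentials are assumptions, not the original setup.

import java.sql.DriverManager
import scala.collection.mutable.ListBuffer

object ActiveTableLoader {
  // Reads the campaign dimension table from MySQL (columns follow section 1.2.3)
  def load(): List[ActiveBean] = {
    val conn = DriverManager.getConnection("jdbc:mysql://hadoop1:3306/gmall-2020-07", "canal", "canal")
    val stmt = conn.createStatement()
    val rs = stmt.executeQuery(
      "select active_id, active_name, user_level, create_time, deadline_time, active_describe from active_info")
    val buf = new ListBuffer[ActiveBean]()
    while (rs.next()) {
      buf += ActiveBean(rs.getString("active_id"), rs.getString("active_name"),
        rs.getString("user_level"), rs.getString("create_time"),
        rs.getString("deadline_time"), rs.getString("active_describe"))
    }
    rs.close(); stmt.close(); conn.close()
    buf.toList
  }
}

With this in place, kafkaDStream.transform(rdd => rdd.join(rdd.sparkContext.parallelize(ActiveTableLoader.load()).map(a => (a.user_level, a)))) would pick up dimension changes at every batch, at the cost of one MySQL query per interval.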

1.3 Campaign Metrics


1.3.1 Requirement

  • Campaign metrics: count the new paying users for the current day

1.3.2 Implementation

// Basic transformation: derive the date fields
val orderInfoDstream: DStream[OrderInfo] = inputGetOffsetDstream.map { record =>
  val jsonString: String = record.value()
  // Convert the JSON record into a bean
  val orderInfo: OrderInfo = JSON.parseObject(jsonString, classOf[OrderInfo])
  // Datetime format: 2020-04-24 19:06:10
  val datetimeArr: Array[String] = orderInfo.create_time.split(" ")
  orderInfo.create_date = datetimeArr(0)
  val timeArr: Array[String] = datetimeArr(1).split(":")
  orderInfo.create_hour = timeArr(0)
  orderInfo
}
// Query Phoenix once per partition instead of once per record
val orderInfoWithFirstDstream: DStream[OrderInfo] = orderInfoDstream.mapPartitions { orderInfoItr =>
  // Materialize the partition so it can be traversed twice
  val orderInfoList: List[OrderInfo] = orderInfoItr.toList
  if (orderInfoList.nonEmpty) {
    // Collect the user ids of this partition
    val userIdList: List[String] = orderInfoList.map(_.user_id.toString)
    // Query every user's consumption state in a single SQL statement
    val sql = "select user_id,if_consumed from user_state1122 where user_id in ('" + userIdList.mkString("','") + "')"
    val userStateList: List[JSONObject] = PhoenixUtil.queryList(sql)

    // Turn the result into a Map to get an index-like lookup, similar to a map join
    // List[JSONObject] => List[Tuple2] => Map[String, String]
    val userStateMap: Map[String, String] = userStateList.map(userStateJsonObj =>
      // Mind the case of the returned column names: Phoenix upper-cases them
      (userStateJsonObj.getString("USER_ID"), userStateJsonObj.getString("IF_CONSUMED"))
    ).toMap
    for (orderInfo <- orderInfoList) {
      // A user already marked as consumed ("1") is not a first-time payer
      val userIfConsumed: String = userStateMap.getOrElse(orderInfo.user_id.toString, null)
      if (userIfConsumed != null && userIfConsumed == "1") {
        orderInfo.if_first_order = "0"
      } else {
        orderInfo.if_first_order = "1"
      }
    }
  }
  orderInfoList.toIterator
}
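
PhoenixUtil.queryList is used above but not shown. A minimal sketch over Phoenix's JDBC driver, returning each row as a fastjson JSONObject keyed by the (upper-cased) column names; the ZooKeeper quorum in the URL is an assumption:

import java.sql.{DriverManager, ResultSet}
import com.alibaba.fastjson.JSONObject
import scala.collection.mutable.ListBuffer

object PhoenixUtil {
  def queryList(sql: String): List[JSONObject] = {
    val resultList = new ListBuffer[JSONObject]()
    val conn = DriverManager.getConnection("jdbc:phoenix:hadoop1:2181")
    val stmt = conn.createStatement()
    val rs: ResultSet = stmt.executeQuery(sql)
    val md = rs.getMetaData
    while (rs.next()) {
      // Copy every column of the row into a JSON object
      val row = new JSONObject()
      for (i <- 1 to md.getColumnCount) {
        row.put(md.getColumnName(i), rs.getObject(i))
      }
      resultList += row
    }
    rs.close(); stmt.close(); conn.close()
    resultList.toList
  }
}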

1.4 Dashboard Metrics

1.4.1 Requirement

  • Use Spark Streaming to compute the recharge amount and order count per minute in real time, and write the results to Redis

1.4.2 Implementation

// Province code => province name lookup; the names in the raw text are quoted, so strip the quotes when parsing
val provinceText = "100=\"北京\"\n  200=\"广东\"\n  210=\"上海\"\n  220=\"天津\"\n  230=\"重庆\"\n  240=\"辽宁\"\n  250=\"江苏\"\n  270=\"湖北\"\n  280=\"四川\"\n  290=\"陕西\"\n  311=\"河北\"\n  351=\"山西\"\n  371=\"河南\"\n  431=\"吉林\"\n  451=\"黑龙江\"\n  471=\"内蒙古\"\n  531=\"山东\"\n  551=\"安徽\"\n  571=\"浙江\"\n  591=\"福建\"\n  731=\"湖南\"\n  771=\"广西\"\n  791=\"江西\"\n  851=\"贵州\"\n  871=\"云南\"\n  891=\"西藏\"\n  898=\"海南\"\n  931=\"甘肃\"\n  951=\"宁夏\"\n  971=\"青海\"\n  991=\"新疆\""
val province_Map = provinceText.split("\n").map(_.trim).map(s => (s.split("=")(0), s.split("=")(1).replaceAll("\"", ""))).toMap
/**
  * "bussinessRst": "0000",
  * "channelCode": "6900",
  * "chargefee": "1000",
  * "clientIp": "117.136.79.101",
  * "gateway_id": "WXPAY",
  * "interFacRst": "0000",
  * "logOutTime": "20170412030030067",
  * "orderId": "384663607178845909",
  * "payPhoneNo": "15015541313",
  * "phoneno": "15015541313",
  * "provinceCode": "200",
  * "rateoperateid": "1513",
  * "receiveNotifyTime": "20170412030030017",
  * "requestId": "20170412030007090581518228485394",
  * "retMsg": "接口调用成功",
  * "serverIp": "10.255.254.10",
  * "serverPort": "8714",
  * "serviceName": "payNotifyReq",
  * "shouldfee": "1000",
  * "srcChannel": "11",
  * "sysId": "01"
  */
val rowRDD = lines.map(obj => {

  val jsonObj = JSON.parseObject(obj)

  // Result code of the recharge ("0000" means success)
  val bussinessRst = jsonObj.getString("bussinessRst")

  // Recharge amount (counted only for successful recharges)
  val fee: Double = if ("0000".equals(bussinessRst))
    jsonObj.getDoubleValue("chargefee")
  else
    0.0

  // Whether the recharge succeeded
  val isSucc = if ("0000".equals(bussinessRst)) 1 else 0

  // Recharge start time (the requestId is prefixed with the request timestamp)
  val starttime = jsonObj.getString("requestId")

  // Recharge end time
  val receivetime = jsonObj.getString("receiveNotifyTime")

  // Province code => province name (fall back to the raw code if unmapped)
  val pcode = jsonObj.getString("provinceCode")
  val province = province_Map.getOrElse(pcode, pcode)

  // Time the recharge took (ms), only meaningful for successful recharges
  val costtime = if ("0000".equals(bussinessRst)) TimeUtils.costtime(starttime, receivetime) else 0L

  // (hour key, minute key, metric list, province)
  (
    starttime.substring(0, 10),             // yyyyMMddHH
    starttime.substring(0, 12),             // yyyyMMddHHmm
    List[Double](1, isSucc, fee, costtime), // order count, success count, amount, cost time
    province
  )

})

def kpi_general(baseData: RDD[(String, String, List[Double], String)]): Unit = {
  baseData.map(tp => (tp._2, tp._3)) // key by minute (yyyyMMddHHmm), value = metric list
    .reduceByKey((list1, list2) => {
      // Zip the two metric lists and add them element-wise
      list1.zip(list2).map(tp => tp._1 + tp._2)
    })
    .foreachPartition(partition => {
      val jedis = Jpools.getJedis
      partition.foreach(tp => {
        // Accumulate this batch's aggregates into one Redis hash per minute
        jedis.hincrBy("A-" + tp._1, "total", tp._2(0).toLong) // order count
        jedis.hincrBy("A-" + tp._1, "succ", tp._2(1).toLong)  // successful recharges
        jedis.hincrByFloat("A-" + tp._1, "money", tp._2(2))   // recharge amount
        jedis.hincrBy("A-" + tp._1, "cost", tp._2(3).toLong)  // total cost time (ms)
        // Keep each key for 48 hours
        jedis.expire("A-" + tp._1, 48 * 60 * 60)
      })
      jedis.close()
    })
}
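
Jpools and TimeUtils are likewise referenced without being shown. A minimal sketch, assuming Redis runs on hadoop1:6379 and that both timestamps start with a 17-digit yyyyMMddHHmmssSSS prefix (as in the sample record above):

import java.text.SimpleDateFormat
import redis.clients.jedis.{Jedis, JedisPool, JedisPoolConfig}

object Jpools {
  private val poolConfig = new JedisPoolConfig()
  poolConfig.setMaxTotal(100) // upper bound on pooled connections
  poolConfig.setMaxIdle(10)
  private lazy val jedisPool = new JedisPool(poolConfig, "hadoop1", 6379)

  def getJedis: Jedis = jedisPool.getResource
}

object TimeUtils {
  // Millisecond difference between two yyyyMMddHHmmssSSS-prefixed timestamps
  def costtime(start: String, end: String): Long = {
    val fmt = new SimpleDateFormat("yyyyMMddHHmmssSSS") // not thread-safe, so created per call
    fmt.parse(end.substring(0, 17)).getTime - fmt.parse(start.substring(0, 17)).getTime
  }
}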