A review of Spark Streaming and Kafka integration

What is the difference between Spark Streaming's Receiver-based approach and the Direct approach?

With the Receiver approach, data is received at a fixed interval and buffered in memory, using Kafka's high-level consumer API; offsets are maintained automatically and processing only starts once the batch interval is reached. It is less efficient and data can easily be lost.

With the Direct approach, Spark connects directly to the Kafka partitions and consumes with Kafka's lower-level (simple) API. It is more efficient, but you have to maintain the offsets yourself.
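
A minimal sketch contrasting the two APIs, assuming the spark-streaming-kafka 0.8 integration that the demo below uses; the topic name, group id, and host addresses are placeholders:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object ReceiverVsDirect {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ReceiverVsDirect").setMaster("local[4]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Receiver-based: Kafka's high-level consumer tracks offsets in ZooKeeper automatically
    val receiverStream = KafkaUtils.createStream(
      ssc, "node-1:2181,node-2:2181,node-3:2181", "g1", Map("my-topic" -> 1))

    // Direct: each RDD partition maps 1:1 to a Kafka partition; the application manages offsets
    val directStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc,
      Map("metadata.broker.list" -> "node-1:9092,node-2:9092,node-3:9092"),
      Set("my-topic"))

    directStream.map(_._2).print()

    ssc.start()
    ssc.awaitTermination()
  }
}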

Start Kafka

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /bigdata/kafka_2.11-0.10.2.1/config/server.properties 

Stop Kafka

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-server-stop.sh  

Create a topic

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --create --zookeeper node-1.xiaoniu.com:2181,node-2.xiaoniu.com:2181,node-3.xiaoniu.com:2181 --replication-factor 3 --partitions 3 --topic my-topic 

List all topics

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --list --zookeeper node-1.xiaoniu.com:2181,node-2.xiaoniu.com:2181,node-3.xiaoniu.com:2181

Describe a topic

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --describe --zookeeper node-1.xiaoniu.com:2181,node-2.xiaoniu.com:2181,node-3.xiaoniu.com:2181 --topic my-topic

Start a console producer

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-console-producer.sh --broker-list node-1.xiaoniu.com:9092,node-2.xiaoniu.com:9092,node-3.xiaoniu.com:9092 --topic my-topic

Start a console consumer

/bigdata/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh --zookeeper node-1.xiaoniu.com:2181,node-2.xiaoniu.com:2181,node-3.xiaoniu.com:2181 --topic my-topic --from-beginning

A small demo of Spark Streaming connecting directly to Kafka partitions and consuming Kafka data

Main program

package cn.edu360.day08

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import kafka.utils.{ZKGroupTopicDirs, ZkUtils}
import org.I0Itec.zkclient.ZkClient
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}
import org.apache.spark.streaming.{Duration, StreamingContext}

object OrderCount {

  def main(args: Array[String]): Unit = {

    // specify the consumer group name
    val group = "g1"

    val conf = new SparkConf().setAppName("KafkaDirectWordCount").setMaster("local[4]")

    val ssc = new StreamingContext(conf, Duration(5000))

    val broadcastRef = IPUtils.broadIpRules(ssc,"E:\\BaiduNetdiskDownload\\小牛大数据\\spark资料\\课件与代码04\\ip\\ip.txt")

    // specify the topic to consume
    val topic = "oders"

    // specify the Kafka broker list (the Spark Streaming tasks connect directly to the Kafka partitions and consume with the lower-level API, which is more efficient)
    val brokerList = "master:9092,slave1:9092,slave2:9092"
    // specify the ZooKeeper address, used later to update the consumed offsets (Redis or MySQL could also be used to store offsets)
    val zkQuorm = "master:2181,slave1:2181,slave2:2181"
    // set of topic names used when creating the stream; Spark Streaming can consume several topics at once
    val topics: Set[String] = Set(topic)
    // create a ZKGroupTopicDirs object; it determines the ZooKeeper directory where the offsets are saved
    val topicDirs = new ZKGroupTopicDirs(group, topic)
    // get the ZooKeeper path, e.g. "/consumers/g1/offsets/oders"
    val zkTopicPath = s"${topicDirs.consumerOffsetDir}"
    //val zkTopicPath = topicDirs.consumerOffsetDir

    // prepare the Kafka parameters
    val kafkaParams = Map(
      "metadata.broker.list" -> brokerList,
      "group.id" -> group,
      // start reading from the earliest offset when no saved offset is found
      "auto.offset.reset" -> kafka.api.OffsetRequest.SmallestTimeString
    )

    // create a ZooKeeper client from the zk host:port list
    // it can read offset data from ZooKeeper and update the offsets
    val zkClient = new ZkClient(zkQuorm)
    // check whether the path has child nodes (child nodes exist if we previously saved offsets for the partitions)
    val children = zkClient.countChildren(zkTopicPath)

    var kafkaStream: InputDStream[(String, String)] = null

    // if offsets are saved in ZooKeeper, use them as the starting position of the kafkaStream
    var fromOffsets: Map[TopicAndPartition, Long] = Map()

    // if offsets have been saved before
    if (children > 0) {

      for (i <- 0 until children) {
        val partitionOffset = zkClient.readData[String](s"$zkTopicPath/${i}")
        // topic and partition combined
        val tp = TopicAndPartition(topic, i)

        // add each partition's offset to fromOffsets
        fromOffsets += (tp -> partitionOffset.toLong)
      }

      // this transforms each Kafka message into a (key, message) tuple
      val messageHandler = (mmd :MessageAndMetadata[String,String]) =>(mmd.key(),mmd.message())

      // create the direct DStream via KafkaUtils (fromOffsets tells it to resume consuming from the saved offsets)
      kafkaStream = KafkaUtils.createDirectStream[String,String,StringDecoder,StringDecoder,(String,String)](ssc,kafkaParams,fromOffsets,messageHandler)
    }else{
      kafkaStream = KafkaUtils.createDirectStream[String,String,StringDecoder,StringDecoder](ssc,kafkaParams,topics)
    }

    // offset ranges
    var offsetRanges = Array[OffsetRange]()

    // with the direct approach the offsets can only be obtained from the KafkaRDD, so we cannot apply DStream transformations first
    // instead we call foreachRDD on kafkaStream, read the RDD's offsets, and then operate on the RDD
    // this iterates over the KafkaRDDs in the KafkaDStream one batch at a time
    // if we want to accumulate data with the direct approach, the accumulation has to happen in an external store
    // the code directly inside kafkaStream.foreachRDD runs on the Driver (the RDD operations themselves run on the Executors)
    kafkaStream.foreachRDD(kafkaRDD =>{
      // only a KafkaRDD can be cast to HasOffsetRanges to obtain the offset ranges
      offsetRanges = kafkaRDD.asInstanceOf[HasOffsetRanges].offsetRanges
      val lines: RDD[String] = kafkaRDD.map(_._2)

      // parse the data
      val fields: RDD[Array[String]] = lines.map(_.split(" "))

      // compute the total transaction amount
      CalculateUtils.calculateIncome(fields)
      // compute the amount per product category
      CalculateUtils.calculateItem(fields)
      // compute the amount per region
      CalculateUtils.calculateZone(fields, broadcastRef)


      // update the offset once per partition
      for(o <- offsetRanges){
        // e.g. /consumers/g1/offsets/oders/0
        val zkPath = s"${topicDirs.consumerOffsetDir}/${o.partition}"
        // save this partition's offset to ZooKeeper
        // e.g. write the value 20000 to /consumers/g1/offsets/oders/0
        ZkUtils.updatePersistentPath(zkClient,zkPath,o.untilOffset.toString)
      }

    })

    ssc.start()
    ssc.awaitTermination()

  }



}
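
The post does not show the format of the order records sent to Kafka. Judging from the field indices used in CalculateUtils (arr(1) = client IP, arr(2) = product category, arr(4) = order amount), each record is presumably a space-separated line; a purely hypothetical example, where field 0 and field 3 are guesses (e.g. order id and quantity):

1001 111.198.38.185 phone 2 1999.0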

 Utility classes

package cn.edu360.day08

import cn.edu360.day04.MyUtils
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import redis.clients.jedis.Jedis

object CalculateUtils {

  def calculateIncome(fields:RDD[Array[String]]) = {
    // write the computed result to Redis
    val priceRDD: RDD[Double] = fields.map(arr => {
      val price: Double = arr(4).toDouble
      price
    })
    // reduce is an action and returns the result to the Driver
    // this is the total amount of the current batch
     val sum: Double = priceRDD.reduce(_+_)
    // get a Jedis connection
    val conn: Jedis = JedisConnectionPool.getConnection()
    // add the current batch's total to the historical total
    conn.incrByFloat(Constant.TOTAL_INCOME,sum)
    // release the connection
    conn.close()

  }

  // compute the transaction amount per product category
  def calculateItem(fields:RDD[Array[String]]) = {
    // the call to map on fields is made on the Driver side (the mapping function itself runs on the Executors)
    val itemAndPrice: RDD[(String, Double)] = fields.map(arr => {
      // product category
      val item = arr(2)
      // amount
      val price = arr(4).toDouble

      (item, price)

    })
    // aggregate by product category
    val reduced: RDD[(String, Double)] = itemAndPrice.reduceByKey(_+_)

    // write the current batch's result to Redis
    // foreachPartition is an action
    reduced.foreachPartition(part =>{
      // get a Jedis connection
      // this connection is actually obtained inside the Executor
      // JedisConnectionPool has only one instance per Executor process
      val conn: Jedis = JedisConnectionPool.getConnection()
      part.foreach(t =>{
        conn.incrByFloat(t._1,t._2)
      })
      // close the connection only after the current partition has been fully written
      conn.close()
    })




  }

  // determine the region from the IP, then aggregate by province
  def calculateZone(fields:RDD[Array[String]],broadcastRef:Broadcast[Array[(Long, Long, String)]]) = {

    val zoneAndPrice: RDD[(String, Double)] = fields.map(arr => {
      val ip = arr(1)
      val price = arr(4).toDouble
      val ipNum = MyUtils.ip2Long(ip)
      // get all the rules from the broadcast variable inside the Executor
      val allRules: Array[(Long, Long, String)] = broadcastRef.value
      // binary search
      val index = MyUtils.binarySearch(allRules, ipNum)
      var province = "未知"

      if (index != -1) {
        province = allRules(index)._3
      }
      // (province, order amount)
      (province, price)

    })
    // aggregate by province
     val reduced: RDD[(String, Double)] = zoneAndPrice.reduceByKey(_+_)

    reduced.foreachPartition(part =>{
      // get a Jedis connection
      // this connection is actually obtained inside the Executor
      // JedisConnectionPool has only one instance per Executor process
      val conn: Jedis = JedisConnectionPool.getConnection()
      part.foreach(t =>{
        conn.incrByFloat(t._1,t._2)
      })
      // close the connection only after the current partition has been fully written
      conn.close()
    })

  }

}
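
The demo also references a JedisConnectionPool and a Constant object that are not shown in the post. A minimal sketch of what they might look like, assuming a local Redis at localhost:6379 (host, port, pool sizes, and the key string are placeholders):

package cn.edu360.day08

import redis.clients.jedis.{Jedis, JedisPool, JedisPoolConfig}

object Constant {
  // Redis key under which the total income is accumulated (the key string is an assumption)
  val TOTAL_INCOME = "TOTAL_INCOME"
}

object JedisConnectionPool {
  private val config = new JedisPoolConfig()
  config.setMaxTotal(20)
  config.setMaxIdle(10)
  // connection details are assumptions; adjust to the actual Redis deployment
  private val pool = new JedisPool(config, "localhost", 6379)

  // each call borrows a connection from the pool; callers close() it to return it
  def getConnection(): Jedis = pool.getResource
}

Below is the MyUtils helper from day04 that the demo imports for the IP lookup.
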
package cn.edu360.day04

import java.sql.{Connection, DriverManager}
import scala.io.{BufferedSource, Source}

object MyUtils {

  def ip2Long(ip: String): Long = {
    val fragments = ip.split("[.]")
    var ipNum = 0L
    for (i <- 0 until fragments.length){
      ipNum =  fragments(i).toLong | ipNum << 8L
    }
    ipNum
  }


  def readRules(path: String): Array[(Long, Long, String)] = {
    // read the IP rules
    val bf: BufferedSource = Source.fromFile(path)
    val lines: Iterator[String] = bf.getLines()
    // parse the IP rules and keep them in memory
    val rules: Array[(Long, Long, String)] = lines.map(line => {
      val fields = line.split("[|]")
      val startNum = fields(2).toLong
      val endNum = fields(3).toLong
      val province = fields(6)
      (startNum, endNum, province)
    }).toArray
    rules
  }

  def binarySearch(lines: Array[(Long, Long, String)], ip: Long) : Int = {
    var low = 0
    var high = lines.length - 1
    while (low <= high) {
      val middle = (low + high) / 2
      if ((ip >= lines(middle)._1) && (ip <= lines(middle)._2))
        return middle
      if (ip < lines(middle)._1)
        high = middle - 1
      else {
        low = middle + 1
      }
    }
    -1
  }

  def data2Mysql(it: Iterator[(String, Int)]):Unit ={
    val conn: Connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8", "root", "admin")
    val pstm = conn.prepareStatement("INSERT INTO access_log VALUES (?, ?)")
    // take each record of the partition and write it out
    it.foreach(tp =>{
      pstm.setString(1,tp._1)

      pstm.setInt(2,tp._2)

      pstm.executeUpdate()

    })

    if(pstm != null){
      pstm.close()
    }
    if(conn != null){
      conn.close()
    }
  }

  def main(args: Array[String]): Unit = {

    // load the rules into memory
    val rules: Array[(Long, Long, String)] = readRules("E:\\BaiduNetdiskDownload\\小牛大数据\\spark资料\\课件与代码04\\ip\\ip.txt")

    // convert the IP address to its decimal form
    val ipNum = ip2Long("111.198.38.185")

    // look up the province
    val index: Int = binarySearch(rules,ipNum)

    val tp = rules(index)

    val province = tp._3

    println(index)
    println(province)


  }

}
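
The main program also calls IPUtils.broadIpRules, which is not included in the post either. A minimal sketch, under the assumption that it simply reads the IP rules with MyUtils.readRules on the Driver and broadcasts them to the Executors:

package cn.edu360.day08

import cn.edu360.day04.MyUtils
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.streaming.StreamingContext

object IPUtils {
  // load the IP rules on the Driver and broadcast them so every Executor gets a copy
  def broadIpRules(ssc: StreamingContext, path: String): Broadcast[Array[(Long, Long, String)]] = {
    val rules: Array[(Long, Long, String)] = MyUtils.readRules(path)
    ssc.sparkContext.broadcast(rules)
  }
}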
