Reading Kafka Data with Spark: Spark Streaming and Kafka Offset Management Based on the Direct Approach (Part 1)


In the earlier article 《解析SparkStreaming和Kafka集成的两种方式》 (Two Ways of Integrating Spark Streaming with Kafka), we covered the two main integration approaches, the Receiver-based Approach and the Direct Approach, compared their strengths and weaknesses, and summarized which approach is supported for each Spark/Kafka version combination.


This article looks at how to manage offsets yourself when Spark Streaming consumes Kafka via the Direct Approach.

The entry point for Spark Streaming to receive data via the Direct Approach is KafkaUtils.createDirectStream. When this method is called, it first creates a KafkaCluster:

val kc = new KafkaCluster(kafkaParams)

KafkaCluster is responsible for talking to Kafka; it is the class that fetches the Kafka partition information. createDirectStream then creates the DirectKafkaInputDStream. Each DirectKafkaInputDStream corresponds to one topic, and each DirectKafkaInputDStream also holds a KafkaCluster instance.
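
For reference, the plain entry point (without any custom offset management) looks roughly like the following. This is only a minimal sketch for the 0.8 direct API, assuming ssc is an existing StreamingContext; the broker list and topic name are placeholders:

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Minimal sketch: start from wherever auto.offset.reset points, no stored offsets involved
val kafkaParams = Map[String, String](
  "metadata.broker.list" -> "broker1:9092,broker2:9092",  // placeholder brokers
  "auto.offset.reset"    -> "smallest"
)
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("topicA"))                         // placeholder topic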

When a batch interval arrives, the compute method of DirectKafkaInputDStream is invoked. It performs the following steps (a small sketch follows the list):

Obtain the untilOffset of each corresponding Kafka partition, which fixes the offset range of the data to fetch
Build a KafkaRDD instance; within a single batch, DirectKafkaInputDStream and KafkaRDD are in one-to-one correspondence
Report the relevant offset information to the InputInfoTracker
Return the RDD
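
As a quick illustration of that one-to-one correspondence, the offset interval that compute determined for every Kafka partition can be read back from each batch RDD on the driver. This is only a minimal sketch for the 0.8 direct API, assuming stream is the InputDStream returned by createDirectStream above:

import org.apache.spark.streaming.kafka.HasOffsetRanges

stream.foreachRDD { rdd =>
  // Each batch RDD is backed by a KafkaRDD and exposes one [fromOffset, untilOffset) range per Kafka partition
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach { r =>
    println(s"${r.topic}-${r.partition}: [${r.fromOffset}, ${r.untilOffset})")
  }
}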
For how KafkaRDD partitions map to Kafka partitions, see this article:

《重要 | Spark分区并行度决定机制》 (How Spark Determines Partition Parallelism)

Below is a hands-on walkthrough of integrating Spark Streaming with Kafka via the Direct Approach while managing the offsets ourselves.

1. Business logic

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaManager
import org.apache.spark.streaming.{Seconds, StreamingContext}
 
/**
 * @Author: 微信公众号-大数据学习与分享
 */
object SparkStreamingKafkaDirect {
 
  def main(args: Array[String]) {
    if (args.length < 3) {
      System.err.println(
        s"""
           |Usage: SparkStreamingKafkaDirect <brokers> <topics> <groupid>
           |  <brokers> is a list of one or more Kafka brokers
           |  <topics> is a list of one or more kafka topics to consume from
           |  <groupid> is a consumer group
           |
        """.stripMargin)
      System.exit(1)
    }
 
    val Array(brokers, topics, groupId) = args
 
    val sparkConf = new SparkConf().setAppName("DirectKafka")
    sparkConf.setMaster("local[*]")
    sparkConf.set("spark.streaming.kafka.maxRatePerPartition", "10")
    sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
 
    val ssc = new StreamingContext(sparkConf, Seconds(6))
 
    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> brokers,
      "group.id" -> groupId,
      "auto.offset.reset" -> "smallest"
    )
 
    val km = new KafkaManager(kafkaParams)
 
    val streams = km.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)
 
    streams.foreachRDD(rdd => {
      if (!rdd.isEmpty()) {
        // Process the messages first (placeholder for the real business logic)
        rdd.foreach { case (_, value) => println(value) }
 
        // Only after processing succeeds, commit the offsets
        km.updateZKOffsets(rdd)
      }
    })
 
    ssc.start()
    ssc.awaitTermination()
  }
}


2. Core offset management logic
2.1 Using ZooKeeper

Note: the custom KafkaManager must live in the package org.apache.spark.streaming.kafka, because the KafkaCluster class it relies on is package-private in older versions of the spark-streaming-kafka module.

package org.apache.spark.streaming.kafka
 
import scala.collection.mutable
import scala.reflect.ClassTag
 
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.Decoder
import org.apache.spark.SparkException
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka.KafkaCluster.LeaderOffset
 
/**
 * @Author: 微信公众号-大数据学习与分享
 * Direct Spark Streaming / Kafka integration: managing the offsets ourselves
 */
class KafkaManager(val kafkaParams: Map[String, String]) extends Serializable {
  private val kc = new KafkaCluster(kafkaParams)
 
  def createDirectStream[
  K: ClassTag,
  V: ClassTag,
  KD <: Decoder[K] : ClassTag,
  VD <: Decoder[V] : ClassTag](ssc: StreamingContext,
                               kafkaParams: Map[String, String],
                               topics: Set[String]): InputDStream[(K, V)] = {
    val groupId = kafkaParams.get("group.id").get
 
    // Before reading offsets from ZooKeeper, reconcile them with what actually exists in Kafka
    setOrUpdateOffsets(topics, groupId)
 
    // Read the offsets from ZooKeeper and start consuming messages from there
    val messages = {
      // Get the partitions. Either is used for error handling: Left carries the error, Right the normal result
      val partitionsE: Either[Err, Set[TopicAndPartition]] = kc.getPartitions(topics)
      if (partitionsE.isLeft) throw new SparkException(s"get kafka partition failed:${partitionsE.left.get}")
 
      val partitions = partitionsE.right.get
 
      val consumerOffsetsE = kc.getConsumerOffsets(groupId, partitions)
      if (consumerOffsetsE.isLeft) throw new SparkException(s"get kafka consumer offsets failed:${consumerOffsetsE.left.get}")
      val consumerOffsets = consumerOffsetsE.right.get
 
      KafkaUtils.createDirectStream[K, V, KD, VD, (K, V)](ssc, kafkaParams, consumerOffsets, (mmd: MessageAndMetadata[K, V]) => (mmd.key, mmd.message))
    }
    messages
  }
 
  /** Before creating the stream, reconcile the consumer offsets with the actual state of the Kafka cluster */
  def setOrUpdateOffsets(topics: Set[String], groupId: String): Unit = {
    topics.foreach { topic =>
      var hasConsumed = true
      // Get the partitions of this topic
      val partitionsE = kc.getPartitions(Set(topic))
      if (partitionsE.isLeft) throw new SparkException(s"get kafka partition failed:${partitionsE.left.get}")
 
      // Partition set fetched successfully
      val partitions = partitionsE.right.get
      // Get the offsets this consumer group has committed
      val consumerOffsetsE = kc.getConsumerOffsets(groupId, partitions)
      if (consumerOffsetsE.isLeft) hasConsumed = false
 
      if (hasConsumed) {
        val earliestLeaderOffsetsE = kc.getEarliestLeaderOffsets(partitions)
        if (earliestLeaderOffsetsE.isLeft) throw new SparkException(s"get earliest leader offsets failed: ${earliestLeaderOffsetsE.left.get}")
 
        val earliestLeaderOffsets: Map[TopicAndPartition, KafkaCluster.LeaderOffset] = earliestLeaderOffsetsE.right.get
        val consumerOffsets: Map[TopicAndPartition, Long] = consumerOffsetsE.right.get
 
        // Collect the offsets that need to be reset because they fell behind Kafka's earliest retained offset
        val offsets: mutable.HashMap[TopicAndPartition, Long] = mutable.HashMap[TopicAndPartition, Long]()
        consumerOffsets.foreach { case (tp, n) =>
          val earliestLeaderOffset = earliestLeaderOffsets(tp).offset
          //offsets += (tp -> n)
          if (n < earliestLeaderOffset) {
            println("consumer group:" + groupId + ", topic:" + tp.topic + ", partition:" + tp.partition +
              " offset is out of date, resetting it to " + earliestLeaderOffset)
            offsets += (tp -> earliestLeaderOffset)
          }
          println(n, earliestLeaderOffset, kc.getLatestLeaderOffsets(partitions).right)
        }
        println("map...." + offsets)
        if (offsets.nonEmpty) kc.setConsumerOffsets(groupId, offsets.toMap)
 
        //        val cs = consumerOffsetsE.right.get
        //        val lastest = kc.getLatestLeaderOffsets(partitions).right.get
        //        val earliest = kc.getEarliestLeaderOffsets(partitions).right.get
        //        var newCS: Map[TopicAndPartition, Long] = Map[TopicAndPartition, Long]()
        //        cs.foreach { f =>
        //          val max = lastest.get(f._1).get.offset
        //          val min = earliest.get(f._1).get.offset
        //          newCS += (f._1 -> f._2)
        //          // If the offset recorded in ZooKeeper no longer exists in Kafka (it has expired), start consuming from Kafka's current smallest offset
        //          if (f._2 < min) {
        //            newCS += (f._1 -> min)
        //          }
        //          println(max + "-----" + f._2 + "--------" + min)
        //        }
        //        if (newCS.nonEmpty) kc.setConsumerOffsets(groupId, newCS)
      } else {
        println("没有消费过....")
        val reset = kafkaParams.get("auto.offset.reset").map(_.toLowerCase)
 
        val leaderOffsets: Map[TopicAndPartition, LeaderOffset] = if (reset == Some("smallest")) {
          val leaderOffsetsE = kc.getEarliestLeaderOffsets(partitions)
          if (leaderOffsetsE.isLeft) throw new SparkException(s"get earliest leader offsets failed: ${leaderOffsetsE.left.get}")
          leaderOffsetsE.right.get
        } else {
          //largest
          val leaderOffsetsE = kc.getLatestLeaderOffsets(partitions)
          if (leaderOffsetsE.isLeft) throw new SparkException(s"get latest leader offsets failed: ${leaderOffsetsE.left.get}")
          leaderOffsetsE.right.get
        }
        val offsets = leaderOffsets.map { case (tp, lo) => (tp, lo.offset) }
        kc.setConsumerOffsets(groupId, offsets)
 
        /*
        val reset = kafkaParams.get("auto.offset.reset").map(_.toLowerCase)
    val result = for {
      topicPartitions <- kc.getPartitions(topics).right
      leaderOffsets <- (if (reset == Some("smallest")) {
        kc.getEarliestLeaderOffsets(topicPartitions)
      } else {
        kc.getLatestLeaderOffsets(topicPartitions)
      }).right
    } yield {
      leaderOffsets.map { case (tp, lo) =>
          (tp, lo.offset)
      }
    }
        */
 
      }
    }
  }
 
  /** Commit the consumed offsets back to ZooKeeper */
  def updateZKOffsets(rdd: RDD[(String, String)]): Unit = {
    val groupId = kafkaParams("group.id")
    val offsetList = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    offsetList.foreach { offset =>
      val topicAndPartition = TopicAndPartition(offset.topic, offset.partition)
      val o = kc.setConsumerOffsets(groupId, Map((topicAndPartition, offset.untilOffset)))
      if (o.isLeft) println(s"Error updating the offset to Kafka cluster: ${o.left.get}")
    }
  }
}
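
Since the committed offsets live under the consumer group's node in ZooKeeper, it is easy to sanity-check them after a few batches by reading them back through KafkaCluster. A minimal sketch, assuming kafkaParams is the same parameter map used above; the topic and group names are placeholders, and the snippet must also sit in the org.apache.spark.streaming.kafka package so that KafkaCluster is visible:

val kc = new KafkaCluster(kafkaParams)
// Read back the offsets this group has committed for each partition of the topic
val partitions = kc.getPartitions(Set("topicA")).right.get       // placeholder topic
kc.getConsumerOffsets("my-group", partitions) match {            // placeholder group id
  case Right(offsets) => offsets.foreach { case (tp, o) => println(s"$tp -> $o") }
  case Left(err)      => println(s"failed to read committed offsets: $err")
}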


Follow the WeChat Official Account 大数据学习与分享 for more technical content.
 
