Spark 2.x (Scala 2.11): Two Streaming Approaches + Kafka

Since Spark 2.x introduced Structured Streaming, data processing is gradually shifting toward the DataFrame/Dataset API. The following shows how Spark 2.x integrates with Kafka 0.10+.

Structured Streaming + Kafka

  1. Add the dependency
groupId = org.apache.spark
artifactId = spark-sql-kafka-0-10_2.11
version = 2.1.1

To make the package dependencies easier to see, here is the sbt file for my project:

name := "spark-test"
version := "1.0"
scalaVersion := "2.11.7"
// https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.1.1" % "provided"
// https://mvnrepository.com/artifact/org.apache.spark/spark-mllib_2.11
libraryDependencies += "org.apache.spark" % "spark-mllib_2.11" % "2.1.1" % "provided"
// https://mvnrepository.com/artifact/org.apache.spark/spark-streaming_2.11
libraryDependencies += "org.apache.spark" % "spark-streaming_2.11" % "2.1.1" % "provided"
// https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.7.3"
// https://mvnrepository.com/artifact/mysql/mysql-connector-java
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.38"
// https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11
libraryDependencies += "org.apache.kafka" % "kafka_2.11" % "0.10.2.1"
//libraryDependencies += "org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1"
libraryDependencies += "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.1.1"
  2. Connect Structured Streaming to Kafka
import org.apache.spark.sql.SparkSession

def main(args: Array[String]): Unit = {

    val spark = SparkSession
      .builder()
      .appName("Spark structured streaming Kafka example")
      //      .master("local[2]")
      .getOrCreate()

    // Source: one row per Kafka record, with key/value as binary columns
    val inputstream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "127.0.0.1:9092")
      .option("subscribe", "testss")
      .load()

    import spark.implicits._
    // key/value arrive as binary, so cast them to strings before mapping
    val query = inputstream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .as[(String, String)].map(kv => kv._1 + " " + kv._2)
      .writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }
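The console sink above needs no extra configuration. For a query that should pick up where it left off after a restart, a checkpoint location is normally set on the writer; a minimal sketch, assuming the same inputstream (the path is a placeholder):

    // Sketch: checkpointLocation lets a restarted query resume from its saved offsets.
    // The path below is a placeholder, not from the original example.
    val checkpointedQuery = inputstream.selectExpr("CAST(value AS STRING)")
      .writeStream
      .outputMode("append")
      .format("console")
      .option("checkpointLocation", "/tmp/structured-streaming/kafka-checkpoint")
      .start()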

The schema of the source is as follows:

Column           Type
key              binary
value            binary
topic            string
partition        int
offset           long
timestamp        long
timestampType    int
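
Because key and value arrive as binary, they are usually cast (or otherwise deserialized) with DataFrame operations before use; the metadata columns can be selected alongside them. A minimal sketch against the inputstream defined above:

    // Sketch: cast key/value to strings and keep the per-record Kafka metadata columns
    val records = inputstream.selectExpr(
      "CAST(key AS STRING) AS key",
      "CAST(value AS STRING) AS value",
      "topic", "partition", "offset", "timestamp")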

Configurable options

Option                     Value                                          Meaning
assign                     JSON string {"topicA":[0,1],"topicB":[2,4]}    The exact TopicPartitions to consume
subscribe                  A comma-separated list of topics               The topics to consume
subscribePattern           Java regex string                              A regular expression matching the topics to consume
kafka.bootstrap.servers    A comma-separated list of host:port            The Kafka brokers to connect to

assign, subscribe and subscribePattern are the three subscription modes; only one of them may be set for a given query.
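
For reference, a minimal sketch of the other two subscription modes, reusing the SparkSession from the example above (topic names are placeholders):

    // Sketch: subscribe to every topic whose name matches a regex
    val byPattern = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "127.0.0.1:9092")
      .option("subscribePattern", "test.*")
      .load()

    // Sketch: consume an explicit set of topic partitions
    val byAssign = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "127.0.0.1:9092")
      .option("assign", """{"topicA":[0,1],"topicB":[2,4]}""")
      .load()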

Options that cannot be set

- group.id: the Kafka source automatically creates a unique group for each query.
- auto.offset.reset: set the start position with the startingOffsets option instead (see the sketch below); Structured Streaming tracks offsets for the stream itself so that it can honor its exactly-once guarantee.
- key.deserializer: keys are always read as byte arrays (ByteArrayDeserializer); deserialize them explicitly with DataFrame operations.
- value.deserializer: values are always read as byte arrays (ByteArrayDeserializer); deserialize them explicitly with DataFrame operations.
- enable.auto.commit: the Kafka source does not commit any offsets back to Kafka.
- interceptor.classes: consumer interceptors are not supported, since the source always reads keys and values as byte arrays.
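
Instead of auto.offset.reset, the starting position of a query is set with the startingOffsets option. A minimal sketch, reusing the topic from the example above:

    // Sketch: accepted values are "earliest", "latest" (the default for streaming
    // queries) or a per-partition JSON string, where -2 means earliest and -1 latest.
    val fromEarliest = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "127.0.0.1:9092")
      .option("subscribe", "testss")
      .option("startingOffsets", "earliest")
      // per-partition form: .option("startingOffsets", """{"testss":{"0":23,"1":-2}}""")
      .load()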

Spark Streaming (DStream) + Kafka

  1. Consume from the latest offsets

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.TaskContext
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils, OffsetRange}

    def main(args: Array[String]): Unit = {
        val kafkaParams = Map[String, Object](
          "bootstrap.servers" -> "localhost:9092",
          "key.deserializer" -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id" -> "use_a_separate_group_id_for_each_stream",
          "auto.offset.reset" -> "latest",
          "enable.auto.commit" -> (false: java.lang.Boolean)
        )

        // OpContext.sc is the author's pre-built SparkContext
        val ssc = new StreamingContext(OpContext.sc, Seconds(2))
        val topics = Array("test")
        val stream = KafkaUtils.createDirectStream[String, String](
          ssc,
          PreferConsistent,
          Subscribe[String, String](topics, kafkaParams)
        )

        stream.foreachRDD(rdd => {
          // Offset ranges are only available on the RDDs produced by the direct stream
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
          rdd.foreachPartition(iter => {
            val o: OffsetRange = offsetRanges(TaskContext.get.partitionId)
            println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
          })
          // Commit the offsets back to Kafka once the batch has been processed
          stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
        })

    //    stream.map(record => (record.key, record.value)).print(1)
        ssc.start()
        ssc.awaitTermination()
      }
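
    Before ssc.start() is called, the DStream can be transformed like any other. A minimal sketch (not part of the original example) of a per-batch word count over the record values, assuming the same stream as above:

        // Sketch: count words in each 2-second batch; record.value is the message payload
        val counts = stream
          .map(record => record.value)
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.print()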
  2. Consume from specified offsets

    // Same imports as above, plus:
    import org.apache.kafka.common.TopicPartition
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Assign

    def main(args: Array[String]): Unit = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "use_a_separate_group_id_for_each_stream",
        //      "auto.offset.reset" -> "latest",
        "enable.auto.commit" -> (false: java.lang.Boolean)
      )
      val ssc = new StreamingContext(OpContext.sc, Seconds(2))
      // Start partition 0 of topic "test" at an explicit offset
      val fromOffsets = Map(new TopicPartition("test", 0) -> 1100449855L)
      val stream = KafkaUtils.createDirectStream[String, String](
        ssc,
        PreferConsistent,
        Assign[String, String](fromOffsets.keys.toList, kafkaParams, fromOffsets)
      )

      stream.foreachRDD(rdd => {
        val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
        for (o <- offsetRanges) {
          println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
        }
        stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      })

      //    stream.map(record => (record.key, record.value)).print(1)
      ssc.start()
      ssc.awaitTermination()
    }
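
    The starting offset above (1100449855L) is hard-coded. One way to avoid that is to ask Kafka for the group's last committed offset and fall back to a default when none exists. A minimal sketch using the plain Kafka consumer API (the helper name and the fallback value are assumptions, not from the original post):

      // Sketch: read the last committed offset for a group/partition from Kafka.
      // Returns None if the group has never committed anything for that partition.
      import java.util.Properties
      import org.apache.kafka.clients.consumer.KafkaConsumer

      def committedOffset(bootstrap: String, groupId: String, tp: TopicPartition): Option[Long] = {
        val props = new Properties()
        props.put("bootstrap.servers", bootstrap)
        props.put("group.id", groupId)
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        val consumer = new KafkaConsumer[String, String](props)
        try Option(consumer.committed(tp)).map(_.offset())
        finally consumer.close()
      }

      // Usage (hypothetical): fall back to 0L when nothing has been committed yet
      // val fromOffsets = Map(new TopicPartition("test", 0) ->
      //   committedOffset("localhost:9092", "use_a_separate_group_id_for_each_stream",
      //     new TopicPartition("test", 0)).getOrElse(0L))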