DStreams in Spark Streaming

I. DStream input: there are two input approaches, receiver-based and direct; the direct approach is generally used

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
/*
    auto.offset.reset
    earliest: use the committed offset for this group if one exists (stored in Kafka for the 0.10 consumer, not ZooKeeper); otherwise start consuming from the earliest offset
    latest:   use the committed offset if one exists; otherwise start from the latest offset
    none:     use the committed offset if one exists; otherwise throw an error
 */
object KafkaDirectorDemo {
  def main(args: Array[String]): Unit = {
    //Build the SparkConf and StreamingContext objects
    val conf = new SparkConf().setAppName("Kafka_director").setMaster("local")
    val ssc = new StreamingContext(conf,Seconds(5))
    //Set a checkpoint directory so that stateful/cumulative computations can be used
//    ssc.checkpoint("hdfs://192.168.25.101:9000/checkpoint")

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "CentOS1:9092,CentOS2:9092,CentOS3:9092",//addresses used to bootstrap the connection to the Kafka cluster
      "key.deserializer" -> classOf[StringDeserializer],//key deserializer
      "value.deserializer" -> classOf[StringDeserializer],//value deserializer
      "group.id" -> "group1",//identifies the consumer group this consumer belongs to
      "auto.offset.reset" -> "latest",//latest: reset to the latest offset when no committed offset exists
      "enable.auto.commit" -> (false: java.lang.Boolean)//if true, the consumer's offsets are committed automatically in the background
    )
    //Kafka topics to read from
    val topics = Array("first", "second")
    //Requires the following Maven dependency:
    /*
    <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.3.1</version>
    </dependency>
     */
    // Create the direct DStream that receives the input data
    // LocationStrategies: decides which executors consume which Kafka partitions
    // LocationStrategies.PreferConsistent: distribute partitions evenly across all available executors
    // ConsumerStrategies: controls how Kafka consumers are created and configured on the driver and executors
    // ConsumerStrategies.Subscribe: subscribe to a fixed collection of topics
    // createDirectStream[String, String] / Subscribe[String, String] specify the key/value types of the consumed Kafka messages
    val dStreaming = KafkaUtils.createDirectStream[String, String](ssc, LocationStrategies.PreferConsistent, Subscribe[String, String](topics, kafkaParams))
    val pairs = dStreaming.map(record => (record.key, record.value))

    pairs.print()
    pairs.count().print()
    println("~~~~")
    pairs.countByValue().print()
    dStreaming.foreachRDD(rdd => rdd.foreach(println))
    
    ssc.start()
    ssc.awaitTermination()
  }
}
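
Because enable.auto.commit is set to false above, offsets are normally committed back to Kafka by hand once each batch has been processed. The following is a minimal sketch of manual offset commits with the 0-10 integration, assuming the same dStreaming value from the example above (it is not part of the original program):

// Sketch only: commit the consumed offset ranges back to Kafka after processing each batch
dStreaming.foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process the batch here ...
  dStreaming.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}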

II. DStream transformations

1. Stateless transformations

Stateless transformations simply apply an operation to each RDD in the DStream independently; the available functions are much like those on RDDs (map, flatMap, filter, reduceByKey, etc.), as shown in the sketch below.
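
A minimal sketch of common stateless transformations, assuming lines is a DStream[String] such as the one returned by ssc.socketTextStream later in this post (variable names are illustrative only):

// Each transformation is applied to every batch (RDD) independently; no state crosses batch boundaries
val words  = lines.flatMap(_.split(" "))   // split each line into words
val pairs  = words.map(word => (word, 1))  // map each word to a (word, 1) pair
val counts = pairs.reduceByKey(_ + _)      // per-batch word count
counts.print()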

2. Stateful transformations

(1) Tracking state across batches (updateStateByKey)

Define an update function whose parameters are values (the data for the current batch) and state (the accumulated result from previous batches).

Returning Some(...) yields the sum of the current batch and all previous batches, which becomes the new state.

package com.bigdata.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCount {
  def main(args: Array[String]) {
    // Update function: values holds the word counts for the current batch, state holds the accumulated count from previous batches
    val updateFunc = (values: Seq[Int], state: Option[Int]) => {
      val currentCount = values.foldLeft(0)(_ + _)
      val previousCount = state.getOrElse(0)
      Some(currentCount + previousCount)
    }

    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(3))
    ssc.checkpoint(".")

    // Create a DStream that will connect to hostname:port, like localhost:9999
    val lines = ssc.socketTextStream("master01", 9999)

    // Split each line into words
    val words = lines.flatMap(_.split(" "))

    //import org.apache.spark.streaming.StreamingContext._ // not necessary since Spark 1.3
    // Count each word in each batch
    val pairs = words.map(word => (word, 1))


    // Use updateStateByKey to update the state and count the total occurrences of each word since the job started
    val stateDstream = pairs.updateStateByKey[Int](updateFunc)
    stateDstream.print()

    //val wordCounts = pairs.reduceByKey(_ + _)

    // Print the first ten elements of each RDD generated in this DStream to the console
    //wordCounts.print()

    ssc.start()             // Start the computation
    ssc.awaitTermination()  // Wait for the computation to terminate
    //ssc.stop()
  }

}

(2) Window operations

Batch interval: how often a new batch is produced; each batch contains the data received during that interval.

Window duration: the span of time covered by each windowed computation, i.e. how many of the most recent batches are aggregated together; it must be a multiple of the batch interval.

Slide duration: how often the windowed computation is triggered; it must also be a multiple of the batch interval.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCount {

  def main(args: Array[String]) {

    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(3))
    ssc.checkpoint(".")

    // Create a DStream that will connect to hostname:port, like localhost:9999
    val lines = ssc.socketTextStream("master01", 9999)

    // Split each line into words
    val words = lines.flatMap(_.split(" "))

    //import org.apache.spark.streaming.StreamingContext._ // not necessary since Spark 1.3
    // Count each word in each batch
    val pairs = words.map(word => (word, 1))
    
    // Seconds(12) is the window duration: aggregate the most recent 12 seconds of data
    // Seconds(6) is the slide duration: recompute the windowed counts every 6 seconds
    val wordCounts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(12), Seconds(6))

    // Print the first ten elements of each RDD generated in this DStream to the console
    wordCounts.print()

    ssc.start()             // Start the computation
    ssc.awaitTermination()  // Wait for the computation to terminate
    //ssc.stop()
  }
}

III. DStream output

During development, results are usually inspected by printing them directly with print(); the sketch below summarizes the common output operations.
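
A minimal sketch of the usual output operations, assuming a wordCounts DStream like the one above; the HDFS path is illustrative only:

// print(): prints the first ten elements of every batch to the console (handy for testing)
wordCounts.print()

// saveAsTextFiles(prefix, suffix): writes each batch out as text files named from the prefix and batch time
wordCounts.saveAsTextFiles("hdfs://master01:9000/output/wordcount", "txt")

// foreachRDD: the most general output operation; gives direct access to each batch's RDD
wordCounts.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // open one connection per partition and write the records to an external sink (sink-specific code omitted)
    partition.foreach(record => println(record))
  }
}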
