Decoding the Internals: How Spark Streaming Generates RDDs and Executes Spark Jobs

This post dives into the internals of Spark Streaming, showing how a DStream produces RDDs and triggers Spark Jobs. Through a worked example, it walks the DStream lifecycle, covering how the InputDStream, the transformation DStreams, and the ForeachDStream are created. The analysis shows that a DStream is an abstract template for RDDs: at runtime the DStream chain forms an RDD DAG, and concrete RDDs are instantiated at each batch time. By tracing the compute method and generateJobs, the post explains the details of the DStream-to-RDD translation.

This post covers the following topics:

  • A hands-on demonstration of a DStream producing RDDs
  • How a DStream serves as a template for RDDs
  • A source-level walkthrough of how common DStreams generate RDDs

These common DStreams come in three flavors: the input level (InputDStream), the transformation level (transformation DStreams such as MappedDStream), and the output level (ForeachDStream).
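To make the three roles concrete before diving into Spark's source, here is a drastically simplified, Spark-free model of the pattern. The `Mini*` names are invented for illustration (the real classes live in `org.apache.spark.streaming.dstream`), and an "RDD" is stood in for by a plain `Seq[String]`; what the sketch preserves is the shape of `compute`, the memoizing `getOrCompute`, and the output stream turning its parent's RDD into a Job:

```scala
import scala.collection.mutable

// Template for all three levels, mirroring the role of DStream.
abstract class MiniDStream {
  // Cache of batchTime -> materialized "RDD", like DStream.generatedRDDs.
  val generatedRDDs = mutable.Map[Long, Seq[String]]()

  // Subclasses define how to build this batch's RDD, like DStream.compute.
  def compute(time: Long): Option[Seq[String]]

  // Memoized lookup, like DStream.getOrCompute: each DStream yields at
  // most one RDD per batch time.
  def getOrCompute(time: Long): Option[Seq[String]] =
    generatedRDDs.get(time).orElse {
      val rdd = compute(time)
      rdd.foreach(r => generatedRDDs(time) = r)
      rdd
    }
}

// Input level: produces the batch's data itself (cf. SocketInputDStream).
class MiniInputDStream(source: Long => Seq[String]) extends MiniDStream {
  def compute(time: Long): Option[Seq[String]] = Some(source(time))
}

// Transformation level: derives its RDD from the parent's RDD for the
// same batch time (cf. FlatMappedDStream, MappedDStream).
class MiniMappedDStream(parent: MiniDStream, f: String => String)
    extends MiniDStream {
  def compute(time: Long): Option[Seq[String]] =
    parent.getOrCompute(time).map(_.map(f))
}

// Output level: like ForEachDStream, it has no RDD of its own; instead
// generateJob wraps "materialize the parent's RDD, then run the output
// function on it" into a runnable Job.
class MiniForeachDStream(parent: MiniDStream, foreachFunc: Seq[String] => Unit)
    extends MiniDStream {
  def compute(time: Long): Option[Seq[String]] = None
  def generateJob(time: Long): Option[() => Unit] =
    parent.getOrCompute(time).map(rdd => () => foreachFunc(rdd))
}
```

Chaining `MiniInputDStream -> MiniMappedDStream -> MiniForeachDStream` and calling `generateJob(batchTime)` on the tail pulls the whole lineage backward through `getOrCompute`, which is exactly the direction the real DStream DAG is evaluated in.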

The main code for this post is as follows:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    // Create the context with a 120 second batch interval
    val sparkConf = new SparkConf().setAppName("NetworkWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(120))

    // Create a socket stream on target ip:port and count the
    // words in input stream of \n delimited text (eg. generated by 'nc')
    // Note that no duplication in storage level only for running locally.
    // Replication necessary in distributed scenario for fault tolerance.
    val lines = ssc.socketTextStream("master", 9999)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

Running the application on the cluster produces the following log output:

16/09/08 09:18:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 4.0 (TID 2) in 51 ms on localhost (1/1)
16/09/08 09:18:00 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool 
16/09/08 09:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1473297480000 ms.0 from job set of time 1473297480000 ms
16/09/08 09:18:00 INFO scheduler.JobScheduler: Total delay: 0.927 s for time 1473297480000 ms (execution: 0.670 s)
16/09/08 09:18:00 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer()
16/09/08 09:18:00 INFO scheduler.InputInfoTracker: remove old batch metadata: 
16/09/08 09:18:15 INFO storage.MemoryStore: Block input-0-1473297495000 stored as bytes in memory (estimated size 16.0 B, free 89.8 KB)
16/09/08 09:18:15 INFO storage.BlockManagerInfo: Added input-0-1473297495000 in memory on localhost:53535 (size: 16.0 B, free: 511.1 MB)
16/09/08 09:18:15 WARN storage.BlockManager: Block input-0-1473297495000 replicated to only 0 peer(s) instead of 1 peers
16/09/08 09:18:15 INFO receiver.BlockGenerator: Pushed block input-0-1473297495000
16/09/08 09:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1473297600000 ms.0 from job set of time 1473297600000 ms
16/09/08 09:20:00 INFO scheduler.JobScheduler: Added jobs for time 1473297600000 ms
16/09/08 09:20:00 INFO spark.SparkContext: Starting job: print at NetWorkWordCount.scala:24
16/09/08 09:20:00 INFO scheduler.DAGScheduler: Registering RDD 7 (map at NetWorkWordCount.scala:23)
16/09/08 09:20:00 INFO scheduler.DAGScheduler: Got job 3 (print at NetWorkWordCount.scala:24) with 1 output partitions
16/09/08 09:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 6 (print at NetWorkWordCount.scala:24)
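The log shows a new streaming job being generated at each 120-second batch boundary (09:18:00, then 09:20:00). In Spark's source this cadence comes from the JobGenerator: a recurring timer fires at each batch time, which calls DStreamGraph.generateJobs(time), which in turn asks every registered output stream (here, the ForEachDStream created by `wordCounts.print()`) to generate a Job for that time. The sketch below models just that per-tick loop; `MiniJobGenerator` and its members are hypothetical names, not Spark's API:

```scala
import scala.collection.mutable

// Spark-free sketch of the per-batch job generation loop. In Spark:
// RecurringTimer fires -> JobGenerator.generateJobs(time) ->
// DStreamGraph.generateJobs(time) -> each output stream's
// generateJob(time) -> JobScheduler submits the resulting Jobs.
object MiniJobGenerator {
  type Job = () => Unit

  // Output streams register a factory that may produce a Job for a
  // given batch time (cf. DStreamGraph.outputStreams).
  private val outputStreams = mutable.Buffer[Long => Option[Job]]()

  def register(gen: Long => Option[Job]): Unit = outputStreams += gen

  // Mirrors DStreamGraph.generateJobs: one pass over the output
  // streams, collecting whatever Jobs they produce for this tick.
  def generateJobs(time: Long): Seq[Job] =
    outputStreams.flatMap(gen => gen(time)).toSeq
}
```

Registering a single output stream and then calling `generateJobs` with the two batch times from the log above (1473297480000 and 1473297600000) yields one Job per tick, matching the "Starting job streaming job ... ms.0 from job set" lines.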