This post covers the following:
- A hands-on demo of DStreams generating RDDs
- The mechanism by which a DStream acts as a template for RDDs
- Decoding the source of common RDD-generating DStreams
These common DStreams fall into three types: input-level InputDStreams, transformation-level DStreams, and output-level ForEachDStreams.
The main code for this post is as follows:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }
    StreamingExamples.setStreamingLogLevels()

    // Create the context with a 120 second batch interval
    val sparkConf = new SparkConf().setAppName("NetworkWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(120))

    // Create a socket stream on the target ip:port and count the
    // words in the input stream of \n delimited text (e.g. generated by 'nc').
    // Note that no replication in the storage level is only for running locally;
    // replication is necessary in a distributed scenario for fault tolerance.
    val lines = ssc.socketTextStream("master", 9999)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
// scalastyle:on println
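Before looking at the output, it helps to map this program onto the three DStream types listed above. The class names below come from the Spark Streaming source; the exact lineage is a sketch:

```
socketTextStream("master", 9999)  -> SocketInputDStream  (input level)
flatMap(_.split(" "))             -> FlatMappedDStream   (transformation level)
map(x => (x, 1))                  -> MappedDStream       (transformation level)
reduceByKey(_ + _)                -> ShuffledDStream     (transformation level)
print()                           -> ForEachDStream      (output level)
```

Note that none of these DStreams holds data itself; each one only describes how to build the RDD for a given batch time from its parent's RDD.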
Processing data on the cluster produces log output like the following:
16/09/08 09:18:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 4.0 (TID 2) in 51 ms on localhost (1/1)
16/09/08 09:18:00 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
16/09/08 09:18:00 INFO scheduler.JobScheduler: Finished job streaming job 1473297480000 ms.0 from job set of time 1473297480000 ms
16/09/08 09:18:00 INFO scheduler.JobScheduler: Total delay: 0.927 s for time 1473297480000 ms (execution: 0.670 s)
16/09/08 09:18:00 INFO scheduler.ReceivedBlockTracker: Deleting batches ArrayBuffer()
16/09/08 09:18:00 INFO scheduler.InputInfoTracker: remove old batch metadata:
16/09/08 09:18:15 INFO storage.MemoryStore: Block input-0-1473297495000 stored as bytes in memory (estimated size 16.0 B, free 89.8 KB)
16/09/08 09:18:15 INFO storage.BlockManagerInfo: Added input-0-1473297495000 in memory on localhost:53535 (size: 16.0 B, free: 511.1 MB)
16/09/08 09:18:15 WARN storage.BlockManager: Block input-0-1473297495000 replicated to only 0 peer(s) instead of 1 peers
16/09/08 09:18:15 INFO receiver.BlockGenerator: Pushed block input-0-1473297495000
16/09/08 09:20:00 INFO scheduler.JobScheduler: Starting job streaming job 1473297600000 ms.0 from job set of time 1473297600000 ms
16/09/08 09:20:00 INFO scheduler.JobScheduler: Added jobs for time 1473297600000 ms
16/09/08 09:20:00 INFO spark.SparkContext: Starting job: print at NetWorkWordCount.scala:24
16/09/08 09:20:00 INFO scheduler.DAGScheduler: Registering RDD 7 (map at NetWorkWordCount.scala:23)
16/09/08 09:20:00 INFO scheduler.DAGScheduler: Got job 3 (print at NetWorkWordCount.scala:24) with 1 output partitions
16/09/08 09:20:00 INFO scheduler.DAGScheduler: Final stage: ResultStage 6 (print at NetWorkWordCount.scala:24)
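The log shows each batch time (e.g. 1473297600000 ms) triggering a fresh streaming job: this is the "RDD template" mechanism at work. At every batch interval, each DStream in the graph instantiates the RDD for that batch time via getOrCompute and caches it in its generatedRDDs map. Below is a minimal, self-contained sketch of that idea in plain Scala; it is an illustration only, not Spark's actual implementation (MiniDStream and CountingDStream are made-up names, and a batch's "RDD" is just a Seq here):

```scala
import scala.collection.mutable

// A simplified model of "DStream as an RDD template" -- illustration only.
// Batch time is a plain Long; a batch's "RDD" is just a Seq.
abstract class MiniDStream[T] {
  // Mirrors the idea of DStream.generatedRDDs: the cache of RDDs already
  // instantiated from this template, keyed by batch time.
  private val generatedRDDs = mutable.HashMap.empty[Long, Seq[T]]

  // Subclasses define how to materialize the data for one batch time.
  def compute(time: Long): Option[Seq[T]]

  // Mirrors the idea of DStream.getOrCompute: return the cached RDD for
  // this batch time, or instantiate one from the template and cache it.
  def getOrCompute(time: Long): Option[Seq[T]] =
    generatedRDDs.get(time).orElse {
      val rdd = compute(time)
      rdd.foreach(generatedRDDs.put(time, _))
      rdd
    }
}

// A demo stream that counts how many times compute() actually runs,
// so the per-batch caching behaviour is observable.
class CountingDStream extends MiniDStream[Int] {
  var computeCalls = 0
  override def compute(time: Long): Option[Seq[Int]] = {
    computeCalls += 1
    Some(Seq(computeCalls))
  }
}

object MiniDStreamDemo {
  def main(args: Array[String]): Unit = {
    val ds = new CountingDStream
    ds.getOrCompute(1000L) // compute() runs for batch time 1000
    ds.getOrCompute(1000L) // served from generatedRDDs, no recompute
    ds.getOrCompute(2000L) // new batch time, compute() runs again
    println(s"compute() ran ${ds.computeCalls} times") // prints: compute() ran 2 times
  }
}
```

The key point the sketch captures is that the same template object produces a distinct RDD per batch time, which is exactly why one new job appears in the log for every batch interval.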