xingzhiqing's blog

Ten thousand hardships are no match for the one in front of you; once the bow is drawn, there is no turning back.

Spark/Scala: parallel sum

Summing the whitespace-separated numbers in a text file from the Spark shell: read the file into an RDD, split each line into tokens, parse the tokens as Double, and reduce with +.

scala> val text=sc.textFile("/home/sc/Desktop/data.txt")
16/08/08 02:57:19 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 38.8 KB, free 124.7 KB)
16/08/08 02:57:24 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 4.2 KB, free 128.9 KB)
16/08/08 02:57:24 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:51836 (size: 4.2 KB, free: 517.4 MB)
16/08/08 02:57:24 INFO SparkContext: Created broadcast 4 from textFile at <console>:27
text: org.apache.spark.rdd.RDD[String] = /home/sc/Desktop/data.txt MapPartitionsRDD[14] at textFile at <console>:27

scala> val int=text.flatMap(line => line.split(" "))
int: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[15] at flatMap at <console>:29

scala> val double = int.map(_.toDouble)
double: org.apache.spark.rdd.RDD[Double] = MapPartitionsRDD[16] at map at <console>:31
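The same flatMap/map chain works on an ordinary Scala collection, which is an easy way to sanity-check the parsing logic before running it on a cluster. A minimal sketch, using made-up sample lines (the actual contents of data.txt are not shown in this session):

```scala
// Hypothetical sample lines standing in for data.txt's contents.
val lines = Seq("1.5 2.5", "10.0 50.0")

// Same chain as the RDD version: split each line on spaces,
// then parse every token as a Double.
val tokens = lines.flatMap(line => line.split(" "))  // Seq[String]
val nums   = tokens.map(_.toDouble)                  // Seq[Double]

println(nums.reduce(_ + _))  // prints 64.0
```

If a token is not a valid number, `toDouble` throws a `NumberFormatException`; in the RDD version that exception surfaces as a task failure when the job runs, not when `map` is called.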
scala> val rdd1 = double.reduce(_ + _)
16/08/08 02:59:45 INFO FileInputFormat: Total input paths to process : 1
16/08/08 02:59:47 INFO SparkContext: Starting job: reduce at <console>:33
16/08/08 02:59:47 INFO DAGScheduler: Got job 1 (reduce at <console>:33) with 1 output partitions
16/08/08 02:59:47 INFO DAGScheduler: Final stage: ResultStage 2 (reduce at <console>:33)
16/08/08 02:59:47 INFO DAGScheduler: Parents of final stage: List()
16/08/08 02:59:47 INFO DAGScheduler: Missing parents: List()
16/08/08 02:59:48 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[16] at map at <console>:31), which has no missing parents
16/08/08 02:59:54 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 3.6 KB, free 132.4 KB)
16/08/08 03:00:07 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2046.0 B, free 134.4 KB)
16/08/08 03:00:07 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:51836 (size: 2046.0 B, free: 517.4 MB)
16/08/08 03:00:07 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1006
16/08/08 03:00:07 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[16] at map at <console>:31)
16/08/08 03:00:07 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
16/08/08 03:00:09 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2133 bytes)
16/08/08 03:00:09 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
16/08/08 03:00:09 INFO HadoopRDD: Input split: file:/home/sc/Desktop/data.txt:0+351
16/08/08 03:00:10 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 2163 bytes result sent to driver
16/08/08 03:00:10 INFO DAGScheduler: ResultStage 2 (reduce at <console>:33) finished in 2.840 s
16/08/08 03:00:10 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 2858 ms on localhost (1/1)
16/08/08 03:00:10 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
16/08/08 03:00:10 INFO DAGScheduler: Job 1 finished: reduce at <console>:33, took 23.077075 s
rdd1: Double = 64.023721
scala> 
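Outside the REPL, the same four-step job can be packaged as a standalone application. A minimal sketch, assuming a local Spark installation and that /home/sc/Desktop/data.txt holds whitespace-separated numbers; the object name ParallelSum is made up for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ParallelSum {
  def main(args: Array[String]): Unit = {
    // local[*] runs one executor thread per core on this machine.
    val conf = new SparkConf().setAppName("ParallelSum").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    val total = sc.textFile("/home/sc/Desktop/data.txt")
      .flatMap(_.split(" "))   // one token per number
      .map(_.toDouble)
      .reduce(_ + _)           // summed within and across partitions

    println(s"sum = $total")
    sc.stop()
  }
}
```

Because `reduce` is applied both inside each partition and when combining partial results across partitions, the function should be associative and commutative; for a plain sum, the built-in `sum()` on an `RDD[Double]` does the same thing.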

Copyright notice: this is an original post by the author; do not repost without permission. https://blog.csdn.net/xingzhiqing/article/details/52348898
Tags: spark, scala, parallel
Personal category: notes