Parallel summation with Spark and Scala

Original post, 2016-08-28 23:18:57
scala> val text = sc.textFile("/home/sc/Desktop/data.txt")
16/08/08 02:57:19 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 38.8 KB, free 124.7 KB)
16/08/08 02:57:24 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 4.2 KB, free 128.9 KB)
16/08/08 02:57:24 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:51836 (size: 4.2 KB, free: 517.4 MB)
16/08/08 02:57:24 INFO SparkContext: Created broadcast 4 from textFile at <console>:27
text: org.apache.spark.rdd.RDD[String] = /home/sc/Desktop/data.txt MapPartitionsRDD[14] at textFile at <console>:27

scala> val int=text.flatMap(line => line.split(" "))
int: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[15] at flatMap at <console>:29
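The `flatMap` step splits every line on spaces and flattens all of the resulting tokens into one collection of strings. The same distinction between `map` and `flatMap` can be seen on a plain Scala collection; the sample lines below are hypothetical, not the actual contents of data.txt:

```scala
// Hypothetical sample lines (the real data.txt is not shown in the transcript).
val lines = Seq("1 2", "3 4")

// map keeps one result per input line (here, an Array of tokens)...
val nested = lines.map(_.split(" "))
// ...while flatMap flattens every token into a single collection.
val flat = lines.flatMap(_.split(" "))

assert(nested.length == 2)
assert(flat == Seq("1", "2", "3", "4"))
```

This is why the transcript ends up with one `RDD[String]` of individual number tokens rather than an RDD of arrays.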
scala> val double = int.map(_.toDouble)
double: org.apache.spark.rdd.RDD[Double] = MapPartitionsRDD[16] at map at <console>:31
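The whole pipeline (split, parse, sum) can be mimicked on a local Scala collection. The sample lines here are made up for illustration, chosen only so that their total matches the transcript's final result of 64.023721; note that `_.toDouble` throws a `NumberFormatException` on any token that is not a valid number:

```scala
// Hypothetical sample lines; the real data.txt is not shown in the transcript.
// Chosen so that 1.5 + 2.5 + 10.0 + 50.023721 = 64.023721.
val lines = Seq("1.5 2.5", "10.0 50.023721")

val sum = lines
  .flatMap(_.split(" "))  // tokens: "1.5", "2.5", "10.0", "50.023721"
  .map(_.toDouble)        // parse; throws NumberFormatException on bad input
  .reduce(_ + _)          // sum all values

assert(math.abs(sum - 64.023721) < 1e-9)
```

The RDD version is identical in shape; the only difference is that Spark evaluates the transformations lazily and in parallel across partitions.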
scala> val rdd1 = double.reduce(_ + _)
16/08/08 02:59:45 INFO FileInputFormat: Total input paths to process : 1
16/08/08 02:59:47 INFO SparkContext: Starting job: reduce at <console>:33
16/08/08 02:59:47 INFO DAGScheduler: Got job 1 (reduce at <console>:33) with 1 output partitions
16/08/08 02:59:47 INFO DAGScheduler: Final stage: ResultStage 2 (reduce at <console>:33)
16/08/08 02:59:47 INFO DAGScheduler: Parents of final stage: List()
16/08/08 02:59:47 INFO DAGScheduler: Missing parents: List()
16/08/08 02:59:48 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[16] at map at <console>:31), which has no missing parents
16/08/08 02:59:54 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 3.6 KB, free 132.4 KB)
16/08/08 03:00:07 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2046.0 B, free 134.4 KB)
16/08/08 03:00:07 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:51836 (size: 2046.0 B, free: 517.4 MB)
16/08/08 03:00:07 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1006
16/08/08 03:00:07 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[16] at map at <console>:31)
16/08/08 03:00:07 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
16/08/08 03:00:09 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2133 bytes)
16/08/08 03:00:09 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
16/08/08 03:00:09 INFO HadoopRDD: Input split: file:/home/sc/Desktop/data.txt:0+351
16/08/08 03:00:10 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 2163 bytes result sent to driver
16/08/08 03:00:10 INFO DAGScheduler: ResultStage 2 (reduce at <console>:33) finished in 2.840 s
16/08/08 03:00:10 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 2858 ms on localhost (1/1)
16/08/08 03:00:10 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
16/08/08 03:00:10 INFO DAGScheduler: Job 1 finished: reduce at <console>:33, took 23.077075 s
rdd1: Double = 64.023721
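Note that `reduce` can run in parallel only because `+` is associative (and commutative): each partition is summed locally, and the partial sums are then combined. A quick local check of why the operator must be associative, using subtraction for contrast:

```scala
val xs = Vector(1.0, 2.0, 3.0, 4.0)

// Addition: any grouping of partial results gives the same total,
// so reduce(_ + _) is safe no matter how Spark partitions the data.
assert((xs(0) + xs(1)) + (xs(2) + xs(3)) == xs(0) + (xs(1) + (xs(2) + xs(3))))

// Subtraction: grouping changes the answer, so reduce(_ - _) would
// give partition-dependent (i.e. wrong) results on an RDD.
val grouped = (xs(0) - xs(1)) - (xs(2) - xs(3))  // (1-2) - (3-4) = 0.0
val chained = xs(0) - (xs(1) - (xs(2) - xs(3)))  // 1 - (2 - (3-4)) = -2.0
assert(grouped != chained)
```

Also note that despite its name, `rdd1` is not an RDD: `reduce` is an action, so it returns a plain `Double` to the driver.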
scala> 

Copyright notice: this is an original post by the author; do not reproduce without the author's permission.
