
Spark/Scala: parallel sum

Original post, 2016-08-28 23:18:57
scala> val text=sc.textFile("/home/sc/Desktop/data.txt")
16/08/08 02:57:19 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 38.8 KB, free 124.7 KB)
16/08/08 02:57:24 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 4.2 KB, free 128.9 KB)
16/08/08 02:57:24 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:51836 (size: 4.2 KB, free: 517.4 MB)
16/08/08 02:57:24 INFO SparkContext: Created broadcast 4 from textFile at <console>:27
text: org.apache.spark.rdd.RDD[String] = /home/sc/Desktop/data.txt MapPartitionsRDD[14] at textFile at <console>:27

scala> val int=text.flatMap(line => line.split(" "))
int: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[15] at flatMap at <console>:29

scala> val double = int.map(_.toDouble)
double: org.apache.spark.rdd.RDD[Double] = MapPartitionsRDD[16] at map at <console>:31
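Up to this point nothing has been read from disk: `textFile`, `flatMap`, and `map` are lazy transformations that only record the lineage, and the `reduce` call below is the action that actually triggers the job (which is why the scheduler log lines appear only then). The same staging can be sketched with a plain Scala lazy view, using made-up in-memory data rather than Spark itself:

```scala
// Hypothetical in-memory stand-in for data.txt: whitespace-separated numbers.
val lines = List("1.5 2.5", "3.0")

// Like the RDD transformations above, a view records the steps lazily.
val pipeline = lines.view.flatMap(_.split(" ")).map(_.toDouble)

// Like a Spark action, reduce forces the whole pipeline to evaluate.
val total = pipeline.reduce(_ + _)
println(total) // 7.0
```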

scala> val rdd1 = double.reduce(_ + _)
16/08/08 02:59:45 INFO FileInputFormat: Total input paths to process : 1
16/08/08 02:59:47 INFO SparkContext: Starting job: reduce at <console>:33
16/08/08 02:59:47 INFO DAGScheduler: Got job 1 (reduce at <console>:33) with 1 output partitions
16/08/08 02:59:47 INFO DAGScheduler: Final stage: ResultStage 2 (reduce at <console>:33)
16/08/08 02:59:47 INFO DAGScheduler: Parents of final stage: List()
16/08/08 02:59:47 INFO DAGScheduler: Missing parents: List()
16/08/08 02:59:48 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[16] at map at <console>:31), which has no missing parents
16/08/08 02:59:54 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 3.6 KB, free 132.4 KB)
16/08/08 03:00:07 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2046.0 B, free 134.4 KB)
16/08/08 03:00:07 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:51836 (size: 2046.0 B, free: 517.4 MB)
16/08/08 03:00:07 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1006
16/08/08 03:00:07 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[16] at map at <console>:31)
16/08/08 03:00:07 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
16/08/08 03:00:09 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2133 bytes)
16/08/08 03:00:09 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
16/08/08 03:00:09 INFO HadoopRDD: Input split: file:/home/sc/Desktop/data.txt:0+351
16/08/08 03:00:10 INFO Executor: Finished task 0.0 in stage 2.0 (TID 2). 2163 bytes result sent to driver
16/08/08 03:00:10 INFO DAGScheduler: ResultStage 2 (reduce at <console>:33) finished in 2.840 s
16/08/08 03:00:10 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 2858 ms on localhost (1/1)
16/08/08 03:00:10 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
16/08/08 03:00:10 INFO DAGScheduler: Job 1 finished: reduce at <console>:33, took 23.077075 s
rdd1: Double = 64.023721

scala>
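One thing to watch: `_.toDouble` throws a `NumberFormatException` as soon as the file contains a token that does not parse as a number, including the empty strings produced by consecutive spaces (the split above uses a single-space separator). A defensive variant, sketched here in plain Scala with made-up data (`toDoubleOption` needs Scala 2.13+), splits on whitespace runs and drops unparseable tokens:

```scala
// Hypothetical input with a double space and a stray non-numeric token.
val lines = List("1.0  2.0", "x", "3.5")

val total = lines
  .flatMap(_.split("\\s+"))   // split on runs of whitespace, so no empty tokens
  .flatMap(_.toDoubleOption)  // keep only tokens that parse as a Double
  .sum
println(total) // 6.5
```

Whether silently dropping bad tokens is the right policy depends on the data; failing fast, as the original `toDouble` does, is often the safer default.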

Copyright notice: This is an original article by the author and may not be reproduced without the author's permission.
