I've spent the last few days learning Spark RDD transformations and actions, so here are my notes, shared for anyone interested.
The table below lists the currently supported transformations and actions (see the RDD API doc for details):
Transformations
Transformation | Meaning |
--- | --- |
map(func) | Return a new distributed dataset formed by passing each element of the source through the function func |
filter(func) | Return a new dataset formed by selecting those elements of the source on which func returns true |
flatMap(func) | Similar to map, but each input element can be mapped to 0 or more output elements (so func should return a sequence rather than a single element) |
mapPartitions(func) | Similar to map, but runs independently on each partition of the RDD, so when running on an RDD of type T, func must be of type Iterator[T] => Iterator[U] |
mapPartitionsWithSplit(func) | Similar to mapPartitions, but func is also given an integer representing the index of the partition, so when running on an RDD of type T, func must be of type (Int, Iterator[T]) => Iterator[U] |
sample(withReplacement, fraction, seed) | Sample a fraction of the data given by fraction, with or without replacement, using seed as the random number generator seed |
union(otherDataset) | Return a new dataset containing the union of the elements of the source dataset and the argument dataset |
distinct([numTasks]) | Return a new dataset containing the distinct elements of the source dataset |
groupByKey([numTasks]) | When called on a dataset of (K, V) pairs, returns a dataset of (K, Seq[V]) pairs. Note: by default only 8 parallel tasks are used for the grouping, but you can pass an optional numTasks argument to change this |
reduceByKey(func, [numTasks]) | When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs in which the values for each key are aggregated using the given reduce function. As with groupByKey, the number of reduce tasks is configurable through an optional second argument |
sortByKey([ascending], [numTasks]) | When called on a dataset of (K, V) pairs where K implements the Ordered trait, returns a dataset of (K, V) pairs sorted by key; ascending or descending order is determined by the boolean ascending argument |
join(otherDataset, [numTasks]) | When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key |
cogroup(otherDataset, [numTasks]) | When called on datasets of type (K, V) and (K, W), returns a dataset of (K, Seq[V], Seq[W]) tuples. This operation is also known as groupWith |
cartesian(otherDataset) | Cartesian product: when called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements) |
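The walkthrough below never touches mapPartitions, so here is a minimal hedged sketch (the nums RDD and names are hypothetical, not from the transcript): mapPartitions processes one partition's iterator at a time, which lets you amortize per-partition setup cost (e.g. opening a database connection once per partition) instead of paying it per element.
- scala> val nums = sc.parallelize(1 to 10, 2) // 2 partitions
- scala> val partialSums = nums.mapPartitions(iter => Iterator(iter.sum)) // one value per partition
- scala> partialSums.collect // e.g. Array(15, 40): 1+...+5 and 6+...+10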
The full list of transformations is available in the RDD API doc.
Actions
Action | Meaning |
--- | --- |
reduce(func) | Aggregate the elements of the dataset using the function func (which takes two arguments and returns one). The function must be commutative and associative so that it can be computed correctly in parallel |
collect() | Return all the elements of the dataset as an array at the driver program. This is usually most useful after a filter or other operation that returns a sufficiently small subset of the data |
count() | Return the number of elements in the dataset |
first() | Return the first element of the dataset (similar to take(1)) |
take(n) | Return an array of the first n elements of the dataset. Note that this is currently not executed in parallel; the driver program computes all the elements |
takeSample(withReplacement, num, seed) | Return an array with a random sample of num elements of the dataset, with or without replacement, using seed as the random number generator seed |
saveAsTextFile(path) | Write the elements of the dataset as a text file to the local filesystem, HDFS, or any other Hadoop-supported file system. Spark calls toString on each element to convert it to a line of text in the file |
saveAsSequenceFile(path) | Write the elements of the dataset as a Hadoop SequenceFile to a given directory in the local filesystem, HDFS, or any other Hadoop-supported file system. This is only available for RDDs of key-value pairs that either implement Hadoop's Writable interface or are implicitly convertible to Writable (Spark includes conversions for basic types such as Int, Double, String, etc.) |
countByKey() | Only available on RDDs of type (K, V). Returns a Map of (K, Int) pairs giving the count of each key |
foreach(func) | Run the function func on each element of the dataset. This is usually done for side effects, such as updating an accumulator or interacting with an external storage system such as HBase |
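To see why reduce's function must be commutative and associative, consider summing integers: partial sums from different partitions can be combined in any order and still give the same answer. A minimal sketch (the nums RDD is hypothetical, not from the transcript):
- scala> val nums = sc.parallelize(1 to 100)
- scala> nums.reduce(_ + _) // 5050; addition is commutative and associative
- scala> // by contrast, (_ - _) is neither, so its result would depend on partitioning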
The full list of actions is also available in the RDD API doc.
1. Start spark-shell
- SPARK_MASTER=local[4] ./spark-shell.sh
- Welcome to
- ____ __
- / __/__ ___ _____/ /__
- _\ \/ _ \/ _ `/ __/ '_/
- /___/ .__/\_,_/_/ /_/\_\ version 0.8.1
- /_/
- Using Scala version 2.9.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_20)
- Initializing interpreter...
- 14/04/04 10:49:44 INFO server.Server: jetty-7.x.y-SNAPSHOT
- 14/04/04 10:49:44 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:5757
- Creating SparkContext...
- 14/04/04 10:49:50 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
- 14/04/04 10:49:50 INFO spark.SparkEnv: Registering BlockManagerMaster
- 14/04/04 10:49:50 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140404104950-5dd2
2. Let's use the CHANGES.txt and README.md files in the Spark root directory as examples.
- scala> sc
- res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@5849b49d
- scala> val changes = sc.textFile("CHANGES.txt")
- 14/04/04 10:51:39 INFO storage.MemoryStore: ensureFreeSpace(44905) called with curMem=0, maxMem=339585269
- 14/04/04 10:51:39 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 43.9 KB, free 323.8 MB)
- changes: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12
- scala> changes foreach println
- 14/04/04 10:52:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- 14/04/04 10:52:03 WARN snappy.LoadSnappy: Snappy native library not loaded
- 14/04/04 10:52:03 INFO mapred.FileInputFormat: Total input paths to process : 1
- 14/04/04 10:52:03 INFO spark.SparkContext: Starting job: foreach at <console>:15
- 14/04/04 10:52:03 INFO scheduler.DAGScheduler: Got job 0 (foreach at <console>:15) with 1 output partitions (allowLocal=false)
- 14/04/04 10:52:03 INFO scheduler.DAGScheduler: Final stage: Stage 0 (foreach at <console>:15)
- 14/04/04 10:52:03 INFO scheduler.DAGScheduler: Parents of final stage: List()
- 14/04/04 10:52:03 INFO scheduler.DAGScheduler: Missing parents: List()
- 14/04/04 10:52:03 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at textFile at <console>:12), which has no missing parents
- 14/04/04 10:52:03 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[1] at textFile at <console>:12)
- 14/04/04 10:52:03 INFO local.LocalTaskSetManager: Size of task 0 is 1664 bytes
- 14/04/04 10:52:03 INFO executor.Executor: Running task ID 0
- 14/04/04 10:52:03 INFO storage.BlockManager: Found block broadcast_0 locally
- 14/04/04 10:52:03 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
- Spark Change Log
- Release 0.8.1-incubating
- d03589d Mon Dec 9 23:10:00 2013 -0800
- Merge pull request #248 from colorant/branch-0.8
- [Fix POM file for mvn assembly on hadoop 2.2 Yarn]
- 3e1f78c Sun Dec 8 21:34:12 2013 -0800
- Merge pull request #195 from dhardy92/fix_DebScriptPackage
- [[Deb] fix package of Spark classes adding org.apache prefix in scripts embeded in .deb]
- c14f373 Sat Dec 7 22:35:31 2013 -0800
- Merge pull request #241 from pwendell/master
- [Update broken links and add HDP 2.0 version string]
- 9c9e71e Sat Dec 7 12:47:26 2013 -0800
- Merge pull request #241 from pwendell/branch-0.8
- [Fix race condition in JobLoggerSuite [0.8 branch]]
- 92597c0 Sat Dec 7 11:58:00 2013 -0800
- Merge pull request #240 from pwendell/master
- [SPARK-917 Improve API links in nav bar]
- cfca70e Sat Dec 7 01:15:20 2013 -0800
- Merge pull request #236 from pwendell/shuffle-docs
- [Adding disclaimer for shuffle file consolidation]
filter
Now let's find all the lines containing "Merge".
- scala> val mergeLines = changes.filter(_.contains("Merge"))
- mergeLines: org.apache.spark.rdd.RDD[String] = FilteredRDD[2] at filter at <console>:14
- scala> mergeLines foreach println
- 14/04/04 10:54:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- 14/04/04 10:54:52 WARN snappy.LoadSnappy: Snappy native library not loaded
- 14/04/04 10:54:52 INFO mapred.FileInputFormat: Total input paths to process : 1
- 14/04/04 10:54:52 INFO spark.SparkContext: Starting job: foreach at <console>:17
- 14/04/04 10:54:52 INFO scheduler.DAGScheduler: Got job 0 (foreach at <console>:17) with 1 output partitions (allowLocal=false)
- 14/04/04 10:54:52 INFO scheduler.DAGScheduler: Final stage: Stage 0 (foreach at <console>:17)
- 14/04/04 10:54:52 INFO scheduler.DAGScheduler: Parents of final stage: List()
- 14/04/04 10:54:52 INFO scheduler.DAGScheduler: Missing parents: List()
- 14/04/04 10:54:52 INFO scheduler.DAGScheduler: Submitting Stage 0 (FilteredRDD[2] at filter at <console>:14), which has no missing parents
- 14/04/04 10:54:52 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from Stage 0 (FilteredRDD[2] at filter at <console>:14)
- 14/04/04 10:54:52 INFO local.LocalTaskSetManager: Size of task 0 is 1733 bytes
- 14/04/04 10:54:52 INFO executor.Executor: Running task ID 0
- 14/04/04 10:54:52 INFO storage.BlockManager: Found block broadcast_0 locally
- 14/04/04 10:54:52 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
- Merge pull request #248 from colorant/branch-0.8
- Merge pull request #195 from dhardy92/fix_DebScriptPackage
- Merge pull request #241 from pwendell/master
- Merge pull request #241 from pwendell/branch-0.8
- Merge pull request #240 from pwendell/master
- Merge pull request #236 from pwendell/shuffle-docs
- Merge pull request #237 from pwendell/formatting-fix
- Merge pull request #235 from pwendell/master
- Merge pull request #234 from alig/master
- Merge pull request #199 from harveyfeng/yarn-2.2
- Merge pull request #101 from colorant/yarn-client-scheduler
- Merge pull request #191 from hsaputra/removesemicolonscala
- Merge pull request #178 from hsaputra/simplecleanupcode
- Merge pull request #189 from tgravescs/sparkYarnErrorHandling
- Merge pull request #232 from markhamstra/FiniteWait
- Merge pull request #231 from pwendell/branch-0.8
- Merge pull request #228 from pwendell/master
- Merge pull request #227 from pwendell/master
- Merge pull request #223 from rxin/transient
- Merge pull request #95 from aarondav/perftest
- Merge pull request #218 from JoshRosen/spark-970-pyspark-unicode-error
- Merge pull request #181 from BlackNiuza/fix_tasks_number
- Merge pull request #219 from sundeepn/schedulerexception
- Merge pull request #201 from rxin/mappartitions
- Merge pull request #197 from aarondav/patrick-fix
- Merge pull request #200 from mateiz/hash-fix
- Merge pull request #193 from aoiwelle/patch-1
- Merge pull request #196 from pwendell/master
- Merge pull request #174 from ahirreddy/master
- Merge pull request #166 from ahirreddy/simr-spark-ui
- Merge pull request #137 from tgravescs/sparkYarnJarsHdfsRebase
- Merge pull request #165 from NathanHowell/kerberos-master
- Merge pull request #153 from ankurdave/stop-spot-cluster
- Merge pull request #160 from xiajunluan/JIRA-923
- Merge pull request #175 from kayousterhout/no_retry_not_serializable
- Merge pull request #173 from kayousterhout/scheduler_hang
map
- scala> mergeLines.map(line=>line.split(" "))
- res2: org.apache.spark.rdd.RDD[Array[java.lang.String]] = MappedRDD[3] at map at <console>:17
- scala> mergeLines.map(line=>line.split(" ")) take 10
- 14/04/04 11:05:24 INFO spark.SparkContext: Starting job: take at <console>:17
- 14/04/04 11:05:24 INFO scheduler.DAGScheduler: Got job 1 (take at <console>:17) with 1 output partitions (allowLocal=true)
- 14/04/04 11:05:24 INFO scheduler.DAGScheduler: Final stage: Stage 1 (take at <console>:17)
- 14/04/04 11:05:24 INFO scheduler.DAGScheduler: Parents of final stage: List()
- 14/04/04 11:05:24 INFO scheduler.DAGScheduler: Missing parents: List()
- 14/04/04 11:05:24 INFO scheduler.DAGScheduler: Computing the requested partition locally
- 14/04/04 11:05:24 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
- 14/04/04 11:05:24 INFO spark.SparkContext: Job finished: take at <console>:17, took 0.010779038 s
- res3: Array[Array[java.lang.String]] = Array(Array("", "", Merge, pull, request, #248, from, colorant/branch-0.8), Array("", "", Merge, pull, request, #195, from, dhardy92/fix_DebScriptPackage), Array("", "", Merge, pull, request, #241, from, pwendell/master), Array("", "", Merge, pull, request, #241, from, pwendell/branch-0.8), Array("", "", Merge, pull, request, #240, from, pwendell/master), Array("", "", Merge, pull, request, #236, from, pwendell/shuffle-docs), Array("", "", Merge, pull, request, #237, from, pwendell/formatting-fix), Array("", "", Merge, pull, request, #235, from, pwendell/master), Array("", "", Merge, pull, request, #234, from, alig/master), Array("", "", Merge, pull, request, #199, from, harveyfeng/yarn-2.2))
As you can see, map produces a nested dataset of the form Array(array1, array2, ...), i.e. an RDD[Array[String]].
- scala> val splitedLine = mergeLines.map(line=>line.split(" "))
- splitedLine: org.apache.spark.rdd.RDD[Array[java.lang.String]] = MappedRDD[5] at map at <console>:16
flatMap
flatMap flattens the result: instead of one array per line, you get a single flat Seq/Array of all the elements.
- scala> changes.flatMap(line=>line.split(" ")) take 10
- 14/04/04 11:18:26 INFO spark.SparkContext: Starting job: take at <console>:15
- 14/04/04 11:18:26 INFO scheduler.DAGScheduler: Got job 17 (take at <console>:15) with 1 output partitions (allowLocal=true)
- 14/04/04 11:18:26 INFO scheduler.DAGScheduler: Final stage: Stage 17 (take at <console>:15)
- 14/04/04 11:18:26 INFO scheduler.DAGScheduler: Parents of final stage: List()
- 14/04/04 11:18:26 INFO scheduler.DAGScheduler: Missing parents: List()
- 14/04/04 11:18:26 INFO scheduler.DAGScheduler: Computing the requested partition locally
- 14/04/04 11:18:26 INFO rdd.HadoopRDD: Input split: file:/app/home/hadoop/shengli/spark-0.8.1-incubating-bin-hadoop1/CHANGES.txt:0+65951
- 14/04/04 11:18:26 INFO spark.SparkContext: Job finished: take at <console>:15, took 0.003913527 s
- res26: Array[java.lang.String] = Array(Spark, Change, Log, "", Release, 0.8.1-incubating, "", "", "", d03589d)
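Comparing this with the map example above, the difference shows up directly in the result type; a one-line contrast (a sketch reusing the changes RDD from this session):
- scala> changes.map(_.split(" "))     // RDD[Array[String]] -- nested, one array per line
- scala> changes.flatMap(_.split(" ")) // RDD[String]        -- flattened into single words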
union
union simply concatenates two datasets into one; note that it keeps duplicates (follow it with distinct if you want set semantics).
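The transcript in this post never exercises union, so here is a minimal hedged sketch, reusing the changes RDD and a readme RDD like the one loaded later in this post:
- scala> val readme = sc.textFile("README.md")
- scala> val allLines = changes.union(readme)
- scala> allLines.count // expect changes.count + readme.count
groupByKey & reduceByKey
Next, build a (word, 1) pair dataset from CHANGES.txt: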
- scala> val wordcount = changes.flatMap(line=>line.split(" ")).map(word=>(word,1))
- wordcount: org.apache.spark.rdd.RDD[(java.lang.String, Int)] = MappedRDD[22] at map at <console>:14
- Printing the pairs after the map gives:
- (#534,1)
- (from,1)
- (stephenh/removetrycatch,1)
- (,1)
- (,1)
- ([Remove,1)
- (try/catch,1)
- (block,1)
- (that,1)
- (can't,1)
- (be,1)
- (hit.],1)
- (,1)
- (,1)
- (,1)
- Use groupByKey to see the result after the shuffle:
- wordcount.groupByKey() foreach println
- Here each key is a word such as [Automatically or LogQuery,
- and each value is a list[v] (an ArrayBuffer of the 1s collected for that key):
- ([Automatically,ArrayBuffer(1))
- (LogQuery,ArrayBuffer(1))
- (2d3eae2,ArrayBuffer(1))
- (#130,ArrayBuffer(1))
- (8e9bd93,ArrayBuffer(1))
- (8,ArrayBuffer(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1))
- Use reduceByKey to compute the word count. Conceptually, reduceByKey = groupByKey + an aggregate function over list[v], although reduceByKey also combines values map-side before the shuffle, which makes it more efficient.
- wordcount.reduceByKey(_+_) foreach println
- The result:
- (instructions,2)
- (Adds,1)
- (ssh,1)
- (#691,1)
- (#863,1)
- (README],2)
- (#676,1)
- (javadoc,1)
- (#571,1)
- (0f1b7a0,1)
- (shimingfei/joblogger,1)
- (links,2)
- (memory-efficient,1)
- (pwendell/akka-standalone,1)
- (hsaputra/update-pom-asf,1)
- (method],1)
- (mapPartitionsWithIndex],1)
- (key-value,1)
- (22:19:00,1)
- (sbt,4)
- (e5b9ed2,1)
- (loss,1)
- (stephenh/volatile,1)
- (code,6)
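To make the "reduceByKey = groupByKey + aggregation" intuition concrete, the same counts could be produced as below (a sketch, not from the original transcript; reduceByKey remains preferable because of its map-side combining):
- scala> wordcount.groupByKey().map { case (word, ones) => (word, ones.sum) }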
distinct
- distinct removes duplicates, like a Set in Java.
- scala> wordcount.count()
- res43: Long = 12222
- wordcount.distinct.count()
- res44: Long = 3354
sortByKey
Sorts by key: true for ascending, false for descending.
- wordcount.sortByKey(true) take 10
- res10: Array[(java.lang.String, Int)] = Array(("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1), ("",1))
- wordcount.sortByKey(false) take 10
- res11: Array[(java.lang.String, Int)] = Array(({0,1},1), (zookeeper,1), (zip,1), (zip,1), (zip,1), (zero-sized,1), (zero,1), (yarn],1), (yarn.version,1), (yarn.version,1))
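sortByKey only sorts by key; to rank words by their counts instead, a common trick is to swap key and value, sort, and swap back. A hedged sketch (the counts RDD is hypothetical, not in the original transcript):
- scala> val counts = wordcount.reduceByKey(_ + _)
- scala> counts.map(_.swap).sortByKey(false).map(_.swap) take 10 // most frequent words first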
join
To keep the empty string "" from skewing the join, filter out "" elements first and reduce the pairs to counts:
- val wordcount = changes.flatMap(line=>line.split(" ")).filter(_!="").map(word=>(word,1)).reduceByKey(_+_)
- scala> val readme = sc.textFile("README.md")
- scala> val readMeWordCount = readme.flatMap(_.split(" ")).filter(_!="").map(word=>(word,1)).reduceByKey(_+_)
- scala> wordcount.join(readMeWordCount)
- res3: org.apache.spark.rdd.RDD[(java.lang.String, (Int, Int))] = FlatMappedValuesRDD[36] at join at <console>:21
- The result pairs up the values that share a key, i.e. (key, (v from wordcount, v from readMeWordCount)):
- wordcount.join(readMeWordCount) take 20
- res6: Array[(java.lang.String, (Int, Int))] = Array((at,(1,2)), (do,(1,1)), (by,(9,5)), (ASF,(2,1)), (adding,(3,1)), (all,(2,1)), (versions,(6,4)), (sbt/sbt,(1,6)), (under,(2,2)), (set,(7,1)))
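Note that join yields (K, (V, W)) tuples, one per matching pair of values; a tiny sketch with hypothetical inline data makes the shape obvious:
- scala> val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
- scala> val right = sc.parallelize(Seq(("a", 3)))
- scala> left.join(right).collect // Array((a,(1,3))) -- "b" has no match, so it is dropped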
cogroup
This operation groups two datasets by key: for each key, the values become ([seq1 from left], [seq2 from right]).
- wordcount.cogroup(readMeWordCount,2).filter(_._1==("do")) take 10
- res18: Array[(java.lang.String, (Seq[Int], Seq[Int]))] = Array((do,(ArrayBuffer(1),ArrayBuffer(1))))
- The first ArrayBuffer comes from the left dataset, the second from the right.
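Unlike join, cogroup keeps keys that appear on only one side, pairing them with an empty Seq; a sketch using the same hypothetical left/right RDDs from the join example above:
- scala> left.cogroup(right).collect
- // roughly: Array((a,(ArrayBuffer(1),ArrayBuffer(3))), (b,(ArrayBuffer(2),ArrayBuffer())))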
cartesian
Cartesian product... you know: m*n pairs. Don't casually try this on large datasets.
- wordcount.count
- 3353
- readMeWordCount.count
- res25: Long = 324
- wordcount.cartesian(readMeWordCount).count
- res23: Long = 1086372
- 3353 * 324 = 1086372
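On toy data, cartesian is harmless and shows the m*n blow-up directly (a sketch with hypothetical inline data, not from the original transcript):
- scala> val xs = sc.parallelize(1 to 3)
- scala> val ys = sc.parallelize(Seq("a", "b"))
- scala> xs.cartesian(ys).count // 3 * 2 = 6 pairs: (1,a), (1,b), (2,a), ...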
Original content; please credit the source when reposting: http://blog.csdn.net/oopsoom/article/details/22918991. Thanks.