I. Overview
1. foreachPartition is an action, while mapPartitions is a transformation. The practical difference: mapPartitions returns a new RDD, so you can keep chaining operations on its result; foreachPartition has no return value and triggers execution, so it is typically used at the end of a job, for example to write the data out to a storage system such as MySQL, Elasticsearch, or HBase.
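The write-to-a-sink pattern described above (open one connection per partition, write that partition's records, close) can be sketched without Spark or a real database. Below, `FakeConnection` is a hypothetical stand-in for a JDBC/Elasticsearch/HBase client, and a `Seq` of iterators stands in for an RDD's partitions; none of these names come from Spark's API.

```scala
// Hypothetical stand-in for a JDBC / Elasticsearch / HBase client.
class FakeConnection {
  FakeConnection.opened += 1
  def write(record: Int): Unit = FakeConnection.written :+= record
  def close(): Unit = ()
}
object FakeConnection {
  var opened = 0                 // how many connections were created
  var written: List[Int] = Nil   // every record written through any connection
}

// Two "partitions" standing in for the per-partition iterators
// that rdd.foreachPartition hands to your function.
val partitions: Seq[Iterator[Int]] = Seq(Iterator(1, 2, 3), Iterator(4, 5, 6))

// The pattern: one connection per partition, not one per record.
partitions.foreach { p =>
  val conn = new FakeConnection()
  p.foreach(conn.write)
  conn.close()
}
```

In real Spark the body runs on the executors; creating the connection inside `foreachPartition` avoids shipping a non-serializable client from the driver and amortizes its setup cost over the whole partition instead of paying it per record.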
2. Transformations provided by the official API
map
filter
flatMap
mapPartitions
mapPartitionsWithIndex
sample
union
intersection
distinct
groupByKey
reduceByKey
aggregateByKey
sortByKey
join
cogroup
cartesian
pipe
coalesce
repartition
repartitionAndSortWithinPartitions
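The defining property of every transformation in this list is laziness: calling one only records the computation, and nothing runs until an action is triggered. A rough local analogue (plain Scala, no Spark calls) is `Iterator.map`, which is also lazy:

```scala
var evaluated = 0

// Like an RDD transformation, Iterator.map records the function
// but evaluates nothing yet.
val lazyMapped = Iterator(1, 2, 3).map { n => evaluated += 1; n * 2 }
val evaluatedBeforeAction = evaluated // still 0: no element has been touched

// A terminal operation (Spark's "action") forces evaluation.
val result = lazyMapped.toList        // now evaluated becomes 3
```

This is the same reason the `println` inside `mapPartitions` in the code below prints nothing until an action runs on the resulting RDD.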
3. Actions provided by the official API
reduce
collect
count
first
take
takeSample
takeOrdered
saveAsTextFile
saveAsSequenceFile
saveAsObjectFile
countByKey
foreach
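Unlike transformations, actions return plain values to the driver (or write out a side effect) rather than producing a new RDD. A few of them have direct analogues on ordinary Scala collections, which is a convenient way to check their semantics locally; the sketch below uses those collection methods, not Spark calls:

```scala
val nums = Seq(5, 1, 4, 2, 3)

val total     = nums.reduce(_ + _)    // reduce: combine all elements into one value
val cnt       = nums.size             // count: number of elements
val firstEl   = nums.head             // first: the first element
val firstTwo  = nums.take(2)          // take(2): the first two elements
val smallest3 = nums.sorted.take(3)   // takeOrdered(3): the three smallest elements
```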
II. Code
package com.cn.spark
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.mutable.ArrayBuffer
object SparkMap extends App {
val conf: SparkConf = new SparkConf().setMaster("local[1]").setAppName("TestMapAndMapPartitions")
val sc: SparkContext = new SparkContext(conf)
sc.setLogLevel("WARN")
val rdd: RDD[Int] = sc.parallelize(Array(1,2,3,4,5,6))
/**
 * mapPartitions is a transformation.
 * It processes one partition (a batch of records) per call:
 * the RDD is split into partitions and each is passed in as `p`.
 * The function must return an Iterator, which becomes one partition
 * of the resulting RDD.
 */
private val v_value: RDD[Int] = rdd.mapPartitions(p => {
  val arr: ArrayBuffer[Int] = new ArrayBuffer[Int]()
  p.foreach(ele => arr += ele)
  arr.foreach(println) // no action has been triggered yet, so nothing prints
  arr.iterator
})
//v_value.foreach(println) // triggers an action; uncommenting this makes the prints above appear
println("================================")
/**
 * foreachPartition is an action: it runs immediately.
 */
private val f_value: Unit = rdd.foreachPartition(p => {
  val arr: ArrayBuffer[Int] = new ArrayBuffer[Int]()
  p.foreach(ele => arr += ele)
  arr.foreach(println) // prints, because the action executes right away
})
sc.stop()
}
Output:
.......
20/07/16 09:38:08 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.230.1, 62716, None)
20/07/16 09:38:08 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.230.1, 62716, None)
================================
1
2
3
4
5
6
Process finished with exit code 0