1. A look at the source code
1) reduceByKey
def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)] = self.withScope {
  combineByKeyWithClassTag[V]((v: V) => v, func, func, partitioner)
}

/**
 * Merge the values for each key using an associative and commutative reduce function. This will
 * also perform the merging locally on each mapper before sending results to a reducer, similarly
 * to a "combiner" in MapReduce. Output will be hash-partitioned with numPartitions partitions.
 */
def reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)] = self.withScope {
  reduceByKey(new HashPartitioner(numPartitions), func)
}

/**
 * Merge the values for each key using an associative and commutative reduce function. This will
 * also perform the merging locally on each mapper before sending results to a reducer, similarly
 * to a "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/
 * parallelism level.
 */
def reduceByKey(func: (V, V) => V): RDD[(K, V)] = self.withScope {
  reduceByKey(defaultPartitioner(self), func)
}
2) groupByKey
def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])] = self.withScope {
  // groupByKey shouldn't use map side combine because map side combine does not
  // reduce the amount of data shuffled and requires all map side data be inserted
  // into a hash table, leading to more objects in the old gen.
  val createCombiner = (v: V) => CompactBuffer(v)
  val mergeValue = (buf: CompactBuffer[V], v: V) => buf += v
  val mergeCombiners = (c1: CompactBuffer[V], c2: CompactBuffer[V]) => c1 ++= c2
  val bufs = combineByKeyWithClassTag[CompactBuffer[V]](
    createCombiner, mergeValue, mergeCombiners, partitioner, mapSideCombine = false)
  bufs.asInstanceOf[RDD[(K, Iterable[V])]]
}
3) aggregateByKey
def aggregateByKey[U: ClassTag](zeroValue: U, partitioner: Partitioner)(seqOp: (U, V) => U,
    combOp: (U, U) => U): RDD[(K, U)] = self.withScope {
  // Serialize the zero value to a byte array so that we can get a new clone of it on each key
  val zeroBuffer = SparkEnv.get.serializer.newInstance().serialize(zeroValue)
  val zeroArray = new Array[Byte](zeroBuffer.limit)
  zeroBuffer.get(zeroArray)

  lazy val cachedSerializer = SparkEnv.get.serializer.newInstance()
  val createZero = () => cachedSerializer.deserialize[U](ByteBuffer.wrap(zeroArray))

  // We will clean the combiner closure later in `combineByKey`
  val cleanedSeqOp = self.context.clean(seqOp)
  combineByKeyWithClassTag[U]((v: V) => cleanedSeqOp(createZero(), v),
    cleanedSeqOp, combOp, partitioner)
}
4) combineByKeyWithClassTag
def combineByKeyWithClassTag[C](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    mergeCombiners: (C, C) => C,
    partitioner: Partitioner,
    mapSideCombine: Boolean = true,
    serializer: Serializer = null)(implicit ct: ClassTag[C]): RDD[(K, C)] = self.withScope {
  require(mergeCombiners != null, "mergeCombiners must be defined") // required as of Spark 0.9.0
  if (keyClass.isArray) {
    if (mapSideCombine) {
      throw new SparkException("Cannot use map-side combining with array keys.")
    }
    if (partitioner.isInstanceOf[HashPartitioner]) {
      throw new SparkException("HashPartitioner cannot partition array keys.")
    }
  }
  val aggregator = new Aggregator[K, V, C](
    self.context.clean(createCombiner),
    self.context.clean(mergeValue),
    self.context.clean(mergeCombiners))
  if (self.partitioner == Some(partitioner)) {
    self.mapPartitions(iter => {
      val context = TaskContext.get()
      new InterruptibleIterator(context, aggregator.combineValuesByKey(iter, context))
    }, preservesPartitioning = true)
  } else {
    new ShuffledRDD[K, V, C](self, partitioner)
      .setSerializer(serializer)
      .setAggregator(aggregator)
      .setMapSideCombine(mapSideCombine)
  }
}
2. Differences seen from the source code
1) Difference between reduceByKey and groupByKey
Looking at the reduceByKey source first, it simply delegates to combineByKeyWithClassTag. groupByKey also calls combineByKeyWithClassTag, but note the argument mapSideCombine = false, whereas combineByKeyWithClassTag defaults mapSideCombine to true. mapSideCombine controls whether pre-aggregation (the map-side combine) is performed before the shuffle.
(Flow diagrams comparing the two shuffle paths are omitted here.)
From this difference we can see that groupByKey is acceptable on small datasets, but on large ones it pushes every record through the shuffle, consuming a lot of memory and network. So in code we should avoid groupByKey wherever possible and use reduceByKey instead.
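To make the difference concrete, here is a minimal word-count-style sketch, assuming an existing SparkContext `sc` and made-up data. Both pipelines return the same result, but reduceByKey shuffles only one partial sum per key and partition:

val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1), ("a", 1)))

// reduceByKey: values are combined on the map side, so only (key, partial sum)
// pairs cross the shuffle.
val reduced = pairs.reduceByKey(_ + _)

// groupByKey: every (key, value) record crosses the shuffle, and the summing
// happens only afterwards.
val grouped = pairs.groupByKey().mapValues(_.sum)

reduced.collect()  // Array((a,3), (b,1))
grouped.collect()  // Array((a,3), (b,1))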
2) Difference between reduceByKey and aggregateByKey
reduceByKey can be viewed as a simplified version of aggregateByKey.
aggregateByKey takes a zero value plus two functions, i.e. it exposes one extra function, the seq function (seqOp).
With seqOp you control how the data inside each partition is pre-aggregated, similar to the map-side combine in MapReduce.
Only afterwards is the data from all partitions aggregated globally with combOp.
Example:
rdd.aggregateByKey(zeroValue)((L, str) => L += str, (L1, L2) => L1 ++= L2)
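Expanded into a self-contained sketch (the SparkContext `sc`, the sample data, and the ListBuffer zero value are assumptions made for illustration):

import scala.collection.mutable.ListBuffer

val rdd = sc.parallelize(Seq((1, "a"), (1, "b"), (2, "c")))

val zeroValue = ListBuffer.empty[String]          // cloned per key via serialization (see the source above)
val byKey = rdd.aggregateByKey(zeroValue)(
  (buf, str) => buf += str,                       // seqOp: fold a value into the per-partition buffer
  (buf1, buf2) => buf1 ++= buf2)                  // combOp: merge buffers coming from different partitions

byKey.collect()  // e.g. Array((1, ListBuffer(a, b)), (2, ListBuffer(c)))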
3) Difference between aggregateByKey and groupByKey
The same as the difference between reduceByKey and groupByKey: aggregateByKey combines on the map side, groupByKey does not.
4) The implementation flow of combineByKeyWithClassTag
From the source, the function takes the following parameters:
createCombiner: V => C
mergeValue: (C, V) => C
mergeCombiners: (C, C) => C
partitioner: Partitioner
mapSideCombine: Boolean = true
serializer: Serializer = null
Each of these parameters corresponds to a stage of the aggregation.
Parameter details:
1. createCombiner: V => C — creates the combiner for a key within a group. In plain terms, it initializes the first value read for a key: it takes that value as a parameter and may transform it into the format we want to accumulate in.
2. mergeValue: (C, V) => C — the merge function inside a partition. It folds a value V into the combiner C already built for the same key. Note that C is the transformed accumulator format produced by the previous function, while V is still the original value format.
3. mergeCombiners: (C, C) => C — merges combiners across partitions, turning two Cs into one C; for example, two Int accumulators are added to give a single Int (see the sketch after this list).
4. partitioner: Partitioner — the partitioner used for the shuffle; the higher-level APIs default to a HashPartitioner (via defaultPartitioner).
5. mapSideCombine: Boolean = true — whether to run the combine on the map side before shuffling.
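For example, a per-key average maps onto the first three parameters as follows. This is a hedged sketch: `sc` and the data are assumptions, and the public combineByKey method used here delegates to combineByKeyWithClassTag:

val scores = sc.parallelize(Seq(("a", 10), ("a", 20), ("b", 30)))

val sumAndCount = scores.combineByKey(
  (v: Int) => (v, 1),                                           // createCombiner: first value seen for a key
  (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue: fold in another value from the same partition
  (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)) // mergeCombiners: merge accumulators across partitions

val avg = sumAndCount.mapValues { case (sum, count) => sum.toDouble / count }
avg.collect()  // e.g. Array((a,15.0), (b,30.0))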
How the function works:
First, note that the function iterates over an RDD of (k, v) pairs.
1. combineByKey walks through every (k, v) pair in each partition and checks whether the key k has already been seen in that partition. If this is the first occurrence, createCombiner is called on the value to initialize the accumulator for that key (this is where the value can be transformed).
2. If the key has already been seen in the current partition, mergeValue is called instead, folding the new value into that key's existing accumulator (a toy sketch of this per-partition loop follows the list).
3. Finally, when the results of different partitions are brought together during the shuffle, mergeCombiners merges the accumulators that different partitions produced for the same key.
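To make the per-partition loop concrete, here is a toy, non-Spark sketch of what the Aggregator does when combining values by key. Spark's real implementation uses an ExternalAppendOnlyMap that can spill to disk, which this sketch omits:

import scala.collection.mutable

// Simplified stand-in for the per-partition combine: walk one partition's
// (k, v) pairs and keep one accumulator per key.
def combineValuesByKey[K, V, C](
    iter: Iterator[(K, V)],
    createCombiner: V => C,
    mergeValue: (C, V) => C): Iterator[(K, C)] = {
  val combiners = mutable.Map.empty[K, C]
  for ((k, v) <- iter) {
    combiners.get(k) match {
      case None    => combiners(k) = createCombiner(v) // first time this key appears in the partition
      case Some(c) => combiners(k) = mergeValue(c, v)  // key seen before: fold the new value in
    }
  }
  combiners.iterator
}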