reduceByKey
A method on the implicit enrichment class PairRDDFunctions.
This overload obtains the default partitioner and passes the aggregation function along:
def reduceByKey(func: (V, V) => V): RDD[(K, V)] = self.withScope {
  reduceByKey(defaultPartitioner(self), func)
}
It then calls combineByKeyWithClassTag, also in PairRDDFunctions:
def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)] = self.withScope {
  combineByKeyWithClassTag[V]((v: V) => v, func, func, partitioner)
}
Three functions and a partitioner are passed in:
(v: V) => v — when the first value for a key is encountered, keep it as-is (createCombiner)
func — combines values on the map side first (mergeValue)
func — then merges the partial results on the reduce side (mergeCombiners)
partitioner — the default partitioner; can be replaced by calling the overload that takes one explicitly
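To make the roles of the three functions concrete, here is a small sketch (plain Scala collections, not Spark) that emulates what combining values by key within one partition looks like:

```scala
// Sketch only: emulates the per-partition combine that reduceByKey sets up.
val pairs = Seq(("a", 1), ("b", 2), ("a", 3), ("b", 4))

val createCombiner: Int => Int = v => v       // first value for a key: keep it
val mergeValue: (Int, Int) => Int = _ + _     // map side: fold another value in
val mergeCombiners: (Int, Int) => Int = _ + _ // reduce side: merge partial sums

// Combine values by key over one "partition" (here: the whole Seq)
val combined = pairs.foldLeft(Map.empty[String, Int]) { case (acc, (k, v)) =>
  acc.get(k) match {
    case None    => acc + (k -> createCombiner(v)) // first occurrence of the key
    case Some(c) => acc + (k -> mergeValue(c, v))  // subsequent occurrences
  }
}
// combined == Map("a" -> 4, "b" -> 6)
```

For reduceByKey specifically, mergeValue and mergeCombiners are the same function, which is why a single func suffices; combineByKey in general lets the combiner type C differ from the value type V.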
The combineByKeyWithClassTag method:
def combineByKeyWithClassTag[C](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    mergeCombiners: (C, C) => C,
    partitioner: Partitioner,
    mapSideCombine: Boolean = true, // whether to combine once on the map side first; improves efficiency
    serializer: Serializer = null)(implicit ct: ClassTag[C]): RDD[(K, C)] = self.withScope {
  require(mergeCombiners != null, "mergeCombiners must be defined") // required as of Spark 0.9.0
  if (keyClass.isArray) { // keys must not be arrays in these cases, otherwise an exception is thrown
    if (mapSideCombine) {
      throw new SparkException("Cannot use map-side combining with array keys.")
    }
    if (partitioner.isInstanceOf[HashPartitioner]) {
      throw new SparkException("HashPartitioner cannot partition array keys.")
    }
  }
  val aggregator = new Aggregator[K, V, C]( // wraps the three functions
    self.context.clean(createCombiner),
    self.context.clean(mergeValue),
    self.context.clean(mergeCombiners))
  if (self.partitioner == Some(partitioner)) { // already partitioned the same way: no shuffle needed
    self.mapPartitions(iter => {
      val context = TaskContext.get()
      new InterruptibleIterator(context, aggregator.combineValuesByKey(iter, context))
    }, preservesPartitioning = true)
  } else {
    new ShuffledRDD[K, V, C](self, partitioner)
      .setSerializer(serializer)
      .setAggregator(aggregator)
      .setMapSideCombine(mapSideCombine)
  }
}
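The two branches above can be observed from user code. A sketch, assuming local mode (the app name and master are placeholders for illustration):

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("demo"))
val rdd = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// Default-partitioner overload
val sums = rdd.reduceByKey(_ + _)

// Explicit-partitioner overload; both end up in combineByKeyWithClassTag
val sums4 = rdd.reduceByKey(new HashPartitioner(4), _ + _)

// If the RDD is already partitioned with the same partitioner,
// reduceByKey takes the mapPartitions branch and no shuffle occurs
val partitioned = rdd.partitionBy(new HashPartitioner(4))
val noShuffle = partitioned.reduceByKey(new HashPartitioner(4), _ + _)
```

This is why pre-partitioning an RDD that is reduced repeatedly with the same partitioner can save shuffles.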
In the end a ShuffledRDD is constructed.
ShuffledRDD
It is only built here; being lazy, it waits for an action to trigger execution.
Shuffle tuning
Change the serialization format — Java serialization is too slow
Combining once on the map side first means less data is sent over the network
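For the serializer point, Spark supports switching the shuffle serializer to Kryo via configuration. A minimal sketch (MyRecord is a hypothetical user class used only to illustrate registration):

```scala
import org.apache.spark.SparkConf

case class MyRecord(id: Long, name: String) // hypothetical user class

val conf = new SparkConf()
  .setAppName("shuffle-tuning")
  // Replace the default Java serializer with Kryo: faster and more compact
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Registering classes lets Kryo write small IDs instead of full class names
  .registerKryoClasses(Array(classOf[MyRecord]))
```

The map-side combine, by contrast, needs no configuration: reduceByKey enables it by default (mapSideCombine = true), which is one reason it is usually preferred over groupByKey followed by a reduce.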