combineByKey is one of Spark's core operators and is worth understanding in detail: key-value pair operators such as reduceByKey and groupByKey are implemented on top of it. (Since version 1.6.0 it delegates to combineByKeyWithClassTag.)
Source definition of combineByKey:
def combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C): RDD[(K, C)]
def combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, numPartitions: Int): RDD[(K, C)]
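The three functions divide the work as follows: createCombiner runs the first time a key is seen in a partition and turns the value V into a combiner C; mergeValue folds further values for that key into the partition-local combiner; mergeCombiners merges combiners for the same key across partitions during the shuffle. A minimal sketch of a classic use case, computing a per-key average, where reduceByKey does not fit because the combiner type (sum, count) differs from the value type (the data and names here are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object CombineByKeyExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("combineByKey-demo").setMaster("local[*]"))

    // (student, score) pairs; goal: the average score per student
    val scores = sc.parallelize(
      Seq(("alice", 88.0), ("bob", 95.0), ("alice", 91.0), ("bob", 93.0)))

    val sumCount = scores.combineByKey(
      // createCombiner: first value for a key in a partition -> (sum, count)
      (v: Double) => (v, 1),
      // mergeValue: fold another value for the same key into the local combiner
      (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1),
      // mergeCombiners: merge per-partition combiners for the same key
      (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2)
    )

    val averages = sumCount.mapValues { case (sum, count) => sum / count }
    averages.collect().foreach(println) // e.g. (alice,89.5), (bob,94.0)

    sc.stop()
  }
}

Note that mergeValue runs before the shuffle, so partial sums and counts are combined map-side; this is exactly the mechanism reduceByKey reuses, with all three functions set to the same reduce function.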