It feels like a long time since I last updated this blog. I have been a bit adrift lately, so here is a technical post to get myself back on track.
Have you ever noticed that when a Spark program calls sortBy or sortByKey, a job actually gets submitted, even though both are transformations rather than actions? Why would a transformation trigger job execution? Let's get to the bottom of it. Consider this snippet:
val wordCountRdd = spark.sparkContext.textFile(path)
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
val sortByCountDescRdd = wordCountRdd.sortBy(-_._2)
When you enter this code in the spark-shell, you will see a progress bar of the kind that normally only shows up for actions. What exactly is going on here?
First of all, sortBy is implemented by calling sortByKey, so we only need to study the implementation of sortByKey.
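For reference, this is roughly what sortBy looks like in the Spark source (abridged; the exact signature may differ slightly between versions). It keys each record by the sort function f, delegates to sortByKey, and then drops the keys again:

def sortBy[K](
    f: (T) => K,
    ascending: Boolean = true,
    numPartitions: Int = this.partitions.length)
    (implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T] = withScope {
  this.keyBy[K](f)
      .sortByKey(ascending, numPartitions)
      .values
}

And sortByKey itself: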
def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length)
    : RDD[(K, V)] = self.withScope
{
  // Constructing the RangePartitioner is the eager part: its constructor
  // samples the input to compute range boundaries (see below).
  val part = new RangePartitioner(numPartitions, self, ascending)
  new ShuffledRDD[K, V, V](self, part)
    .setKeyOrdering(if (ascending) ordering else ordering.reverse)
}
Now let's look at the implementation of the RangePartitioner; I will only walk through the important parts.
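The crucial detail is that RangePartitioner computes its rangeBounds field, the boundary keys that delimit the output partitions, eagerly in its constructor, and that computation calls the sketch method. Abridged from the Spark source (constants and field names vary slightly across versions):

// Inside class RangePartitioner (abridged):
private var rangeBounds: Array[K] = {
  if (partitions <= 1) {
    Array.empty
  } else {
    // Sample size needed for roughly balanced output partitions, capped at 1M.
    val sampleSize = math.min(20.0 * partitions, 1e6)
    // Assume input partitions are roughly balanced; over-sample a little.
    val sampleSizePerPartition = math.ceil(3.0 * sampleSize / rdd.partitions.length).toInt
    val (numItems, sketched) = RangePartitioner.sketch(rdd.map(_._1), sampleSizePerPartition)
    // ... compute the boundary keys from the samples ...
  }
}

And sketch performs reservoir sampling over every partition: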
private[spark] object RangePartitioner {

  /**
   * Sketches the input RDD via reservoir sampling on each partition.
   *
   * @param rdd the input RDD to sketch
   * @param sampleSizePerPartition max sample size per partition
   * @return (total number of items, an array of (partitionId, number of items, sample))
   */
  def sketch[K : ClassTag](
      rdd: RDD[K],
      sampleSizePerPartition: Int): (Long, Array[(Int, Long, Array[K])]) = {
    val shift = rdd.id
    // val classTagK = classTag[K] // to avoid serializing the entire partitioner object
    val sketched = rdd.mapPartitionsWithIndex { (idx, iter) =>
      val seed = byteswap32(idx ^ (shift << 16))
      val (sample, n) = SamplingUtils.reservoirSampleAndCount(
        iter, sampleSizePerPartition, seed)
      Iterator((idx, n, sample))
    }.collect()
    val numItems = sketched.map(_._2).sum
    (numItems, sketched)
  }
Here it is: sketch calls collect, an RDD action, and that is what triggers the job you saw in the shell. The point of the collect is to sample the keys and learn their distribution; the sampled keys are then used to compute the partition range boundaries. In other words, the job is preparation for the global sort, not the sort itself.
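A quick way to observe this yourself (hypothetical spark-shell session; the job numbering is what you would see in the Spark UI):

// The sortBy line alone fires a small sampling job, even though
// no action has been called on sortByCountDescRdd yet.
val sortByCountDescRdd = wordCountRdd.sortBy(-_._2)  // job 0: key sampling via collect()
sortByCountDescRdd.take(3)                           // job 1+: the actual shuffle and sort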
If you want the full details, read the Spark source and trace the execution yourself.