Spark currently supports Hash partitioning, Range partitioning, and user-defined partitioning.
Hash partitioning is the default. The partitioner determines the number of partitions in an RDD and which partition each record goes to after a shuffle, which in turn determines the number of reduce tasks.
- Only Key-Value RDDs have a partitioner; for non-Key-Value RDDs the partitioner is None (see the sketch after this list)
- Partition IDs of an RDD range from 0 to (numPartitions - 1); the ID determines which partition a record belongs to
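A minimal sketch illustrating both points (the local master, app name, and sample data are assumptions for illustration):

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object PartitionerCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("PartitionerCheck"))

    // A non-Key-Value RDD has no partitioner.
    val nums = sc.parallelize(1 to 8)
    println(nums.partitioner) // None

    // A Key-Value RDD gains a partitioner after partitionBy.
    val pairs = nums.map(n => (n, n * n)).partitionBy(new HashPartitioner(4))
    println(pairs.partitioner)      // Some(org.apache.spark.HashPartitioner@4)
    println(pairs.getNumPartitions) // 4, i.e. partition IDs 0 to 3

    sc.stop()
  }
}
```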
1) Hash partitioning
For a given key, compute its hashCode and take it modulo the number of partitions; the (non-negative) remainder is the partition ID:
```scala
class HashPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")

  def numPartitions: Int = partitions

  // null keys always go to partition 0; any other key is mapped by its
  // hashCode modulo numPartitions, adjusted to stay non-negative even
  // when the hashCode is negative.
  def getPartition(key: Any): Int = key match {
    case null => 0
    case _ => Utils.nonNegativeMod(key.hashCode, numPartitions)
  }

  // Two HashPartitioners are equal iff they have the same number of
  // partitions; Spark uses this to avoid re-shuffling data that is
  // already partitioned the right way.
  override def equals(other: Any): Boolean = other match {
    case h: HashPartitioner => h.numPartitions == numPartitions
    case _ => false
  }

  override def hashCode: Int = numPartitions
}
```
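A quick usage sketch (the partition count and keys are arbitrary choices for illustration):

```scala
import org.apache.spark.HashPartitioner

object HashPartitionerDemo extends App {
  val p = new HashPartitioner(4)

  println(p.getPartition("spark")) // "spark".hashCode modulo 4, kept non-negative
  println(p.getPartition(null))    // 0: null keys always land in partition 0
  println(p.getPartition(-7))      // 1: (-7 % 4) is -3, nonNegativeMod adds 4
}
```

Note that this modulo-based mapping says nothing about ordering between partitions, so a Hash-partitioned RDD carries no global ordering guarantee.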
2) Range partitioning
Range partitioning maps keys that fall within a given range to the same partition. It tries to keep the data in each partition roughly even, and the partitions themselves are ordered: every key in one partition is smaller than every key in the next. A sketch follows.
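A minimal sketch (local master and sample data are invented for illustration) that applies a RangePartitioner directly via partitionBy; sortByKey builds one internally in the same way:

```scala
import org.apache.spark.{RangePartitioner, SparkConf, SparkContext}

object RangePartitionerDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("RangePartitionerDemo"))

    val pairs = sc.parallelize(Seq((9, "i"), (1, "a"), (5, "e"), (3, "c"), (7, "g")))

    // RangePartitioner samples the RDD to choose range boundaries,
    // aiming for roughly equal-sized partitions.
    val ranged = pairs.partitionBy(new RangePartitioner(3, pairs))

    // Every key in partition i is smaller than every key in partition i + 1.
    ranged.mapPartitionsWithIndex { (idx, it) =>
      Iterator(s"partition $idx: " + it.map(_._1).mkString(", "))
    }.collect().foreach(println)

    sc.stop()
  }
}
```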