Spark: Source Code Analysis of coalesce and repartition

Spark version: 2.4.0

Source file: org/apache/spark/rdd/RDD.scala

Usage example:
scala> val x = (1 to 10).toList
x: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
scala> val df1 = x.toDF("number")
df1: org.apache.spark.sql.DataFrame = [number: int]
scala> df1.rdd.partitions.size
res0: Int = 2 // the source data starts with 2 partitions

// coalesce to reduce the partition count
scala> val df2 = df1.coalesce(1)
df2: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [number: int]
scala> df2.rdd.partitions.size
res1: Int = 1

// coalesce to increase the partition count
scala> val df3 = df1.coalesce(4)
df3: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [number: int]
scala> df3.rdd.partitions.size
res3: Int = 2 // note: increasing the partition count via coalesce did not take effect

// repartition to reduce the partition count
scala> val df2 = df1.repartition(1)
df2: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [number: int]
scala> df2.rdd.partitions.size
res4: Int = 1

// repartition to increase the partition count
scala> val df3 = df1.repartition(4)
df3: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [number: int]
scala> df3.rdd.partitions.size
res5: Int = 4 // note: increasing the partition count via repartition took effect

Source code:
/**
   * Return a new RDD that is reduced into `numPartitions` partitions.
   *
   * This results in a narrow dependency, e.g. if you go from 1000 partitions
   * to 100 partitions, there will not be a shuffle, instead each of the 100
   * new partitions will claim 10 of the current partitions. If a larger number
   * of partitions is requested, it will stay at the current number of partitions.
   *
   * However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
   * this may result in your computation taking place on fewer nodes than
   * you like (e.g. one node in the case of numPartitions = 1). To avoid this,
   * you can pass shuffle = true. This will add a shuffle step, but means the
   * current upstream partitions will be executed in parallel (per whatever
   * the current partitioning is).
   *
   * @note With shuffle = true, you can actually coalesce to a larger number
   * of partitions. This is useful if you have a small number of partitions,
   * say 100, potentially with a few partitions being abnormally large. Calling
   * coalesce(1000, shuffle = true) will result in 1000 partitions with the
   * data distributed using a hash partitioner. The optional partition coalescer
   * passed in must be serializable.
   */
  def coalesce(numPartitions: Int, shuffle: Boolean = false,
               partitionCoalescer: Option[PartitionCoalescer] = Option.empty)
              (implicit ord: Ordering[T] = null)
      : RDD[T] = withScope {
    require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")
    if (shuffle) {
      /** Distributes elements evenly across output partitions, starting from a random partition. */
      val distributePartition = (index: Int, items: Iterator[T]) => {
        var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
        items.map { t =>
          // Note that the hash code of the key will just be the key itself. The HashPartitioner
          // will mod it with the number of total partitions.
          position = position + 1
          (position, t)
        }
      } : Iterator[(Int, T)]

      // include a shuffle step so that our upstream tasks are still distributed
      new CoalescedRDD(
        new ShuffledRDD[Int, T, T](
          mapPartitionsWithIndexInternal(distributePartition, isOrderSensitive = true),
          new HashPartitioner(numPartitions)),
        numPartitions,
        partitionCoalescer).values
    } else {
      new CoalescedRDD(this, numPartitions, partitionCoalescer)
    }
  }
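
The distributePartition closure above is what makes the shuffle = true path spread data evenly: each input partition starts writing at a pseudo-random output slot (seeded by byteswap32 of its partition index) and then assigns keys round-robin, and HashPartitioner later takes those keys modulo numPartitions. Below is a minimal standalone sketch of that assignment logic, outside Spark; simulateDistribute is a hypothetical name, and unlike the real code it applies the modulus inline so the final output partition is visible directly:

import scala.util.Random
import scala.util.hashing

// Replays the key-assignment logic of distributePartition for one input
// partition. Illustrative only; not part of Spark's API.
def simulateDistribute(index: Int, items: Seq[Int], numPartitions: Int): Seq[(Int, Int)] = {
  // Each input partition starts at a pseudo-random output slot, seeded by
  // byteswap32 of its index, so partitions do not all start writing at slot 0.
  var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
  items.map { t =>
    position += 1
    // Spark emits (position, t) and lets HashPartitioner apply the modulus;
    // we apply it here directly to show the final output partition.
    (position % numPartitions, t)
  }
}

// For example, input partition 0 coalesced to 4 output partitions:
// simulateDistribute(0, 1 to 5, 4) assigns consecutive elements to
// consecutive output partitions, wrapping around from the seeded start.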

As the source above shows, coalesce defaults shuffle to false, which suits reducing the partition count: existing partitions are simply merged through a narrow dependency, with no shuffle. Without setting shuffle = true, asking for more partitions has no effect, and the RDD keeps its current partition count.
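
Note that Dataset.coalesce does not expose the shuffle flag, but RDD.coalesce does, so at the RDD level an increase does take effect once shuffle = true is passed (as the scaladoc above states). A quick spark-shell check, as a sketch; the res numbering will vary:

scala> val rdd = sc.parallelize(1 to 10, 2)
scala> rdd.coalesce(4).partitions.size
res6: Int = 2 // shuffle defaults to false: no effect
scala> rdd.coalesce(4, shuffle = true).partitions.size
res7: Int = 4 // with a shuffle, the increase takes effect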

/**
   * Return a new RDD that has exactly numPartitions partitions.
   *
   * Can increase or decrease the level of parallelism in this RDD. Internally, this uses
   * a shuffle to redistribute data.
   *
   * If you are decreasing the number of partitions in this RDD, consider using `coalesce`,
   * which can avoid performing a shuffle.
   */
  def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
    coalesce(numPartitions, shuffle = true)
  }

As the source above shows, repartition simply calls coalesce(numPartitions, shuffle = true). With shuffle fixed to true, the data is redistributed as described in the coalesce analysis above, so repartition works for both increasing and decreasing the partition count. In production, however, prefer coalesce when only reducing partitions, since that avoids the shuffle entirely.
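
Because repartition always goes through the shuffle branch, its lineage contains a ShuffledRDD and a stage boundary, whereas a plain coalesce is a narrow CoalescedRDD on top of the parent. This can be checked with toDebugString; a sketch, with the exact lineage strings varying by version:

scala> val rdd = sc.parallelize(1 to 10, 4)
scala> rdd.coalesce(2).toDebugString
// shows only a CoalescedRDD over the ParallelCollectionRDD -- narrow, no shuffle
scala> rdd.repartition(2).toDebugString
// shows a ShuffledRDD in the lineage, i.e. a full shuffle step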
