Spark repartition VS coalesce



Demo
scala> val data = sc.parallelize(Array("a","b","c","d","e","a","b","e","b","f"), 2)
scala>  data.glom().collect()
res1: Array[Array[String]] = Array(Array(a, b, c, d, e), Array(a, b, e, b, f))
// decrease partitions, but a shuffle occurs
scala>  data.repartition(1).glom().collect()
res2: Array[Array[String]] = Array(Array(a, b, c, d, e, a, b, e, b, f))
// partition count unchanged, a shuffle still occurs
scala> data.repartition(2).glom().collect()
res3: Array[Array[String]] = Array(Array(a, c, e, b, b), Array(b, d, a, e, f))
// increase partitions, a shuffle occurs
scala>  data.repartition(3).glom().collect()
res4: Array[Array[String]] = Array(Array(c, b, f), Array(a, d, e), Array(b, e, a, b))
// decrease partitions, no shuffle
scala>   data.coalesce(1).glom().collect()
res5: Array[Array[String]] = Array(Array(a, b, c, d, e, a, b, e, b, f))
// partition count unchanged, no shuffle
scala>  data.coalesce(2).glom().collect()
res6: Array[Array[String]] = Array(Array(a, b, c, d, e), Array(a, b, e, b, f))
// increase partitions: no effect (stays at 2)
scala>  data.coalesce(3).glom().collect()
res7: Array[Array[String]] = Array(Array(a, b, c, d, e), Array(a, b, e, b, f))
// shuffle = true: decreases partitions, but with a shuffle
scala>  data.coalesce(1,true).glom().collect()
res8: Array[Array[String]] = Array(Array(a, b, c, d, e, a, b, e, b, f))
// shuffle = true: partition count unchanged, but with a shuffle
scala> data.coalesce(2,true).glom().collect()
res9: Array[Array[String]] = Array(Array(a, c, e, b, b), Array(b, d, a, e, f))

// shuffle = true: increasing partitions now works, but with a shuffle
scala> data.coalesce(3,true).glom().collect()
res10: Array[Array[String]] = Array(Array(c, b, f), Array(a, d, e), Array(b, e, a, b))
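
Besides the UI, the lineage printed by toDebugString shows whether a shuffle was inserted. A quick check (the comments describe the RDD chain you should see, not verbatim output):

scala> data.coalesce(1).toDebugString    // CoalescedRDD <- ParallelCollectionRDD: narrow, no shuffle
scala> data.repartition(1).toDebugString // CoalescedRDD <- ShuffledRDD <- ...: a shuffle stage appears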
Spark UI screenshots

[Screenshots omitted: Spark UI views of the repartition jobs and the coalesce jobs from the demo above]
Source code
  • repartition
/**
   * Return a new RDD that has exactly numPartitions partitions.
   *
   * Can increase or decrease the level of parallelism in this RDD. Internally, this uses
   * a shuffle to redistribute data.   <- note: this shuffles
   *
   * If you are decreasing the number of partitions in this RDD, consider using `coalesce`,
   * which can avoid performing a shuffle.   <- note: prefer coalesce when decreasing partitions, to avoid the shuffle
   *
   * TODO Fix the Shuffle+Repartition data loss issue described in SPARK-23207.
   */
  def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
    coalesce(numPartitions, shuffle = true)
  }
  As the source shows, repartition simply calls coalesce with shuffle = true.
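
A direct consequence: the two calls below are interchangeable. Both shuffle and both use the same random-start round-robin key assignment, which is why res4 and res10 in the demo show identical layouts:

scala> data.repartition(3)              // equivalent to the next line
scala> data.coalesce(3, shuffle = true)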

  • coalesce
/**
   * Return a new RDD that is reduced into `numPartitions` partitions.
   *
   * This results in a narrow dependency, e.g. if you go from 1000 partitions      <- narrow dependency
   * to 100 partitions, there will not be a shuffle, instead each of the 100       <- decreasing partitions does not shuffle
   * new partitions will claim 10 of the current partitions. If a larger number
   * of partitions is requested, it will stay at the current number of partitions. <- asking for more keeps the current count
   *
   * However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
   * this may result in your computation taking place on fewer nodes than
   * you like (e.g. one node in the case of numPartitions = 1). To avoid this,
   * you can pass shuffle = true. This will add a shuffle step, but means the
   * current upstream partitions will be executed in parallel (per whatever
   * the current partitioning is).   <- note: a drastic reduction kills parallelism; shuffle = true keeps upstream partitions running in parallel
   *
   * @note With shuffle = true, you can actually coalesce to a larger number
   * of partitions. This is useful if you have a small number of partitions,
   * say 100, potentially with a few partitions being abnormally large. Calling
   * coalesce(1000, shuffle = true) will result in 1000 partitions with the
   * data distributed using a hash partitioner. The optional partition coalescer
   * passed in must be serializable.   <- note: with shuffle = true you can also increase partitions to raise parallelism
   */
  def coalesce(numPartitions: Int, shuffle: Boolean = false,
               partitionCoalescer: Option[PartitionCoalescer] = Option.empty)
              (implicit ord: Ordering[T] = null)
      : RDD[T] = withScope {
    require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")
    if (shuffle) {
      /** Distributes elements evenly across output partitions, starting from a random partition. */
      val distributePartition = (index: Int, items: Iterator[T]) => {
        var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
        items.map { t =>
          // Note that the hash code of the key will just be the key itself. The HashPartitioner
          // will mod it with the number of total partitions.
          position = position + 1
          (position, t)
        }
      } : Iterator[(Int, T)]

      // include a shuffle step so that our upstream tasks are still distributed
      new CoalescedRDD(
        new ShuffledRDD[Int, T, T](
          mapPartitionsWithIndexInternal(distributePartition, isOrderSensitive = true),
          new HashPartitioner(numPartitions)),
        numPartitions,
        partitionCoalescer).values
    } else {
    // no shuffle: just wrap this RDD in a CoalescedRDD (narrow dependency)
      new CoalescedRDD(this, numPartitions, partitionCoalescer)
    }
  }
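
The round-robin keying in distributePartition can be simulated in plain Scala. A minimal sketch (the partition index, elements, and partition count are made-up example values):

import scala.util.Random
import scala.util.hashing

val numPartitions = 3
val index = 0                                  // example input-partition index
val items = Iterator("a", "b", "c", "d", "e")  // example elements of that partition

// deterministic random start per input partition, then round-robin keys;
// HashPartitioner later applies key % numPartitions
var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
val keyed = items.map { t =>
  position += 1
  (position % numPartitions, t)  // the mod HashPartitioner would apply
}.toList
// consecutive elements of one input partition land in consecutive output partitions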

  
/**
 * Represents a coalesced RDD that has fewer partitions than its parent RDD
 * This class uses the PartitionCoalescer class to find a good partitioning of the parent RDD
 * so that each new partition has roughly the same number of parent partitions and that
 * the preferred location of each new partition overlaps with as many preferred locations of its
 * parent partitions
 * @param prev RDD to be coalesced
 * @param maxPartitions number of desired partitions in the coalesced RDD (must be positive)
 * @param partitionCoalescer [[PartitionCoalescer]] implementation to use for coalescing
 */
private[spark] class CoalescedRDD[T: ClassTag](
    @transient var prev: RDD[T],
    maxPartitions: Int,
    partitionCoalescer: Option[PartitionCoalescer] = None)
  extends RDD[T](prev.context, Nil) {  // Nil since we implement getDependencies
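
The 1000 -> 100 example from the docstring is easy to verify in the REPL (a sketch, assuming a running sc; the element count is arbitrary):

scala> val wide = sc.parallelize(1 to 1000, 1000)  // 1000 partitions, 1 element each
scala> val narrow = wide.coalesce(100)             // narrow dependency, no shuffle
scala> narrow.getNumPartitions                     // 100
scala> narrow.glom().map(_.length).collect()       // each output partition claims ~10 parents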
Summary
  • Typical use cases
    coalesce is generally used to decrease partitions without a shuffle, e.g. after a filter that discards most of the data, so that far fewer small output files are produced.
    repartition is generally used to increase partitions at the cost of a shuffle. For example, a large file in an unsplittable format is read as a single partition, which under-utilizes the cluster; repartitioning raises the parallelism so resources are used properly and execution speeds up. The price is the shuffle, the payoff is faster processing. (Both patterns are sketched below.)
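
A sketch of both patterns (paths, formats, and partition counts are illustrative assumptions, not from the original post):

// pattern 1: a filter drops most rows, then coalesce shrinks partitions
// without a shuffle before writing, avoiding many small output files
val errors = sc.textFile("hdfs:///logs/raw")             // hypothetical input path
  .filter(_.contains("ERROR"))                           // keeps only a small fraction
errors.coalesce(8).saveAsTextFile("hdfs:///logs/errors") // 8 is illustrative

// pattern 2: an unsplittable file (e.g. gzip) loads as a single partition;
// repartition pays one shuffle to restore downstream parallelism
val single = sc.textFile("hdfs:///archive/big.gz")       // reads as 1 partition
val parallel = single.repartition(64)                    // shuffle, then 64-way parallelism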