Spark Tuning for Big Data: Data Skew

Data Skew Symptoms

1. Symptom
The vast majority of tasks finish quickly, but a handful of tasks run extremely slowly, and those straggler tasks may eventually fail with out-of-memory errors.
2. Cause
Data skew generally occurs in shuffle operators such as distinct, groupByKey, reduceByKey, aggregateByKey, join, and cogroup, all of which repartition the data. If one key has a disproportionately large number of records, the partition that receives it becomes skewed.

Locating the Large Keys Behind Data Skew

Randomly sample a fraction of all keys and extrapolate each key's share of the total from its share of the sample. Estimating the whole from a sample is not exact, but it is accurate enough to identify the dominant keys: for example, if one key accounts for 40% of a 10% sample, it very likely accounts for roughly 40% of the full table.
Run:

spark-submit --master yarn --deploy-mode client --driver-memory 1g --num-executors 3 --executor-cores 2 --executor-memory 6g --class com.atguigu.sparktuning.join.SampleKeyDemo spark-tuning-1.0-SNAPSHOT-jar-with-dependencies.jar

Full code:

package com.atguigu.sparktuning.join

import com.atguigu.sparktuning.utils.InitUtil
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

object SampleKeyDemo {

  def main( args: Array[String] ): Unit = {

    val sparkConf = new SparkConf().setAppName("BigJoinDemo")
      .set("spark.sql.shuffle.partitions", "36")
      .setMaster("local[*]")
    val sparkSession: SparkSession = InitUtil.initSparkSession(sparkConf)

    println("=============================================csc courseid sample=============================================")
    val cscTopKey: Array[(Int, Row)] = sampleTopKey(sparkSession,"sparktuning.course_shopping_cart","courseid")
    println(cscTopKey.mkString("\n"))

    println("=============================================sc courseid sample=============================================")
    val scTopKey: Array[(Int, Row)] = sampleTopKey(sparkSession,"sparktuning.sale_course","courseid")
    println(scTopKey.mkString("\n"))

    println("=============================================cp orderid sample=============================================")
    val cpTopKey: Array[(Int, Row)] = sampleTopKey(sparkSession,"sparktuning.course_pay","orderid")
    println(cpTopKey.mkString("\n"))

    println("=============================================csc orderid sample=============================================")
    val cscTopOrderKey: Array[(Int, Row)] = sampleTopKey(sparkSession,"sparktuning.course_shopping_cart","orderid")
    println(cscTopOrderKey.mkString("\n"))
  }


  def sampleTopKey( sparkSession: SparkSession, tableName: String, keyColumn: String ): Array[(Int, Row)] = {
    val df: DataFrame = sparkSession.sql("select " + keyColumn + " from " + tableName)
    val top10Key = df
      .select(keyColumn).sample(false, 0.1).rdd // sample 10% of the keys without replacement
      .map(k => (k, 1)).reduceByKey(_ + _)      // count how many times each sampled key appears
      .map(k => (k._2, k._1)).sortByKey(false)  // swap to (count, key) and sort descending
      .take(10)                                 // keep the 10 most frequent keys
    top10Key
  }


}
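The demo above prints raw counts per sampled key. Below is a minimal follow-up sketch (written against the same sparkSession and the sparktuning.course_shopping_cart table used above; courseid is assumed to be a Long) that turns the sampled counts into rough percentage estimates, which is usually what you compare when deciding which key is skewed:

    // Estimate each key's share of the table from a 10% sample.
    val sampled = sparkSession.sql("select courseid from sparktuning.course_shopping_cart")
      .sample(false, 0.1).rdd
      .map(row => (row.getAs[Long]("courseid"), 1L))
      .reduceByKey(_ + _)
      .cache()

    val sampleTotal = sampled.map(_._2).sum() // total number of sampled rows

    // A key's share of the sample approximates its share of the full table,
    // so the keys with the largest shares are the skew candidates.
    sampled
      .map { case (key, cnt) => (key, cnt.toDouble / sampleTotal) }
      .sortBy(_._2, ascending = false)
      .take(5)
      .foreach { case (key, frac) => println(key + " -> about " + (frac * 100) + "% of the table (estimated)") }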

Single-Table Data Skew Optimization

To reduce the amount of shuffled data and the pressure on the reduce side, Spark SQL normally performs a partial aggregate on the map side (often called pre-aggregation): records with the same key within a partition are aggregated before the shuffle, and only those partial results are shuffled to the reduce side for the final merge, similar to a Combiner in MapReduce. This is why HashAggregate nodes usually appear in pairs in the execution plan.
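A small, self-contained sketch (toy data, not from the project) that makes the paired HashAggregate nodes visible in the physical plan:

import org.apache.spark.sql.SparkSession

object PartialAggPlanDemo {
  def main( args: Array[String] ): Unit = {
    val spark = SparkSession.builder()
      .appName("PartialAggPlanDemo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, 10.0), (1, 20.0), (2, 5.0)).toDF("courseid", "sellmoney")

    // The printed physical plan shows two HashAggregate nodes around the Exchange:
    // the first is the map-side partial aggregate, the second the final merge.
    df.groupBy("courseid").sum("sellmoney").explain()

    spark.stop()
  }
}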
1. When it applies
Aggregation-style shuffle operations where a few keys carry far more data than the rest, and the records of those large keys are spread across many input splits.
2. Approach
Two-stage aggregation: salt the keys for a local (partial) aggregation, then strip the salt for the global aggregation. (A DataFrame-API sketch of the same idea follows the demo code below.)
3. Demo

spark-submit --master yarn --deploy-mode client --driver-memory 1g --num-executors 3 --executor-cores 2 --executor-memory 6g --class com.atguigu.sparktuning.skew.SkewAggregationTuning spark-tuning-1.0-SNAPSHOT-jar-with-dependencies.jar

Full code:

package com.atguigu.sparktuning.skew

import java.util.Random

import com.atguigu.sparktuning.utils.InitUtil
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object SkewAggregationTuning {
  def main( args: Array[String] ): Unit = {

    val sparkConf = new SparkConf().setAppName("SkewAggregationTuning")
      .set("spark.sql.shuffle.partitions", "36")
//      .setMaster("local[*]")
    val sparkSession: SparkSession = InitUtil.initSparkSession(sparkConf)

    // random_prefix(key, n) salts a key with a random integer in [0, n);
    // remove_random_prefix strips that salt again after the first aggregation.
    sparkSession.udf.register("random_prefix", ( value: Int, num: Int ) => randomPrefixUDF(value, num))
    sparkSession.udf.register("remove_random_prefix", ( value: String ) => removeRandomPrefixUDF(value))


    val sql1 =
      """
        |select
        |  courseid,
        |  sum(course_sell) totalSell
        |from
        |  (
        |    select
        |      remove_random_prefix(random_courseid) courseid,
        |      course_sell
        |    from
        |      (
        |        select
        |          random_courseid,
        |          sum(sellmoney) course_sell
        |        from
        |          (
        |            select
        |              random_prefix(courseid, 6) random_courseid,
        |              sellmoney
        |            from
        |              sparktuning.course_shopping_cart
        |          ) t1
        |        group by random_courseid
        |      ) t2
        |  ) t3
        |group by
        |  courseid
      """.stripMargin


    // sql2 is the straightforward, un-salted aggregation, kept for comparison.
    val sql2=
      """
        |select
        |  courseid,
        |  sum(sellmoney)
        |from sparktuning.course_shopping_cart
        |group by courseid
      """.stripMargin

    sparkSession.sql(sql1).show(10000)


//    while(true){}
  }


  // Prepends a random integer in [0, num) to the key, e.g. 101 -> "3_101".
  def randomPrefixUDF( value: Int, num: Int ): String = {
    new Random().nextInt(num).toString + "_" + value
  }

  // Strips the random prefix again, e.g. "3_101" -> "101".
  def removeRandomPrefixUDF( value: String ): String = {
    value.split("_")(1)
  }
}
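The same two-stage idea can also be expressed directly in the DataFrame API instead of SQL UDFs. The following is a sketch, not part of the original project; it assumes the sparkSession from the demo above and the sparktuning.course_shopping_cart table:

    import org.apache.spark.sql.functions._

    // Stage 1: salt each courseid with a random prefix in [0, 6) and aggregate the salted keys.
    val salted = sparkSession.table("sparktuning.course_shopping_cart")
      .withColumn("salted_courseid",
        concat(floor(rand() * 6).cast("string"), lit("_"), col("courseid").cast("string")))

    val partialAgg = salted
      .groupBy("salted_courseid")
      .agg(sum(col("sellmoney").cast("double")).as("partial_sell"))

    // Stage 2: strip the salt and aggregate the partial sums up to the real courseid.
    val result = partialAgg
      .withColumn("courseid", split(col("salted_courseid"), "_").getItem(1))
      .groupBy("courseid")
      .agg(sum("partial_sell").as("totalSell"))

    result.show(20)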

Join Data Skew Optimization

Broadcast Join

1. When it applies
A small table joined with a large table: the small table must be small enough to be loaded onto the Driver and broadcast to every Executor.
2. Approach
If the join between the small table and the large table is skewed, a broadcast join sidesteps the problem entirely by eliminating the shuffle stage (and with it the skewed stage). Broadcast join is also the most commonly used join optimization in Spark SQL. (A sketch using an explicit broadcast hint follows the demo code below.)
3. Demo
The PartitionTuning case in 2.2.2 had broadcast join disabled, and the data skew can be observed there.

spark-submit --master yarn --deploy-mode client --driver-memory 1g --num-executors 3 --executor-cores 2 --executor-memory 6g --class com.atguigu.sparktuning.skew.SkewMapJoinTuning spark-tuning-1.0-SNAPSHOT-jar-with-dependencies.jar

Full code:

package com.atguigu.sparktuning.skew

import com.atguigu.sparktuning.utils.InitUtil
import org.apache.spark.SparkConf
import org.apache.spark.sql.{SaveMode, SparkSession}

object SkewMapJoinTuning {
  def main( args: Array[String] ): Unit = {
    val sparkConf = new SparkConf().setAppName("SkewMapJoinTuning")
      .set("spark.sql.autoBroadcastJoinThreshold", "10m")
//      .setMaster("local[*]")
      .set("spark.sql.shuffle.partitions", "36")
    val sparkSession: SparkSession = InitUtil.initSparkSession(sparkConf)

    // Read the three tables, join them, and insert the result into the final table
    val saleCourse = sparkSession.sql("select * from sparktuning.sale_course")
    val coursePay = sparkSession.sql("select * from sparktuning.course_pay")
      .withColumnRenamed("discount", "pay_discount")
      .withColumnRenamed("createtime", "pay_createtime")
    val courseShoppingCart = sparkSession.sql("select * from sparktuning.course_shopping_cart")
      .drop("coursename")
      .withColumnRenamed("discount", "cart_discount")
      .withColumnRenamed("createtime", "cart_createtime")


    saleCourse
      .join(courseShoppingCart, Seq("courseid", "dt", "dn"), "right")
      .join(coursePay, Seq("orderid", "dt", "dn"), "left")
      .select("courseid", "coursename", "status", "pointlistid", "majorid", "chapterid", "chaptername", "edusubjectid"
        , "edusubjectname", "teacherid", "teachername", "coursemanager", "money", "orderid", "cart_discount", "sellmoney",
        "cart_createtime", "pay_discount", "paymoney", "pay_createtime", "dt", "dn")
      .write.mode(SaveMode.Overwrite).saveAsTable("sparktuning.salecourse_detail")
  }

}
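The demo above triggers the broadcast by configuring spark.sql.autoBroadcastJoinThreshold. Below is a minimal sketch (not part of the original project, reusing the saleCourse and courseShoppingCart DataFrames defined in the demo) of the explicit broadcast hint, which forces a broadcast join regardless of the threshold:

    import org.apache.spark.sql.functions.broadcast

    // Explicitly mark the small table; the optimizer then plans a BroadcastHashJoin,
    // so the shuffle stage (and with it the skew) never happens.
    val joined = courseShoppingCart
      .join(broadcast(saleCourse), Seq("courseid", "dt", "dn"))

    joined.explain() // the physical plan should show BroadcastHashJoin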





/* An alternative SQL formulation of the same three-table join, left commented out in the source:
    val sqlstr =
      """
        |select
        |  sc.courseid,
        |  sc.coursename,
        |  status,
        |  pointlistid,
        |  majorid,
        |  chapterid,
        |  chaptername,
        |  edusubjectid,
        |  edusubjectname,
        |  teacherid,
        |  teachername,
        |  coursemanager,
        |  money,
        |  csc.orderid,
        |  csc.discount cart_discount,
        |  sellmoney,
        |  csc.createtime cart_createtime,
        |  cp.discount pay_discount,
        |  paymoney,
        |  cp.createtime pay_createtime,
        |  sc.dt,
        |  sc.dn
        |from course_shopping_cart csc
        |join course_pay cp
        |  on csc.orderid=cp.orderid and csc.dt=cp.dt and csc.dn=cp.dn
        |join sale_course sc
        |  on sc.courseid=csc.courseid and sc.dt=csc.dt and sc.dn=csc.dn
      """.stripMargin*/

Splitting Out Large Keys: Salt the Big Table, Expand the Small Table

1. When it applies
Joins that suffer from data skew.
2. Approach
1) Based on the sampling result, split the skewed table into two datasets: the rows whose keys are skewed (the skew table) and the rows whose keys are not (the common table).
2) Add a random prefix to every key in the skew table, and expand the other table, which has no serious skew (the old table), by taking its cartesian product with the set of prefixes (i.e. enlarge it N times, producing the new table).
3) Join the salted skew table with the expanded new table, then union the result with the join of the common table and the old table.
The implementation idea for salting the big table and expanding the small table:
1) Salting the big table is a one-record-in, one-record-out transformation: prepend a random prefix to each large key so that its records are spread across many sub-keys.
2) Expanding the small table means turning each row of the DataFrame into a collection containing one copy of the row per possible prefix (36 in the demo below, so that the salt range and the expansion factor match and every salted key finds its partner), then flattening that collection with flatMap.
3. Demo

spark-submit --master yarn --deploy-mode client --driver-memory 1g --num-executors 3 --executor-cores 2 --executor-memory 6g --class com.atguigu.sparktuning.skew.SkewJoinTuning spark-tuning-1.0-SNAPSHOT-jar-with-dependencies.jar

Full code:

package com.atguigu.sparktuning.skew

import com.atguigu.sparktuning.bean.{CourseShoppingCart, SaleCourse}
import com.atguigu.sparktuning.utils.InitUtil
import org.apache.spark.SparkConf
import org.apache.spark.sql._

import scala.collection.mutable.ArrayBuffer
import scala.util.Random

object SkewJoinTuning {
  def main( args: Array[String] ): Unit = {
    val sparkConf = new SparkConf().setAppName("SkewJoinTuning")
      .set("spark.sql.autoBroadcastJoinThreshold", "-1")
      .set("spark.sql.shuffle.partitions", "36")
      .setMaster("local[*]")
    val sparkSession: SparkSession = InitUtil.initSparkSession(sparkConf)

    scatterBigAndExpansionSmall(sparkSession)

    //    while(true){}
  }


  /**
    * Salt the big table and expand the small table to resolve the join skew.
    *
    * @param sparkSession
    */
  def scatterBigAndExpansionSmall( sparkSession: SparkSession ): Unit = {
    import sparkSession.implicits._
    val saleCourse = sparkSession.sql("select * from sparktuning.sale_course")
    val coursePay = sparkSession.sql("select * from sparktuning.course_pay")
      .withColumnRenamed("discount", "pay_discount")
      .withColumnRenamed("createtime", "pay_createtime")
    val courseShoppingCart = sparkSession.sql("select * from sparktuning.course_shopping_cart")
      .withColumnRenamed("discount", "cart_discount")
      .withColumnRenamed("createtime", "cart_createtime")

    // TODO 1. Split out the rows with skewed keys (courseid 101 and 103)
    val commonCourseShoppingCart: Dataset[Row] = courseShoppingCart.filter(item => item.getAs[Long]("courseid") != 101 && item.getAs[Long]("courseid") != 103)
    val skewCourseShoppingCart: Dataset[Row] = courseShoppingCart.filter(item => item.getAs[Long]("courseid") == 101 || item.getAs[Long]("courseid") == 103)

    //TODO 2. Salt the skewed keys: spread each one across 36 random sub-keys
    val newCourseShoppingCart = skewCourseShoppingCart.mapPartitions(( partitions: Iterator[Row] ) => {
      partitions.map(item => {
        val courseid = item.getAs[Long]("courseid")
        val randInt = Random.nextInt(36)
        CourseShoppingCart(courseid, item.getAs[String]("orderid"),
          item.getAs[String]("coursename"), item.getAs[String]("cart_discount"),
          item.getAs[String]("sellmoney"), item.getAs[String]("cart_createtime"),
          item.getAs[String]("dt"), item.getAs[String]("dn"), randInt + "_" + courseid)
      })
    })
    //TODO 3. Expand the small table 36x: one copy of each row per possible prefix
    val newSaleCourse = saleCourse.flatMap(item => {
      val list = new ArrayBuffer[SaleCourse]()
      val courseid = item.getAs[Long]("courseid")
      val coursename = item.getAs[String]("coursename")
      val status = item.getAs[String]("status")
      val pointlistid = item.getAs[Long]("pointlistid")
      val majorid = item.getAs[Long]("majorid")
      val chapterid = item.getAs[Long]("chapterid")
      val chaptername = item.getAs[String]("chaptername")
      val edusubjectid = item.getAs[Long]("edusubjectid")
      val edusubjectname = item.getAs[String]("edusubjectname")
      val teacherid = item.getAs[Long]("teacherid")
      val teachername = item.getAs[String]("teachername")
      val coursemanager = item.getAs[String]("coursemanager")
      val money = item.getAs[String]("money")
      val dt = item.getAs[String]("dt")
      val dn = item.getAs[String]("dn")
      for (i <- 0 until 36) {
        list.append(SaleCourse(courseid, coursename, status, pointlistid, majorid, chapterid, chaptername, edusubjectid,
          edusubjectname, teacherid, teachername, coursemanager, money, dt, dn, i + "_" + courseid))
      }
      list
    })

    // TODO 4. Join the salted skewed rows with the expanded small table
    val df1: DataFrame = newSaleCourse
      .join(newCourseShoppingCart.drop("courseid").drop("coursename"), Seq("rand_courseid", "dt", "dn"), "right")
      .join(coursePay, Seq("orderid", "dt", "dn"), "left")
      .select("courseid", "coursename", "status", "pointlistid", "majorid", "chapterid", "chaptername", "edusubjectid"
        , "edusubjectname", "teacherid", "teachername", "coursemanager", "money", "orderid", "cart_discount", "sellmoney",
        "cart_createtime", "pay_discount", "paymoney", "pay_createtime", "dt", "dn")


    // TODO 5. Join the non-skewed rows with the original small table
    val df2: DataFrame = saleCourse
      .join(commonCourseShoppingCart.drop("coursename"), Seq("courseid", "dt", "dn"), "right")
      .join(coursePay, Seq("orderid", "dt", "dn"), "left")
      .select("courseid", "coursename", "status", "pointlistid", "majorid", "chapterid", "chaptername", "edusubjectid"
        , "edusubjectname", "teacherid", "teachername", "coursemanager", "money", "orderid", "cart_discount", "sellmoney",
        "cart_createtime", "pay_discount", "paymoney", "pay_createtime", "dt", "dn")

    // TODO 6. Union the result of the skewed-key join with the result of the common-key join
    df1
      .union(df2)
      .write.mode(SaveMode.Overwrite).insertInto("sparktuning.salecourse_detail")
  }


}