Spark Bloom Filter Join: BloomFilterAggregate (Part 2)


1.4 Trigger Conditions

The trigger conditions as written in the design doc:

  1. The small table is in a broadcast join (questionable)
  2. The small table side has a filter
  3. The small table plan has the shape Scan (-> Project) -> Filter; otherwise the added dependency may lengthen the query time
  4. The small table side is deterministic
  5. The big table side has a shuffle, so the small side can ship the bloom filter result through the shuffle
  6. No DPP has been applied on the join column
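To make these conditions concrete, here is a hedged sketch (the SparkSession `spark` and the table names `fact`/`dim` are made up for illustration, not from the source) of a query shape that can satisfy them: a selective filter on the small side and a shuffle on the large side:

```scala
// Assumes a running SparkSession `spark` with a large table `fact`
// and a small table `dim`; names are illustrative only.
spark.conf.set("spark.sql.optimizer.runtime.bloomFilter.enabled", "true")

val df = spark.sql("""
  SELECT f.*
  FROM fact f
  JOIN dim d ON f.join_key = d.join_key  -- plain column keys, no UDF, no DPP
  WHERE d.category = 'A'                 -- selective filter on the small side
""")
df.explain(true)  // the optimized plan should show the injected bloom filter condition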

2 InjectRuntimeFilter

InjectRuntimeFilter is the optimizer rule class for this in the Spark source. It runs only once (the difference between FixedPoint(1) and Once is that Once additionally enforces idempotence):

Batch("InjectRuntimeFilter", FixedPoint(1),
  InjectRuntimeFilter) :+

apply defines the overall flow of the rule, starting with two guard checks:

// Correlated subqueries are not supported: their result depends on the outer query
case s: Subquery if s.correlated => plan
// Skip unless at least one of the feature flags is enabled
case _ if !conf.runtimeFilterSemiJoinReductionEnabled &&
  !conf.runtimeFilterBloomFilterEnabled => plan
case _ =>
  // Apply the rule and try to inject runtime filters
  val newPlan = tryInjectRuntimeFilter(plan)
  // If semi-join reduction is enabled and the plan changed, rewrite the
  // injected IN subqueries into semi/anti joins; otherwise return as-is
  if (conf.runtimeFilterSemiJoinReductionEnabled && !plan.fastEquals(newPlan)) {
    RewritePredicateSubquery(newPlan)
  } else {
    newPlan
  }

The relevant configs are shown below; by default the bloom filter is enabled and semi-join reduction is disabled:

val RUNTIME_FILTER_SEMI_JOIN_REDUCTION_ENABLED =
  buildConf("spark.sql.optimizer.runtimeFilter.semiJoinReduction.enabled")
    .doc("When true and if one side of a shuffle join has a selective predicate, we attempt " +
      "to insert a semi join in the other side to reduce the amount of shuffle data.")
    .version("3.3.0")
    .booleanConf
    .createWithDefault(false)

val RUNTIME_BLOOM_FILTER_ENABLED =
  buildConf("spark.sql.optimizer.runtime.bloomFilter.enabled")
    .doc("When true and if one side of a shuffle join has a selective predicate, we attempt " +
      "to insert a bloom filter in the other side to reduce the amount of shuffle data.")
    .version("3.3.0")
    .booleanConf
    .createWithDefault(true)

2.1 tryInjectRuntimeFilter

tryInjectRuntimeFilter is the core processing flow that attempts to apply the runtime filter. The full code:

private def tryInjectRuntimeFilter(plan: LogicalPlan): LogicalPlan = {
  var filterCounter = 0
  val numFilterThreshold = conf.getConf(SQLConf.RUNTIME_FILTER_NUMBER_THRESHOLD)
  plan transformUp {
    case join @ ExtractEquiJoinKeys(joinType, leftKeys, rightKeys, _, _, left, right, hint) =>
      var newLeft = left
      var newRight = right
      (leftKeys, rightKeys).zipped.foreach((l, r) => {
        // Check if:
        // 1. There is already a DPP filter on the key
        // 2. There is already a runtime filter (Bloom filter or IN subquery) on the key
        // 3. The keys are simple cheap expressions
        if (filterCounter < numFilterThreshold &&
          !hasDynamicPruningSubquery(left, right, l, r) &&
          !hasRuntimeFilter(newLeft, newRight, l, r) &&
          isSimpleExpression(l) && isSimpleExpression(r)) {
          val oldLeft = newLeft
          val oldRight = newRight
          if (canPruneLeft(joinType) && filteringHasBenefit(left, right, l, hint)) {
            newLeft = injectFilter(l, newLeft, r, right)
          }
          // Did we actually inject on the left? If not, try on the right
          if (newLeft.fastEquals(oldLeft) && canPruneRight(joinType) &&
            filteringHasBenefit(right, left, r, hint)) {
            newRight = injectFilter(r, newRight, l, left)
          }
          if (!newLeft.fastEquals(oldLeft) || !newRight.fastEquals(oldRight)) {
            filterCounter = filterCounter + 1
          }
        }
      })
      join.withNewChildren(Seq(newLeft, newRight))
  }
}

There are many condition checks along the way. The basic conditions for applying a runtime filter:

  1. The number of injected runtime filters has not exceeded the threshold (default 10)
  2. The equi-join key has no DPP filter and no runtime filter on it already
  3. The equi-join key is a simple expression (i.e., not wrapped in a UDF or similar)

It then decides, based on the following conditions, whether to apply the runtime filter to the left or the right subtree:

  1. The join type allows pruning that side (e.g., RightOuter can only prune the left subtree)
  2. The application side can push the filter down through joins, aggregates, and windows
  3. The creation side has a filter predicate
  4. The current join is a shuffle join, or a broadcast join whose substructure contains a shuffle
  5. The application side scans more data than the threshold (default 10 GB)

The configs for the two thresholds mentioned above:

val RUNTIME_FILTER_NUMBER_THRESHOLD =
  buildConf("spark.sql.optimizer.runtimeFilter.number.threshold")
    .doc("The total number of injected runtime filters (non-DPP) for a single " +
      "query. This is to prevent driver OOMs with too many Bloom filters.")
    .version("3.3.0")
    .intConf
    .checkValue(threshold => threshold >= 0, "The threshold should be >= 0")
    .createWithDefault(10)

val RUNTIME_BLOOM_FILTER_APPLICATION_SIDE_SCAN_SIZE_THRESHOLD =
  buildConf("spark.sql.optimizer.runtime.bloomFilter.applicationSideScanSizeThreshold")
    .doc("Byte size threshold of the Bloom filter application side plan's aggregated scan " +
      "size. Aggregated scan byte size of the Bloom filter application side needs to be over " +
      "this value to inject a bloom filter.")
    .version("3.3.0")
    .bytesConf(ByteUnit.BYTE)
    .createWithDefaultString("10GB")

2.2 injectFilter

injectFilter is where the runtime filter rule is actually applied. Here the bloom filter and the semi join are mutually exclusive; only one of them can run:

if (conf.runtimeFilterBloomFilterEnabled) {
  injectBloomFilter(
    filterApplicationSideExp,
    filterApplicationSidePlan,
    filterCreationSideExp,
    filterCreationSidePlan
  )
} else {
  injectInSubqueryFilter(
    filterApplicationSideExp,
    filterApplicationSidePlan,
    filterCreationSideExp,
    filterCreationSidePlan
  )
}

2.3 injectBloomFilter

2.3.1 Execution Conditions

First, a check: the creation side's data must not exceed a threshold (a large creation side drives up the bloom filter's false positive rate, making the filter ineffective):

// Skip if the filter creation side is too big
if (filterCreationSidePlan.stats.sizeInBytes > conf.runtimeFilterCreationSideThreshold) {
  return filterApplicationSidePlan
}

The threshold defaults to 10 MB:

val RUNTIME_BLOOM_FILTER_CREATION_SIDE_THRESHOLD =
  buildConf("spark.sql.optimizer.runtime.bloomFilter.creationSideThreshold")
    .doc("Size threshold of the bloom filter creation side plan. Estimated size needs to be " +
      "under this value to try to inject bloom filter.")
    .version("3.3.0")
    .bytesConf(ByteUnit.BYTE)
    .createWithDefaultString("10MB")

The creation side's size is an estimate, obtained via LogicalPlanStats, a property of LogicalPlan. The code path depends on whether CBO is enabled; the exact estimation method is left for further study:

def stats: Statistics = statsCache.getOrElse {
  if (conf.cboEnabled) {
    statsCache = Option(BasicStatsPlanVisitor.visit(self))
  } else {
    statsCache = Option(SizeInBytesOnlyStatsPlanVisitor.visit(self))
  }
  statsCache.get
}

2.3.2 Creating the Creation-Side Aggregate

This creates BloomFilterAggregate, a bloom filter aggregate function. It is a subclass of AggregateFunction and therefore an Expression. Depending on whether the statistics contain a row count, different arguments are passed:

val rowCount = filterCreationSidePlan.stats.rowCount
val bloomFilterAgg =
  if (rowCount.isDefined && rowCount.get.longValue > 0L) {
    new BloomFilterAggregate(new XxHash64(Seq(filterCreationSideExp)), rowCount.get.longValue)
  } else {
    new BloomFilterAggregate(new XxHash64(Seq(filterCreationSideExp)))
  }

2.3.3 Creating the Application-Side Filter Condition

As described in 1.3, this step turns the bloom filter built on the creation side in the previous section into a filter condition on the application side.
  Alias simply assigns a name; ColumnPruning prunes columns so that columns not needed later are not read; ConstantFolding folds constants; ScalarSubquery is a scalar subquery, whose result is a single row and column (one value).
  BloomFilterMightContain is an internal scalar function that checks whether a value might be contained in the bloom filter; it extends Predicate and returns a boolean.

val alias = Alias(bloomFilterAgg.toAggregateExpression(), "bloomFilter")()
val aggregate =
  ConstantFolding(ColumnPruning(Aggregate(Nil, Seq(alias), filterCreationSidePlan)))
val bloomFilterSubquery = ScalarSubquery(aggregate, Nil)
val filter = BloomFilterMightContain(bloomFilterSubquery,
  new XxHash64(Seq(filterApplicationSideExp)))

The end result adds a Filter on top of the original application-side plan tree; the returned value is:

Filter(filter, filterApplicationSidePlan)
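The contract of BloomFilterMightContain can be illustrated with a self-contained toy bloom filter (a simplified stand-in for illustration, not Spark's actual implementation): a false answer is definitive, while a true answer may be a false positive:

```scala
import scala.util.hashing.MurmurHash3

// Toy bloom filter, for illustration only.
class ToyBloomFilter(numBits: Int, numHashes: Int) {
  private val bits = new java.util.BitSet(numBits)

  // Derive one bit position per hash seed for the given item.
  private def positions(item: Long): Seq[Int] =
    (1 to numHashes).map { seed =>
      Math.floorMod(MurmurHash3.stringHash(item.toString, seed), numBits)
    }

  def put(item: Long): Unit = positions(item).foreach(i => bits.set(i))

  // Never a false negative: every inserted item answers true.
  def mightContain(item: Long): Boolean = positions(item).forall(i => bits.get(i))
}
```

An item that was never inserted can still answer true (a false positive), which is why the injected Filter can only prune rows and never change query results.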

2.4 injectInSubqueryFilter

injectInSubqueryFilter follows roughly the same flow as injectBloomFilter; the difference is that the filter condition generated on the application side becomes an IN predicate:

val actualFilterKeyExpr = mayWrapWithHash(filterCreationSideExp)
val alias = Alias(actualFilterKeyExpr, actualFilterKeyExpr.toString)()
val aggregate =
  ColumnPruning(Aggregate(Seq(filterCreationSideExp), Seq(alias), filterCreationSidePlan))
if (!canBroadcastBySize(aggregate, conf)) {
  // Skip the InSubquery filter if the size of `aggregate` is beyond broadcast join threshold,
  // i.e., the semi-join will be a shuffled join, which is not worthwhile.
  return filterApplicationSidePlan
}
val filter = InSubquery(Seq(mayWrapWithHash(filterApplicationSideExp)),
  ListQuery(aggregate, childOutputs = aggregate.output))
Filter(filter, filterApplicationSidePlan)

A small optimization here is mayWrapWithHash: when the data type is wider than an int, the value is hashed first:

// Wraps `expr` with a hash function if its byte size is larger than an integer.
private def mayWrapWithHash(expr: Expression): Expression = {
  if (expr.dataType.defaultSize > IntegerType.defaultSize) {
    new Murmur3Hash(Seq(expr))
  } else {
    expr
  }
}
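As a minimal sketch of that decision (the byte sizes below are assumptions mirroring what I'd expect from Spark's DataType.defaultSize, not copied from the source): any type wider than a 4-byte int is replaced by its 4-byte Murmur3 hash, so the values flowing into the IN list stay small:

```scala
object MayWrapWithHashSketch {
  // Assumed default byte sizes, mirroring Spark's DataType.defaultSize.
  val intSize = 4
  val typeSizes = Map("IntegerType" -> 4, "LongType" -> 8, "StringType" -> 20)

  // Mirrors the check in mayWrapWithHash: hash anything wider than an int.
  def wrapsWithHash(dataType: String): Boolean = typeSizes(dataType) > intSize
}
```

Since Murmur3Hash returns an int, both sides of the IN predicate are wrapped the same way, so the comparison stays consistent.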

3 BloomFilterAggregate

The class has three core parameters:

  1. child: the child expression, i.e., the XxHash64 passed in by InjectRuntimeFilter; the data appears to be hashed to a long by XxHash64 before being put into the bloom filter
  2. estimatedNumItemsExpression: the estimated number of items; if InjectRuntimeFilter obtained no statistics, the configured default is used
  3. numBitsExpression: the number of bits to use

case class BloomFilterAggregate(
    child: Expression,
    estimatedNumItemsExpression: Expression,
    numBitsExpression: Expression,

The configs behind estimatedNumItemsExpression and numBitsExpression:

val RUNTIME_BLOOM_FILTER_EXPECTED_NUM_ITEMS =
  buildConf("spark.sql.optimizer.runtime.bloomFilter.expectedNumItems")
    .doc("The default number of expected items for the runtime bloomfilter")
    .version("3.3.0")
    .longConf
    .createWithDefault(1000000L)

val RUNTIME_BLOOM_FILTER_NUM_BITS =
  buildConf("spark.sql.optimizer.runtime.bloomFilter.numBits")
    .doc("The default number of bits to use for the runtime bloom filter")
    .version("3.3.0")
    .longConf
    .createWithDefault(8388608L)
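To put these defaults in perspective: 8,388,608 bits for 1,000,000 expected items is about 8.4 bits per item. Standard bloom filter math (a sketch under the usual uniform-hashing assumption, not Spark's exact sizing code) estimates the optimal hash count and false positive rate:

```scala
object BloomFilterSizing {
  // Optimal number of hash functions for m bits and n items: k = (m / n) * ln 2.
  def optimalK(m: Long, n: Long): Int =
    math.max(1, math.round((m.toDouble / n) * math.log(2)).toInt)

  // Classic false positive estimate: (1 - e^(-k * n / m))^k.
  def falsePositiveRate(m: Long, n: Long, k: Int): Double =
    math.pow(1 - math.exp(-k.toDouble * n / m), k)
}
```

With the defaults (m = 8388608, n = 1000000) this yields k = 6 and a false positive rate of roughly 2%. If the creation side actually holds far more rows than estimated, the rate degrades quickly, which matches the rationale behind the creation-side size threshold in 2.3.1.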
