Problem Description
Our Spark build, based on the DataSource V1 API, integrates Kudu tables so that they can be read and written directly with SQL. We currently run Kudu 1.7.0, and as the usage scenarios for Kudu tables keep multiplying, Kudu's query performance has exposed quite a few problems. Meanwhile, newer Kudu releases have added many features. For example, Kudu 1.9 supports the limit operation, and its limit is very fast: query time stays under one second and barely grows with data volume.
Today, when Spark executes a SELECT ... FROM ... LIMIT against a Kudu table, it performs a full table scan and returns every row; on the Spark side each partition applies a LocalLimit, and a final GlobalLimit then produces the limited result. The problems with this are obvious:
1. Kudu's own limit support cannot be used to speed up the query.
2. If the source Kudu table is very large, query time rises rapidly with data size, and an OOM is very likely.
So if the limit operation could be pushed down to the data source when querying Kudu, exploiting Kudu's native limit support, the query would be very fast.
In our tests, running a limit query against a Kudu table with one billion rows took roughly 30-40 minutes, which is simply intolerable.
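The two-phase limit described above can be illustrated with plain Scala collections (no Spark involved; the partition contents are made up): even though only n rows are needed, every partition is fully materialized first.

```scala
// Toy model of Spark's two-phase limit without pushdown (plain collections).
// Each inner sequence stands for one fully-scanned partition of the table.
val partitions: Seq[Seq[Int]] = Seq(1 to 1000, 1001 to 2000, 2001 to 3000)
val n = 10

// LocalLimit: each partition keeps at most n rows,
// but only after the whole partition has been read.
val localLimited = partitions.map(_.take(n))

// GlobalLimit: the driver keeps the first n rows overall.
val globalLimited = localLimited.flatten.take(n)

println(globalLimited.mkString(","))
```

The scan cost is paid for all 3000 toy rows even though ten survive; with pushdown, the source itself would stop after n rows.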
Solution
Since we are on DataSource V1, we added a variable, limit, to BaseRelation to record whether a limit operation was requested:
// Long.MaxValue means "no limit requested"
var limit: Long = Long.MaxValue
Then I added a method to the PushDownOperatorsToDataSource rule that pushes the limit down to DataSource tables:
/**
 * Created by XXXX on 2019/07/22.
 *
 * Pushes LIMIT down to data sources implemented with the DataSource V1 API,
 * i.e. cases backed by a [[BaseRelation]]. We do not push the limit down when
 * the query sorts a Kudu table, because the Kudu client does not support ORDER BY.
 *
 * @param logicalPlan the plan to transform
 * @return the plan, with the limit recorded on eligible relations
 */
private def pushDownLimitToDataSources(logicalPlan: LogicalPlan): LogicalPlan =
  logicalPlan.transformDown {
    case l @ GlobalLimit(_, localLimit: LocalLimit) =>
      val limitValue = localLimit.limitExpr match {
        case IntegerLiteral(limit) => limit
        case _ => -1
      }
      if (limitValue < 0) {
        // Not a literal limit; leave the plan unchanged.
        l
      } else {
        val newPlan: LogicalPlan = localLimit.child match {
          // A data source table with only a limit specified
          case r @ LogicalRelation(baseRelation, _, _, _) =>
            baseRelation.limit = limitValue
            r
          // A data source table with both a filter and a limit specified
          case f @ Filter(condition, LogicalRelation(baseRelation, _, _, _)) =>
            if (supportsAllFiltersAndPredicates(condition, baseRelation)) {
              baseRelation.limit = limitValue
            }
            f
          // SELECT some columns ... WHERE ... LIMIT n
          case p @ Project(_, Filter(condition, LogicalRelation(baseRelation, _, _, _))) =>
            if (supportsAllFiltersAndPredicates(condition, baseRelation)) {
              baseRelation.limit = limitValue
            }
            p
          // SELECT some columns ... LIMIT n, no WHERE clause
          case p @ Project(_, LogicalRelation(baseRelation, _, _, _)) =>
            baseRelation.limit = limitValue
            p
          case other => other
        }
        l.copy(child = localLimit.copy(child = newPlan))
      }
  }
Here pattern matching covers the several cases where limit pushdown is possible. In addition, when the DataSource also has predicates pushed down, we must check whether it supports all of those predicates; if it does not, the limit must not be pushed down in that case, otherwise the results would be incorrect.
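The helper supportsAllFiltersAndPredicates is not shown above; the idea is that every conjunct of the condition must be translatable to a source filter and fully evaluated by the source. If the source may return extra rows, cutting them off at the source's limit would drop rows that the residual Spark filter would have kept. A minimal self-contained model of that check, with toy stand-ins for the Spark and Kudu types (names and the "unindexed_col" limitation are illustrative, not the real APIs):

```scala
// Toy stand-ins for Catalyst expressions and data source filters.
sealed trait Expr
case class EqualTo(col: String, v: Int) extends Expr
case class Like(col: String, pattern: String) extends Expr

case class SourceFilter(col: String)

// Translate an expression to a source filter; None means "not convertible".
// (This toy source only understands equality, echoing Kudu's simple predicates.)
def translate(e: Expr): Option[SourceFilter] = e match {
  case EqualTo(col, _) => Some(SourceFilter(col))
  case _               => None
}

// The relation reports which pushed filters it cannot fully evaluate
// (hypothetical limitation for illustration).
def unhandledFilters(fs: Seq[SourceFilter]): Seq[SourceFilter] =
  fs.filter(_.col == "unindexed_col")

// The limit may only be pushed when every conjunct is both convertible
// and fully handled by the source; otherwise the source would return
// rows that still need filtering, and a source-side limit would be wrong.
def supportsAllFiltersAndPredicates(conjuncts: Seq[Expr]): Boolean = {
  val translated = conjuncts.map(translate)
  translated.forall(_.isDefined) &&
    unhandledFilters(translated.flatten).isEmpty
}

println(supportsAllFiltersAndPredicates(Seq(EqualTo("id", 1))))                     // true
println(supportsAllFiltersAndPredicates(Seq(EqualTo("id", 1), Like("name", "a%")))) // false
```

In real Spark the same check can lean on DataSourceStrategy.translateFilter and the relation's unhandledFilters contract.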
We add it at the end of the rule:
override def apply(plan: LogicalPlan): LogicalPlan = {
  // Note that, we need to collect the target operator along with PROJECT node, as PROJECT may
  // appear in many places for column pruning.
  // TODO: Ideally column pruning should be implemented via a plan property that is propagated
  // top-down, then we can simplify the logic here and only collect target operators.
  val filterPushed = plan transformUp {
    case FilterAndProject(fields, condition, r @ DataSourceV2Relation(_, reader)) =>
      val (candidates, nonDeterministic) =
        splitConjunctivePredicates(condition).partition(_.deterministic)

      val stayUpFilters: Seq[Expression] = reader match {
        case r: SupportsPushDownCatalystFilters =>
          r.pushCatalystFilters(candidates.toArray)

        case r: SupportsPushDownFilters =>
          // A map from original Catalyst expressions to corresponding translated data source
          // filters. If a predicate is not in this map, it means it cannot be pushed down.
          val translatedMap: Map[Expression, sources.Filter] = candidates.flatMap { p =>
            DataSourceStrategy.translateFilter(p).map(f => p -> f)
          }.toMap

          // Catalyst predicate expressions that cannot be converted to data source filters.
          val nonConvertiblePredicates = candidates.filterNot(translatedMap.contains)

          // Data source filters that cannot be pushed down. An unhandled filter means
          // the data source cannot guarantee the rows returned can pass the filter.
          // As a result we must return it so Spark can plan an extra filter operator.
          val unhandledFilters = r.pushFilters(translatedMap.values.toArray).toSet
          val unhandledPredicates = translatedMap.filter { case (_, f) =>
            unhandledFilters.contains(f)
          }.keys

          nonConvertiblePredicates ++ unhandledPredicates

        case _ => candidates
      }

      val filterCondition = (stayUpFilters ++ nonDeterministic).reduceLeftOption(And)
      val withFilter = filterCondition.map(Filter(_, r)).getOrElse(r)
      if (withFilter.output == fields) {
        withFilter
      } else {
        Project(fields, withFilter)
      }
  }

  // TODO: add more push down rules.
  val columnPruned = pushDownRequiredColumns(filterPushed, filterPushed.outputSet)

  // After pushing down filters, we may push down LIMIT too.
  val limitPushed = pushDownLimitToDataSources(columnPruned)

  // After column pruning, we may have redundant PROJECT nodes in the query plan, remove them.
  RemoveRedundantProject(limitPushed)
}
The second-to-last statement above performs the limit pushdown, so the limit setting is available when reading Kudu. Finally, when building the KuduRDD for the read, we simply set the limit parameter on the scanner.
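That final wiring step amounts to copying the relation's limit field onto the scanner when the scan is built. A minimal self-contained sketch of the handoff, with a toy builder standing in for the Kudu client (the real Kudu scanner builder exposes a similar limit setter, but all names here are illustrative):

```scala
// Toy relation carrying the pushed-down limit, as set by the rule above.
class LimitRelation {
  var limit: Long = Long.MaxValue // Long.MaxValue means "no limit was pushed"
}

case class ScanConfig(table: String, limit: Long)

// Toy stand-in for a scanner builder: accumulates settings, then builds.
class ToyScannerBuilder(table: String) {
  private var lim: Long = Long.MaxValue
  def limit(n: Long): ToyScannerBuilder = { lim = n; this }
  def build(): ScanConfig = ScanConfig(table, lim)
}

// When building the scan, forward the relation's limit if one was pushed,
// so the source stops reading after that many rows.
def buildScan(rel: LimitRelation): ScanConfig = {
  val b = new ToyScannerBuilder("impressions")
  if (rel.limit != Long.MaxValue) b.limit(rel.limit)
  b.build()
}

val rel = new LimitRelation
rel.limit = 100L // as recorded by pushDownLimitToDataSources
println(buildScan(rel)) // ScanConfig(impressions,100)
```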
Test Results
With the pushed-down limit, querying the one-billion-row Kudu table now takes under one second, where it previously took more than half an hour: a performance improvement of well over 1000x. And the larger the data grows, the bigger the gain.
Summary
Recently I wanted to write up the problems I have run into at work, and when I revisited this code I noticed a weakness in the approach. I had not been working long at the time and had not yet formed good habits around code style; looking at it now, bolting a mutable variable onto a trait is quite inelegant. A cleaner way is to add an abstract class that declares the variable and have BaseRelation extend it, which reads much better. After all, defining mutable state on an interface does not fit conventional programming practice.
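The cleaner design mentioned above is a small change; a sketch, with illustrative class and field names:

```scala
// Hoist the mutable pushdown state into an abstract class instead of the trait.
abstract class LimitAwareRelation {
  // Long.MaxValue means "no limit was pushed down".
  var limit: Long = Long.MaxValue
}

// BaseRelation (here, a toy relation standing in for it) then extends the
// abstract class, keeping the trait itself free of mutable state.
class KuduToyRelation(val tableName: String) extends LimitAwareRelation

val r = new KuduToyRelation("metrics")
r.limit = 50L
println(s"${r.tableName}: ${r.limit}") // metrics: 50
```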
Still, this feature gave me a deeper understanding of Spark performance optimization, was a great hands-on exercise, and earned me positive recognition at work. All in all, a good experience.