Creating a dialect that closely models the semantics of the input language enables analyses, transformations, and optimizations in MLIR that require high-level language information and would otherwise be performed on the AST. For example, Clang has a fairly heavyweight mechanism for handling C++ template instantiation.
We divide compiler transformations into two categories: local and global. In this chapter, we focus on using the Toy dialect and its high-level semantics to perform local pattern-match transformations that would be difficult to do in LLVM. For this we use MLIR's generic DAG rewriter.
There are two methods that can be used to implement pattern-match transformations: 1. imperative, C++ pattern matching and rewriting; 2. declarative, rule-based pattern matching and rewriting using table-driven Declarative Rewrite Rules (DRR). Note that the use of DRR requires that the operations be defined using ODS.
Optimizing Transpose with C++-Style Pattern Matching and Rewriting
Let's start with a simple example: eliminating a sequence of two consecutive transpose operations that cancel out, i.e. transpose(transpose(x)) -> x. Here is the corresponding Toy code:
def transpose_transpose(x) {
  return transpose(transpose(x));
}
The corresponding IR is:
func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
  %0 = toy.transpose(%arg0 : tensor<*xf64>) to tensor<*xf64>
  %1 = toy.transpose(%0 : tensor<*xf64>) to tensor<*xf64>
  toy.return %1 : tensor<*xf64>
}
This is a good example of a transformation that is trivial to match on the Toy IR but would be quite hard for LLVM to figure out. For example, Clang cannot optimize away the temporary array in the following C++ code, which computes the transpose with simple loops:
#define N 100
#define M 100

void sink(void *);
void double_transpose(int A[N][M]) {
  int B[M][N];
  for (int i = 0; i < N; ++i) {
    for (int j = 0; j < M; ++j) {
      B[j][i] = A[i][j];
    }
  }
  for (int i = 0; i < N; ++i) {
    for (int j = 0; j < M; ++j) {
      A[i][j] = B[j][i];
    }
  }
  sink(A);
}
For a simple C++ approach to pattern rewriting, involving matching a tree-like pattern in the IR and replacing it with a different set of operations, we can plug into MLIR's Canonicalizer pass by implementing a RewritePattern:
/// Fold transpose(transpose(x)) -> x
struct SimplifyRedundantTranspose : public mlir::OpRewritePattern<TransposeOp> {
  /// We register this pattern to match every toy.transpose in the IR.
  /// The "benefit" is used by the framework to order the patterns and process
  /// them in order of profitability.
  SimplifyRedundantTranspose(mlir::MLIRContext *context)
      : OpRewritePattern<TransposeOp>(context, /*benefit=*/1) {}

  /// This method attempts to match a pattern and rewrite it. The rewriter
  /// argument is the orchestrator of the sequence of rewrites. It is expected
  /// to interact with it to perform any changes to the IR from here.
  mlir::LogicalResult
  matchAndRewrite(TransposeOp op,
                  mlir::PatternRewriter &rewriter) const override {
    // Look through the input of the current transpose.
    mlir::Value transposeInput = op.getOperand();
    TransposeOp transposeInputOp = transposeInput.getDefiningOp<TransposeOp>();

    // Input defined by another transpose? If not, no match.
    if (!transposeInputOp)
      return failure();

    // Otherwise, we have a redundant transpose. Use the rewriter.
    rewriter.replaceOp(op, {transposeInputOp.getOperand()});
    return success();
  }
};
The implementation of this rewriter lives in ToyCombine.cpp. The canonicalization pass applies transformations defined by operations in a greedy, iterative manner. To ensure that the canonicalization pass applies our new transformation, we set hasCanonicalizer = 1 in the operation's ODS definition and register the pattern with the canonicalization framework:
// Register our patterns for rewrite by the Canonicalization framework.
void TransposeOp::getCanonicalizationPatterns(
    OwningRewritePatternList &results, MLIRContext *context) {
  results.insert<SimplifyRedundantTranspose>(context);
}
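The hasCanonicalizer bit is what causes ODS to declare the getCanonicalizationPatterns hook implemented above. It is set on the operation definition in the tutorial's Ops.td; a sketch of the relevant lines (other fields elided):

```tablegen
def TransposeOp : Toy_Op<"transpose"> {
  ...
  // Declares the getCanonicalizationPatterns() hook for this op.
  let hasCanonicalizer = 1;
}
```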
To add an optimization pipeline, we also need to update our main file, toyc.cpp. In MLIR, optimizations are run through a PassManager, similar to LLVM:
mlir::PassManager pm(module.getContext());
pm.addNestedPass<mlir::FuncOp>(mlir::createCanonicalizerPass());
if (mlir::failed(pm.run(*module)))
  return 4;
Run the sample program to observe the result of the pattern application:
toyc-ch3 test/Examples/Toy/Ch3/transpose_transpose.toy -emit=mlir -opt
Output:
func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
  "toy.return"(%arg0) : (tensor<*xf64>) -> ()
}
(Note: the original tutorial text says that a %0 value would still remain in this function at this point, but building and running the code from the master branch produces output with no %0.)
Optimizing Reshape with DRR
Declarative, rule-based pattern-match and rewrite (DRR) is a DAG-based rewriter that provides a table-based syntax for pattern-match and rewrite rules:
class Pattern<
    dag sourcePattern, list<dag> resultPatterns,
    list<dag> additionalConstraints = [],
    dag benefitsAdded = (addBenefit 0)>;
Similar to SimplifyRedundantTranspose, eliminating a redundant reshape is even simpler with DRR (Pat is shorthand for a Pattern with a single result pattern):
// Reshape(Reshape(x)) = Reshape(x)
def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
                                   (ReshapeOp $arg)>;
The automatically generated C++ code corresponding to each DRR pattern can be found under BUILD/tools/mlir/examples/toy/Ch3/ToyCombine.inc.
DRR also provides a way to add argument constraints for cases where a transformation is conditional on certain properties of the arguments and results. For example, a reshape is redundant and can be eliminated when the input and output shapes are identical:
def TypesAreIdentical :
    Constraint<CPred<"$0.getType() == $1.getType()">>;
def RedundantReshapeOptPattern :
    Pat<(ReshapeOp:$res $arg), (replaceWithValue $arg),
        [(TypesAreIdentical $res, $arg)]>;
Some optimizations may require additional transformations on instruction arguments. This is achieved with NativeCodeCall, which allows for more complex rewrites by calling into a C++ helper function. An example of such an optimization is FoldConstantReshape, where we optimize the reshape of a constant by reshaping the constant in place and eliminating the reshape operation:
def ReshapeConstant :
    NativeCodeCall<"$0.reshape(($1.getType()).cast<ShapedType>())">;
def FoldConstantReshapeOptPattern : Pat<
    (ReshapeOp:$res (ConstantOp $arg)),
    (ConstantOp (ReshapeConstant $arg, $res))>;
We demonstrate the reshape optimizations using the following trivial_reshape.toy:
def main() {
  var a<2,1> = [1, 2];
  var b<2,1> = a;
  var c<2,1> = b;
  print(c);
}
The corresponding IR is:
module {
  func @main() {
    %0 = toy.constant dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf64>
    %1 = toy.reshape(%0 : tensor<2xf64>) to tensor<2x1xf64>
    %2 = toy.reshape(%1 : tensor<2x1xf64>) to tensor<2x1xf64>
    %3 = toy.reshape(%2 : tensor<2x1xf64>) to tensor<2x1xf64>
    toy.print %3 : tensor<2x1xf64>
    toy.return
  }
}
Run the following command to observe the result of the optimization:
toyc-ch3 test/Examples/Toy/Ch3/trivial_reshape.toy -emit=mlir -opt
Output:
func @main() {
  %0 = "toy.constant"() {value = dense<[[1.000000e+00], [2.000000e+00]]> : tensor<2x1xf64>} : () -> tensor<2x1xf64>
  "toy.print"(%0) : (tensor<2x1xf64>) -> ()
  "toy.return"() : () -> ()
}