Loading a Pretrained ONNX Model

Notes on the official TVM tutorial for loading a model from a pretrained ONNX file.

The tutorial comes from: https://docs.tvm.ai/tutorials/frontend/from_onnx.html#sphx-glr-tutorials-frontend-from-onnx-py

First, I made a few changes to the tutorial, since some of its steps are unnecessary. For example, there is no need to download the image and the model from the internet every time, so super_resolution.onnx and cat.png were downloaded in advance into the same directory as the script.
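
If you do want to keep a download step, a minimal sketch (not part of the original post) is to fall back to download_testdata only when the files are not already sitting next to the script; the URLs are the same ones left commented out in the listing below:

import os
from tvm.contrib.download import download_testdata

model_url = ''.join(['https://gist.github.com/zhreshold/',
                     'bcda4716699ac97ea44f791c24310193/raw/',
                     '93672b029103648953c4e5ad3ac3aadf346a4cdc/',
                     'super_resolution_0.2.onnx'])
img_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'

# Prefer local copies next to the script; otherwise download into TVM's test-data cache.
model_path = ('super_resolution.onnx' if os.path.exists('super_resolution.onnx')
              else download_testdata(model_url, 'super_resolution.onnx', module='onnx'))
img_path = ('cat.png' if os.path.exists('cat.png')
            else download_testdata(img_url, 'cat.png', module='data'))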

Also, note that the latest version of TVM no longer supports Python 2.7. I did not build LLVM, so I switched my configuration to CUDA, which shows up at lines 24 and 32 of the listing below.

 1 import onnx
 2 import numpy as np
 3 import tvm
 4 import tvm.relay as relay
 5 # from tvm.contrib.download import download_testdata
 6 
 7 # model_url = ''.join(['https://gist.github.com/zhreshold/',
 8 #                      'bcda4716699ac97ea44f791c24310193/raw/',
 9 #                      '93672b029103648953c4e5ad3ac3aadf346a4cdc/',
10 #                      'super_resolution_0.2.onnx'])
11 # model_path = download_testdata(model_url, 'super_resolution.onnx', module='onnx')
12 # now you have super_resolution.onnx on disk
13 onnx_model = onnx.load('super_resolution.onnx')
14 
15 from PIL import Image
16 # img_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
17 # img_path = download_testdata(img_url, 'cat.png', module='data')
18 img_path = 'cat.png'
19 img = Image.open(img_path).resize((224, 224))
20 img_ycbcr = img.convert("YCbCr")  # convert to YCbCr
21 img_y, img_cb, img_cr = img_ycbcr.split()
22 x = np.array(img_y)[np.newaxis, np.newaxis, :, :]
23 
24 target = 'cuda'
25 
26 input_name = '1'
27 shape_dict = {input_name: x.shape}
28 sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
29 print(sym)
30 
31 with relay.build_config(opt_level=1):
32     intrp = relay.build_module.create_executor('graph', sym, tvm.gpu(0), target)
33 
34 dtype = 'float32'
35 tvm_output = intrp.evaluate(sym)(tvm.nd.array(x.astype(dtype)), **params).asnumpy()
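
For reference, if your TVM build does include LLVM, the original tutorial runs on the CPU instead. Roughly (a sketch, with everything else unchanged), lines 24 and 32 become:

target = 'llvm'    # CPU target; requires TVM built with LLVM
intrp = relay.build_module.create_executor('graph', sym, tvm.cpu(0), target)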

Line 28 calls from_onnx, the function that loads the model from the ONNX file.

The official documentation describes it as: tvm.relay.frontend.from_onnx(model, shape=None, dtype='float32')

Convert an ONNX model into an equivalent Relay Function.

ONNX graphs are represented as Python Protobuf objects. The companion parameters will be handled automatically. However, the input names from the ONNX graph are vague, mixing inputs and network weights/biases, such as “1”, “2”… For convenience, we rename the real input names to “input_0”, “input_1”… and the parameters to “param_0”, “param_1”…

Parameters:
  • model (protobuf object) – ONNX ModelProto after ONNX v1.1.0
  • shape (dict of str to tuple, optional) – The input shape to the graph
  • dtype (str or dict of str to str) – The input types to the graph
Returns:
  • sym (tvm.relay.expr.Function) – Compatible relay function
  • params (dict of str to tvm.NDArray) – The parameter dict to be used by relay
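
A quick way to poke at these two return values (a sketch based on the listing above; sym and params are the variables produced at line 28):

print(type(sym))                  # a tvm.relay.expr.Function, the graph-level IR shown below
for name, arr in params.items():  # values are tvm.nd.NDArray weights/biases keyed by name
    print(name, arr.shape, arr.dtype)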

Looking at the return values, sym is a Relay Function. Adding a print(sym) afterwards (line 29 of the listing) prints the graph-level IR:

fn (%v1: Tensor[(1, 1, 224, 224), float32], %v2: Tensor[(64, 1, 5, 5), float32], %v3: Tensor[(64,), float32], %v4: Tensor[(64, 64, 3, 3), float32], %v5: Tensor[(64,), float32], %v6: Tensor[(32, 64, 3, 3), float32], %v7: Tensor[(32,), float32], %v8: Tensor[(9, 32, 3, 3), float32], %v9: Tensor[(9,), float32]) {
  %0 = nn.conv2d(%v1, %v2, padding=[2, 2], kernel_size=[5, 5])
  %1 = expand_dims(%v3, axis=1, num_newaxis=2)
  %2 = add(%0, %1)
  %3 = nn.relu(%2)
  %4 = nn.conv2d(%3, %v4, padding=[1, 1], kernel_size=[3, 3])
  %5 = expand_dims(%v5, axis=1, num_newaxis=2)
  %6 = add(%4, %5)
  %7 = nn.relu(%6)
  %8 = nn.conv2d(%7, %v6, padding=[1, 1], kernel_size=[3, 3])
  %9 = expand_dims(%v7, axis=1, num_newaxis=2)
  %10 = add(%8, %9)
  %11 = nn.relu(%10)
  %12 = nn.conv2d(%11, %v8, padding=[1, 1], kernel_size=[3, 3])
  %13 = expand_dims(%v9, axis=1, num_newaxis=2)
  %14 = add(%12, %13)
  %15 = reshape(%14, newshape=[1, 1, 3, 3, 224, 224])
  %16 = transpose(%15, axes=[0, 1, 4, 2, 5, 3])
  reshape(%16, newshape=[1, 1, 672, 672])
}
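
The final reshape to [1, 1, 672, 672] is the 3x-upscaled luminance (Y) channel. To turn tvm_output back into a viewable image, the official tutorial recombines it with bicubic-upscaled Cb/Cr channels; roughly (a sketch reusing img_cb and img_cr from line 21 of the listing, the output filename is just illustrative):

out_y = Image.fromarray(np.uint8(tvm_output[0, 0].clip(0, 255)), mode='L')  # 672x672 Y channel
out_cb = img_cb.resize(out_y.size, Image.BICUBIC)                           # upscale chroma
out_cr = img_cr.resize(out_y.size, Image.BICUBIC)
result = Image.merge('YCbCr', [out_y, out_cb, out_cr]).convert('RGB')
result.save('cat_superres.png')                                             # illustrative filename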


Reposted from: https://www.cnblogs.com/jourluohua/p/10892811.html
