Horizon AI Chip Toolchain - 03 Custom Model Conversion

1. Prerequisites

The Horizon X3 toolchain Docker environment (horizon_x3_tc 1.1.6, set up in the earlier posts of this series) is assumed to be running; all commands below are executed inside that container.

2. Directory Layout

Create an 08_hjw_demo directory under the Docker mount directory (the exact location is up to you) and place the scripts and the model in it; a layout consistent with the scripts and logs below is sketched after the screenshots.

(Screenshots of the prepared directory contents in the original post.)
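For reference, a layout consistent with the paths used in the scripts, config and logs below would look roughly like this (the exact contents are an assumption; the build additionally creates model_output/ later):

08_hjw_demo/
├── calibration_data_feature/   # calibration samples referenced by hjw_demo_config.yaml
└── mapper/
    ├── 01_check.sh             # model checking script (section 4)
    ├── 02_build.sh             # model build script (section 5)
    ├── hjw_demo_config.yaml    # conversion configuration
    └── hjw_demo.onnx           # the custom ONNX model to be converted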

3. Model Visualization
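The original post illustrates the network structure with screenshots. One common way to browse an ONNX graph (an assumption here; the post does not say which viewer was used) is Netron:

# install Netron and open hjw_demo.onnx in the browser
pip install netron
netron hjw_demo.onnx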

4. Model Checking

  • 01_check.sh
#!/usr/bin/env sh

# run from the script's own directory and stop on the first error
cd $(dirname $0) || exit
set -e

model_type="onnx"
# the proto/caffe_model variable names are carried over from the Caffe sample;
# for an ONNX model both simply point at the .onnx file
proto="./hjw_demo.onnx"
caffe_model="./hjw_demo.onnx"
output="./hjw_demo_checker.log"

hb_mapper checker --model-type ${model_type} \
                  --proto ${proto} --model ${caffe_model} \
                  --output ${output}
  • Run sh 01_check.sh. In the log below, every Conv/Pool/Resize node is mapped to the BPU; only the trailing Reshape and Concat nodes fall back to the CPU.
[root@bbcad39a8264 mapper]# sh 01_check.sh 
2020-09-07 16:49:23,096 INFO Start hb_mapper....
2020-09-07 16:49:23,097 INFO hb_mapper version 1.1.6
2020-09-07 16:49:26,532 INFO generated new fontManager
2020-09-07 16:49:27,352 INFO Model type: onnx
2020-09-07 16:49:27,352 INFO output file: ./hjw_demo_checker.log
2020-09-07 16:49:27,352 INFO input names []
2020-09-07 16:49:27,353 INFO input shapes {}
2020-09-07 16:49:27,353 INFO Begin model checking....
2020-09-07 16:49:27,353 INFO The input parameter is not specified, convert with default parameters.
2020-09-07 16:49:27,353 INFO The hbdk parameter is not specified, and the submodel will be compiled with the default parameter.
2020-09-07 16:49:27,354 INFO HorizonNN version: 0.6.10
2020-09-07 16:49:27,354 INFO HBDK version: 3.10.3
2020-09-07 16:49:27,354 INFO Start to parse the onnx model.
2020-09-07 16:49:28,098 INFO ONNX model info:
ONNX IR version:  6
Opset version:    10
Input name:       data, [1, 8, 1200, 800]
2020-09-07 16:49:28,554 INFO The onnx model was parsed successfully.
2020-09-07 16:49:28,556 INFO Model input names: ['data']
2020-09-07 16:49:28,557 INFO Start to optimize the model.
2020-09-07 16:49:29,169 INFO The model was optimized successfully.
2020-09-07 16:49:29,170 INFO Start to calibrate the model.
2020-09-07 16:49:29,457 INFO End calibrate the model.
2020-09-07 16:49:29,459 INFO Start to quantize the model.
2020-09-07 16:49:33,433 INFO The model was quantized successfully.
2020-09-07 16:49:33,433 INFO Start to compile the model with march: bernoulli2.
2020-09-07 16:49:36,152 INFO Compile submodel: torch-jit-export_subgraph_0
2020-09-07 16:49:39,594 INFO hbdk-cc parameters:{'optimize-level': 'O0', 'input-layout': 'NHWC', 'output-layout': 'NHWC'}
[==================================================] 100%
2020-09-07 16:49:45,537 INFO The model was compiled successfully.
2020-09-07 16:49:45,538 INFO The node information of hybrid model:
-------------------------------------------
Node                                   Type 
-------------Start Subgraph 0-------------
Conv_0                                 BPU  
Conv_3                                 BPU  
Conv_6                                 BPU  
MaxPool_9                              BPU  
Conv_10                                BPU  
Conv_13                                BPU  
Conv_16                                BPU  
Conv_18                                BPU  
Conv_22                                BPU  
Conv_25                                BPU  
Conv_28                                BPU  
Conv_32                                BPU  
Conv_35                                BPU  
Conv_38                                BPU  
Conv_42                                BPU  
Conv_45                                BPU  
Conv_48                                BPU  
Conv_52                                BPU  
Conv_55                                BPU  
Conv_58                                BPU  
AveragePool_60                         BPU  
Conv_61                                BPU  
Conv_65                                BPU  
Conv_68                                BPU  
Conv_71                                BPU  
Conv_75                                BPU  
Conv_78                                BPU  
Conv_81                                BPU  
Conv_85                                BPU  
Conv_88                                BPU  
Conv_91                                BPU  
Conv_95                                BPU  
Conv_98                                BPU  
Conv_101                               BPU  
Conv_105                               BPU  
Conv_108                               BPU  
Conv_111                               BPU  
Conv_115                               BPU  
Conv_118                               BPU  
Conv_121                               BPU  
AveragePool_123                        BPU  
Conv_124                               BPU  
Conv_128                               BPU  
Conv_131                               BPU  
Conv_134                               BPU  
Conv_138                               BPU  
Conv_141                               BPU  
Conv_144                               BPU  
Conv_148                               BPU  
Conv_151                               BPU  
Conv_154                               BPU  
Conv_156                               BPU  
Conv_160                               BPU  
Conv_163                               BPU  
Conv_166                               BPU  
Conv_170                               BPU  
Conv_173                               BPU  
Conv_176                               BPU  
Conv_180                               BPU  
Conv_182                               BPU  
Conv_185                               BPU  
Resize_216                             BPU  
Conv_217                               BPU  
Conv_220                               BPU  
Resize_251                             BPU  
Conv_252                               BPU  
Conv_255                               BPU  
Conv_257                               BPU  
Conv_259                               BPU  
Conv_261                               BPU  
Conv_263                               BPU  
Conv_264                               BPU  
Conv_266                               BPU  
Conv_268                               BPU  
Conv_270                               BPU  
Conv_271                               BPU  
Conv_273                               BPU  
Conv_275                               BPU  
Conv_277                               BPU  
Conv_278                               BPU  
Conv_280                               BPU  
Conv_282                               BPU  
Conv_284                               BPU  
Conv_285                               BPU  
Conv_287                               BPU  
Conv_289                               BPU  
Conv_291                               BPU  
Conv_292                               BPU  
Conv_294                               BPU  
Conv_296                               BPU  
Conv_298                               BPU  
--------------End Subgraph 0--------------
Reshape_301                            CPU  
Reshape_304                            CPU  
Reshape_307                            CPU  
Reshape_310                            CPU  
Reshape_313                            CPU  
Reshape_316                            CPU  
Concat_317                             CPU  
Concat_318                             CPU  
Concat_319                             CPU  
--------------------End--------------------
2020-09-07 16:49:45,547 INFO End model checking....
[root@bbcad39a8264 mapper]# 
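If the node table is also written to the log file given by --output (an assumption; it is at least printed to the terminal as shown above), the CPU-fallback nodes can be listed quickly:

# list the nodes that the checker assigned to the CPU
grep 'CPU' ./hjw_demo_checker.log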

5. Model Compilation

  • 02_build.sh
#!/bin/bash

cd $(dirname $0) || exit
set -e

config_file="./hjw_demo_config.yaml"
model_type="onnx"
# build model
hb_mapper makertbin --config ${config_file}  \
                    --model-type  ${model_type}
  • hjw_demo_config.yaml
# Parameters related to model conversion
model_parameters:
  # Caffe float model weights file (left empty since an ONNX model is used here)
  caffe_model: ''
  # Caffe network description file (prototxt, likewise unused here)
  prototxt: ''
  # ONNX float model file
  onnx_model: 'hjw_demo.onnx'
  # Whether to dump intermediate results during conversion; if True,
  # the intermediate outputs of every layer are saved
  layer_out_dump: False
  # Log verbosity:
  # debug - detailed information about the conversion
  # info  - key information only
  # warn  - warnings and errors only
  log_level: 'debug'
  # Output directory for the conversion results
  working_dir: 'model_output'
  # File name prefix of the model generated for on-board execution
  output_model_file_prefix: 'hjw_demo'


# Parameters related to the model input
input_parameters:
  # Name of the model's input node; it must match the name in the model file or an error is raised
  - input_name: 'data'
    # Data format actually fed to the network at runtime, e.g. nv12/featuremap/rgbp/bgrp.
    # If the runtime input is yuv444 and the model was trained on rgb, hb_mapper
    # automatically inserts a YUV-to-RGB conversion
    input_type_rt: 'featuremap'
    # Image format used during training, e.g. rgbp or bgrp (featuremap here)
    input_type_train: 'featuremap'
    # Preprocessing applied to the network input, mainly:
    # no_preprocess       - no operation
    # mean_file           - subtract an image mean file
    # data_scale          - multiply the pixels by data_scale
    # mean_file_and_scale - subtract the mean, then multiply by the scale
    norm_type: 'no_preprocess'
    # Input size fed to the network after resize and crop
    input_shape: '1x8x1200x800'


calibration_parameters:
  # Reference inputs for quantization; image formats such as JPEG and BMP are supported.
  # Pick typical samples, usually 20~50 taken from the test set, covering representative
  # scenes and avoiding corner cases such as over-exposed, saturated, blurred,
  # pure-black or pure-white images
  cal_data:
      # Name of the model's input node; it must match the name in the model file or an error is raised
      - input_name: 'data'
        # Directory holding the calibration reference data
        # (a quick sanity check of this directory is sketched after this config)
        dir: '../calibration_data_feature'
  # If the calibration inputs differ in size from the training size and preprocess_on is True,
  # the default preprocessing scales or crops them to the required size; otherwise the user
  # must resize them to the training size beforehand
  preprocess_on: False
  # Calibration algorithm; kl, max and promoter are supported, and kl is usually sufficient
  calibration_type: 'kl'
  # When the calibration method is set to promoter, the mapper fine-tunes the model on the
  # calibration data to improve accuracy. promoter_level ranges from 0 to 2; try 0, 1, 2 in
  # order and stop as soon as the accuracy requirement is met:
  # 0: slight adjustment, small accuracy gain
  # 1: somewhat larger adjustment than level 0, larger accuracy gain
  # 2: aggressive adjustment; accuracy may improve a lot or may drop
  # (left at -1 here, i.e. not used, since calibration_type is kl)
  promoter_level: -1


# Compiler parameters
compiler_parameters:
  # Compilation strategy: bandwidth optimizes DDR access bandwidth,
  # latency optimizes inference time
  compile_mode: 'latency'
  # With debug set to True the compiler runs in debug mode and outputs
  # performance-simulation information such as frame rate and DDR bandwidth usage
  debug: True
  # Number of BPU cores to compile for; a single-core model is built by default.
  # Uncomment the line below to build a dual-core model
  # core_num: 2
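Since input_type_rt is featuremap, calibration_data_feature presumably holds pre-dumped feature tensors rather than JPEG/BMP images; the build log below reports 48 calibration samples. A quick sanity check on the directory (assuming one file per sample) might look like this:

# count the calibration samples hb_mapper will pick up (48 in this run)
ls ../calibration_data_feature | wc -l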
  • Run sh 02_build.sh. Note in the log below that calibration first fails on Reshape_316 when executed with batch_size=8; hb_mapper then resets batch_size=1, repeats calibration, and the conversion completes successfully.
[root@bbcad39a8264 mapper]# 
[root@bbcad39a8264 mapper]# sh 02_build.sh 
2020-09-07 16:55:56,386 INFO Start hb_mapper....
2020-09-07 16:55:56,386 INFO hb_mapper version 1.1.6
2020-09-07 16:55:57,712 INFO Working dir: /horizon_x3_tc/horizon_x3_tc_1.1.6/samples/05_miscellaneous/08_hjw_demo/mapper/model_output
2020-09-07 16:55:57,712 INFO Start Model Convert....
2020-09-07 16:55:57,786 INFO Parsing the input parameter:{'data': {'input_shape': [1, 8, 1200, 800]}}
2020-09-07 16:55:57,786 INFO Parsing the calibration parameter
2020-09-07 16:56:12,099 INFO Parsing the hbdk parameter:{'compile_mode': 'latency', 'debug': True}
2020-09-07 16:56:12,100 INFO HorizonNN version: 0.6.10
2020-09-07 16:56:12,100 INFO HBDK version: 3.10.3
2020-09-07 16:56:12,100 INFO Start to parse the onnx model.
2020-09-07 16:56:12,382 INFO ONNX model info:
ONNX IR version:  6
Opset version:    10
Input name:       data, [1, 8, 1200, 800]
2020-09-07 16:56:12,699 INFO The onnx model was parsed successfully.
2020-09-07 16:56:12,701 INFO Model input names: ['data']
2020-09-07 16:56:12,701 INFO Input preprocessing:{'data': {'input_shape': [1, 8, 1200, 800]}}
2020-09-07 16:56:14,152 INFO Saving the original float model: hjw_demo_original_float_model.onnx.
2020-09-07 16:56:14,153 INFO Start to optimize the model.
2020-09-07 16:56:14,790 INFO The model was optimized successfully.
2020-09-07 16:56:15,646 INFO Saving the optimized model: hjw_demo_optimized_float_model.onnx.
2020-09-07 16:56:15,646 INFO Start to calibrate the model.
2020-09-07 16:56:15,852 INFO Run calibration model with kl method.
2020-09-07 16:56:16,260 INFO number of calibration data samples: 48
2020-09-07 16:56:45.050338176 [E:onnxruntime:, sequential_executor.cc:165 Execute] Non-zero status code returned while running Reshape node. Name:'Reshape_316' Status Message: /home/jenkins/workspace/model_convert/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{8,75,50,14}, requested shape:{1,7500,7,1}
Stacktrace:

2020-09-07 16:56:45,059 WARNING [Warning]: Error when execute ONNXRuntime with batch_size=8 during calibration phase;
2020-09-07 16:56:45,059 INFO Reset batch_size=1 and execute calibration again...
2020-09-07 17:02:32,967 INFO End calibrate the model.
2020-09-07 17:02:32,969 INFO Start to quantize the model.
2020-09-07 17:02:36,577 INFO The model was quantized successfully.
2020-09-07 17:02:39,308 INFO Saving the quantized model: hjw_demo_quantized_model.onnx.
2020-09-07 17:02:39,308 INFO Start to compile the model with march: bernoulli2.
2020-09-07 17:02:42,068 INFO Compile submodel: torch-jit-export_subgraph_0
2020-09-07 17:02:45,382 INFO hbdk-cc parameters:{'optimize-level': 'O2', 'optimize-target': 'fast', 'debug': 'debug', 'input-layout': 'NHWC', 'output-layout': 'NHWC'}
[==================================================] 100%
2020-09-07 17:04:48,453 INFO The model was compiled successfully.
2020-09-07 17:04:48,454 INFO The node information of hybrid model:
-------------------------------------------
Node                                   Type 
-------------Start Subgraph 0-------------
Conv_0                                 BPU  
Conv_3                                 BPU  
Conv_6                                 BPU  
MaxPool_9                              BPU  
Conv_10                                BPU  
Conv_13                                BPU  
Conv_16                                BPU  
Conv_18                                BPU  
Conv_22                                BPU  
Conv_25                                BPU  
Conv_28                                BPU  
Conv_32                                BPU  
Conv_35                                BPU  
Conv_38                                BPU  
Conv_42                                BPU  
Conv_45                                BPU  
Conv_48                                BPU  
Conv_52                                BPU  
Conv_55                                BPU  
Conv_58                                BPU  
AveragePool_60                         BPU  
Conv_61                                BPU  
Conv_65                                BPU  
Conv_68                                BPU  
Conv_71                                BPU  
Conv_75                                BPU  
Conv_78                                BPU  
Conv_81                                BPU  
Conv_85                                BPU  
Conv_88                                BPU  
Conv_91                                BPU  
Conv_95                                BPU  
Conv_98                                BPU  
Conv_101                               BPU  
Conv_105                               BPU  
Conv_108                               BPU  
Conv_111                               BPU  
Conv_115                               BPU  
Conv_118                               BPU  
Conv_121                               BPU  
AveragePool_123                        BPU  
Conv_124                               BPU  
Conv_128                               BPU  
Conv_131                               BPU  
Conv_134                               BPU  
Conv_138                               BPU  
Conv_141                               BPU  
Conv_144                               BPU  
Conv_148                               BPU  
Conv_151                               BPU  
Conv_154                               BPU  
Conv_156                               BPU  
Conv_160                               BPU  
Conv_163                               BPU  
Conv_166                               BPU  
Conv_170                               BPU  
Conv_173                               BPU  
Conv_176                               BPU  
Conv_180                               BPU  
Conv_182                               BPU  
Conv_185                               BPU  
Resize_216                             BPU  
Conv_217                               BPU  
Conv_220                               BPU  
Resize_251                             BPU  
Conv_252                               BPU  
Conv_255                               BPU  
Conv_257                               BPU  
Conv_259                               BPU  
Conv_261                               BPU  
Conv_263                               BPU  
Conv_264                               BPU  
Conv_266                               BPU  
Conv_268                               BPU  
Conv_270                               BPU  
Conv_271                               BPU  
Conv_273                               BPU  
Conv_275                               BPU  
Conv_277                               BPU  
Conv_278                               BPU  
Conv_280                               BPU  
Conv_282                               BPU  
Conv_284                               BPU  
Conv_285                               BPU  
Conv_287                               BPU  
Conv_289                               BPU  
Conv_291                               BPU  
Conv_292                               BPU  
Conv_294                               BPU  
Conv_296                               BPU  
Conv_298                               BPU  
--------------End Subgraph 0--------------
Reshape_301                            CPU  
Reshape_304                            CPU  
Reshape_307                            CPU  
Reshape_310                            CPU  
Reshape_313                            CPU  
Reshape_316                            CPU  
Concat_317                             CPU  
Concat_318                             CPU  
Concat_319                             CPU  
--------------------End--------------------
2020-09-07 17:04:48,465 INFO start convert to *.bin file....
2020-09-07 17:04:48,500 INFO Convert to runtime bin file sucessfully!
2020-09-07 17:04:48,501 INFO End Model Convert
[root@bbcad39a8264 mapper]# 
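After a successful build the artifacts end up in model_output/. Based on the file names printed in the log above (the name of the runtime model is an assumption: output_model_file_prefix plus a .bin suffix):

ls model_output/
# hjw_demo_original_float_model.onnx    float model as parsed from the ONNX file
# hjw_demo_optimized_float_model.onnx   after graph optimization
# hjw_demo_quantized_model.onnx         after quantization
# hjw_demo.bin                          runtime model for on-board execution (assumed name)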
