Converting the Official Pretrained Model
- Download the yolov7 source code, unzip it locally, and set up the basic runtime environment.
- Download the official pretrained model.
- Enter the yolov7-main directory, create a new weights folder, and place the weight file downloaded in step 2 inside it.
- Modify models/yolo.py:
def forward(self, x):
    # x = x.copy()  # for profiling
    z = []  # inference output
    self.training |= self.export
    for i in range(self.nl):
        x[i] = self.m[i](x[i]).sigmoid()  # conv
    return x[0], x[1], x[2]
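The patch above strips the box-decoding branch from `Detect.forward`, so the export produces the three raw, sigmoid-activated feature maps (decoding is deferred to the post-processing code on the NNIE side). A minimal pure-Python stand-in for that per-element logic (the `detect_forward` helper is hypothetical; plain lists stand in for tensors):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def detect_forward(feature_maps):
    """Mimic the patched Detect.forward: apply sigmoid to each head's
    raw conv output and return the maps as-is, with no box decoding."""
    return [[sigmoid(v) for v in fm] for fm in feature_maps]

out0, out1, out2 = detect_forward([[0.0], [1.0], [-1.0]])
print(out0)  # [0.5], since sigmoid(0) = 0.5
```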
- Create a new export_nnie.py file:
import os
import torch
import onnx
from onnxsim import simplify
import onnxoptimizer
import argparse

from models.yolo import Detect, Model

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='./weights/yolov7.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='./cfg/deploy/yolov7.yaml', help='model config path')
    opt = parser.parse_args()
    print(opt)

    # Save only the weights
    ckpt = torch.load(opt.weights, map_location=torch.device('cpu'))
    torch.save(ckpt['model'].state_dict(), opt.weights.replace(".pt", "-model.pt"))

    # Load the model without post-processing
    new_model = Model(opt.cfg)
    new_model.load_state_dict(torch.load(opt.weights.replace(".pt", "-model.pt"), map_location=torch.device('cpu')), strict=False)
    new_model.eval()

    # Save a TorchScript trace
    example = torch.rand(1, 3, 640, 640)
    traced_script_module = torch.jit.trace(new_model, example)
    traced_script_module.save(opt.weights.replace(".pt", "-jit.pt"))

    # Export to ONNX
    f = opt.weights.replace(".pt", ".onnx")
    torch.onnx.export(new_model, example, f, verbose=False, opset_version=12,
                      training=torch.onnx.TrainingMode.EVAL,
                      do_constant_folding=True,
                      input_names=['data'],
                      output_names=['out0', 'out1', 'out2'])

    # Simplify with onnxsim
    model_simp, check = simplify(f)
    assert check, "Simplified ONNX model could not be validated"
    onnx.save(model_simp, opt.weights.replace(".pt", "-sim.onnx"))

    # Optimize the ONNX graph
    passes = ["extract_constant_to_initializer", "eliminate_unused_initializer"]
    optimized_model = onnxoptimizer.optimize(model_simp, passes)
    onnx.checker.check_model(optimized_model)
    onnx.save(optimized_model, opt.weights.replace(".pt", "-op.onnx"))
    print('finished exporting onnx')
- Run the script with python3 export_nnie.py (it defaults to yolov7.pt; pass --weights to specify the weights and --cfg to specify the model config file). On success it prints the following, and the converted models are saved in the same directory as the weights (the *-op.onnx file is the final output):
Namespace(cfg='./cfg/deploy/yolov7.yaml', weights='./weights/yolov7.pt')
finished exporting onnx
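For reference, the script derives every output file name from the input weights path by substituting the `.pt` suffix. A small hypothetical helper that reproduces that naming scheme (the script itself just inlines these `.replace` calls):

```python
def export_paths(weights: str) -> dict:
    """Map a .pt weights path to the artifact paths written by export_nnie.py."""
    return {
        "state_dict": weights.replace(".pt", "-model.pt"),
        "jit":        weights.replace(".pt", "-jit.pt"),
        "onnx":       weights.replace(".pt", ".onnx"),
        "simplified": weights.replace(".pt", "-sim.onnx"),
        "optimized":  weights.replace(".pt", "-op.onnx"),
    }

print(export_paths("./weights/yolov7.pt")["optimized"])  # ./weights/yolov7-op.onnx
```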
- Modify the ONNX graph structure: create a new modify_onnx.py (yolov7-tiny does not need this step):
import onnx

model = onnx.load("./weights/yolov7-op.onnx")
graph = model.graph
# print(onnx.helper.printable_graph(graph))  # print the ONNX compute graph

# Iterate over a copy: removing from graph.node while iterating it would skip nodes
for n in list(graph.node):
    if n.name in ["Add_291", "Add_298", "Add_305"]:
        graph.node.remove(n)
    if n.name == "Add_289":
        n.output[0] = "946"
    if n.name == "Add_296":
        n.output[0] = "955"
    if n.name == "Add_303":
        n.output[0] = "964"

for n in graph.node:
    print(n.name)

onnx.checker.check_model(model)
onnx.save(model, "./weights/yolov7-op.onnx")
The Add (B=0) nodes cause an error during the Caffe conversion, so they are removed outright and the downstream input is rewired to the upstream output.
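The remove-and-rewire idea can be sketched on toy node dicts (not real ONNX protobufs). Note the sketch rewires each consumer to read the dead node's input, which is equivalent to the script's approach of renaming the upstream output to the consumer's input name; `drop_identity_adds` is a hypothetical helper:

```python
def drop_identity_adds(nodes, dead):
    """Remove the named no-op Add nodes and reconnect each consumer
    to the tensor the dead node was reading (toy single-input nodes)."""
    remap = {n["output"]: n["input"] for n in nodes if n["name"] in dead}
    kept = [n for n in nodes if n["name"] not in dead]
    for n in kept:
        n["input"] = remap.get(n["input"], n["input"])
    return kept

nodes = [
    {"name": "Add_289", "input": "a", "output": "b"},
    {"name": "Add_291", "input": "b", "output": "c"},  # Add with B=0
    {"name": "Conv_1",  "input": "c", "output": "d"},
]
print(drop_identity_adds(nodes, {"Add_291"}))  # Conv_1 now reads "b"
```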
Deploying to HiSilicon Cameras: Model Conversion
Prerequisites
- A software-defined camera with a Hi3516/3519-series chip
- The RuyiStudio development tool, used to convert NNIE models
- VSCode/MobaXterm, for debugging on the camera
- A pycaffe environment with support for custom ops such as upsample and permute
Detailed Workflow
- onnx -> caffe: you can refer to the open-source yolov5_onnx2caffe project; a successful conversion prints a success message.
- Open the RuyiStudio development tool and create a new NNIE project; the SoC version must match the target camera's chip.
- Open the .cfg file, fill in the parameters required for the onnx2nnie conversion, then click the Run button.
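For orientation, an nnie_mapper .cfg typically names the Caffe model pair, a calibration image list, and the input normalization. The fragment below is only an illustrative sketch: every path is a placeholder, and each field value should be checked against the SVP/RuyiStudio documentation for your chip:

```ini
[prototxt_file] ./mark_prototxt/yolov7_mark.prototxt
[caffemodel_file] ./weights/yolov7.caffemodel
[batch_num] 1
[net_type] 0
[sparse_rate] 0
[compile_mode] 0
[is_simulation] 0
[log_level] 0
[image_list] ./data/images_list.txt
[image_type] 1
[norm_type] 3
[data_scale] 0.0039062
```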
- A successful run prints the conversion log.
Deploying to HiSilicon Cameras: NNIE Loading and Inference
- Load and run the model through the HiSilicon NNIE service interface.
Coming Next
- HiSilicon NNIE Development: yolov8