Converting an ONNX model to ncnn:
./onnx2ncnn lw50.onnx lw50.param lw50.bin
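Before encrypting, it's worth sanity-checking the converted model by loading the plain .param/.bin files directly. A minimal sketch; "input" and "output" are placeholder blob names (onnx2ncnn keeps the names from the ONNX graph, so check lw50.param for the real ones), and the input shape is a dummy:

#include "net.h"
#include <cstdio>

int main()
{
    ncnn::Net net;
    // File-based load_param/load_model return 0 on success.
    if (net.load_param("lw50.param") != 0 || net.load_model("lw50.bin") != 0)
        return -1;

    ncnn::Mat in(224, 224, 3); // dummy input; use your model's real shape
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("input", in);   // placeholder blob name
    ncnn::Mat out;
    ex.extract("output", out); // placeholder blob name
    printf("out: c=%d h=%d w=%d\n", out.c, out.h, out.w);
    return 0;
}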
Encrypting the model with ncnn2mem
After building ncnn on Ubuntu, run the following in build/tools/:
./ncnn2mem crnn.param crnn.bin crnn.id.h crnn.mem.h
This generates crnn.id.h, crnn.mem.h, and crnn.param.bin.
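For reference, the generated headers look roughly like this (an illustrative excerpt only; the actual layer/blob names and indices depend on your network):

// crnn.id.h -- integer IDs for every layer and blob in the network
namespace crnn_param_id {
const int LAYER_data = 0;
const int BLOB_data = 0;
// ... one LAYER_/BLOB_ constant per layer and blob ...
const int BLOB_pred_fc = 42; // illustrative index only
} // namespace crnn_param_id

// crnn.mem.h -- the binary param and the weights embedded as byte arrays
static const unsigned char crnn_param_bin[] = { 0x00, 0x00, /* ... */ };
static const unsigned char crnn_bin[] = { 0x00, 0x00, /* ... */ };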
Python: pth to ONNX, then to ncnn, then encrypt
import os.path
import subprocess

import torch

from models.faceland import FaceLanndInference

if __name__ == '__main__':
    pth_path = r'../weights/0.1003_slim128_epoch_47.pth'
    checkpoint = torch.load(pth_path)
    # checkpoint = torch.load(r'0.0887_slim128_epoch_1250.pth')
    plfd_backbone = FaceLanndInference()  # .cuda()
    plfd_backbone.load_state_dict(checkpoint)
    plfd_backbone.eval()

    file_a = os.path.basename(pth_path)[:-4]  # weight file name without the .pth suffix
    onnx_name = "faceland_98.onnx"
    data = torch.rand(1, 3, 112, 112)  # dummy input matching the network's input shape
    torch.onnx.export(plfd_backbone,
                      data,
                      onnx_name,
                      export_params=True,        # store the trained parameter weights inside the model file
                      opset_version=10,          # the ONNX version to export the model to
                      do_constant_folding=True,  # whether to execute constant folding for optimization
                      input_names=['input'],
                      output_names=['output'])

    # ONNX -> ncnn param/bin
    cmd = rf"E:\project\angle_net\xuanzhuan_detect\ncnn-20220420-windows-vs2017\x64\bin\onnx2ncnn.exe {onnx_name} {file_a}.param {file_a}.bin"
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    r = p.stdout.read()
    print(r)

    # param/bin -> id.h/mem.h via ncnn2mem
    cmd = rf"E:\project\angle_net\xuanzhuan_detect\ncnn-20220420-windows-vs2017\x64\bin\ncnn2mem.exe {file_a}.param {file_a}.bin landmark_0.1003_47.id.h landmark_0.1003_47.mem.h"
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    r = p.stdout.read()
    print(r)
C++: packaging the model into an .so file
There are several ways to load the model at this point. To make future releases easier, the plan is to pack the model into the .so file itself, so no separate model files need to be shipped to the front end. This is where crnn.id.h and crnn.mem.h come in: include them as headers in the project and load the model from memory. The calling code looks like this:
#include "net.h"
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include "crnn.id.h"
#include "crnn.mem.h"
std::string dir = "F:/";
const int img_width = 144;
const int img_height = 32;
const int output_length = 37;
void pretty_print(const ncnn::Mat& m)
{
std::cout << m.c << std::endl;
std::cout << m.h << std::endl;
std::cout << m.w << std::endl;
for (int q = 0; q < m.c; q++)
{
const float* ptr = m.channel(q);
for (int y = 0; y < m.h; y++)
{
for (int x = 0; x < m.w; x++)
{
//printf("%f ", ptr[x]);
}
ptr += m.w;
//printf("\n");
}
//printf("------------------------\n");
}
}
int main(int argc, char** argv)
{
cv::Mat img = cv::imread("G:/dataset/billinfo/phonenum/ori_set/baidu_rec/done/baidu_error/0009-JT5072471253611-out_warehouse-JTSD-1.jpg");
cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
int w = img.cols;
int h = img.rows;
ncnn::Mat in = ncnn::Mat::from_pixels_resize(img.data, ncnn::Mat::PIXEL_GRAY, w, h, img_width, img_height);
const float mean_vals[3] = { 0.5f * 255.f, 0.5f * 255.f, 0.5f * 255.f };
const float norm_vals[3] = { 1 / 0.5f / 255.f, 1 / 0.5f / 255.f, 1 / 0.5f / 255.f };
in.substract_mean_normalize(mean_vals, norm_vals);
ncnn::Net net;
net.load_param(crnn_param_bin);
net.load_model(crnn_bin);
ncnn::Extractor ex = net.create_extractor();
ex.set_light_mode(true);
ex.set_num_threads(4);
ncnn::Mat out;
ex.input(crnn_param_id::BLOB_data, in);
ex.extract(crnn_param_id::BLOB_pred_fc, out);
pretty_print(out);
}
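pretty_print only dumps the shape; to turn the CRNN output into text you still need a CTC decode. A minimal greedy-decode sketch, assuming out has one row per time step, 37 classes per row, and the blank at index 0 (all three are assumptions; adjust to your training setup):

// Greedy CTC decode: argmax per time step, collapse repeats, drop blanks.
// Assumes out.h = time steps, out.w = number of classes (37), blank at index 0.
std::string ctc_greedy_decode(const ncnn::Mat& out, const std::string& charset)
{
    std::string text;
    int prev = -1;
    for (int t = 0; t < out.h; t++)
    {
        const float* row = out.row(t);
        int best = 0;
        for (int k = 1; k < out.w; k++)
            if (row[k] > row[best]) best = k;
        if (best != 0 && best != prev)
            text += charset[best - 1]; // charset is indexed without the blank
        prev = best;
    }
    return text;
}

This would be called as ctc_greedy_decode(out, charset) with a 36-character charset, consistent with output_length = 37 (36 symbols plus the blank).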
The model converted successfully, but at inference time the output was wrong. After digging in, it turned out that the Squeeze and Transpose operations were having no effect. The problem was again in the model-encryption step: after encrypting the model with a Windows build of ncnn instead and using the resulting files, the output was correct. So is the ncnn build on Ubuntu broken??
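One way to narrow this down is to bypass the embedded headers and load the ncnn2mem outputs straight from disk: ncnn::Net can read the binary crnn.param.bin via load_param_bin. A sketch of that cross-check, reusing the blob IDs and the input Mat from the code above; if this path gives correct results while the embedded arrays do not, the fault lies in the generated headers rather than in the converted model:

// Load the ncnn2mem outputs from files rather than from crnn.mem.h.
ncnn::Net net2;
net2.load_param_bin("crnn.param.bin"); // binary param produced by ncnn2mem
net2.load_model("crnn.bin");           // original weight file

ncnn::Extractor ex2 = net2.create_extractor();
ex2.input(crnn_param_id::BLOB_data, in);
ncnn::Mat out2;
ex2.extract(crnn_param_id::BLOB_pred_fc, out2);
pretty_print(out2);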
————————————————
Copyright notice: this is an original article by CSDN blogger "whatsuo", licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/whatsuo/article/details/122589592