Converting TensorFlow models to NCNN

For NCNN itself, see the earlier post on building NCNN with CMake and VS2017.

Tencent's NCNN currently ships no tensorflow2ncnn tool. One available workaround is to convert the TensorFlow .pb model to a Core ML model, then to an ONNX model, and finally to NCNN.

Below is a TensorFlow-to-NCNN procedure that reportedly passed testing on a model modified from MobileNetV2, with correct model outputs; I tried it on a simple ResNet-style classification model I trained myself, and its outputs were also correct. The steps are as follows:

1. Use freeze_graph.py to produce a frozen .pb model; see the earlier post "[tensorflow] generating a .pb file".
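As a rough sketch of this step (every path, the checkpoint name, and the output node name below are placeholders; the node name must match your own graph), invoking TensorFlow's freeze_graph tool typically looks like:

```shell
# Bake checkpoint weights into a single frozen .pb file.
# --output_node_names must be the graph's output op (no ":0" suffix).
python freeze_graph.py \
    --input_graph=graph.pbtxt \
    --input_checkpoint=model.ckpt \
    --output_graph=WorkCardModel.pb \
    --output_node_names=resnet/predictions/Reshape_1
```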

2. Use tf-coreml to convert the TensorFlow model to a Core ML model

The dependencies required by tf-coreml are:
Python
tensorflow >= 1.5.0
coremltools >= 0.8
numpy >= 1.6.2
protobuf >= 3.1.0

Since I am on Windows, coremltools has to be installed with the following commands (Git must be installed on the machine):

pip install git+https://github.com/apple/coremltools
pip install -U tfcoreml

Once tfcoreml is installed, use the following script to convert the TensorFlow .pb model to a Core ML model:

import tfcoreml as tf_converter

tf_converter.convert(tf_model_path = r'C:\software\tensorflow-onnx-master\examples\WorkCardModel.pb', 
                     mlmodel_path = r'C:\software\tensorflow-onnx-master\examples\my_model.mlmodel', 
                     output_feature_names = ['resnet/predictions/Reshape_1:0'])

If the conversion succeeds, you will see output like the following:

Core ML model generated. Saved at location: C:\software\tensorflow-onnx-master\examples\my_model.mlmodel 

Core ML input(s): 
 [name: "input_x__0"
type {
  multiArrayType {
    shape: 3
    shape: 96
    shape: 96
    dataType: DOUBLE
  }
}
]
Core ML output(s): 
 [name: "resnet__predictions__Reshape_1__0"
type {
  multiArrayType {
    shape: 3
    dataType: DOUBLE
  }
}
]
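Note how the tensor names were rewritten in the output above: judging from the listing, the converter replaces `/` and `:` in TensorFlow tensor names with `__`, and these mangled names are what the final NCNN blobs are called. A small helper (my own, not part of any of the tools) makes the mapping explicit:

```python
def ncnn_blob_name(tf_tensor_name: str) -> str:
    """Mimic the name mangling seen in the converter output:
    '/' and ':' both become '__'."""
    return tf_tensor_name.replace('/', '__').replace(':', '__')

print(ncnn_blob_name('input_x:0'))                       # input_x__0
print(ncnn_blob_name('resnet/predictions/Reshape_1:0'))  # resnet__predictions__Reshape_1__0
```

These are the names passed to ex.input() and ex.extract() in the C++ code further below.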

3. Use WinMLTools to convert the Core ML model to an ONNX model

pip install -U winmltools

After installing winmltools, run the following script:

from coremltools.models.utils import load_spec
from winmltools import convert_coreml
from winmltools.utils import save_model

# Load model file
model_coreml = load_spec(r'C:\software\tensorflow-onnx-master\examples\my_model.mlmodel')

# Convert it!
# The second argument (7) is the target ONNX opset version; the name
# parameter is used by the automatic code generator (mlgen) for class names.
model_onnx = convert_coreml(model_coreml, 7, name='ExampleModel')

# Save the produced ONNX model in binary format
save_model(model_onnx, r'C:\software\tensorflow-onnx-master\examples\example.onnx')

4. In the previously built directory C:\software\ncnn\build-vs2017\tools\onnx, use onnx2ncnn to convert the ONNX model to NCNN format

# generates ncnn.bin and ncnn.param by default
onnx2ncnn.exe example.onnx

# or specify the output names explicitly
onnx2ncnn.exe example.onnx example.bin example.param

The default invocation produces two files: ncnn.bin and ncnn.param.
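As a quick sanity check on the result, an ncnn .param file is plain text whose first line is the magic number 7767517 and whose second line holds the layer count and blob count. A minimal sketch of checking that header (the sample header string is fabricated for illustration):

```python
def check_param_header(text: str):
    """Parse the first two lines of an ncnn .param file:
    the magic number, then 'layer_count blob_count'."""
    lines = text.splitlines()
    magic = int(lines[0])
    layer_count, blob_count = map(int, lines[1].split())
    return magic == 7767517, layer_count, blob_count

# Example with a minimal fabricated header:
ok, layers, blobs = check_param_header("7767517\n75 83\n")
print(ok, layers, blobs)  # True 75 83
```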

The following code, adapted from squeezenet.cpp in ncnn/examples, loads ncnn.bin and ncnn.param and runs the three-class model:

// Tencent is pleased to support the open source community by making ncnn available.
//
// Copyright (C) 2017 THL A29 Limited, a Tencent company. All rights reserved.
//
// Licensed under the BSD 3-Clause License (the "License"); you may not use this file except
// in compliance with the License. You may obtain a copy of the License at
//
// https://opensource.org/licenses/BSD-3-Clause
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.

#include <stdio.h>
#include <iostream>
#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include "platform.h"
#include "net.h"
#if NCNN_VULKAN
#include "gpu.h"
#endif // NCNN_VULKAN


static int detect_squeezenet(const cv::Mat& bgr, std::vector<float>& cls_scores)
{
	ncnn::Net squeezenet;

#if NCNN_VULKAN
	squeezenet.opt.use_vulkan_compute = true;
#endif // NCNN_VULKAN

	squeezenet.load_param("ncnn.param");
	squeezenet.load_model("ncnn.bin");

	ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR, bgr.cols, bgr.rows, 96, 96);

	const float mean_vals[3] = { 0.f, 0.f, 0.f };
	const float norm_vals[3] = { 1.0 / 255, 1.0 / 255, 1.0 / 255 };
	in.substract_mean_normalize(mean_vals, norm_vals);

	ncnn::Extractor ex = squeezenet.create_extractor();

	ex.input("input_x__0", in);

	ncnn::Mat out;
	ex.extract("resnet__predictions__Reshape_1__0", out);

	cls_scores.resize(out.w);
	for (int j = 0; j < out.w; j++)
	{
		cls_scores[j] = out[j];
	}

	return 0;
}



int main(int argc, char** argv)
{
	if (argc != 2)
	{
		fprintf(stderr, "Usage: %s [imagepath]\n", argv[0]);
		return -1;
	}

	const char* imagepath = argv[1];

	cv::Mat m = cv::imread(imagepath, 1);
	if (m.empty())
	{
		fprintf(stderr, "cv::imread %s failed\n", imagepath);
		return -1;
	}

#if NCNN_VULKAN
	ncnn::create_gpu_instance();
#endif // NCNN_VULKAN

	std::vector<float> cls_scores;

	double start, timeConsume;
	start = static_cast<double>(cv::getTickCount());

	detect_squeezenet(m, cls_scores);

	for (size_t i = 0; i < cls_scores.size(); i++)
		std::cout << cls_scores[i] << std::endl;

	timeConsume = ((double)cv::getTickCount() - start) / cv::getTickFrequency();
	printf("time: %f s\n", timeConsume);

#if NCNN_VULKAN
	ncnn::destroy_gpu_instance();
#endif // NCNN_VULKAN



	return 0;
}
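For reference, substract_mean_normalize applies out = (in - mean) * norm per channel, so with the values above the BGR pixels are simply scaled to [0, 1]; this should match whatever input scaling the TensorFlow model was trained with. A pure-Python sketch of the same per-value arithmetic:

```python
def normalize(pixel, mean, norm):
    """Replicate ncnn's substract_mean_normalize for a single
    channel value: out = (in - mean) * norm."""
    return (pixel - mean) * norm

# With mean 0 and norm 1/255, pixel values 0..255 map to 0..1.
print(normalize(0, 0.0, 1.0 / 255))
print(normalize(255, 0.0, 1.0 / 255))
```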


References:

Converting other model formats to Core ML model files (.pb)

https://github.com/Tencent/ncnn/issues/5

Converting a TensorFlow model to an NCNN model

Convert ML models to ONNX with WinMLTools
