Deploying Keras or TensorFlow Models with C++

This article describes how to deploy a Keras or TensorFlow model in a C++ environment.

I. For Keras

Step 1: build, train, and save the model with Keras.

model.save('./your_keras_model.h5')
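A minimal sketch of this step, written here with tf.keras (the standalone keras package used elsewhere in this article works the same way); the toy architecture and the commented-out training call are placeholders for your own network and data:

```python
from tensorflow import keras

# Hypothetical toy network; substitute your real architecture.
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(x_train, y_train, epochs=10)  # train on your own data
model.save("./your_keras_model.h5")  # HDF5 file consumed by step 2
```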

Step 2: freeze the Keras model.

from keras.models import load_model
import tensorflow as tf
from tensorflow.python.framework import graph_io
from keras import backend as K

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables())
                                .difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph

K.set_learning_phase(0)
keras_model = load_model('./your_keras_model.h5')
print('Inputs are:', keras_model.inputs)
print('Outputs are:', keras_model.outputs)

frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in keras_model.outputs])
graph_io.write_graph(frozen_graph, "./", "your_frozen_model.pb", as_text=False)
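The C++ code in part III needs the exact input and output node names of the frozen graph. If the names printed above were not recorded, the .pb file can be inspected directly; here is a small sketch (the helper name is mine, and the path assumes the file written above):

```python
import tensorflow as tf

def list_graph_nodes(pb_path):
    """Parse a frozen GraphDef and return (placeholder names, all node names)."""
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    placeholders = [n.name for n in graph_def.node if n.op == "Placeholder"]
    return placeholders, [n.name for n in graph_def.node]

# placeholders, nodes = list_graph_nodes("./your_frozen_model.pb")
# print("candidate input nodes:", placeholders)
# print("last node (often the output):", nodes[-1])
```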

II. For TensorFlow

1. Build, train, and save the model with TensorFlow.

saver = tf.train.Saver()
saver.save(sess, "./your_tf_model.ckpt")
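A minimal sketch of this step (written against the tf.compat.v1 API so it also runs under TF 2.x; the toy graph is mine). Note the explicit node names, which you will need when freezing and when calling the model from C++:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Hypothetical toy graph; give the input and output ops explicit names.
x = tf1.placeholder(tf.float32, shape=[None, 4], name="input_node")
w = tf1.get_variable("w", shape=[4, 2])
logits = tf.identity(tf.matmul(x, w), name="output_node")

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    tf1.train.Saver().save(sess, "./your_tf_model.ckpt")
    # The freezing tool also needs the serialized graph definition:
    tf1.train.write_graph(sess.graph_def, "./", "your_tf_model.pbtxt")
```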

2. Freeze the TensorFlow model.

Note that freeze_graph.py also needs the serialized graph definition (exported, for example, with tf.train.write_graph):

python freeze_graph.py --input_graph=./your_tf_model.pbtxt --input_checkpoint=./your_tf_model.ckpt --output_graph=./your_frozen_model.pb --output_node_names=output_node

 

III. Call the frozen model through TensorFlow's C/C++ API. Here we feed the model an image preprocessed with OpenCV.

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace tensorflow;

// Copy an OpenCV Mat into a pre-allocated float tensor (NHWC layout).
void CVMat_to_Tensor(cv::Mat img, Tensor* converted_tensor, int image_height, int image_width){
  float *p = converted_tensor->flat<float>().data();
  cv::Mat tempMat(image_height, image_width, CV_32FC1, p);
  img.convertTo(tempMat, CV_32FC1);
}

// Resize/normalize the image to match the network's expected input; left as a stub here.
void preprocess_image(cv::Mat img){
  return;
}

int main(int argc, char* argv[]){

Session* session;
Status status = NewSession(SessionOptions(), &session);
GraphDef graph_def;
status = ReadBinaryProto(Env::Default(), "./your_frozen_model.pb", &graph_def);
status = session->Create(graph_def);
string input_node_name = "check the name";   // fill in the real input node name
string output_node_name = "check the name";  // fill in the real output node name

cv::Mat img = cv::imread("./your_image.png", 0);  // 0 = load as grayscale
int input_image_height = img.size().height;
int input_image_width = img.size().width;
int input_image_channels = img.channels();

preprocess_image(img);
// height, width, and channels must meet the requirements of the network.
Tensor input_data(DT_FLOAT, TensorShape({1, input_image_height, input_image_width, input_image_channels}));
CVMat_to_Tensor(img, &input_data, input_image_height, input_image_width);

std::vector<std::pair<string, tensorflow::Tensor>> inputs = {{input_node_name, input_data}};
std::vector<tensorflow::Tensor> outputs;
status = session->Run(inputs, {output_node_name}, {}, &outputs);
Tensor output_data = outputs[0];
// for classification problems, output_data is a tensor of shape [batch_size, class_num]
auto tp = output_data.tensor<float, 2>();

int class_num = output_data.shape().dim_size(1);
int output_class_id = -1;
double output_prob = 0.0;

for(int j = 0; j < class_num; j++){
if(tp(0, j) >= output_prob){
output_class_id = j;
output_prob = tp(0, j);
}
}
std::cout << "Class index is: " << output_class_id << ", with prob " << output_prob << std::endl;
session->Close();
return 0;
}
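For readers who want to sanity-check the two numeric steps in the C++ program outside of C++, here is an equivalent NumPy sketch (the function names are mine) of what CVMat_to_Tensor and the argmax loop compute:

```python
import numpy as np

def mat_to_tensor(img, height, width):
    """Mirror of CVMat_to_Tensor: grayscale HxW uint8 image -> 1xHxWx1 float32 batch."""
    return img.astype(np.float32).reshape(1, height, width, 1)

def top_class(scores):
    """Mirror of the output loop: argmax over a [batch_size, class_num] tensor."""
    class_id = int(np.argmax(scores[0]))
    return class_id, float(scores[0, class_id])

# Example with a dummy 28x28 grayscale image and dummy class scores.
tensor = mat_to_tensor(np.zeros((28, 28), dtype=np.uint8), 28, 28)
class_id, prob = top_class(np.array([[0.1, 0.7, 0.2]], dtype=np.float32))
```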

 

Reposted from: https://www.cnblogs.com/qiandeheng/p/10175188.html
