Compiling TensorFlow C++ on Ubuntu 16.04 and Calling a pb File (Part 1)

Deep learning networks are usually trained with TensorFlow in Python: installing TensorFlow in a Python environment is easy, and its Python API is the friendliest. Sometimes, though, development has to happen in C++. The plan is therefore to train the network in Python, freeze it into a pb file once training is done, and load that file from the C++ build of TensorFlow. Building TensorFlow for C++ is more involved than the Python install, so this post is mainly a build log for future reference. There are plenty of write-ups online, but I hit enough problems along the way that it seemed worth recording the process in more detail.

My environment: Ubuntu 16.04, Python 3.5, CUDA 9.0, cuDNN 7, TensorFlow 1.9 GPU (the Python build), with the goal of building and installing the TensorFlow 1.9 GPU C++ library.

The main steps: 1. install protobuf; 2. install Bazel; 3. clone the TensorFlow source and build it; 4. install the Eigen library; 5. load the trained pb file via CMake; 6. load the trained pb file via Qt Creator.

1 Installing Protobuf

One important point when installing protobuf: its version must match the TensorFlow version you are building, otherwise you will run into all kinds of problems later. I never found a definitive reference for which TensorFlow version pairs with which protobuf version; after hitting problems and trying several versions, I settled on protobuf 3.5.0 (matching TensorFlow 1.9). Start from the official protobuf releases page.

 

On the releases page, download the bottom entry, Source code (tar.gz), and extract it into a folder of your own (keeping files organized saves headaches later). Open a terminal in that folder. Before building protobuf, a couple of tools need to be installed first (automake, libtool).

 

Run the following commands in order:

sudo apt-get install automake libtool
./autogen.sh
./configure
make
sudo make install
sudo ldconfig
# sudo make uninstall  # uninstall, if you installed the wrong version
protoc --version  # check the installed protobuf version

 

2 Installing Bazel

Bazel must also match the TensorFlow version; here I use Bazel 0.15.2. From Bazel's download page, grab the binary installer bazel-0.15.2-installer-linux-x86_64.sh.

Then run the installer from a terminal:

chmod +x bazel-0.15.2-installer-linux-x86_64.sh
./bazel-0.15.2-installer-linux-x86_64.sh --user

After installation, set up the environment: open ~/.bashrc (sudo gedit ~/.bashrc) and append the following line at the end:

export PATH="$PATH:$HOME/bin"

Save the file, then make it take effect with source ~/.bashrc.

 

3 Cloning and Building the TensorFlow Source

First clone the TensorFlow source from GitHub:

git clone --recursive https://github.com/tensorflow/tensorflow

Once the clone finishes, cd into it. (TensorFlow has since moved on to 1.13, and I need 1.9, so check out the r1.9 branch first.)

cd ./tensorflow
git checkout r1.9
./configure

During configure, most prompts can safely be answered No (look up what each option means if you are curious). The important one: if you want GPU support, be sure to answer Y to "build TensorFlow with CUDA support?" and enter the matching CUDA and cuDNN versions (TensorFlow 1.9 GPU expects CUDA 9.0 and cuDNN 7).
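As a sketch of a non-interactive alternative: the configure script also reads its answers from environment variables. The variable names below are my understanding of the TF 1.9 configure.py; verify them against the script in your own checkout before relying on them.

```shell
# Assumed variable names -- check against configure.py in the r1.9 branch.
export PYTHON_BIN_PATH="$(which python3)"
export TF_NEED_CUDA=1                      # build with GPU (CUDA) support
export TF_CUDA_VERSION=9.0                 # matches the installed CUDA toolkit
export TF_CUDNN_VERSION=7                  # matches the installed cuDNN
export TF_CUDA_COMPUTE_CAPABILITIES=6.1    # adjust to your GPU model
./configure
```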

When configuration is done, build with Bazel:

bazel build --config=opt //tensorflow:libtensorflow_cc.so  # CPU build
bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so  # GPU build

Note: if your C++ program will also use OpenCV, build with the following command instead; otherwise you may hit the known problem where OpenCV's imread stops working (see the reference links at the end):

bazel build --config=monolithic //tensorflow:libtensorflow_cc.so

 

The build usually takes a long time, on the order of 30-60 minutes. When it finishes, the output looks roughly like:

Target //tensorflow:libtensorflow_cc.so up-to-date:
  bazel-bin/tensorflow/libtensorflow_cc.so

INFO: Elapsed time: 1233.631s, Critical Path: 48.36s
INFO: 2724 processes: 2724 local.
INFO: Build completed successfully, 2842 total actions

Note: check whether the path ./tensorflow/tensorflow/contrib/makefile contains a downloads folder. If it does not, open a terminal in ./tensorflow/tensorflow/contrib/makefile and run this shell script:

./download_dependencies.sh

The script downloads a set of dependency files; once it finishes, the downloads folder will be there.

 

4 Installing Eigen

Open the downloads folder from the previous step; it contains an eigen folder. Enter it, open a terminal, and run:

mkdir build
cd build
cmake ..
make
sudo make install

After installation, an eigen3 folder appears under /usr/local/include. (Eigen is header-only, so make install simply copies the headers there.)

 

5 Loading the Trained pb File with CMake

(This step largely follows write-ups from other people online.) Create a Python project that produces a trained model: make a model folder inside the project directory, then create a Python file and run the following code:

import tensorflow as tf
import numpy as np
import os
tf.app.flags.DEFINE_integer('training_iteration', 1000,
                            'number of training iterations.')
tf.app.flags.DEFINE_integer('model_version', 1, 'version number of the model.')
tf.app.flags.DEFINE_string('work_dir', 'model/', 'Working directory.')
FLAGS = tf.app.flags.FLAGS
 
sess = tf.InteractiveSession()
 
x = tf.placeholder('float', shape=[None, 5],name="inputs")
y_ = tf.placeholder('float', shape=[None, 1])
w = tf.get_variable('w', shape=[5, 1], initializer=tf.truncated_normal_initializer)
b = tf.get_variable('b', shape=[1], initializer=tf.zeros_initializer)
sess.run(tf.global_variables_initializer())
y = tf.add(tf.matmul(x, w) , b,name="outputs")
ms_loss = tf.reduce_mean((y - y_) ** 2)
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(ms_loss)
train_x = np.random.randn(1000, 5)
# let the model learn the equation y = x1*1 + x2*2 + x3*3 + x4*4 + x5*5
train_y = np.sum(train_x * np.array([1, 2, 3,4,5]) + np.random.randn(1000, 5) / 100, axis=1).reshape(-1, 1)
for i in range(FLAGS.training_iteration):
    loss, _ = sess.run([ms_loss, train_step], feed_dict={x: train_x, y_: train_y})
    if i%100==0:
        print("loss is:",loss)
        graph = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def,
                                                             ["inputs", "outputs"])
        tf.train.write_graph(graph, ".", FLAGS.work_dir + "liner.pb",
                             as_text=False)
print('Done exporting!')
print('Done training!')

When it finishes, a liner.pb file appears in the model folder.
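What the frozen graph computes is just a linear map, and the training loop above can be checked with a plain NumPy re-implementation of the same full-batch gradient descent (no TensorFlow needed; this is my sketch, not part of the original script):

```python
import numpy as np

# Same data-generating process as the script above:
# y is approximately x . [1, 2, 3, 4, 5] plus tiny noise.
rng = np.random.RandomState(0)
train_x = rng.randn(1000, 5)
true_w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
train_y = (train_x * true_w + rng.randn(1000, 5) / 100).sum(axis=1, keepdims=True)

w = np.zeros((5, 1))
b = 0.0
lr = 0.005  # same learning rate as the GradientDescentOptimizer above
for _ in range(1000):
    pred = train_x @ w + b
    err = pred - train_y                       # shape [1000, 1]
    w -= lr * 2 * train_x.T @ err / len(err)   # gradient of mean(err**2) w.r.t. w
    b -= lr * 2 * err.mean()                   # gradient w.r.t. b

print(w.ravel())  # converges to roughly [1, 2, 3, 4, 5]
```

This is exactly the function the C++ code will later evaluate through the pb file.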

Before calling the pb file from C++, upgrade CMake (version 3.10 or newer is required). I downloaded 3.11 (cmake-3.11.0-Linux-x86_64.tar.gz) from the CMake site.

Extract the archive, then add its cmake-3.11.0-Linux-x86_64/bin directory to your ~/.bashrc.

$ gedit ~/.bashrc  # open ~/.bashrc

Append the bin directory to the end of ~/.bashrc:

export PATH=/home/wz/cmake-3.11.0/bin:$PATH

Then source it:

source ~/.bashrc

And check the CMake version:

cmake --version

Now create the C++ project. In its folder, create the following files:

model_loader_base.h:

#ifndef CPPTENSORFLOW_MODEL_LOADER_BASE_H
#define CPPTENSORFLOW_MODEL_LOADER_BASE_H
#include <iostream>
#include <vector>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
 
using namespace tensorflow;
 
namespace tf_model {
 
/**
 * Base Class for feature adapter, common interface convert input format to tensors
 * */
    class FeatureAdapterBase{
    public:
        FeatureAdapterBase() {};
 
        virtual ~FeatureAdapterBase() {};
 
        virtual void assign(std::string, std::vector<double>*) = 0;  // tensor_name, tensor_double_vector
 
        std::vector<std::pair<std::string, tensorflow::Tensor> > input;
 
    };
 
    class ModelLoaderBase {
    public:
 
        ModelLoaderBase() {};
 
        virtual ~ModelLoaderBase() {};
 
        virtual int load(tensorflow::Session*, const std::string) = 0;     // pure virtual function: load method
 
        virtual int predict(tensorflow::Session*, const FeatureAdapterBase&, const std::string, double*) = 0;
 
        tensorflow::GraphDef graphdef; //Graph Definition for current model
 
    };
 
}
 
#endif //CPPTENSORFLOW_MODEL_LOADER_BASE_H

ann_model_loader.h:

#ifndef CPPTENSORFLOW_ANN_MODEL_LOADER_H
#define CPPTENSORFLOW_ANN_MODEL_LOADER_H
 
#include "model_loader_base.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
 
using namespace tensorflow;
 
namespace tf_model {
 
/**
 * @brief: Model Loader for Feed Forward Neural Network
 * */
    class ANNFeatureAdapter: public FeatureAdapterBase {
    public:
 
        ANNFeatureAdapter();
 
        ~ANNFeatureAdapter();
 
        void assign(std::string tname, std::vector<double>*) override; // (tensor_name, tensor)
 
    };
 
    class ANNModelLoader: public ModelLoaderBase {
    public:
        ANNModelLoader();
 
        ~ANNModelLoader();
 
        int load(tensorflow::Session*, const std::string) override;    //Load graph file and new session
 
        int predict(tensorflow::Session*, const FeatureAdapterBase&, const std::string, double*) override;
 
    };
 
}
 
 
#endif //CPPTENSORFLOW_ANN_MODEL_LOADER_H

ann_model_loader.cpp:

#include <iostream>
#include <vector>
#include <map>
#include "ann_model_loader.h"
//#include <tensor_shape.h>
 
using namespace tensorflow;
 
namespace tf_model {
 
/**
 * ANNFeatureAdapter Implementation
 * */
    ANNFeatureAdapter::ANNFeatureAdapter() {
 
    }
 
    ANNFeatureAdapter::~ANNFeatureAdapter() {
 
    }
 
/*
 * @brief: Feature Adapter: convert 1-D double vector to Tensor, shape [1, ndim]
 * @param: std::string tname, tensor name;
 * @param: std::vector<double>*, input vector;
 * */
    void ANNFeatureAdapter::assign(std::string tname, std::vector<double>* vec) {
        //Convert input 1-D double vector to Tensor
        int ndim = vec->size();
        if (ndim == 0) {
            std::cout << "WARNING: Input Vec size is 0 ..." << std::endl;
            return;
        }
        // Create New tensor and set value
        Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, ndim})); // New Tensor shape [1, ndim]
        auto x_map = x.tensor<float, 2>();
        for (int j = 0; j < ndim; j++) {
            x_map(0, j) = (*vec)[j];
        }
        // Append <tname, Tensor> to input
        input.push_back(std::pair<std::string, tensorflow::Tensor>(tname, x));
    }
 
/**
 * ANN Model Loader Implementation
 * */
    ANNModelLoader::ANNModelLoader() {
 
    }
 
    ANNModelLoader::~ANNModelLoader() {
 
    }
 
/**
 * @brief: load the graph and add to Session
 * @param: Session* session, add the graph to the session
 * @param: model_path absolute path to exported protobuf file *.pb
 * */
 
    int ANNModelLoader::load(tensorflow::Session* session, const std::string model_path) {
        //Read the pb file into the graphdef member
        tensorflow::Status status_load = ReadBinaryProto(Env::Default(), model_path, &graphdef);
        if (!status_load.ok()) {
            std::cout << "ERROR: Loading model failed..." << model_path << std::endl;
            std::cout << status_load.ToString() << "\n";
            return -1;
        }
 
        // Add the graph to the session
        tensorflow::Status status_create = session->Create(graphdef);
        if (!status_create.ok()) {
            std::cout << "ERROR: Creating graph in session failed..." << status_create.ToString() << std::endl;
            return -1;
        }
        return 0;
    }
 
/**
 * @brief: Making new prediction
 * @param: Session* session
 * @param: FeatureAdapterBase, common interface of input feature
 * @param: std::string, output_node, tensorname of output node
 * @param: double, prediction values
 * */
 
    int ANNModelLoader::predict(tensorflow::Session* session, const FeatureAdapterBase& input_feature,
                                const std::string output_node, double* prediction) {
        // The session will initialize the outputs
        std::vector<tensorflow::Tensor> outputs;         //shape  [batch_size]
 
        // @input: vector<pair<string, tensor> >, feed_dict
        // @output_node: std::string, name of the output node op, defined in the protobuf file
        tensorflow::Status status = session->Run(input_feature.input, {output_node}, {}, &outputs);
        if (!status.ok()) {
            std::cout << "ERROR: prediction failed..." << status.ToString() << std::endl;
            return -1;
        }
 
        //Fetch output value
        std::cout << "Output tensor size:" << outputs.size() << std::endl;
        for (std::size_t i = 0; i < outputs.size(); i++) {
            std::cout << outputs[i].DebugString();
        }
        std::cout << std::endl;
 
        Tensor t = outputs[0];                   // Fetch the first tensor
        int ndim = t.shape().dims();             // Number of dimensions (unused here)
        auto tmap = t.tensor<float, 2>();        // Tensor shape: [batch_size, target_class_num]
        int output_dim = t.shape().dim_size(1);  // target_class_num, i.e. dimension index 1
        std::vector<double> tout;
 
        // Argmax: Get Final Prediction Label and Probability
        int output_class_id = -1;
        double output_prob = 0.0;
        for (int j = 0; j < output_dim; j++) {
            std::cout << "Class " << j << " prob:" << tmap(0, j) << "," << std::endl;
            if (tmap(0, j) >= output_prob) {
                output_class_id = j;
                output_prob = tmap(0, j);
            }
        }
 
        // Log
        std::cout << "Final class id: " << output_class_id << std::endl;
        std::cout << "Final value is: " << output_prob << std::endl;
 
        (*prediction) = output_prob;   // Assign the probability to prediction
        return 0;
    }
 
}
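To make the index bookkeeping in the code above concrete, here is a plain-Python sketch (the helper names are mine, not TensorFlow's) of what assign() and the argmax loop in predict() do. Note that for the liner.pb regression model the output shape is [1, 1], so the argmax runs over a single value and effectively just returns the predicted number:

```python
def assign(vec):
    """Mimic ANNFeatureAdapter::assign: wrap a 1-D vector as shape [1, ndim]."""
    if not vec:
        raise ValueError("input vector is empty")
    return [list(vec)]  # row-major [1, ndim], like the Tensor fill loop above


def argmax_row(row):
    """Mimic the argmax loop in predict(): return (class_id, prob)."""
    best_id, best_prob = -1, 0.0
    for j, p in enumerate(row):
        if p >= best_prob:       # same >= comparison as the C++ loop
            best_id, best_prob = j, p
    return best_id, best_prob


x = assign([0.1, 0.7, 0.2])
print(argmax_row(x[0]))  # (1, 0.7)
```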

main.cpp

#include <iostream>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "ann_model_loader.h"
 
using namespace tensorflow;
 
int main(int argc, char* argv[]) {
    if (argc != 2) {
        std::cout << "WARNING: Input Args missing" << std::endl;
        return 0;
    }
    std::string model_path = argv[1];  // Model_path *.pb file
 
    // TensorName pre-defined in python file, Need to extract values from tensors
    std::string input_tensor_name = "inputs";
    std::string output_tensor_name = "outputs";
 
    // Create New Session
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) {
        std::cout << status.ToString() << "\n";
        return 0;
    }
 
    // Create prediction demo
    tf_model::ANNModelLoader model;  //Create demo for prediction
    if (0 != model.load(session, model_path)) {
        std::cout << "Error: Model Loading failed..." << std::endl;
        return 0;
    }
 
    // Define Input tensor and Feature Adapter
    // Demo example: [1.0, 1.0, 1.0, 1.0, 1.0] for Iris Example, including bias
    int ndim = 5;
    std::vector<double> input;
    for (int i = 0; i < ndim; i++) {
        input.push_back(1.0);
    }
 
    // New Feature Adapter to convert vector to tensors dictionary
    tf_model::ANNFeatureAdapter input_feat;
    input_feat.assign(input_tensor_name, &input);   //Assign vec<double> to tensor
 
    // Make New Prediction
    double prediction = 0.0;
    if (0 != model.predict(session, input_feat, output_tensor_name, &prediction)) {
        std::cout << "WARNING: Prediction failed..." << std::endl;
    }
    std::cout << "Output Prediction Value:" << prediction << std::endl;
 
    return 0;
}

Next, create a CMakeLists.txt file in the same folder with the following contents:

cmake_minimum_required(VERSION 3.10)
project(cpptensorflow)
set(CMAKE_CXX_STANDARD 11)
link_directories(/home/wz/cpptest/tensorflow/bazel-bin/tensorflow)
include_directories(
        /home/wz/cpptest/tensorflow
        /home/wz/cpptest/tensorflow/bazel-genfiles
        /home/wz/cpptest/tensorflow/bazel-bin/tensorflow
        /usr/local/include/eigen3
)
add_executable(cpptensorflow main.cpp ann_model_loader.h model_loader_base.h ann_model_loader.cpp)
target_link_libraries(cpptensorflow tensorflow_cc tensorflow_framework)

Remember to change all the directories to wherever your own files live. After saving, the project folder should contain main.cpp, ann_model_loader.h, model_loader_base.h, ann_model_loader.cpp, and CMakeLists.txt.

Then open a terminal in that folder and build the project:

mkdir build
cd build
cmake ..
make

The build produces a cpptensorflow executable inside the new build folder; run it from the terminal:

./cpptensorflow /home/wz/PycharmProjects/create_C++_TEST/model/liner.pb

Change the trailing path to wherever your own liner.pb lives, and the program prints the prediction result.
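As a sanity check on the printed value: main.cpp feeds an all-ones vector, so the model should output roughly the sum of the weights it was trained toward (assuming training converged to w ≈ [1, 2, 3, 4, 5] and b ≈ 0):

```python
import numpy as np

true_w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # weights liner.pb was trained toward
x = np.ones(5)                                 # same all-ones input as main.cpp
expected = float(x @ true_w)                   # plus b, which trains to roughly 0
print(expected)  # 15.0
```

So "Output Prediction Value:" should be close to 15.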

At this point you have succeeded!

 

6 Loading the pb File from Qt Creator

(1) Copy the contents of the cc and core folders from "tensorflow/bazel-genfiles/tensorflow/" into "tensorflow/tensorflow/", choosing merge/overwrite.

(2) Go to the tensorflow/bazel-bin/tensorflow folder; it contains the libtensorflow_cc.so produced by the build. If you built with bazel build --config=opt //tensorflow:libtensorflow_cc.so, copy both libtensorflow_cc.so and libtensorflow_framework.so into /usr/local/lib/:

sudo cp libtensorflow_cc.so /usr/local/lib/
sudo cp libtensorflow_framework.so /usr/local/lib/

If you built with bazel build --config=monolithic //tensorflow:libtensorflow_cc.so instead, only libtensorflow_cc.so needs to be copied:

sudo cp libtensorflow_cc.so /usr/local/lib

(3) Next, install Qt Creator and OpenCV (see the reference links at the end for installation guides).

(4) Open Qt Creator, create a new empty project, and add the four files used in the CMake build (ann_model_loader.cpp, ann_model_loader.h, model_loader_base.h, main.cpp).

(5) Then configure the .pro file. Since I need OpenCV, I built with bazel build --config=monolithic //tensorflow:libtensorflow_cc.so; now list the header paths and libraries the project may need in the .pro file. Mine looks like this:

TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle
CONFIG -= qt

SOURCES += \
    main.cpp \
    ann_model_loader.cpp

HEADERS += \
    ann_model_loader.h \
    model_loader_base.h

INCLUDEPATH += /usr/local/include \
/usr/local/include/opencv \
/usr/local/include/opencv2 \
/home/wz/tf_c_install/tensor_source/tensorflow \
/usr/local/include/eigen3



LIBS += /usr/local/lib/libtensorflow_cc.so \
/usr/local/lib/libopencv_calib3d.so \
/usr/local/lib/libopencv_shape.so.3.4.4 \
/usr/local/lib/libopencv_calib3d.so.3.4 \
/usr/local/lib/libopencv_stitching.so \
/usr/local/lib/libopencv_calib3d.so.3.4.4 \
/usr/local/lib/libopencv_stitching.so.3.4 \
/usr/local/lib/libopencv_core.so \
/usr/local/lib/libopencv_stitching.so.3.4.4 \
/usr/local/lib/libopencv_core.so.3.4 \
/usr/local/lib/libopencv_superres.so \
/usr/local/lib/libopencv_core.so.3.4.4 \
/usr/local/lib/libopencv_superres.so.3.4 \
/usr/local/lib/libopencv_dnn.so \
/usr/local/lib/libopencv_superres.so.3.4.4 \
/usr/local/lib/libopencv_dnn.so.3.4 \
/usr/local/lib/libopencv_videoio.so \
/usr/local/lib/libopencv_dnn.so.3.4.4 \
/usr/local/lib/libopencv_videoio.so.3.4 \
/usr/local/lib/libopencv_features2d.so \
/usr/local/lib/libopencv_videoio.so.3.4.4 \
/usr/local/lib/libopencv_features2d.so.3.4 \
/usr/local/lib/libopencv_video.so \
/usr/local/lib/libopencv_features2d.so.3.4.4 \
/usr/local/lib/libopencv_video.so.3.4 \
/usr/local/lib/libopencv_flann.so \
/usr/local/lib/libopencv_video.so.3.4.4 \
/usr/local/lib/libopencv_flann.so.3.4 \
/usr/local/lib/libopencv_videostab.so \
/usr/local/lib/libopencv_flann.so.3.4.4 \
/usr/local/lib/libopencv_videostab.so.3.4 \
/usr/local/lib/libopencv_highgui.so \
/usr/local/lib/libopencv_videostab.so.3.4.4 \
/usr/local/lib/libopencv_highgui.so.3.4 \
/usr/local/lib/libopencv_highgui.so.3.4.4 \
/usr/local/lib/libopencv_imgcodecs.so \
/usr/local/lib/libopencv_imgcodecs.so.3.4 \
/usr/local/lib/libopencv_imgcodecs.so.3.4.4 \
/usr/local/lib/libopencv_imgproc.so \
/usr/local/lib/libopencv_imgproc.so.3.4 \
/usr/local/lib/libopencv_imgproc.so.3.4.4 \
/usr/local/lib/libopencv_ml.so \
/usr/local/lib/libopencv_ml.so.3.4 \
/usr/local/lib/libopencv_ml.so.3.4.4 \
/usr/local/lib/libopencv_objdetect.so \
/usr/local/lib/libopencv_objdetect.so.3.4 \
/usr/local/lib/libopencv_objdetect.so.3.4.4 \
/usr/local/lib/libopencv_photo.so \
/usr/local/lib/libopencv_photo.so.3.4 \
/usr/local/lib/libopencv_photo.so.3.4.4 \
/usr/local/lib/libopencv_shape.so \
/usr/local/lib/libopencv_shape.so.3.4

Remember to change the paths to your own, then build. If the compiler complains about a missing header, don't panic: search for it under the tensorflow/tensorflow/ folder — it can usually be found there — then add its directory to INCLUDEPATH in the .pro file. At that point everything should work. Done!

(My next post goes further, walking through the code for calling pb files from TensorFlow's object detection module, for anyone interested.)

Finally, some important reference links:

TensorFlow C++ interface: train the model in Python, call it from C++

TensorFlow 1.4 C++ build and API usage

Installing and using the TensorFlow C++ API

Resolving the conflict between TensorFlow and OpenCV

Installing/updating/upgrading CMake to 3.9.1 on Ubuntu 16.04

 
