Compiling the TensorFlow C++ Library on Ubuntu 18.04 and Using It from Qt

Copyright notice: this is an original article by the blogger, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/qq_29462849/article/details/84986592

This post draws on:
https://blog.csdn.net/qq_25109263/article/details/81285952
https://blog.csdn.net/dragonchow123/article/details/80682787

Introduction

Recently I needed to deploy a model. The whole system is C++-based, but models are usually trained in Python, so a conversion step is needed. TensorFlow provides a C++ API, but it has to be compiled from source. This post records the process; the tutorials found online have a few pitfalls.

Pipeline

The complete workflow: train in Keras -> "my_model.h5" -> convert to the pb format -> "my_model.pb" -> load and run the model from C++ with the TensorFlow API.

Required environment

Machine configuration: GTX 1080 Ti, Ubuntu 18.04

(1) A Python environment with TensorFlow and Keras; creating it with Anaconda is recommended

(2) CUDA 9.1 and cuDNN 7.1.2 (installing CUDA and cuDNN is outside the scope of this post; please look it up yourself)

(3) OpenCV 3.4 (installing OpenCV is outside the scope of this post; please look it up yourself)

(4) The TensorFlow C++ interface, compiled as described next

Building and installing the TensorFlow C++ interface (this post builds TensorFlow C++ 1.7)

1. Set up the third-party dependencies needed by the C++ version of TensorFlow

(1) Protobuf!!! This is the most important dependency: its version is tightly tied to the TensorFlow version, and with the wrong version nothing will work. I used 3.5.0.

First download protobuf-cpp-3.5.0.tar.gz from
https://github.com/google/protobuf/releases
Extract it to get a protobuf-3.5.0 folder, then:
cd protobuf-3.5.0
./configure
sudo make -j8
make check -j8
sudo make install
sudo ldconfig
These steps build and install Protobuf from source.
If you run into problems, check the official Protobuf build and installation guide:
https://github.com/google/protobuf/blob/master/src/README.md
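
If you want to confirm which Protobuf headers the compiler actually picks up after the install, a minimal check such as the one below can help. This is my own sketch, not part of the original tutorial; it assumes the default /usr/local install prefix and only prints the GOOGLE_PROTOBUF_VERSION macro (3005000 corresponds to 3.5.0).

// check_protobuf.cpp -- sanity check for the Protobuf install (sketch, assumed /usr/local prefix)
// Build: g++ -std=c++11 check_protobuf.cpp -o check_protobuf
#include <iostream>
#include <google/protobuf/stubs/common.h>

int main() {
    // GOOGLE_PROTOBUF_VERSION is an integer such as 3005000 for protobuf 3.5.0
    std::cout << "Protobuf header version: " << GOOGLE_PROTOBUF_VERSION << std::endl;
    return 0;
}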

(2) Eigen, a C++ matrix arithmetic library. Just download the archive and extract it to a path of your choice.

First download the Eigen archive
wget http://bitbucket.org/eigen/eigen/get/3.3.4.tar.bz2
Extract it, rename the folder to eigen3, move it to a directory of your choice, then build and install:

cd eigen3
mkdir build
cd build
cmake ..
make
sudo make install
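
To make sure the Eigen install is visible to the compiler, a tiny test program can be compiled against it. This is my own sketch, not from the original post; it assumes the headers ended up under /usr/local/include/eigen3.

// eigen_check.cpp -- sanity check for the Eigen install (sketch, assumed include path)
// Build: g++ -std=c++11 -I/usr/local/include/eigen3 eigen_check.cpp -o eigen_check
#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::Matrix2f a;
    a << 1, 2,
         3, 4;
    std::cout << "a * a =\n" << a * a << std::endl;   // simple matrix product
    std::cout << "Eigen version: " << EIGEN_WORLD_VERSION << "."
              << EIGEN_MAJOR_VERSION << "." << EIGEN_MINOR_VERSION << std::endl;
    return 0;
}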

2. Build and install TensorFlow

(1) Download and install the build tool Bazel

First download the Bazel installer from
https://github.com/bazelbuild/bazel/releases (I used bazel-0.10.1-installer-linux-x86_64.sh)
Then run the installer
./bazel-0.10.1-installer-linux-x86_64.sh

Note: do not use a Bazel version that is too new, or the build will fail

(2) Build and install TensorFlow; my source tree is at ~/tensorflow

# First clone the TensorFlow source
git clone https://github.com/tensorflow/tensorflow.git

# Enter the tensorflow folder
cd tensorflow

# Switch to the 1.7 branch:
git checkout r1.7

# Run configure
sudo ./configure
This step asks for your Python path and walks through a series of y/N questions.
Suggested answers:

    Use the Anaconda Python path: /home/oliver/anaconda3/bin/python
    For the y/N questions, answer y to the first one and N to the ones after it
    Answer y to CUDA; the cuDNN version is then detected automatically
    Keep the default 1.3 for NCCL
    For everything else, answer N or accept the default

See the TensorFlow build-from-source guide for details: https://www.tensorflow.org/install/install_sources

Another useful reference: https://blog.csdn.net/qq_37674858/article/details/81095101

# Build with bazel
bazel build --config=opt //tensorflow:libtensorflow_cc.so                 # CPU-only build (no GPU)
bazel build --config=opt --config=cuda //tensorflow:libtensorflow_cc.so   # GPU build


....a long wait while everything compiles, roughly 20 minutes
# Output similar to the following indicates a successful build:
....
Target //tensorflow:libtensorflow_cc.so up-to-date:
  bazel-bin/tensorflow/libtensorflow_cc.so
INFO: Elapsed time: 1192.883s, Critical Path: 174.02s
INFO: 654 processes: 654 local.
INFO: Build completed successfully, 656 total actions

Then, back in the tensorflow directory, run:

./tensorflow/contrib/makefile/download_dependencies.sh
# When it finishes, a downloads folder appears inside the makefile folder.

Then:

In tensorflow/contrib/makefile, run build_all_linux.sh; once it succeeds, a gen folder appears.

After that, copy the contents of the cc and core folders under "tensorflow/bazel-genfiles/tensorflow/" into "tensorflow/tensorflow/", overwriting the existing files. This step copies the generated .pb.h and .pb.cc files. Once this is done:

# Then copy the required .h header files and the compiled .so shared libraries to the following paths:
sudo mkdir /usr/local/include/tf
sudo cp -r bazel-genfiles/ /usr/local/include/tf/
sudo cp -r tensorflow /usr/local/include/tf/
sudo cp -r third_party /usr/local/include/tf/
sudo cp bazel-bin/tensorflow/libtensorflow_cc.so /usr/local/lib/
sudo cp bazel-bin/tensorflow/libtensorflow_framework.so /usr/local/lib

OK, with that, the TensorFlow C++ interface is ready!
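
Before moving on, it is worth checking that the headers and libraries just copied actually compile and link. The following is a minimal sketch of my own (not from the original post); the include and library paths in the build line are assumptions based on the copy commands above and may need adjusting on your machine.

// tf_link_test.cpp -- minimal link test for the TensorFlow C++ interface (sketch)
// Example build line (paths assumed from the copy steps above; adjust as needed):
//   g++ -std=c++11 tf_link_test.cpp -o tf_link_test \
//       -I/usr/local/include/tf -I/usr/local/include/tf/bazel-genfiles \
//       -I/usr/local/include/eigen3 \
//       -I ~/tensorflow/tensorflow/contrib/makefile/downloads/nsync/public \
//       -L/usr/local/lib -ltensorflow_cc -ltensorflow_framework
#include <iostream>
#include "tensorflow/core/public/session.h"

int main() {
    tensorflow::Session* session = nullptr;
    tensorflow::Status status =
        tensorflow::NewSession(tensorflow::SessionOptions(), &session);
    if (!status.ok()) {
        std::cout << "Failed to create a session: " << status.ToString() << std::endl;
        return -1;
    }
    std::cout << "TensorFlow C++ session created successfully." << std::endl;
    session->Close();
    delete session;
    return 0;
}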

Demonstrating the whole pipeline

1. On the Python side, train an MNIST handwritten-digit demo with Keras. The code is below; training produces a model file named my_model_ep20.h5.

from tensorflow.examples.tutorials.mnist import input_data
from keras.models import *
from keras.layers import *
import numpy as np
 
# os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
 
# Load the MNIST dataset
mnist=input_data.read_data_sets("MNIST_data/",one_hot=True)
train_X=mnist.train.images
train_Y=mnist.train.labels
test_X=mnist.test.images
test_Y=mnist.test.labels
 
train_X=train_X.reshape((55000,28,28,1))
test_X=test_X.reshape((test_X.shape[0],28,28,1))
 
print("type of train_X:",type(train_X))
print("size of train_X:",np.shape(train_X))
print("train_X:",train_X)
 
print("type of train_Y:",type(train_Y))
print("size of train_Y:",np.shape(train_Y))
print("train_Y:",train_Y)
 
print("num of test:",test_X.shape[0])
 
 
# Define the model architecture
model=Sequential()
 
model.add(Conv2D(32,(3,3),activation='relu',input_shape=(28,28,1),padding="same"))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.5))
 
 
model.add(Conv2D(64, (3, 3), activation='relu',padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
 
model.add(Conv2D(128, (3, 3), activation='relu',padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
 
model.add(Flatten())
model.add(Dense(625,activation="relu"))
model.add(Dropout(0.5))
 
model.add(Dense(10,activation='softmax'))
 
model.compile(loss='categorical_crossentropy',optimizer='adadelta',metrics=['accuracy'])
 
# Train the model
epochs=20
model.fit(train_X, train_Y, batch_size=32, epochs=epochs)
 
# Evaluate the model's accuracy on the test set
accuracy=model.evaluate(test_X,test_Y,batch_size=20)
print('\nTest accuracy:',accuracy[1])
 
save_model(model,'my_model_ep{}.h5'.format(epochs))

2. Convert the my_model_ep20.h5 model to my_model_ep20.pb with the script h5_to_pb.py:

from keras.models import load_model
import tensorflow as tf
from keras import backend as K
from tensorflow.python.framework import graph_io
 
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph
 
 
"""---------------------------------- Paths -----------------------------------"""
epochs=20
h5_model_path='./my_model_ep{}.h5'.format(epochs)
output_path='.'
pb_model_name='my_model_ep{}.pb'.format(epochs)
 
 
"""------------------------------ Load the Keras model ------------------------------"""
K.set_learning_phase(0)
net_model = load_model(h5_model_path)
 
print('input is :', net_model.input.name)
print ('output is:', net_model.output.name)
 
"""------------------------------ Save in .pb format ------------------------------"""
sess = K.get_session()
frozen_graph = freeze_session(K.get_session(), output_names=[net_model.output.op.name])
graph_io.write_graph(frozen_graph, output_path, pb_model_name, as_text=False)

3. Check whether the converted model behaves the same as the original; if not, the conversion failed. To test my_model_ep20.h5, I wrote load_h5_test.py:

import os
import cv2
import numpy as np
from keras.models import load_model
 
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
 
"""--------- Load the trained model ---------"""
new_model = load_model('my_model_ep20.h5')
 
"""--------- Load a test image with OpenCV -----"""
# Read the image
src = cv2.imread('1.png')
cv2.imshow("test picture", src)
 
# Convert the image to a 28x28 grayscale image
src = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
dst = cv2.resize(src, (28, 28))
dst=dst.astype(np.float32)
 
# Turn the grayscale image into a 1x28x28x1 array the network accepts
picture=1-dst/255
picture=np.reshape(picture,(1,28,28,1))
 
# Run the prediction
y = new_model.predict(picture)
print("softmax:")
for i,prob in enumerate(y[0]):
    print("class{},Prob:{}".format(i,prob))
result = np.argmax(y)
print("Predicted digit:", result)
print("Probability:", np.max(y[0]))
cv2.waitKey(20170731)


softmax:
class0,Prob:7.5962966548104305e-06
class1,Prob:0.9997051358222961
class2,Prob:1.4485914334727568e-06
class3,Prob:4.844396812586638e-07
class4,Prob:0.00018645377713255584
class5,Prob:1.3495387065631803e-05
class6,Prob:3.066514909733087e-05
class7,Prob:2.5641845695645316e-06
class8,Prob:4.338693543104455e-05
class9,Prob:8.665668246976566e-06
Predicted digit: 1
Probability: 0.99970514

To test the my_model_ep20.pb model, I wrote load_pb_test.py:

import tensorflow as tf
import numpy as np
import cv2
 
"""----------------------------------------------- Recognition function -----------------------------------------"""
def recognize(jpg_path, pb_file_path):
    with tf.Graph().as_default():
        output_graph_def = tf.GraphDef()
 
        # Load the .pb model
        with open(pb_file_path, "rb") as f:
            output_graph_def.ParseFromString(f.read())
            tensors = tf.import_graph_def(output_graph_def, name="")
            print("tensors:",tensors)
 
        # Run a forward pass inside a session
        with tf.Session() as sess:
            init = tf.global_variables_initializer()
            sess.run(init)
 
            op = sess.graph.get_operations()
 
            # Print the operations in the graph
            for i,m in enumerate(op):
                print('op{}:'.format(i),m.values())
 
            input_x = sess.graph.get_tensor_by_name("conv2d_1_input:0")  # see the input.name printed by the previous script
            print("input_X:",input_x)
 
            out_softmax = sess.graph.get_tensor_by_name("dense_2/Softmax:0")  # see the output.name printed by the previous script
            print("Output:",out_softmax)
 
            # Read the image
            img = cv2.imread(jpg_path, 0)
            img=cv2.resize(img,(28,28))
            img=img.astype(np.float32)
            img=1-img/255
            # img=np.reshape(img,(1,28,28,1))
            print("img data type:",img.dtype)
 
            # Print the pixel values
            for row in range(28):
                for col in range(28):
                    if col!=27:
                        print(img[row][col],' ',end='')
                    else:
                        print(img[row][col])
 
            img_out_softmax = sess.run(out_softmax,
                                       feed_dict={input_x: np.reshape(img,(1,28,28,1))})
 
            print("img_out_softmax:", img_out_softmax)
            for i,prob in enumerate(img_out_softmax[0]):
                print('class {} prob:{}'.format(i,prob))
            prediction_labels = np.argmax(img_out_softmax, axis=1)
            print("Final class is:",prediction_labels)
            print("prob of label:",img_out_softmax[0,prediction_labels])
 
 
pb_path = './my_model_ep20.pb'
img = '1.png'
recognize(img, pb_path)

Output:

softmax:
class0,Prob:7.5963043855153956e-06
class1,Prob:0.9997051358222961
class2,Prob:1.4485929114016471e-06
class3,Prob:4.844415002480673e-07
class4,Prob:0.00018645377713255584
class5,Prob:1.349542617390398e-05
class6,Prob:3.066514909733087e-05
class7,Prob:2.564179567343672e-06
class8,Prob:4.3386979086790234e-05
class9,Prob:8.665676432428882e-06
Predicted digit: 1
Probability: 0.99970514

The results are identical to those of the .h5 model, so the conversion is fine.

4. Call the my_model_ep20.pb model from C++. I wrote a hello.cpp; the code is below. Note that input_tensor_name and output_tensor_name are critical: they are the model's input and output interfaces. How do you determine these two names? They depend on the tensor names in your model definition; run load_pb_test.py, look at the names it prints, and you will see what to use. One more point: the network input must be a Tensor, so how does the test image become one? OpenCV's Mat type offers a way in: define a Mat bound to the Tensor's buffer and write data into that Mat, and the corresponding Tensor values change with it.
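
To make the trick concrete before the full listing, here is a distilled, self-contained sketch of just the Mat/Tensor binding (my own condensation of the CVMat_to_Tensor function in main.cpp below, using a dummy 28x28 image so it compiles on its own):

// mat_to_tensor_sketch.cpp -- the Mat/Tensor binding idea in isolation (sketch)
#include <iostream>
#include <opencv2/opencv.hpp>
#include "tensorflow/core/framework/tensor.h"

int main() {
    // The tensor that will be fed to the graph: shape {1, 28, 28, 1}, float32
    tensorflow::Tensor input(tensorflow::DT_FLOAT,
                             tensorflow::TensorShape({1, 28, 28, 1}));

    // A Mat that shares the tensor's buffer: writing into it fills the tensor
    float* buffer = input.flat<float>().data();
    cv::Mat view(28, 28, CV_32FC1, buffer);

    // Any preprocessed 28x28 float image can now be copied straight into the tensor
    cv::Mat img = cv::Mat::ones(28, 28, CV_32FC1) * 0.5f;  // dummy image
    img.copyTo(view);

    std::cout << input.DebugString() << std::endl;
    return 0;
}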

To call the .pb file from Qt, first configure the TensorFlow and OpenCV environment in the .pro file.

QT += core
QT -= gui

CONFIG += c++11

TARGET = test
CONFIG += console
CONFIG -= app_bundle

TEMPLATE = app

SOURCES += main.cpp

# The following define makes your compiler emit warnings if you use
# any feature of Qt which has been marked deprecated (the exact warnings
# depend on your compiler). Please consult the documentation of the
# deprecated API in order to know how to port your code away from it.
DEFINES += QT_DEPRECATED_WARNINGS

# You can also make your code fail to compile if you use deprecated APIs.
# In order to do so, uncomment the following line.
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

# OpenCV / TensorFlow configuration
INCLUDEPATH += /usr/local/include\
/usr/local/include/opencv\
/usr/local/include/opencv2\
/home/oliver/darknet/include\
/usr/local/include/tf


LIBS += /usr/local/lib/libopencv_highgui.so \
        /usr/local/lib/libopencv_core.so    \
        /usr/local/lib/libopencv_imgproc.so \
        /usr/local/lib/libopencv_imgcodecs.so\
        /home/oliver/darknet/libdarknet.so\
        /usr/local/lib/libtensorflow_cc.so\
        /usr/local/lib/libtensorflow_framework.so

Then main.cpp:

#include <fstream>
#include <utility>
#include <Eigen/Core>
#include <Eigen/Dense>
#include <iostream>
 
#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/image_ops.h"
#include "tensorflow/cc/ops/standard_ops.h"
 
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
 
#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/graph/graph_def_builder.h"
 
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/core/threadpool.h"
#include "tensorflow/core/lib/io/path.h"
#include "tensorflow/core/lib/strings/stringprintf.h"
 
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/util/command_line_flags.h"
 
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/types.h"
 
#include "opencv2/opencv.hpp"
 
using namespace tensorflow::ops;
using namespace tensorflow;
using namespace std;
using namespace cv;
using tensorflow::Flag;
using tensorflow::Tensor;
using tensorflow::Status;
using tensorflow::string;
using tensorflow::int32 ;
 
// A function that converts OpenCV Mat data into a Tensor. In Python, the matrix read by
// cv2.imread only needs an np.reshape before it can be fed to the network, i.e. the array
// already behaves like a tensor. In the C++ version, the network's input must also be a
// Tensor, so when the image is read with OpenCV as a Mat we need a way to turn that Mat
// into a Tensor.
void CVMat_to_Tensor(Mat img,Tensor* output_tensor,int input_rows,int input_cols)
{
    //imshow("input image",img);
    // Resize the image to the network's input size
    resize(img,img,cv::Size(input_cols,input_rows));
    //imshow("resized image",img);
 
    // Normalize
    img.convertTo(img,CV_32FC1);
    img=1-img/255;
 
    // Get a pointer to the tensor's underlying data
    float *p = output_tensor->flat<float>().data();
 
    // Create a Mat bound to that pointer; changing this Mat changes the tensor's values
    cv::Mat tempMat(input_rows, input_cols, CV_32FC1, p);
    img.convertTo(tempMat,CV_32FC1);
 
//    waitKey(0);
 
}
 
int main(int argc, char** argv )
{
    /*-------------------------------- Key configuration ------------------------------*/
    string model_path="../my_model_ep20.pb";
    string image_path="../1.png";
    int input_height =28;
    int input_width=28;
    string input_tensor_name="conv2d_1_input";
    string output_tensor_name="dense_2/Softmax";
 
    /*-------------------------------- Create the session ------------------------------*/
    Session* session;
    Status status = NewSession(SessionOptions(), &session);  // create a new Session
 
    /*-------------------------------- Read the model from the .pb file --------------------------------*/
    GraphDef graphdef; //Graph Definition for current model
 
    Status status_load = ReadBinaryProto(Env::Default(), model_path, &graphdef);  // read the graph definition from the .pb file
    if (!status_load.ok()) {
        cout << "ERROR: Loading model failed..." << model_path << std::endl;
        cout << status_load.ToString() << "\n";
        return -1;
    }
    Status status_create = session->Create(graphdef);  // import the graph into the Session
    if (!status_create.ok()) {
        cout << "ERROR: Creating graph in session failed..." << status_create.ToString() << std::endl;
        return -1;
    }
    cout << "<----Successfully created session and load graph.------->"<< endl;
 
    /*--------------------------------- Load the test image -------------------------------------*/
    cout<<endl<<"<------------loading test_image-------------->"<<endl;
    Mat img=imread(image_path,0);
    if(img.empty())
    {
        cout<<"can't open the image!!!!!!!"<<endl;
        return -1;
    }
 
    // Create a tensor to serve as the network's input
    Tensor resized_tensor(DT_FLOAT, TensorShape({1,input_height,input_width,1}));
 
    // Copy the OpenCV Mat image into the tensor
    CVMat_to_Tensor(img,&resized_tensor,input_height,input_width);
 
    cout << resized_tensor.DebugString()<<endl;
 
    /*----------------------------------- Run the model on the test image -----------------------------------------*/
    cout<<endl<<"<-------------Running the model with test_image--------------->"<<endl;
    // Forward pass; the outputs always come back as a vector of tensors
    vector<tensorflow::Tensor> outputs;
    string output_node = output_tensor_name;
    Status status_run = session->Run({{input_tensor_name, resized_tensor}}, {output_node}, {}, &outputs);
 
    if (!status_run.ok()) {
        cout << "ERROR: RUN failed..."  << std::endl;
        cout << status_run.ToString() << "\n";
        return -1;
    }
    // Extract the output values
    cout << "Output tensor size:" << outputs.size() << std::endl;
    for (std::size_t i = 0; i < outputs.size(); i++) {
        cout << outputs[i].DebugString()<<endl;
    }
 
    Tensor t = outputs[0];                   // Fetch the first tensor
    auto tmap = t.tensor<float, 2>();        // Tensor Shape: [batch_size, target_class_num]
    int output_dim = t.shape().dim_size(1);  // Get the target_class_num from 1st dimension
 
    // Argmax: Get Final Prediction Label and Probability
    int output_class_id = -1;
    double output_prob = 0.0;
    for (int j = 0; j < output_dim; j++)
    {
        cout << "Class " << j << " prob:" << tmap(0, j) << "," << std::endl;
        if (tmap(0, j) >= output_prob) {
            output_class_id = j;
            output_prob = tmap(0, j);
        }
    }
 
    // Print the result
    cout << "Final class id: " << output_class_id << std::endl;
    cout << "Final class prob: " << output_prob << std::endl;
 
    return 0;
}
 

Output: (screenshot in the original post)
Note: if you hit a "nsync_cv.h" or "nsync_mu.h" not found error while building or running, the files do exist: they are at "tensorflow/contrib/makefile/downloads/nsync/public/nsync_cv.h" and "tensorflow/contrib/makefile/downloads/nsync/public/nsync_mu.h". Point the header search path there and the error goes away.
