Compiling the TensorFlow GPU C++ API on Linux (Ubuntu)

Steps:

  1. Install Bazel
    Note: be sure to download a compatible version, or all sorts of baffling errors will appear. Since the TensorFlow version compiled here is 1.13.1, Bazel 0.19.2 is used (other versions caused problems); an install sketch follows the clone commands below.
  2. Download the TensorFlow source code

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout -b My_v1.13.1 v1.13.1 # create a new branch from the v1.13.1 tag
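
For step 1 above, one common way to get Bazel 0.19.2 is the installer script published on Bazel's GitHub releases page (a sketch, assuming an x86_64 machine):

# assumption: x86_64 Linux and the official installer script from Bazel's GitHub releases
wget https://github.com/bazelbuild/bazel/releases/download/0.19.2/bazel-0.19.2-installer-linux-x86_64.sh
chmod +x bazel-0.19.2-installer-linux-x86_64.sh
./bazel-0.19.2-installer-linux-x86_64.sh --user   # installs under $HOME/bin; make sure that is on PATH
bazel version                                     # should report 0.19.2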

  3. Run ./configure

./configure

A sample configuration session for reference:

./configure
You have bazel 0.19.2 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7

Found possible Python library paths:
  /usr/local/lib/python3.6/dist-packages
  /usr/lib/python3.6/dist-packages
Please input the desired Python library path to use.  Default is [/usr/lib/python3.6/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: n
No jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: n
No Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]:n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]:n
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]:n
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: Y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]: 10.0

Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.5.0

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Do you wish to build TensorFlow with TensorRT support? [y/N]:
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 2.3.7

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your
build time and binary size. [Default is: 7.5] 7.5

Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]:n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
Configuration finished
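
As an alternative to answering the prompts interactively, the same choices can be fed to ./configure through environment variables (a sketch; the variable names below follow TF 1.x's configure.py and should be double-checked against your checkout, and any options left unset will still be prompted for):

# assumption: variable names as used by TF 1.x configure.py
export PYTHON_BIN_PATH=/usr/bin/python2.7
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export TF_CUDNN_VERSION=7.5.0
export TF_CUDA_COMPUTE_CAPABILITIES=7.5
export TF_NEED_TENSORRT=0
./configure
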
  4. Build the .so libraries

bazel build --config=opt --config=cuda --config=monolithic tensorflow:libtensorflow_cc.so

bazel build --config=opt --config=cuda --config=monolithic tensorflow:libtensorflow_framework.so
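
If both builds succeed, the libraries land under bazel-bin; a quick sanity check:

ls -lh bazel-bin/tensorflow/libtensorflow_cc.so bazel-bin/tensorflow/libtensorflow_framework.so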

Note: without --config=monolithic, the resulting libraries cause cv::imread() to return an empty image; with it, you only need to link libtensorflow_cc.so and no longer need libtensorflow_framework.so.
Reference: opencv cannot read any image with tensorflow
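
To run programs linked against these freshly built libraries without installing them system-wide, one option (an assumption about your setup, not something the build requires) is to point the dynamic linker at bazel-bin:

# run from the tensorflow source root; adjust the path if you copy the .so files elsewhere
export LD_LIBRARY_PATH=$(pwd)/bazel-bin/tensorflow:$LD_LIBRARY_PATH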

  5. Build the other dependencies
    In tensorflow/contrib/makefile, run ./build_all_linux.sh; on success a gen folder appears there (consolidated commands are sketched after the notes below).
    Note: if you see the following error:

./autogen.sh: 37: ./autogen.sh: autoreconf: not found

install the missing dependencies with sudo apt-get install autoconf automake libtool,
then re-run ./build_all_linux.sh.
If it later fails again with:

./tensorflow/core/lib/io/zlib_outputbuffer.h:19:18: fatal error: zlib.h: No such file or directory

then install zlib1g-dev:

sudo apt-get install zlib1g-dev

Finally, run ./build_all_linux.sh once more.
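
Putting step 5 together, a minimal sketch that installs the prerequisites mentioned above before the first run:

sudo apt-get install autoconf automake libtool zlib1g-dev
cd tensorflow/contrib/makefile
./build_all_linux.sh
ls gen   # the gen/ directory appears here on success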


If you need the Eigen library, go into tensorflow/contrib/makefile/downloads/eigen and run:

mkdir build
cd build
cmake ..
make
sudo make install

After installation, an eigen3 folder appears under /usr/local/include.
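
A quick check that the headers landed where expected (Eigen is header-only, so there is no library to look for; the Macros.h path below assumes the usual Eigen install layout):

ls /usr/local/include/eigen3/Eigen/Dense
grep -m1 EIGEN_WORLD_VERSION /usr/local/include/eigen3/Eigen/src/Core/util/Macros.h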

  6. Organize the library files (lib) and header files (include); a sample link command follows the include list below.
    lib:
    /xx/tensorflow/bazel-bin/tensorflow/libtensorflow_framework.so
    /xx/tensorflow/bazel-bin/tensorflow/libtensorflow_cc.so
    /xx/tensorflow/tensorflow/contrib/makefile/gen/protobuf/lib/libprotobuf.a

include:
/xx/tensorflow/tensorflow/contrib/makefile/gen/protobuf/include
/xx/tensorflow/tensorflow/contrib/makefile/gen/host_obj
/xx/tensorflow/tensorflow/contrib/makefile/downloads/nsync/public
/xx/tensorflow/tensorflow/contrib/makefile/downloads/eigen
/xx/tensorflow/tensorflow/contrib/makefile/downloads/absl
/xx/tensorflow/bazel-genfiles
/xx/tensorflow/tensorflow (in practice, mainly the cc and core subdirectories under it are used)
/xx/tensorflow/third_party
/usr/local/include/eigen3
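
As a hypothetical usage example (the file name main.cpp and the exact flag set are assumptions; the paths mirror the lists above, with /xx standing for your checkout location), a program using the C++ API can be compiled roughly like this:

TF_ROOT=/xx/tensorflow
g++ -std=c++11 main.cpp -o tf_test \
    -I${TF_ROOT} \
    -I${TF_ROOT}/bazel-genfiles \
    -I${TF_ROOT}/tensorflow/contrib/makefile/gen/protobuf/include \
    -I${TF_ROOT}/tensorflow/contrib/makefile/downloads/nsync/public \
    -I${TF_ROOT}/tensorflow/contrib/makefile/downloads/absl \
    -I${TF_ROOT}/tensorflow/contrib/makefile/downloads/eigen \
    -I/usr/local/include/eigen3 \
    -L${TF_ROOT}/bazel-bin/tensorflow \
    -ltensorflow_cc \
    -Wl,-rpath=${TF_ROOT}/bazel-bin/tensorflow
# add -ltensorflow_framework if the libraries were built without --config=monolithic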

  7. References:
    https://blog.csdn.net/qq_25109263/article/details/81285952
    https://blog.csdn.net/wd1603926823/article/details/92843830
    https://www.cnblogs.com/hrlnw/p/7007648.html
    https://tensorflow.google.cn/install/source
    https://www.jianshu.com/p/a4c103820bad
    https://www.cnblogs.com/seniusen/p/9756302.html
    https://www.jianshu.com/p/0bf9f1d85f4c
    https://www.jianshu.com/p/d46596558640
    https://blog.csdn.net/dragonchow123/article/details/80682787