Win10 + VS2015 + CUDA10: Building the TensorFlow GPU C++ API

1. Basic environment

Anaconda3: Python 3.6.5
CUDA: 10.0
cuDNN: 7.5
TensorFlow: 1.13.1
VS: 2015
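
If you want to double-check the locally installed CUDA toolkit version against the list above, a quick (optional) check is:

nvcc --version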

2. Install VS2015

VS download: Visual Studio
VS2017 has not been tested, but it should work much the same.

3. Install MSYS2 and Bazel

MSYS2 download: MSYS2 installer
Bazel download: Bazel

Rename the downloaded Bazel binary to bazel.exe and put it into msys64\usr\bin (under your own installation directory; for example, mine is E:\SoftEnv\Msys2\usr\bin).
Add the MSYS2 and Bazel directories to the Path environment variable (screenshot of my Path entries omitted here).
Next, run the following command in cmd.exe: pacman -S git patch unzip (a proxy may be needed).
Finally, verify that Bazel is installed correctly with the command: bazel version

4. Allow PowerShell to execute .ps1 files

For the method, see the post "PowerShell让系统可以执行.ps1文件" (letting the system execute .ps1 files).
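
If you only need a quick way to enable this (one common approach; the linked post describes it in more detail), open PowerShell as administrator and set the execution policy:

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

With RemoteSigned, locally created scripts (including files obtained via git clone, which carry no internet zone marker) are allowed to run, while scripts downloaded through a browser must be signed or unblocked first.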

5. Download the build script and the TensorFlow source

Download the build script together with the TensorFlow source (v1.13.1):

git clone --recursive https://github.com/gulingfengze/tensorflow-windows-build-script.git

After the download completes, extract the source.zip archive; the resulting folder contains TensorFlow v1.13.1.

Note: to build a different version, delete everything under source, then download the TensorFlow source and check out the corresponding release tag. TensorFlow source download:

git clone --recursive https://github.com/tensorflow/tensorflow.git
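
For example (assuming you want v1.12.0; substitute whatever tag you need), switch to a release tag with:

cd tensorflow
git checkout v1.12.0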

6. Modify build.ps1 and build

Two places need to be modified, as shown below:

if (!(CheckInstalled pacman)) {
    $version = askForVersion "20180531.0.0"
    choco install msys2 --version $version --params "/NoUpdate /InstallDir:E:\SoftEnv\Msys2"  # change to your own installation path
}
$ENV:Path += ";E:\SoftEnv\Msys2\usr\bin"  # change to your own path
$ENV:BAZEL_SH = "E:\SoftEnv\Msys2\usr\bin\bash.exe"  # change to your own path

if (!(CheckInstalled python "3.6.5")) {  # change to your own version number
    $version = askForVersion "3.6.5"
    choco install python --version $version --params "'TARGETDIR:C:/Python36'"
}

In the tensorflow-windows-build-script directory, run the command python -m venv venv to create a virtual environment. Then open Windows PowerShell as administrator and start the build:

$parameterString = "--config=opt --config=cuda --define=no_tensorflow_py_deps=true --copt=-nvcc_options=disable-warnings //tensorflow:libtensorflow_cc.so --verbose_failures"
.\build.ps1 `
    -BazelBuildParameters $parameterString `
    -BuildCppAPI -ReserveSource -ReserveVenv

Press Enter and choose a TensorFlow version; the default is v1.13.1.
During the build, the configuration is checked and several settings have to be chosen interactively. An example configuration:

Do you wish to build TensorFlow with XLA JIT support? Choose n
Do you wish to build TensorFlow with ROCm support? Choose n
Please input the desired Python library path to use... Just press Enter (the local site-packages path is detected automatically)
Do you wish to build TensorFlow with CUDA support? Choose y
Please specify the CUDA SDK version you want to use... Just press Enter (the local CUDA SDK version is detected automatically)
Please specify the location where CUDA 10.0 toolkit is installed... Just press Enter (the local CUDA toolkit is detected automatically)
Please specify the cuDNN version you want to use... Enter 7.5 (matching the local cuDNN version)
Please specify the location where cuDNN 7 library is installed... Just press Enter (the local cuDNN install path is detected automatically)
Please specify a list of comma-separated Cuda compute capabilities you want to build with. You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0] Enter 7.5 (my GPU is an RTX 2070)
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]: /arch:AVX2

For the remaining options, pressing Enter is usually fine (adjust as needed). Normally no errors occur up to this point.

All that is left now is to wait; the build can take quite a long time.
With a bit of luck, the following files will appear under tensorflow-windows-build-script\source\bazel-bin\tensorflow:

liblibtensorflow_cc.so.exp
liblibtensorflow_cc.so.ifso
libtensorflow_cc.so
libtensorflow_cc.so.runfiles_manifest
libtensorflow_cc.so-2.params

In fact, libtensorflow_cc.so and liblibtensorflow_cc.so.ifso are the build artifacts we need: libtensorflow_cc.so corresponds to the DLL, and liblibtensorflow_cc.so.ifso corresponds to the import library. Simply rename libtensorflow_cc.so to tensorflow_cc.dll and liblibtensorflow_cc.so.ifso to tensorflow_cc.lib. The next step is to organize the dll, include, and lib files; for details see the blog post win10 + bazel-0.20.0 + tensorflow-1.13.1 编译tensorflow GPU版本的C++库. Many thanks to that author for the detailed write-up.
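
As a sketch of the rename step in PowerShell (the destination folder D:\tf_cpp\lib is only an assumption; adjust it to wherever you collect your include/lib files), run inside source\bazel-bin\tensorflow:

Copy-Item .\libtensorflow_cc.so D:\tf_cpp\lib\tensorflow_cc.dll
Copy-Item .\liblibtensorflow_cc.so.ifso D:\tf_cpp\lib\tensorflow_cc.lib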

For a demo project, refer to the blog post above; a minimal smoke test is also sketched below. One more point worth emphasizing: if you hit unresolved-symbol linker errors when building against the library, copy the missing symbols into the tf_exported_symbols_msvc.lds file under source\tensorflow and then rebuild the dynamic library (repeat step 6 above).
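
As a minimal smoke test of the freshly built library (a sketch, assuming the TensorFlow 1.13 headers and their third-party dependencies such as protobuf and Eigen are on the include path, and that the project links against tensorflow_cc.lib with tensorflow_cc.dll reachable at runtime), the following program only creates a session and prints the version:

// Minimal smoke test: create a TensorFlow session and print the version.
#include <iostream>

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/public/version.h"

int main() {
    tensorflow::Session* session = nullptr;
    tensorflow::Status status =
        tensorflow::NewSession(tensorflow::SessionOptions(), &session);
    if (!status.ok()) {
        // A failure here usually means the DLL could not be loaded or the GPU setup is broken.
        std::cerr << "NewSession failed: " << status.ToString() << std::endl;
        return 1;
    }
    std::cout << "TensorFlow " << TF_VERSION_STRING
              << ": session created successfully." << std::endl;
    session->Close();
    delete session;
    return 0;
}

If this program fails to link with unresolved external symbols, that is exactly the situation the tf_exported_symbols_msvc.lds fix above is meant to address.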

In addition, if you also need the framework.dll and framework.lib files, change into the source folder and run the command below. The generated files (under C:\Users\xx_bazel_xx\47nc6e7r\execroot\org_tensorflow\bazel-out\x64_windows-opt\bin\tensorflow) are renamed following the same pattern as above; the renamed framework.dll and framework.lib are then placed into the lib folder.

bazel build --config=opt --config=cuda --define=no_tensorflow_py_deps=true --copt=-nvcc_options=disable-warnings //tensorflow:libtensorflow_framework.so --verbose_failures

Parts of this post were written a while ago; I have since found several more detailed posts, which I recommend as well:

  1. win10 + bazel-0.20.0 + tensorflow-1.13.1 编译tensorflow GPU版本的C++库
  2. windows+bazel+tensorflow-v1.12.0(GPU)编译生成dll与lib