Building and Installing TensorFlow 1.4.0 from Source on the Jetson TX2

This post is based on the following TensorFlow tutorial:
https://syed-ahmed.gitbooks.io/nvidia-jetson-tx2-recipes/content/first-question.html
The original tutorial was tested and successfully installs v1.0.1.
Because the open-source TensorFlow Object Detection API requires v1.4.0, the installation process changes in a few places.
TensorFlow 1.4.0 requires cuDNN 6.0, so if your cuDNN version is not 6.0, reflash the TX2 with JetPack 3.1; otherwise the source build will fail.

The steps below modify the original tutorial.
The first two steps are unchanged.
Step 1: Install Java

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

Step 2: Install More Stuff (I am using Python 2.7)

sudo apt-get install zip unzip autoconf automake libtool curl zlib1g-dev maven -y
sudo apt-get install python-numpy swig python-dev python-pip python-wheel -y

Step 3: Install Bazel. Bazel 0.6.1 is the minimum version required here; download the bazel-0.6.1-dist release archive.

1. Unzip the package.
2. cd bazel-0.6.1-dist
3. Start the compilation by running ./compile.sh
4. Copy the binary to your system bin folder: sudo cp output/bazel /usr/local/bin
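The TensorFlow 1.4 build expects at least Bazel 0.6.1, so it is worth confirming the installed version before moving on. A small hypothetical helper sketch (on the TX2 the version string would come from running `bazel version`):

```python
def parse_version(text):
    """Turn a dotted version string like '0.6.1' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def bazel_is_new_enough(installed, minimum="0.6.1"):
    """Return True when the installed Bazel meets the minimum required version."""
    return parse_version(installed) >= parse_version(minimum)

print(bazel_is_new_enough("0.6.1"))  # exactly the minimum -> True
print(bazel_is_new_enough("0.5.4"))  # too old for TensorFlow 1.4 -> False
```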

Step 4: Create a Swap File

Since TensorFlow needs about 8GB memory to compile, we are going to create a swap file.

1. Create an 8 GB swapfile

fallocate -l 8G swapfile

2. Change the permissions of the swapfile

chmod 600 swapfile

3. Set up the swap area

mkswap swapfile

4. Activate the swap area (this requires root)

sudo swapon swapfile

5. Confirm the swap area is in use

swapon -s
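`swapon -s` prints a header line followed by one line per active swap area, so the steps above succeeded when `swapfile` shows up in that listing. A hypothetical helper that checks this from `/proc/swaps`-style text (the sample content below is illustrative, not captured from a real TX2):

```python
def swap_active(swaps_text, name="swapfile"):
    """Check whether a swap area whose path ends with `name` is listed.

    `swaps_text` is the output of `swapon -s` (equivalently /proc/swaps):
    a header line, then one "Filename Type Size Used Priority" line
    per active swap area.
    """
    for line in swaps_text.splitlines()[1:]:
        fields = line.split()
        if fields and fields[0].endswith(name):
            return True
    return False

# Illustrative sample: an 8 GB file-backed swap area.
sample = (
    "Filename\t\tType\t\tSize\tUsed\tPriority\n"
    "/home/ubuntu/swapfile\tfile\t\t8388604\t0\t-1\n"
)
print(swap_active(sample))  # True
```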

Step 5: Install TensorFlow

1. Clone the repo into your desired directory:

git clone https://github.com/tensorflow/tensorflow.git

2. Check out the v1.4.0 release tag:

cd tensorflow
git checkout v1.4.0

3. Open the file tensorflow/stream_executor/cuda/cuda_gpu_executor.cc in an editor. In the function `static int TryToReadNumaNode(const string &pci_bus_id, int device_ordinal)`, add the following lines at the start of the function body. This hardcodes the function to return 0, since there is no NUMA node on ARM and we know we are building on an ARM system:

LOG(INFO) << "ARM has no NUMA node, hardcoding to return zero";
return 0;
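Since this is a two-line mechanical edit, it can also be scripted. The sketch below inserts the early return right after the opening brace of `TryToReadNumaNode` in the source text; the signature matching is an assumption about how the v1.4.0 source is formatted, so eyeball the result before building:

```python
import re

# The two lines to insert at the top of the function body.
PATCH = (
    '  LOG(INFO) << "ARM has no NUMA node, hardcoding to return zero";\n'
    "  return 0;\n"
)

def patch_numa_node(source):
    """Insert an early `return 0;` at the top of TryToReadNumaNode."""
    # Match the function name, its (possibly multi-line) parameter list,
    # and the opening brace at the end of the signature.
    pattern = re.compile(r"(TryToReadNumaNode\([^)]*\)\s*\{\n)")
    return pattern.sub(lambda m: m.group(1) + PATCH, source, count=1)

# Illustrative stand-in for the real function in cuda_gpu_executor.cc.
snippet = (
    "static int TryToReadNumaNode(const string &pci_bus_id,\n"
    "                             int device_ordinal) {\n"
    "  // original body ...\n"
    "}\n"
)
print(patch_numa_node(snippet))
```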

4. For some reason, my Jetson installation didn't put cudnn.h in the directory TensorFlow looks in, so I had to manually copy the installed cudnn.h into the expected folder as follows.
If the cp fails because there is no aarch64-linux-gnu/include directory, create the include directory first (sudo mkdir -p /usr/lib/aarch64-linux-gnu/include):

sudo cp /usr/include/cudnn.h /usr/lib/aarch64-linux-gnu/include/cudnn.h
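After the copy, you can sanity-check that the header TensorFlow will find really is cuDNN 6.0: cudnn.h records its version in the `CUDNN_MAJOR` / `CUDNN_MINOR` macros. A hypothetical sketch that parses those macros from the header text (the sample lines mirror what a cuDNN 6 header contains):

```python
import re

def cudnn_version(header_text):
    """Extract (major, minor) from cudnn.h's version macros."""
    def macro(name):
        m = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        return int(m.group(1)) if m else None
    return macro("CUDNN_MAJOR"), macro("CUDNN_MINOR")

# Illustrative excerpt of a cuDNN 6 header.
sample = (
    "#define CUDNN_MAJOR 6\n"
    "#define CUDNN_MINOR 0\n"
    "#define CUDNN_PATCHLEVEL 21\n"
)
print(cudnn_version(sample))  # (6, 0)
```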

5. Configure the TensorFlow installation by issuing:

./configure

6. The following are my selections. I chose to keep XLA, since it's a cool feature and I wanted to experiment with it.
My choices are roughly the same as below; in addition, note:
(1). Answer n to "use clang as CUDA compiler" and use gcc instead.
(2). Answer n to GDR support, otherwise the build errors out.

ubuntu@tegra-ubuntu:~/tensorflow$ ./configure 
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7
Please specify optimization flags to use during compilation [Default is -march=native]: 
Do you wish to use jemalloc as the malloc implementation? (Linux only) [Y/n] y
jemalloc enabled on Linux
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] n
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y
XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/dist-packages]

Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 
Please specify the location where CUDA  toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 
Please specify the location where cuDNN  library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
Extracting Bazel installation...
.......................
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
.......................
INFO: All external dependencies fetched successfully.
Configuration finished

(3). Once your configuration is done, start the compilation by issuing the command:

bazel build -c opt --local_resources 3072,4.0,1.0 --verbose_failures --config=cuda //tensorflow/tools/pip_package:build_pip_package
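In this Bazel version, `--local_resources` throttles the build to the given available RAM (MB), CPU cores, and I/O capacity, so `3072,4.0,1.0` keeps the TX2 from exhausting memory even with the swap file active. A hypothetical helper showing where such a value might come from (the 0.375 fraction is just the ratio that reproduces the 3072 MB used above on an 8 GB board):

```python
def local_resources(total_ram_mb, cores, ram_fraction=0.5):
    """Build a --local_resources value: RAM (MB), CPU cores, I/O capacity.

    Leaving a large share of RAM free is a rule of thumb for
    memory-constrained boards; it is not an official formula.
    """
    ram = int(total_ram_mb * ram_fraction)
    return "--local_resources %d,%.1f,1.0" % (ram, cores)

print(local_resources(8192, 4, 0.375))  # --local_resources 3072,4.0,1.0
```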

The build completes successfully after roughly an hour.
(4).Once Tensorflow is compiled, build the pip package:

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

(5).Move the pip wheel from the tmp directory if you want to save it:

mv /tmp/tensorflow_pkg/tensorflow-1.4.0-cp27-cp27mu-linux_aarch64.whl $HOME/

(6).Install the pip wheel:

sudo pip install $HOME/tensorflow-1.4.0-cp27-cp27mu-linux_aarch64.whl

(7). Reboot the system for the installation to take effect:

sudo reboot

7. Test the installation

ubuntu@tegra-ubuntu:~$ python
>>> import tensorflow as tf
>>> tf.__version__
'1.4.0'