Installing TensorFlow on Ubuntu ARM64

Compiling Bazel from Source (bootstrapping)

You can build Bazel from source without using an existing Bazel binary by doing the following:

  1. Ensure that JDK 8, Python, Bash, zip, and the usual C++ build toolchain are installed on your system.

    • On systems based on Debian packages (Debian, Ubuntu): you can install OpenJDK 8 and Python by running the following command in a terminal:

      sudo apt-get install build-essential openjdk-8-jdk python zip
      
    • On Windows: you need additional software and the right OS version. See the Windows page.

  2. Download and unpack Bazel's distribution archive.

    Download bazel-<version>-dist.zip from the release page. (Use the latest version; example commands are shown after this list.)

    Note: There is a single, architecture-independent distribution archive. There are no architecture-specific or OS-specific distribution archives.

    We also recommend verifying the signature made by our release key 48457EE0.

    The distribution archive contains generated files in addition to the versioned sources, so this step cannot be skipped by simply checking out the source tree.

  3. Build Bazel using ./compile.sh.

    • On Unix-like systems (e.g. Ubuntu, macOS), do the following steps in a shell session:
      1. cd into the directory where you unpacked the distribution archive
      2. run bash ./compile.sh
    • On Windows, do the following steps in the MSYS2 shell:

      1. cd into the directory where you unpacked the distribution archive
      2. run ./compile.sh

      Once you have a Bazel binary, you no longer need to use the MSYS2 shell. You can run Bazel from the Command Prompt (cmd.exe) or PowerShell.

    The output will be output/bazel on Unix-like systems (e.g. Ubuntu, macOS) and output/bazel.exe on Windows. This is a self-contained Bazel binary. You can copy it to a directory on the PATH (such as /usr/local/bin on Linux) or use it in-place.
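As a concrete sketch of steps 2 and 3 on Ubuntu ARM64: the 0.15.0 version number below is only a placeholder (substitute the latest release), and the .sig download and keyserver lookup are assumptions based on how Bazel releases are normally published.

# Example only: replace 0.15.0 with the latest Bazel release version
$ wget https://github.com/bazelbuild/bazel/releases/download/0.15.0/bazel-0.15.0-dist.zip
$ wget https://github.com/bazelbuild/bazel/releases/download/0.15.0/bazel-0.15.0-dist.zip.sig

# Optional: verify the archive against the release key mentioned above
$ gpg --recv-keys 48457EE0
$ gpg --verify bazel-0.15.0-dist.zip.sig bazel-0.15.0-dist.zip

# Unpack into its own directory and bootstrap
$ mkdir bazel-dist && cd bazel-dist
$ unzip -q ../bazel-0.15.0-dist.zip
$ bash ./compile.sh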

# Copy bazel to $PATH
$ sudo cp output/bazel /usr/local/bin/
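
A quick sanity check (assuming /usr/local/bin is on your PATH) is to ask the freshly built binary for its version:

$ bazel version   # should report the release you just built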

Clone the TensorFlow repository

Start the process of building TensorFlow by cloning a TensorFlow repository.

To clone the latest TensorFlow repository, issue the following command:

 
$ git clone https://github.com/tensorflow/tensorflow

The preceding git clone command creates a subdirectory named tensorflow. After cloning, you may optionally build a specific branch (such as a release branch) by invoking the following commands:

 
$ cd tensorflow
$ git checkout Branch # where Branch is the desired branch

For example, to work with the r1.7 release instead of the master branch, issue the following command:

 
$ git checkout r1.7   # use the latest release branch

Next, you must prepare your environment for Linux or macOS.

Install TensorFlow Python dependencies

To install TensorFlow, you must install the following packages:

  • numpy, which is a numerical processing package that TensorFlow requires.
  • dev, which enables adding extensions to Python.
  • pip, which enables you to install and manage certain Python packages.
  • wheel, which enables you to manage Python compressed packages in the wheel (.whl) format.

To install these packages for Python 2.7, issue the following command:

 
$ sudo apt-get install python-numpy python-dev python-pip python-wheel

To install these packages for Python 3.n, issue the following command:

 
$ sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel

Configure the installation

The root of the source tree contains a bash script named configure. This script asks you to identify the pathname of all relevant TensorFlow dependencies and specify other build configuration options such as compiler flags. You must run this script prior to creating the pip package and installing TensorFlow.

If you wish to build TensorFlow with GPU support, configure will ask you to specify the version numbers of Cuda and cuDNN. If several versions of Cuda or cuDNN are installed on your system, explicitly select the desired version instead of relying on the default.

One of the questions that configure will ask is as follows:

 
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]

This question refers to a later phase in which you'll use bazel to build the pip package. We recommend accepting the default (-march=native), which will optimize the generated code for your local machine's CPU type. However, if you are building TensorFlow on one CPU type but will run TensorFlow on a different CPU type, then consider specifying a more specific optimization flag as described in the gcc documentation.
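
For example, on a generic ARM64 build machine you might answer the optimization prompt with a portable flag such as -march=armv8-a rather than -march=native; this particular flag is only an illustration, so check the gcc documentation for the right choice for your target CPUs:

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: -march=armv8-a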

Here is an example execution of the configure script. Note that your own input will likely differ from our sample input:

 

$ cd tensorflow  # cd to the top-level directory created
$ ./configure
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python2.7
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n]
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N]
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N]
No XLA support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N]
No VERBS support will be enabled for TensorFlow
Do you wish to build TensorFlow with OpenCL support? [y/N]
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] Y
CUDA support will be enabled for TensorFlow
Do you want to use clang as CUDA compiler? [y/N]
Do you wish to build TensorFlow with MPI support? [y/N]
MPI support will not be enabled for TensorFlow
Configuration finished

If you told configure to build for GPU support, then configure will create a canonical set of symbolic links to the Cuda libraries on your system. Therefore, every time you change the Cuda library paths, you must rerun the configure script before re-invoking the bazel build command.

Note the following:

  • Although it is possible to build both Cuda and non-Cuda configs under the same source tree, we recommend running bazel clean when switching between these two configurations in the same source tree.
  • If you don't run the configure script before running the bazel build command, the bazel build command will fail.
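
For example, when switching from a CUDA build back to a CPU-only build in the same source tree, the sequence would look roughly like this:

$ bazel clean     # discard outputs from the previous configuration
$ ./configure     # answer the prompts for the new configuration
$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package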

Build the pip package

Very important: it is recommended that swap plus physical RAM total at least 7 GB, and that you set this up before compiling TensorFlow. Change the 512 below (shown in red in the original post) to adjust the swap size; running these three commands repeatedly adds additional swap space.

# Create the swap image file
sudo dd if=/dev/zero of=/mnt/512Mb.swap bs=1M count=512
# Format the file as swap
sudo mkswap /mnt/512Mb.swap
# Enable the swap file
sudo swapon /mnt/512Mb.swap

At this point, running free -m will show that the swap space has been increased successfully.
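
For example, on a board with about 3 GB of RAM, a single 4 GB swap file (count=4096) would satisfy the suggested 7 GB total; the commands are the same as above, just with a larger count (the sizes here are illustrative):

# Create, format, and enable a 4 GB swap file
sudo dd if=/dev/zero of=/mnt/4096Mb.swap bs=1M count=4096
sudo mkswap /mnt/4096Mb.swap
sudo swapon /mnt/4096Mb.swap
free -m   # confirm the extra swap is active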

To build a pip package for TensorFlow with CPU-only support, you would typically invoke the following command:

 
$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package   # the part marked in red in the original post should be removed (the highlighting is not preserved here)

To build a pip package for TensorFlow with GPU support, invoke the following command:

 
$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

NOTE on gcc 5 or later: the binary pip packages available on the TensorFlow website are built with gcc 4, which uses the older ABI. To make your build compatible with the older ABI, you need to add --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" to your bazel build command. ABI compatibility allows custom ops built against the TensorFlow pip package to continue to work against your built package.
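
For example, a CPU-only build compatible with the older ABI would be invoked like this:

$ bazel build --config=opt --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package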

Tip: By default, building TensorFlow from sources consumes a lot of RAM. If RAM is an issue on your system, you may limit RAM usage by specifying --local_resources 2048,.5,1.0 while invoking bazel.
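
For example, to tell bazel to assume only 2048 MB of RAM, 0.5 CPU cores, and the default I/O capacity (handy on small ARM64 boards):

$ bazel build --config=opt --local_resources 2048,.5,1.0 //tensorflow/tools/pip_package:build_pip_package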

The bazel build command builds a script named build_pip_package. Running this script as follows will build a .whl file within the /tmp/tensorflow_pkg directory:

 
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

Install the pip package

Invoke pip install to install that pip package. The filename of the .whl file depends on your platform. For example, the following command will install the pip package for TensorFlow 1.6.0 on Linux:

 
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.6.0-py2-none-any.whl
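
To confirm the installation, you can run a short TensorFlow 1.x smoke test from a directory outside the source tree (importing tensorflow from inside the repository can pick up the local source directory instead of the installed package). This is only a quick check, not part of the official procedure:

$ cd ~   # leave the source tree before importing tensorflow
$ python -c "import tensorflow as tf; print(tf.__version__)"
$ python -c "import tensorflow as tf; print(tf.Session().run(tf.constant('Hello, TensorFlow!')))"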


Error encountered: gcc: internal compiler error: Killed  [status 4]

Status 4 means the compiler ran out of memory; increasing the swap space fixes it, and the build then completes.
The commands for adding swap space on Ubuntu are the ones shown above in the "Build the pip package" section: make sure swap plus RAM totals at least 7 GB before compiling TensorFlow, change the 512 to adjust the swap size, and run the three commands repeatedly to add more swap.


