Setting Up the TensorFlow Platform

1. Introduction to TensorFlow
TensorFlow™ is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) communicated between them. Its flexible architecture lets you deploy computation to many platforms, for example one or more CPUs (or GPUs) in a desktop machine, a server, or a mobile device. TensorFlow was originally developed by researchers and engineers on the Google Brain team (part of Google's Machine Intelligence research organization) for machine learning and deep neural network research, but the system is general enough to be applied to many other domains as well.
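As a small illustration of the data flow graph model described above, here is a minimal sketch using the TensorFlow 1.x API that this guide targets (the values are arbitrary):

import tensorflow as tf

# Two constant nodes (operations); their outputs are the tensors that flow along the graph's edges.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)  # an "add" node that consumes the two tensors

with tf.Session() as sess:   # a session executes the data flow graph
    print(sess.run(c))       # prints 5.0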
2. Preparing the Machine for the TensorFlow Installation
1) CUDA: cuda_8.0.61_375.26_linux.run
Download: https://developer.nvidia.com/cuda-downloads
2) cuDNN: cudnn-8.0-linux-x64-v6.0.tgz
Download: https://developer.nvidia.com/cudnn
3) Operating system: Ubuntu 16.04 LTS
Download: https://www.ubuntu.com/downloads/desktop
4) Python version: Python 2.7
Download: https://www.python.org/downloads/release/python-2713/
5) This is the most important preparation. If your machine has an AMD graphics card, I recommend installing the CPU version; if it has an NVIDIA graphics card, I recommend the GPU version. Do not install the GPU version on an AMD card; with an NVIDIA card, either version works.
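If you are not sure which graphics card your machine has, you can query the PCI bus on Ubuntu (standard system commands, not specific to TensorFlow):

lspci | grep -i vga       # list graphics adapters
lspci | grep -i nvidia    # non-empty output indicates an NVIDIA card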
3. Ways to Install TensorFlow
This section covers three ways to install TensorFlow: with pip, with virtualenv, and from source.
TensorFlow versions: TensorFlow comes in a CPU version and a GPU version.
1) Installing with pip
a) First install the Python environment dependencies.
sudo apt-get install python-py python-dev python-pip python-numpy
b) Install directly with pip; first upgrade the pip that is already installed.
pip install --upgrade pip        # upgrade pip
pip install tensorflow           # TensorFlow, CPU version
pip install tensorflow-gpu       # TensorFlow, GPU version
c) If the commands above fail, here is an alternative way to install with pip.
CPU only:

sudo pip install --upgrade \
    https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.3.0-cp27-none-linux_x86_64.whl

Then upgrade tensorflow:

pip install --upgrade tensorflow

With GPU support:

sudo pip install --upgrade \
    https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-1.3.0-cp27-none-linux_x86_64.whl

That completes the pip installation.
To test whether the installation succeeded:

#!/usr/bin/env python
# coding: utf-8
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
If the output is Hello, TensorFlow!, the installation succeeded.
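Alternatively, the same check can be run as a shell one-liner (assuming python is the interpreter you installed TensorFlow into):

python -c "import tensorflow as tf; print(tf.__version__)"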

2) Installing with virtualenv
a) First install the virtual environment tool. The benefit of a virtual environment is that its packages do not "pollute" the main system environment.

sudo apt-get install python-py python-dev python-pip python-virtualenv python-numpy
b) Create and activate the virtual environment

virtualenv <env-name>     (for example, virtualenv test creates a virtual environment named test)
cd <env-name>             (change into the environment directory you just created)
source ./bin/activate
c) Install TensorFlow the same way as in the pip section above.
d) Exit the virtual environment
Type deactivate, for example: (test)$ deactivate
e) Verify the installation the same way as in the pip section.
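Putting steps a) through e) together, a minimal terminal transcript might look like this (the environment name test is only an example):

virtualenv test                  # create a virtual environment named test
cd test
source ./bin/activate            # the prompt changes to (test)$
(test)$ pip install tensorflow   # or tensorflow-gpu, as in the pip section
(test)$ python -c "import tensorflow as tf; print(tf.__version__)"
(test)$ deactivate               # leave the virtual environment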
3) Installing TensorFlow from source. A word of warning: building TensorFlow from source produces many errors that have to be debugged along the way, so I hope you have enough patience to work through them all. (The author of this article prays that you have the patience to solve every problem.)
a) First install CUDA. During the CUDA installation the desktop session must be stopped, so switch to a tty by pressing Ctrl+Alt+F1. Change into the directory containing the downloaded CUDA installer, then run in the terminal:

sudo service lightdm stop
sudo sh cuda_8.0.61_375.26_linux.run

After CUDA is installed, configure the CUDA environment variables:

export PATH="/usr/local/cuda-8.0/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH"
Then run nvidia-smi; if it prints a table showing your GPU and driver, as in the screenshot below, the installation succeeded.

[Screenshot: nvidia-smi output]
It is best to reboot the machine at this point; type reboot to restart.
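Besides nvidia-smi, you can also confirm that the CUDA toolkit itself is on the PATH (a standard CUDA command; it should report release 8.0):

nvcc --version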
b) Install cuDNN.

tar zxvf cudnn-8.0-linux-x64-v6.0.tgz
sudo cp cuda/include/* /usr/local/cuda/include/
sudo cp cuda/lib64/* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
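To confirm that cuDNN was copied into place, a common check is to read the version macros from the header (the path assumes the copy commands above; for cuDNN 6.0 it should show CUDNN_MAJOR 6):

cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2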
c) Add the environment variables to the ~/.bashrc file
sudo gedit ~/.bashrc
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
Apply the new environment variables: source ~/.bashrc
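You can check that the variables are visible in the current shell:

echo $CUDA_HOME           # should print /usr/local/cuda
echo $LD_LIBRARY_PATH     # should now include /usr/local/cuda/lib64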

d) Download the TensorFlow source code
If git is not installed yet, install it with:
sudo apt-get install git

The --recurse-submodules flag is required; it fetches the protobuf library that TensorFlow depends on (see the clone command sketched below).
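The clone command itself would typically be run against the official GitHub repository:

git clone --recurse-submodules https://github.com/tensorflow/tensorflow

If you want to build the 1.3 release rather than master, check out the r1.3 branch inside the cloned directory before running ./configure.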

e) Install Bazel. Note that Bazel can be installed in two ways: automatically through the apt repository, or from the binary installer. Here I choose the automatic method. The official Bazel installation guide is linked below.

https://docs.bazel.build/versions/master/install-ubuntu.html

First install OpenJDK 8:

sudo apt-get install openjdk-8-jdk

Next, add the Bazel distribution URL as a package source:

echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://storage.googleapis.com/bazel-apt/doc/apt-key.pub.gpg | sudo apt-key add -
Then update the package lists and install Bazel:
sudo apt-get update && sudo apt-get install bazel
To upgrade Bazel later:
sudo apt-get upgrade bazel
To verify that Bazel is installed, type bazel; it prints its usage information, as shown below:


[Screenshot: bazel usage output]
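You can also check the installed version explicitly with the standard Bazel subcommand:

bazel version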

f) Build and install TensorFlow

cd tensorflow
./configure

root@fm-GREATWALL-PC:~/tensorflow# ./configure
You have bazel 0.5.3 installed.
Please specify the location of python. [Default is /usr/bin/python]:
Found possible Python library paths:
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is /usr/local/lib/python2.7/dist-packages

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]:
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]:
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [y/N]:
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]:
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]:
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL support? [y/N]:
No OpenCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]:
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]:
Please specify the location where cuDNN 6 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]

Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Add "--config=mkl" to your bazel command to build with MKL support.
Please note that MKL on MacOS or windows is still not supported.
If you would like to use a local MKL instead of downloading, please set the environment variable "TF_MKL_ROOT" every time before build.
Configuration finished

When Configuration finished appears, the configuration succeeded.
Next, compile TensorFlow (many WARNING messages may appear during compilation; they can be ignored):
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
The bazel build command creates a script named build_pip_package. Running it as follows generates a .whl file in the /tmp/tensorflow_pkg directory:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Install the pip package:
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.3.0rc2-cp27-cp27mu-linux_x86_64.whl
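The exact wheel filename under /tmp/tensorflow_pkg depends on the TensorFlow version and your Python build, so it is safest to list the directory first and install whatever wheel the build produced:

ls /tmp/tensorflow_pkg/
sudo pip install /tmp/tensorflow_pkg/tensorflow-*.whl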
Wait for pip to finish, then verify that the package imports correctly using the pip verification method described above.
Note: if Hello, TensorFlow! does not appear on the first run, you can start over from ./configure, or try rebooting the machine and then verifying again.
g) Screenshots of a successful import:

[Screenshot: successful import verification]

Contents of the test.py file:

[Screenshot: contents of test.py]
