TensorFlow Installation Guide (CentOS 7 & Windows) and DCGAN Demo Test

Author: Kindle君
Date: April 12, 2017
Source: http://blog.csdn.net/yexiaogu1104/article/details/69055802
Notice: All rights reserved. Please contact the author and credit this source when reposting.

This post records the steps that produced a working TensorFlow installation on both CentOS 7 and Windows 10 (see the official TensorFlow installation guide for full details). It also gives a brief introduction to using GitHub, and ends with a successful run of the DCGAN demo. Feedback and discussion are welcome!

1. TensorFlow Installation Steps and Issues

1.1 Installing on Linux with virtualenv

Step 1. Install pip and virtualenv:
  sudo yum install python-py python-devel python-virtualenv
  Notes:
  1. If yum itself fails, check that the Python paths are configured correctly.
  2. On CentOS the package is called python-devel, not python-dev.

Step 2. Create a virtualenv environment:
  sudo virtualenv --system-site-packages /storage/hesiying/hesiying/tensorflow-virtualenv/
  Notes:
  1. Remember to run it with sudo.
  2. Expected installer output:
     a. New python executable in /storage/hesiying/hesiying/tensorflow_virtualenv/bin/python
     b. Installing Setuptools......done.
     c. Installing Pip......done.

Step 3. Activate the virtualenv environment:
  source /storage/hesiying/hesiying/tensorflow-virtualenv/bin/activate
  On success the prompt becomes: (tensorflow-virtualenv)[hesiying@localhost hesiying]$

Step 4. Install TensorFlow inside the active virtualenv:
  sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.0.1-cp27-none-linux_x86_64.whl
  Notes:
  1. Typical failure: this step fails if the pip version is lower than 8.1.
  2. Success message: Successfully installed appdirs-1.4.3 funcsigs-1.0.2 mock-2.0.0 numpy-1.12.1 packaging-16.8 pbr-2.0.0 protobuf-3.2.0 pyparsing-2.2.0 setuptools-34.3.3 six-1.10.0 tensorflow-gpu-1.0.1 wheel-0.29.0

Step 5. Verify the installation:
  a. Activate the environment as in Step 3.
  b. python
  c. >>> import tensorflow as tf
  d. >>> hello = tf.constant('Hello, TensorFlow!')
  e. >>> sess = tf.Session()
  f. >>> print(sess.run(hello))
  On success the session looks like the snippet below.

Successful installation log:
(tensorflow-virtualenv)[hesiying@localhost hesiying]$ python
Python 2.7.5 (default, Nov  6 2016, 00:28:07) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
>>> hello = tf.constant('Hello,TensorFlow!')
>>> sess = tf.Session()
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:01:00.0
Total memory: 11.92GiB
Free memory: 4.25GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0)
>>> print(sess.run(hello))
Hello,TensorFlow!
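
The same check can also be run non-interactively, which makes it easy to repeat after a driver or library upgrade. Below is a minimal sketch for this TensorFlow 1.0.x install (the device listing uses the tensorflow.python.client.device_lib helper; run it inside the activated virtualenv):

# verify_tf.py - quick sanity check for the TensorFlow 1.0.x GPU install
import tensorflow as tf
from tensorflow.python.client import device_lib

# The classic "Hello, TensorFlow!" graph.
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))

# List the devices TensorFlow can see; a '/gpu:0' entry means the GPU build
# found the CUDA/cuDNN libraries when the module was imported.
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type)

If no GPU appears in the list, re-check that LD_LIBRARY_PATH points at the CUDA 8.0 and cuDNN 5.x libraries before starting Python.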

1.2 Installing on Windows with Anaconda (failed) and native pip

Step 1. Download and install Anaconda3-4.1.1 with Python 3.5.2 (TensorFlow is said to work best with Python 3.5). Follow the installer prompts and tick every option. Installed versions:
  conda version : 4.1.6
  conda-env version : 2.5.1
  conda-build version : 1.21.3
  python version : 3.5.2.final.0

Step 2. Download and install the latest CUDA Toolkit 8.0 (also mirrored on Baidu Cloud, password: awwn).
  1. Install path: B:\Program Files\Navida\CUDA
  2. The installation is fairly slow; be patient. The system environment variables are added automatically afterwards.
  3. Run nvcc -V; if version information is printed, the install succeeded. If CUDA 7.5 was installed previously, uninstall it and reinstall 8.0.
  nvcc: NVIDIA (R) Cuda compiler driver
  Copyright (c) 2005-2016 NVIDIA Corporation
  Built on Mon_Jan__9_17:32:33_CST_2017
  Cuda compilation tools, release 8.0, V8.0.60

Step 3. Download and install cuDNN v5.1 for CUDA 8.0 (also mirrored on Baidu Cloud, password: awwn). Add B:\Program Files\Navida\cudnn-8.0-windows10-x64-v5.1\cuda to the PATH environment variable.

Step 4. Install the GPU version of TensorFlow with Anaconda:
  1. Create a conda environment named tensorflow:
     conda create -n tensorflow python=3.5.2
  2. Activate it:
     activate tensorflow
  3. Install the TensorFlow GPU wheel:
     pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.0.1-cp35-cp35m-win_amd64.whl
  Results/Issues:
  1. A tensorflow folder appears under \Anaconda3\envs.
  2. The cmd prompt shows (tensorflow) B:\Program Files\Anaconda3>
  3. Problem: "is not a supported wheel on this platform". The Anaconda installation is only community supported, not officially supported by TensorFlow, so the wheel would not install there. I therefore switched to Python 3.5.2 from python.org, and that worked immediately.

Step 5. Install TensorFlow with native pip:
  1. Download and install Python 3.5.2 from python.org, then run:
     pip3 install --upgrade tensorflow-gpu
  2. Verify the installation: start python and run import tensorflow as tf; the following CUDA libraries should be reported as loaded:
successfully opened CUDA library cublas64_80.dll locally
successfully opened CUDA library cudnn64_5.dll locally
successfully opened CUDA library cufft64_80.dll locally
successfully opened CUDA library nvcuda.dll locally
successfully opened CUDA library curand64_80.dll locally
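
After the import succeeds, it is worth confirming that an op really executes on the GPU rather than silently falling back to the CPU. Here is a minimal sketch for TensorFlow 1.0.x (log_device_placement prints the device chosen for every op):

import tensorflow as tf

# Build a tiny graph and pin it to the first GPU.
with tf.device('/gpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

# allow_soft_placement avoids a hard failure if no GPU is actually visible.
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))

The device-placement log should show the MatMul op assigned to gpu:0, matching the GPU reported in the startup messages above.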

2. A Quick Guide to Using GitHub

This section shows how to sync your own code to GitHub. For more detailed GitHub usage, see 廖雪峰's blog.

(tensorflow-virtualenv)[hesiying@localhost GAN]$ git init
Reinitialized existing Git repository in /storage/hesiying/hesiying/GAN/.git/
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git add DCGAN-tensorflow-master
(tensorflow-virtualenv)[hesiying@localhost GAN]$ ls
DCGAN-tensorflow-master
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git status
# On branch master
nothing to commit, working directory clean
(tensorflow-virtualenv)[hesiying@localhost GAN]$ touch readm.txt
(tensorflow-virtualenv)[hesiying@localhost GAN]$ ls
DCGAN-tensorflow-master  readm.txt
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git add readm.txt
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git commit -m "write a readme.file"
[master a25c2ff] write a readme.file
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 readm.txt
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git status 
# On branch master
nothing to commit, working directory clean
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git remote add origin https://github.com/KindleHe/GAN.git
fatal: remote origin already exists.
(tensorflow-virtualenv)[hesiying@localhost GAN]$ git push -u origin master
Username for 'https://github.com': KindleHe
Password for 'https://KindleHe@github.com': 
Counting objects: 98, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (97/97), done.
Writing objects: 100% (98/98), 34.86 MiB | 54.00 KiB/s, done.
Total 98 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), done.
To https://github.com/KindleHe/GAN.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
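
If this add/commit/push sequence is repeated often, it can be wrapped in a small script. The sketch below simply replays the commands from the transcript with Python's subprocess module; the repository path is the one from the transcript, and the commit message is a placeholder for illustration.

import subprocess

REPO_DIR = '/storage/hesiying/hesiying/GAN'   # repository path used in the transcript
COMMIT_MSG = 'update DCGAN experiments'       # hypothetical commit message

def git(*args):
    # Run a git command inside the repository; raise if it exits non-zero
    # (note: 'git commit' exits non-zero when there is nothing to commit).
    subprocess.check_call(('git',) + args, cwd=REPO_DIR)

git('add', '-A')
git('commit', '-m', COMMIT_MSG)
git('push', '-u', 'origin', 'master')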

3. DCGAN Demo

To further verify that TensorFlow was installed correctly, I ran a DCGAN demo. The steps are listed below; the code is carpedm20/DCGAN-tensorflow.

Step 1. Download the dataset:
  $ python download.py --datasets mnist celebA
  Issues:
  1. ImportError: No module named requests
  2. ImportError: No module named tqdm
  3. error: unrecognized arguments: --datasets
  Solutions:
  1. sudo pip install requests
  2. sudo pip install tqdm
  3. Change the first argument of parser.add_argument in download.py to "--datasets". The option should accept multiple dataset names, but the celebA download kept timing out against the server, so it could not be fetched here.
  4. The demo below therefore uses the MNIST dataset only.

Step 2.1. Train a model (incorrect command):
  python main.py --dataset mnist --input_height=28 --output_height=28 --c_dim=1
  Issues:
  1. ImportError: No module named scipy (fixed the same way as above: sudo pip install scipy)
  2. Attempting to use uninitialized value generator/g_h0_lin/Matrix
  Cause and fix:
  1. The command is missing --is_train, so in main.py (around line 80) if FLAGS.is_train: evaluates to its default of False, dcgan.train(FLAGS) is never called, and the model variables are never initialized (see the sketch after this table). With the flag added,
     python main.py --dataset mnist --input_height=28 --output_height=28 --c_dim=1 --is_train
     prints: [*] Reading checkpoints... [*] Failed to find a checkpoint [!] Load failed...
  2. Loaded runtime CuDNN library: 5005 (compatibility version 5000) but source was compiled with 5110 (compatibility version 5100).
     a. This error is raised inside the train function of model.py, which means training has actually started.
     b. It is a cuDNN version mismatch; install the newer cuDNN with:
        sudo cp cuda/include/cudnn.h /usr/local/cuda-8.0/include
        sudo cp cuda/lib64/libcudnn* /usr/local/cuda-8.0/lib64
        sudo chmod a+r /usr/local/cuda-8.0/include/cudnn.h /usr/local/cuda-8.0/lib64/libcudnn*

Step 2.2. Train a model (corrected command):
  python main.py --dataset mnist --input_height=28 --output_height=28 --c_dim=1 --is_train

Step 3. Test the model:
  python main.py --dataset mnist --input_height=28 --output_height=28 --c_dim=1
  The results are shown in the log at the end of this section.
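
Why the missing --is_train flag matters: the script defines is_train as a boolean flag whose default is False, so without the flag it skips training and goes straight to loading a checkpoint that does not exist yet, which is why the generator variables are reported as uninitialized. The following is a simplified sketch of that flag-handling pattern, not the repository's exact code:

import tensorflow as tf

flags = tf.app.flags
# Boolean flag that defaults to False: omitting --is_train means "test mode".
flags.DEFINE_boolean("is_train", False, "True for training, False for testing")
FLAGS = flags.FLAGS

def main(_):
    if FLAGS.is_train:
        print("training branch: dcgan.train(FLAGS) would run and initialize the variables")
    else:
        print("test branch: only tries to load a checkpoint, so an untrained model fails here")

if __name__ == '__main__':
    tf.app.run()

The test-run log for the corrected workflow follows.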
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
{'batch_size': 64,
 'beta1': 0.5,
 'c_dim': 1,
 'checkpoint_dir': 'checkpoint',
 'dataset': 'mnist',
 'epoch': 25,
 'input_fname_pattern': '*.jpg',
 'input_height': 28,
 'input_width': None,
 'is_crop': False,
 'is_train': False,
 'learning_rate': 0.0002,
 'output_height': 28,
 'output_width': None,
 'sample_dir': 'samples',
 'train_size': inf,
 'visualize': False}
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:01:00.0
Total memory: 11.92GiB
Free memory: 9.54GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0)
---------
Variables: name (type shape) [size]
---------
generator/g_h0_lin/Matrix:0 (float32_ref 110x1024) [112640, bytes: 450560]
generator/g_h0_lin/bias:0 (float32_ref 1024) [1024, bytes: 4096]
generator/g_bn0/beta:0 (float32_ref 1024) [1024, bytes: 4096]
generator/g_bn0/gamma:0 (float32_ref 1024) [1024, bytes: 4096]
generator/g_h1_lin/Matrix:0 (float32_ref 1034x6272) [6485248, bytes: 25940992]
generator/g_h1_lin/bias:0 (float32_ref 6272) [6272, bytes: 25088]
generator/g_bn1/beta:0 (float32_ref 6272) [6272, bytes: 25088]
generator/g_bn1/gamma:0 (float32_ref 6272) [6272, bytes: 25088]
generator/g_h2/w:0 (float32_ref 5x5x128x138) [441600, bytes: 1766400]
generator/g_h2/biases:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/beta:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/gamma:0 (float32_ref 128) [128, bytes: 512]
generator/g_h3/w:0 (float32_ref 5x5x1x138) [3450, bytes: 13800]
generator/g_h3/biases:0 (float32_ref 1) [1, bytes: 4]
discriminator/d_h0_conv/w:0 (float32_ref 5x5x11x11) [3025, bytes: 12100]
discriminator/d_h0_conv/biases:0 (float32_ref 11) [11, bytes: 44]
discriminator/d_h1_conv/w:0 (float32_ref 5x5x21x74) [38850, bytes: 155400]
discriminator/d_h1_conv/biases:0 (float32_ref 74) [74, bytes: 296]
discriminator/d_bn1/beta:0 (float32_ref 74) [74, bytes: 296]
discriminator/d_bn1/gamma:0 (float32_ref 74) [74, bytes: 296]
discriminator/d_h2_lin/Matrix:0 (float32_ref 3636x1024) [3723264, bytes: 14893056]
discriminator/d_h2_lin/bias:0 (float32_ref 1024) [1024, bytes: 4096]
discriminator/d_bn2/beta:0 (float32_ref 1024) [1024, bytes: 4096]
discriminator/d_bn2/gamma:0 (float32_ref 1024) [1024, bytes: 4096]
discriminator/d_h3_lin/Matrix:0 (float32_ref 1034x1) [1034, bytes: 4136]
discriminator/d_h3_lin/bias:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 10834690
Total bytes of variables: 43338760
 [*] Reading checkpoints...
 [*] Success to read DCGAN.model-32002
 [*] 0
 ... ([*] 1 through [*] 98 omitted; the counter continues uninterrupted to 99)
 [*] 99
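
Assuming the test run writes its generated image grids as PNG files into the samples directory shown in the FLAGS dump above ('sample_dir': 'samples'), a quick way to inspect the results is the sketch below (matplotlib assumed to be available):

import glob
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Assumption: the demo saves generated grids as PNGs under ./samples
files = sorted(glob.glob('samples/*.png'))
print('found %d sample images' % len(files))

if files:
    img = mpimg.imread(files[-1])  # last file in lexicographic order
    plt.imshow(img, cmap='gray')   # MNIST samples are grayscale
    plt.axis('off')
    plt.show()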

