Installing TensorFlow (GPU) & PyTorch on Windows 10 with CUDA 10.2 + cuDNN + Anaconda

I. System Configuration:

Before we begin, the configuration of my machine is as follows:

  • System: Windows 10 Pro (64-bit)
  • CPU: i5-9400F
  • RAM: 16 GB (2666 MHz)
  • GPU: GeForce GTX 1660 Ti (万图师 Ti OC)

First, before installing anything, check the highest CUDA version your graphics card supports: open the 【NVIDIA Control Panel】, select 【System Information】 in the lower-left corner, and click the 【Components】 tab to reach the corresponding screen.

From that screen we can see that the GTX 1660 Ti supports CUDA 10.2, so we will install against version 10.2.

II. TensorFlow Installation:

The versions of the components to be installed are listed below.

For convenience, a private Baidu Cloud link to the installers is provided:

Link: https://pan.baidu.com/s/15OOZKoVv5FezOnfYkV-odA
Extraction code: uwkb

1. Installing Anaconda3 2019.10:

  • First, go to the official Anaconda site (Anaconda | Individual Edition), where the download links are listed; choose the installer that matches your operating system.

  • After the download finishes, double-click the installer and follow the wizard (the default options are fine); detailed step-by-step guides are easy to find online.

2. Installing CUDA 10.2:

  • First, determine the CUDA version as described at the beginning of this article; here we choose CUDA 10.2.
  • Then go to the CUDA site (CUDA Toolkit | NVIDIA Developer), click 【Download Now】, and select the release that matches your system:

  • After downloading, run the installer (the following steps use 10.1 as an example) and choose the installation path:

  • Click Next:

  • Agree to the license terms and continue:

  • Choose the Express installation.

  • If the installer reports that Visual Studio is missing, tick the checkbox and click NEXT (be sure to close 360 and other security software beforehand, or the installation may fail).

  • Wait for the installation to finish; the final install path is 【C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2】:

  • Finally, press 【Win+R】, type 【cmd】 to open a command prompt, and run 【nvcc -V】. If the version information appears, the installation succeeded:

  • If it does not appear, check that the environment variables are configured completely; the typical entries are listed below:
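On a default installation, the CUDA installer normally creates the following entries (the paths assume the default install location from the step above; check them under the system environment variables):

CUDA_PATH = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
CUDA_PATH_V10_2 = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
Path should include:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\libnvvp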

This completes the CUDA installation; the reference blogs at the end of this article walk through the same process in more detail.

3. Installing cuDNN 7.6.5:

  • First, go to the official site (NVIDIA cuDNN | NVIDIA Developer) and register an account (downloads require registration); I registered with an email address.
  • After registering, click 【Download cuDNN】 and choose the cuDNN build that matches your installed CUDA version:

  • After downloading, extract the archive; you will get the following subfolders:

  • Copy the extracted files into the corresponding folders of the CUDA installation directory; a copy sketch is given below.
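The copy can also be scripted. The following is only a minimal sketch: the extraction path assigned to cudnn_extract is a hypothetical example, and writing into Program Files requires running the script from an elevated (administrator) prompt.

import pathlib
import shutil

# hypothetical folder where the cuDNN zip was extracted; adjust to your own location
cudnn_extract = pathlib.Path(r"D:\Downloads\cudnn-10.2-windows10-x64-v7.6.5.32\cuda")
cuda_dir = pathlib.Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2")

# copy the DLLs, headers, and import libraries into the matching CUDA folders
for sub in ("bin", "include", "lib/x64"):
    for f in (cudnn_extract / sub).iterdir():
        shutil.copy2(f, cuda_dir / sub)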

This completes the cuDNN installation!

4. Installing tensorflow-gpu:

To keep different projects' dependencies separate, the installation here is done inside a conda virtual environment.

  • Open the Anaconda Prompt and create a virtual environment for TensorFlow (with Python set to 3.7.4); when asked to confirm, type 【y】:
conda create -n tensorflow_gpu python=3.7.4
  • Once the environment is created, switch into it:
activate tensorflow_gpu
  • With the (tensorflow_gpu) environment active, install TensorFlow 1.14.0:
conda install tensorflow-gpu==1.14.0

Note: if you instead install with pip install tensorflow, later runs may fail with an error saying that CUDA 10.0 is required. Installing through conda avoids this problem: conda automatically installs its own cudatoolkit 10.0 package inside the environment, which does not affect normal operation and saves some trouble.

  • After that, install the remaining libraries that the new environment is missing:
conda install anaconda
  • Type 【python】 and import the 【tensorflow】 module; if the import succeeds, the installation worked. A quick GPU check is sketched below:
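For example, a minimal check in the Python interpreter (tf.test.is_gpu_available() is part of the TensorFlow 1.x API):

import tensorflow as tf

print(tf.__version__)              # expect 1.14.0
print(tf.test.is_gpu_available())  # True means TensorFlow can see the GPU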

This completes the TensorFlow installation!

5. Issues encountered while running TensorFlow:

  • If the error "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize" appears at run time, the fix is to let the session allocate GPU memory on demand by configuring it as follows:
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of grabbing it all at once
with tf.Session(config=config) as session:
    ...  # build and run your graph inside this session
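If the model is built with tf.keras, the same setting can be applied globally through the Keras backend (tf.keras.backend.set_session is part of the TensorFlow 1.x API); a minimal sketch:

import tensorflow as tf

# register a session with on-demand GPU memory growth as the default Keras session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.keras.backend.set_session(tf.Session(config=config))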

III. PyTorch Installation:

The PyTorch installation likewise assumes that CUDA 10.2 and cuDNN 7.6.5 are already in place.

  • Following the same steps as the TensorFlow installation above, create a virtual environment for PyTorch.
  • Go to the PyTorch website (PyTorch) and select the appropriate configuration:

  • Install with conda inside the virtual environment (a quick verification is sketched after the command):
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
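As a minimal sanity check (standard PyTorch API calls), run the following inside the new environment:

import torch

print(torch.__version__)
print(torch.cuda.is_available())      # True means PyTorch can use the GPU
print(torch.cuda.get_device_name(0))  # e.g. the GTX 1660 Ti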

This completes the PyTorch installation!

Reference blogs:

1. win10+1060显卡安装anaconda+CUDA10.1+pytorch+cuDNN+tensorflow-gpu - 一只天真的小蜗牛 - 博客园

2. 新电脑重新安装win10+python3.6+anaconda+tensorflow1.12(gpu版) - 每天坚持一点点 - 博客园

3. WIN10 + GTX1660Ti配置TensorFlow GPU版本_大春SSC的博客-CSDN博客

