Using TensorFlow 1.x on RTX 30/40-Series GPUs


1. Windows

On Windows things are fairly simple: the site below provides prebuilt wheels of TensorFlow 1.15 compiled against CUDA 11.1. Install one of those wheels, then install CUDA 11.8, and 40-series cards are supported.

TensorFlow 1.15.4 + CUDA 11.1 + AVX2: https://github.com/fo40225/tensorflow-windows-wheel/tree/master/1.15.4%2Bnv20.12/py38/CPU%2BGPU/cuda111cudnn8avx2

2. Linux

First download the NVIDIA TensorFlow wheel matching your CUDA version from the link below; the CUDA-to-wheel mapping is listed at the end of this section.

NVIDIA-TensorFlow: https://developer.download.nvidia.com/compute/redist/nvidia-tensorflow

Check the system and CUDA versions:

# Check the distribution release
lsb_release -a

# Check the CUDA version
ls -l /usr/local/cuda

On Ubuntu 18.04 only a Python 3.6 environment works; Python 3.8 fails with a "glibc 2.29 not found" error.

conda create -n tf-1.15 python=3.6
conda activate tf-1.15
pip install --user nvidia-pyindex
export PATH=$PATH:$HOME/.local/bin
# Download beforehand: https://developer.download.nvidia.com/compute/redist/nvidia-tensorflow/nvidia_tensorflow-1.15.4+nv20.10-cp36-cp36m-linux_x86_64.whl
pip install nvidia_tensorflow-1.15.4+nv20.10-cp36-cp36m-linux_x86_64.whl opencv-python==3.4.5.20 matplotlib==3.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
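The glibc constraint above can be checked directly on glibc-based distros (note: Ubuntu 18.04 ships glibc 2.27, below the 2.29 the Python 3.8 wheel expects):

```shell
# Print the system glibc version (first line of ldd's version banner)
ldd --version | head -n 1

# Equivalent query via getconf
getconf GNU_LIBC_VERSION
```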

On Ubuntu 20.04, either Python 3.8 or Python 3.6 works.

conda create -n tf-1.15 python=3.8
conda activate tf-1.15
pip install --user nvidia-pyindex
export PATH=$PATH:$HOME/.local/bin
# tf15_cuda-11.1: https://developer.download.nvidia.com/compute/redist/nvidia-tensorflow/nvidia_tensorflow-1.15.4+nv20.12-cp38-cp38-linux_x86_64.whl
# cudnn-8.0.5.43: https://developer.download.nvidia.cn/compute/redist/nvidia-cudnn/nvidia_cudnn-8.0.5.43-py3-none-manylinux1_x86_64.whl 
pip install nvidia_tensorflow-1.15.4+nv20.12-cp38-cp38-linux_x86_64.whl opencv-python matplotlib==3.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

For CUDA 12.x:

conda create -n tf-1.15 python=3.8
conda activate tf-1.15
pip install --user nvidia-pyindex
export PATH=$PATH:$HOME/.local/bin
# Download beforehand: https://developer.download.nvidia.com/compute/redist/nvidia-tensorflow/nvidia_tensorflow-1.15.5+nv23.03-7472065-cp38-cp38-linux_x86_64.whl
pip install nvidia_tensorflow-1.15.5+nv23.03-7472065-cp38-cp38-linux_x86_64.whl opencv-python==3.4.10.35 matplotlib==3.3.0 tensorrt==8.6.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

Verify the installation:

import tensorflow as tf
print(tf.test.is_gpu_available())  # True if the GPU is usable

cuda-11.0 nvidia_tensorflow-1.15.2+nv20.06-cp36-cp36m-linux_x86_64.whl
cuda-11.0 nvidia_tensorflow-1.15.3+nv20.07-cp36-cp36m-linux_x86_64.whl
cuda-11.0 nvidia_tensorflow-1.15.3+nv20.08-cp36-cp36m-linux_x86_64.whl
cuda-11.0 nvidia_tensorflow-1.15.3+nv20.09-cp36-cp36m-linux_x86_64.whl
cuda-11.1 nvidia_tensorflow-1.15.4+nv20.10-cp36-cp36m-linux_x86_64.whl
cuda-11.1 nvidia_tensorflow-1.15.4+nv20.11-cp36-cp36m-linux_x86_64.whl
cuda-11.1 nvidia_tensorflow-1.15.4+nv20.12-cp38-cp38-linux_x86_64.whl
cuda-11.2 nvidia_tensorflow-1.15.5+nv21.02-cp38-cp38-linux_x86_64.whl
cuda-11.2 nvidia_tensorflow-1.15.5+nv21.03-cp38-cp38-linux_x86_64.whl
cuda-11.3 nvidia_tensorflow-1.15.5+nv21.04-cp38-cp38-linux_x86_64.whl
cuda-11.3 nvidia_tensorflow-1.15.5+nv21.05-cp38-cp38-linux_x86_64.whl
cuda-11.3 nvidia_tensorflow-1.15.5+nv21.06-cp38-cp38-linux_x86_64.whl
cuda-11.4 nvidia_tensorflow-1.15.5+nv21.07-cp38-cp38-linux_x86_64.whl
cuda-11.4 nvidia_tensorflow-1.15.5+nv21.08-cp38-cp38-linux_x86_64.whl
cuda-11.4 nvidia_tensorflow-1.15.5+nv21.09-cp38-cp38-linux_x86_64.whl
cuda-11.4 nvidia_tensorflow-1.15.5+nv21.10-cp38-cp38-linux_x86_64.whl
cuda-11.5 nvidia_tensorflow-1.15.5+nv21.11-cp38-cp38-linux_x86_64.whl
cuda-11.5 nvidia_tensorflow-1.15.5+nv21.12-cp38-cp38-linux_x86_64.whl
cuda-11.6 nvidia_tensorflow-1.15.5+nv22.01-3720650-cp38-cp38-linux_x86_64.whl
cuda-11.6 nvidia_tensorflow-1.15.5+nv22.02-3927706-cp38-cp38-linux_x86_64.whl
cuda-11.6 nvidia_tensorflow-1.15.5+nv22.03-4138614-cp38-cp38-linux_x86_64.whl
cuda-11.6 nvidia_tensorflow-1.15.5+nv22.04-4387458-cp38-cp38-linux_x86_64.whl
cuda-11.7 nvidia_tensorflow-1.15.5+nv22.05-4761017-cp38-cp38-linux_x86_64.whl
cuda-11.7 nvidia_tensorflow-1.15.5+nv22.06-5077300-cp38-cp38-linux_x86_64.whl
cuda-11.7 nvidia_tensorflow-1.15.5+nv22.07-5236135-cp36-cp36m-linux_x86_64.whl
cuda-11.7 nvidia_tensorflow-1.15.5+nv22.07-5236135-cp38-cp38-linux_x86_64.whl
cuda-11.7 nvidia_tensorflow-1.15.5+nv22.08-5542765-cp36-cp36m-linux_x86_64.whl
cuda-11.7 nvidia_tensorflow-1.15.5+nv22.08-5542765-cp38-cp38-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.09-6040196-cp36-cp36m-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.09-6040196-cp38-cp38-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.10-6183310-cp36-cp36m-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.10-6183310-cp38-cp38-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.11-6482588-cp36-cp36m-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.11-6482588-cp38-cp38-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.12-6638418-cp36-cp36m-linux_x86_64.whl
cuda-11.8 nvidia_tensorflow-1.15.5+nv22.12-6638418-cp38-cp38-linux_x86_64.whl
cuda-12.0 nvidia_tensorflow-1.15.5+nv23.01-7142245-cp36-cp36m-linux_x86_64.whl
cuda-12.0 nvidia_tensorflow-1.15.5+nv23.01-7142245-cp38-cp38-linux_x86_64.whl
cuda-12.0 nvidia_tensorflow-1.15.5+nv23.02-7195399-cp36-cp36m-linux_x86_64.whl
cuda-12.0 nvidia_tensorflow-1.15.5+nv23.02-7195399-cp38-cp38-linux_x86_64.whl
cuda-12.1 nvidia_tensorflow-1.15.5+nv23.03-7472065-cp36-cp36m-linux_x86_64.whl
cuda-12.1 nvidia_tensorflow-1.15.5+nv23.03-7472065-cp38-cp38-linux_x86_64.whl
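For convenience, the mapping above can be wrapped in a tiny helper that returns the download URL of the newest wheel for a given CUDA series (a sketch; only a few representative Python 3.8 entries from the table are reproduced here):

```python
# Subset of the CUDA-to-wheel table above: newest cp38 wheel per CUDA series
WHEELS = {
    "11.1": "nvidia_tensorflow-1.15.4+nv20.12-cp38-cp38-linux_x86_64.whl",
    "11.8": "nvidia_tensorflow-1.15.5+nv22.12-6638418-cp38-cp38-linux_x86_64.whl",
    "12.1": "nvidia_tensorflow-1.15.5+nv23.03-7472065-cp38-cp38-linux_x86_64.whl",
}

BASE_URL = "https://developer.download.nvidia.com/compute/redist/nvidia-tensorflow"


def wheel_url(cuda_version: str) -> str:
    """Return the download URL of the newest listed wheel for a CUDA series."""
    try:
        return f"{BASE_URL}/{WHEELS[cuda_version]}"
    except KeyError:
        raise ValueError(f"no wheel listed for CUDA {cuda_version}") from None


print(wheel_url("12.1"))
```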

3. Specifying the GPU in TensorFlow 1.x code

Once the install is working, TensorFlow 1.x gives you several ways to control GPU usage from code. First, restrict TensorFlow to particular GPUs before creating a session:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # GPU index; separate multiple GPUs with commas
```

Then create a session with device-placement logging enabled, so you can confirm that ops actually land on the GPU:

```python
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
```

To pin specific computations to a device, wrap them in `tf.device`:

```python
with tf.device('/gpu:0'):
    # code to run on the GPU goes here
```

Here `'/gpu:0'` refers to the first GPU; adjust the index on multi-GPU machines. You can also set `os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'` to suppress log messages below warning level.
