How to use a GPU for AI on Windows

Using the GPU (the goal is a general method: given your graphics card and your Windows system, find the matching versions of CUDA, cuDNN, and PyTorch).
The CUDA and cuDNN versions you install must match your GPU model, otherwise you will run into inexplicable problems; the PyTorch build must match as well.

Note: different frameworks require different CUDA and cuDNN versions. To avoid a tangle of errors, keep the versions consistent; for example, the PyTorch build must match the installed CUDA and cuDNN versions.

I. Introduction

  • CUDA: simply put, CUDA is a framework from NVIDIA that lets you write programs for the GPU directly in C/C++ (or other supported languages); in other words, it is what allows your code to use the GPU.
  • cuDNN: a library designed specifically for deep learning that accelerates neural-network operations on CUDA-capable NVIDIA GPUs; deep learning frameworks such as PyTorch and PaddlePaddle use it under the hood.
  • PyTorch: an open-source deep learning framework.
  • PaddlePaddle (Parallel Distributed Deep Learning): an open-source deep learning framework developed by Baidu.
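
Once any of these frameworks is installed, you can see how the pieces fit together by asking it which CUDA and cuDNN versions it was built against. A minimal sketch with PyTorch (assuming some build of torch is already installed):

```python
# Print the CUDA/cuDNN versions bundled with this PyTorch build,
# so they can be compared with what is installed on the machine.
import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA version the wheel targets, or None for a CPU-only build
print(torch.backends.cudnn.version())  # bundled cuDNN version, or None
print(torch.cuda.is_available())       # True only when driver, CUDA build and GPU all line up
```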

II. Version selection and installation

1. CUDA: choosing a version, downloading, and installing

1.1 Run the command nvidia-smi in a CMD window to check your GPU model (or use any other method you prefer).

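If you prefer to do the same lookup from Python (for example inside a setup script), nvidia-smi ships with the driver and can be queried directly. A minimal sketch, assuming the NVIDIA driver is installed and nvidia-smi is on the PATH:

```python
# Query the GPU model and driver version via nvidia-smi.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "NVIDIA GeForce GTX 1050 Ti, 516.94"
```
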
1.2 Look up your card's compute capability and its corresponding architecture (link).
1.3 Download CUDA (link).
Click into each version to check which compute capabilities and architectures it supports.
Here is a consolidated summary:

CUDA compute capability supported by each CUDA SDK version:

CUDA SDK 1.0 support for compute capability 1.0–1.1 (Tesla)
CUDA SDK 1.1 support for compute capability 1.0–1.1+x (Tesla)
CUDA SDK 2.0 support for compute capability 1.0–1.1+x (Tesla)
CUDA SDK 2.1–2.3.1 support for compute capability 1.0–1.3 (Tesla)
CUDA SDK 3.0–3.1 support for compute capability 1.0–2.0 (Tesla, Fermi)
CUDA SDK 3.2 support for compute capability 1.0–2.1 (Tesla, Fermi)
CUDA SDK 4.0–4.2 support for compute capability 1.0–2.1+x (Tesla, Fermi, more).
CUDA SDK 5.0–5.5 support for compute capability 1.0–3.5 (Tesla, Fermi, Kepler).
CUDA SDK 6.0 support for compute capability 1.0–3.5 (Tesla, Fermi, Kepler).
CUDA SDK 6.5 support for compute capability 1.1–5.x (Tesla, Fermi, Kepler, Maxwell). Last version with support for compute capability 1.x (Tesla).
CUDA SDK 7.0–7.5 support for compute capability 2.0–5.x (Fermi, Kepler, Maxwell).
CUDA SDK 8.0 support for compute capability 2.0–6.x (Fermi, Kepler, Maxwell, Pascal). Last version with support for compute capability 2.x (Fermi).
CUDA SDK 9.0–9.2 support for compute capability 3.0–7.0 (Kepler, Maxwell, Pascal, Volta)
CUDA SDK 10.0–10.2 support for compute capability 3.0–7.5 (Kepler, Maxwell, Pascal, Volta, Turing). Last version with support for compute capability 3.0 and 3.2 (Kepler in part). 10.2 is the last official release for macOS, as support will not be available for macOS in newer releases.
CUDA SDK 11.0 support for compute capability 3.5–8.0 (Kepler (in part), Maxwell, Pascal, Volta, Turing, Ampere (in part)).
CUDA SDK 11.1–11.4 support for compute capability 3.5–8.6 (Kepler (in part), Maxwell, Pascal, Volta, Turing, Ampere (in part)).
CUDA SDK 11.5–11.7.1 support for compute capability 3.5–8.7 (Kepler (in part), Maxwell, Pascal, Volta, Turing, Ampere).
CUDA SDK 11.8 support for compute capability 3.5–9.0 (Kepler (in part), Maxwell, Pascal, Volta, Turing, Ampere, Ada Lovelace, Hopper).
CUDA SDK 12.0 support for compute capability 5.0–9.0 (Maxwell, Pascal, Volta, Turing, Ampere, Ada Lovelace, Hopper)
| Compute capability (version) | Micro-architecture | GPUs | GeForce |
| --- | --- | --- | --- |
| 1.0 | Tesla | G80 | GeForce 8800 Ultra, GeForce 8800 GTX, GeForce 8800 GTS (G80) |
| 1.1 | Tesla | G92, G94, G96, G98, G84, G86 | GeForce GTS 250, GeForce 9800 GX2, GeForce 9800 GTX, GeForce 9800 GT, GeForce 8800 GTS (G92), GeForce 8800 GT, GeForce 9600 GT, GeForce 9500 GT, GeForce 9400 GT, GeForce 8600 GTS, GeForce 8600 GT, GeForce 8500 GT, GeForce G110M, GeForce 9300M GS, GeForce 9200M GS, GeForce 9100M G, GeForce 8400M GT, GeForce G105M |
| 1.2 | Tesla | GT218, GT216, GT215 | GeForce GT 340*, GeForce GT 330*, GeForce GT 320*, GeForce 315*, GeForce 310*, GeForce GT 240, GeForce GT 220, GeForce 210, GeForce GTS 360M, GeForce GTS 350M, GeForce GT 335M, GeForce GT 330M, GeForce GT 325M, GeForce GT 240M, GeForce G210M, GeForce 310M, GeForce 305M |
| 1.3 | Tesla | GT200, GT200b | GeForce GTX 295, GTX 285, GTX 280, GeForce GTX 275, GeForce GTX 260 |
| 2.0 | Fermi | GF100, GF110 | GeForce GTX 590, GeForce GTX 580, GeForce GTX 570, GeForce GTX 480, GeForce GTX 470, GeForce GTX 465, GeForce GTX 480M |
| 2.1 | Fermi | GF104, GF106, GF108, GF114, GF116, GF117, GF119 | GeForce GTX 560 Ti, GeForce GTX 550 Ti, GeForce GTX 460, GeForce GTS 450, GeForce GTS 450*, GeForce GT 640 (GDDR3), GeForce GT 630, GeForce GT 620, GeForce GT 610, GeForce GT 520, GeForce GT 440, GeForce GT 440*, GeForce GT 430, GeForce GT 430*, GeForce GT 420*, GeForce GTX 675M, GeForce GTX 670M, GeForce GT 635M, GeForce GT 630M, GeForce GT 625M, GeForce GT 720M, GeForce GT 620M, GeForce 710M, GeForce 610M, GeForce 820M, GeForce GTX 580M, GeForce GTX 570M, GeForce GTX 560M, GeForce GT 555M, GeForce GT 550M, GeForce GT 540M, GeForce GT 525M, GeForce GT 520MX, GeForce GT 520M, GeForce GTX 485M, GeForce GTX 470M, GeForce GTX 460M, GeForce GT 445M, GeForce GT 435M, GeForce GT 420M, GeForce GT 415M, GeForce 710M, GeForce 410M |
| 3.0 | Kepler | GK104, GK106, GK107 | GeForce GTX 770, GeForce GTX 760, GeForce GT 740, GeForce GTX 690, GeForce GTX 680, GeForce GTX 670, GeForce GTX 660 Ti, GeForce GTX 660, GeForce GTX 650 Ti BOOST, GeForce GTX 650 Ti, GeForce GTX 650, GeForce GTX 880M, GeForce GTX 870M, GeForce GTX 780M, GeForce GTX 770M, GeForce GTX 765M, GeForce GTX 760M, GeForce GTX 680MX, GeForce GTX 680M, GeForce GTX 675MX, GeForce GTX 670MX, GeForce GTX 660M, GeForce GT 750M, GeForce GT 650M, GeForce GT 745M, GeForce GT 645M, GeForce GT 740M, GeForce GT 730M, GeForce GT 640M, GeForce GT 640M LE, GeForce GT 735M, GeForce GT 730M |
| 3.5 | Kepler | GK110, GK208 | GeForce GTX Titan Z, GeForce GTX Titan Black, GeForce GTX Titan, GeForce GTX 780 Ti, GeForce GTX 780, GeForce GT 640 (GDDR5), GeForce GT 630 v2, GeForce GT 730, GeForce GT 720, GeForce GT 710, GeForce GT 740M (64-bit, DDR3), GeForce GT 920M |
| 5.0 | Maxwell | GM107, GM108 | GeForce GTX 750 Ti, GeForce GTX 750, GeForce GTX 960M, GeForce GTX 950M, GeForce 940M, GeForce 930M, GeForce GTX 860M, GeForce GTX 850M, GeForce 845M, GeForce 840M, GeForce 830M |
| 5.2 | Maxwell | GM200, GM204, GM206 | GeForce GTX Titan X, GeForce GTX 980 Ti, GeForce GTX 980, GeForce GTX 970, GeForce GTX 960, GeForce GTX 950, GeForce GTX 750 SE, GeForce GTX 980M, GeForce GTX 970M, GeForce GTX 965M |
| 6.1 | Pascal | GP102, GP104, GP106, GP107, GP108 | Nvidia TITAN Xp, Titan X, GeForce GTX 1080 Ti, GTX 1080, GTX 1070 Ti, GTX 1070, GTX 1060, GTX 1050 Ti, GTX 1050, GT 1030, GT 1010, MX350, MX330, MX250, MX230, MX150, MX130, MX110 |
| 7.0 | Volta | GV100 | NVIDIA TITAN V |
| 7.5 | Turing | TU102, TU104, TU106, TU116, TU117 | NVIDIA TITAN RTX, GeForce RTX 2080 Ti, RTX 2080 Super, RTX 2080, RTX 2070 Super, RTX 2070, RTX 2060 Super, RTX 2060 12GB, RTX 2060, GeForce GTX 1660 Ti, GTX 1660 Super, GTX 1660, GTX 1650 Super, GTX 1650, MX550, MX450 |
| 8.6 | Ampere | GA102, GA103, GA104, GA106, GA107 | GeForce RTX 3090 Ti, RTX 3090, RTX 3080 Ti, RTX 3080 12GB, RTX 3080, RTX 3070 Ti, RTX 3070, RTX 3060 Ti, RTX 3060, RTX 3050, RTX 3050 Ti (mobile), RTX 3050 (mobile), RTX 2050 (mobile), MX570 |
| 8.9 | Ada Lovelace | AD102, AD103, AD104, AD106, AD107 | GeForce RTX 4090, RTX 4080, RTX 4070 Ti, RTX 4070 |
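
If you would rather not hunt through the tables by hand, the compute capability can also be read off programmatically. A minimal sketch, assuming any CUDA-enabled build of PyTorch is already installed:

```python
# Report the device name and compute capability of GPU 0.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0))          # e.g. NVIDIA GeForce GTX 1050 Ti
print(f"compute capability {major}.{minor}")  # e.g. 6.1, i.e. Pascal in the table above
```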

2. cuDNN: choosing a version and installing

2.1 Find the cuDNN version that matches your installed CUDA version (version compatibility table: link).
2.2 Download cuDNN (link). An NVIDIA developer account is required; signing in with an Apple ID also works.
2.3 Install cuDNN: extract the downloaded archive and copy its contents over the CUDA installation directory
    (everything in cuDNN's bin folder goes into the bin folder of the CUDA install path, and likewise for the other folders). A minimal copy sketch follows below.
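
The copying in step 2.3 can also be scripted. Below is a minimal sketch assuming hypothetical example paths (adjust cudnn_root and cuda_root to wherever you extracted the archive and whichever CUDA version you installed); it typically needs an administrator prompt because it writes under Program Files.

```python
# Minimal sketch: copy the extracted cuDNN files over the CUDA install.
# Both paths below are examples; replace them with your own locations.
import shutil
from pathlib import Path

cudnn_root = Path(r"C:\Downloads\cudnn-extracted")  # where the cuDNN archive was unzipped (example path)
cuda_root = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7")  # CUDA install (example version)

for sub in ("bin", "include", "lib"):
    for src in (cudnn_root / sub).rglob("*"):
        if src.is_file():
            dst = cuda_root / sub / src.relative_to(cudnn_root / sub)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # merge the file into the matching CUDA folder
            print("copied", src.name, "->", dst)
```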

3. paddlepaddle-gpu installation

(1) paddlepaddle-gpu installed with pip uses the GPU for computation by default.
(2) The plain paddlepaddle package uses the CPU by default.
(3) python -m pip install paddlepaddle-gpu==2.4.2.post117 -f https://www.paddlepaddle.org.cn/whl/windows/mkl/avx/stable.html
(4) For CUDA 11.7 the Paddle version is 2.4.2; the latest 2.5.1 requires an architecture with compute capability 7.0 or above, which my GTX 1050 Ti (compute capability 6.1) does not meet for now. A quick verification sketch follows below.
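
Once the wheel from step (3) is installed, a short check like the one below (using PaddlePaddle's own self-test helper) confirms whether the GPU build is active:

```python
# Quick post-install check for paddlepaddle-gpu.
import paddle

print(paddle.__version__)                     # e.g. 2.4.2
print(paddle.device.is_compiled_with_cuda())  # True for the GPU wheel
paddle.utils.run_check()                      # runs a small built-in test on the available devices
print(paddle.device.get_device())             # e.g. gpu:0 when the GPU is being used
```

If run_check() reports errors, recheck the CUDA/cuDNN versions against the Paddle version chosen above.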

III. PyTorch: version selection and installation

Follow the same approach as for PaddlePaddle: choose the PyTorch build that matches the CUDA and cuDNN versions installed above and your GPU's compute capability (see the smoke test below).
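
The PyTorch website provides a selector that generates the matching pip command for a given CUDA version (for CUDA 11.7 this would be a cu117 wheel). Once a CUDA build of torch is installed, a minimal smoke test such as the one below confirms that tensors really run on the GPU; the matrix size is arbitrary:

```python
# Smoke test: run one matrix multiplication on the GPU.
import torch

assert torch.cuda.is_available(), "PyTorch does not see a CUDA device"
x = torch.rand(1024, 1024, device="cuda")
y = x @ x                  # executes on the GPU
torch.cuda.synchronize()   # wait for the kernel to finish
print(y.device, y.shape)   # cuda:0 torch.Size([1024, 1024])
```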

Errors

Error 1

Could not locate zlibwapi.dll. Please make sure it is in your library path!
Process finished with exit code -1073740791 (0xC0000409)

Fix: install zlibwapi.dll (the zlib data-compression library that cuDNN depends on):
    put the zlibwapi.lib file into C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/lib
    put the zlibwapi.dll file into C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/bin
    Reference: https://blog.csdn.net/qq_40280673/article/details/132229908

Error 2

 The GPU architecture in your current machine is Pascal, which is not compatible with Paddle installation with arch: 70 75 80 86 , it is recommended to install the corresponding wheel package according to the installation information on the official Paddle website    
 The installed paddlepaddle-gpu version was 2.5.1, which requires a GPU with compute capability 7.0 or higher; my card's compute capability is only 6.1, so that release cannot be used (hence the paddlepaddle-gpu==2.4.2 pin in step (3) above).