Installing and Configuring CUDA 12.5 on Windows

Working with large models usually calls for GPU acceleration, so this post walks through installing CUDA on my home PC to enable it.

I. Environment Preparation

Operating system: Windows 11 23H2

GPU: RTX 4070 Ti Super

GPU driver: 555.99 (from the NVIDIA GeForce driver download page)

Note: prefer the Studio version of the driver (otherwise many components have to be deselected during installation).

(Optional) Ideally, Visual Studio should already be installed; I use Visual Studio 2022 (not covered here).

II. Installing CUDA

1. Check the current NVIDIA driver version

Press Win+R, type cmd, and run nvidia-smi in the command prompt to see the driver version.

On this machine the driver version is 555.99.
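
The "CUDA Version" field in the header of the nvidia-smi output also shows the highest CUDA version the installed driver supports, which is useful for the next step. If you only want specific fields, nvidia-smi has a query mode as well; a minimal example (options as in recent driver releases):

nvidia-smi --query-gpu=name,driver_version --format=csv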

2. In the CUDA 12.5 Release Notes (Release Notes 12.5 documentation), check which CUDA version matches the current driver version.

Since the driver version here is 555.99, CUDA 12.5 can be installed.

If your driver version is older and you want a newer CUDA release, upgrade the driver first.

3. Download CUDA from the NVIDIA website:

https://developer.nvidia.com/cuda-toolkit-archive

Select the appropriate operating system, architecture, OS version, and installer type, then click Download.

After the download finishes, double-click the installer (most settings can be kept at their defaults; just click Next).

For the temporary extraction location of the installer files, keep the default and click OK.

Either the Express or the Custom installation works; Custom is recommended.

The installation location can be chosen via Browse if desired.

Installation complete.

To verify that CUDA installed correctly, open a new command prompt, change to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\extras\demo_suite, run the bandwidthTest.exe test program, and check that the result is "PASS".
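
A minimal check from the command prompt, assuming CUDA was installed to the default path; demo_suite also ships deviceQuery.exe, which prints the detected GPU and should likewise end with "Result = PASS":

cd "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\extras\demo_suite"
bandwidthTest.exe
deviceQuery.exe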

III. Installing cuDNN

Download a suitable cuDNN version from: https://developer.nvidia.com/rdp/cudnn-archive

Choose the Windows zip archive under "Download cuDNN v8.9.7 (December 5th, 2023), for CUDA 12.x". Note that an NVIDIA developer account is required to download it.

After downloading, simply extract the archive and copy its three folders (bin, include, lib) into the folders of the same names under the CUDA installation path (default: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5), merging them with the existing contents; a command-line sketch follows below.
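
A rough sketch of the copy step using xcopy from the command prompt, assuming the archive was extracted to a folder named cudnn (this folder name is illustrative; the real extracted folder has a longer versioned name) and CUDA is at the default path:

xcopy /E /Y "cudnn\bin" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\bin"
xcopy /E /Y "cudnn\include" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\include"
xcopy /E /Y "cudnn\lib" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\lib"

Here /E copies subdirectories (including empty ones) and /Y suppresses the overwrite prompts, which matches the "merge with the existing folders" step above.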

IV. Configuring Environment Variables

Right-click This PC, then choose Properties > Advanced system settings > Environment Variables to open the environment-variables window.

Find the Path variable, double-click it, and add entries pointing to the bin and libnvvp folders under the CUDA installation path (in practice the installer already adds these automatically, which saves the effort).

Open a new command prompt and run nvcc -V; if the CUDA compiler version information is printed, the installation succeeded.
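
Besides nvcc -V, a couple of quick sanity checks from the same command prompt (the Windows installer normally also defines the CUDA_PATH variable):

nvcc -V
where nvcc
echo %CUDA_PATH%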

Note: the installation is complete at this point. Some online tutorials say that zlib also needs to be installed; this is not necessary for newer CUDA versions, and the latest official installation guide explicitly states that zlib is no longer required on Windows.

V. Testing CUDA in PyTorch

Install PyTorch:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

If the download is too slow, grab the wheel directly: https://download.pytorch.org/whl/cu121/torch-2.3.1%2Bcu121-cp311-cp311-win_amd64.whl
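
The cu121 wheels ship with their own CUDA runtime libraries, so they only need a sufficiently new NVIDIA driver (the 555.99 driver used here is fine). A quick sanity check after installation:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"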

Write the following Python code:

# -*- coding: utf-8 -*-

import time
import torch


def torch_cuda():
    # Check whether CUDA is available
    print(torch.cuda.is_available())
    # Number of CUDA devices
    print(torch.cuda.device_count())
    # Name of the current CUDA device (index 0)
    print(torch.cuda.get_device_name(0))
    # Summary of current CUDA memory usage
    print(torch.cuda.memory_summary())

    # Matrix multiplication on the GPU
    start_time1 = time.time()
    x = torch.randn(20000, 20000).to("cuda")
    y = torch.randn(20000, 20000).to("cuda")
    z = torch.matmul(x, y)
    end_time1 = time.time()
    print("Time1 (With CUDA): ", end_time1 - start_time1)

    # Matrix multiplication on the CPU
    start_time2 = time.time()
    xx = torch.randn(20000, 20000)
    yy = torch.randn(20000, 20000)
    zz = torch.matmul(xx, yy)
    end_time2 = time.time()
    print("Time2 (Without CUDA): ", end_time2 - start_time2)


if __name__ == '__main__':
    torch_cuda()

The results are as follows:

With CUDA acceleration, the 20000 × 20000 matrix multiplication took about 1/8 of the original CPU time.
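
One caveat about the timing above: CUDA kernel launches are asynchronous, and the GPU interval as measured also includes generating the tensors on the CPU and copying them to the device, so the numbers are only a rough indication. A minimal sketch of a tighter comparison (timed_matmul is a hypothetical helper; it creates the tensors on the target device and synchronizes before stopping the clock):

import time
import torch


def timed_matmul(device: str, n: int = 20000) -> float:
    # Create both operands directly on the target device so transfer time is excluded
    x = torch.randn(n, n, device=device)
    y = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure tensor creation has finished
    start = time.time()
    torch.matmul(x, y)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the kernel to finish before stopping the clock
    return time.time() - start


if __name__ == "__main__":
    if torch.cuda.is_available():
        print("GPU:", timed_matmul("cuda"))
    print("CPU:", timed_matmul("cpu"))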

VI. Common Issues

CUDA installation fails

Solution: run the CUDA installer again, choose the Custom installation mode, and deselect the components that failed.

