Fixing an Anaconda deep-learning environment on Ubuntu that cannot call the GPU, with a Transformer example. Covers the mismatch between the PyTorch/CUDA version inside the virtual environment and the system CUDA version. A post on installing system CUDA may follow later.

1. Open a system terminal and run the nvidia-smi command to check your CUDA version:

(base) richard@richard-NH50-70RA:~$ nvidia-smi
Sun Apr 14 11:42:01 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04             Driver Version: 535.171.04   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1650        Off | 00000000:01:00.0 Off |                  N/A |
| N/A   45C    P0               9W /  50W |      6MiB /  4096MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1720      G   /usr/lib/xorg/Xorg                            4MiB |
+---------------------------------------------------------------------------------------+

2. CUDA Version: 12.2. Note that this is the highest CUDA runtime version the installed driver (535.171.04) supports, not necessarily a system-wide CUDA toolkit. The PyTorch conda packages bundle their own CUDA runtime, so the build you install only needs a CUDA version no newer than what the driver reports here.

3. Go to the PyTorch website and install a PyTorch build for CUDA 12.x (no newer than the 12.2 reported by the driver), for example:

conda install pytorch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 pytorch-cuda=12.1 -c pytorch -c nvidia
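After the install finishes, a quick sanity check (a minimal sketch; run it inside the new environment) confirms that the installed build matches the driver:

```python
import torch

# Version of the PyTorch build and the CUDA runtime it was compiled against
print(torch.__version__)   # e.g. 2.2.1
print(torch.version.cuda)  # e.g. 12.1 -- should not exceed the driver's CUDA version

# True only if the driver and a compatible CUDA build are visible
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. NVIDIA GeForce GTX 1650
```

If torch.cuda.is_available() prints False here, the environment is falling back to a CPU-only build and the version mismatch needs fixing before moving on.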

4. Open Anaconda, create a virtual environment, and install this PyTorch configuration into it.

5. Open PyCharm and select this environment's Python interpreter.
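To confirm PyCharm really is running the environment's interpreter rather than the system Python, a one-off check (a minimal sketch) can be run from the PyCharm console:

```python
import sys

# The path should point inside the Anaconda environment,
# e.g. ~/anaconda3/envs/<env-name>/bin/python, not /usr/bin/python
print(sys.executable)
print(sys.version)
```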

6. Use the following code to check whether the GPU can be called successfully, and run the Transformer example:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Check whether CUDA is available and set the device to GPU or CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if torch.cuda.is_available():
    print("GPU called successfully")
else:
    print("Failed to call the GPU; training will run on the CPU")

# Define a simple Transformer model
class TransformerModel(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(TransformerModel, self).__init__()
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.transformer_layer = nn.TransformerEncoderLayer(d_model=hidden_size, nhead=2, batch_first=True)
        self.transformer = nn.TransformerEncoder(self.transformer_layer, num_layers=2)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        embedded = self.embedding(x)
        transformed = self.transformer(embedded)
        output = self.fc(transformed[:, -1, :])  # take the output of the last time step
        return output

# Hyperparameters
input_size = 100   # vocabulary size
hidden_size = 128  # hidden dimension
output_size = 10   # number of output classes
batch_size = 32

# Move the model to the GPU (if available)
model = TransformerModel(input_size, hidden_size, output_size).to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

# Generate some random data and move it to the GPU (if available)
train_data = torch.randint(0, input_size, (1000, 20)).to(device)
train_labels = torch.randint(0, output_size, (1000,)).to(device)

# Wrap the data in a DataLoader
train_dataset = TensorDataset(train_data, train_labels)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# Train the model
epochs = 5
for epoch in range(epochs):
    total_loss = 0
    for batch_data, batch_labels in train_loader:
        optimizer.zero_grad()
        output = model(batch_data)
        loss = criterion(output, batch_labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch+1}, Loss: {total_loss / len(train_loader)}")

7. Output

GPU called successfully
Epoch 1, Loss: 2.4715860038995743
Epoch 2, Loss: 2.281775623559952
Epoch 3, Loss: 2.189281165599823
Epoch 4, Loss: 2.0735388584434986
Epoch 5, Loss: 1.8043855652213097
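In the example above the whole dataset is moved to the GPU once, which only works while it fits in the GTX 1650's 4 GiB of memory. A common variant (a sketch, not from the original post) keeps the tensors on the CPU and moves each mini-batch inside the training loop instead:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors stay on the CPU; only one batch at a time lives on the GPU
train_data = torch.randint(0, 100, (1000, 20))
train_labels = torch.randint(0, 10, (1000,))
train_loader = DataLoader(TensorDataset(train_data, train_labels),
                          batch_size=32, shuffle=True)

for batch_data, batch_labels in train_loader:
    # Copy just this batch to the chosen device
    batch_data = batch_data.to(device)
    batch_labels = batch_labels.to(device)
    # ... forward / backward / optimizer step as in the example above ...
```

This trades a small per-batch copy cost for the ability to train on datasets far larger than GPU memory.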
