Does PyTorch support AMD GPUs? Has anyone run a deep learning framework on an AMD Vega card?

2018-11-16 11:06:13 +08:00

AMD: Powerful But Lacking Support

HIP via ROCm unifies NVIDIA and AMD GPUs under a common programming language that is compiled into the respective GPU's native language before being compiled to GPU assembly. If all our GPU code were written in HIP, that would be a major milestone, but porting the TensorFlow and PyTorch code bases is difficult. TensorFlow has some support for AMD GPUs, and all major networks can be run on them, but if you want to develop new networks, some details might be missing and could prevent you from implementing what you need. The ROCm community is also not very large, so it is not straightforward to get issues fixed quickly. There also does not seem to be much money allocated for deep learning development and support on AMD's side, which slows the momentum.
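One practical consequence of HIP sitting behind the same front-end API is that you can check which backend a given PyTorch install was built against. A minimal sketch, assuming a ROCm build of PyTorch is installed (on a CUDA build, `torch.version.hip` is simply `None`):

```
import torch

# torch.version.hip is set on ROCm/HIP builds and is None on CUDA builds;
# torch.version.cuda behaves the other way around.
print("HIP version: ", torch.version.hip)
print("CUDA version:", torch.version.cuda)

# On a ROCm build with a working AMD GPU this still reports True,
# because the HIP backend is exposed through the torch.cuda API.
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
```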

However, AMD GPUs show strong performance compared to NVIDIA GPUs, and the next AMD GPU, the Vega 20, will be a computing powerhouse featuring Tensor-Core-like compute units.

Overall, I still cannot give a clear recommendation for AMD GPUs to ordinary users who just want their GPUs to work smoothly. More experienced users should have fewer problems, and by supporting AMD GPUs and the ROCm/HIP developers they help combat NVIDIA's monopoly position, which will greatly benefit everyone in the long term. If you are a GPU developer and want to make important contributions to GPU computing, an AMD GPU might be the best way to have a lasting impact. For everyone else, NVIDIA GPUs are probably the safer choice.

Before training PyTorch models on an AMD GPU, make sure of the following:

1. Install the AMD ROCm software stack. It provides the driver and runtime environment for AMD GPUs and can be downloaded from AMD's website.
2. Install the ROCm build of PyTorch. This build is optimized for AMD GPUs and can improve training speed and efficiency.
3. Install the dependencies the framework needs. Note that with ROCm you do not install CUDA or cuDNN; the ROCm stack ships its own equivalents (such as MIOpen). Installation guides are available on the PyTorch website.

Once everything is installed, you can train a PyTorch model on the AMD GPU as follows:

1. Import PyTorch and the other required libraries:

```
import torch
import torch.nn as nn
import torch.optim as optim
```

2. Define the model and the loss function:

```
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = nn.ReLU()(x)
        x = self.fc2(x)
        return x

model = Model()
criterion = nn.CrossEntropyLoss()
```

3. Define the optimizer:

```
optimizer = optim.Adam(model.parameters(), lr=0.01)
```

4. Load the dataset:

```
train_dataset = ...
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
```

5. Train the model:

```
for epoch in range(10):
    for i, data in enumerate(train_loader):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```

This is a simple PyTorch training example. When training on an AMD GPU you generally do not need to rewrite CUDA-specific calls: the ROCm build of PyTorch maps the `torch.cuda` API onto HIP, so code that uses the `"cuda"` device usually runs unchanged. See the sketch below for placing the model and data on the GPU.
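The steps above never move the model or the batches to the GPU, so as written they train on the CPU. A minimal, device-agnostic sketch of that missing step follows; the dataset here is a hypothetical random `TensorDataset` standing in for the elided `train_dataset`, and the `nn.Sequential` model mirrors the `Model` class from step 2:

```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# On ROCm builds of PyTorch the AMD GPU is exposed through the "cuda" device,
# so this line works unchanged on both NVIDIA and AMD hardware.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for the elided train_dataset: 320 random samples
# with 10 features and 2 classes, matching the model from step 2.
train_dataset = TensorDataset(torch.randn(320, 10), torch.randint(0, 2, (320,)))
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

# Same architecture as the Model class above, placed on the GPU with .to(device).
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

for epoch in range(10):
    for inputs, labels in train_loader:
        # Each batch must be moved to the same device as the model.
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
```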