PyTorch Single-Machine Multi-GPU Parallel Training

Principle

The basic flow of multi-GPU training:

  • First, load the model onto a primary device.
  • Replicate the model (read-only copies) onto the other devices.
  • Split the large data batch evenly across the devices.
  • Finally, merge the gradients computed on all devices and update the model parameters on the primary device (see the sketch after this list).
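This scatter/replicate/gather flow is what torch.nn.DataParallel performs on every forward pass. Below is a minimal sketch of that flow using the helper functions in torch.nn.parallel; it is a simplified illustration rather than the library's actual implementation, and it assumes module already lives on device_ids[0]:

import torch
from torch.nn.parallel import replicate, scatter, parallel_apply, gather

def data_parallel_forward(module, inputs, device_ids):
    # 1. replicate the model from the primary device onto every device
    replicas = replicate(module, device_ids)
    # 2. split the batch along dim 0, one chunk per device
    inputs = scatter(inputs, device_ids)
    # 3. run each replica on its own chunk, in parallel
    outputs = parallel_apply(replicas[:len(inputs)], inputs)
    # 4. gather the per-device outputs back onto the primary device
    return gather(outputs, device_ids[0])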

Code Implementation

Using MNIST as an example.

  1. First, set the device IDs:
device_ids = [3, 4, 6, 7]
  2. When creating the DataLoader, scale the batch size by the number of devices. With BATCH_SIZE = 64 and 4 GPUs, each loader batch holds 256 samples, so every GPU still processes 64:
data_loader_train = torch.utils.data.DataLoader(dataset=data_train, batch_size=BATCH_SIZE * len(device_ids), shuffle=True, num_workers=2)
# note: the batch size is scaled by the number of devices
  3. When building the model, wrap it with torch.nn.DataParallel() and move it onto the primary device (DataParallel gathers the outputs back onto device_ids[0] by default):
model = Model()
model = torch.nn.DataParallel(model, device_ids=device_ids)  # declare all available devices
model = model.cuda(device=device_ids[0])  # put the model on the primary device
  4. During training, the data also goes onto the primary device:
X_train = X_train.cuda(device=device_ids[0])
y_train = y_train.cuda(device=device_ids[0])

Complete code:

import torch
from torchvision import datasets, transforms
from tqdm import tqdm

device_ids = [3, 4, 6, 7]
BATCH_SIZE = 64

# MNIST images are single-channel, so Normalize takes one mean/std value
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize(mean=[0.5], std=[0.5])])
data_train = datasets.MNIST(root="./data/",
                            transform=transform,
                            train=True,
                            download=True)
data_test = datasets.MNIST(root="./data/",
                           transform=transform,
                           train=False)

# note: the batch size is scaled by the number of devices
data_loader_train = torch.utils.data.DataLoader(dataset=data_train,
                                                batch_size=BATCH_SIZE * len(device_ids),
                                                shuffle=True,
                                                num_workers=2)
data_loader_test = torch.utils.data.DataLoader(dataset=data_test,
                                               batch_size=BATCH_SIZE * len(device_ids),
                                               shuffle=True,
                                               num_workers=2)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(stride=2, kernel_size=2),
        )
        self.dense = torch.nn.Sequential(
            torch.nn.Linear(14 * 14 * 128, 1024),
            torch.nn.ReLU(),
            torch.nn.Dropout(p=0.5),
            torch.nn.Linear(1024, 10)
        )

    def forward(self, x):
        x = self.conv1(x)
        x = x.view(-1, 14 * 14 * 128)
        x = self.dense(x)
        return x


model = Model()
model = torch.nn.DataParallel(model, device_ids=device_ids)  # declare all available devices
model = model.cuda(device=device_ids[0])  # put the model on the primary device

cost = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

n_epochs = 50
for epoch in range(n_epochs):
    running_loss = 0.0
    running_correct = 0
    print("Epoch {}/{}".format(epoch + 1, n_epochs))
    print("-" * 10)
    model.train()
    for data in tqdm(data_loader_train):
        X_train, y_train = data
        # note: the data also goes onto the primary device
        X_train, y_train = X_train.cuda(device=device_ids[0]), y_train.cuda(device=device_ids[0])

        outputs = model(X_train)
        _, pred = torch.max(outputs, 1)
        optimizer.zero_grad()
        loss = cost(outputs, y_train)

        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        running_correct += torch.sum(pred == y_train).item()
    testing_correct = 0
    model.eval()  # disable dropout for evaluation
    with torch.no_grad():
        for data in data_loader_test:
            X_test, y_test = data
            X_test, y_test = X_test.cuda(device=device_ids[0]), y_test.cuda(device=device_ids[0])
            outputs = model(X_test)
            _, pred = torch.max(outputs, 1)
            testing_correct += torch.sum(pred == y_test).item()
    print("Loss is:{:.4f}, Train Accuracy is:{:.4f}%, Test Accuracy is:{:.4f}%".format(running_loss / len(data_train),
                                                                                       100 * running_correct / len(data_train),
                                                                                       100 * testing_correct / len(data_test)))
# save the wrapped module's weights so the keys carry no "module." prefix
torch.save(model.module.state_dict(), "model_parameter.pkl")
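Because DataParallel stores the wrapped network in its .module attribute, saving model.module.state_dict() keeps the parameter names compatible with a plain Model. A minimal loading sketch under that assumption:

# load the checkpoint back into a plain (non-parallel) model
model = Model()
state_dict = torch.load("model_parameter.pkl", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()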

A shortcut for step 4 above:

  • Set the primary GPU as the default CUDA device:
torch.cuda.set_device('cuda:{}'.format(device_ids[0]))

This saves you from writing .cuda(device=device_ids[0]) everywhere afterwards: a bare .cuda() now targets the primary device (see the sketch below).
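A minimal sketch of the training setup under this shortcut (device_ids as above):

torch.cuda.set_device('cuda:{}'.format(device_ids[0]))  # make GPU 3 the current CUDA device

model = torch.nn.DataParallel(Model(), device_ids=device_ids).cuda()  # .cuda() now means cuda:3
for data in data_loader_train:
    X_train, y_train = data
    X_train, y_train = X_train.cuda(), y_train.cuda()  # no explicit device argument needed
    outputs = model(X_train)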

