PyTorch torch.distributed for Single-Machine Multi-GPU Distributed Training

I. Advantages of distributed training:

torch.nn.DataParallel makes it convenient to spread the model and data across multiple GPUs for data-parallel training, but it suffers from slow training speed and unbalanced load. By comparison, torch.distributed has the following advantages:

1. distributed is multi-process: it launches n processes, one per GPU, whereas DataParallel is driven by a single process and is therefore limited by the GIL (Global Interpreter Lock).

2. (The main advantage) distributed maintains an optimizer in every process, so each process can carry out the gradient-descent update on its own. Once every process has computed its gradients, the gradients only need to be gathered and averaged, and the main process broadcasts the result to all processes; each process then uses this averaged gradient to update its parameters. Because the model is initialized identically in every process and the gradients used for the update are identical, the model parameters stay the same across all processes. (A minimal sketch of this gradient averaging is given after this list.)

DataParallel, by contrast, maintains only a single optimizer globally, and only one worker (the main GPU) performs the gradient-descent step. After DataParallel has gathered the gradients and updated the parameters on the main GPU, it has to broadcast the updated model parameters to the other GPUs, so the amount of data transferred grows substantially and training slows down considerably.

3. Again because distributed keeps an optimizer in every process, there is no load-imbalance problem, whereas with DataParallel the GPU that holds the optimizer uses noticeably more memory than the others.
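To make the gradient-averaging idea concrete, here is a minimal sketch of what the averaging step amounts to, written as an explicit dist.all_reduce over the parameter gradients. This is only an illustration; DistributedDataParallel performs this reduction for you (and overlaps it with the backward pass), and the helper name here is a placeholder:

import torch.distributed as dist

def average_gradients(model):
    # Average gradients across all processes after loss.backward()
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)  # sum over processes
            param.grad.data /= world_size                           # turn the sum into a mean

# Typical placement inside the training loop (DDP does this automatically):
# loss.backward()
# average_gradients(model)
# optimizer.step()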

II. Steps for using distributed training:

The functions and modules that need to be added are introduced first, in the order in which they are used; the complete code is given at the end.

1. Import the modules related to distributed training and define a few related parameters:

import torch.distributed as dist
import torch.multiprocessing as mp

rank: the id of the current process, used for inter-process communication.

ngpus_per_node: the number of GPUs on each node.

gpu: the id of the GPU currently in use.

parser.add_argument('--rank', default=0, type=int, help='node rank for distributed training')
parser.add_argument('--ngpus_per_node', default=2, type=int)
parser.add_argument('--gpu', default=None, type=int, help='GPU id to use.')
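If you would rather not hard-code ngpus_per_node, it can also be detected at runtime; a small sketch, keeping in mind that torch.cuda.device_count() only counts the GPUs visible through CUDA_VISIBLE_DEVICES:

import torch

ngpus_per_node = torch.cuda.device_count()  # number of GPUs this node exposes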

2. Start multiple processes with torch.multiprocessing.spawn:

mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))

multiprocessing creates the processes for us automatically. The line above starts ngpus_per_node processes, each of which runs main_worker, receiving the current process id as its first argument followed by the arguments in args=(ngpus_per_node, args).
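As a minimal, self-contained illustration of how spawn passes the process index as the first argument (the worker name and message here are placeholders, not part of the article's code):

import torch.multiprocessing as mp

def demo_worker(proc_id, total_procs, message):
    # spawn prepends the process index; the remaining arguments come from args=(...)
    print('process {}/{}: {}'.format(proc_id, total_procs, message))

if __name__ == '__main__':
    mp.spawn(demo_worker, nprocs=2, args=(2, 'hello'))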

3. Use init_process_group to set the backend and port the GPUs use to communicate (this and all following steps are inside main_worker):

main_worker takes an extra parameter, 'gpu', which is the id of the current process.

def main_worker(gpu, ngpus_per_node, args):
    args.gpu = gpu
    # global rank = node rank * GPUs per node + local gpu id (equal to gpu on a single machine)
    args.rank = args.rank * ngpus_per_node + gpu
    dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:23456', world_size=ngpus_per_node, rank=args.rank)
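If you prefer not to hard-code the address and port in init_method, the same process group can be initialized from environment variables instead. A sketch under that assumption, equivalent to the call above and meant to sit at the same point in main_worker:

import os
import torch.distributed as dist

os.environ.setdefault('MASTER_ADDR', '127.0.0.1')  # assumed address; adjust to your setup
os.environ.setdefault('MASTER_PORT', '23456')      # assumed port; adjust to your setup
dist.init_process_group(backend='nccl', init_method='env://',
                        world_size=ngpus_per_node, rank=args.rank)

# When main_worker finishes it is good practice to tear the group down:
# dist.destroy_process_group()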

4. Define the model and wrap it with DistributedDataParallel:

The wrapper conveniently gathers and averages the gradients computed on the different GPUs and synchronizes the result, so the gradient held by the model copy on each GPU is the averaged gradient.

(Here I divide batch_size by ngpus_per_node so that it becomes the per-GPU batch size; if a single card has enough memory you can skip this step.)

model = EfficientNet.from_pretrained('efficientnet-b5', num_classes=args.nclass)
cudnn.benchmark = True
if args.gpu is not None:
    torch.cuda.set_device(args.gpu)
    model.cuda(args.gpu)
    args.batch_size = int(args.batch_size / ngpus_per_node)
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
else:
    model.cuda()
    model = torch.nn.parallel.DistributedDataParallel(model)  
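One detail worth noting for later: once the model is wrapped, its parameters live under model.module, so a checkpoint saved from the wrapped model carries a 'module.' prefix on every key. A small sketch of saving the underlying model instead (the checkpoint path is a placeholder):

# save the unwrapped model so the checkpoint has no 'module.' prefix
torch.save(model.module.state_dict(), 'checkpoint.pth')

# it can then be loaded directly into a plain, unwrapped model:
# model = EfficientNet.from_pretrained('efficientnet-b5', num_classes=args.nclass)
# model.load_state_dict(torch.load('checkpoint.pth', map_location='cpu'))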

5. Use DistributedSampler to partition the dataset:

It partitions the dataset across the processes, so each process only needs to fetch the portion corresponding to its rank for training.

Note: when DistributedSampler is used, shuffle in the DataLoader must be False; shuffling is handled by the sampler itself, and train_sampler.set_epoch(epoch) should be called at the start of every epoch so that the shuffle order changes between epochs (see the sketch after the loader code below).

train_dataset = ImageFolder(traindir, transform=transform_train)
test_dataset = ImageFolder(testdir, transform=transform_test)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset, shuffle=True)
test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)
    
# data loader
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.batch_size, shuffle=False, drop_last=True, pin_memory=True, num_workers=16, sampler=train_sampler)
test_loader = torch.utils.data.DataLoader(
    test_dataset, batch_size=args.batch_size, shuffle=False, drop_last=False, pin_memory=True, num_workers=16, sampler=test_sampler)
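The per-epoch call mentioned above looks like this; without it, DistributedSampler reuses the same shuffled order in every epoch:

for epoch in range(args.start_epochs, args.epochs):
    train_sampler.set_epoch(epoch)  # re-seed the sampler so each epoch shuffles differently
    # ... run one training epoch and evaluation as usual ...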

6. A single process is enough for printing training progress and saving the model:

if args.rank % ngpus_per_node == 0:
    print('Test Epoch: {} Top1 {:.3f}% Top5 {:.3f}%'.format(epoch, top1_avg, top5_avg))

if args.rank % ngpus_per_node == 0:
    save_checkpoint(args, state_dict={
        'epoch': epoch,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),
        'prec1': prec1})
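Note that with this pattern each process evaluates only its own shard of the test set, so the printed numbers are the rank-0 shard's accuracy rather than the global one. If you want a metric over the whole test set, one option is to all-reduce the per-process counts before printing; a sketch under that assumption, where correct and total are placeholder counters accumulated during validation:

import torch
import torch.distributed as dist

stats = torch.tensor([correct, total], dtype=torch.float32, device='cuda:{}'.format(args.gpu))
dist.all_reduce(stats, op=dist.ReduceOp.SUM)  # sum the counts over all processes
global_top1 = 100.0 * stats[0].item() / stats[1].item()

if args.rank % ngpus_per_node == 0:
    print('Test Epoch: {} global Top1 {:.3f}%'.format(epoch, global_top1))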

III. Complete code for distributed training:

For reference only.

import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
import torch.multiprocessing as mp
import torch.distributed as dist
import sys
import argparse
import os
import numpy as np
import matplotlib.pyplot as plt
from efficientnet_pytorch import EfficientNet

from torch.optim.lr_scheduler import StepLR, MultiStepLR, CosineAnnealingWarmRestarts

from senet.se_resnet import FineTuneSEResnet50
from utils.init import init_weights
from utils.transform import get_transform_for_train, get_transform_for_test
from utils.loss_function import LabelSmoothingCrossEntropy
from utils.utils import adjust_learning_rate, accuracy, cosine_anneal_schedule
from utils.save import save_checkpoint
from utils.cutmix import cutmix_data
# from train import train
# from validate import validate
from graph_rise.graph_regularization import get_images_info, graph_rise_loss

os.environ['CUDA_VISIBLE_DEVICES'] = "2,3"

parser = argparse.ArgumentParser(description='PyTorch Training')
parser.add_argument('--dataroot', default='/data/userdata/set100-80/annotated_images_448/test1', type=str)
parser.add_argument('--logs_dir', default='./weights_dir/efficientnet-b5/test1', type=str)
parser.add_argument('--weights_dir', default='./weights_dir/efficientnet-b5/test1', type=str)
parser.add_argument('--test_weights_path', default="")
parser.add_argument('--init_type',  default='', type=str)
parser.add_argument('--weight_decay', '--wd', default=5e-4, type=float)

parser.add_argument('--epochs', default=300, type=int)
parser.add_argument('--start_epochs', default=0, type=int)
parser.add_argument('--batch_size', default=32, type=int)
parser.add_argument('--test_batch_size', default=32, type=int)
parser.add_argument('--lr', default=0.001, type=float)
parser.add_argument('--img_size', default=448, type=int)
parser.add_argument('--eval_epoch', default=1, type=int)
parser.add_argument('--nclass', default=113, type=int)
parser.add_argument('--multi_gpus', default=[0, 1, 2, 3])
parser.add_argument('--gpu_nums', default=1, type=int)
parser.add_argument('--resume', default=r"", type=str)
parser.add_argument('--milestones', default=[120, 220, 270])

parser.add_argument('--graph_reg', default=False)
parser.add_argument('--label_smooth', default=False)
parser.add_argument('--cutmix', default=False)
parser.add_argument('--mixup', default=False)
parser.add_argument('--cosine_decay', default=True)

parser.add_argument('--rank', default=0, type=int, help='node rank for distributed training')
parser.add_argument('--ngpus_per_node', default=2, type=int)
parser.add_argument('--gpu', default=None, type=int, help='GPU id to use.')

best_prec1 = 0


def main():
    print('Part1 : prepare for parameters <==> Begin')
    args = parser.parse_args()
    
    ngpus_per_node = args.ngpus_per_node
    print('ngpus_per_node:', ngpus_per_node)
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))


def train(args, train_loader, model, criterion, optimizer, epoch, name_list, name_dict, ngpus_per_node):
    # switch to train mode
    model.train()
    
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.cuda(args.gpu, non_blocking=True)
        labels = labels.cuda(args.gpu, non_blocking=True)
        # cutmix
        if args.cutmix:
            inputs, labels_a, labels_b, lam = cutmix_data(inputs, labels)
            outputs = model(inputs)
            loss = criterion(outputs, labels_a) * lam + criterion(outputs, labels_b) * (1. - lam)
        # mixup (mixup_data is assumed to come from the author's utilities,
        # analogous to cutmix_data; it is not among the imports shown above)
        elif args.mixup:
            inputs, labels_a, labels_b, lam = mixup_data(inputs, labels)
            outputs = model(inputs)
            loss = criterion(outputs, labels_a) * lam + criterion(outputs, labels_b) * (1. - lam)
        else:
            outputs = model(inputs)
            loss = criterion(outputs, labels)
        # graph-rise regularization
        if args.graph_reg:
            graph_loss = graph_rise_loss(outputs, labels, name_list, name_dict)
            loss = loss + graph_loss
        
        # measure accuracy and record loss
        prec1, prec3 = accuracy(outputs, labels, topk=(1, 3))  # this is metric on trainset

        # compute gradient and do Adam step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        if i % 10 == 0 and args.rank % ngpus_per_node == 0:
            print('Train Epoch: {0} Step: {1}/{2} Loss {loss:.4f} Top1 {top1:.3f} Top3 {top3:.3f} LR {lr:.7f}'.format(
                epoch, i, len(train_loader), loss=loss.item(), top1=prec1[0], top3=prec3[0], lr=optimizer.param_groups[0]['lr']))

            logs_dir = args.logs_dir
            if not os.path.exists(logs_dir):
                os.mkdir(logs_dir)
            logs_file = os.path.join(logs_dir, 'log_train.txt')

            with open(logs_file, 'a') as f:
                f.write('Train Epoch: {0} Step: {1}/{2} Loss {loss:.4f} Top1 {top1:.3f} Top3 {top3:.3f} LR {lr:.7f}\n'.format(
                epoch, i, len(train_loader), loss=loss.item(), top1=prec1[0], top3=prec3[0], lr=optimizer.param_groups[0]['lr']))


def validate(args, val_loader, model, criterion, epoch, ngpus_per_node):
    prec1_list = []
    prec5_list = []
    model.eval()

    for i, (inputs, labels) in enumerate(val_loader):
        inputs = inputs.cuda()
        labels = labels.cuda()

        # compute output
        outputs = model(inputs)

        # measure accuracy and record loss
        prec1, prec5 = accuracy(outputs, labels, topk=(1, 5))
        prec1_list.append(prec1[0].item())
        prec5_list.append(prec5[0].item())

    top1_avg = np.mean(prec1_list)
    top5_avg = np.mean(prec5_list)
    if args.rank % ngpus_per_node == 0:
        print('Test Epoch: {} Top1 {:.3f}% Top5 {:.3f}%'.format(epoch, top1_avg, top5_avg))

        logs_dir = args.logs_dir
        if not os.path.exists(logs_dir):
            os.mkdir(logs_dir)
        logs_file = os.path.join(logs_dir, 'log_test.txt')

        with open(logs_file, 'a') as f:
            f.write('Test Epoch: {} Top1 {:.3f}% Top5 {:.3f}%\n'.format(epoch, top1_avg, top5_avg))
    return top1_avg


def main_worker(gpu, ngpus_per_node, args):
    global best_prec1
    args.gpu = gpu
    
    # global rank = node rank * GPUs per node + local gpu id (equal to gpu on a single machine)
    args.rank = args.rank * ngpus_per_node + gpu
    dist.init_process_group(backend='nccl', init_method='tcp://127.0.0.1:23456', world_size=ngpus_per_node, rank=args.rank)
    print('rank', args.rank, ' use multi-gpus...')
    
    name_list, name_dict = get_images_info()
    
    if args.rank % ngpus_per_node == 0:
        print('Part1 : prepare for parameters <==> Done')
        print('Part2 : Load Network  <==> Begin')
    model = EfficientNet.from_pretrained('efficientnet-b5', num_classes=args.nclass)
    cudnn.benchmark = True
    if args.gpu is not None:
        torch.cuda.set_device(args.gpu)
        model.cuda(args.gpu)
        args.batch_size = int(args.batch_size / ngpus_per_node)
        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
    else:
        model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model)  

    if args.label_smooth:
        criterion = LabelSmoothingCrossEntropy()
    else:
        criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=0.9, weight_decay=args.weight_decay)
    if args.cosine_decay:
        scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=args.epochs)
    else:
        scheduler = MultiStepLR(optimizer, milestones=args.milestones, gamma=0.1)
    
    if args.resume:
        if os.path.isfile(args.resume):
            print("=> loading checkpoint '{}'".format(args.resume))
            # deserialize the checkpoint into a python dict
            if args.gpu is None:
                checkpoint = torch.load(args.resume)
            else:
                # Map model to be loaded to specified single gpu.
                loc = 'cuda:{}'.format(args.gpu)
                checkpoint = torch.load(args.resume, map_location=loc)
            args.start_epochs = checkpoint['epoch']
            best_prec1 = checkpoint['prec1']
#            if args.gpu is not None:
#                # best_acc1 may be from a checkpoint from a different GPU
#                best_prec1 = best_prec1.to(args.gpu)
            # load the model and optimizer state and resume training from where it stopped
            model.load_state_dict(checkpoint['state_dict'])
            optimizer.load_state_dict(checkpoint['optimizer'])
            print('Resuming from epoch {}, current best_acc: {:.3f}'.format(args.start_epochs, best_prec1))
        else:
            print("=> no checkpoint found at '{}'".format(args.resume))
    if args.rank % ngpus_per_node == 0:
        print('Part2 : Load Network  <==> Done')
        print('Part3 : Load Dataset  <==> Begin')

    dataroot = os.path.abspath(args.dataroot)
    traindir = os.path.join(dataroot, 'train_images')
    testdir = os.path.join(dataroot, 'test_images')
    
    # ImageFolder
    # mean=[0.948078, 0.93855226, 0.9332005], var=[0.14589554, 0.17054074, 0.18254866]
    transform_train = get_transform_for_train(mean=[0.948078, 0.93855226, 0.9332005], var=[0.14589554, 0.17054074, 0.18254866])
    transform_test = get_transform_for_test(mean=[0.948078, 0.93855226, 0.9332005], var=[0.14589554, 0.17054074, 0.18254866])
    
    train_dataset = ImageFolder(traindir, transform=transform_train)
    test_dataset = ImageFolder(testdir, transform=transform_test)
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset, shuffle=True)
    test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)
    
    # data loader
    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=args.batch_size, shuffle=False, drop_last=True, pin_memory=True, num_workers=16, sampler=train_sampler)
    test_loader = torch.utils.data.DataLoader(
        test_dataset, batch_size=args.batch_size, shuffle=False, drop_last=False, pin_memory=True, num_workers=16, sampler=test_sampler)
    
    if args.rank % ngpus_per_node == 0:
        print('Part3 : Load Dataset  <==> Done')
        print('Part4 : Train and Test  <==> Begin')

    for epoch in range(args.start_epochs, args.epochs):
        # adjust_learning_rate(args, optimizer, epoch, gamma=0.1)

        # make DistributedSampler reshuffle with a different order each epoch
        train_sampler.set_epoch(epoch)

        # train for one epoch
        train(args, train_loader, model, criterion, optimizer, epoch, name_list, name_dict, ngpus_per_node)

        # evaluate on validation set
        if epoch % args.eval_epoch == 0:
            prec1 = validate(args, test_loader, model, criterion, epoch, ngpus_per_node)

            is_best = prec1 > best_prec1
            best_prec1 = max(prec1, best_prec1)
            if args.rank % ngpus_per_node == 0:
                if not is_best:
                    print('Top1 Accuracy stay with {:.3f}'.format(best_prec1))
                else:   # save the best model
                    save_checkpoint(args, state_dict={
                        'epoch': epoch,
                        'state_dict': model.state_dict(),
                        'optimizer': optimizer.state_dict(),
                        'prec1': prec1,
                    })
                    print('Save the best model with accuracy {:.3f}'.format(best_prec1))
        scheduler.step()
    print('Part4 : Train and Test  <==> Done')



if __name__ == '__main__':
    main()

IV. Reference:

https://zhuanlan.zhihu.com/p/98535650
