PyTorch offers two main ways to parallelize training: DataParallel (DP) and DistributedDataParallel (DDP). DP is simpler to use, but it runs in a single process with multiple threads, and the primary GPU uses far more memory than the others. For those reasons, DDP is used here for multi-GPU training.
DDP is multi-process: the model is replicated onto each GPU, and the data is split evenly across the replicas.
Single-machine multi-GPU setup with DDP
1. Add a local_rank argument. It is filled in at runtime by torch.distributed.launch and tells each process which GPU it should use.
parser.add_argument('--local_rank', default=-1, type=int, help='local process rank for distributed training')
2. Initialize the process group, choosing a communication backend supported by PyTorch; for GPU communication use NCCL.
torch.distributed.init_process_group(backend="nccl")
3. Wrap the model
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True)
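Under the hood, after each backward pass DDP all-reduces the gradients so that every replica ends up with the same averaged gradient before the optimizer step, which keeps the model copies in sync. A minimal pure-Python sketch of that averaging (the function and names are illustrative, not DDP's actual API):

```python
# Each replica computes gradients on its own shard of the batch;
# an all-reduce then replaces every local gradient with the mean
# across replicas, so all model copies take the same optimizer step.
def allreduce_mean(per_replica_grads):
    """per_replica_grads: one gradient list per replica (process)."""
    world_size = len(per_replica_grads)
    n_params = len(per_replica_grads[0])
    averaged = [
        sum(grads[i] for grads in per_replica_grads) / world_size
        for i in range(n_params)
    ]
    # every replica receives the same averaged gradients
    return [list(averaged) for _ in range(world_size)]

grads = allreduce_mean([[1.0, 2.0], [3.0, 4.0]])  # 2 replicas, 2 params
# both replicas now hold [2.0, 3.0]
```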
4. Distribute the data
train_sampler = DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args.batch_size, shuffle=False, num_workers=args.workers, pin_memory=True, sampler=train_sampler)
When defining the DataLoader, set shuffle to False and pass the sampler; shuffling is handled by DistributedSampler instead.
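To see why shuffling moves into the sampler, here is a rough pure-Python sketch of how DistributedSampler partitions indices: the full index list is (optionally) shuffled with the same seed on every rank, padded so it divides evenly, and then each rank takes a strided slice. This is a simplified illustration, not the library's exact implementation:

```python
import math
import random

def shard_indices(dataset_len, num_replicas, rank, shuffle=True, seed=0):
    # sketch of DistributedSampler-style partitioning
    indices = list(range(dataset_len))
    if shuffle:
        random.Random(seed).shuffle(indices)  # same seed on every rank
    # pad so every rank gets the same number of samples
    total_size = math.ceil(dataset_len / num_replicas) * num_replicas
    indices += indices[: total_size - dataset_len]
    # rank r takes indices r, r + num_replicas, r + 2*num_replicas, ...
    return indices[rank:total_size:num_replicas]

# 10 samples split over 2 GPUs (shuffle off for readability):
print(shard_indices(10, num_replicas=2, rank=0, shuffle=False))  # [0, 2, 4, 6, 8]
print(shard_indices(10, num_replicas=2, rank=1, shuffle=False))  # [1, 3, 5, 7, 9]
```

Because each rank sees a disjoint shard, a per-GPU batch_size of 3 on 2 GPUs gives an effective global batch size of 6.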
Complete example (main.py)
import argparse
import os
import shutil
import time
import warnings
import numpy as np
warnings.filterwarnings('ignore')
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
from torch.utils.data.distributed import DistributedSampler
from models import DeepLab
from dataset import Cityscaples
parser = argparse.ArgumentParser(description='DeepLab')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('--epochs', default=100, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('-b', '--batch-size', default=3, type=int,
metavar='N')
parser.add_argument('--lr', default=0.01, type=float, help='initial learning rate')
parser.add_argument('--momentum', default=0.9, type=float, help='SGD momentum')
parser.add_argument('--weight-decay', default=1e-4, type=float, help='weight decay')
parser.add_argument('--local_rank', default=0, type=int, help='local process rank for distributed training')
args = parser.parse_args()
torch.distributed.init_process_group(backend="nccl")  # initialize the process group
print("Use GPU: {} for training".format(args.local_rank))
# create model
model = DeepLab()
torch.cuda.set_device(args.local_rank)  # bind this process to its GPU
model = model.cuda()  # move the model onto that GPU
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True)  # data parallelism
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(model.parameters(), args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
train_dataset = Cityscaples()
train_sampler = DistributedSampler(train_dataset)  # partition the data across processes
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args.batch_size, shuffle=False, num_workers=args.workers, pin_memory=True, sampler=train_sampler)  # shuffle must be False; the sampler shuffles
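The example above stops before the training loop. One detail matters there: call train_sampler.set_epoch(epoch) at the start of every epoch, otherwise DistributedSampler reuses the same shuffling order each epoch. The following self-contained sketch shows the loop structure on CPU with the gloo backend and a single process, with a toy linear model standing in for DeepLab, so it can run without GPUs:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# single-process "cluster" on CPU so the sketch runs anywhere;
# torch.distributed.launch would normally set these env vars
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = nn.Linear(4, 2)  # toy stand-in for DeepLab
model = nn.parallel.DistributedDataParallel(model)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

dataset = TensorDataset(torch.randn(16, 4), torch.randint(0, 2, (16,)))
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=4, shuffle=False, sampler=sampler)

for epoch in range(2):
    sampler.set_epoch(epoch)  # reshuffle differently each epoch
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()  # DDP averages gradients across processes here
        optimizer.step()

dist.destroy_process_group()
```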
Launch from the command line
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 main.py
- CUDA_VISIBLE_DEVICES selects which GPUs to use
- --nproc_per_node sets the number of processes to launch, one per GPU
- The script must define a --local_rank argument, which the launcher passes to each process