import torch
import torch.distributed
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# 1) Initialize the process group
torch.distributed.init_process_group(backend="nccl")

# 2) Pin each process to its own GPU
# (note: get_rank() equals the local rank only when training on a single node)
local_rank = torch.distributed.get_rank()
print('local_rank', local_rank)
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)

# 3) Use DistributedSampler so each process sees its own shard of the data
rand_loader = DataLoader(dataset=dataset,
                         batch_size=batch_size,
                         sampler=DistributedSampler(dataset))

# 4) Move the model to its GPU before wrapping
model.to(device)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")

# 5) Wrap with DistributedDataParallel
model = torch.nn.parallel.DistributedDataParallel(model,
                                                   device_ids=[local_rank],
                                                   output_device=local_rank)
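For context, a minimal sketch of how the wrapped model and sampler are typically driven in the training loop; the optimizer, loss, and num_epochs below are placeholders, not part of the original setup:

# Hypothetical training loop showing how the pieces above fit together
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # placeholder optimizer
criterion = torch.nn.CrossEntropyLoss()                   # placeholder loss

sampler = rand_loader.sampler  # the DistributedSampler created in step 3
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)   # reshuffle differently each epoch across processes
    for inputs, targets in rand_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()        # DDP synchronizes gradients across GPUs here
        optimizer.step()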
Run:
python -m torch.distributed.launch --nproc_per_node=2 main.py
When parsing arguments with argparse.ArgumentParser(), you may hit an error such as
unrecognized arguments: --local-rank=2
Usually it is enough to add
parser.add_argument("--local_rank", default=-1, type=int, help="node rank for distributed training")
On a newer server, however, this fix stopped working; the temporary workaround was to delete the argparse.ArgumentParser() code entirely to clear the error.
The error shows that --local-rank is not recognized. The argument added previously had always been parser.add_argument("--local_rank", default=-1, type=int). Changing it to parser.add_argument("--local-rank", default=-1, type=int) (underscore replaced by a hyphen) makes it run normally; newer versions of the PyTorch launcher appear to pass the hyphenated --local-rank rather than --local_rank, so the option string has to match.
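A more robust pattern (a sketch, not from the original notes) is to register both spellings for the same destination and fall back to the LOCAL_RANK environment variable that newer launchers export:

import argparse
import os

parser = argparse.ArgumentParser()
# Accept both --local_rank (old launcher) and --local-rank (new launcher);
# argparse stores the value under args.local_rank either way.
parser.add_argument("--local_rank", "--local-rank", dest="local_rank",
                    default=-1, type=int, help="node rank for distributed training")
args, _ = parser.parse_known_args()  # ignore any other unrecognized arguments

# Fallback: newer launchers (e.g. torchrun) also set the LOCAL_RANK env variable
if args.local_rank == -1:
    args.local_rank = int(os.environ.get("LOCAL_RANK", -1))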
Error:
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Gradient descent works fine on a single GPU; the error only appears with multiple GPUs.
Pass find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel so that DDP detects and ignores the parameters that did not take part in producing the loss:
model = torch.nn.parallel.DistributedDataParallel(model,
                                                   find_unused_parameters=True,
                                                   device_ids=[local_rank],
                                                   output_device=local_rank)
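If you would rather find and remove the unused parameters instead of masking them (find_unused_parameters=True adds some per-iteration overhead), one way to locate them is a sketch like the following, run on a single GPU before wrapping with DDP; the criterion, inputs, and targets here are placeholders:

# Hypothetical debugging snippet: after one backward pass, any parameter that
# still has no gradient never contributed to the loss.
model.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
for name, param in model.named_parameters():
    if param.requires_grad and param.grad is None:
        print("unused parameter:", name)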