PyTorch: single-machine multi-GPU training with DistributedDataParallel
① Import the distributed package: import torch.distributed as dist
② Import the distributed sampler: from torch.utils.data.distributed import DistributedSampler
③ Add a local_rank command-line argument (filled in by the launcher): parser.add_argument('--local_rank', type=int, default=-1)
④ Initialize the process group: torch.distributed.init_process_group(backend='nccl', init_method='env://')
A sketch putting these steps together follows below.
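The following is a minimal, runnable sketch assembled from the four steps above. The model (a plain nn.Linear), the random TensorDataset, and the hyperparameters are placeholder assumptions for illustration; only the distributed calls themselves come from the steps in this section.

```python
import argparse
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # Step 3: the launcher passes --local_rank to every process
    parser = argparse.ArgumentParser()
    parser.add_argument('--local_rank', type=int, default=-1)
    args = parser.parse_args()
    torch.cuda.set_device(args.local_rank)

    # Step 4: init_process_group with env:// reads MASTER_ADDR,
    # MASTER_PORT, RANK and WORLD_SIZE from the environment
    dist.init_process_group(backend='nccl', init_method='env://')

    # Placeholder model and data; replace with your own
    model = nn.Linear(10, 2).cuda(args.local_rank)
    model = DDP(model, device_ids=[args.local_rank])

    dataset = TensorDataset(torch.randn(1024, 10),
                            torch.randint(0, 2, (1024,)))
    # DistributedSampler gives each process a disjoint shard of the data
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x = x.cuda(args.local_rank)
            y = y.cuda(args.local_rank)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()   # DDP all-reduces gradients across GPUs here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    main()
```

Launched, for example, with python -m torch.distributed.launch --nproc_per_node=NUM_GPUS train.py, which spawns one process per GPU and supplies --local_rank to each. Newer PyTorch versions favor torchrun, which sets the LOCAL_RANK environment variable instead of passing the argument.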



