When training with PyTorch's DDP (DistributedDataParallel), I ran into the following error:
TypeError: _queue_reduction(): incompatible function arguments. The following argument types are supported:
1. (process_group: torch.distributed.ProcessGroup, grads_batch: List[List[at::Tensor]], devices: List[int]) -> Tuple[torch.distributed.Work, at::Tensor]
Invoked with: <torch.distributed.ProcessGroupNCCL object at 0x7f19b4fe8d88>, [[tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0'), tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0'), tensor([0., 0., 0., ..., 0., 0., 0.], device='cuda:0'), None, None, tensor([[0., 0., 0., ..., 0., 0., 0.], ...], device='cuda:0'), tensor([0., 0., 0., ...
(tensor dump truncated; the None entries in the per-parameter gradient list are what trigger the error)
Cause
Some parameters in the model never take part in computing the loss, so their gradients stay None after loss.backward(). DDP, however, expects every process to supply a gradient for every registered parameter at each step, so the gradient reduction chokes on the None entries and raises the error above.
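A minimal sketch of the situation (Net is a hypothetical module; the self.unused layer is registered but never called in forward, so its gradients stay None):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(10, 1)
        self.unused = nn.Linear(10, 1)   # registered, but never used in forward

    def forward(self, x):
        return self.used(x)              # self.unused contributes nothing to the loss

model = Net()
loss = model(torch.randn(4, 10)).sum()
loss.backward()
print(model.used.weight.grad is None)    # False
print(model.unused.weight.grad is None)  # True -> breaks DDP's gradient reduction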
Solution 1
Use apex.parallel.DistributedDataParallel
instead of torch.nn.parallel.DistributedDataParallel.
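A sketch of the swap, reusing the hypothetical Net from above and assuming the process was launched with torch.distributed.launch so the rendezvous environment variables are already set. delay_allreduce is a real apex option; setting it to True, which postpones the all-reduce until backward has finished, is what is commonly suggested for models with unused parameters:

import torch
from apex.parallel import DistributedDataParallel as ApexDDP

torch.distributed.init_process_group(backend='nccl')  # env vars set by the launcher
model = Net().cuda()
# delay_allreduce=True: all-reduce runs once after backward completes,
# instead of being hooked onto each parameter's gradient
model = ApexDDP(model, delay_allreduce=True)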
Solution 2
Run the model on a single GPU first; right after loss.backward(), add the following code to find the parameters that never received a gradient:
import os  # needed for os._exit below

for name, param in model.named_parameters():
    if param.grad is None:       # this parameter did not take part in the loss
        print(name, param.shape)
os._exit(0)                      # stop once the unused parameters are listed
For those unused parameters, set requires_grad=False, for example:
for name, param in model.named_parameters():
    if name == 'xxx':            # substitute the parameter name printed above
        param.requires_grad = False
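A related point: once a parameter is frozen with requires_grad=False, it is usual to exclude it from the optimizer as well. A minimal sketch (SGD and the learning rate are placeholders, not prescribed by the original error):

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),  # skip frozen parameters
    lr=0.01,
)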
Solution 3
Force every parameter into the loss computation with a zero-weighted dummy term:
loss = loss + 0 * sum(p.sum() for p in model.parameters())
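Multiplying by 0 leaves the loss value unchanged, but the sum pulls every parameter into the autograd graph, so each one ends up with a zero gradient instead of None and the DDP reduction no longer fails. In a training loop this looks roughly like the sketch below (loader, criterion, and optimizer are assumed to exist):

for x, y in loader:
    loss = criterion(model(x), y)
    # zero-weighted dummy term: loss value unchanged,
    # but every parameter now appears in the autograd graph
    loss = loss + 0 * sum(p.sum() for p in model.parameters())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()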
References:
https://github.com/NVIDIA/apex/issues/265
https://github.com/pytorch/pytorch/issues/19791