While running multi-GPU training on a rented server I hit a strange error; the same code runs fine on a single GPU. The error message is as follows:
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 873, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`,
Most of the solutions you can find online just repeat what the error message already says: go find the spot where parameters are not participating in the loss computation. However, I had run this exact code multi-GPU only a few days earlier, so the problem was most likely not in my code. My debugging process was as follows:
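(For reference, if you do want to track the culprit down yourself, one common trick is to run a single forward/backward pass on the bare model, without the DDP wrapper, and check which parameters never received a gradient. A minimal sketch; model and loss stand in for your own module and loss tensor:)

# After the first backward pass, trainable parameters whose .grad is
# still None never took part in producing the loss; these are exactly
# the parameters DDP complains about.
loss.backward()
for name, param in model.named_parameters():
    if param.requires_grad and param.grad is None:
        print(name)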
Following the traceback, I went straight to distributed.py:
vim /root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/distributed.py
Then I located find_unused_parameters and set it to True. You can change the default in __init__, or set self.find_unused_parameters directly; either way, make it True, which amounts to telling DDP to go find the problem itself. And then... and then it simply stopped erroring!?!
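Editing the installed package works, but the cleaner fix is to pass the flag where you wrap the model in your own training script, exactly as the error message suggests. A minimal sketch, assuming model and local_rank already exist in your setup:

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# model and local_rank are placeholders for your own module and the
# rank assigned by the launcher (e.g. torchrun / torch.distributed.launch)
model = model.cuda(local_rank)
model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)

Note that find_unused_parameters=True makes the reducer traverse the autograd graph every iteration to mark parameters that will not receive gradients, so it carries a small per-step overhead.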
I'd like to call this a "磁小轨" error: the kind that vanishes the moment you ask it to find itself.