RuntimeError: Expected to have finished reduction in the prior iteration before starting
a new one. This error indicates that your module has parameters that were not used in producing loss. Since `find_unused_parameters=True` is enabled,
this likely means that not all `forward` outputs participate in computing loss.
You can fix this by making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function.
Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 180 181
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
The error message is shown above. Essentially, it says that not all outputs of the model's `forward` participated in computing the loss. Following advice found online, I passed `find_unused_parameters=True` when wrapping the model in DistributedDataParallel, but the same error still occurred. The real fix is to inspect the network's outputs and check whether any of them were left out of the loss computation; if an output genuinely does not need to participate in the loss, you can simply delete it from the network, as sketched below.
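For context, here is a minimal sketch of how this situation typically arises and the two ways to fix it. It is not from the original post: `ToyModel`, `aux_head`, and all the dimensions are made up for illustration, and it assumes a single-node NCCL setup launched with torchrun.

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP


class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(10, 10)
        self.main_head = nn.Linear(10, 5)
        # Auxiliary head whose output is returned by forward but
        # (in the buggy version) never enters the loss.
        self.aux_head = nn.Linear(10, 5)

    def forward(self, x):
        feat = self.backbone(x)
        return self.main_head(feat), self.aux_head(feat)


def main():
    # Launch with: torchrun --nproc_per_node=<N> this_script.py
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = ToyModel().cuda()
    # With find_unused_parameters=True, DDP walks the autograd graph
    # backwards from ALL forward outputs to decide which parameters
    # will receive gradients. aux_head is reachable from aux_out, so
    # DDP still waits for its gradient; if aux_out is dropped from the
    # loss, that gradient never arrives and the RuntimeError is raised
    # at the start of the next iteration.
    ddp_model = DDP(model, device_ids=[local_rank],
                    find_unused_parameters=True)
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

    for step in range(2):  # the error surfaces on the second forward
        x = torch.randn(4, 10).cuda()
        target = torch.randint(0, 5, (4,)).cuda()
        main_out, aux_out = ddp_model(x)

        # Buggy: only main_out participates -> RuntimeError at step 1.
        # loss = F.cross_entropy(main_out, target)

        # Fix A: make every forward output participate in the loss.
        loss = (F.cross_entropy(main_out, target)
                + F.cross_entropy(aux_out, target))
        # Fix B (what the post suggests): if aux_out is truly unneeded,
        # delete self.aux_head and stop returning it from forward.

        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

To pinpoint which parameters are affected (indices 180 and 181 on rank 0 in the log above), launch with the environment variable the error message itself mentions, e.g. `TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun ...`; the error will then include the names of the parameters that did not receive gradients.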
Reference blog:
https://blog.csdn.net/weixin_44966641/article/details/120385212