Error Record
Error message
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 160 161 182 183 204 205 230 231 252 253 274 275 330 331 414 415 438 439 462 463 486 487 512 513 536 537 560 561 584 585
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
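As the message says, TORCH_DISTRIBUTED_DEBUG makes PyTorch report the names of the parameters that received no gradient. A minimal sketch of enabling it, either in the shell or at the top of the training script before the process group is initialized (train.py stands for your own launch script):

# In the shell: TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun train.py
# Or in Python, before torch.distributed.init_process_group() runs:
import os
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"  # or "INFO" for a shorter report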
Resolution
Original code
class AttentionBlock(nn.Module):
    def __init__(self, channels, resolution):  # other constructor arguments trimmed in the original post
        super().__init__()
        self.norm = normalization(channels)           # norm/qkv definitions restored so the snippet is self-consistent
        self.qkv = conv_nd(1, channels, channels * 3, 1)
        self.encoder_kv = conv_nd(1, 512, channels * 2, 1)   # this line was NOT commented out
        self.encoder_qkv = conv_nd(1, 512, channels * 3, 1)
        self.trans = nn.Linear(resolution * resolution * 9 + 128, resolution * resolution * 9)

    def forward(self, x, encoder_out=None):
        b, c, *spatial = x.shape
        x = x.reshape(b, c, -1)
        qkv = self.qkv(self.norm(x))
        if encoder_out is not None:
            # encoder_out = self.encoder_kv(encoder_out)     # this call is commented out, so self.encoder_kv is never used
            encoder_out = self.encoder_qkv(encoder_out)
        return encoder_out  # attention computation trimmed in the original post
Cause
self.encoder_kv is defined in __init__ but never used in forward, so its parameters never receive gradients during backward, and torch.nn.parallel.DistributedDataParallel raises the error above at the start of the next iteration.
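For intuition, here is a minimal, self-contained sketch (the Toy module and file name are made up for illustration) that should reproduce the same error: DDP creates a gradient-reduction bucket for every registered parameter, the unused one never receives a gradient, and the reducer is still waiting when the next iteration begins.

# toy_repro.py -- launch with: torchrun --nproc_per_node=1 toy_repro.py
import torch
import torch.distributed as dist
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 8)
        self.unused = nn.Linear(8, 8)  # registered in __init__ but never called in forward

    def forward(self, x):
        return self.used(x)

dist.init_process_group("gloo")  # torchrun supplies the rank/world-size environment variables
model = nn.parallel.DistributedDataParallel(Toy())
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(2):  # the RuntimeError should surface when the second iteration starts
    opt.zero_grad()
    model(torch.randn(4, 8)).sum().backward()
    opt.step()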
Fixed code
Method 1: comment out the unused module
class AttentionBlock(nn.Module):
    def __init__(self, channels, resolution):
        super().__init__()
        self.norm = normalization(channels)
        self.qkv = conv_nd(1, channels, channels * 3, 1)
        # self.encoder_kv = conv_nd(1, 512, channels * 2, 1)  # commented out: not used in forward
        self.encoder_qkv = conv_nd(1, 512, channels * 3, 1)
        self.trans = nn.Linear(resolution * resolution * 9 + 128, resolution * resolution * 9)

    def forward(self, x, encoder_out=None):
        b, c, *spatial = x.shape
        x = x.reshape(b, c, -1)
        qkv = self.qkv(self.norm(x))
        if encoder_out is not None:
            # encoder_out = self.encoder_kv(encoder_out)
            encoder_out = self.encoder_qkv(encoder_out)
        return encoder_out
Simply commenting out self.encoder_kv = conv_nd(1, 512, channels * 2, 1), the module that is never used in forward, is enough; the program then runs normally.
Method 2: pass find_unused_parameters=True to DDP
from torch.nn.parallel.distributed import DistributedDataParallel as DDP

self.ddp_model = DDP(
    self.model,
    device_ids=[self.device],
    # output_device=self.device,
    # broadcast_buffers=False,
    # bucket_cap_mb=128,
    find_unused_parameters=True,  # add this argument
)
Setting find_unused_parameters=True is also very effective. Note that it adds some overhead, since DDP must traverse the autograd graph every iteration to detect which parameters went unused, so Method 1 is preferable when the unused module can simply be removed.
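For reference, here is what Method 2 looks like outside a trainer class, as a stand-alone sketch (MyModel is a placeholder, and the nccl backend plus the LOCAL_RANK variable assume a torchrun launch):

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)

model = MyModel().to(local_rank)  # MyModel is a placeholder for your network
ddp_model = DDP(
    model,
    device_ids=[local_rank],
    find_unused_parameters=True,  # tolerate parameters that receive no gradient
)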
Note: when setting find_unused_parameters=True, remember to run the following snippet to find out which parameters are unused, and judge for yourself whether they are genuinely meant to be unused:
unused = [name for name, para in model.named_parameters() if para.grad is None]
print(unused)
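Keep in mind that para.grad is only populated by a backward pass (and optimizer.zero_grad() may reset it to None again), so the check belongs right after loss.backward(); a sketch of the placement, with compute_loss, x, and optimizer as illustrative names:

loss = compute_loss(ddp_model(x))  # compute_loss / x are illustrative placeholders
loss.backward()
unused = [name for name, para in ddp_model.named_parameters() if para.grad is None]
print(unused)  # decide for yourself whether these parameters should really be unused
optimizer.step()
optimizer.zero_grad()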