When computing the loss, depending on the task, you may want to add a new loss term to achieve some goal. But after I wrote the code, it threw an error:
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 128]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
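For context, this error means that a tensor PyTorch saved for the backward pass was modified in place afterwards, so its version counter no longer matches the saved one. A minimal toy reproduction (my own sketch, unrelated to the actual code here):

import torch

x = torch.randn(3, requires_grad=True)
y = x.exp()         # exp() saves its output y for the backward pass
y += 1              # in-place update bumps y's version counter
y.sum().backward()  # RuntimeError: ... is at version 1; expected version 0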
After consulting some references, I found what was going on. The original code already contained a loss:
loss = torch.nn.functional.mse_loss(noise_pred.float(), target.float(), reduction="none")
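With reduction="none", mse_loss returns the per-element squared error rather than a scalar. A quick check, with shapes made up purely for the demo:

import torch
import torch.nn.functional as F

noise_pred = torch.randn(2, 4, 128, 128)  # assumed shape, for illustration only
target = torch.randn(2, 4, 128, 128)
loss = F.mse_loss(noise_pred.float(), target.float(), reduction="none")
print(loss.shape)  # torch.Size([2, 4, 128, 128]), one value per element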
The loss I added is denoted loss1:
loss1 = torch.nn.functional.mse_loss(noise_pred[i,-1,:,:].float(), ((mask.float()-22)/56.), reduction="none")
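For completeness, here is roughly how the two terms would be combined before calling backward; the shapes, the weight lambda1, and the .mean() reductions below are illustrative assumptions, not from the original code:

import torch
import torch.nn.functional as F

# Toy stand-ins for the real tensors (all shapes/values are assumptions):
noise_pred = torch.randn(4, 4, 128, 128, requires_grad=True)
target = torch.randn(4, 4, 128, 128)
mask = torch.randint(0, 78, (128, 128)).float()
i = 0

loss = F.mse_loss(noise_pred.float(), target.float(), reduction="none")
loss1 = F.mse_loss(noise_pred[i, -1, :, :].float(),
                   (mask.float() - 22) / 56., reduction="none")

lambda1 = 0.1  # assumed weight for the extra term
total = loss.mean() + lambda1 * loss1.mean()
total.backward()  # fine in this toy case; nothing is modified in place here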
According to the references above, the error arose because I forgot to detach the newly added loss from the gradient computation. So I changed noise_pred[i,-1,:,:].float() to noise_pred[i,-1,:,:].float().detach(), and ((mask.float()-22)/56.) to ((mask.float()-22)/56.).detach(). The new loss then becomes:
loss1 = torch.nn.functional.mse_loss(noise_pred[i,-1,:,:].float().detach(), ((mask.float()-22)/56.).detach(), reduction="none")
and the error no longer appears.
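One caveat worth noting: with both arguments detached, loss1 is cut off from the autograd graph entirely, so it changes the reported loss value but contributes nothing to the gradient update. This is easy to verify (toy tensors with hypothetical names):

import torch
import torch.nn.functional as F

pred = torch.randn(128, 128, requires_grad=True)  # stand-in for noise_pred[i,-1,:,:]
tgt = torch.randn(128, 128)                       # stand-in for (mask - 22) / 56
loss1 = F.mse_loss(pred.float().detach(), tgt.detach(), reduction="none")
print(loss1.requires_grad)  # False: no gradient flows through loss1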