RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 1, 96, 96]], which is output 0 of SigmoidBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
The key part to read is: [torch.cuda.FloatTensor [16, 1, 96, 96]], which is output 0 of SigmoidBackward0, is at version 1; expected version 0 instead.
This tells you the problem involves SigmoidBackward0. Debugging shows that the line raising the error is
loss.backward()
which means the tensor that PyTorch's Sigmoid saved for its backward computation was modified by an in-place operation, so the gradient at SigmoidBackward0 can no longer be computed.
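As the hint in the error message suggests, you can enable anomaly detection to find the exact forward-pass operation responsible. A minimal sketch (model, criterion, inputs, and targets are hypothetical names standing in for your own code):

import torch

# Anomaly detection makes the backward error also print the forward-pass
# traceback of the operation that produced the modified tensor.
torch.autograd.set_detect_anomaly(True)

output = model(inputs)             # your forward pass (hypothetical names)
loss = criterion(output, targets)
loss.backward()                    # now also points at the offending in-place op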
Method 1: replace the Sigmoid activation with a different activation function, such as
LeakyReLU()
This works because Sigmoid's backward pass is computed from its saved output (grad = out * (1 - out)), so that output must stay at version 0, whereas LeakyReLU's backward pass uses the saved input, so in-place changes to its output do not invalidate it. A sketch of the swap follows.
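For example (the module name, the attribute name linear_layers_, and the layer sizes are assumptions for illustration):

import torch.nn as nn

class Net(nn.Module):  # hypothetical module
    def __init__(self):
        super().__init__()
        # Swap the Sigmoid for LeakyReLU so the backward pass no longer
        # depends on the activation's saved output.
        self.linear_layers_ = nn.Sequential(
            nn.Linear(128, 64),
            nn.LeakyReLU(),  # was: nn.Sigmoid()
            nn.Linear(64, 1),
        )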
Method 2:
Inspect the line that produces the Sigmoid output (i.e., the layer in which you use the Sigmoid activation):
conv_layer = self.linear_layers_(conv_layer)
The tensor returned here is Sigmoid's output, and a later operation modifies it in place, so the value that SigmoidBackward0 saved is overwritten and the gradient computation fails. A simple way to fix this is the clone() function,
that is:
conv_layer = self.linear_layers_(conv_layer).clone()
clone() returns a copy in separate memory while still letting gradients flow through it, so the later in-place operation touches the copy and the output saved by Sigmoid stays at version 0.
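To see why this works, here is a self-contained sketch that reproduces both the error and the fix (the tensor shape matches the error message; the in-place multiplication stands in for whatever in-place operation your code performs):

import torch

x = torch.randn(16, 1, 96, 96, requires_grad=True)

# Without clone(): sigmoid saves its output for backward, and the in-place
# multiplication bumps that tensor's version from 0 to 1.
y = torch.sigmoid(x)
y *= 2.0                 # in-place op on the tensor SigmoidBackward0 saved
# y.sum().backward()     # -> RuntimeError: ... modified by an inplace operation

# With clone(): the in-place op modifies the copy, the saved output stays
# at version 0, and backward() succeeds.
z = torch.sigmoid(x).clone()
z *= 2.0
z.sum().backward()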