RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 512, 16, 16]], which is output 0 of ConstantPadNdBackward, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later.
The error is raised by loss.backward().
According to the article: "This error can be resolved by setting inplace=False in nn.ReLU and nn.LeakyReLU in blocks.py."
Following that advice, I replaced F.relu with an out-of-place version:

import torch

def my_relu(x):
    # Out-of-place ReLU: allocates a new tensor instead of modifying x,
    # so autograd's saved tensors keep their original version counter.
    return torch.maximum(x, torch.zeros_like(x))
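For context, here is a minimal sketch that reproduces the same version-counter error and shows why the out-of-place replacement fixes it. Note this is a simplified assumption, not the original model: exp() is used only because its backward pass saves its output (in the real traceback the saved tensor is the output of ConstantPadNd), so an in-place ReLU applied to that output trips the version check on backward().

```python
import torch

def my_relu(x):
    # Out-of-place ReLU: returns a new tensor, leaving x untouched
    return torch.maximum(x, torch.zeros_like(x))

# --- Broken: in-place ReLU modifies a tensor autograd saved ---
a = torch.randn(4, requires_grad=True)
b = a.exp()          # ExpBackward saves its output b for the backward pass
torch.relu_(b)       # in-place op bumps b's version: 0 -> 1
err = None
try:
    b.sum().backward()
except RuntimeError as e:
    err = e          # "... is at version 1; expected version 0 instead"

# --- Fixed: out-of-place ReLU leaves the saved tensor untouched ---
a2 = torch.randn(4, requires_grad=True)
b2 = a2.exp()
c2 = my_relu(b2)     # new tensor; b2 stays at version 0
c2.sum().backward()  # succeeds
```

The same reasoning applies to nn.ReLU(inplace=True) / nn.LeakyReLU(inplace=True) inside a model: switching to inplace=False (or an out-of-place function like my_relu above) stops the activation from overwriting a tensor some earlier layer's backward still needs.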