1. Inplace operation problem
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 128, 60, 116]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
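Following the hint in the message, anomaly detection can be switched on so that the eventual backward error also prints the traceback of the forward op that produced the offending tensor. A minimal sketch (the forward pass below is a placeholder for the real training step, not the actual model):

```python
import torch

# Global switch; it adds noticeable overhead, so enable it only while debugging.
torch.autograd.set_detect_anomaly(True)

# Or limit it to the suspect iteration with the context manager:
with torch.autograd.detect_anomaly():
    x = torch.randn(4, requires_grad=True)  # placeholder for the real forward pass
    loss = (x * 2).sum()
    loss.backward()  # a failing op would now also report its forward-pass traceback
```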
I also tried the usual fixes found online, for example: not reusing the same variable name (didn't help), and moving the gradient update, optimizer.step(), to the very end of the loop (also not the issue, since this same training code trains other networks such as GoogLeNet without any problem). The actual cause is that the network applies two nn.ReLU layers back to back, as reproduced in the sketch below.
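A minimal sketch of that pattern, assuming the second ReLU was constructed with inplace=True (the tensor shape and module names here are placeholders, not the actual network): the in-place second ReLU bumps the version counter of the first ReLU's output, which ReluBackward0 needs during backward, producing exactly the error above. Making the second activation out-of-place (or dropping the redundant ReLU) avoids it.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, requires_grad=True)

# --- Pattern that reproduces the error ----------------------------------
relu1 = nn.ReLU()              # non-inplace: its output is saved for ReluBackward0
relu2 = nn.ReLU(inplace=True)  # assumption: the second ReLU was created in-place

y = relu1(x)                   # y is saved for backward at version 0
z = relu2(y)                   # rewrites y in place -> y is now at version 1
try:
    z.sum().backward()
except RuntimeError as e:
    print(e)  # "... output 0 of ReluBackward0, is at version 1; expected version 0 ..."

# --- Fix: use an out-of-place second ReLU (or remove the redundant one) --
relu2 = nn.ReLU(inplace=False)
z = relu2(relu1(x))
z.sum().backward()             # backward now runs without the version error
```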