RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 5]], which is output 0 of SubBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I ran into this error today while writing PyTorch training code.
The problem turned out to be these two lines:
x -= torch.mean(x,dim=1).reshape(-1,1)
x /= torch.std(x,dim=1).reshape(-1,1)
Changing them to the out-of-place forms:
x = x-torch.mean(x,dim=1).reshape(-1,1)
x = x/torch.std(x,dim=1).reshape(-1,1)
makes it run correctly. The actual cause: `-=` and `/=` are in-place operations. They overwrite `x` and bump its autograd version counter, and if the original value of `x` was saved for use in the backward pass, autograd detects the version mismatch at backward time and raises this error (the `Hint` in the message suggests `torch.autograd.set_detect_anomaly(True)` to locate the offending op). The out-of-place forms `x = x - ...` and `x = x / ...` allocate a new tensor instead, so the saved tensor stays intact. Takeaway: in training code, avoid `-=`, `/=`, and other in-place operations on tensors that participate in autograd; write the full out-of-place expression.
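The failure mode can be reproduced in a few lines. This is a minimal sketch (an assumed example, not the author's training code) using `torch.exp`, whose backward pass saves its own output, so an in-place update on that output triggers the same version-counter error:

```python
import torch

# In-place update on a tensor saved for backward -> RuntimeError.
x = torch.ones(3, requires_grad=True)
y = torch.exp(x)       # ExpBackward0 saves y, since dy/dx = y
try:
    y += 1             # in-place: bumps y's version counter to 1
    y.sum().backward() # backward needs y at version 0 -> error
except RuntimeError as e:
    print("in-place failed:", type(e).__name__)

# Out-of-place version allocates a new tensor; the saved y is untouched.
x = torch.ones(3, requires_grad=True)
y = torch.exp(x)
z = y + 1              # new tensor, y's version stays 0
z.sum().backward()     # gradient is exp(x), computed without error
print(x.grad)
```

The same pattern explains the original snippet: `x -= ...` produced the tensor flagged as "output 0 of SubBackward0", and the subsequent `x /= ...` modified it in place, raising its version from 0 to 1.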