In memory of the first time I debugged my code successfully
Error message: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [19, 175, 32]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True)
Adding a single line, torch.autograd.set_detect_anomaly(True), pinpoints the operation that caused the failure.
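With anomaly detection on, the backward error also prints the forward-pass traceback of the offending line. A minimal repro sketch (an assumed toy example, not the author's actual model) of how this class of error arises and how the flag surfaces it:

```python
import torch

# Enable anomaly detection: the RuntimeError will now include a traceback
# pointing at the forward-pass line that produced the bad tensor.
torch.autograd.set_detect_anomaly(True)

x = torch.randn(4, requires_grad=True)
y = torch.relu(x)   # ReluBackward0 saves y (its output) for the backward pass
y.add_(1)           # in-place write: bumps y's version counter from 0 to 1
try:
    y.sum().backward()
except RuntimeError as e:
    # Same error family as above: "... modified by an inplace operation ...
    # is at version 1; expected version 0 instead"
    print(e)
```

The version-counter mismatch in the message (version 1 vs. expected 0) is exactly the in-place write tripping autograd.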
The trace showed the failure came from a line calling relu. Searching on that, I learned the culprit was relu's in-place behavior (which differs across versions); replacing relu with relu6 solved the problem!
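My reading (an assumption, not verified against the original code) is that the swap worked because the replacement ran out of place, so nothing mutated the tensor relu had saved for backward. Two generic ways to get the same effect, sketched on the toy tensors from above rather than the real model:

```python
import torch

x = torch.randn(4, requires_grad=True)

# Fix 1: do the follow-up op out of place, so relu's saved output
# stays at version 0 and ReluBackward0 is happy.
y = torch.relu(x)
y = y + 1            # out-of-place: returns a new tensor
y.sum().backward()

# Fix 2: if an in-place op is genuinely needed, mutate a clone
# instead of the tensor autograd saved.
x.grad = None
z = torch.relu(x).clone()
z.add_(1)            # safe: only the clone is mutated
z.sum().backward()
```

The same idea applies to module form: nn.ReLU(inplace=True) writes into its input, while the default nn.ReLU() (and nn.ReLU6()) does not.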