On the "modified by an inplace operation" error raised by multiple backward() calls in PyTorch.
When code performs two backward passes, the following error can appear:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
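Before building the full scenario, here is a minimal, self-contained sketch (unrelated to the networks below) that reproduces the same class of error: an in-place operation modifies a tensor that autograd saved for the backward pass, so the tensor's version counter no longer matches the version recorded when the graph was built.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
z = (y * y).sum()  # the multiplication saves y for the backward pass
y.add_(1)          # in-place op bumps y's version counter

error = None
try:
    z.backward()
except RuntimeError as e:
    error = e  # "... has been modified by an inplace operation ..."
print(error)
```

Replacing the in-place `y.add_(1)` with an out-of-place `y = y + 1` avoids the error, because the saved tensor is left untouched.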
Under what circumstances does this happen? Let's first construct a scenario:
import torch
from torch import nn
from torch.nn import functional as F
from torch import optim
To keep the problem simple, we build two identical neural networks:
class Net_1(nn.Module):
    def __init__(self):
        super(Net_1, self).__init__()
        self.linear_1 = nn.Linear(1, 10)
        self.linear_2 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.linear_1(x)
        x = F.relu(x)
        x = self.linear_2(x)
        x = F.softmax(x, dim=1)
        return x
class Net_2(nn.Module):
    def __init__(self):
        super(Net_2, self).__init__()
        self.linear_1 = nn.Linear(1, 10)
        self.linear_2 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.linear_1(x)
        x = F.relu(x)
        x = self.linear_2(x)
        x = F.softmax(x, dim=1)
        return x
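One detail worth noting about these networks (a side observation, not part of the original walkthrough): because linear_2 outputs a single value per sample, F.softmax(x, dim=1) normalizes over a dimension of size 1, so the output is always exactly 1.0 and no gradient flows back through the softmax. A quick check:

```python
import torch
from torch import nn
from torch.nn import functional as F

# With an output layer of size 1, softmax over dim=1 normalizes a
# single logit, so every result is exp(a) / exp(a) = 1.0.
linear = nn.Linear(10, 1)
out = F.softmax(linear(torch.randn(4, 10)), dim=1)
print(out.squeeze())  # each entry is 1.0
```

This does not affect whether the RuntimeError appears, but it is worth keeping in mind when adapting the example to real training code.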
The execution flow: define the two models Net_1 and Net_2, an optimizer for each (optimizer_n1 and optimizer_n2), and a loss function criterion:
n_1 = Net_1()