On multiple backward() calls in PyTorch: "enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True)."

When a piece of code performs two backward passes, you may run into this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
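Under the hood, autograd keeps a version counter on every tensor it saves for the backward pass; if an in-place operation bumps that counter between the forward pass and backward(), the saved value is stale and autograd refuses to use it. A minimal sketch (not from the original scenario) that triggers the same class of error:

import torch

a = torch.rand(3, requires_grad=True)
b = torch.exp(a)      # exp saves its output b for the backward pass
b.add_(1)             # in-place op: b's version counter is bumped
b.sum().backward()    # RuntimeError: ... modified by an inplace operation

In the two-network scenario constructed below, the in-place operation is less obvious: it is an optimizer step that updates parameters in place between the two backward calls.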

When does this happen? Let's construct a scenario.

import torch
from torch import nn
from torch.nn import functional as F
from torch import optim

To keep the problem simple, build two identical neural networks:

class Net_1(nn.Module):
    def __init__(self):
        super(Net_1, self).__init__()

        self.linear_1 = nn.Linear(1, 10)
        self.linear_2 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.linear_1(x)      # [N, 1] -> [N, 10]
        x = F.relu(x)
        x = self.linear_2(x)      # [N, 10] -> [N, 1]
        x = F.softmax(x, dim=1)
        return x

class Net_2(nn.Module):
    def __init__(self):
        super(Net_2, self).__init__()

        self.linear_1 = nn.Linear(1, 10)
        self.linear_2 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.linear_1(x)      # [N, 1] -> [N, 10]
        x = F.relu(x)
        x = self.linear_2(x)      # [N, 10] -> [N, 1]
        x = F.softmax(x, dim=1)
        return x
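A quick forward-pass sanity check (an illustrative snippet, not from the original). Worth noting: because the last layer has a single output unit, F.softmax(x, dim=1) over a [N, 1] tensor always returns 1.0, so these toy networks exist purely to exercise the gradient mechanics.

net = Net_1()
x = torch.randn(4, 1)    # a batch of 4 scalar inputs
out = net(x)
print(out.shape)         # torch.Size([4, 1])
print(out)               # every entry is 1.0: softmax over a single column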

Execution flow
Define the two models Net_1 and Net_2, one optimizer per model (optimizer_n1 and optimizer_n2), and a loss function criterion:

n_1 = Net_1()
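The original listing breaks off after this line. Below is a hedged reconstruction of the rest of the setup and of a training step that reproduces the error; the optimizer type, learning rate, loss function, and batch shapes (SGD with lr=0.01, MSELoss, a [10, 1] batch) are assumptions chosen to be consistent with the error message above, not values from the original post.

n_2 = Net_2()
optimizer_n1 = optim.SGD(n_1.parameters(), lr=0.01)
optimizer_n2 = optim.SGD(n_2.parameters(), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(10, 1)         # input batch
target = torch.randn(10, 1)    # dummy regression target

# Chain the networks: n_2 consumes n_1's output.
out_1 = n_1(x)
out_2 = n_2(out_1)

loss_1 = criterion(out_1, target)
loss_2 = criterion(out_2, target)

optimizer_n1.zero_grad()
loss_1.backward(retain_graph=True)  # keep the shared graph for the second backward
optimizer_n1.step()                 # updates n_1's parameters *in place*

optimizer_n2.zero_grad()
loss_2.backward()                   # RuntimeError: the version counter of
                                    # n_1.linear_1.weight ([10, 1]) has changed
optimizer_n2.step()

loss_2's graph runs back through n_1, so computing its gradients needs n_1's weights as they were during the forward pass; optimizer_n1.step() has already overwritten them in place, and autograd detects the version mismatch. Note that [torch.FloatTensor [10, 1]] in the error message is exactly the shape of n_1.linear_1.weight. Two standard fixes, sketched for a fresh iteration: finish every backward() before any step(), or detach out_1 so that loss_2's graph never reaches n_1.

# Fix 1: all backward passes first, then the in-place parameter updates.
optimizer_n1.zero_grad()
optimizer_n2.zero_grad()
loss_1.backward(retain_graph=True)
loss_2.backward()
optimizer_n1.step()
optimizer_n2.step()

# Fix 2: cut the graph between the two networks, so gradients from
# loss_2 never flow back into n_1's parameters.
out_2 = n_2(out_1.detach())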