RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 1218]], which is output 0 of TanhBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

I hit this error when computing two losses from the same variable: after computing the first loss, I modified the variable's value in place and then computed the second loss, and backward() failed with the error above. There are many other situations that can produce this error; other posts cover those.
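To make the failure mode concrete, here is a minimal, self-contained sketch (tensor names and shapes are made up for illustration) that triggers the same TanhBackward0 version error:

import torch

x = torch.randn(2, 4, requires_grad=True)
y = torch.tanh(x)                 # TanhBackward0 saves y itself for the backward pass
loss1 = (y ** 2).sum()            # first loss, computed from y

y_view = y.view(2, 2, 2)          # a view shares storage with y
y_view.index_fill_(1, torch.tensor([0]), 0)  # in-place write bumps y's version counter

loss2 = (y_view ** 2).sum()
(loss1 + loss2).backward()        # RuntimeError: ... is at version 1; expected version 0

My own code followed the same pattern. Excerpt from the training loop (G, z_final, want_label, and G_optimizer are defined elsewhere):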
generate_fake_data = G(z_final)  # generate a sequence from noise: G(optimized z)
recons_loss_1 = torch.mean(
    torch.sum(torch.pow(want_label - generate_fake_data, 2), dim=-1))  # long-code loss 1

generate_fake_data_2 = generate_fake_data  # NOTE: just a second reference, not a copy
generate_fake_data_2 = generate_fake_data_2.view(2, 58, 21)  # a view shares storage too
index = torch.tensor([0, 1, 2, 3, 4, 5, 7, 11, 14, 15, 17, 18, 19, 20, 21, 22,
                      24, 25, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46,
                      47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]).cuda()
generate_fake_data_2.index_fill_(1, index, 0)  # in-place: also mutates generate_fake_data
generate_fake_data_2 = generate_fake_data_2.view(2, 1218)  # back to 2 x 1218

recons_loss_2 = torch.mean(
    torch.sum(torch.pow(want_label - generate_fake_data_2, 2), dim=-1))  # long-code loss 2
all_loss = recons_loss_1 + recons_loss_2
all_loss.backward()
G_optimizer.step()
This raises the error!

The code above is only a sketch. After computing the first loss recons_loss_1 from the generated variable generate_fake_data, I modified the variable and then computed recons_loss_2, and that is when the error appears. The reason: generate_fake_data_2 = generate_fake_data is just a second name for the same tensor, and view() also shares the underlying storage, so index_fill_ writes into the very tensor that autograd saved for the backward pass (here the tanh output of G, per the TanhBackward0 in the message), bumping its version counter from 0 to 1.
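Incidentally, if it is not obvious which in-place operation is to blame, PyTorch's anomaly detection can point at the forward-pass line that recorded the broken node (it adds overhead, so enable it only while debugging):

import torch

torch.autograd.set_detect_anomaly(True)  # debugging only: slows every iteration

# run the forward pass and backward() as usual; the RuntimeError's traceback
# will now also include the forward operation whose saved tensor was modified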

Solution: deep-copy the variable with clone():

generate_fake_data_2 = generate_fake_data.clone()
generate_fake_data = G(z_final)  # generate a sequence from noise: G(optimized z)
recons_loss_1 = torch.mean(
    torch.sum(torch.pow(want_label - generate_fake_data, 2), dim=-1))  # long-code loss 1

generate_fake_data_2 = generate_fake_data.clone()  # clone(): a real copy with its own storage, still in the graph
generate_fake_data_2 = generate_fake_data_2.view(2, 58, 21)
index = torch.tensor([0, 1, 2, 3, 4, 5, 7, 11, 14, 15, 17, 18, 19, 20, 21, 22,
                      24, 25, 28, 29, 31, 32, 33, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46,
                      47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]).cuda()
generate_fake_data_2.index_fill_(1, index, 0)  # in-place write now hits only the copy
generate_fake_data_2 = generate_fake_data_2.view(2, 1218)  # back to 2 x 1218

recons_loss_2 = torch.mean(
    torch.sum(torch.pow(want_label - generate_fake_data_2, 2), dim=-1))  # long-code loss 2
all_loss = recons_loss_1 + recons_loss_2
all_loss.backward()
G_optimizer.step()

Runs without errors!
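A variant worth knowing (my own note, not part of the original fix): most in-place ops have an out-of-place twin, so the clone can be skipped entirely by using index_fill (no trailing underscore), which returns a new tensor and leaves the saved tanh output untouched:

generate_fake_data_2 = (
    generate_fake_data.view(2, 58, 21)
    .index_fill(1, index, 0)   # out-of-place: returns a fresh tensor
    .view(2, 1218)
)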

This seems to touch on how the computation graph works, which I don't fully understand, so take this as my personal understanding; corrections are welcome!
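For what it's worth, the "version" in the message is a per-tensor counter that autograd checks at backward time. _version is an internal attribute, but it makes the mechanism visible:

import torch

x = torch.randn(3, requires_grad=True)
y = torch.tanh(x)
print(y._version)   # 0: matches what TanhBackward0 recorded
y.add_(1)           # any in-place op increments the counter
print(y._version)   # 1: backward through tanh would now raise the error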
