[Fixed] one of the variables needed for gradient computation has been modified by an inplace operation

The offending code

for epoch in range(10):
    for vector, xyLoc in tqdm(train_loader):
        xyLoc = xyLoc.cuda()
        optimizer.zero_grad()
        outputAll, (outputh, outputc) = model(xyLoc)  # this time, predict from the locations only
        try:
            registerY
        except NameError:
            registerY = outputAll[0]
            registerY = registerY.unsqueeze_(0)  # in-place unsqueeze: the culprit
        else:
            registerY = torch.cat((registerY, outputAll[0].unsqueeze_(dim=0)), dim=0)  # also in-place

        loss = criterion(outputAll, xyLoc)
        loss.backward()
        optimizer.step()
    if epoch % 1 == 0:
        print("epoch " + str(epoch) + "  : \t " + str(loss))

This error appeared while training an LSTM:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
 [torch.cuda.FloatTensor [100, 12, 2]], which is output 0 of CudnnRnnBackward0, is at version 1; expected version 0 instead. 
Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
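As the hint suggests, running with torch.autograd.set_detect_anomaly(True) makes the RuntimeError carry a second traceback that points at the forward-pass operation which wrote in place. A minimal self-contained reproduction, with sigmoid standing in for the LSTM here (its backward pass also saves its own output):

import torch

torch.autograd.set_detect_anomaly(True)  # backward errors now include a forward traceback

x = torch.randn(5, requires_grad=True)
y = torch.sigmoid(x)  # sigmoid's backward reuses its output, so autograd saves y
y.unsqueeze_(0)       # in-place op bumps y's version counter from 0 to 1
y.sum().backward()    # RuntimeError: ... is at version 1; expected version 0 instead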

Tracking it down, .unsqueeze_ was the cause. I wanted to record every output of the model, so I added a dimension before concatenating; but unsqueeze_ modifies the tensor in place. Since outputAll[0] is a view of outputAll, the in-place write bumps the version counter of the very LSTM output that autograd saved for the backward pass, hence "is at version 1; expected version 0 instead".

unsqueeze_() and unsqueeze() do the same thing; the difference is that unsqueeze_() is an in-place operation, while unsqueeze() returns a new tensor.
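A quick sketch of the difference:

import torch

t = torch.zeros(3)

u = t.unsqueeze(0)  # out-of-place: returns a new (1, 3) view, t keeps its shape
print(t.shape, u.shape)  # torch.Size([3]) torch.Size([1, 3])

t.unsqueeze_(0)     # in-place: reshapes t itself and bumps its version counter
print(t.shape)      # torch.Size([1, 3])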

The fix: replace unsqueeze_() with unsqueeze().
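Applied to the loop above, only the two recording lines change; outputAll is left untouched, so the LSTM output that autograd saved stays at version 0:

        try:
            registerY
        except NameError:
            registerY = outputAll[0].unsqueeze(0)  # out-of-place: new view, no version bump
        else:
            registerY = torch.cat((registerY, outputAll[0].unsqueeze(dim=0)), dim=0)

If registerY is only kept for inspection, it may also be worth detaching first (outputAll[0].detach().unsqueeze(0)), so the growing history does not keep every iteration's computation graph alive.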
