RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed


Error:

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Details:

Traceback (most recent call last):
  File "main.py", line 21, in <module>
    startTrain.run(trainBatch)
  File "/root/train.py", line 49, in run
    loss.backward()
  File "/usr/local/lib/python3.7/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Cause:

By default, PyTorch frees the computation graph's intermediate buffers as soon as the first backward pass finishes. When the program tries to backward through the same graph a second time, those buffers are already gone, so autograd raises this error.
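A minimal sketch that reproduces the error (the tensor shapes and names here are illustrative, not from the original program):

import torch

# Build a tiny graph: y depends on x through a multiplication.
x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()

y.backward()  # first backward pass frees the graph's buffers
y.backward()  # raises: RuntimeError: Trying to backward through the graph a second time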

Solution:

Add the parameter retain_graph=True to the backward() call, i.e. change:

loss.backward()

to:

loss.backward(retain_graph=True)
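Putting it together, a minimal sketch of the fix on the same illustrative graph as above; retain_graph=True keeps the graph's buffers alive, so the second pass succeeds and gradients from both passes accumulate in x.grad:

import torch

x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()

y.backward(retain_graph=True)  # keep the graph so it can be traversed again
y.backward()                   # second pass now succeeds
print(x.grad)                  # every entry is 6.0: 3 from each pass, accumulated

Note that retain_graph=True holds the intermediate buffers in memory across iterations. If the second backward pass is unintentional (for example, a hidden state carried over from a previous training step), it is usually better to fix the graph reuse itself rather than retain the graph.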