In PyTorch, when two backward() passes run in the same training step (two losses, two optimizers), you may hit: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
In earlier PyTorch versions, the following pattern might have worked:
opt_1.zero_grad()
loss_1 = fun(...)
loss_1.backward()
opt_1.step()   # updates parameters in place before the second backward()
opt_2.zero_grad()
loss_2 = fun(...)
loss_2.backward()
opt_2.step()
But in current versions this may fail: opt_1.step() updates parameters in place, and if loss_2's graph still depends on tensors saved before that update (for example, a forward result shared between the two losses), autograd reports the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
[torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead.
Hint: enable anomaly detection to find the operation that failed to compute its gradient,
with torch.autograd.set_detect_anomaly(True).
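For context, here is a minimal sketch that reproduces this class of error (the model, shapes, and losses are illustrative assumptions): stepping one optimizer between two backward() passes that share a graph invalidates a tensor the second pass still needs.

import torch
import torch.nn as nn

net = nn.Linear(2, 2)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.randn(1, 2)
out = net(x)                   # the graph saves net.weight for backward
loss_1 = out.sum()
loss_1.backward(retain_graph=True)
opt.step()                     # in-place update: weight is now at version 2
loss_2 = out.pow(2).sum()      # reuses `out`, whose graph saved the old weight
loss_2.backward()              # RuntimeError: ... modified by an inplace operation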
It has to be written like this instead, with both step() calls deferred until after both backward() calls:
opt_1.zero_grad()
loss_1 = fun(...)
loss_1.backward()
opt_2.zero_grad()
loss_2 = fun(...)
loss_2.backward()
opt_1.step()   # both step() calls come last: no graph still needs the old parameters
opt_2.step()
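As a fuller, runnable illustration of this ordering, here is a hypothetical GAN-flavored sketch; the networks G and D and both losses are assumptions standing in for the fun(...) calls above:

import torch
import torch.nn as nn

G = nn.Linear(8, 8)   # hypothetical "generator"
D = nn.Linear(8, 1)   # hypothetical "discriminator"
opt_1 = torch.optim.SGD(G.parameters(), lr=0.01)
opt_2 = torch.optim.SGD(D.parameters(), lr=0.01)

z = torch.randn(4, 8)

opt_1.zero_grad()
fake = G(z)
loss_1 = -D(fake).mean()           # G's loss passes through D's current weights
loss_1.backward()

opt_2.zero_grad()                  # also clears the grads loss_1 left in D
loss_2 = D(fake.detach()).mean()   # D's loss; detached so it stops at G's output
loss_2.backward()

opt_1.step()   # had opt_2.step() run before loss_1.backward(), D's weights
opt_2.step()   # would already be newer than the version saved in loss_1's graph

Here detach() keeps loss_2's graph independent of G; the remaining coupling is that loss_1's graph saved D's weights, which is why every step() must wait until every backward() has run.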
Note: if it still fails after this reordering, add retain_graph=True to the backward() calls (strictly, every backward() except the last one over a shared graph needs it):
backward(retain_graph=True)
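retain_graph matters when loss_1 and loss_2 traverse the same part of one forward graph: the first backward() frees that graph's saved tensors unless told to keep them. A minimal sketch of the mechanism, with illustrative tensors and ops:

import torch

w = torch.randn(3, requires_grad=True)
shared = w * w                      # intermediate used by both losses
loss_1 = shared.sum()
loss_2 = (shared * 2).sum()

loss_1.backward(retain_graph=True)  # keep saved tensors for the next pass
loss_2.backward()                   # traverses the shared graph again: OK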
Tip: wrap the backward() calls as below; when the error occurs, anomaly detection points to the forward operation whose backward failed. It adds noticeable overhead, so enable it only while debugging.
with torch.autograd.set_detect_anomaly(True):
    loss.backward()