generative-inpainting-pytorch-master training bug
2023-12-15 09:44:16,926 ERROR one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 65536, 1]] is at version 6; expected version 5 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
Traceback (most recent call last):
  File "/home/doctor/try/gan/generative-inpainting-pytorch-master/train.py", line 185, in <module>
    main()
  File "/home/doctor/try/gan/generative-inpainting-pytorch-master/train.py", line 181, in main
    raise e
  File "/home/doctor/try/gan/generative-inpainting-pytorch-master/train.py", line 143, in main
    losses['g'].backward()
  File "/home/doctor/anaconda3/envs/gan/lib/python3.10/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/doctor/anaconda3/envs/gan/lib/python3.10/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 65536, 1]] is at version 6; expected version 5 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
A rather strange bug. The relevant code where the problem occurs:
###### Backward pass ######
# Update D
trainer_module.optimizer_d.zero_grad()
losses['d'] = losses['wgan_d'] + losses['wgan_gp'] * config['wgan_gp_lambda']
losses['d'].backward()
trainer_module.optimizer_d.step()
# Update G
if compute_g_loss:
    trainer_module.optimizer_g.zero_grad()
    losses['g'] = losses['l1'] * config['l1_loss_alpha'] \
        + losses['ae'] * config['ae_loss_alpha'] \
        + losses['wgan_g'] * config['gan_loss_alpha']
    losses['g'].backward()
    trainer_module.optimizer_g.step()
The problem is that autograd requires the tensors saved during the forward pass to still be at the same version when backward() runs, i.e. forward and backward have to match one-to-one. Here losses['wgan_g'] was computed through the discriminator's parameters, but trainer_module.optimizer_d.step() updates those parameters in place before losses['g'].backward() is called, so the saved tensors are at version 6 while the graph expects version 5.
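The same failure can be reproduced in a few lines that follow the trainer's pattern (a minimal sketch with toy stand-in modules, not the repo's actual code): compute both losses in the forward pass, step D, then call backward on the G loss.

import torch

d = torch.nn.Linear(4, 1)                      # stand-in for the discriminator
fake = torch.randn(2, 4, requires_grad=True)   # stand-in for the generator output
opt_d = torch.optim.SGD(d.parameters(), lr=0.1)

loss_d = d(fake.detach()).mean()   # D loss: input is detached, no generator grads needed
loss_g = -d(fake).mean()           # G loss: its backward needs d.weight as saved in the graph

opt_d.zero_grad()
loss_d.backward()
opt_d.step()          # updates d.weight in place -> its version counter increases

loss_g.backward()     # RuntimeError: ... modified by an inplace operation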
Reorder the code:
###### Backward pass ######
# Update G
if compute_g_loss:
    trainer_module.optimizer_g.zero_grad()
    # trainer_module.optimizer_d.zero_grad()
    losses['g'] = losses['l1'] * config['l1_loss_alpha'] \
        + losses['ae'] * config['ae_loss_alpha'] \
        + losses['wgan_g'] * config['gan_loss_alpha']
    losses['g'].backward()
    trainer_module.optimizer_g.step()
# Update D
trainer_module.optimizer_d.zero_grad()
losses['d'] = losses['wgan_d'] + losses['wgan_gp'] * config['wgan_gp_lambda']
losses['d'].backward()
trainer_module.optimizer_d.step()
With this change it runs normally.
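The fix works because every backward() that reads the discriminator's saved parameters now finishes before optimizer_d.step() modifies them in place. Applying the same reordering to the toy sketch above (same hypothetical stand-in names) makes the error go away:

import torch

d = torch.nn.Linear(4, 1)
fake = torch.randn(2, 4, requires_grad=True)
opt_d = torch.optim.SGD(d.parameters(), lr=0.1)

loss_d = d(fake.detach()).mean()
loss_g = -d(fake).mean()

# G's backward runs first, while d.weight is still at the version the graph saved.
loss_g.backward()

# Only now is d.weight modified in place; its old graph is no longer needed.
opt_d.zero_grad()     # drop the gradients that loss_g.backward() left on d
loss_d.backward()
opt_d.step()          # no RuntimeError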
2023-12-15 09:49:08,183 INFO Iter: [100/500000] l1: 0.088805 ae: 0.610284 wgan_g: 194.796021 wgan_d: 16.596321 wgan_gp:0.155838 g: 1.033702 d: 18.154701 speed: 10.97 batches/s
2023-12-15 09:49:15,711 INFO Iter: [200/500000] l1: 0.080827 ae: 0.527556 wgan_g: 233.517044 wgan_d: -61.742981 wgan_gp: 0.062023 g: 0.963577 d: -61.122757 speed: 13.28 batches/s