The code was originally written for PyTorch 1.0 but is being run under PyTorch 1.5, so several version-mismatch issues came up during debugging.
Problem 1:
UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
Solution:
For example, change:
#points = Variable(points, volatile=True)
to:
with torch.no_grad():
    points = Variable(points)
Note that `Variable` itself has been deprecated since PyTorch 0.4 (tensors can be used directly), and `torch.no_grad()` only disables gradient tracking for operations executed inside the `with` block, so the forward pass that consumes `points` should also sit inside it.
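A minimal sketch of the replacement, using a hypothetical one-layer model as a stand-in for the network in this code base:

```python
import torch

model = torch.nn.Linear(3, 1)   # hypothetical stand-in model
points = torch.randn(4, 3)      # assumed batch of input points

# Old (pre-0.4) style: Variable(points, volatile=True)
# New style: run the whole inference forward pass under no_grad()
with torch.no_grad():
    out = model(points)

# No autograd graph is built, so the output does not require grad
print(out.requires_grad)
```

Because gradient tracking is skipped, this also reduces memory use during evaluation.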
Problem 2:
When using PyTorch's exponential learning-rate decay, the following warning appears: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.
Solution:
As the warning says, `lr_scheduler.step()` was being called before `optimizer.step()`, as in the faulty code below:
for epoch in range(opt.nepoch):
    current_train_batch_index = -1
    train_completion = 0.0
    train_batches = enumerate(train_dataloader, 0)

    current_test_batch_index = -1
    test_completion = 0.0
    test_batches = enumerate(test_dataloader, 0)

    for current_train_batch_index, data in train_batches:
        # update learning rate
        # scheduler.step(epoch * total_train_batches + current_train_batch_index)

        # set to training mode
        pcpnet.train()

        # prepare noisy points batch
        points = data[0]
        points = Variable(points).transpose(2, 1)
        points = points.cuda()

        # prepare ground truth points batch
        target = data[1:-1]
        target = tuple(Variable(t) for t in target)
        target = tuple(t.cuda() for t in target)

        # zero gradients
        optimizer.zero_grad()
So `lr_scheduler.step()` should instead be called after each epoch of training completes:
    scheduler.step()
    # scheduler.step(epoch * total_train_batches + current_train_batch_index)

    # save model, overwriting the old model
    if epoch % opt.saveinterval == 0 or epoch == opt.nepoch - 1:
        torch.save(pcpnet.state_dict(), model_filename)
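The corrected ordering can be sketched in isolation as follows. This is a minimal example with a hypothetical one-layer model and random data standing in for pcpnet and the dataloaders; the point is only that `optimizer.step()` runs inside the batch loop and `scheduler.step()` runs once per epoch, after training:

```python
import torch

model = torch.nn.Linear(3, 1)   # hypothetical stand-in for pcpnet
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(3):
    for _ in range(2):  # stand-in for the train_dataloader loop
        optimizer.zero_grad()
        loss = model(torch.randn(4, 3)).sum()  # dummy loss on random data
        loss.backward()
        optimizer.step()        # update the weights first ...

    scheduler.step()            # ... then decay the lr, once per epoch

# after 3 epochs the lr has been multiplied by gamma three times:
print(optimizer.param_groups[0]["lr"])  # 0.1 * 0.9**3
```

With this order the first epoch trains at the initial learning rate, as the warning requires, and each subsequent epoch uses the decayed value.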