UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()` (PyTorch 1.1.0 and later)

This post explains how to fix the UserWarning raised in PyTorch 1.1.0 and later when the two calls are made in the wrong order: `lr_scheduler.step()` must be called after `optimizer.step()`. The example code shows the training loop before and after the change that eliminates the warning.

If the following warning appears while training a network in PyTorch, it can be resolved as follows:

UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)

Cause of the error: a call to `lr_scheduler.step()` was detected before `optimizer.step()`. In PyTorch 1.1.0 and later, they must be called in the opposite order: `optimizer.step()` first, then `lr_scheduler.step()`. Failing to do so causes PyTorch to skip the first value of the learning-rate schedule.
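As a minimal, self-contained illustration of the required order (the model, optimizer, and scheduler below are toy placeholders, not taken from this post's training code), the canonical pattern in PyTorch 1.1.0 and later is:

import torch

# Toy setup: a single linear layer, SGD, and a StepLR scheduler
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(3):
    for x, y in [(torch.randn(4, 10), torch.randn(4, 1))]:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()   # update the parameters first...
    scheduler.step()       # ...then advance the learning-rate schedule (once per epoch)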

For example, the code before the fix:

for epoch in range(1 + opt.continueEpochs, opt.nEpochs + 1 + opt.continueEpochs):

    print("Training...")

    scheduler.step()   # called at the top of the epoch, before any optimizer.step() -- this triggers the warning

    epoch_loss = 0
    psnr_list = []
    for iteration, inputs in enumerate(train_dataloader, 1):

        haze, gt = Variable(inputs['hazy_image']), Variable(inputs['clear_image'])

        haze = haze.cuda()
        #print(haze.shape)
        gt = gt.cuda()

        # --- Zero the parameter gradients --- #
        optimizer.zero_grad()

        # --- Forward + Backward + Optimize --- #
        model.train()
        dehaze = model(haze)
        #print(dehaze.size())
        #MSE_loss = MSELoss(dehaze, gt)
        #msssim_loss_ = 1 - msssim_loss(dehaze, gt, normalize=True)
        Loss1 = loss_function_at(dehaze, gt)
        perceptual_loss = loss_network(dehaze, gt)
        #EDGE_loss = edge_loss(dehaze, gt, device)
        #ContrastLoss = ContrastLoss(dehaze)
        Loss = Loss1 + 0.01 * perceptual_loss  # + 0.2*msssim_loss_
        epoch_loss += Loss
        Loss.backward()
        optimizer.step()

Code after the fix, with the warning eliminated:

for epoch in range(1 + opt.continueEpochs, opt.nEpochs + 1 + opt.continueEpochs):

    print("Training...")

    epoch_loss = 0
    psnr_list = []
    for iteration, inputs in enumerate(train_dataloader, 1):

        haze, gt = Variable(inputs['hazy_image']), Variable(inputs['clear_image'])

        haze = haze.to(device)
        gt = gt.to(device)

        # --- Zero the parameter gradients --- #
        optimizer.zero_grad()

        # --- Forward + Backward + Optimize --- #
        model.train()
        dehaze = model(haze)
        #print(dehaze.size())
        #MSE_loss = MSELoss(dehaze, gt)
        #msssim_loss_ = 1 - msssim_loss(dehaze, gt, normalize=True)
        Loss1 = loss_function_at(dehaze, gt)
        perceptual_loss = loss_network(dehaze, gt)
        #EDGE_loss = edge_loss(dehaze, gt, device)
        #ContrastLoss = ContrastLoss(dehaze)
        Loss = Loss1 + 0.01 * perceptual_loss  # + 0.2*msssim_loss_
        epoch_loss += Loss
        Loss.backward()
        optimizer.step()   # update the weights first
        scheduler.step()   # then step the scheduler (here once per batch; move it after the inner loop to keep per-epoch stepping)

The warning is eliminated.
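To confirm the schedule now starts at its first value, you can log the learning rate at the end of each epoch. A small sketch (`get_last_lr()` exists in PyTorch 1.4 and later; reading `param_groups` works in any version):

        optimizer.step()
        scheduler.step()
        # Either line prints the learning rate currently in effect:
        print(f"epoch {epoch}: lr = {scheduler.get_last_lr()[0]}")
        print(f"epoch {epoch}: lr = {optimizer.param_groups[0]['lr']}")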
