new_lrs[:5] = lr_warm → TypeError: can only assign an iterable

This post walks through implementing a cosine annealing learning rate scheduler with restarts in PyTorch. The code defines a class named `CosineAnnealingLR_with_Restart`, which extends PyTorch's _LRScheduler. It adjusts the learning rate according to a preset period (T_max) and a period multiplier (T_mult), restarting at the end of each period. During the warmup phase, the learning rate is ramped up exponentially. The scheduler can also save model weight snapshots at each restart.
new_lrs[:5] = lr_warm
TypeError: can only assign an iterable

Explanation:

In Python, `lst[0:3] = 'XXX'` raises no error: it assigns the elements at indices 0, 1, and 2 to 'X', 'X', and 'X'. This works because a string is itself a sequence of characters and can be iterated over.

By contrast, `lst[0:2] = 1` raises `TypeError: can only assign an iterable`.

That is because the integer 1 is not iterable; it is just a single value. To get the intended effect, write `lst[0:2] = (1,)` instead.

In short, the right-hand side of a slice assignment must be an iterable. A bare int fails, but a one-element list such as `[1]` works.

lr = [0.0001, 0.00012, 0.00013]
new_lrs = [0.001, 0.0009, 0.0008, 0.0007, 0.0006]
new_lrs[:3] = lr
new_lrs
Out[5]: [0.0001, 0.00012, 0.00013, 0.0007, 0.0006]
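
That is exactly what happened here: lr_warm was a single float, not an iterable. A short sketch of the failing and working variants (the values below are illustrative, not from the original run):

new_lrs = [0.001, 0.0009, 0.0008, 0.0007, 0.0006]
lr_warm = 0.0001

# new_lrs[:5] = lr_warm       # TypeError: can only assign an iterable
new_lrs[:5] = [lr_warm] * 5   # OK: [0.0001, 0.0001, 0.0001, 0.0001, 0.0001]

chars = list('abcde')
chars[0:3] = 'XY'             # OK: a string iterates over its characters
# chars is now ['X', 'Y', 'd', 'e'] (slice assignment may change the length)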

I ran into this while adding a warmup phase to the learning rate schedule. The complete code is below.

import torch
import math
from torch.optim.lr_scheduler import _LRScheduler
from utils.utils import read_cfg

cfg = read_cfg(cfg_file="/yangjiang/CDCN-Face-Anti-Spoofing.pytorch/config/CDCNpp_adam_lr1e-3.yaml")

class CosineAnnealingLR_with_Restart(_LRScheduler):
    """Set the learning rate of each parameter group using a cosine annealing
    schedule, where :math:`\eta_{max}` is set to the initial lr and
    :math:`T_{cur}` is the number of epochs since the last restart in SGDR:

    .. math::

        \eta_t = \eta_{min} + \frac{1}{2}(\eta_{max} - \eta_{min})(1 +
        \cos(\frac{T_{cur}}{T_{max}}\pi))

    When last_epoch=-1, sets initial lr as lr.

    It has been proposed in
    `SGDR: Stochastic Gradient Descent with Warm Restarts`_. The original PyTorch
    implementation only covers the cosine annealing part of SGDR;
    I added my own implementation of the restart behavior.

    Args:
        optimizer (Optimizer): Wrapped optimizer.
        T_max (int): Maximum number of iterations.
        T_mult (float): Increase T_max by a factor of T_mult
        eta_min (float): Minimum learning rate. Default: 0.
        last_epoch (int): The index of last epoch. Default: -1.
        model (pytorch model): The model to save.
        out_dir (str): Directory to save snapshots
        take_snapshot (bool): Whether to save snapshots at every restart

    .. _SGDR\: Stochastic Gradient Descent with Warm Restarts:
        https://arxiv.org/abs/1608.03983
    """

    def __init__(self, optimizer, T_max, T_mult, model, out_dir, take_snapshot, eta_min=0, last_epoch=-1):
        self.T_max = T_max
        self.T_mult = T_mult
        self.Te = self.T_max
        self.eta_min = eta_min
        self.current_epoch = last_epoch

        self.model = model
        self.out_dir = out_dir
        self.take_snapshot = take_snapshot

        self.lr_history = []

        super(CosineAnnealingLR_with_Restart, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        # Warmup: use the configured warmup length rather than a hardcoded 5.
        if self.current_epoch < cfg['train']['warmup_epochs']:
            # Grow the lr geometrically from warmup_start_lr toward the target lr.
            warm_factor = (cfg['train']['lr'] / cfg['train']['warmup_start_lr']) ** (1 / cfg['train']['warmup_epochs'])
            lr = cfg['train']['warmup_start_lr'] * warm_factor ** self.current_epoch
            # One lr per param group; a bare float here (or assigned into a slice,
            # as in the commented line below) triggers the
            # "can only assign an iterable" error.
            new_lrs = [lr for _ in self.base_lrs]
        else:
            # Cosine annealing from base_lr down to eta_min over the current period Te.
            new_lrs = [self.eta_min + (base_lr - self.eta_min) *
                       (1 + math.cos(math.pi * self.current_epoch / self.Te)) / 2
                       for base_lr in self.base_lrs]

        # new_lrs[:5] = lr_warm  # buggy: lr_warm was a float, not an iterable
        # self.lr_history.append(new_lrs)
        # print('new_lrs', new_lrs, len(new_lrs))
        return new_lrs

    def step(self, epoch=None):
        if epoch is None:
            epoch = self.last_epoch + 1
        self.last_epoch = epoch
        self.current_epoch += 1

        for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
            param_group['lr'] = lr

        ## restart
        if self.current_epoch == self.Te:
            print("restart at epoch {:03d}".format(self.last_epoch + 1))

            if self.take_snapshot:
                torch.save({
                    'epoch': self.T_max,
                    'state_dict': self.model.state_dict()
                }, self.out_dir + "Weight/" + 'snapshot_e_{:03d}.pth.tar'.format(self.T_max))

            ## reset epochs since the last reset
            self.current_epoch = 0

            ## reset the next goal
            self.Te = int(self.Te * self.T_mult)
            self.T_max = self.T_max + self.Te
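
For completeness, here is a minimal usage sketch. It is my own illustration, not part of the original post: the cfg dict stands in for the YAML file loaded by read_cfg, and take_snapshot is disabled so nothing is written to disk.

import torch

# Hypothetical stand-in for the YAML config; keys match those read in get_lr.
cfg = {'train': {'lr': 1e-3, 'warmup_start_lr': 1e-5, 'warmup_epochs': 5}}

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=cfg['train']['lr'])
scheduler = CosineAnnealingLR_with_Restart(
    optimizer, T_max=10, T_mult=2, model=model,
    out_dir="./", take_snapshot=False, eta_min=1e-6)

for epoch in range(30):
    # ... one epoch of training here ...
    scheduler.step()
    print(epoch, optimizer.param_groups[0]['lr'])

With these settings the lr climbs geometrically from 1e-5 toward 1e-3 over the first five epochs (a factor of 100 ** (1/5) ≈ 2.51 per epoch), then follows the cosine curve down toward eta_min, restarting at the end of each period; every restart multiplies the period length by T_mult.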