8. PyTorch Optimizers and Learning Rate Schedules


I. Optimization Algorithms

PyTorch optimizers manage and update a model's learnable parameters so that the model's outputs move closer to the ground-truth labels.

  • Derivative: the rate of change of a function along a given coordinate axis
  • Directional derivative: the rate of change along a specified direction
  • Gradient: a vector pointing in the direction in which the directional derivative is largest
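
To make these bullet points concrete, here is a minimal sketch (toy tensors, purely illustrative) of the update an optimizer automates: take one step along the negative gradient that loss.backward() has filled in.

import torch

w = torch.randn(3, requires_grad=True)     # a learnable parameter
x, y = torch.randn(3), torch.tensor(1.0)   # toy input and target
loss = ((w * x).sum() - y) ** 2
loss.backward()                            # populates w.grad with dloss/dw

with torch.no_grad():
    w -= 0.1 * w.grad                      # w = w - lr * gradient (what optimizer.step() does for plain SGD)
w.grad.zero_()                             # clear the gradient before the next iteration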
# Commonly used optimizers
torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)
>>> params: the parameter groups to manage
>>> lr: initial learning rate
>>> momentum: momentum coefficient (beta)
>>> weight_decay: L2 regularization coefficient
>>> nesterov: whether to use NAG (Nesterov accelerated gradient)
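
For reference, a hedged sketch of the per-parameter update these arguments control, following the formulation in the PyTorch SGD docs (buf is the momentum buffer the optimizer keeps in its state):

def sgd_update(p, grad, buf, lr, momentum=0., dampening=0., weight_decay=0., nesterov=False):
    grad = grad + weight_decay * p                   # weight_decay enters as an L2 term on the gradient
    buf = momentum * buf + (1 - dampening) * grad    # momentum buffer: exponential accumulation of past gradients
    step_dir = grad + momentum * buf if nesterov else buf  # Nesterov looks one momentum step ahead
    return p - lr * step_dir, buf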

# alpha, rho, and betas are the decay rates of the moving averages of the gradient (and its square);
# they control how much of the gradient history the moving averages cover
torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)  # no learning rate in the usual sense; lr only scales the computed delta
torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
# Other variants
torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0)  # Averaged Stochastic Gradient Descent
torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08)  # sparse variant of Adam
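
The betas/eps arguments above map onto Adam's two moving averages; a compact sketch of the standard bias-corrected update:

def adam_update(p, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first moment: EMA of the gradient
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment: EMA of the squared gradient
    m_hat = m / (1 - beta1 ** t)                # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    return p - lr * m_hat / (v_hat ** 0.5 + eps), m, v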

# Adam with decoupled weight decay; worth tuning lr, weight_decay and amsgrad. lr=0.001 with weight_decay=0.05 is a reasonable starting point
# AdamW has become the de-facto default optimizer for transformer training
# Note: Adam/AdamW keep first- and second-moment buffers per parameter, so the optimizer state costs roughly twice the parameter memory
torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)
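
The difference from plain Adam with weight_decay is that AdamW decouples the decay from the adaptive step; roughly (a sketch, not the exact library code, and `model` is assumed to exist):

# Adam  (L2 penalty):      grad = grad + weight_decay * p, then the usual Adam step on grad
# AdamW (decoupled decay): p = p - lr * weight_decay * p,  then the usual Adam step on the raw grad
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)  # the lr/wd combo suggested above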


# NAdam is simpler than Adamax; it brings the "temporary gradient" idea of Nesterov momentum into adaptive moment
# estimation: at each step the parameters are first given a temporary update, the gradient is evaluated at that
# temporary point, and the resulting first and second moment estimates are used to compute the actual update
torch.optim.NAdam(params, lr=2e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0, momentum_decay=4e-3)


# Basic optimizer attributes
defaults: the optimizer's default hyperparameters
state: per-parameter buffers, e.g. momentum buffers
param_groups: the parameter groups being managed
_step_count: number of updates performed; used by learning rate schedulers
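
These attributes can be inspected directly; a small illustrative sketch (assuming a `model` is already built):

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
print(optimizer.defaults)               # default hyperparameters, e.g. {'lr': 0.01, 'momentum': 0.9, ...}
print(optimizer.param_groups[0]['lr'])  # per-group hyperparameters live in param_groups
print(optimizer.state)                  # empty until step() has run; then holds e.g. momentum buffers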


# Common optimizer methods
zero_grad(): clears the gradients of all managed torch.Tensors
step(closure): performs a single optimization step (parameter update)
add_param_group(param_group): adds a param group to the optimizer's param_groups
load_state_dict(state_dict): loads the optimizer state from a state dict
state_dict(): returns the optimizer state as a dict with two entries, state and param_groups
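
A short sketch of how these methods are typically combined when checkpointing or adding new parameters (the path and `new_layer` are illustrative):

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
torch.save(optimizer.state_dict(), "optimizer.pth")        # save momentum buffers etc. alongside the model
optimizer.load_state_dict(torch.load("optimizer.pth"))     # restore when resuming training
optimizer.add_param_group({'params': new_layer.parameters(), 'lr': 0.001})  # start optimizing a new module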


# Examples
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

optimizer = torch.optim.Adam([var1, var2], lr=0.0001)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

fc_params_id = list(map(id, res18.fc.parameters()))  # ids (memory addresses) of the final fc layer's parameters
base_params = filter(lambda p: id(p) not in fc_params_id, res18.parameters())
optimizer = torch.optim.SGD([
    {'params': base_params, 'lr': LR * 0},  # lr = 0 for every layer before the final classifier (effectively frozen)
    {'params': res18.fc.parameters(), 'lr': LR}], momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)  # every step_size epochs, multiply the lr by gamma

# Taking an optimization step: update the parameters
for epoch in range(MAX_EPOCH):
    loss_mean = 0.
    correct = 0.
    total = 0.
    res18.train()  # switch to training mode

    for i, data in enumerate(train_loader):
        # forward
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = res18(inputs)

        # backward
        optimizer.zero_grad()  # clear accumulated gradients
        loss = criterion(outputs, labels)
        loss.backward()

        # update weights
        optimizer.step()

    scheduler.step()  # update the learning rate once per epoch

    # validate the model
    if (epoch + 1) % val_interval == 0:
        correct_val = 0.
        total_val = 0.
        loss_val = 0.
        res18.eval()  # switch to evaluation mode

        # Disabling gradient calculation is useful for inference
        # In this mode, the result of every computation will have requires_grad=False
        with torch.no_grad():
            for j, data in enumerate(valid_loader):
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)

                outputs = res18(inputs)
                loss = criterion(outputs, labels)

                _, predicted = torch.max(outputs.data, 1)
                total_val += labels.size(0)
                correct_val += (predicted == labels).sum().item()
                loss_val += loss.item()
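
The Lion optimizer mentioned in the references was released by Google Research in the automl repository; its reference PyTorch implementation is reproduced below.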
# Copyright 2023 Google Research. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""PyTorch implementation of the Lion optimizer."""
import torch
from torch.optim.optimizer import Optimizer


class Lion(Optimizer):
  r"""Implements Lion algorithm."""

  def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
    """Initialize the hyperparameters.

    Args:
      params (iterable): iterable of parameters to optimize or dicts defining
        parameter groups
      lr (float, optional): learning rate (default: 1e-4)
      betas (Tuple[float, float], optional): coefficients used for computing
        running averages of gradient and its square (default: (0.9, 0.99))
      weight_decay (float, optional): weight decay coefficient (default: 0)
    """

    if not 0.0 <= lr:
      raise ValueError('Invalid learning rate: {}'.format(lr))
    if not 0.0 <= betas[0] < 1.0:
      raise ValueError('Invalid beta parameter at index 0: {}'.format(betas[0]))
    if not 0.0 <= betas[1] < 1.0:
      raise ValueError('Invalid beta parameter at index 1: {}'.format(betas[1]))
    defaults = dict(lr=lr, betas=betas, weight_decay=weight_decay)
    super().__init__(params, defaults)

  @torch.no_grad()
  def step(self, closure=None):
    """Performs a single optimization step.

    Args:
      closure (callable, optional): A closure that reevaluates the model
        and returns the loss.

    Returns:
      the loss.
    """
    loss = None
    if closure is not None:
      with torch.enable_grad():
        loss = closure()

    for group in self.param_groups:
      for p in group['params']:
        if p.grad is None:
          continue

        # Perform stepweight decay
        p.data.mul_(1 - group['lr'] * group['weight_decay'])

        grad = p.grad
        state = self.state[p]
        # State initialization
        if len(state) == 0:
          # Exponential moving average of gradient values
          state['exp_avg'] = torch.zeros_like(p)

        exp_avg = state['exp_avg']
        beta1, beta2 = group['betas']

        # Weight update
        update = exp_avg * beta1 + grad * (1 - beta1)

        p.add_(update.sign_(), alpha=-group['lr'])

        # Decay the momentum running average coefficient
        exp_avg.mul_(beta2).add_(grad, alpha=1 - beta2)

    return loss
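
Usage follows the same pattern as the built-in optimizers; a hedged sketch (the hyperparameters are only illustrative, and the README linked in the references recommends retuning lr and weight_decay relative to AdamW; `model`, `criterion`, `inputs`, `labels` are assumed):

optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=0.01)
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()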

II. Learning Rate Schedules

1. StepLR: uniform step decay

# Decays the learning rate uniformly: every step_size epochs, the lr is multiplied by gamma
# Formula: lr = base_lr * gamma ** (last_epoch // step_size)
torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)
>>> optimizer: the optimizer being scheduled
>>> step_size / gamma: every step_size epochs, multiply the lr by gamma
>>> last_epoch: index of the last epoch (used when resuming)
>>>
>>> Common methods:
>>> step(): update the learning rate for the next epoch
>>> get_lr(): virtual method that computes the learning rate for the next epoch
>>>
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 60
>>> # lr = 0.0005   if 60 <= epoch < 90
>>> # ...
>>> # Learning rate scheduling should be applied after optimizer’s update
>>> scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()

2. MultiStepLR: non-uniform step decay

# Decays the learning rate at user-specified epochs, multiplying it by gamma each time
# Formula: lr = base_lr * gamma ** bisect_right(milestones, last_epoch)
torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1)
>>> milestones: list of epoch indices at which to decay the learning rate
>>> gamma: decay factor
>>>
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 80
>>> # lr = 0.0005   if epoch >= 80
>>> # Learning rate scheduling should be applied after optimizer’s update
>>> scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()  # each call advances last_epoch by 1, so scheduler.step() belongs inside the epoch loop
>>>     print('epoch: ', epoch, 'lr: ', scheduler.get_last_lr())  # learning rate of each param group for the current epoch

3. ExponentialLR: exponential decay

# Formula: lr = base_lr * gamma ** last_epoch
torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1)  # decay the lr exponentially every epoch; gamma is the base of the exponent
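
A usage sketch in the same style as the examples above (assuming a base lr of 0.1, the sequence is 0.1, 0.095, 0.09025, ...):

>>> scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()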

4. LambdaLR: user-defined schedule

# Allows a different schedule per parameter group; the rule is lr = base_lr * lr_lambda(last_epoch)
torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1)
- lr_lambda (function or list): a function that returns the lr multiplier, usually taking the epoch index as input; pass a list when there are multiple parameter groups
- last_epoch (int): index of the last epoch

>>> # Assuming optimizer has two groups.
>>> lambda1 = lambda epoch: epoch // 30
>>> lambda2 = lambda epoch: 0.95 ** epoch
>>> scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()

5. ReduceLROnPlateau: adaptive schedule

# Reduce the learning rate when a monitored metric has stopped improving (decreasing or increasing)
torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1,
                                           patience=10, verbose=False, threshold=0.0001,
                                           threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)
# Main arguments
- mode (str): 'min' means the metric should stop decreasing (e.g. monitoring loss); 'max' means it should stop increasing (e.g. monitoring accuracy)
- factor (float): multiplicative factor, new_lr = lr * factor
- patience (int): number of epochs with no significant improvement to wait before reducing the lr
- verbose (bool): whether to print a message when the lr is updated
- threshold (float): minimum change in the monitored metric that counts as an improvement; smaller changes are treated as no improvement
- threshold_mode (str): how the threshold is applied, relative ('rel') or absolute ('abs')
    - rel mode: dynamic_threshold = best * (1 + threshold) or best * (1 - threshold)  # max or min mode
    - abs mode: dynamic_threshold = best + threshold or best - threshold  # max or min mode
- cooldown (int): number of epochs to wait after an lr reduction before resuming monitoring
- min_lr (float or list): lower bound on the learning rate
- eps (float): minimal decay applied to the lr; if the change would be smaller than eps, the update is skipped
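
Unlike the schedulers above, step() here takes the monitored metric as an argument; a sketch following the PyTorch docs (train/validate are placeholders as in the earlier examples):

>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
>>> scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)
>>> for epoch in range(100):
>>>     train(...)
>>>     val_loss = validate(...)
>>>     scheduler.step(val_loss)  # pass the metric being monitored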

III. References

1. https://pytorch.org/docs/stable/optim.html
2. The ten optimizers in PyTorch
3. The six learning-rate schedulers in PyTorch
4. The fastest way to train neural networks today: AdamW + super-convergence
5. AdamW and Super-convergence is now the fastest way to train neural nets
6. A ten-minute, plain-language walkthrough of optimizer principles (from SGD to AdamW)
7. Why does AdamW work well with a large weight decay?
8. Why do NLP models usually use AdamW as the optimizer rather than SGD?
9. https://github.com/google/automl/blob/master/lion/README.md
10. Lion, the new optimizer from Google Brain, can outperform Adam(W); with small batch sizes Lion does worse than AdamW, because a moderate amount of gradient noise is needed for the sign-based update to help
