Reproducing PyTorch with Paddle, pitfalls (12): visualizing Paddle's learning rate schedules and reproducing OneCycleLR

This post walks through the common learning rate decay strategies in PaddlePaddle, such as cosine decay, piecewise decay, exponential decay, natural exponential decay, inverse time decay, and polynomial decay, with code examples. It also shows how to use linear learning rate warm-up and how to reproduce the OneCycleLR schedule. Visualizing how the learning rate changes helps in understanding how each strategy affects training.

Version note: paddlepaddle 1.8.4

cosine_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.cosine_decay(learning_rate, step_each_epoch, epochs)
(figure: cosine_decay learning rate curve)
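
The curve follows half a cosine over the whole run. A rough sketch of the formula as I read the 1.8 docs (the helper name and the global_step counter are mine, for illustration):

import math

def cosine_decay_value(learning_rate, step_each_epoch, epochs, global_step):
    # epoch index derived from the global step counter
    cur_epoch = math.floor(global_step / step_each_epoch)
    # the lr traces half a cosine from learning_rate down to 0 over `epochs` epochs
    return learning_rate * 0.5 * (math.cos(cur_epoch * math.pi / epochs) + 1)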

piecewise_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.piecewise_decay(boundaries, values)
(figure: piecewise_decay learning rate curve)
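
Here the lr is simply a step function of the training step. A rough sketch (helper name and global_step are mine):

def piecewise_decay_value(boundaries, values, global_step):
    # values needs one more entry than boundaries; the lr drops at every boundary step
    for boundary, value in zip(boundaries, values):
        if global_step < boundary:
            return value
    return values[-1]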

exponential_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.exponential_decay(learning_rate, decay_steps, decay_rate, staircase=False)
(figure: exponential_decay learning rate curve)
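
A rough sketch of the exponential decay formula as I read the 1.8 docs (helper name and global_step are mine):

import math

def exponential_decay_value(learning_rate, decay_steps, decay_rate, global_step, staircase=False):
    ratio = global_step / decay_steps
    if staircase:
        # staircase=True makes the lr drop in discrete jumps every decay_steps steps
        ratio = math.floor(ratio)
    return learning_rate * decay_rate ** ratio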

natural_exp_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.natural_exp_decay(learning_rate, decay_steps, decay_rate, staircase=False)
(figure: natural_exp_decay learning rate curve)
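
Same structure as exponential_decay, but the decay uses base e. A rough sketch (helper name and global_step are mine):

import math

def natural_exp_decay_value(learning_rate, decay_steps, decay_rate, global_step, staircase=False):
    ratio = global_step / decay_steps
    if staircase:
        ratio = math.floor(ratio)
    return learning_rate * math.exp(-decay_rate * ratio)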

inverse_time_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.inverse_time_decay(learning_rate, decay_steps, decay_rate, staircase=False)
(figure: inverse_time_decay learning rate curve)
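
Here the lr shrinks like 1 / (1 + decay_rate * t). A rough sketch (helper name and global_step are mine):

import math

def inverse_time_decay_value(learning_rate, decay_steps, decay_rate, global_step, staircase=False):
    ratio = global_step / decay_steps
    if staircase:
        ratio = math.floor(ratio)
    return learning_rate / (1.0 + decay_rate * ratio)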

polynomial_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.polynomial_decay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False)
(figure: polynomial_decay learning rate curve)
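
A rough sketch of the polynomial decay, including the cycle=True restart behaviour, as I read the 1.8 docs (helper name and global_step are mine):

import math

def polynomial_decay_value(learning_rate, decay_steps, end_learning_rate,
                           power, cycle, global_step):
    if cycle:
        # with cycle=True the decay restarts: decay_steps is stretched to the next
        # multiple that covers the current step, giving the saw-tooth curve
        decay_steps = decay_steps * max(math.ceil(global_step / decay_steps), 1)
    else:
        # otherwise the lr stays at end_learning_rate once decay_steps is reached
        global_step = min(global_step, decay_steps)
    return ((learning_rate - end_learning_rate)
            * (1 - global_step / decay_steps) ** power
            + end_learning_rate)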

linear_lr_warmup

See the PaddlePaddle official docs for the implementation details.
fluid.layers.linear_lr_warmup(learning_rate, warmup_steps, start_lr, end_lr)
(figure: linear_lr_warmup learning rate curve)
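
linear_lr_warmup wraps another schedule: for the first warmup_steps steps the lr ramps linearly from start_lr to end_lr, after which the wrapped schedule takes over. A rough sketch (helper name and arguments are mine):

def linear_lr_warmup_value(wrapped_lr, warmup_steps, start_lr, end_lr, global_step):
    if global_step < warmup_steps:
        # linear ramp from start_lr to end_lr during warm-up
        return start_lr + (end_lr - start_lr) * global_step / warmup_steps
    # after warm-up, return whatever the wrapped schedule (e.g. piecewise_decay) gives
    return wrapped_lr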

noam_decay

See the PaddlePaddle official docs for the implementation details.
fluid.layers.noam_decay(d_model, warmup_steps)
(figure: noam_decay learning rate curve)
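
This is the Transformer ("Noam") schedule: the lr rises during warm-up and then decays like 1/sqrt(step). A rough sketch (helper name and global_step are mine; I also include the optional learning_rate scale that the three-argument call in the appendix passes):

def noam_decay_value(d_model, warmup_steps, global_step, learning_rate=1.0):
    # global_step should start at 1 to avoid 0 ** -0.5
    return (learning_rate * d_model ** -0.5
            * min(global_step ** -0.5, global_step * warmup_steps ** -1.5))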

Appendix: code implementation

import paddle.fluid as fluid
import numpy as np
import matplotlib.pyplot as plt

with fluid.dygraph.guard(fluid.CPUPlace()):
    epochs = 1000
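    # dummy loss tensor; it only exists so that optimizer.minimize() below can run
    # and advance the learning-rate schedule each iteration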
    loss = fluid.layers.cast(fluid.dygraph.to_variable(np.array([0.18])), 'float32')

    emb = fluid.dygraph.Embedding([10, 10])

    # learning_rate = fluid.layers.noam_decay(10, 50, 0.01)

    # learning_rate = fluid.layers.cosine_decay(
    #         learning_rate = 0.01, step_each_epoch=5, epochs=10)

    # boundaries = [100, 400]
    # lr_steps = [0.1, 0.01, 0.001]
    # learning_rate = fluid.layers.piecewise_decay(boundaries, lr_steps)

    # learning_rate = fluid.layers.exponential_decay(
    #     learning_rate=0.01,
    #     decay_steps=50,
    #     decay_rate=0.5,
    #     staircase=True)

    # learning_rate = fluid.layers.natural_exp_decay(
    #     learning_rate=0.01,
    #     decay_steps=50,
    #     decay_rate=0.5,
    #     staircase=True)

    # learning_rate = fluid.layers.inverse_time_decay(
    #     learning_rate=0.01,
    #     decay_steps=50,
    #     decay_rate=0.5,
    #     staircase=True)

    # learning_rate = fluid.layers.polynomial_decay(
    #     0.01, 200, 0.01 * 0.00001, power=1, cycle=True)

    boundaries = [100, 400]
    lr_steps = [0.1, 0.01, 0.001]
    learning_rate = fluid.layers.piecewise_decay(boundaries, lr_steps)
    # warm up from 0.01 to 0.05 over the first 200 steps, then follow the piecewise schedule
    learning_rate = fluid.layers.linear_lr_warmup(learning_rate,
                                                  200, 0.01, 0.05)


    name = 'linear_lr_warmup'
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=learning_rate,
                                             parameter_list=emb.parameters())

    lr_list = []
    for i in range(epochs):
        lr_list.append(optimizer.current_step_lr())
        optimizer.minimize(loss)
        # print(i, ': ', optimizer.current_step_lr())

    # plot how the lr changes over the run
    plt.plot(list(range(epochs)), lr_list)
    plt.xlabel("epoch")
    plt.ylabel("lr")
    plt.title(name)
    plt.savefig(name + ".png")
    plt.show()

Reproducing OneCycleLR

(figure: OneCycleLR learning rate curve)

from paddle.fluid.dygraph.learning_rate_scheduler import LearningRateDecay
import math

class OneCycleLR(LearningRateDecay):
    # Dygraph re-implementation of torch.optim.lr_scheduler.OneCycleLR.
    # Only the cosine annealing strategy is reproduced; the momentum and div_factor
    # arguments are kept so the signature matches PyTorch, but they are unused here.

    def __init__(self,
                 max_lr,
                 total_steps=None,
                 steps_per_epoch=1,
                 pct_start=0.3,
                 anneal_strategy='cos',
                 cycle_momentum=True,
                 base_momentum=0.85,
                 max_momentum=0.95,
                 div_factor=25.,
                 final_div_factor=1e4,
                 last_epoch=-1):

        super(OneCycleLR, self).__init__(last_epoch)

        self.total_steps = total_steps
        # length of the increasing (warm-up) phase and the decreasing (anneal) phase
        self.step_size_up = float(pct_start * self.total_steps) - 1
        self.step_size_down = float(self.total_steps - self.step_size_up) - 1
        self.last_epoch = last_epoch

        self.learning_rate = max_lr

    def _annealing_cos(self, start, end, pct):
        "Cosine anneal from `start` to `end` as pct goes from 0.0 to 1.0."
        cos_out = math.cos(math.pi * pct) + 1
        return end + (start - end) / 2.0 * cos_out

    def step(self):
        # Phase 1: cosine ramp from max_lr * 1e-5 up to max_lr over step_size_up steps.
        # Phase 2: cosine anneal from max_lr back down to max_lr * 1e-5.
        down_step_num = self.step_num - self.step_size_up

        if self.step_num < self.step_size_up:
            lr_value = self._annealing_cos(self.learning_rate * 0.00001, self.learning_rate,
                                           self.step_num / self.step_size_up)
        else:
            lr_value = self._annealing_cos(self.learning_rate, self.learning_rate * 0.00001,
                                           down_step_num / self.step_size_down)
        return lr_value
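
A minimal usage sketch, mirroring the plotting loop in the appendix above (the max_lr and total_steps values are just for illustration):

import numpy as np
import paddle.fluid as fluid
import matplotlib.pyplot as plt

with fluid.dygraph.guard(fluid.CPUPlace()):
    total_steps = 1000
    emb = fluid.dygraph.Embedding([10, 10])
    loss = fluid.layers.cast(fluid.dygraph.to_variable(np.array([0.18])), 'float32')

    scheduler = OneCycleLR(max_lr=0.01, total_steps=total_steps)
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=scheduler,
                                             parameter_list=emb.parameters())

    lr_list = []
    for i in range(total_steps):
        lr_list.append(optimizer.current_step_lr())
        optimizer.minimize(loss)

    plt.plot(list(range(total_steps)), lr_list)
    plt.xlabel("step")
    plt.ylabel("lr")
    plt.title("OneCycleLR")
    plt.savefig("OneCycleLR.png")
    plt.show()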
