Reading the Adam Optimizer Source Code in PyTorch

1. Usage

torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Parameters:
weight_decay: the weight-decay coefficient (what is actually applied here is L2 regularization; see Section 3)
amsgrad: whether to keep the historical maximum of the second-moment (squared-gradient) estimate during updates (the AMSGrad variant)
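
A minimal usage sketch (the linear model, data, and hyperparameter values here are made up purely for illustration):

    import torch
    import torch.nn as nn

    # Toy model and data, purely for illustration
    model = nn.Linear(10, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                                 betas=(0.9, 0.999), eps=1e-08,
                                 weight_decay=0.01, amsgrad=False)

    x = torch.randn(32, 10)
    y = torch.randn(32, 1)
    loss_fn = nn.MSELoss()

    for _ in range(100):
        optimizer.zero_grad()          # clear gradients from the previous step
        loss = loss_fn(model(x), y)    # forward pass
        loss.backward()                # backward pass: populates p.grad
        optimizer.step()               # the Adam update analyzed below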

2. Source Code

The implementation in the source corresponds to the L2-regularized Adam shown in the last figure; the numbered steps in the comments below refer to that figure.

    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                if grad.is_sparse:
                    raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
                amsgrad = group['amsgrad']

                state = self.state[p]  # per-parameter state accumulated over previous steps

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    # Exponential moving average of gradient values
                    state['exp_avg'] = torch.zeros_like(p.data)  # same shape as the parameter p
                    # Exponential moving average of squared gradient values
                    state['exp_avg_sq'] = torch.zeros_like(p.data)
                    if amsgrad:
                        # Maintains max of all exp. moving avg. of sq. grad. values
                        state['max_exp_avg_sq'] = torch.zeros_like(p.data)

                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']  # m and v carried over from the previous step
                if amsgrad:
                    # AMSGrad modifies Adam by keeping the running maximum of the second-moment
                    # estimate, so the effective step size never increases across iterations.
                    max_exp_avg_sq = state['max_exp_avg_sq']
                beta1, beta2 = group['betas']

                state['step'] += 1
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']
                # Step numbers in the comments below refer to the numbered lines in the last figure
                if group['weight_decay'] != 0:  # weight decay (actually L2 regularization)
                    # 6. grad = grad + weight_decay * p(t-1)
                    grad.add_(group['weight_decay'], p.data)

                # Decay the first and second moment running average coefficient
                # 7. m(t) = beta_1 * m(t-1) + (1 - beta_1) * grad
                exp_avg.mul_(beta1).add_(1 - beta1, grad)
                # 8. v(t) = beta_2 * v(t-1) + (1 - beta_2) * grad^2
                exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
                if amsgrad:
                    # Maintains the maximum of all 2nd moment running avg. till now
                    # Keep the element-wise maximum of all second-moment estimates so far in
                    # max_exp_avg_sq (updated in place), carrying past gradient information forward.
                    torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
                    # Use the max. for normalizing running avg. of gradient
                    denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
                else:
                    # denom = sqrt(v(t)) / sqrt(1 - beta_2^t) + eps = sqrt(v_hat(t)) + eps
                    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
                # step_size = lr / bias_correction1 = lr / (1 - beta_1^t)
                step_size = group['lr'] / bias_correction1
                # p(t) = p(t-1) - step_size * m(t) / denom
                p.data.addcdiv_(-step_size, exp_avg, denom)

        return loss
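
The in-place calls above (`add_(scalar, tensor)`, `addcmul_(scalar, t1, t2)`, `addcdiv_(scalar, t1, t2)`) use an older positional signature that later PyTorch releases deprecate in favor of keyword `alpha`/`value` arguments. As a sketch only (the helper `adam_single_step` below is written for this article with the current signatures, it is not PyTorch's implementation), the same single-parameter update looks like this:

    import math
    import torch

    def adam_single_step(p, grad, exp_avg, exp_avg_sq, step,
                         lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.0):
        """One Adam update for a single tensor p, mirroring the logic of step() above."""
        if weight_decay != 0:
            grad = grad + weight_decay * p                            # 6. L2 term folded into the gradient
        exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)               # 7. m(t)
        exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # 8. v(t)
        bias_correction1 = 1 - beta1 ** step
        bias_correction2 = 1 - beta2 ** step
        denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
        step_size = lr / bias_correction1
        p.addcdiv_(exp_avg, denom, value=-step_size)                  # p(t) = p(t-1) - step_size * m(t) / denom
        return p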

For the final update step:

$$denom=\frac{\sqrt{v_t}}{\sqrt{1-\beta_2^t}}+\epsilon=\sqrt{\hat{v}_t}+\epsilon$$

$$p_t=p_{t-1}-step\_size\cdot\frac{m_t}{denom}=p_{t-1}-\frac{lr}{1-\beta_1^t}\cdot\frac{m_t}{\sqrt{\hat{v}_t}+\epsilon}=p_{t-1}-\frac{lr\cdot\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}$$

where $\hat{m}_t=m_t/(1-\beta_1^t)$ and $\hat{v}_t=v_t/(1-\beta_2^t)$ are the bias-corrected moments. With the schedule multiplier $\eta_t=1$ and $\alpha=lr$, this is exactly step 12 in the last figure (the separate weight-decay term is absent because the L2 term was already folded into the gradient in step 6).
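
A quick numeric check of this equivalence, using arbitrary made-up values for $m_t$, $v_t$ and the hyperparameters; it only verifies that the two ways of writing the update agree:

    import math
    import torch

    torch.manual_seed(0)
    lr, beta1, beta2, eps, t = 1e-3, 0.9, 0.999, 1e-8, 10
    m_t = torch.rand(5)   # stand-in for exp_avg
    v_t = torch.rand(5)   # stand-in for exp_avg_sq (non-negative)

    # Update as computed in the PyTorch source: step_size * m_t / denom
    denom = v_t.sqrt() / math.sqrt(1 - beta2 ** t) + eps
    update_source = (lr / (1 - beta1 ** t)) * m_t / denom

    # Update as written in the figure: lr * m_hat / (sqrt(v_hat) + eps)
    m_hat = m_t / (1 - beta1 ** t)
    v_hat = v_t / (1 - beta2 ** t)
    update_figure = lr * m_hat / (v_hat.sqrt() + eps)

    print(torch.allclose(update_source, update_figure))  # expected: True
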
Algorithm:
(The version from the Deep Learning book; PyTorch's Adam does not follow the form below.)
[Figure: the Adam algorithm as presented in the Deep Learning book]

3. Weight Decay vs. L2 Regularization in Adam

In SGD, weight decay and L2 regularization are equivalent; in Adam and other adaptive optimizers (AdaGrad, RMSProp, etc.) they are not. PyTorch's Adam actually implements L2 regularization (the red part of the figure below), while the AdamW algorithm uses true, decoupled weight decay (the dark-yellow part of the figure below). The two differ only in where the decay term is applied; everything else is identical. The sketch after the figure illustrates the practical difference.
[Figure: Adam with L2 regularization (red) vs. AdamW with decoupled weight decay (dark yellow)]
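
A minimal sketch of that difference, assuming a toy linear model and made-up data: two identical models are trained, one with `torch.optim.Adam(weight_decay=...)` (L2 regularization) and one with `torch.optim.AdamW(weight_decay=...)` (decoupled weight decay), and their final weights are compared:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Two copies of the same toy model, so both optimizers start from identical weights
    torch.manual_seed(0)
    model_l2 = nn.Linear(10, 1)
    model_wd = nn.Linear(10, 1)
    model_wd.load_state_dict(model_l2.state_dict())

    opt_l2 = torch.optim.Adam(model_l2.parameters(), lr=1e-2, weight_decay=1e-2)   # L2 regularization
    opt_wd = torch.optim.AdamW(model_wd.parameters(), lr=1e-2, weight_decay=1e-2)  # decoupled weight decay

    x = torch.randn(32, 10)
    y = torch.randn(32, 1)
    for model, opt in [(model_l2, opt_l2), (model_wd, opt_wd)]:
        for _ in range(50):
            opt.zero_grad()
            F.mse_loss(model(x), y).backward()
            opt.step()

    # The two runs diverge, because the decay term enters the update in different places
    print((model_l2.weight - model_wd.weight).abs().max())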
