Summary of Deep Learning Optimization Methods

First, a nod to an excellent article by a more experienced author:

《深度学习最全优化方法总结比较(SGD,Adagrad,Adadelta,Adam,Adamax,Nadam)》 (a comprehensive Chinese survey comparing SGD, Adagrad, Adadelta, Adam, Adamax, and Nadam)


The optim.py file from the assignment2 FullyConnectedNets assignment implements the following update rules (cs231n_2018_lecture07); a minimal usage sketch follows the list:

  • SGD
  • SGD + Momentum
  • RMSprop
  • Adam
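
Each update rule shares the signature (w, dw, config) -> (next_w, config), where config carries the hyperparameters and any per-parameter state (velocity, cache, moment estimates). Below is a minimal sketch of how such a rule plugs into a training loop; the toy loss, its gradient, and the step count are made up for illustration and are not part of optim.py:

import numpy as np

# Hypothetical toy problem: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
config = None  # each rule stores its state (learning_rate, velocity, cache, t, ...) here

for step in range(500):
    dw = 2 * w                      # gradient of the toy loss at the current w
    w, config = sgd(w, dw, config)  # sgd is defined in section 1 below; any rule fits

print(w)  # very close to the minimum at [0, 0]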

1. SGD

Update rule: w = w - lr * dw

Drawbacks: 1. the iterates oscillate back and forth along steep directions; 2. the optimizer can get stuck at local minima or saddle points (saddle points are very common in high-dimensional spaces).

Code:

import numpy as np  # used by the momentum / RMSprop / Adam rules below


def sgd(w, dw, config=None):

    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)

    w -= config['learning_rate'] * dw
    return w, config
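
To make the oscillation drawback concrete, here is a small assumed example (the quadratic loss and the learning rate are chosen purely for illustration) on an ill-conditioned objective f(w) = 0.5 * (w_0^2 + 20 * w_1^2), which is much steeper along w_1 than along w_0:

import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * (w[0]**2 + 20 * w[1]**2)
    return np.array([w[0], 20.0 * w[1]])

w, config = np.array([5.0, 1.0]), {'learning_rate': 0.09}
for step in range(5):
    w, config = sgd(w, grad(w), config)
    print(w)  # w[1] flips sign every step (oscillation), while w[0] shrinks slowly

A learning rate small enough to tame the steep w_1 direction makes progress along the shallow w_0 direction very slow, which is exactly the trade-off momentum and the adaptive methods below try to fix.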

2. SGD + Momentum

Update rule:

v_{t} = \rho \, v_{t-1} - lr \cdot dw

w_{t+1} = w_{t} + v_{t}

Pros and cons: the accumulated velocity speeds up convergence along directions whose gradients point consistently the same way, but it can overshoot and skip past minima.

Code:

def sgd_momentum(w, dw, config=None):

    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    next_w = None

    v = config['momentum'] * v - config['learning_rate'] * dw  # accumulate the gradient into the velocity
    next_w = w + v                                              # step along the velocity

    config['velocity'] = v

    return next_w, config
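
A quick, assumed comparison on the same ill-conditioned quadratic as in the SGD example above (learning rate and step count again chosen only for illustration): with a learning rate small enough for the steep direction, momentum still makes steady progress along the shallow direction where plain SGD crawls, at the price of some overshooting on the way.

import numpy as np

def grad(w):
    # gradient of f(w) = 0.5 * (w[0]**2 + 20 * w[1]**2)
    return np.array([w[0], 20.0 * w[1]])

w_plain, cfg_plain = np.array([5.0, 1.0]), {'learning_rate': 0.02}
w_mom, cfg_mom = np.array([5.0, 1.0]), {'learning_rate': 0.02}
for step in range(100):
    w_plain, cfg_plain = sgd(w_plain, grad(w_plain), cfg_plain)
    w_mom, cfg_mom = sgd_momentum(w_mom, grad(w_mom), cfg_mom)

# momentum ends up much closer to the optimum at [0, 0] than plain SGD
print(np.linalg.norm(w_plain), np.linalg.norm(w_mom))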

3. RMSprop

Update rule (ρ is the decay_rate in the code):

cache_{t} = \rho \cdot cache_{t-1} + (1 - \rho) \cdot dw^{2}

w_{t+1} = w_{t} - lr \cdot \frac{dw}{\sqrt{cache_{t}} + \varepsilon}

Advantage: the step size adapts per parameter. When dw has been small, cache is small, so dw / (\sqrt{cache} + \varepsilon) is relatively large and progress speeds up along those directions; when dw has been large, the step is scaled down.

Drawback: progress can still be slow at the start of training if the initial gradients are small.

Code:

def rmsprop(w, dw, config=None):
    """
    Uses the RMSProp update rule, which uses a moving average of squared
    gradient values to set adaptive per-parameter learning rates.

    config format:
    - learning_rate: Scalar learning rate.
    - decay_rate: Scalar between 0 and 1 giving the decay rate for the squared
      gradient cache.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - cache: Moving average of second moments of gradients.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))

    next_w = None
    
    cache = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * dw ** 2
    next_w = w - config['learning_rate'] * dw / (np.sqrt(cache) + config['epsilon'])
    config['cache'] = cache

    return next_w, config
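
A small, assumed demonstration of the per-parameter scaling described above: two components whose gradients differ by a factor of 100 still receive steps of roughly the same size after a single rmsprop call, because each step is divided by the running root-mean-square of that component's own gradient.

import numpy as np

w = np.zeros(2)
dw = np.array([0.01, 1.0])  # gradient magnitudes differ by a factor of 100
w_next, config = rmsprop(w, dw, None)

# On the first call cache = (1 - decay_rate) * dw**2, so each step is about
# learning_rate / sqrt(1 - decay_rate) = 0.1, regardless of the gradient's size.
print(w - w_next)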

4. Adam (the most commonly used)

Update rules:

m = \beta_{1} m + (1 - \beta_{1}) \cdot dw

v = \beta_{2} v + (1 - \beta_{2}) \cdot dw^{2}

m_{t} = \frac{m}{1 - \beta_{1}^{t}}, \quad v_{t} = \frac{v}{1 - \beta_{2}^{t}}

w_{t+1} = w_{t} - lr \cdot \frac{m_{t}}{\sqrt{v_{t}} + \varepsilon}

Pros and cons: Adam combines momentum (the first moment m) with RMSprop-style per-parameter scaling (the second moment v). As training proceeds and the gradients shrink near a minimum, m shrinks and the effective step size decreases, so the iterates settle close to an optimum; in practice it works well with little tuning.

    """
    Uses the Adam update rule, which incorporates moving averages of both the
    gradient and its square and a bias correction term.

    config format:
    - learning_rate: Scalar learning rate.
    - beta1: Decay rate for moving average of first moment of gradient.
    - beta2: Decay rate for moving average of second moment of gradient.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - m: Moving average of gradient.
    - v: Moving average of squared gradient.
    - t: Iteration number.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)

    next_w = None

    config['t'] += 1
    m = config['beta1'] * config['m'] + (1 - config['beta1']) * dw       # first moment (momentum term)
    mt = m / (1 - config['beta1'] ** config['t'])                        # bias-corrected first moment
    v = config['beta2'] * config['v'] + (1 - config['beta2']) * dw ** 2  # second moment (RMSprop-style cache)
    vt = v / (1 - config['beta2'] ** config['t'])                        # bias-corrected second moment
    next_w = w - config['learning_rate'] * mt / (np.sqrt(vt) + config['epsilon'])
    config['m'] = m
    config['v'] = v
    return next_w, config
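
A short, assumed check of the bias-correction terms: on the very first iteration (t = 1) the raw moments are only m = (1 - β1)·dw and v = (1 - β2)·dw², so dividing by (1 - β1^t) and (1 - β2^t) rescales them back up; the corrected first step then has magnitude close to the learning rate for every component, whatever the size of dw.

import numpy as np

w = np.zeros(3)
dw = np.array([0.5, -2.0, 10.0])
w_next, config = adam(w, dw, None)

# After bias correction, mt ≈ dw and sqrt(vt) ≈ |dw| on the first step,
# so each component moves by roughly learning_rate = 1e-3 opposite the gradient.
print(w_next)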