Gradient Descent, Stochastic Gradient Descent, and Their Improvements

This article covers the three forms of gradient descent: batch gradient descent, stochastic gradient descent, and mini-batch gradient descent, together with their respective advantages and challenges. To address the instability of stochastic gradient descent, researchers have proposed a series of improvements, including momentum, AdaGrad, AdaDelta, RMSProp, and Adam. Momentum reduces oscillation by incorporating historical information, while AdaGrad, RMSProp, and Adam provide adaptive learning rates. The article also discusses how the different optimization methods affect training stability, generalization, and hyperparameter tuning.

Question (155): When the training data set is very large, what problems does classical gradient descent have, and how should it be improved?
Question (158): The reasons why stochastic gradient descent can fail.
Question (160): What modifications have researchers made to improve stochastic gradient descent? What variant methods have been proposed, and what are their characteristics? That is, the variants of SGD.

Key point 1: gradient descent, stochastic gradient descent, mini-batch gradient descent

(Batch) gradient descent

Advantages

  • exact gradient of the training loss (no sampling noise)
  • non-increasing trajectory if the loss function is convex

Disadvantages

  • may converge to a local minimum for non-convex problems
  • each update requires a full pass over the training set, which becomes very expensive when the data set is large (see the sketch below)
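
For concreteness, here is a minimal NumPy sketch of full-batch gradient descent on least-squares linear regression; the model, loss, and settings are illustrative assumptions rather than anything prescribed above:

```python
import numpy as np

def batch_gd(X, y, alpha=0.1, iters=1000):
    """Full-batch gradient descent on mean squared error 0.5 * ||X @ theta - y||^2 / n."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n   # exact gradient over the entire training set
        theta -= alpha * grad              # each update costs a full pass over the data
    return theta
```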

Stochastic gradient descent

(Pseudo-code of SGD [2])
· Choose an initial parameter vector θ and learning rate α.
· Repeat until an approximate minimum is obtained:
	· Randomly shuffle the examples in the training set.
	· For i = 1, 2, ..., n, do:
		· θ := θ − α ∂L(θ, z_i)/∂θ.
  • SGD: randomly select one training sample at each iteration to update the parameters (a minimal sketch follows below).
  • online gradient descent: use the most recent sample at each iteration; the samples may not be IID.
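
To make the update rule above concrete, here is a minimal NumPy sketch of SGD for the same least-squares setting; the per-sample squared loss and all settings are illustrative assumptions:

```python
import numpy as np

def sgd(X, y, alpha=0.01, epochs=50, seed=0):
    """Plain SGD on the per-sample squared loss L(theta, z_i) = 0.5 * (x_i @ theta - y_i)**2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)                          # initial parameter vector
    for _ in range(epochs):
        for i in rng.permutation(n):             # randomly shuffle the training set
            grad = (X[i] @ theta - y[i]) * X[i]  # gradient of the per-sample loss
            theta -= alpha * grad                # theta := theta - alpha * dL/dtheta
    return theta
```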

Advantages

  • The noisy update process can allow the model to avoid local minima [5]

Challenges

  • ravines / narrow and tall level curves (contours)
    Reason: in a ravine, the accurate gradient direction points down along the valley floor, and even a small deviation sends the iterate into a valley wall; the rough gradient estimate therefore makes it bounce back and forth between the two walls instead of descending quickly along the valley, causing unstable and slow convergence [1].
  • saddle point
    Test: this happens when at least one eigenvalue of the Hessian matrix is negative and the rest of the eigenvalues are positive [4] (a small numerical check follows this list).
    Reason: in regions where the gradient is nearly zero, SGD cannot reliably detect the small changes in the gradient and therefore stalls.
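
As a quick numerical illustration of the Hessian eigenvalue test above, the sketch below checks the origin of f(x, y) = x² − y², a textbook saddle point (the example function is an assumption chosen for illustration):

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2 at the origin
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eigvals = np.linalg.eigvalsh(H)      # eigenvalues of the symmetric Hessian
is_saddle = (eigvals > 0).any() and (eigvals < 0).any()
print(eigvals, is_saddle)            # [-2.  2.] True -> mixed signs, so a saddle point
```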

Disadvantages

  • requires a learning-rate decay schedule, since with a fixed step size the gradient noise keeps the iterate oscillating around the minimum

Mini-batch gradient descent

Practical issues

  • tuning of the batch size: it is usually chosen as a power of 2, such as 32, 64, 128, 256, 512, etc. The reason is that with common batch sizes that are powers of 2, hardware such as GPUs achieves better run time and memory utilisation [4,5].

    • Tip: A good default for batch size might be 32 [5].

    … [batch size] is typically chosen between 1 and a few hundreds, e.g. [batch size] = 32 is a good default value, with values above 10 taking advantage of the speedup of matrix-matrix products over matrix-vector products [6].

    • key considerations: training stability and generalization
  • tuning of the learning rate: a decaying learning-rate schedule is usually adopted: the algorithm starts with a relatively large learning rate, and once the error curve reaches a plateau, the learning rate is reduced for finer adjustment [1].

  • Tip: Tune batch size and learning rate after tuning all other hyperparameters [5].

  • Practitioners’ experience: batch size = 32; no more than 2-10 epochs

Advantages
compared with batch GD

  • with a batch size smaller than the full training set, it adds noise to the learning process, which helps improve generalization [4]

compared with SGD

  • reduces the variance of the gradient estimate; however, the reduction is less than linear in the additional computation incurred [4]
  • takes advantage of efficient vectorised matrix operations [3]

Disadvantages

  • wanders around the minimum without converging exactly, due to the randomness of sampling
  • as a result, requires a learning-rate decay schedule that shrinks the step size as the iterate approaches the minimum (see the sketch after this list)
  • introduces an additional hyperparameter: the batch size
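
A minimal NumPy sketch of mini-batch gradient descent with a simple learning-rate decay, again on least-squares linear regression; the batch size of 32, the 1/(1 + decay·t) schedule, and the other settings are illustrative assumptions:

```python
import numpy as np

def minibatch_gd(X, y, batch_size=32, alpha0=0.1, decay=1e-3, epochs=100, seed=0):
    """Mini-batch GD on mean squared error with a 1 / (1 + decay * t) learning-rate decay."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    step = 0
    for _ in range(epochs):
        order = rng.permutation(n)                        # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]       # indices of the current mini-batch
            Xb, yb = X[batch], y[batch]
            grad = Xb.T @ (Xb @ theta - yb) / len(batch)  # average gradient over the batch
            alpha = alpha0 / (1.0 + decay * step)         # decaying learning rate
            theta -= alpha * grad
            step += 1
    return theta
```

Reshuffling once per epoch and averaging the gradient over each mini-batch keeps the estimate unbiased while still benefiting from vectorised matrix products.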

Additional practical issues and tips

  • in low dimensions, local minima are common; in high dimensions, saddle points are more common

Tip: scale the features if they are on very different scales; otherwise the level curves (contours) may be narrow and tall, and convergence will take longer.
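
A minimal sketch of the feature scaling suggested in the tip above, standardising each feature to zero mean and unit variance (the helper name and details are assumptions):

```python
import numpy as np

def standardize(X):
    """Scale each feature to zero mean and unit variance so the loss contours are rounder."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant features
    return (X - mu) / sigma
```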

Key point 2: momentum, adaptive learning rate

The momentum method

The essence of momentum is to incorporate historical information into the parameter update [9].

Intuitive understanding of why momentum methods can address the two problems of SGD

  • The ravine problem: the downhill force is steady and unchanging, so the momentum it generates keeps accumulating and the iterate moves faster and faster; the sideways bouncing forces keep switching direction, so their accumulated momentum cancels out, damping the back-and-forth oscillation.
  • The saddle-point problem: inertia keeps the iterate moving forward
    $$\begin{aligned} \theta_{t+1} &= \theta_t + v_{t+1} \\ v_{t+1} &= \gamma v_t - \alpha \nabla f(\theta_t) \end{aligned}$$
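
A minimal NumPy sketch of the momentum update above, applied to the same least-squares example; γ = 0.9 and the other settings are illustrative assumptions:

```python
import numpy as np

def momentum_sgd(X, y, alpha=0.01, gamma=0.9, epochs=50, seed=0):
    """SGD with momentum: v <- gamma * v - alpha * grad; theta <- theta + v."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    v = np.zeros(d)                               # accumulated velocity
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ theta - y[i]) * X[i]   # per-sample gradient of the squared loss
            v = gamma * v - alpha * grad          # v_{t+1} = gamma * v_t - alpha * grad
            theta = theta + v                     # theta_{t+1} = theta_t + v_{t+1}
    return theta
```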