Question (155): When the training set is very large, what problems does classic gradient descent run into, and how should it be improved?
Question (158): Why does stochastic gradient descent sometimes fail?
Question (160): To improve stochastic gradient descent, what modifications have researchers made? Which variant methods have been proposed, and what are their characteristics? (i.e., the variants of SGD)
Key point 1: gradient descent, stochastic gradient descent, mini-batch gradient descent
(Batch) gradient descent
Advantages
- unbiased estimate of gradients
- non-increasing trajectory if the loss function is convex
Disadvantages
- may result in local minima for non-convex problems
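A minimal runnable sketch of batch gradient descent on a convex least-squares loss, where the full-batch gradient is computed from all samples at every step. The toy data, learning rate, and iteration count are illustrative assumptions, not recommendations.

```python
import numpy as np

# Batch gradient descent on L(w) = ||Xw - y||^2 / (2n).
# Hypothetical noiseless toy data; alpha and the iteration count are
# illustrative choices for this sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
alpha = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient over the FULL training set
    w -= alpha * grad
```

Because the loss is convex, the trajectory is non-increasing for a suitably small step size, and `w` converges to `w_true`; the cost is one pass over all `n` samples per update, which is exactly what becomes prohibitive for very large data sets.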
Stochastic gradient descent
(Pseudo-code of SGD [2])
· Choose an initial parameter vector θ and learning rate α.
· Repeat until an approximate minimum is obtained:
· Randomly shuffle the examples in the training set.
· For i = 1, 2, ..., n, do:
· θ := θ − α ∂L(θ, z_i)/∂θ.
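The pseudocode above can be sketched as runnable code on a per-sample squared loss L(θ, (x_i, y_i)) = (x_iᵀθ − y_i)²/2; the toy data and learning rate α are illustrative assumptions.

```python
import numpy as np

# SGD following the pseudocode: shuffle, then update on ONE sample at a time.
# Toy noiseless linear data; alpha and the epoch count are illustrative.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
theta_true = np.array([3.0, -1.0])
y = X @ theta_true

theta = np.zeros(2)
alpha = 0.05
for epoch in range(20):
    idx = rng.permutation(len(y))             # randomly shuffle the training set
    for i in idx:                             # one sample per parameter update
        grad_i = (X[i] @ theta - y[i]) * X[i] # gradient of the single-sample loss
        theta -= alpha * grad_i
```

Each update costs O(d) instead of O(nd), which is why SGD scales to large data sets; the price is a noisy gradient estimate, discussed under Challenges below.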
- SGD: randomly select one training sample at each iteration to update the coefficients.
- online gradient descent: use the most recent sample at each iteration; the samples may not be IID.
Advantages
- The noisy update process can allow the model to avoid local minima [5]
Challenges
- valleys / narrow and tall level curves (contours) – see figure below
Reason: in a valley, the accurate gradient direction points down along the valley floor, and even a slight deviation runs into a valley wall. The noisy gradient estimate makes the iterate bounce back and forth between the two walls rather than descend quickly along the valley floor, causing unstable and slow convergence [1].
- saddle point
Test: this happens when at least one eigenvalue of the Hessian matrix is negative and the rest are positive [4].
Reason: in regions where the gradient is nearly zero, SGD cannot reliably detect the small changes in the gradient, so it stalls.
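The eigenvalue test above can be checked on the classic illustrative saddle f(x, y) = x² − y², whose critical point at the origin has a Hessian with one positive and one negative eigenvalue:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 at the origin: diag(2, -2).
# One negative and one positive eigenvalue -> saddle point, not a minimum.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
eigvals = np.linalg.eigvalsh(H)   # returned in ascending order
```

At such a point the gradient vanishes, yet moving along the eigenvector of the negative eigenvalue still decreases f, which is exactly the direction a stalled SGD iterate fails to find.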
Disadvantages
- requires learning-rate decay: with a fixed learning rate, the noisy updates keep the iterates oscillating instead of converging
Mini-batch gradient descent
Practical issues
- tuning of batch size: it is usually chosen as a power of 2, such as 32, 64, 128, 256, 512, etc. The reason is that with such batch sizes, hardware such as GPUs achieves better run time and fits its memory requirements [4,5].
- Tip: a good default for batch size might be 32 [5].
- "… [batch size] is typically chosen between 1 and a few hundreds, e.g. [batch size] = 32 is a good default value, with values above 10 taking advantage of the speedup of matrix-matrix products over matrix-vector products" [6].
- key considerations: training stability and generalization
- tuning of learning rate: a decaying learning-rate schedule is typically used: start with a relatively large learning rate, then, once the error curve reaches a plateau, reduce the learning rate for finer adjustments [1].
- Tip: tune batch size and learning rate after tuning all other hyperparameters [5].
- Practitioners' experience: batch size = 32; no more than 2–10 epochs
Advantages
compared with batch GD
- with batch size smaller than total size, it adds noise to the learning process that helps improve generalization ability [4]
compared with SGD
- reduce the variance of gradient; however, the return is less than linear compared to the computational burden we incur [4]
- takes advantage of efficient vectorisation of matrix operations [3]
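Both points above can be sketched together: each update averages the gradient over a batch, reducing its variance relative to SGD, and the per-batch gradient is one vectorised matrix product. The toy data, `alpha`, and `batch_size` are illustrative assumptions.

```python
import numpy as np

# Mini-batch gradient descent on least squares; each update uses one
# shuffled batch, computed as a single matrix-matrix product.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
w_true = rng.normal(size=4)
y = X @ w_true

w = np.zeros(4)
alpha, batch_size = 0.1, 32
for epoch in range(50):
    idx = rng.permutation(len(y))                   # shuffle each epoch
    for start in range(0, len(y), batch_size):
        b = idx[start:start + batch_size]
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)  # gradient averaged over the batch
        w -= alpha * grad
```

Averaging over a batch of size m divides the gradient variance by m, but the standard error shrinks only as 1/√m, which is the "less than linear" return noted above [4].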
Disadvantages
- wanders around the minimum but never converges, due to the randomness in sampling
- as a result of the above, requires learning-rate decay to shrink the learning rate when approaching the minimum
- additional hyperparameter – batch size
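The learning-rate decay mentioned above can be sketched with two common schedules; the constants (`alpha0`, `drop`, `k`) are illustrative assumptions, not recommendations.

```python
def step_decay(alpha0, epoch, drop=0.5, epochs_per_drop=10):
    """Multiply the learning rate by `drop` every `epochs_per_drop` epochs."""
    return alpha0 * drop ** (epoch // epochs_per_drop)

def inverse_time_decay(alpha0, t, k=0.01):
    """alpha_t = alpha0 / (1 + k*t): a classic schedule for SGD convergence."""
    return alpha0 / (1.0 + k * t)

print(step_decay(0.1, 25))           # 0.1 * 0.5**2 = 0.025
print(inverse_time_decay(0.1, 100))  # 0.1 / (1 + 1) = 0.05
```

Step decay matches the practice described earlier (large rate first, then finer adjustments after the plateau), while inverse-time decay shrinks the rate smoothly with the iteration count.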
Additional practical issues and tips
- in low dimensions, local minima are common; in high dimensions, saddle points are more common
Tip: scale the data if the features are on very different scales, as the level curves (contours) may otherwise be narrow and tall, taking longer to converge.
![](https://i-blog.csdnimg.cn/blog_migrate/5a2e720b43c6866018fd27107c9fb51c.png)
Key point 2: momentum, adaptive learning rate
Momentum methods
The essence of momentum is to incorporate historical information into the parameter update [9].
Intuitive understanding of why momentum methods can address the two problems of SGD
- valley problem: the downhill force is steady, so the accumulated momentum keeps building and the iterate speeds up; the sideways forces keep flipping direction, so their momentum contributions cancel out, damping the back-and-forth oscillation.
- saddle-point problem: inertia keeps the iterate moving forward.
$$
\begin{aligned}
\theta_{t+1} &= \theta_t + v_{t+1} \\
v_{t+1} &= \gamma v_t - \alpha \nabla f(\theta_t),
\end{aligned}
$$
where $v_t$ is the velocity, $\gamma$ the momentum coefficient, and $\alpha$ the learning rate.
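The update equations above can be sketched on a toy "narrow valley" quadratic f(θ) = θᵀAθ/2 with an ill-conditioned A; the choices of A, γ, and α here are illustrative assumptions.

```python
import numpy as np

# Momentum (heavy-ball) update: v_{t+1} = gamma*v_t - alpha*grad(theta_t),
# theta_{t+1} = theta_t + v_{t+1}, on an ill-conditioned quadratic whose
# contours form a narrow valley (curvatures 1 and 100).
A = np.diag([1.0, 100.0])
grad = lambda theta: A @ theta

theta = np.array([1.0, 1.0])
v = np.zeros(2)
gamma, alpha = 0.9, 0.005
for _ in range(500):
    v = gamma * v - alpha * grad(theta)  # velocity accumulates past gradients
    theta = theta + v
```

Along the low-curvature valley floor the gradient keeps pointing the same way, so the velocity accumulates; across the valley the gradient alternates sign, so the accumulated terms cancel, exactly the intuition in the two bullets above.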