What to Do When Your Neural Network Won't Train (Part 3)
When the loss stops decreasing, training is not necessarily stuck at a critical point: the gradient can still be large while the loss plateaus, for example when the update keeps oscillating back and forth across the walls of a valley.
Different parameters need different learning rates
- Where the error surface is flat, a larger learning rate is needed
- Where the error surface is steep, a smaller learning rate is needed
The vanilla update $\theta_i^{t+1} \leftarrow \theta_i^t - \eta\, g_i^t$ is changed to $\theta_i^{t+1} \leftarrow \theta_i^t - \dfrac{\eta}{\sigma_i^t}\, g_i^t$, so each parameter $\theta_i$ gets its own effective learning rate $\eta/\sigma_i^t$.
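A minimal sketch of one such step (the names `adaptive_step` and `sigma` are illustrative, not from the lecture); the element-wise division is what gives every parameter its own step size:

```python
import numpy as np

# One update step with a parameter-wise learning rate: element-wise
# division gives every parameter i its own effective step eta / sigma[i].
def adaptive_step(theta, grad, sigma, eta=0.01):
    return theta - eta / sigma * grad

theta = np.array([1.0, 1.0])
grad = np.array([0.001, 10.0])   # flat direction vs. steep direction
sigma = np.array([0.001, 10.0])  # small sigma -> big step, big sigma -> small step
print(adaptive_step(theta, grad, sigma))  # both parameters move by 0.01
```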
Root Mean Square
$\sigma_i^t=\sqrt{\dfrac{1}{t+1}\sum_{\tau=0}^{t}\left(g_i^{\tau}\right)^2}$ (the root mean square of all past gradients of $\theta_i$; used in Adagrad)
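A sketch of this $\sigma$ in code (function name is assumed; this follows the lecture's mean-of-squares form rather than the plain sum of squares some Adagrad implementations use):

```python
import numpy as np

def adagrad_sigma(grad_history):
    # grad_history: past gradients g_i^0 ... g_i^t of a single parameter;
    # sigma_i^t is their root mean square.
    return np.sqrt(np.mean(np.square(grad_history)))

# Consistently small gradients -> small sigma -> large effective rate eta / sigma,
# and vice versa, matching the flat-vs-steep intuition above.
print(adagrad_sigma([0.01, 0.02, 0.01]))  # ~0.014
print(adagrad_sigma([5.0, 8.0, 6.0]))     # ~6.45
```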
RMSProp
$\sigma_i^t=\sqrt{\alpha\left(\sigma_i^{t-1}\right)^2+(1-\alpha)\left(g_i^t\right)^2}$, where $0<\alpha<1$ is a hyperparameter controlling how much the newest gradient counts relative to the history
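A sketch of an RMSProp-style loop (names and the small `eps` safeguard are my additions) that keeps this running average of squared gradients:

```python
import numpy as np

def rmsprop(theta, grad_fn, eta=0.01, alpha=0.9, steps=500, eps=1e-8):
    sigma_sq = None  # running average of squared gradients, (sigma_i^t)^2
    for _ in range(steps):
        g = grad_fn(theta)
        if sigma_sq is None:
            sigma_sq = g ** 2  # sigma_i^0 = |g_i^0|
        else:
            # alpha weights the old average against the newest squared gradient
            sigma_sq = alpha * sigma_sq + (1 - alpha) * g ** 2
        theta = theta - eta / (np.sqrt(sigma_sq) + eps) * g
    return theta

# Minimizing f(theta) = theta_1^2 + theta_2^2; the gradient is 2 * theta.
print(rmsprop(np.array([3.0, -2.0]), lambda th: 2.0 * th))
```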
Adam: RMSProp + Momentum
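Momentum supplies the numerator (a moving average of gradients, which keeps their direction) and RMSProp supplies the denominator. A sketch of one Adam step in its standard published form (the bias correction and `eps` come from the Adam paper, not this note):

```python
import numpy as np

def adam_step(theta, g, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g       # Momentum: moving average of gradients
    v = beta2 * v + (1 - beta2) * g ** 2  # RMSProp: moving average of squared gradients
    m_hat = m / (1 - beta1 ** t)          # bias correction (t counts from 1)
    v_hat = v / (1 - beta2 ** t)
    return theta - eta * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In practice one normally calls a library implementation instead, e.g. `torch.optim.Adam(model.parameters(), lr=1e-3)`.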
Learning rate scheduling
Two common strategies: Learning rate decay (gradually shrink $\eta$ as training approaches the minimum) and Warm up (start with a small $\eta$, increase it first, then decay).
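A sketch of both schedules as functions of the step $t$ (the specific formulas and constants are illustrative; many variants exist):

```python
def decay_lr(t, eta0=0.1, k=0.01):
    # Learning rate decay: eta shrinks as training goes on.
    return eta0 / (1.0 + k * t)

def warmup_lr(t, eta_max=0.1, warmup_steps=1000):
    # Warm up: increase eta linearly from 0 to eta_max, then hold.
    return eta_max * min(1.0, t / warmup_steps)

for t in (0, 100, 1000, 10000):
    print(t, round(decay_lr(t), 4), round(warmup_lr(t), 4))
```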