Optimization Algorithms in Deep Learning

Traditionally, machine learning sidesteps the difficulty of general-purpose optimization by carefully designing the objective function and constraints so that the optimization problem is convex; deep learning, by contrast, generally has to face non-convex problems.
"Optimization" is a broad term, since there are both first-order and second-order optimization algorithms. The optimizers we use day to day, such as Adam and SGD, are all first-order (gradient-based) methods, and first-order methods are what this post covers.

Today's first-order optimization algorithms are all batch- or minibatch-based, so let's start with minibatches.


batch and minibatch

Suppose we are doing maximum likelihood estimation. We need to optimize:

θ_ML = argmax_θ Σ_{i=1..m} log p_model(x^(i), y^(i); θ)

which is equivalent to maximizing an expectation over the empirical distribution:

J(θ) = E_{(x,y) ~ p̂_data} [ log p_model(x, y; θ) ]

To optimize this, the natural quantity to compute is the gradient:

∇_θ J(θ) = E_{(x,y) ~ p̂_data} [ ∇_θ log p_model(x, y; θ) ]
Computing this expectation exactly means evaluating the gradient on every single example, which is far too expensive for a large dataset (say, a million examples). A common workaround is random sampling:

In practice, we can compute these expectations by randomly sampling a small number of examples from the dataset, then taking the average over only those examples.

In other words, we substitute an approximate gradient estimate for the exact gradient. The computational cost is obviously much lower, and it does not grow with the size of the training set.
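As a rough sketch of this idea in NumPy (grad_fn is a hypothetical function that returns the average gradient over the examples it is given; it is not part of any particular library):

```python
import numpy as np

def minibatch_gradient(grad_fn, X, y, batch_size=128, rng=None):
    """Estimate the expected gradient from a randomly sampled minibatch.

    grad_fn(X_batch, y_batch) is assumed to return the average gradient
    over the examples it receives.
    """
    rng = rng or np.random.default_rng()
    # Sample a small set of example indices uniformly at random.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    # The average over the minibatch is an unbiased estimate of the
    # average over the full training set.
    return grad_fn(X[idx], y[idx])
```

Averaged over many draws, this estimate matches the full-dataset gradient, while its per-step cost is fixed by batch_size.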

batch gradient descent

Optimization algorithms that use the entire training set are called batch or deterministic gradient methods, because they process all of the training examples simultaneously in a large batch.

That is, the gradient is computed on the entire training set.

SGD(stochastic gradient descent)

Optimization algorithms that use only a single example at a time are sometimes called stochastic or sometimes online methods.

minibatch SGD

SGD and batch GD are the two extremes: one computes the gradient from a single training example at a time, the other from all of them. Batch GD is too expensive, while pure SGD is too noisy, so a compromise between the two is needed.

using more than one but less than all of the training examples. These were traditionally called minibatch or minibatch stochastic methods and it is now common to simply call them stochastic methods.

One crucial point: the examples in a minibatch must be selected at random.

It is also crucial that the minibatches be selected randomly. Computing an unbiased estimate of the expected gradient from a set of samples requires that those samples be independent. We also wish for two subsequent gradient estimates to be independent from each other, so two subsequent minibatches of examples should also be independent from each other.

This is just to distinguish the batch and minibatch concepts; the "stochastic" methods people talk about today are in practice minibatch methods.

SGD

SGD and its variants are the most commonly used first-order optimization algorithms. Concretely, each step samples a minibatch of m examples, computes the gradient estimate

g = ∇_θ (1/m) Σ_{i=1..m} L(f(x^(i); θ), y^(i))

and applies the update θ ← θ − lr·g (in practice the learning rate lr is usually decayed over the course of training).
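A minimal sketch of that update (an illustrative standalone function operating on plain NumPy arrays, not any particular framework's API):

```python
def sgd_step(theta, grad, lr=0.01):
    """Plain SGD: move against the minibatch gradient estimate.

    theta <- theta - lr * grad
    """
    return theta - lr * grad
```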

momentum

The problem with SGD is that it can be slow, so momentum is used to accelerate learning.

The method of momentum (Polyak, 1964) is designed to accelerate learning, especially in the face of high curvature, small but consistent gradients, or noisy gradients.

The idea behind momentum:

The momentum algorithm accumulates an exponentially decaying moving average of past gradients and continues to move in their direction.

A parameter α ∈ [0, 1) is introduced; its role:

determines how quickly the contributions of previous gradients exponentially decay.

The update rule:

v ← α·v − lr·g, where g = ∇_θ (1/m) Σ_{i=1..m} L(f(x^(i); θ), y^(i))
θ ← θ + v

where L is the loss function and v is the velocity.
Without momentum, the step size is step_size = lr·||g|| (the gradient scaled by the learning rate). With momentum, the step size depends on two factors:

  • how large a sequence of gradients are.
  • how aligned a sequence of gradients are.

To understand "aligned", think of it in physical terms: when all the objects in a system move in the same direction, the total momentum of the system is largest.

When successive gradients are perfectly aligned with a constant norm ||g||, the step size approaches its terminal value step_size_with_momentum = lr·||g|| / (1 − α). A typical value is α = 0.9, which corresponds to a tenfold speedup over plain gradient descent.

CS231n gives a more intuitive explanation (link here). Roughly:

The loss value can be interpreted as the height of hilly terrain, and initializing the parameters with random numbers is equivalent to placing a particle at some location with zero initial velocity. The optimization process can then be seen as simulating the parameter vector (the particle) rolling across the terrain, where the force acting on the particle is the negative gradient of the loss. In a plain update (e.g. SGD) the gradient directly changes the position; with momentum, the gradient changes the velocity, and the velocity changes the position.

SGD with momentum
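A minimal sketch of the momentum update above (v is the velocity state carried between steps; all names are illustrative):

```python
def momentum_step(theta, v, grad, lr=0.01, alpha=0.9):
    """Classical momentum: accumulate an exponentially decaying
    moving average of past gradients and keep moving in that direction.

    v     <- alpha * v - lr * grad
    theta <- theta + v
    """
    v = alpha * v - lr * grad
    theta = theta + v
    return theta, v
```

The state is threaded through successive calls, e.g. theta, v = momentum_step(theta, v, grad).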

Nesterov momentum

Nesterov momentum is very similar to standard momentum; the only difference is where the gradient is evaluated (at θ + α·v), so it can be viewed as adding a correction factor to standard momentum.

For the momentum update above, θ ← θ + v is the same as θ + α·v − lr·g (a velocity step followed by a gradient step).
Here θ + α·v can be treated as an approximation of the future position, so we can simply evaluate the gradient at that future (look-ahead) position. The core of Nesterov momentum is to compute the gradient at the look-ahead position rather than at the current one.

The update rule:

g = ∇_θ (1/m) Σ_{i=1..m} L(f(x^(i); θ + α·v), y^(i))   (gradient evaluated at the look-ahead point)
v ← α·v − lr·g
θ ← θ + v

The intuition (the CS231n figure compares the two updates):

Nesterov evaluates the gradient directly at the look-ahead position (the point the green arrow, i.e. the velocity step, carries us to), rather than at the current position as standard momentum does.

SGD with nesterov momentum
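A minimal sketch of Nesterov momentum, assuming a hypothetical grad_fn(theta) that returns the minibatch gradient at a given parameter vector:

```python
def nesterov_step(theta, v, grad_fn, lr=0.01, alpha=0.9):
    """Nesterov momentum: evaluate the gradient at the look-ahead
    point theta + alpha * v instead of at theta itself."""
    lookahead_grad = grad_fn(theta + alpha * v)  # gradient at the interim position
    v = alpha * v - lr * lookahead_grad
    theta = theta + v
    return theta, v
```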

AdaGrad

AdaGrad is an adaptive-learning-rate optimization algorithm. It

individually adapts the learning rates of all model parameters by scaling them inversely proportional to the square root of the sum of all of their historical squared values.

Parameters that contribute large gradients to the loss have their learning rates decay quickly:

The parameters with the largest partial derivative of the loss have a correspondingly rapid decrease in their learning rate, while parameters with small partial derivatives have a relatively small decrease in their learning rate.

Empirically, though:

the accumulation of squared gradients from the beginning of training can result in a premature and excessive decrease in the effective learning rate.

For this reason AdaGrad is not actually used all that much in practice.
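For reference, a minimal sketch of the AdaGrad update (r is the per-parameter accumulator of squared gradients; eps is the usual small constant added for numerical stability):

```python
import numpy as np

def adagrad_step(theta, r, grad, lr=0.01, eps=1e-7):
    """AdaGrad: scale each parameter's step by the inverse square root
    of the accumulated sum of its historical squared gradients."""
    r = r + grad ** 2                              # full-history accumulation
    theta = theta - lr * grad / (np.sqrt(r) + eps)
    return theta, r
```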

RMSprop

RMSProp is a modification of AdaGrad and is likewise adaptive; it replaces the full gradient accumulation with an exponentially weighted moving average.

The difference between the two:

  1. AdaGrad shrinks the learning rate according to the entire history of the squared gradient and may have made the learning rate too small before arriving at such a convex structure.
  2. RMSProp uses an exponentially decaying average to discard history from the extreme past so that it can converge rapidly after finding a convex bowl.
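A minimal sketch of the RMSProp update; it differs from the AdaGrad sketch above only in how r is maintained (rho is the decay rate, and the default values here are illustrative):

```python
import numpy as np

def rmsprop_step(theta, r, grad, lr=0.001, rho=0.9, eps=1e-6):
    """RMSProp: replace AdaGrad's full-history sum with an
    exponentially decaying average of squared gradients."""
    r = rho * r + (1.0 - rho) * grad ** 2
    theta = theta - lr * grad / np.sqrt(r + eps)
    return theta, r
```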

RMSProp with nesterov momentum

Nesterov momentum can be combined with RMSProp:
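A minimal sketch of that combination, again assuming a hypothetical grad_fn(theta) for the minibatch gradient (the eps inside the square root is an added stability constant, not part of the textbook formulation):

```python
import numpy as np

def rmsprop_nesterov_step(theta, v, r, grad_fn, lr=0.001, alpha=0.9, rho=0.9, eps=1e-6):
    """RMSProp with Nesterov momentum: gradient taken at the look-ahead
    point, velocity update rescaled by the squared-gradient average."""
    g = grad_fn(theta + alpha * v)          # gradient at the interim position
    r = rho * r + (1.0 - rho) * g ** 2      # decaying average of squared gradients
    v = alpha * v - lr * g / np.sqrt(r + eps)
    theta = theta + v
    return theta, v, r
```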

Adam

Adam is likewise an adaptive-learning-rate optimization algorithm; it and SGD roughly split usage between them.
Adam can be viewed as a variant of RMSProp plus momentum, with a few key differences:

  1. First, in Adam, momentum is incorporated directly as an estimate of the first order moment (with exponential weighting) of the gradient.
  2. Second, Adam includes bias corrections to the estimates of both the first-order moments (the momentum term) and the (uncentered) second-order moments to account for their initialization at the origin.

The algorithm, in outline:
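A minimal sketch of the Adam update (s and r are the first- and second-moment estimates, t is the step count starting at 1, and beta1, beta2, eps are the commonly used default hyperparameters):

```python
import numpy as np

def adam_step(theta, s, r, grad, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: exponentially weighted first- and second-moment estimates
    of the gradient, with bias correction for their zero initialization."""
    s = beta1 * s + (1.0 - beta1) * grad            # first moment (momentum-like)
    r = beta2 * r + (1.0 - beta2) * grad ** 2       # second moment (RMSProp-like)
    s_hat = s / (1.0 - beta1 ** t)                  # bias correction
    r_hat = r / (1.0 - beta2 ** t)
    theta = theta - lr * s_hat / (np.sqrt(r_hat) + eps)
    return theta, s, r
```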

Summary

  1. Optimization algorithms come in first-order and second-order varieties.
  2. Almost all commonly used optimizers are first-order algorithms, e.g. SGD, Adam, AdaGrad, and RMSProp.
  3. Second-order algorithms such as Newton's method, BFGS, and L-BFGS are rarely used because of their computational cost.
  4. The most widely used first-order optimizers are SGD and Adam.