The loss function is given by:
$$
L=\sum_{j=1}^{n}\left(\hat y^{\,j}-\Big(b+\sum_{i=1}^{m}w_i x_i^{\,j}\Big)\right)^{2}\tag{1}
$$
where $n$ is the number of samples and $m$ is the number of features.
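As a quick sanity check on equation (1), here is a minimal NumPy sketch that evaluates this sum-of-squared-errors loss. The names `X`, `y_hat`, `w`, and `b` are illustrative assumptions, not taken from the original text.

```python
import numpy as np

def squared_error_loss(X, y_hat, w, b):
    """Loss (1): sum over samples of (y_hat_j - (b + w . x_j))^2.

    X     : (n, m) array, n samples with m features each (assumed layout)
    y_hat : (n,)   array of target values
    w     : (m,)   array of feature weights
    b     : scalar bias
    """
    predictions = X @ w + b          # b + sum_i w_i * x_i^j for every sample j
    residuals = y_hat - predictions  # per-sample error
    return np.sum(residuals ** 2)    # sum of squared errors over all n samples
```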
- **Ordinary gradient descent (Gradient Descent):** the update uses the gradient of the full loss (a code sketch of both update rules follows this list):

$$
\theta^{i}=\theta^{i-1}-\eta\,\nabla L(\theta^{i-1})\tag{2}
$$
- **Stochastic Gradient Descent (SGD):** each update uses only a single sample $j$:

$$
L^{j}=\left(\hat y^{\,j}-\Big(b+\sum_{i=1}^{m}w_i x_i^{\,j}\Big)\right)^{2}\tag{3}
$$

$$
\theta^{i}=\theta^{i-1}-\eta\,\nabla L^{j}(\theta^{i-1})\tag{4}
$$
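To make the difference between updates (2) and (4) concrete, here is a minimal sketch assuming the same data layout as the loss sketch above: ordinary gradient descent computes the gradient over all $n$ samples, while SGD uses a single randomly chosen sample per step. The helper names and the learning rate `eta` are assumptions for illustration.

```python
import numpy as np

def grad_full(X, y_hat, w, b):
    """Gradient of the full loss (1) with respect to (w, b)."""
    residuals = y_hat - (X @ w + b)      # shape (n,)
    grad_w = -2.0 * X.T @ residuals      # dL/dw, uses every sample
    grad_b = -2.0 * np.sum(residuals)    # dL/db
    return grad_w, grad_b

def grad_single(X, y_hat, w, b, j):
    """Gradient of the single-sample loss L^j (3) with respect to (w, b)."""
    residual = y_hat[j] - (X[j] @ w + b)  # scalar error of sample j only
    grad_w = -2.0 * residual * X[j]
    grad_b = -2.0 * residual
    return grad_w, grad_b

def gd_step(X, y_hat, w, b, eta=0.01):
    """One ordinary gradient-descent update, rule (2)."""
    gw, gb = grad_full(X, y_hat, w, b)
    return w - eta * gw, b - eta * gb

def sgd_step(X, y_hat, w, b, eta=0.01, rng=None):
    """One stochastic update, rule (4): one randomly chosen sample j."""
    if rng is None:
        rng = np.random.default_rng()
    j = rng.integers(len(y_hat))
    gw, gb = grad_single(X, y_hat, w, b, j)
    return w - eta * gw, b - eta * gb
```

In this sketch a single SGD step touches only one row of `X`, which is why each parameter update is so much cheaper than a full gradient-descent step.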
The figure below compares stochastic gradient descent with ordinary gradient descent:
Advantages:
(1) Because each iteration optimizes the loss on only one randomly chosen training sample rather than on the entire training set, each parameter update is much faster.
Disadvantages:
(1) Lower accuracy: even when the objective function is strongly convex, SGD still cannot achieve linear convergence.
(2) It may converge to a local optimum, since a single sample does not reflect the trend of the whole dataset.
(3) It is not easy to parallelize.