4. Gradient Descent
$\theta^{\ast} = \arg\underset{\theta}{\min}\, L(\theta)$
$L$: loss function
$\theta$: parameters
Gradient descent proceeds as follows:
1. Take two parameters $\{\theta_{1}, \theta_{2}\}$.
2. Randomly pick an initial position $\theta^{0} = \begin{bmatrix} \theta_{1}^{0}\\ \theta_{2}^{0} \end{bmatrix}$.
3. Apply one gradient descent update to obtain the new position $\theta^{1}$:
$\begin{bmatrix} \theta_{1}^{1}\\ \theta_{2}^{1} \end{bmatrix} = \begin{bmatrix} \theta_{1}^{0}\\ \theta_{2}^{0} \end{bmatrix} - \eta \begin{bmatrix} \partial L(\theta^{0})/\partial\theta_{1}\\ \partial L(\theta^{0})/\partial\theta_{2} \end{bmatrix}$, where $\eta$ is the learning rate and the last vector holds the partial derivatives of the loss function at $\theta^{0}$.
4. Repeat the same step to obtain $\theta^{2}, \theta^{3}, \dots, \theta^{n}$:
$\begin{bmatrix} \theta_{1}^{2}\\ \theta_{2}^{2} \end{bmatrix} = \begin{bmatrix} \theta_{1}^{1}\\ \theta_{2}^{1} \end{bmatrix} - \eta \begin{bmatrix} \partial L(\theta^{1})/\partial\theta_{1}\\ \partial L(\theta^{1})/\partial\theta_{2} \end{bmatrix}$
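The iteration above can be sketched in a few lines of Python. The quadratic loss $L(\theta_1, \theta_2) = (\theta_1 - 3)^2 + (\theta_2 + 1)^2$ and its analytic gradient are hypothetical choices made only so the update rule has something concrete to minimize; the notes do not fix a specific loss function.

```python
# Hypothetical loss for illustration: L has its minimum at (3, -1).
def loss(t1, t2):
    return (t1 - 3.0) ** 2 + (t2 + 1.0) ** 2

def grad(t1, t2):
    # Analytic partial derivatives (dL/dtheta1, dL/dtheta2)
    return 2.0 * (t1 - 3.0), 2.0 * (t2 + 1.0)

eta = 0.1            # learning rate
t1, t2 = 0.0, 0.0    # initial position theta^0

for _ in range(100):               # produces theta^1, theta^2, ..., theta^100
    g1, g2 = grad(t1, t2)
    t1, t2 = t1 - eta * g1, t2 - eta * g2   # theta^{i+1} = theta^i - eta * gradient

print(t1, t2)  # converges toward the minimizer (3, -1)
```

Each pass of the loop is exactly one application of the matrix update in steps 3 and 4: subtract the learning rate times the gradient, component by component.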