Recap
Picking up from the previous post, we introduced the general form of the logistic regression loss function, roughly as follows:
$$
L(\theta) = \frac{1}{m}\sum_{i=1}^{m}\mathrm{Cost}\big(h_\theta(x^{(i)}),\, y^{(i)}\big)
= \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big]
$$

$$
h_\theta(x) = g(\theta^T x), \qquad g(z) = \frac{1}{1+e^{-z}}
$$
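To make these pieces concrete, here is a minimal NumPy sketch of the sigmoid $g(z)$, the hypothesis $h_\theta(x)$, and this loss. The variable names (`theta`, `X`, `y`) and the small `eps` clamp that guards against $\log(0)$ are my own additions for illustration, not from the original post:

```python
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^{-z})"""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, X):
    """h_theta(x) = g(theta^T x), evaluated for all m rows of X at once."""
    return sigmoid(X @ theta)

def loss(theta, X, y, eps=1e-12):
    """Cross-entropy loss L(theta), averaged over the m training examples."""
    h = np.clip(hypothesis(theta, X), eps, 1 - eps)  # clamp to avoid log(0)
    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))
```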
Gradient Descent
Now that we have the loss function, our task is to use gradient descent to find the optimal parameters $\theta$, i.e. the ones that minimize the loss over all examples in the training set. So let's differentiate this seemingly complicated function.
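For reference, each step of gradient descent updates every component of $\theta$ simultaneously, with $\alpha$ denoting the learning rate:

$$
\theta_j := \theta_j - \alpha \frac{\partial L(\theta)}{\partial \theta_j}
$$

Everything therefore hinges on the partial derivatives $\frac{\partial L(\theta)}{\partial \theta_j}$, which we derive next.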
Taking the Partial Derivative of the Loss Function
We know the function has the following form:
$$
L(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big]
$$
Since $\frac{1}{m}$ is a constant, we can set it aside for now. Inside the brackets, $y^{(i)}$ and $(1-y^{(i)})$ are constants with respect to $\theta$, so they do not change the form of the derivative. Taking the partial derivative of the loss function with respect to $\theta_j$ gives:
$$
\frac{\partial L(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\frac{\partial \log\big(h_\theta(x^{(i)})\big)}{\partial \theta_j} - \big(1-y^{(i)}\big)\frac{\partial \log\big(1-h_\theta(x^{(i)})\big)}{\partial \theta_j}\Big]
$$
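Before pushing the algebra further, it can be useful to have a numerical check: whatever closed form we derive should match a finite-difference approximation of $\frac{\partial L(\theta)}{\partial \theta_j}$. Below is a small central-difference sketch, my own illustration reusing the `loss` function above; the step size `h` is an assumed choice, not from the original:

```python
def numerical_gradient(theta, X, y, h=1e-6):
    """Approximate dL/d(theta_j) by central differences, one component at a time."""
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        theta_plus, theta_minus = theta.copy(), theta.copy()
        theta_plus[j] += h
        theta_minus[j] -= h
        grad[j] = (loss(theta_plus, X, y) - loss(theta_minus, X, y)) / (2 * h)
    return grad

# Tiny usage example on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = rng.integers(0, 2, size=5).astype(float)
print(numerical_gradient(np.zeros(3), X, y))
```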