Loss Function Derivations
Linear Regression
A loss function measures the discrepancy between the model's predictions and the true data. So why squared loss, rather than absolute-value loss or fourth-power loss?
A simple intuition: squaring is simple, its derivative is linear and continuous, and points far from the prediction are amplified.
Derivation
Assume the model has been trained to its best. Even then, some error against the true values is inevitable: for example, a house seller's mood suddenly improves and they run a promotion. Such factors are unknowable individually, but all the errors added together can be treated as a Gaussian distribution. That is, every point's error satisfies $\epsilon \sim \mathcal N(0,\sigma^2)$, and the probability (density) of an error of $x$ is
$$P(\epsilon=x) = \frac{1}{\sqrt{2 \pi}\, \sigma }\,e^{-\frac{x^2}{2\sigma^2}}$$
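As a quick sanity check of the density formula (a minimal sketch; the grid bounds, step size, and $\sigma=1$ are arbitrary choices), it should integrate to approximately 1:

```python
import math

def gaussian_pdf(x, sigma=1.0):
    # Density of N(0, sigma^2): (1 / (sqrt(2*pi) * sigma)) * exp(-x^2 / (2*sigma^2))
    return math.exp(-x**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

# Riemann sum over [-8, 8]; should come out very close to 1 for sigma = 1
n_steps = 16000
step = 16 / n_steps
total = sum(gaussian_pdf(-8 + i * step) * step for i in range(n_steps))
print(total)  # close to 1.0
```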
So the probability that each $\pmb X^i$, because of this error, yields $\pmb y^i$ is
$$P(\pmb y^i \mid \pmb X^i) = \frac{1}{\sqrt{2 \pi}\, \sigma }\,e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}}$$
All of the above refers to an already-trained model. When $\pmb \theta$ is not yet determined, the probability that $\pmb X^i$, because of the error, yields $\pmb y^i$ is
$$P(\pmb y^i \mid \pmb X^i; \pmb \theta) = \frac{1}{\sqrt{2 \pi}\, \sigma }\,e^{-\frac{(\pmb X^i \pmb \theta - \pmb y^i)^2}{2\sigma^2}}$$
The probability that all the $\pmb y^i$ occur together is then $P(\pmb y \mid \pmb X; \pmb \theta)$; call it $\mathcal L(\pmb \theta)$:
$$\begin{array}{rcl} \mathcal L(\pmb \theta) &=& P(\pmb y \mid \pmb X; \pmb \theta)\\ &=& \prod_{i=1}^n P(\pmb y^i \mid \pmb X^i; \pmb \theta)\\ &=& \prod_{i=1}^n \frac{1}{\sqrt{2 \pi}\, \sigma }\,e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}} \end{array}$$
The expression above is the probability that $\pmb y$ occurs, so we only need the $\pmb \theta$ that maximizes it. Since it is a product of many exponential factors, applying $\log$ turns the product into a sum and removes the exponentials. Let $\ell(\pmb \theta) = \log (\mathcal L(\pmb \theta))$:
$$\begin{array}{rcl} \ell(\pmb \theta) &=& \log (\mathcal L(\pmb \theta))\\ &=& \log\left(\prod_{i=1}^n \frac{1}{\sqrt{2 \pi}\, \sigma }\,e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}}\right)\\ &=& \sum_{i=1}^n\left(\log \frac{1}{\sqrt{2 \pi}\, \sigma } + \log e^{-\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}}\right)\\ &=& n\log \frac{1}{\sqrt{2 \pi}\, \sigma } - \sum_{i=1}^n\frac{(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2}\\ &=& n\log \frac{1}{\sqrt{2 \pi}\, \sigma } - \frac{\sum_{i=1}^n(\pmb X^i\pmb \theta - \pmb y^i)^2}{2\sigma^2} \end{array}$$
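Besides simplifying the algebra, the $\log$ is numerically essential: a raw product of many densities underflows to zero in floating point, while the sum of log-densities stays well-scaled. A small illustration (the residual values $\pmb X^i\pmb\theta - \pmb y^i$ below are made up):

```python
import math

# Toy residuals (X^i theta - y^i), repeated to simulate 1000 data points
residuals = [0.5, -1.2, 0.3, 0.8, -0.4] * 200
sigma = 1.0

# Direct product of Gaussian densities underflows to exactly 0.0
likelihood = 1.0
for r in residuals:
    likelihood *= math.exp(-r**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

# Sum of log-densities stays a well-scaled finite number
log_likelihood = sum(
    math.log(1 / (math.sqrt(2 * math.pi) * sigma)) - r**2 / (2 * sigma**2)
    for r in residuals
)
print(likelihood)      # 0.0 (underflow)
print(log_likelihood)  # a finite negative number
```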
The first term is a constant, so what remains is the function $\sum_{i=1}^n(\pmb y^i-\pmb X^i\pmb\theta)^2$, i.e. $(\pmb X \pmb \theta - \pmb y)^T(\pmb X \pmb \theta - \pmb y)$, and minimizing it is exactly maximizing $\mathcal L(\pmb\theta)$. That is:
$$J(\pmb \theta) = (\pmb X \pmb \theta - \pmb y)^T(\pmb X \pmb \theta - \pmb y)$$
Taking its gradient:
$$\begin{array}{rcl} \frac{\partial J(\pmb \theta)}{\partial \pmb \theta} &=& \frac{\partial \left((\pmb X \pmb \theta - \pmb y)^T(\pmb X \pmb \theta - \pmb y)\right)}{\partial \pmb \theta}\\ &\overset{\pmb{x} = \pmb X\pmb\theta-\pmb y }{=}& \left(\frac{\partial\pmb x^T \pmb x}{\partial \pmb x}\right)^T \frac{\partial (\pmb X \pmb\theta - \pmb y)}{\partial \pmb \theta}\\ &=& (2\pmb x)^T \pmb X\\ &=& 2(\pmb X\pmb\theta-\pmb y)^T \pmb X \end{array}$$
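The derived gradient $2(\pmb X\pmb\theta-\pmb y)^T\pmb X$ can be checked against central finite differences; a minimal sketch with arbitrary toy shapes and random data:

```python
import numpy as np

rng = np.random.default_rng(0)      # toy data; shapes chosen arbitrarily
X = rng.normal(size=(20, 3))
y = rng.normal(size=(20, 1))
theta = rng.normal(size=(3, 1))

def J(theta):
    # J(theta) = (X theta - y)^T (X theta - y)
    r = X @ theta - y
    return float(r.T @ r)

grad = 2 * (X @ theta - y).T @ X    # derived gradient, shape (1, 3)

# Central finite differences, one coordinate at a time
eps = 1e-6
for j in range(3):
    e = np.zeros((3, 1)); e[j] = eps
    numeric = (J(theta + e) - J(theta - e)) / (2 * eps)
    assert abs(numeric - grad[0, j]) < 1e-5
print("gradient matches finite differences")
```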
Logistic Regression
$$h(\pmb X^i) = \frac{1}{1+e^{- \pmb X^i \pmb\theta}}$$
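A direct translation of $h$ into code overflows for large negative inputs, because $e^{-z}$ blows up; a common trick is to branch on the sign of $z$. A minimal sketch:

```python
import math

def sigmoid(z):
    # h(z) = 1 / (1 + e^{-z}); branch on the sign so exp() never overflows
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)          # safe: z < 0, so exp(z) < 1
    return ez / (1.0 + ez)

print(sigmoid(0))       # 0.5
print(sigmoid(-1000))   # ~0.0, with no OverflowError
```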
The derivation for logistic regression follows the same pattern as for linear regression:
$$\begin{array}{rcl} \ell(\pmb \theta) &=& \log (\mathcal L(\pmb \theta))\\ &=& \log (P(\pmb y \mid \pmb X; \pmb \theta))\\ &=& \log \left(\prod_{i=1}^n P(\pmb y^i \mid \pmb X^i; \pmb \theta)\right)\\ &=& \log \left(\prod_{i=1}^n h_{\pmb \theta} (\pmb X^i )^{\pmb y^i} (1 - h_{\pmb \theta}(\pmb X^i))^{1-\pmb y^i}\right)\\ &=& \sum_{i=1}^n\left(\pmb y^i\log(h_{\pmb \theta} (\pmb X^i ))+(1-\pmb y^i )\log(1 - h_{\pmb \theta}(\pmb X^i))\right) \end{array}$$
Let $J(\pmb \theta) = - \ell(\pmb \theta)$.
Taking its gradient (writing $h$ for $h_{\pmb\theta}(\pmb X^i)$ and using $\frac{\partial h}{\partial \pmb\theta} = h(1-h)\pmb X^i$):
$$\begin{array}{rcl} \frac{\partial J(\pmb \theta)}{\partial \pmb \theta} &=& \frac{\partial\left(-\sum_{i=1}^n(\pmb y^i\log(h_{\pmb \theta} (\pmb X^i ))+(1-\pmb y^i )\log(1 - h_{\pmb \theta}(\pmb X^i)))\right)}{\partial \pmb \theta}\\ &=& -\sum_{i=1}^n\frac{\partial\left(\pmb y^i\log(h)+(1-\pmb y^i )\log(1 - h)\right)}{\partial h} \frac{\partial h(\pmb\theta)}{\partial \pmb \theta}\\ &=& -\sum_{i=1}^n\left(\frac{\pmb y^i}{h} - \frac{1 - \pmb y^i}{1 - h}\right)h(1 - h)\pmb X^i\\ &=& -\sum_{i=1}^n\left(\pmb y^i(1-h) - h(1 - \pmb y^i)\right)\pmb X^i\\ &=& \sum_{i=1}^n\left(h_{\pmb \theta }(\pmb X^i) - \pmb y^i\right)\pmb X^i\\ &=& \left( \begin{bmatrix} \vdots \\ h_{\pmb \theta }(\pmb X^i)\\ \vdots \end{bmatrix} -\pmb y\right)^T \pmb X \end{array}$$
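The final expression $\sum_{i=1}^n(h_{\pmb\theta}(\pmb X^i)-\pmb y^i)\pmb X^i$ can likewise be verified against finite differences; a sketch with made-up binary labels and arbitrary shapes:

```python
import numpy as np

rng = np.random.default_rng(1)      # toy binary-classification data
X = rng.normal(size=(30, 3))
y = (rng.random((30, 1)) < 0.5).astype(float)
theta = rng.normal(size=(3, 1))

def h(theta):
    # Elementwise sigmoid of X theta
    return 1.0 / (1.0 + np.exp(-(X @ theta)))

def J(theta):
    # Negative log-likelihood (cross-entropy)
    p = h(theta)
    return float(-np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

grad = (h(theta) - y).T @ X         # derived gradient, shape (1, 3)

# Central finite differences, one coordinate at a time
eps = 1e-6
for j in range(3):
    e = np.zeros((3, 1)); e[j] = eps
    numeric = (J(theta + e) - J(theta - e)) / (2 * eps)
    assert abs(numeric - grad[0, j]) < 1e-5
print("gradient matches finite differences")
```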
Why sigmoid?
In linear regression we assumed the error $\epsilon$ followed a Gaussian distribution, and from that obtained the probability of each $\pmb y^i$. Why does logistic regression need no such assumption, with $h_{\pmb \theta}$ directly giving a probability? Is this a derived result, or is the function simply chosen because it has nice properties?
See the link.