Logistic Regression (LR) - The Linear Logistic Classifier
A linear logistic classifier (LLC) is a binary classifier of the following form:
$$h(\bold{x};\bold{w},b)= \begin{cases} +1 & \text{if } \ \sigma(u(\bold{x})) > \sigma_{threshold}\\ -1 & \text{otherwise} \end{cases}$$
- The threshold $\sigma_{threshold}$ is typically 0.5 (used as the default in the sketch below); since $\sigma(0)=0.5$ and $\sigma$ is increasing, this is equivalent to predicting $+1$ exactly when $u(\bold{x}) > 0$.
- It is called a *linear* classifier because its inner function $u(\bold{x}) = \bold{w}^\top\bold{x} + b$ is linear.
- It is called a *logistic* classifier because its outer function $\sigma(z) = \frac{1}{1+e^{-z}}$ is the logistic function.
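To make this concrete, here is a minimal NumPy sketch of the classifier; `sigmoid` and `predict` are illustrative names, and the example weights are made up, not taken from the notes above.

```python
import numpy as np

def sigmoid(z):
    """Logistic function: sigma(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b, sigma_threshold=0.5):
    """Linear logistic classifier h(x; w, b).

    Returns +1 if sigma(w^T x + b) > sigma_threshold, else -1.
    """
    u = w @ x + b  # inner linear function u(x)
    return 1 if sigmoid(u) > sigma_threshold else -1

# Example: a 2-D point classified with hand-picked parameters.
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])
print(predict(x, w, b))  # u = 1.5, sigmoid(1.5) > 0.5, so prints 1
```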
The Sigmoid/Logistic Function
$$\sigma(z) = \frac{1}{1+e^{-z}}$$
- Its values lie between 0 and 1, i.e. $0 \leq \sigma(z) \leq 1$, so in binary classification it can map a prediction to a probability between 0 and 1.
- It is smooth, i.e. continuously differentiable on its whole domain, so gradient-based optimization methods such as gradient descent can be applied. Its derivative takes the convenient form $\sigma'(z) = \sigma(z)(1-\sigma(z))$ (checked numerically in the sketch below).
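A quick numerical check of the derivative identity $\sigma'(z) = \sigma(z)(1-\sigma(z))$, comparing it against a central finite difference; this verification sketch is an addition, not part of the original notes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Compare sigma'(z) = sigma(z) * (1 - sigma(z)) to a finite difference.
z = np.linspace(-5.0, 5.0, 11)
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
analytic = sigmoid(z) * (1 - sigmoid(z))
print(np.max(np.abs(numeric - analytic)))  # ~1e-11: the two agree
```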
The LLC Loss Function - Negative Log Likelihood (NLL)
Define the predicted probability $g^{(i)} = \sigma(\bold{w}^\top\bold{x}^{(i)}+b)$, where $\sigma(z) = \frac{1}{1+e^{-z}}$; for this derivation the labels are encoded as $y^{(i)} \in \{0, 1\}$.
Then the probability of the dataset $\mathscr{D}_n$ under the model is
$$P = \prod_{i=1}^n \begin{cases} g^{(i)} & \text{if } \ y^{(i)}=1\\ 1-g^{(i)} & \text{otherwise} \end{cases}$$
Rewrite $P$ as
$$P = \prod_{i=1}^n \left(g^{(i)}\right)^{y^{(i)}} \left(1 - g^{(i)}\right)^{1-y^{(i)}},$$
which matches the case form because $y^{(i)} \in \{0,1\}$ makes exactly one of the two exponents 1 and the other 0. Taking the logarithm of both sides:
$$\log P = \sum_{i=1}^n \left(y^{(i)}\log g^{(i)} + (1-y^{(i)})\log (1- g^{(i)})\right)$$
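As a sanity check, the following sketch (with made-up probabilities `g` and labels `y`) confirms that the case form and the exponent form of $P$ agree, and that $\log P$ matches the sum above.

```python
import numpy as np

g = np.array([0.9, 0.2, 0.7])  # hypothetical predicted probabilities g^(i)
y = np.array([1, 0, 1])        # hypothetical labels y^(i)

# Case form: g^(i) when y^(i) = 1, else 1 - g^(i).
P_cases = np.prod(np.where(y == 1, g, 1 - g))

# Exponent form: g^y * (1 - g)^(1 - y).
P_exp = np.prod(g**y * (1 - g)**(1 - y))

log_P = np.sum(y * np.log(g) + (1 - y) * np.log(1 - g))

print(P_cases, P_exp, np.exp(log_P))  # all three print 0.504
```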
Maximizing $\log P$ is the same as minimizing its negation, so we minimize
$$L = \sum_{i=1}^n L_{nll}(g^{(i)}, y^{(i)}),$$
where
$$L_{nll}(g^{(i)}, y^{(i)}) = -\left(y^{(i)}\log g^{(i)} + (1-y^{(i)})\log (1- g^{(i)})\right)$$
The negative log likelihood (NLL) loss is written as:
$$L_{nll}(g, y) = -(y\log g + (1-y)\log (1- g))$$
It is also known as log loss or the cross-entropy loss.
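A minimal NumPy sketch of this loss; the `eps` clipping is an added safeguard against `log(0)`, not part of the formula itself.

```python
import numpy as np

def nll_loss(g, y, eps=1e-12):
    """Negative log likelihood (log loss / cross entropy).

    g: predicted probabilities in (0, 1); y: labels in {0, 1}.
    eps clips g away from 0 and 1 so log() stays finite.
    """
    g = np.clip(g, eps, 1 - eps)
    return -(y * np.log(g) + (1 - y) * np.log(1 - g))

print(nll_loss(0.9, 1))  # small loss, confident and correct (~0.105)
print(nll_loss(0.9, 0))  # large loss, confident and wrong  (~2.303)
```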
The Machine Learning Problem: LLC
- Dataset: $\mathscr{D}_n = \{(\bold{x}^{(1)}, y^{(1)}),\cdots,(\bold{x}^{(n)}, y^{(n)})\}$
- Hypothesis space: $\mathscr{H} = \{h(\bold{x};\bold{w},b) = \sigma(\bold{w}^\top\bold{x}+b)\}$
- Loss function: $L_{nll}(g^{(i)}, y^{(i)}) = -(y^{(i)}\log g^{(i)} + (1-y^{(i)})\log (1- g^{(i)}))$
Objective function (cost function):
$$J_{lr}(\bold{w},b;\mathscr{D}_n) = \frac{1}{n}\sum_{i=1}^n L_{nll}\left(\sigma(\bold{w}^\top\bold{x}^{(i)}+b),\, y^{(i)}\right) + \lambda\Vert\bold{w}\Vert^2$$
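A sketch of the objective as code, assuming `X` is an $n \times d$ matrix whose rows are the $\bold{x}^{(i)}$ and reusing the hypothetical `sigmoid` and `nll_loss` helpers from above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll_loss(g, y, eps=1e-12):
    g = np.clip(g, eps, 1 - eps)
    return -(y * np.log(g) + (1 - y) * np.log(1 - g))

def J_lr(w, b, X, y, lam):
    """Regularized cost: mean NLL over the dataset plus lambda * ||w||^2."""
    g = sigmoid(X @ w + b)  # predicted probabilities g^(i), all at once
    return np.mean(nll_loss(g, y)) + lam * np.dot(w, w)
```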
Minimizing the objective:
- Find the parameters $\bold{w}, b$ that minimize the objective $J_{lr}$.
- $J_{lr}$ generally has no closed-form solution, so it is minimized with gradient descent or stochastic gradient descent.
Gradient Descent for LR
LR-Gradient-Descent($\bold{w}_{init}, b_{init}, \eta, \epsilon$)
Initialize
$\bold{w}^{(0)}=\bold{w}_{init}$
$b^{(0)}=b_{init}$
$t=0$
Repeat
$t = t+1$
$\bold{w}^{(t)}=\bold{w}^{(t-1)} - \eta\left\{ \frac{1}{n}\sum_{i=1}^n\left[\sigma(\bold{w}^{(t-1)\top}\bold{x}^{(i)}+b^{(t-1)})-y^{(i)}\right]\bold{x}^{(i)} + 2\lambda\bold{w}^{(t-1)} \right\}$
$b^{(t)} = b^{(t-1)} - \eta\left\{ \frac{1}{n}\sum_{i=1}^n\left[\sigma(\bold{w}^{(t-1)\top}\bold{x}^{(i)}+b^{(t-1)})-y^{(i)}\right] \right\}$
Until $\left\vert J_{lr}(\bold{w}^{(t)}, b^{(t)}) - J_{lr}(\bold{w}^{(t-1)}, b^{(t-1)}) \right\vert < \epsilon$
Return $\bold{w}^{(t)}, b^{(t)}$
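A runnable NumPy sketch of this procedure; the synthetic data, learning rate $\eta$, and regularization strength $\lambda$ below are illustrative assumptions, not values from the notes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lr_gradient_descent(X, y, eta=0.1, lam=0.01, eps=1e-8, max_iters=100_000):
    """Batch gradient descent for regularized logistic regression.

    X: (n, d) data matrix; y: (n,) labels in {0, 1}. Implements the
    update rules above and stops when the objective changes by less
    than eps between iterations.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0

    def J(w, b):  # the objective J_lr from the previous section
        g = np.clip(sigmoid(X @ w + b), 1e-12, 1 - 1e-12)
        nll = -(y * np.log(g) + (1 - y) * np.log(1 - g))
        return np.mean(nll) + lam * np.dot(w, w)

    prev = J(w, b)
    for _ in range(max_iters):
        err = sigmoid(X @ w + b) - y                 # sigma(w.x^(i)+b) - y^(i)
        w = w - eta * (X.T @ err / n + 2 * lam * w)  # gradient step in w
        b = b - eta * np.mean(err)                   # gradient step in b
        cur = J(w, b)
        if abs(cur - prev) < eps:
            break
        prev = cur
    return w, b

# Illustrative synthetic data: class 1 is shifted toward larger values.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
X = rng.normal(size=(200, 2)) + 2.0 * y[:, None]
w, b = lr_gradient_descent(X, y)
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(w, b, accuracy)  # accuracy should be well above 0.9 on this easy data
```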