This article is the derivation notebook I kept when I was first learning the topic. It is extremely detailed, "nanny-level" detailed, so if you still can't follow it, come at me 🐶 🐶 🐶
Also, if the early parts feel like filler, feel free to jump straight to Chapter 3 🐶 🐶 🐶
A previous article, "What exactly is the difference between cross-entropy loss and mean squared error (MSE)?", already touched on logistic regression. Today I will take logistic regression apart piece by piece and go through it again.
This post has so many formulas that the editor froze on me several times while writing it, so asking for a like, favorite, and share is not too much, right 🐶 🐶 🐶
To motivate logistic regression, let's start from the simplest case: linear regression.
1. Intuitive Understanding
1.1 A Review of the Method of Least Squares
Suppose we are given $N$ points $\left\{\left(x_{k}, y_{k}\right)\right\}_{k=1}^{N}$, as shown in the figure below, and we fit them with a straight line, called the least-squares line.
The coefficients of the line
$$y=Ax+B$$
are the solution of the following linear system, known as the normal equations:
$$\left(\sum_{k=1}^{N} x_{k}^{2}\right) A+\left(\sum_{k=1}^{N} x_{k}\right) B=\sum_{k=1}^{N} x_{k} y_{k}$$
$$\left(\sum_{k=1}^{N} x_{k}\right) A+N B=\sum_{k=1}^{N} y_{k}$$
Proof:
For the line $y=Ax+B$, the vertical distance from the point $\left(x_{k}, y_{k}\right)$ to the point $\left(x_{k}, Ax_{k}+B\right)$ on the line is
$$d_{k}=\left|Ax_{k}+B-y_{k}\right|$$
As shown in the figure above, we want to minimize the sum of squared vertical distances
$$E(A,B)=\sum_{k=1}^{N}\left(Ax_{k}+B-y_{k}\right)^{2}=\sum_{k=1}^{N} d_{k}^{2}$$
The minimum of $E(A, B)$ is attained where the partial derivatives $\frac{\partial E}{\partial A}$ and $\frac{\partial E}{\partial B}$ are both zero, which lets us solve for the two parameters $A$ and $B$ of the fitted line. Note that here $\left\{x_{k}\right\}$ and $\left\{y_{k}\right\}$ are constants, while $A$ and $B$ are the variables. First, hold $B$ fixed and differentiate with respect to $A$:
$$\begin{aligned} \frac{\partial E(A, B)}{\partial A} &=\sum_{k=1}^{N} 2\left(A x_{k}+B-y_{k}\right)\left(x_{k}\right) \\ &=2 \sum_{k=1}^{N}\left(A x_{k}^{2}+B x_{k}-x_{k} y_{k}\right) \\ &=2 \sum_{k=1}^{N} A x_{k}^{2}+2 \sum_{k=1}^{N} B x_{k}-2 \sum_{k=1}^{N} x_{k} y_{k} \\ &=0 \end{aligned}$$
Then hold $A$ fixed and differentiate $E(A, B)$ with respect to $B$:
$$\begin{aligned} \frac{\partial E(A, B)}{\partial B} &=\sum_{k=1}^{N} 2\left(A x_{k}+B-y_{k}\right) \\ &=2 \sum_{k=1}^{N}\left(A x_{k}+B-y_{k}\right) \\ &=0 \end{aligned}$$
Setting both derivatives to zero and rearranging yields exactly the normal equations above. This completes the derivation of the method of least squares.
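As a quick sanity check of the derivation above, the normal equations can be solved numerically. The sketch below (my own illustration, with made-up data values) fits $y = Ax + B$ to a handful of points by solving the 2×2 normal-equation system with NumPy:

```python
import numpy as np

# Example data (made up for illustration): roughly y = 2x + 1 with noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
N = len(x)

# Normal equations from the derivation:
# (sum x^2) A + (sum x) B = sum x*y
# (sum x)   A +       N B = sum y
M = np.array([[np.sum(x**2), np.sum(x)],
              [np.sum(x),    N        ]])
rhs = np.array([np.sum(x * y), np.sum(y)])
A, B = np.linalg.solve(M, rhs)
print(A, B)  # A = 1.99, B = 1.04: slope near 2, intercept near 1
```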
2. Linear Regression
2.1 Basic Form
[Notation]: In Li Hang's *Statistical Learning Methods*, vectors such as $\boldsymbol{w}$ and $\boldsymbol{x}$ are not set in bold; for readability, this article always writes vectors in bold. $\boldsymbol{w} \cdot \boldsymbol{x}$ denotes the inner product; strictly speaking it should be written $\boldsymbol{w}^T \boldsymbol{x}$.
Also, in Li Hang's *Statistical Learning Methods*, $N$ denotes the number of samples and $n$ the feature dimension, while in Zhou Zhihua's *Machine Learning* and Andrew Ng's *Machine Learning* course, $m$ denotes the number of samples and $d$ the feature dimension. Keep these conventions apart if the symbols get confusing.
By analogy with the least-squares fitting of Chapter 1, consider a sample (instance) with $d$ feature dimensions:
$$\boldsymbol{x}=\left(x_{1} ; x_{2} ; \ldots ; x_{d}\right)$$
where $x_i$ is the value of the $i$-th attribute (feature). The bold $\boldsymbol{x}$ denotes a vector, and the semicolons indicate vertical stacking: vectors are column vectors by default.
A linear model tries to learn a prediction function that is a linear combination of the features, i.e.
$$f(\boldsymbol{x})=w_{1} x_{1}+w_{2} x_{2}+\cdots+w_{d} x_{d}+b$$
which is generally written in vector form as
$$\begin{aligned} f(\boldsymbol{x}) &=\left[\begin{array}{c}{w_{1}} \\ {w_{2}} \\ {\vdots} \\ {w_{d}}\end{array}\right]^{T}\left[\begin{array}{c}{x_{1}} \\ {x_{2}} \\ {\vdots} \\ {x_{d}}\end{array}\right]+b =\boldsymbol{w}^{T} \boldsymbol{x}+b \end{aligned}$$
The weight vector $\boldsymbol{w}$ and the bias $b$ are determined while the model is learned. The concrete weight values also make the model interpretable: the larger the weight $w_i$, the more important the corresponding feature $x_i$.
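Concretely, the vector form above is just a dot product plus a scalar. A tiny sketch (weights and feature values invented for the example):

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])  # hypothetical weights, d = 3
x = np.array([1.0, 2.0, 3.0])   # one sample with 3 feature values
b = 0.1                          # hypothetical bias

f = w @ x + b  # w^T x + b
print(f)  # 0.5*1 + (-1)*2 + 2*3 + 0.1 = 4.6
```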
2.2 Linear Regression (a regression model)
Given a dataset
$$\begin{aligned} D &=\left\{\left(\boldsymbol{x}_{1}, y_{1}\right),\left(\boldsymbol{x}_{2}, y_{2}\right), \ldots,\left(\boldsymbol{x}_{m}, y_{m}\right)\right\} \\ &=\left\{\left(\left[\begin{array}{c}{x_{11}} \\ {x_{12}} \\ {\vdots} \\ {x_{1 d}}\end{array}\right], y_{1}\right),\left(\left[\begin{array}{c}{x_{21}} \\ {x_{22}} \\ {\vdots} \\ {x_{2 d}}\end{array}\right], y_{2}\right), \ldots,\left(\left[\begin{array}{c}{x_{m 1}} \\ {x_{m 2}} \\ {\vdots} \\ {x_{m d}}\end{array}\right], y_{m}\right)\right\} \end{aligned}$$
with $m$ samples, each described by $d$ features, and $y_{i} \in \mathbb{R}$, linear regression learns a linear model that predicts the real-valued output label as accurately as possible.
2.2.1 Linear Regression with One Variable
Consider the simplest model first: a single feature, i.e. $d=1$. The bold $\boldsymbol{x}$ then has only one dimension, so for convenience we drop the bold face:
$$D=\left\{\left(\boldsymbol{x}_{1}, y_{1}\right),\left(\boldsymbol{x}_{2}, y_{2}\right), \ldots,\left(\boldsymbol{x}_{m}, y_{m}\right)\right\}=\left\{\left(x_{1}, y_{1}\right),\left(x_{2}, y_{2}\right), \ldots,\left(x_{m}, y_{m}\right)\right\}$$
where $x_{i} \in \mathbb{R}$.
For discrete attributes (features):
- If an "order" relation exists among the values, they can be converted to continuous values. For example, a binary height attribute tall/short can be mapped to $\{1.0, 0.0\}$, and a three-valued attribute high/medium/low to $\{1.0, 0.5, 0.0\}$.
- If no "order" relation exists among $k$ attribute values, they are usually converted to a $k$-dimensional one-hot vector. For example, the attribute "melon type" with values watermelon, pumpkin, and cucumber can be encoded as $(1,0,0)$, $(0,1,0)$, $(0,0,1)$.
Linear regression tries to learn
$$f\left(x_{i}\right)=\omega x_{i}+b$$
such that
$$f\left(x_{i}\right) \approx y_{i}$$
The key to choosing the best $\omega$ and $b$ is making the gap between $f(x_i)$ and $y_i$ as small as possible. For regression tasks, the mean squared error (square loss) is the most commonly used performance measure, so the optimal model is the one that minimizes it:
$$\begin{aligned} \left(\omega^{*}, b^{*}\right) &=\underset{(\omega, b)}{\arg \min } \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right)^{2} \\ &=\underset{(\omega, b)}{\arg \min } \sum_{i=1}^{m}\left(y_{i}-\omega x_{i}-b\right)^{2} \end{aligned}$$
where $\omega^*$ and $b^*$ denote the optimal values of $\omega$ and $b$.
The mean squared error has a nice geometric meaning: it corresponds to the Euclidean distance. The method that solves for the model by minimizing the mean squared error is called the method of least squares. In linear regression, least squares tries to find a line such that the sum of squared Euclidean distances from all samples to the line is minimal, as shown in Chapter 1.
The process of solving for $\omega^*$ and $b^*$ by minimizing
$$E_{(\omega, b)}=\sum_{i=1}^{m}\left(y_{i}-\omega x_{i}-b\right)^{2}$$
is called least-squares "parameter estimation" for the linear regression model. By analogy with the derivation in Chapter 1, we differentiate $E_{(\omega, b)}$ with respect to $\omega$ and $b$:
$$\frac{\partial E_{(w, b)}}{\partial w}=2\left(w \sum_{i=1}^{m} x_{i}^{2}-\sum_{i=1}^{m}\left(y_{i}-b\right) x_{i}\right)$$
$$\frac{\partial E_{(w, b)}}{\partial b}=2\left(m b-\sum_{i=1}^{m}\left(y_{i}-w x_{i}\right)\right)$$
Setting both expressions to zero gives closed-form solutions for the optimal $w$ and $b$:
$$w=\frac{\sum_{i=1}^{m} y_{i}\left(x_{i}-\bar{x}\right)}{\sum_{i=1}^{m} x_{i}^{2}-\frac{1}{m}\left(\sum_{i=1}^{m} x_{i}\right)^{2}}$$
$$b=\frac{1}{m} \sum_{i=1}^{m}\left(y_{i}-w x_{i}\right)$$
where $\bar{x}=\frac{1}{m} \sum_{i=1}^{m} x_{i}$ is the mean of $x$.
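These closed-form expressions translate directly into code. A minimal sketch (data values invented for the example):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])  # roughly y = 2x + 1
m = len(x)

x_bar = x.mean()
# w = sum y_i (x_i - x_bar) / (sum x_i^2 - (sum x_i)^2 / m)
w = np.sum(y * (x - x_bar)) / (np.sum(x**2) - np.sum(x)**2 / m)
# b = (1/m) sum (y_i - w x_i)
b = np.mean(y - w * x)
print(w, b)  # w = 1.94, b = 1.15
```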
Proof:
Here $E_{(w, b)}$ is a convex function of $w$ and $b$; for background on convexity, see the book *Convex Optimization*, which we will not repeat here.
Set $\frac{\partial E_{(w, b)}}{\partial w}$ to zero:
$$w\sum_{i=1}^{m}x_i^2-\sum_{i=1}^{m}(y_i-b)x_i=0$$
$$w\sum_{i=1}^{m}x_i^2 = \sum_{i=1}^{m}y_ix_i-\sum_{i=1}^{m}bx_i$$
Setting $\frac{\partial E_{(w, b)}}{\partial b}$ to zero gives $b=\cfrac{1}{m}\sum_{i=1}^{m}(y_i-wx_i)$.
Since $\cfrac{1}{m}\sum_{i=1}^{m}y_i=\bar{y}$ and $\cfrac{1}{m}\sum_{i=1}^{m}x_i=\bar{x}$, this is
$$b=\bar{y}-w\bar{x}$$
Substituting it into the equation above gives
$$\begin{aligned} w\sum_{i=1}^{m}x_i^2 & = \sum_{i=1}^{m}y_ix_i-\sum_{i=1}^{m}(\bar{y}-w\bar{x})x_i \\ w\sum_{i=1}^{m}x_i^2 & = \sum_{i=1}^{m}y_ix_i-\bar{y}\sum_{i=1}^{m}x_i+w\bar{x}\sum_{i=1}^{m}x_i \\ w\left(\sum_{i=1}^{m}x_i^2-\bar{x}\sum_{i=1}^{m}x_i\right) & = \sum_{i=1}^{m}y_ix_i-\bar{y}\sum_{i=1}^{m}x_i \\ w & = \cfrac{\sum_{i=1}^{m}y_ix_i-\bar{y}\sum_{i=1}^{m}x_i}{\sum_{i=1}^{m}x_i^2-\bar{x}\sum_{i=1}^{m}x_i} \end{aligned}$$
Furthermore,
$$\bar{y}\sum_{i=1}^{m}x_i=\cfrac{1}{m}\sum_{i=1}^{m}y_i\sum_{i=1}^{m}x_i=\bar{x}\sum_{i=1}^{m}y_i$$
$$\bar{x}\sum_{i=1}^{m}x_i=\cfrac{1}{m}\sum_{i=1}^{m}x_i\sum_{i=1}^{m}x_i=\cfrac{1}{m}\left(\sum_{i=1}^{m}x_i\right)^2$$
Substituting these into the expression above gives
$$w=\cfrac{\sum_{i=1}^{m}y_i(x_i-\bar{x})}{\sum_{i=1}^{m}x_i^2-\cfrac{1}{m}\left(\sum_{i=1}^{m}x_i\right)^2}$$
[Note]
The expression can be simplified further into a form with a clean vector representation. Substituting
$$\cfrac{1}{m}\left(\sum_{i=1}^{m}x_i\right)^2=\bar{x}\sum_{i=1}^{m}x_i$$
into the denominator gives
$$\begin{aligned} w & = \cfrac{\sum_{i=1}^{m}y_i(x_i-\bar{x})}{\sum_{i=1}^{m}x_i^2-\bar{x}\sum_{i=1}^{m}x_i} \\ & = \cfrac{\sum_{i=1}^{m}(y_ix_i-y_i\bar{x})}{\sum_{i=1}^{m}(x_i^2-x_i\bar{x})} \end{aligned}$$
Moreover, since
$$\bar{y}\sum_{i=1}^{m}x_i=\bar{x}\sum_{i=1}^{m}y_i=\sum_{i=1}^{m}\bar{y}x_i=\sum_{i=1}^{m}\bar{x}y_i=m\bar{x}\bar{y}=\sum_{i=1}^{m}\bar{x}\bar{y}$$
$$\sum_{i=1}^{m}x_i\bar{x}=\bar{x}\sum_{i=1}^{m}x_i=\bar{x}\cdot m \cdot\frac{1}{m}\cdot\sum_{i=1}^{m}x_i=m\bar{x}^2=\sum_{i=1}^{m}\bar{x}^2$$
the expression becomes
$$\begin{aligned} w & = \cfrac{\sum_{i=1}^{m}(y_ix_i-y_i\bar{x}-x_i\bar{y}+\bar{x}\bar{y})}{\sum_{i=1}^{m}(x_i^2-x_i\bar{x}-x_i\bar{x}+\bar{x}^2)} \\ & = \cfrac{\sum_{i=1}^{m}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{m}(x_i-\bar{x})^2} \end{aligned}$$
Now let
$$\boldsymbol{x}=(x_1,x_2,...,x_m)^T$$
$$\boldsymbol{x}_{d}=(x_1-\bar{x},x_2-\bar{x},...,x_m-\bar{x})^T$$
be the de-meaned version of $\boldsymbol{x}$, and
$$\boldsymbol{y}=(y_1,y_2,...,y_m)^T$$
$$\boldsymbol{y}_{d}=(y_1-\bar{y},y_2-\bar{y},...,y_m-\bar{y})^T$$
the de-meaned version of $\boldsymbol{y}$, where $\boldsymbol{x}$, $\boldsymbol{x}_d$, $\boldsymbol{y}$, $\boldsymbol{y}_d$ are all $m\times 1$ column vectors. Substituting gives
$$w=\cfrac{\boldsymbol{x}_{d}^T\boldsymbol{y}_{d}}{\boldsymbol{x}_d^T\boldsymbol{x}_{d}}$$
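The three equivalent expressions for $w$ derived above can be checked against each other numerically. A small sketch (data values invented):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])
m = len(x)
x_bar, y_bar = x.mean(), y.mean()

# Form 1: closed-form solution straight from the normal equations
w1 = np.sum(y * (x - x_bar)) / (np.sum(x**2) - np.sum(x)**2 / m)

# Form 2: covariance / variance form
w2 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar)**2)

# Form 3: de-meaned vector form  w = x_d^T y_d / (x_d^T x_d)
xd, yd = x - x_bar, y - y_bar
w3 = (xd @ yd) / (xd @ xd)

print(w1, w2, w3)  # all three agree
```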
2.2.2 Linear Regression with Multiple Variables
Now consider the more general case: the dataset $D$ from the beginning of this section, in which each sample is described by $d$ attributes (features), with
$$\boldsymbol{w}=\left[\begin{array}{c}{w_{1}} \\ {w_{2}} \\ {\vdots} \\ {w_d} \end{array}\right]$$
We now try to learn
$$\begin{aligned} f\left(\boldsymbol{x}_{i}\right)&=\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}_{i}+b \\ &=\left[\begin{array}{llll}{w_{1}} & {w_{2}} & {\dots} & {w_{d}}\end{array}\right]\left[\begin{array}{c}{x_{1}} \\ {x_{2}} \\ {\vdots} \\ {x_{d}}\end{array}\right]+b \\ &=w_{1} x_{1}+w_{2} x_{2}+\cdots+w_{d} x_{d}+b \end{aligned}$$
such that $f\left(\boldsymbol{x}_{i}\right) \simeq y_{i}$. This is called "multivariate linear regression".
Similarly, $\boldsymbol{w}$ and $b$ can be estimated by least squares. For convenience, stack $\boldsymbol{w}$ and $b$ into a single vector:
$$\widehat{\boldsymbol{w}}=\left[\begin{array}{c}{\boldsymbol{w}} \\ {b}\end{array}\right] =\left[\begin{array}{c}{w_{1}} \\ {w_{2}} \\ {\vdots} \\ {w_d} \\ {b}\end{array}\right]$$
Correspondingly, represent the dataset $D$ as an $m \times(d+1)$ matrix $\mathbf{X}$, where $m$ is the number of samples and $d$ the number of features. Each row corresponds to one sample (instance): its first $d$ entries are the sample's $d$ feature values, and the last entry is fixed at 1:
$$\mathbf{X}=\left(\begin{array}{ccccc}{x_{11}} & {x_{12}} & {\dots} & {x_{1 d}} & {1} \\ {x_{21}} & {x_{22}} & {\dots} & {x_{2 d}} & {1} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} & {\vdots} \\ {x_{m 1}} & {x_{m 2}} & {\dots} & {x_{m d}} & {1}\end{array}\right)=\left(\begin{array}{cc}{\boldsymbol{x}_{1}^{\mathrm{T}}} & {1} \\ {\boldsymbol{x}_{2}^{\mathrm{T}}} & {1} \\ {\vdots} & {\vdots} \\ {\boldsymbol{x}_{m}^{\mathrm{T}}} & {1}\end{array}\right)$$
Write the labels in vector form as well:
$$\boldsymbol{y}=\left[\begin{array}{c}{y_{1}} \\ {y_{2}} \\ {\vdots} \\ {y_{m}}\end{array}\right]$$
Analogously to single-variable linear regression, we have
$$\hat{\boldsymbol{w}}^{*}=\underset{\hat{\boldsymbol{w}}}{\arg \min }(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})^{\mathrm{T}}(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})$$
where $\hat{\boldsymbol{w}}^{*}$ denotes the solution for $\hat{\boldsymbol{w}}$. Let $E_{\hat{\boldsymbol{w}}}=(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})^{\mathrm{T}}(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})$. Since
$$\begin{aligned} (\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}}) &=\left[\begin{array}{c}{y_{1}} \\ {y_{2}} \\ {\vdots} \\ {y_{m}}\end{array}\right]-\left[\begin{array}{ccccc}{x_{11}} & {x_{12}} & {\dots} & {x_{1 d}} & {1} \\ {x_{21}} & {x_{22}} & {\dots} & {x_{2 d}} & {1} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} & {\vdots} \\ {x_{m 1}} & {x_{m 2}} & {\cdots} & {x_{m d}} & {1}\end{array}\right]\left[\begin{array}{c}{w_{1}} \\ {w_{2}} \\ {\vdots} \\ {w_{d}} \\ {b}\end{array}\right] \\ &=\left[\begin{array}{c}{y_{1}} \\ {y_{2}} \\ {\vdots} \\ {y_{m}}\end{array}\right]-\left[\begin{array}{c}{w_{1} x_{11}+w_{2} x_{12}+\cdots+w_{d} x_{1 d}+b} \\ {w_{1} x_{21}+w_{2} x_{22}+\cdots+w_{d} x_{2 d}+b} \\ {\vdots} \\ {w_{1} x_{m 1}+w_{2} x_{m 2}+\cdots+w_{d} x_{m d}+b}\end{array}\right] \\ &=\left[\begin{array}{c}{y_{1}-f\left(\boldsymbol{x}_{1}\right)} \\ {y_{2}-f\left(\boldsymbol{x}_{2}\right)} \\ {\vdots} \\ {y_{m}-f\left(\boldsymbol{x}_{m}\right)}\end{array}\right] \end{aligned}$$
it follows that
$$\begin{aligned} (\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})^{\mathrm{T}}(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}}) &=\left[y_{1}-f\left(\boldsymbol{x}_{1}\right) \quad y_{2}-f\left(\boldsymbol{x}_{2}\right) \quad \ldots \quad y_{m}-f\left(\boldsymbol{x}_{m}\right)\right]\left[\begin{array}{c}{y_{1}-f\left(\boldsymbol{x}_{1}\right)} \\ {y_{2}-f\left(\boldsymbol{x}_{2}\right)} \\ {\vdots} \\ {y_{m}-f\left(\boldsymbol{x}_{m}\right)}\end{array}\right] \\ &=\left(y_{1}-f\left(\boldsymbol{x}_{1}\right)\right)^{2}+\left(y_{2}-f\left(\boldsymbol{x}_{2}\right)\right)^{2}+\cdots+\left(y_{m}-f\left(\boldsymbol{x}_{m}\right)\right)^{2} \end{aligned}$$
So $E_{\hat{\boldsymbol{w}}}=(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})^{\mathrm{T}}(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})$ expresses exactly the same least-squares idea. Differentiating $E_{\hat{\boldsymbol{w}}}$ with respect to $\hat{\boldsymbol{w}}$ gives
$$\frac{\partial E_{\hat{\boldsymbol{w}}}}{\partial \hat{\boldsymbol{w}}}=2 \mathbf{X}^{\mathrm{T}}(\mathbf{X} \hat{\boldsymbol{w}}-\boldsymbol{y})$$
Proof:
$$\begin{aligned} E_{\hat{\boldsymbol{w}}} &=(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}})^{T}(\boldsymbol{y}-\mathbf{X} \hat{\boldsymbol{w}}) \\ &=\boldsymbol{y}^{T} \boldsymbol{y}-\boldsymbol{y}^{T} \mathbf{X} \hat{\boldsymbol{w}}-\hat{\boldsymbol{w}}^{T} \mathbf{X}^{T} \boldsymbol{y}+\hat{\boldsymbol{w}}^{T} \mathbf{X}^{T} \mathbf{X} \hat{\boldsymbol{w}} \end{aligned}$$
Differentiating with respect to $\hat{\boldsymbol{w}}$:
$$\begin{aligned} \frac{\partial E_{\hat{\boldsymbol{w}}}}{\partial \hat{\boldsymbol{w}}} &=\frac{\partial \boldsymbol{y}^{T} \boldsymbol{y}}{\partial \hat{\boldsymbol{w}}}-\frac{\partial \boldsymbol{y}^{T} \mathbf{X} \hat{\boldsymbol{w}}}{\partial \hat{\boldsymbol{w}}}-\frac{\partial \hat{\boldsymbol{w}}^{T} \mathbf{X}^{T} \boldsymbol{y}}{\partial \hat{\boldsymbol{w}}}+\frac{\partial \hat{\boldsymbol{w}}^{T} \mathbf{X}^{T} \mathbf{X} \hat{\boldsymbol{w}}}{\partial \hat{\boldsymbol{w}}} \\ &=0-\mathbf{X}^{T} \boldsymbol{y}-\mathbf{X}^{T} \boldsymbol{y}+\left(\mathbf{X}^{T} \mathbf{X}+\mathbf{X}^{T} \mathbf{X}\right) \hat{\boldsymbol{w}} \\ &=2 \mathbf{X}^{T}(\mathbf{X} \hat{\boldsymbol{w}}-\boldsymbol{y}) \end{aligned}$$
This completes the proof.
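Setting this gradient to zero leads (when $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ is invertible) to the standard closed form $\hat{\boldsymbol{w}} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\boldsymbol{y}$, a well-known result not spelled out above. A small sketch (data invented) that builds the augmented matrix $\mathbf{X}$ with the constant-1 column and solves the least-squares problem:

```python
import numpy as np

# m = 4 samples, d = 2 features (values invented for illustration)
X_raw = np.array([[1.0, 2.0],
                  [2.0, 1.0],
                  [3.0, 4.0],
                  [4.0, 3.0]])
# Labels constructed from the relation f(x) = 1*x1 + 2*x2 + 3
y = X_raw @ np.array([1.0, 2.0]) + 3.0

m = X_raw.shape[0]
X = np.hstack([X_raw, np.ones((m, 1))])  # append the constant-1 column

# Solve the least-squares problem min ||y - X w_hat||^2
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # recovers [1, 2, 3] = (w1, w2, b)
```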
2.2.3 Supplementary Proofs on Vector Derivatives
Partial derivatives of real-valued functions with respect to real vectors.
For a detailed treatment of vector derivatives, see Zhang Xianda, *Matrix Analysis and Applications*.
Notation: vectors are column vectors by default.
1. [Derivative of a scalar with respect to a column vector]
The gradient operator with respect to the $n\times 1$ vector $\boldsymbol{x}=\left[\begin{array}{c}{x_{1}} \\ {x_{2}} \\ {\vdots} \\ {x_{n}}\end{array}\right]$ is written $\nabla_{x}$ and defined as
$$\nabla_{x} \stackrel{\mathrm{def}}{=}\left[\begin{array}{c}{\frac{\partial}{\partial x_{1}}} \\ {\frac{\partial}{\partial x_{2}}} \\ {\vdots} \\ {\frac{\partial}{\partial x_{n}}}\end{array}\right]=\frac{\partial}{\partial \boldsymbol{x}}$$
Accordingly, the gradient of a real scalar function $f(\boldsymbol{x})$ of an $n\times 1$ real vector $\boldsymbol{x}$ with respect to $\boldsymbol{x}$ is the $n\times 1$ column vector
$$\nabla_{\boldsymbol{x}} f(\boldsymbol{x}) \stackrel{\mathrm{def}}{=}\left[\begin{array}{c}{\frac{\partial f(\boldsymbol{x})}{\partial x_{1}}} \\ {\frac{\partial f(\boldsymbol{x})}{\partial x_{2}}} \\ {\vdots} \\ {\frac{\partial f(\boldsymbol{x})}{\partial x_{n}}}\end{array}\right]=\frac{\partial f(\boldsymbol{x})}{\partial \boldsymbol{x}}$$
The negative gradient direction is called the gradient flow of the variable $\boldsymbol{x}$, written
$$\dot{\boldsymbol{x}}=-\nabla_{\boldsymbol{x}} f(\boldsymbol{x})$$
From this definition of the gradient we see that:
- the gradient of a scalar function of a vector variable is itself a vector;
- each component of the gradient gives the rate of change of the scalar function along that component.
2. [Derivative of a scalar with respect to a row vector]
The gradient of a real-valued function $f(\boldsymbol{x})$ with respect to the $1\times n$ row vector $\boldsymbol{x}^{\mathrm{T}}=\left[x_{1}, x_{2}, \cdots, x_{n}\right]$ is the $1\times n$ row vector
$$\frac{\partial f(\boldsymbol{x})}{\partial \boldsymbol{x}^{\mathrm{T}}} \stackrel{\mathrm{def}}{=}\left[\frac{\partial f(\boldsymbol{x})}{\partial x_{1}}, \frac{\partial f(\boldsymbol{x})}{\partial x_{2}}, \cdots, \frac{\partial f(\boldsymbol{x})}{\partial x_{n}}\right]=\nabla_{\boldsymbol{x}^{\mathrm{T}}} f(\boldsymbol{x})$$
3. [Derivative of a row vector with respect to a column vector]
The gradient of the $m$-dimensional row vector $\boldsymbol{f}(\boldsymbol{x})=\left[f_{1}(\boldsymbol{x}), f_{2}(\boldsymbol{x}), \cdots, f_{m}(\boldsymbol{x})\right]$ with respect to the $n$-dimensional real column vector $\boldsymbol{x}$ is the $n\times m$ matrix
$$\frac{\partial \boldsymbol{f}(\boldsymbol{x})}{\partial \boldsymbol{x}} \stackrel{\mathrm{def}}{=}\left[\begin{array}{cccc}{\frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{1}}} & {\frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{1}}} & {\dots} & {\frac{\partial f_{m}(\boldsymbol{x})}{\partial x_{1}}} \\ {\frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{2}}} & {\frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{2}}} & {\cdots} & {\frac{\partial f_{m}(\boldsymbol{x})}{\partial x_{2}}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {\frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{n}}} & {\frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{n}}} & {\cdots} & {\frac{\partial f_{m}(\boldsymbol{x})}{\partial x_{n}}}\end{array}\right]=\nabla_{\boldsymbol{x}} \boldsymbol{f}(\boldsymbol{x})$$
If
$$\boldsymbol{f}(\boldsymbol{x}) =\left[y_{1}, y_{2}, \cdots, y_{n}\right] =\left[x_{1}, x_{2}, \cdots, x_{n}\right] =\boldsymbol{x}^{\mathrm{T}}$$
then
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}}}{\partial \boldsymbol{x}}=\boldsymbol{I}$$
where $\boldsymbol{I}$ is the identity matrix. This is a very useful result.
4. [Derivative of a column vector with respect to a row vector]
Let the $m\times 1$ vector function be
$$f(\boldsymbol{x})=\boldsymbol{y}=\left[\begin{array}{c}{y_{1}} \\ {y_{2}} \\ {\vdots} \\ {y_{m}}\end{array}\right]$$
where $y_1, y_2, ..., y_m$ are scalar functions of the vector $\boldsymbol{x}$. The first-order gradient
$$\frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}^{\mathrm{T}}}=\left[\begin{array}{cccc}{\frac{\partial y_{1}}{\partial x_{1}}} & {\frac{\partial y_{1}}{\partial x_{2}}} & {\cdots} & {\frac{\partial y_{1}}{\partial x_{n}}} \\ {\frac{\partial y_{2}}{\partial x_{1}}} & {\frac{\partial y_{2}}{\partial x_{2}}} & {\cdots} & {\frac{\partial y_{2}}{\partial x_{n}}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {\frac{\partial y_{m}}{\partial x_{1}}} & {\frac{\partial y_{m}}{\partial x_{2}}} & {\cdots} & {\frac{\partial y_{m}}{\partial x_{n}}}\end{array}\right]$$
is an $m\times n$ matrix, called the Jacobian matrix of the vector function $f(\boldsymbol{x})=\boldsymbol{y}$ (a column vector differentiated with respect to a row vector).
[Several important corollaries]
1. If $\boldsymbol{A}$ and $\boldsymbol{y}$ do not depend on the vector $\boldsymbol{x}$, then
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial \boldsymbol{x}}=\frac{\partial \boldsymbol{x}^{\mathrm{T}}}{\partial \boldsymbol{x}} \boldsymbol{A} \boldsymbol{y}=\boldsymbol{A} \boldsymbol{y}$$
[Note] The partial symbol $\partial$ must stay attached to the $\boldsymbol{x}^{\mathrm{T}}$ immediately following it; the two cannot be separated.
2. Since
$$\boldsymbol{y}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}=\left\langle\boldsymbol{A}^{\mathrm{T}} \boldsymbol{y}, \boldsymbol{x}\right\rangle=\left\langle\boldsymbol{x}, \boldsymbol{A}^{\mathrm{T}} \boldsymbol{y}\right\rangle=\boldsymbol{x}^{\mathrm{T}} \boldsymbol{A}^{\mathrm{T}} \boldsymbol{y}$$
we have
$$\frac{\partial \boldsymbol{y}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}}=\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A}^{\mathrm{T}} \boldsymbol{y}}{\partial \boldsymbol{x}}=\boldsymbol{A}^{\mathrm{T}} \boldsymbol{y}$$
3. Since
$$\boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}=\sum_{i=1}^{n} \sum_{j=1}^{n} A_{i j} x_{i} x_{j}$$
the $k$-th component of the gradient $\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}}$ is
$$\left[\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}}\right]_{k}=\frac{\partial}{\partial x_{k}} \sum_{i=1}^{n} \sum_{j=1}^{n} A_{i j} x_{i} x_{j}=\sum_{i=1}^{n} A_{i k} x_{i}+\sum_{j=1}^{n} A_{k j} x_{j}$$
That is,
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}}=\boldsymbol{A} \boldsymbol{x}+\boldsymbol{A}^{\mathrm{T}} \boldsymbol{x}$$
In particular, if $\boldsymbol{A}$ is a symmetric matrix, then
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}}=2 \boldsymbol{A x}$$
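These identities can be spot-checked numerically by comparing the analytic gradient with finite differences. A minimal sketch (matrix and vector values invented):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # deliberately non-symmetric
x = np.array([0.5, -1.0])

def f(x):
    return x @ A @ x  # the quadratic form x^T A x

# Analytic gradient from corollary 3: (A + A^T) x
grad_analytic = (A + A.T) @ x

# Central finite differences along each coordinate direction
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
print(grad_analytic, grad_fd)  # the two agree to high precision
```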
Below, the three conclusions above are verified in detail with concrete examples. To keep things simple, the vectors $\boldsymbol{x}$, $\boldsymbol{y}$ and the matrix $\boldsymbol{A}$ are reduced to their simplest forms.
[Worked examples]
Proof of corollary 1)
with
$$\boldsymbol{x}=\left[\begin{array}{l}{x_{1}} \\ {x_{2}} \\ {x_{3}}\end{array}\right], \quad \boldsymbol{y}=\left[\begin{array}{l}{y_{1}} \\ {y_{2}} \\ {y_{3}}\end{array}\right]$$
$$\begin{aligned} &\boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}=\left[\begin{array}{lll} x_{1} & x_{2} & x_{3} \end{array}\right]\left[\begin{array}{lll} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right]\left[\begin{array}{l} y_{1} \\ y_{2} \\ y_{3} \end{array}\right] \\ &=\left[\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right),\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right),\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)\right]\left[\begin{array}{l} y_{1} \\ y_{2} \\ y_{3} \end{array}\right] \\ &=\left(y_{1}\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right)+y_{2}\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right)+y_{3}\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)\right) \end{aligned}$$
Taking the partial derivative with respect to each component gives
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial x_{1}}=\left(y_{1} a_{11}+y_{2} a_{12}+y_{3} a_{13}\right)=\left[\begin{array}{lll}{y_{1}} & {y_{2}} & {y_{3}}\end{array}\right]\left[\begin{array}{c}{a_{11}} \\ {a_{12}} \\ {a_{13}}\end{array}\right]$$
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial x_{2}}=\left(y_{1} a_{21}+y_{2} a_{22}+y_{3} a_{23}\right)=\left[\begin{array}{lll}{y_{1}} & {y_{2}} & {y_{3}}\end{array}\right]\left[\begin{array}{c}{a_{21}} \\ {a_{22}} \\ {a_{23}}\end{array}\right]$$
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial x_{3}}=\left(y_{1} a_{31}+y_{2} a_{32}+y_{3} a_{33}\right)=\left[\begin{array}{lll}{y_{1}} & {y_{2}} & {y_{3}}\end{array}\right]\left[\begin{array}{l}{a_{31}} \\ {a_{32}} \\ {a_{33}}\end{array}\right]$$
$$\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial \boldsymbol{x}}=\left[\begin{array}{c}{\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial x_{1}}} \\ {\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial x_{2}}} \\ {\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{y}}{\partial x_{3}}}\end{array}\right]=\left[\begin{array}{c}{y_{1} a_{11}+y_{2} a_{12}+y_{3} a_{13}} \\ {y_{1} a_{21}+y_{2} a_{22}+y_{3} a_{23}} \\ {y_{1} a_{31}+y_{2} a_{32}+y_{3} a_{33}}\end{array}\right]=\boldsymbol{A} \boldsymbol{y}$$
Proof of corollary 3)
with
$$\boldsymbol{x}=\left[\begin{array}{l}{x_{1}} \\ {x_{2}} \\ {x_{3}}\end{array}\right]$$
\begin{aligned} &\boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}=\left[\begin{array}{lll} x_{1} & x_{2} & x_{3} \end{array}\right]\left[\begin{array}{lll} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array}\right]\left[\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end{array}\right] \\ &=\left[\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right),\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right),\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)\right]\left[\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end{array}\right] \\ &=\left(x_{1}\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right)+x_{2}\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right)+x_{3}\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)\right) \end{aligned}
Taking the partial derivative with respect to each component gives
\begin{aligned} \frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}_{1}} &=\left(2 x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right)+\left(x_{2} a_{12}+x_{3} a_{13}\right) \\ &=\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right)+\left(x_{1} a_{11}+x_{2} a_{12}+x_{3} a_{13}\right) \\ &=\left[\begin{array}{lll}{x_{1}} & {x_{2}} & {x_{3}}\end{array}\right]\left[\begin{array}{c}{a_{11}} \\ {a_{21}} \\ {a_{31}}\end{array}\right] +\left[\begin{array}{lll}{a_{11}} & {a_{12}} & {a_{13}}\end{array}\right] \left[\begin{array}{l}{x_{1}} \\ {x_{2}} \\ {x_{3}}\end{array}\right] \end{aligned}
\begin{aligned} \frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}_{2}} &=\left(x_{1} a_{12}+2 x_{2} a_{22}+x_{3} a_{32}\right)+\left(x_{1} a_{21}+x_{3} a_{23}\right) \\ &=\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right)+\left(x_{1} a_{21}+x_{2} a_{22}+x_{3} a_{23}\right) \\ &=\left[\begin{array}{lll}{x_{1}} & {x_{2}} & {x_{3}}\end{array}\right]\left[\begin{array}{c}{a_{12}} \\ {a_{22}} \\ {a_{32}}\end{array}\right]+\left[\begin{array}{lll}{a_{21}} & {a_{22}} & {a_{23}}\end{array}\right]\left[\begin{array}{l}{x_{1}} \\ {x_{2}} \\ {x_{3}}\end{array}\right] \end{aligned}
\begin{aligned} \frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}_{3}} &=\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)+\left(x_{1} a_{31}+x_{2} a_{32}+x_{3} a_{33}\right) \\ &=\left[\begin{array}{lll}{x_{1}} & {x_{2}} & {x_{3}}\end{array}\right]\left[\begin{array}{c}{a_{13}} \\ {a_{23}} \\ {a_{33}}\end{array}\right]+\left[\begin{array}{lll}{a_{31}} & {a_{32}} & {a_{33}}\end{array}\right]\left[\begin{array}{l}{x_{1}} \\ {x_{2}} \\ {x_{3}}\end{array}\right] \end{aligned}
\begin{aligned} \frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}} &=\left[\begin{array}{c}{\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}_{1}}} \\ {\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}_{2}}} \\ {\frac{\partial \boldsymbol{x}^{\mathrm{T}} \boldsymbol{A} \boldsymbol{x}}{\partial \boldsymbol{x}_{3}}}\end{array}\right] \\ &=\left[\begin{array}{l}{\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right)+\left(x_{1} a_{11}+x_{2} a_{12}+x_{3} a_{13}\right)} \\ {\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right)+\left(x_{1} a_{21}+x_{2} a_{22}+x_{3} a_{23}\right)} \\ {\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)+\left(x_{1} a_{31}+x_{2} a_{32}+x_{3} a_{33}\right)}\end{array}\right] \\ &=\left[\begin{array}{l}{\left(x_{1} a_{11}+x_{2} a_{12}+x_{3} a_{13}\right)} \\ {\left(x_{1} a_{21}+x_{2} a_{22}+x_{3} a_{23}\right)} \\ {\left(x_{1} a_{31}+x_{2} a_{32}+x_{3} a_{33}\right)}\end{array}\right]+\left[\begin{array}{c}{\left(x_{1} a_{11}+x_{2} a_{21}+x_{3} a_{31}\right)} \\ {\left(x_{1} a_{12}+x_{2} a_{22}+x_{3} a_{32}\right)} \\ {\left(x_{1} a_{13}+x_{2} a_{23}+x_{3} a_{33}\right)}\end{array}\right] \\ &=\boldsymbol{A} \boldsymbol{x}+\boldsymbol{A}^{\mathrm{T}} \boldsymbol{x} \end{aligned}
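This can also be checked numerically: the gradient of $\boldsymbol{x}^{\mathrm{T}}\boldsymbol{A}\boldsymbol{x}$ should match $\boldsymbol{A}\boldsymbol{x}+\boldsymbol{A}^{\mathrm{T}}\boldsymbol{x}=(\boldsymbol{A}+\boldsymbol{A}^{\mathrm{T}})\boldsymbol{x}$. A small sketch with arbitrary example values:

```python
import numpy as np

# Numerical check of d(x^T A x)/dx = (A + A^T) x.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
x = rng.normal(size=3)

def f(v):
    return v @ A @ v  # the quadratic form x^T A x

eps = 1e-6
grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                 for e in np.eye(3)])

print(np.allclose(grad, (A + A.T) @ x, atol=1e-4))  # the identity holds
```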
Returning now to the equation
\frac{\partial E_{\hat{\boldsymbol{w}}}}{\partial \hat{\boldsymbol{w}}}=2 \mathbf{X}^{\mathrm{T}}(\mathbf{X} \hat{\boldsymbol{w}}-\boldsymbol{y})
Setting the expression above to zero yields the closed-form solution for the optimal $\hat{\boldsymbol{w}}$. We distinguish two cases below.
[Note] On closed-form solutions
A closed-form solution, also called an analytical solution, is an exact formula: given any value of the independent variable, it returns the corresponding dependent variable, i.e., the solution to the problem, and others can reuse the formula for their own problems. An analytical solution is a closed-form expression built from elementary functions such as fractions, trigonometric, exponential, and logarithmic functions, or even infinite series. The methods used to obtain analytical solutions are called analytic techniques, i.e., the familiar tools of calculus, such as separation of variables.
Because an analytical solution is a closed-form function, we can substitute any value of the independent variable into it and obtain the corresponding dependent variable exactly.
For example, the solution of a quadratic equation is given by $\frac{-b \pm \sqrt{b^{2}-4 a c}}{2 a}$.
1. When $\mathbf{X}^{\mathrm{T}} \mathbf{X}$ is a full-rank matrix or a positive definite matrix, setting the expression above to zero gives
\boldsymbol{\hat { w }}^{*}=\left(\mathbf{X}^{\mathrm{T}} \mathbf{X}\right)^{-1} \mathbf{X}^{\mathrm{T}} \boldsymbol{y}
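This closed form can be sketched in a few lines of NumPy. The synthetic data below is a hypothetical example; `np.linalg.solve` is used instead of forming the inverse explicitly, which is numerically preferable:

```python
import numpy as np

# Closed-form least squares: w_hat = (X^T X)^{-1} X^T y.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, size=50)
y = 3.0 * x + 2.0 + 0.01 * rng.normal(size=50)  # true slope 3, intercept 2

X = np.column_stack([x, np.ones_like(x)])  # augmented design matrix [x, 1]
w_hat = np.linalg.solve(X.T @ X, X.T @ y)  # solves the normal equations

print(w_hat)  # close to [3.0, 2.0]
```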
where $\left(\mathbf{X}^{\mathrm{T}} \mathbf{X}\right)^{-1}$ is the inverse of the matrix $\mathbf{X}^{\mathrm{T}} \mathbf{X}$. Let
\hat{\boldsymbol{x}}_{i} =\left[\begin{array}{c}{x_{i 1}} \\ {x_{i 2}} \\ {\vdots} \\ {x_{i d}} \\ {1}\end{array}\right]
Then the learned multivariate regression model is
\begin{aligned} f\left(\hat{\boldsymbol{x}}_{i}\right) &=\left[\begin{array}{llllll}{x_{i 1}} & {x_{i 2}} & {\dots} & {x_{i d}} & {1}\end{array}\right]\left[\begin{array}{c}{w_{1}^{*}} \\ {w_{2}^{*}} \\ {\vdots} \\ {w_{d}^{*}} \\ {b}\end{array}\right] \\ &=\left[\begin{array}{lllll}{x_{i 1}} & {x_{i 2}} & {\dots} & {x_{i d}} & {1}\end{array}\right]\boldsymbol{\hat { w }}^{*} \\ &=\hat{\boldsymbol{x}}_{i}^{\mathrm{T}}\hat{\boldsymbol{w}}^{*} \\ &=\hat{\boldsymbol{x}}_{i}^{\mathrm{T}}\left(\mathbf{X}^{\mathrm{T}} \mathbf{X}\right)^{-1} \mathbf{X}^{\mathrm{T}} \boldsymbol{y} \end{aligned}
2. In the real world, however, $\mathbf{X}^{\mathrm{T}} \mathbf{X}$ is often not full rank. When the number of features exceeds the number of samples, $\mathbf{X}$ has more columns than rows, so $\mathbf{X}^{\mathrm{T}} \mathbf{X}$ is clearly rank-deficient. There are then multiple solutions $\hat{\boldsymbol{w}}$, all of which minimize the mean squared error. (Analogy: a linear system with more unknowns than equations has multiple solutions.)
Choosing which solution to output therefore requires an additional constraint; a common approach is regularization.
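One common regularized choice (not spelled out in the text, so treat this as an illustrative aside) is ridge regression, which adds $\lambda I$ inside the inverse: $\hat{\boldsymbol{w}}=(\mathbf{X}^{\mathrm{T}}\mathbf{X}+\lambda I)^{-1}\mathbf{X}^{\mathrm{T}}\boldsymbol{y}$. A minimal sketch with a deliberately rank-deficient $\mathbf{X}$; the shapes and $\lambda$ are arbitrary:

```python
import numpy as np

# With 5 samples and 10 features, X^T X is singular, so the plain normal
# equations have no unique solution. Ridge makes the system invertible.
rng = np.random.default_rng(3)
X = rng.normal(size=(5, 10))
y = rng.normal(size=5)

lam = 0.1
d = X.shape[1]
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print(w_ridge.shape)  # (10,): a single, well-defined solution
```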
We can transform the scale of the linear regression model $y=\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b$, for example onto an exponential scale:
\ln y=\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b
This is "log-linear regression": it is actually trying to make $e^{\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b}$ approximate $y$. Formally the expression above is still linear regression, but it is in fact learning a nonlinear mapping from the input space to the output space.
More generally, consider a monotone differentiable function $g(\cdot)$ that is continuous and sufficiently smooth, and let
y=g^{-1}\left(\boldsymbol{w}^{T} \boldsymbol{x}+b\right)
The resulting model is called a "generalized linear model", and the function $g(\cdot)$ is called the "link function". Clearly, log-linear regression is the special case of the generalized linear model with $g(\cdot)=\ln (\cdot)$.
The parameters of a generalized linear model are usually estimated by weighted least squares or maximum likelihood.
2.2.4 The relationship between linear regression and logistic regression
1. In linear regression, the fitted line $f(x_i)=wx_{i}+b$ approximates the true value $y$, and the quality of the fit is measured by the squared loss between the two. The optimal parameters $w$ and $b$ are then found by minimizing this loss function.
2. If the data do not lie along a line, then, as in ordinary curve fitting, transforming the coordinates (taking reciprocals, logarithms, and so on) lets us fit nonlinear curves, because the fit of a straight line is easier to measure.
As mentioned above, we can rescale the vertical axis, for example onto an exponential scale:
\ln y=\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b
This is "log-linear regression": it actually tries to make $e^{\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b}$ (in essence just $e^x$, with $w$ and $b$ merely scaling and shifting $x$) approximate $y$. Formally this is still linear regression, but it is in fact learning a nonlinear mapping from input space to output space, as shown in the figure below:
In the figure, the original data $(x_i,y_i)$ roughly follow an exponential curve, shown as the black curve (the upper one). After taking the logarithm of $y_i$, i.e., $y_i'=\ln y_i$, we obtain the red line (the lower one).
3. Linear regression is a "regression" problem, whereas the logistic regression discussed later is a "classification" problem. To turn the regression output $z=wx+b$ into a binary classification, we need to map $z \in(-\infty,+\infty)$ to $y \in(0,1)$. This calls for something shaped like a step function, yet differentiable for ease of computation; hence the Sigmoid function, also written $\sigma$:
y=\frac{1}{1+e^{-z}}
Its curve is shown in the figure.
4. The linear regression figure above can be read as using $e^{\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b}$ to approximate $y$, i.e.,
y=e^{\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b} \Rightarrow\ln y=\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b
whereas the logistic regression figure can be read as using $\frac{1}{1+e^{-\left(\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b\right)}}$ to approximate $y$, i.e.,
y=\frac{1}{1+e^{-\left(\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b\right)}} \Rightarrow\ln \frac{y}{1-y}=\boldsymbol{w}^{\mathrm{T}} \boldsymbol{x}+b
2.3 Gradient descent for linear regression
[Note] This section covers gradient descent for univariate linear regression only; the multivariate case is analogous.
Linear regression uses the "quadratic loss function" (squared loss):
L(Y, f(X))=(Y-f(X))^{2}
which is the $E_{(\omega, b)}$ mentioned above:
E_{(\omega, b)}=\sum_{i=1}^{m}\left(y_{i}-\omega x_{i}-b\right)^{2}
In practice, however, a factor of 2 is usually included in the denominator, and the loss is written as an average:
\begin{aligned} Loss_{(\omega, b)}=E_{(\omega, b)}&=\frac{1}{2 m} \sum_{i=1}^{m}\left(f(x_i)-y_{i}\right)^{2} \\ &=\frac{1}{2 m} \sum_{i=1}^{m}\left(y_{i}-\omega x_{i}-b\right)^{2} \end{aligned}
1. The extra factor of $1/2$ is purely for computational convenience.
2. This squared loss $Loss_{(\omega, b)}$ is convex, so its analytical solution can be obtained directly by setting the derivatives to zero as above, which yields the unique extremum; gradient descent can of course also be used to find it. In the logistic regression discussed later, however, using this same squared loss yields a non-convex function whose analytical solution cannot be obtained directly; it requires other methods, detailed below.
We now compute the partial derivative of the loss with respect to each parameter.
The fitted curve is
f\left(x_{i}\right)=w x_{i}+b
The loss function is
L(w, b)=\frac{1}{2 m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right)^{2}
Taking the partial derivative with respect to $w$:
\begin{aligned} \frac{\partial L(w, b)}{\partial w} &=\frac{\partial}{\partial w} \frac{1}{2 m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right)^{2} \\ &=\frac{\partial}{\partial w} \frac{1}{2 m} \sum_{i=1}^{m}\left(w x_{i}+b-y_{i}\right)^{2} \\ &=\frac{1}{m} \sum_{i=1}^{m}\left(w x_{i}+b-y_{i}\right) x_{i} \\ &=\frac{1}{m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right) x_{i} \end{aligned}
Taking the partial derivative with respect to $b$:
\begin{aligned} \frac{\partial L(w, b)}{\partial b} &=\frac{\partial}{\partial b} \frac{1}{2 m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right)^{2} \\ &=\frac{\partial}{\partial b} \frac{1}{2 m} \sum_{i=1}^{m}\left(w x_{i}+b-y_{i}\right)^{2} \\ &=\frac{1}{m} \sum_{i=1}^{m}\left(w x_{i}+b-y_{i}\right) \\ &=\frac{1}{m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right) \end{aligned}
The gradient descent updates are therefore:
\begin{aligned} \textbf{Repeat} \Bigg\{ w : &=w-\alpha \frac{\partial L(w, b)}{\partial w} \\ &=w-\alpha \frac{1}{m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right) x_{i} \\ ~\\ b : &=b-\alpha \frac{\partial L(w, b)}{\partial b} \\ &=b-\alpha \frac{1}{m} \sum_{i=1}^{m}\left(f\left(x_{i}\right)-y_{i}\right) \\ \Bigg\} \end{aligned}
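The update loop above translates directly into Python. The learning rate, iteration count, and synthetic data below are illustrative choices, not values from the text:

```python
import numpy as np

# Gradient descent for univariate linear regression, following the
# updates above: w -= a * mean((f - y) * x), b -= a * mean(f - y).
rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, size=100)
y = 3.0 * x + 2.0 + 0.01 * rng.normal(size=100)  # true w = 3, b = 2

w, b, alpha = 0.0, 0.0, 0.5
for _ in range(2000):
    f = w * x + b               # current predictions
    grad_w = np.mean((f - y) * x)
    grad_b = np.mean(f - y)
    w -= alpha * grad_w         # simultaneous update of w and b
    b -= alpha * grad_b

print(round(w, 2), round(b, 2))  # approximately 3.0 2.0
```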
3. Logistic Regression and the Maximum Entropy Model
Logistic regression is a classic classification method in statistical learning. Maximum entropy is a criterion for learning probabilistic models; applying it to classification gives the maximum entropy model. Both the logistic regression model and the maximum entropy model are log-linear models.
3.1 The logistic regression model (a classification model)
First, a note on terminology: Zhou Zhihua's "Machine Learning" (the watermelon book) renders logistic regression as "log-odds regression" (对数几率回归), while Li Hang's "Statistical Learning Methods" uses the transliteration 逻辑斯蒂回归; the plain 逻辑回归 is also common.
Although it is called "regression", it is in fact a classification method.
3.1.1 The logistic distribution
Definition (logistic distribution). Let $X$ be a continuous random variable. $X$ follows the logistic distribution if it has the following distribution function and density function:
F(x)=P(X \leqslant x)=\frac{1}{1+e^{-(x-\mu) / \gamma}}
f(x)=F^{\prime}(x)=\frac{e^{-(x-\mu) / \gamma}}{\gamma\left(1+e^{-(x-\mu) / \gamma}\right)^{2}}
where $\mu$ is a location parameter and $\gamma$ is a shape parameter.
[Note] When $\mu=0$ and $\gamma=1$,
F(x)=P(X \leqslant x)=\frac{1}{1+e^{-x}}
which is the familiar Sigmoid curve.
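The distribution function is a one-liner in code; a small sketch (the probe values are arbitrary examples):

```python
import numpy as np

# Logistic distribution CDF: F(x) = 1 / (1 + exp(-(x - mu) / gamma)).
# With mu = 0 and gamma = 1 this is exactly the sigmoid function.
def logistic_cdf(x, mu=0.0, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-(np.asarray(x, dtype=float) - mu) / gamma))

print(logistic_cdf(0.0))                       # 0.5: centered at (mu, 1/2)
print(logistic_cdf([-2.0, 0.0, 2.0], gamma=0.5))  # smaller gamma -> steeper curve
```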
The density function $f(x)$ and distribution function $F(x)$ of the logistic distribution are shown in the figure below. The distribution function is a logistic function, i.e., a Sigmoid curve. The curve is symmetric about the point $\left(\mu, \frac{1}{2}\right)$, that is,
F(-x+\mu)-\frac{1}{2}=-F(x-\mu)+\frac{1}{2}
The curve grows quickly near the center and slowly at the two tails. The smaller the shape parameter $\gamma$, the faster the curve grows near the center.
3.1.2 The binomial logistic regression model
The binomial logistic regression model is a classification model given by the conditional probability $P(Y | X)$, in the form of a parameterized logistic distribution. Here the random variable $X$ takes real values and the random variable $Y$ takes the value 1 or 0. The model parameters are estimated by supervised learning.
Notation: in Li Hang's "Statistical Learning Methods", vectors such as $\boldsymbol{w}$ and $\boldsymbol{x}$ are not set in bold; for readability, this article sets all vectors in bold. $\boldsymbol{w} \cdot \boldsymbol{x}$ denotes the inner product; strictly speaking it would be written $\boldsymbol{w}^T \cdot \boldsymbol{x}$.
Also note: in Li Hang's book, $N$ is the number of samples and $n$ the feature dimension, whereas Zhou Zhihua's "Machine Learning" and Andrew Ng's "Machine Learning" use $m$ for the number of samples and $d$ for the feature dimension. Keep the symbols straight if they seem inconsistent.
Definition (logistic regression model). The binomial logistic regression model is the following conditional probability distribution:
P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{-(\boldsymbol{w} \cdot \boldsymbol{x}+b)}} = \frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}+b}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}+b}}
P(Y=0 | \boldsymbol{x})=1-P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}+b}}
Here $\boldsymbol{x} \in \mathbf{R}^{n}$ is the input, $Y \in\{0,1\}$ is the output, $\boldsymbol{w} \in \mathbf{R}^{n}$ and $b \in \mathbf{R}$ are parameters: $\boldsymbol{w}$ is called the weight vector, $b$ the bias, and $\boldsymbol{w} \cdot \boldsymbol{x}$ is the inner product of $\boldsymbol{w}$ and $\boldsymbol{x}$.
For a given instance $\boldsymbol{x}$:
if $P(Y=1 | \boldsymbol{x})>P(Y=0 | \boldsymbol{x})$, it is assigned to class $Y=1$;
if $P(Y=1 | \boldsymbol{x})< P(Y=0 | \boldsymbol{x})$, it is assigned to class $Y=0$.
For convenience, we extend the weight vector and the input vector, still writing them as $\boldsymbol{w},\boldsymbol{x}$: $\boldsymbol{w}=\left[\begin{array}{c}{w^{(1)}} \\ {w^{(2)}} \\ {\vdots} \\ {w^{(n)}} \\ {b}\end{array}\right]$, $\boldsymbol{x}=\left[\begin{array}{c}{x^{(1)}} \\ {x^{(2)}} \\ {\vdots} \\ {x^{(n)}} \\ {1}\end{array}\right]$, so that $\boldsymbol{w} \cdot \boldsymbol{x}=w^{(1)}x^{(1)}+w^{(2)}x^{(2)}+...+w^{(n)}x^{(n)}+b$.
The logistic regression model then becomes:
P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{-(\boldsymbol{w} \cdot \boldsymbol{x})}} = \frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}}}
P(Y=0 | \boldsymbol{x})=1-P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}}}
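The augmented-vector model above translates directly into code. A minimal sketch; the weights and input below are hypothetical examples:

```python
import numpy as np

# Binary logistic regression with the bias folded into w as the last
# component, and a constant 1 appended to x.
def predict_proba(w, x):
    """P(Y=1 | x) = 1 / (1 + exp(-(w . x)))."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

w = np.array([1.0, -2.0, 0.5])  # [w1, w2, b]
x = np.array([0.3, 0.1, 1.0])   # [x1, x2, 1]

p1 = predict_proba(w, x)        # w . x = 0.3 - 0.2 + 0.5 = 0.6
print(p1, 1.0 - p1)             # P(Y=1|x), P(Y=0|x)
print(int(p1 > 0.5))            # predicted class: 1
```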
The odds of an event are the ratio of the probability that the event occurs to the probability that it does not: if the event occurs with probability $p$, its odds are $\frac{p}{1-p}$, and its log odds, or logit, is:
\operatorname{logit}(p)=\log \frac{p}{1-p}
For logistic regression, the expressions for $P(Y=1 | \boldsymbol{x})$ and $P(Y=0 | \boldsymbol{x})$ above give
\begin{aligned} \log \frac{P(Y=1 | x)}{1-P(Y=1 | x)} &=\log \frac{\frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}}}}{\frac{1}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}}}} \\ &=\log \frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}}}{1} \\ &=\log e^{\boldsymbol{w} \cdot \boldsymbol{x}} \\ &=\boldsymbol{w} \cdot \boldsymbol{x} \end{aligned}
That is, in the logistic regression model, the log odds of the output $Y=1$ is a linear function of the input $\boldsymbol{x}$; put differently, a model in which the log odds of $Y=1$ is expressed by a linear function of $\boldsymbol{x}$ is exactly the logistic regression model.
From another angle, consider the linear function $\boldsymbol{w} \cdot \boldsymbol{x}$ used to classify the input $\boldsymbol{x}$; its range is $\boldsymbol{w} \cdot \boldsymbol{x} \in \mathbf{R}$, where $\boldsymbol{x} \in \mathbf{R}^{n+1}, \boldsymbol{w} \in \mathbf{R}^{n+1}$. The defining equation of logistic regression converts the linear function $\boldsymbol{w} \cdot \boldsymbol{x}$ into a probability:
P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{-(\boldsymbol{w} \cdot \boldsymbol{x})}} = \frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}}}
At this point:
as the linear function $z=\boldsymbol{w} \cdot \boldsymbol{x} \rightarrow+\infty$, $P(Y=1 | \boldsymbol{x})\rightarrow 1$;
as $z=\boldsymbol{w} \cdot \boldsymbol{x} \rightarrow-\infty$, $P(Y=1 | \boldsymbol{x})\rightarrow 0$.
[Supplementary note]
As shown in the previous chapter, linear regression uses the squared loss:
\begin{aligned} Loss_{(\omega, b)}=E_{(\omega, b)}&=\frac{1}{2 m} \sum_{i=1}^{m}\left(f(x_i)-y_{i}\right)^{2} \\ &=\frac{1}{2 m} \sum_{i=1}^{m}\left(y_{i}-\omega x_{i}-b\right)^{2} \end{aligned}
Because this loss $Loss_{(\omega, b)}$ is convex, its analytical solution follows directly from setting the derivative to zero, which is straightforward.
For logistic regression, the Sigmoid function is
\begin{aligned} f(x)=\sigma(z) &=\frac{1}{1+e^{-z}} \\ &=\sigma(\boldsymbol{w} \cdot \boldsymbol{x}) \\ &=\frac{1}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}}} \end{aligned}
as shown in the figure below.
If logistic regression also used the squared loss, we would have:
\begin{aligned} Loss_{(\boldsymbol{w})}=E_{(\boldsymbol{w})}&=\frac{1}{2 m} \sum_{i=1}^{m}\left(f(x_i)-y_{i}\right)^{2} \\ &=\frac{1}{2 m} \sum_{i=1}^{m}\left(y_{i}- \sigma(z)\right)^{2} \\ &=\frac{1}{2 m} \sum_{i=1}^{m}\left(y_{i}- \sigma(\boldsymbol{w} \cdot \boldsymbol{x})\right)^{2} \\ &=\frac{1}{2 m} \sum_{i=1}^{m}\left(y_{i}-\frac{1}{1+e^{-(\boldsymbol{w} \cdot \boldsymbol{x})}}\right)^{2} \end{aligned}
The problem is that this expression is non-convex: its analytical solution cannot be obtained directly, it is hard to optimize, and it easily gets stuck in local minima; even gradient descent struggles to find the global minimum, as illustrated below.
The following sections therefore optimize a different objective, which is convex and admits the optimal solution.
3.1.3 Parameter estimation
To learn a logistic regression model, given the training set $T=\left\{\left(\boldsymbol{x}_{1}, y_{1}\right),\left(\boldsymbol{x}_{2}, y_{2}\right), \cdots,\left(\boldsymbol{x}_{N}, y_{N}\right) \right\}$, where $\boldsymbol{x}_{i}=\left[\begin{array}{c}{x_{i}^{(1)}} \\ {x_{i}^{(2)}} \\ {\vdots} \\ {x_{i}^{(n)}}\end{array}\right] \in \mathbf{R}^{n}, \quad y_{i} \in\{0,1\}$, we can estimate the model parameters by maximum likelihood, which yields the logistic regression model.
Let:
P(Y=1 | \boldsymbol{x})=\pi(\boldsymbol{x}), \quad P(Y=0 | \boldsymbol{x})=1-\pi(\boldsymbol{x})
The likelihood function is
\prod_{i=1}^{N}\left[\pi\left(\boldsymbol{x}_{i}\right)\right]^{y_{i}}\left[1-\pi\left(\boldsymbol{x}_{i}\right)\right]^{1-y_{i}}
The log-likelihood function is
\begin{aligned} L(w) &=\log\left[ \prod_{i=1}^{N}\left[\pi\left(\boldsymbol{x}_{i}\right)\right]^{y_{i}}\left[1-\pi\left(\boldsymbol{x}_{i}\right)\right]^{1-y_{i}}\right] \\ &=\sum_{i=1}^{N}\left[y_{i} \log \pi\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \\ &=\sum_{i=1}^{N}\left[y_{i} \log \frac{\pi\left(\boldsymbol{x}_{i}\right)}{1-\pi\left(\boldsymbol{x}_{i}\right)}+\log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \\ &=\sum_{i=1}^{N}\left[y_{i}\left(\boldsymbol{w} \cdot \boldsymbol{x}_{i}\right)-\log \left(1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right) \right]\end{aligned}
We then maximize $L(w)$ to obtain the estimate of $\boldsymbol{w}$.
The problem thus becomes an optimization problem with the log-likelihood as the objective; logistic regression typically uses gradient descent or quasi-Newton methods.
If the maximum likelihood estimate of $\boldsymbol{w}$ is $\hat{\boldsymbol{w}}$, the learned logistic regression model is
P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{-(\hat{\boldsymbol{w}} \cdot \boldsymbol{x})}} = \frac{e^{\hat{\boldsymbol{w}} \cdot \boldsymbol{x}}}{1+e^{\hat{\boldsymbol{w}} \cdot \boldsymbol{x}}}
P(Y=0 | \boldsymbol{x})=1-P(Y=1 | \boldsymbol{x})= \frac{1}{1+e^{\hat{\boldsymbol{w}} \cdot \boldsymbol{x}}}
[Supplementary note]
In $L(w)$ above, we need the maximum of the likelihood, i.e., the maximum of
\begin{aligned} L(w) &=\log\left[ \prod_{i=1}^{N}\left[\pi\left(\boldsymbol{x}_{i}\right)\right]^{y_{i}}\left[1-\pi\left(\boldsymbol{x}_{i}\right)\right]^{1-y_{i}}\right] \\ &=\sum_{i=1}^{N}\left[y_{i} \log \pi\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \end{aligned}
that is, we seek
\begin{aligned} \hat{\boldsymbol{w}} &=\underset{\boldsymbol{w}}{\arg \max } \sum_{i=1}^{N}\left[y_{i} \log \pi\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \end{aligned}
where $\hat{\boldsymbol{w}}$ is the maximum likelihood estimate of $\boldsymbol{w}$, i.e., the result of optimizing the parameters. Negating the likelihood expression gives
\begin{aligned} \hat{\boldsymbol{w}} &=\underset{\boldsymbol{w}}{\arg \min } \sum_{i=1}^{N}\left[-y_{i} \log \pi\left(\boldsymbol{x}_{i}\right)-\left(1-y_{i}\right) \log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \end{aligned}
The $\hat{\boldsymbol{w}}$ that minimizes the right-hand side is the optimized parameter.
Averaging the loss over all samples yields the logistic regression loss function:
\begin{aligned} L(\boldsymbol{w}) &=\frac{1}{N}\sum_{i=1}^{N}\left[-y_{i} \log \pi\left(\boldsymbol{x}_{i}\right)-\left(1-y_{i}\right) \log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \\ &=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i} \log \pi\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right)\right] \end{aligned}
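This averaged loss is easy to compute directly. A small NumPy sketch; the tiny dataset and weights below are made-up examples:

```python
import numpy as np

# Cross-entropy loss: L(w) = -(1/N) * sum[y*log(pi) + (1-y)*log(1-pi)],
# with pi(x) = sigmoid(w . x) and the bias folded into w.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

X = np.array([[0.5, 1.0], [-1.0, 1.0], [2.0, 1.0]])  # bias column of 1s
y = np.array([1.0, 0.0, 1.0])
w = np.array([1.0, 0.0])

print(cross_entropy_loss(w, X, y))  # average loss over the 3 samples
```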
This is the familiar loss function of logistic regression; as shown, it is derived from maximum likelihood.
Moreover, this loss $L(\boldsymbol{w})$ is convex, with no local optima, which makes it easy to optimize.
The per-sample loss is
L(\boldsymbol{w})=\left\{\begin{aligned}-\log \left(\pi\left(\boldsymbol{x}_{i}\right)\right) & \text { if } y=1 \\-\log \left(1-\pi\left(\boldsymbol{x}_{i}\right)\right) & \text { if } y=0 \end{aligned}\right.
where
\pi\left(\boldsymbol{x}_{i}\right)=P(Y=1 | \boldsymbol{x}) = \frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}}} = \frac{1}{1+e^{-(\boldsymbol{w} \cdot \boldsymbol{x})}}
Intuitively: when the class label is $y=1$, the closer $\pi(\boldsymbol{x}_i)$ is to $1$, the smaller the loss; when the class label is $y=0$, the closer $\pi(\boldsymbol{x}_i)$ is to $1$, the larger the loss.
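This intuition can be probed numerically; a short sketch (the probe probabilities are arbitrary):

```python
import numpy as np

# Per-sample loss: -log(pi) when y = 1, -log(1 - pi) when y = 0.
def sample_loss(pi, y):
    return -np.log(pi) if y == 1 else -np.log(1.0 - pi)

for pi in (0.1, 0.5, 0.9):
    print(pi, sample_loss(pi, 1), sample_loss(pi, 0))
# As pi -> 1: the y=1 loss shrinks toward 0, while the y=0 loss blows up.
```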
3.1.4 The multinomial logistic regression model
The binomial logistic regression above handles binary classification; the multinomial logistic regression model below handles multi-class classification.
Suppose the discrete random variable $Y$ takes values in $\{1,2, \cdots, K\}$, i.e., there are $K$ class labels. Then, analogously, the multinomial logistic regression model is
\begin{aligned} P(Y=k | \boldsymbol{x})&=\frac{e^{\boldsymbol{w}_{k} \cdot \boldsymbol{x}}}{1+\sum_{k=1}^{K-1} e^{\boldsymbol{w}_{k} \cdot \boldsymbol{x}}}, \quad k=1,2, \cdots, K-1 \\ P(Y=K | \boldsymbol{x})&=\frac{1}{1+\sum_{k=1}^{K-1} e^{\boldsymbol{w}_{k} \cdot \boldsymbol{x}}} \end{aligned}
Here $\boldsymbol{x} \in \mathbf{R}^{n+1}, \boldsymbol{w}_{k} \in \mathbf{R}^{n+1}$.
The parameter estimation method for binomial logistic regression also extends to multinomial logistic regression.
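The $K$-class probabilities defined above can be sketched as follows; here $K=3$ and the weight vectors are hypothetical examples:

```python
import numpy as np

# Multinomial logistic regression probabilities:
# P(Y=k|x) = exp(w_k . x) / (1 + sum_j exp(w_j . x)) for k = 1..K-1,
# P(Y=K|x) = 1 / (1 + sum_j exp(w_j . x)).
def multinomial_probs(W, x):
    scores = np.exp(W @ x)          # one unnormalized score per class 1..K-1
    denom = 1.0 + scores.sum()      # class K contributes the constant 1
    return np.append(scores / denom, 1.0 / denom)  # classes 1..K-1, then K

W = np.array([[0.2, -0.1],          # w_1 (K-1 = 2 weight vectors, so K = 3)
              [0.5,  0.3]])         # w_2
x = np.array([1.0, 1.0])            # augmented input

p = multinomial_probs(W, x)
print(p, p.sum())  # a valid distribution over the 3 classes
```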
3.2 Gradient descent for logistic regression
[Note] This section applies gradient descent to binomial logistic regression only; the multinomial case is analogous.
As noted above, directly copying the squared loss from linear regression makes the objective non-convex because of the nonlinearity of $\sigma$, so maximum likelihood estimation was used to derive an alternative loss: this maximum-likelihood loss is the familiar cross-entropy loss.
Having obtained the loss function of binomial logistic regression above, we now optimize it by gradient descent.
Here $\boldsymbol{w}=\left[\begin{array}{c}{w^{(1)}} \\ {w^{(2)}} \\ {\vdots} \\ {w^{(n)}} \\ {b}\end{array}\right]$, $\boldsymbol{x}_i=\left[\begin{array}{c}{x_i^{(1)}} \\ {x_i^{(2)}} \\ {\vdots} \\ {x_i^{(n)}} \\ {1}\end{array}\right]$, where subscripts index samples and superscripts index feature dimensions. Gradient descent proceeds as follows:
\begin{aligned} \text{Repeat} &\Bigg\{ w_j:=w_j-\alpha \frac{\partial L(\boldsymbol{w})}{\partial w_{j}} \\ &(\text{simultaneously update all} \quad w_j) \\ &\Bigg\} \end{aligned}
The cross-entropy loss function is
L(\boldsymbol{w})=-\frac{1}{N} \sum_{i=1}^{N}\left[y_{i} \log f\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-f\left(\boldsymbol{x}_{i}\right)\right)\right]
where the predicted probability of class label $Y=1$, $P(Y=1|\boldsymbol{x})$, i.e., the predicted output $f(\boldsymbol{x}_i)$ (the true class label is conventionally written $y_i$), is
P(Y=1 | \boldsymbol{x})=f\left(\boldsymbol{x}_{i}\right)=\frac{1}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}
Hence
\begin{aligned} & y_{i} \log f\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-f\left(\boldsymbol{x}_{i}\right)\right) \\=& y_{i} \log \left(\frac{1}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right)+\left(1-y_{i}\right) \log \left(1-\frac{1}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right) \\=&-y_{i} \log \left(1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right)+\left(1-y_{i}\right) \log \left(\frac{e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right) \\=&-y_{i} \log \left(1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right)-\left(1-y_{i}\right) \log \left(\frac{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right) \\=&-y_{i} \log \left(1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right)-\left(1-y_{i}\right) \log \left(1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right) \end{aligned}
Therefore
\begin{aligned} \frac{\partial L(\boldsymbol{w})}{\partial w_{i}} &=\frac{\partial}{\partial w_{i}}\left\{-\frac{1}{N} \sum_{i=1}^{N}\left[y_{i} \log f\left(\boldsymbol{x}_{i}\right)+\left(1-y_{i}\right) \log \left(1-f\left(\boldsymbol{x}_{\boldsymbol{i}}\right)\right)\right]\right\} \\ &=\frac{\partial}{\partial w_{i}}\left\{-\frac{1}{N} \sum_{i=1}^{N}\left[-y_{i} \log \left(1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right)-\left(1-y_{i}\right) \log \left(1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right)\right]\right\} \\ &=-\frac{1}{N} \sum_{i=1}^{N}\left[-y_{i}\left(\frac{-x_{i}^{(j)} e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right)-\left(1-y_{i}\right)\left(\frac{\boldsymbol{x}_{i}^{(j)} \boldsymbol{e}^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+\boldsymbol{e}^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right)\right] \\ &=-\frac{1}{N} \sum_{i=1}^{N}\left[y_{i}\left(\frac{x_{i}^{(j)}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right)-\left(1-y_{i}\right)\left(\frac{{x}_{i}^{(j)} e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+\boldsymbol{e}^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right)\right] \\ &=-\frac{1}{N} \sum_{i=1}^{N}\left[\frac{y_{i} x_{i}^{(j)}-x_{i}^{(j)} e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}+y_{i} x_{i}^{(j)} e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right] \\ &=-\frac{1}{N} \sum_{i=1}^{N}\left[\left(\frac{y_{i}\left(1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}\right)-e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right) x_{i}^{(j)}\right] \\ &=-\frac{1}{N} \sum_{i=1}^{N}\left[\left(y_{i}-\frac{e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}{1+e^{\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right) x_{i}^{(j)}\right] \\ &=-\frac{1}{N} \sum_{i=1}^{N}\left[\left(y_{i}-\frac{1}{1+e^{-\boldsymbol{w} \cdot \boldsymbol{x}_{i}}}\right) x_{i}^{(j)}\right] \\ &=-\frac{1}{N} 
\sum_{i=1}^{N}\left[\left(y_{i}-f\left(\boldsymbol{x}_{i}\right)\right) x_{i}^{(j)}\right] \\ &=\frac{1}{N} \sum_{i=1}^{N}\left[f\left(\boldsymbol{x}_{i}\right)-\left(y_{i}\right) x_{i}^{(j)}\right] \end{aligned}
∂wi∂L(w)=∂wi∂{−N1i=1∑N[yilogf(xi)+(1−yi)log(1−f(xi))]}=∂wi∂{−N1i=1∑N[−yilog(1+e−w⋅xi)−(1−yi)log(1+ew⋅xi)]}=−N1i=1∑N[−yi(1+e−w⋅xi−xi(j)e−w⋅xi)−(1−yi)(1+ew⋅xixi(j)ew⋅xi)]=−N1i=1∑N[yi(1+ew⋅xixi(j))−(1−yi)(1+ew⋅xixi(j)ew⋅xi)]=−N1i=1∑N[1+ew⋅xiyixi(j)−xi(j)ew⋅xi+yixi(j)ew⋅xi]=−N1i=1∑N[(1+ew⋅xiyi(1+ew⋅xi)−ew⋅xi)xi(j)]=−N1i=1∑N[(yi−1+ew⋅xiew⋅xi)xi(j)]=−N1i=1∑N[(yi−1+e−w⋅xi1)xi(j)]=−N1i=1∑N[(yi−f(xi))xi(j)]=N1i=1∑N[f(xi)−(yi)xi(j)]
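The closed-form gradient $\frac{1}{N}\sum_{i=1}^{N}(f(\boldsymbol{x}_i)-y_i)\,x_i^{(j)}$ can be validated against a central-difference numerical gradient. The data below are hypothetical toy values, used only to check the formula:

```python
import numpy as np

# Hypothetical toy data for a numerical gradient check
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))              # N=5 samples, 3 features
y = np.array([0, 1, 1, 0, 1], dtype=float)
w = rng.normal(size=3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    # L(w) = -1/N * sum[ y*log(f) + (1-y)*log(1-f) ]
    f = sigmoid(X @ w)
    return -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))

# Closed-form gradient from the derivation: 1/N * sum[(f(x_i) - y_i) * x_i]
grad = X.T @ (sigmoid(X @ w) - y) / len(y)

# Numerical gradient via central differences, one coordinate at a time
eps = 1e-6
num_grad = np.array([
    (loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(grad, num_grad, atol=1e-6))  # True
```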
So the gradient descent algorithm is:

\begin{aligned} \text{Repeat} &\Bigg\{ \quad w_j:=w_j-\alpha \frac{1}{N} \sum_{i=1}^{N}\left[\left(f\left(\boldsymbol{x}_{i}\right)-y_{i}\right) x_{i}^{(j)}\right] \\ &\quad (\text{simultaneously update all } w_j) \\ &\Bigg\} \end{aligned}
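The update rule can be sketched as a vectorized batch gradient-descent loop. This is a minimal illustration on made-up, linearly separable toy data (the first column of `X` is the constant-1 feature for the bias term), not a production implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd(X, y, alpha=0.1, n_iters=1000):
    """Batch gradient descent for logistic regression.
    All components of w are updated simultaneously in one vectorized step."""
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        f = sigmoid(X @ w)           # f(x_i) for every sample
        grad = X.T @ (f - y) / N     # 1/N * sum[(f(x_i) - y_i) * x_i]
        w = w - alpha * grad         # w_j := w_j - alpha * grad_j, all j at once
    return w

# Hypothetical toy data: bias column of 1s plus one feature
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, -1.0], [1.0, 2.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])
w = logistic_gd(X, y, alpha=0.5, n_iters=2000)
preds = (sigmoid(X @ w) >= 0.5).astype(float)
print(preds)  # recovers y on this separable set
```

Since the loss is convex in $\boldsymbol{w}$, a sufficiently small $\alpha$ guarantees convergence to the global minimum.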
Note: although the resulting gradient descent update looks identical on the surface to that of linear regression, here
f
(
x
)
=
g
(
w
⋅
x
)
=
σ
(
w
⋅
x
)
f(x)=g(\boldsymbol {w \cdot x})=\sigma (\boldsymbol {w \cdot x})
f(x)=g(w⋅x)=σ(w⋅x) is nonlinear, so the model is actually different from linear regression. Also, feature scaling is necessary when running gradient descent; that will be covered in a later update.
References
1. Li Hang, Statistical Learning Methods (《统计学习方法》)
2. Zhou Zhihua, Machine Learning (《机器学习》)
3. Andrew Ng, Machine Learning course