@(Machine Learning)[Regression]
# Softmax Regression in Detail
In softmax regression we tackle multi-class classification (as opposed to the binary classification handled by logistic regression): the label $y$ can take $k$ different values. Given a training set $\{(x^{(1)},y^{(1)}),\cdots,(x^{(m)},y^{(m)})\}$, we have $y^{(i)}\in \{1,2,\cdots,k\}$ for every example $i$.
For a given test input $x$, we want the hypothesis to estimate the probability $P(y=j\mid x)$ for each class $j$. The hypothesis must therefore output a $k$-dimensional vector (whose elements sum to 1) holding the $k$ estimated probabilities. We use a hypothesis $h_{\theta}(x)$ of the following form:
$$
h_{\theta}(x^{(i)})=
\begin{bmatrix}
P(y^{(i)}=1|x^{(i)};\theta) \\
P(y^{(i)}=2|x^{(i)};\theta) \\
\vdots \\
P(y^{(i)}=k|x^{(i)};\theta)
\end{bmatrix}
=\frac{1}{\sum_{j=1}^{k}e^{\theta_j^Tx^{(i)}}}
\begin{bmatrix}
e^{\theta_1^Tx^{(i)}} \\
e^{\theta_2^Tx^{(i)}} \\
\vdots \\
e^{\theta_k^Tx^{(i)}}
\end{bmatrix}
\tag{1}
$$
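As a quick sanity check on equation (1), here is a minimal NumPy sketch of the hypothesis; the function name and the toy numbers are my own, and the max-subtraction is a standard numerical-stability trick that the derivation above does not require:

```python
import numpy as np

def h(theta, x):
    """Softmax hypothesis of equation (1).

    theta : (k, n+1) parameter matrix, one row theta_j per class
    x     : (n+1,) input vector, intercept term included

    Returns a (k,) vector of class probabilities that sums to 1.
    """
    z = theta @ x        # theta_j^T x for every class j at once
    z = z - z.max()      # stability shift; cancels in the ratio
    e = np.exp(z)
    return e / e.sum()

# Hypothetical toy case: k = 3 classes, n = 2 features plus intercept.
theta = np.array([[0.1,  0.5, -0.2],
                  [0.0, -0.3,  0.8],
                  [0.2,  0.1,  0.1]])
x = np.array([1.0, 2.0, -1.0])          # leading 1.0 is the intercept
print(h(theta, x), h(theta, x).sum())   # k probabilities, sum = 1.0
```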
The parameter $\theta$ is a $k\times (n+1)$ matrix, one row $\theta_j^T$ per class. Using the indicator function $1(\cdot)$, which is 1 when its argument is true and 0 otherwise, the probability of a single example can be written compactly as:
$$
P(y^{(i)}|x^{(i)};\theta)=\prod_{j=1}^{k}\left\{\frac{e^{\theta_j^Tx^{(i)}}}{\sum_{l=1}^{k}e^{\theta_l^Tx^{(i)}}}\right\}^{1(y^{(i)}=j)}
\tag{2}
$$
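Because the indicator in the exponent zeroes out every factor except the one for the true class, equation (2) simply picks out the probability the hypothesis assigns to $y^{(i)}$. A minimal sketch of that reading (the name and the 1-based label convention are my own):

```python
import numpy as np

def example_prob(theta, x, y):
    """P(y | x; theta) of equation (2).

    The product over j collapses to the single factor whose indicator
    1(y = j) equals 1: the probability equation (1) assigns to class y.
    """
    z = theta @ x
    probs = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    return probs[y - 1]   # labels run 1..k, array indices run 0..k-1
```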
The likelihood function is:
$$
\begin{aligned}
L(\theta) &= P(\boldsymbol{Y}|\boldsymbol{X};\theta) \\
&= \prod_{i=1}^{m}P(y^{(i)}|x^{(i)};\theta) \\
&= \prod_{i=1}^{m}\prod_{j=1}^{k}\left\{\frac{e^{\theta_j^Tx^{(i)}}}{\sum_{l=1}^{k}e^{\theta_l^Tx^{(i)}}}\right\}^{1(y^{(i)}=j)}
\end{aligned}
\tag{3}
$$
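Read literally, equation (3) multiplies the per-example probabilities together. A small sketch under the same conventions as above; the loop is only illustrative, since for realistic $m$ the product underflows and one works with the log instead, as in equation (4) below:

```python
import numpy as np

def likelihood(theta, X, y):
    """L(theta) of equation (3): the product over all m examples of
    P(y^(i) | x^(i); theta). Only usable for tiny m, because every
    factor is < 1 and the product underflows quickly."""
    total = 1.0
    for x_i, y_i in zip(X, y):
        z = theta @ x_i
        probs = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
        total *= probs[y_i - 1]   # the factor selected by the indicator
    return total
```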
Taking logs, the log-likelihood is:
$$
\begin{aligned}
l(\theta) &= \log L(\theta) \\
&= \sum_{i=1}^{m}\sum_{j=1}^{k}1(y^{(i)}=j)\log\frac{e^{\theta_j^Tx^{(i)}}}{\sum_{l=1}^{k}e^{\theta_l^Tx^{(i)}}}
\end{aligned}
\tag{4}
$$
We will train the model parameters $\theta$ to minimize the cost function, which is just the log-likelihood of equation (4) scaled by $-\frac{1}{m}$:
$$
J(\theta)=-\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k}1(y^{(i)}=j)\log\frac{e^{\theta_j^Tx^{(i)}}}{\sum_{l=1}^{k}e^{\theta_l^Tx^{(i)}}}\right]
\tag{5}
$$
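To tie the derivation together, here is a hedged, vectorized sketch of equation (5); all names and the toy data are hypothetical, and the indicator $1(y^{(i)}=j)$ becomes a one-hot matrix:

```python
import numpy as np

def cost(theta, X, y, k):
    """Cost J(theta) of equation (5).

    theta : (k, n+1) parameter matrix
    X     : (m, n+1) design matrix, intercept column included
    y     : (m,) labels in {1, ..., k}
    """
    m = X.shape[0]
    Z = X @ theta.T                       # (m, k): theta_j^T x^(i)
    Z = Z - Z.max(axis=1, keepdims=True)  # stability shift per example
    log_probs = Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))
    onehot = np.eye(k)[y - 1]             # (m, k): indicator 1(y^(i) = j)
    return -np.mean(np.sum(onehot * log_probs, axis=1))

# Hypothetical toy data: m = 4 examples, n = 2 features, k = 3 classes.
rng = np.random.default_rng(0)
X = np.hstack([np.ones((4, 1)), rng.normal(size=(4, 2))])
y = np.array([1, 3, 2, 1])
theta = np.zeros((3, 3))
print(cost(theta, X, y, k=3))  # log(3): all-zero theta gives 1/k per class
```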