My personal blog: https://huaxuan0720.github.io/ — you are welcome to visit.
Preface
The EM algorithm is an iterative method for estimating models that contain latent variables. When we collect data in practice, we do not always capture every relevant piece of information. For example, suppose we want to estimate the height distribution of all students in a school. We could model all heights as a single normal distribution, but the height distributions of male and female students differ. If we recorded only heights and not gender, the estimated distribution parameters would carry a substantial error. In that case we should treat each sample's gender as a latent variable, in the hope of obtaining a more accurate estimate of the distribution.
I. Preliminaries
1. Maximum likelihood estimation
Maximum likelihood estimation should already be familiar: we used it when deriving logistic regression. Here is a brief recap.
Suppose we have a probability distribution, written $P(x;\theta)$, where $\theta$ is an unknown parameter — a single value or a vector of several values — and $x$ denotes an input sample. We want to estimate $\theta$ by sampling. Suppose we collect $N$ data points $\{x_1, x_2, \cdots, x_N\}$. The joint probability of the sample can then be written as a function of $\theta$:
$$L(\theta) = L(x_1, x_2, \cdots, x_N;\theta) = \prod_{i=1}^N P(x_i;\theta)$$
$L(\theta)$ is a function of $\theta$: as $\theta$ varies, so does $L$. The goal of maximum likelihood estimation is, with the sample $\{x_1,...,x_N\}$ fixed, to find the $\theta$ that maximizes the likelihood:
$$\theta^* = \mathop{\arg\max}_{\theta}{L(\theta)}$$
Mathematically, solving for $\theta^*$ amounts to finding an extremum of $L(\theta)$: the point where the derivative equals 0 gives $\theta^*$.
Since $L(\theta)$ and $\ln L(\theta)$ attain their extrema at the same $\theta$, we can take the logarithm of $L(\theta)$, turning the product into a sum (which is easier to differentiate), and obtain the log-likelihood:
$$\theta^* = \mathop{\arg\max}_{\theta}{\ln L(\theta)} = \mathop{\arg\max}_{\theta} \sum_i \ln P(x_i;\theta)$$
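The argmax above can be checked numerically. Below is a minimal sketch (mine, not part of the original post) that grid-searches the Bernoulli log-likelihood; the data set and grid size are invented for illustration.

```python
import math

# Invented sample: 7 successes out of 10 Bernoulli trials.
data = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]

def log_likelihood(theta, xs):
    # ln L(theta) = sum_i ln P(x_i; theta) for a Bernoulli model.
    return sum(math.log(theta if x == 1 else 1.0 - theta) for x in xs)

# Grid search for the maximizer of the log-likelihood.
grid = [i / 1000 for i in range(1, 1000)]
theta_star = max(grid, key=lambda t: log_likelihood(t, data))

print(round(theta_star, 2))  # close to the sample mean 0.7
```

As expected, the maximizer coincides with the sample mean, which is the closed-form Bernoulli MLE.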
2. Jensen's inequality
The figure below is a classic illustration of Jensen's inequality.
If a function $f(x)$ is continuous on its domain and its second derivative is everywhere less than or equal to 0, we call it concave on that domain; conversely, if the second derivative is everywhere greater than or equal to 0, we call it convex.
If $f(x)$ is convex, then on its domain we have:
$$E(f(X)) \geq f(E(X))$$
Conversely, if $f(x)$ is concave, then on its domain we have:
$$E(f(X)) \leq f(E(X))$$
where $E$ denotes expectation. Both inequalities hold with equality if and only if $X$ is a constant.
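A small numeric check (my own, not the author's) of the concave case with $f = \ln$: for any non-constant positive random variable, $E[\ln X]$ should fall strictly below $\ln E[X]$. The discrete distribution below is made up.

```python
import math

# A made-up discrete distribution over positive values.
values = [1.0, 2.0, 8.0]
probs = [0.5, 0.3, 0.2]

e_x = sum(p * v for p, v in zip(probs, values))               # E[X]
e_ln_x = sum(p * math.log(v) for p, v in zip(probs, values))  # E[ln X]

# ln is concave, so Jensen gives E[ln X] <= ln E[X].
print(e_ln_x <= math.log(e_x))  # True
```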
3. Marginal distributions
Suppose we have two random variables; by sampling we obtain a two-dimensional joint probability distribution $P(X=x_i, Y=y_j)$.
For each $X=x_i$, summing over all values of $Y$ gives:
$$\sum_{y_j}P(X=x_i, Y=y_j) = P(X=x_i)$$
This is called the marginal distribution of $X=x_i$.
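Marginalization is easy to demonstrate with a small joint table; the $2 \times 3$ distribution below is invented for illustration.

```python
# A made-up 2x3 joint distribution P(X, Y), stored as a dict.
joint = {
    ("x1", "y1"): 0.10, ("x1", "y2"): 0.20, ("x1", "y3"): 0.10,
    ("x2", "y1"): 0.25, ("x2", "y2"): 0.15, ("x2", "y3"): 0.20,
}

def marginal_x(joint, x):
    # P(X = x) = sum over all y of P(X = x, Y = y)
    return sum(p for (xi, _), p in joint.items() if xi == x)

print(round(marginal_x(joint, "x1"), 2))  # 0.4
print(round(marginal_x(joint, "x2"), 2))  # 0.6
```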
With these preliminaries in place, we can now derive the EM algorithm.
II. The EM Algorithm
Suppose our data set is:
$$D = \{x^{(1)}, x^{(2)}, \cdots, x^{(N)}\}$$
where $x^{(i)}$ is one concrete observation, the outcome of one independent trial, and $N$ is the number of independent trials.
Let the probability distribution of a sample be $P(x^{(i)};\theta)$, where $\theta$ is the model parameter to be estimated — a single variable or a vector of several variables.
By maximum likelihood estimation, we have:
$$L(\theta) = \prod_{i}P(x^{(i)}; \theta) \quad 1 \leq i \leq N \tag{1}$$
Taking the logarithm of both sides:
$$\ln L(\theta) = \sum_{i} \ln P(x^{(i)}; \theta) \quad 1 \leq i \leq N \tag{2}$$
We can now view $P(x^{(i)}; \theta)$ as a marginal distribution over a latent variable, which we denote $Z$:
$$\ln L(\theta) = \sum_i \ln \sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta) \quad 1 \leq i \leq N \tag{3}$$
Here we used the marginalization identity:
$$P(x^{(i)}; \theta) = \sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta)$$
In the expression above, $z$ is a latent variable and must itself follow some probability distribution, which we denote $Q_{i}(z^{(i)})$; clearly $\sum_{z^{(i)}} Q_i(z^{(i)}) = 1$. The subscript and superscript $i$ refer to the $i$-th sample. We can therefore rewrite the expression as:
$$\begin{aligned} \ln L(\theta) &= \sum_i \ln \sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta) \\ &= \sum_i \ln \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} \end{aligned} \tag{4}$$
Now single out the following two pieces:
$$Q_i(z^{(i)}) = p(z^{(i)}), \qquad \frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} = f(z^{(i)})$$
Clearly $\sum_{z^{(i)}} Q_i(z^{(i)}) = \sum_{z^{(i)}} p(z^{(i)}) = 1$, so we can rewrite the expression as:
$$\ln L(\theta) = \sum_i \ln \sum_{z^{(i)}} p(z^{(i)}) f(z^{(i)}) \tag{5}$$
Observe that $\sum_{z^{(i)}} p(z^{(i)}) f(z^{(i)})$ is precisely the expectation of $f(z^{(i)})$, where $p(z^{(i)})$ is the distribution of $z^{(i)}$ under which the expectation is taken. We can therefore write:
$$E[f(z^{(i)})] = \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} \tag{6}$$
The likelihood then becomes:
$$\ln L(\theta) = \sum_i \ln\left(E[f(z^{(i)})]\right) = \sum_i \ln\left(E\left[\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right]\right) \tag{7}$$
This is where Jensen's inequality comes in.
Consider the function $g(x)=\ln(x)$: its first derivative is $g'(x) = \frac{1}{x}$ and its second derivative $g''(x) = - \frac{1}{x^2}$ is always negative, so $g(x) = \ln(x)$ is concave. Applying Jensen's inequality to $\ln L(\theta)$ gives:
$$\begin{aligned} \ln L(\theta) &= \sum_i \ln\left(E[f(z^{(i)})]\right) \\ &\geq \sum_i E\left[\ln f(z^{(i)})\right] \\ &= \sum_i E\left[\ln\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}\right] \\ &= \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \ln\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} \end{aligned} \tag{8}$$
So Jensen's inequality gives us the following important inequality:
$$\ln L(\theta) \geq \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \ln\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} \tag{9}$$
Note that we applied Jensen's inequality with respect to the distribution of $z^{(i)}$, and $\sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \ln\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$ is a lower bound of $\ln L(\theta)$. In effect, $\ln L(\theta)$ now involves two kinds of unknowns — the parameter $\theta$ and the latent variables $z^{(i)}$ — so we need to be clear about what adjusting each of them does.
Since Jensen's inequality acts on $z^{(i)}$, adjusting the distribution of $z^{(i)}$ adjusts the lower bound of $\ln L(\theta)$, pushing it up step by step until it equals the current value of $\ln L(\theta)$.
Then, holding $z^{(i)}$ fixed, we adjust $\theta$: treating $z^{(i)}$ as known, we can compute $\theta$ by maximum likelihood, obtaining a new value, say $\theta'$. With this new $\theta'$ we again adjust $z^{(i)}$ to raise the lower bound until it matches $\ln L(\theta)$, then fix $z^{(i)}$ and update $\theta$ again. Repeating this process, the likelihood converges to some local maximum at $\theta^*$.
The figure below is a classic illustration of the EM iteration. (Image from the web.)
In the procedure above, only when the current lower bound equals the log-likelihood at the current $\theta$ can we guarantee that optimizing the lower bound truly optimizes the target likelihood.
$$\ln L(\theta) \geq \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \ln\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} \tag{10}$$
During the iteration, we fix $\theta$ and adjust the distribution of $z^{(i)}$ so that the inequality becomes an equality, i.e. the lower bound reaches $\ln L(\theta)$. By the equality condition of Jensen's inequality: when $f(x)$ is concave we have $f(E[X]) \geq E[f(X)]$, with equality only when $X$ is a constant. In our case,
$$X = \frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$$
so we need $\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$ to be a constant, which we call $C$. Then we have:
$$\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} = C \tag{11}$$
$$P(x^{(i)}, z^{(i)}; \theta) = C\, Q_i(z^{(i)})$$
$$\sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta) = C \sum_{z^{(i)}} Q_i(z^{(i)}) \tag{12}$$
Since $Q_i(z^{(i)})$ is a probability distribution of the latent variable $z^{(i)}$, it satisfies:
$$\sum_{z^{(i)}} Q_i(z^{(i)}) = 1, \quad Q_i(z^{(i)}) \geq 0$$
Substituting $\sum_{z^{(i)}} Q_i(z^{(i)}) = 1$ into equation (12) above gives:
$$\sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta) = C$$
Substituting this value of $C$ back into equation (11), we have:
$$\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})} = C = \sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta)$$
$$\begin{aligned} Q_i(z^{(i)}) &= \frac{P(x^{(i)}, z^{(i)};\theta)}{\sum_{z^{(i)}} P(x^{(i)}, z^{(i)}; \theta)} \\ &= \frac{P(x^{(i)}, z^{(i)};\theta)}{P(x^{(i)};\theta)} \\ &= P(z^{(i)}|x^{(i)};\theta) \end{aligned}$$
So we obtain the value of $Q_i(z^{(i)})$, which is $P(z^{(i)}|x^{(i)};\theta)$: with the current model parameter $\theta$ fixed, the probability of $z^{(i)}$ given $x^{(i)}$.
This covers most of the EM algorithm. First, we randomly initialize the model parameters, say as $\theta^{(0)}$. Then, in each iteration, we compute the conditional probability (expectation) of $z^{(i)}$ — this is the "E-step" of EM. Finally, using this distribution, we find by maximum likelihood the value of $\theta$ that maximizes the likelihood under the current latent-variable distribution, and update it — this is the "M-step".
Observe that in the M-step we have:
$$\ln L(\theta) = \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \cdot \ln\frac{P(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$$
$$\theta^{(j + 1)} = \mathop{\arg\max}_{\theta}\sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \ln P(x^{(i)}, z^{(i)};\theta) - \sum_i \sum_{z^{(i)}} Q_i(z^{(i)})\ln Q_i(z^{(i)})$$
In the expression above, $\sum_i \sum_{z^{(i)}} Q_i(z^{(i)})\ln Q_i(z^{(i)})$ is a constant with respect to the optimization, so it can be dropped and the update simplifies to:
$$\begin{aligned} \theta^{(j + 1)} &= \mathop{\arg\max}_{\theta}\sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \ln P(x^{(i)}, z^{(i)};\theta) \\ &= \mathop{\arg\max}_{\theta}\sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta) \end{aligned} \tag{13}$$
The inner sum in equation (13), $\sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta)$, is the core of the EM algorithm. It is usually called the Q-function and written $Q(\theta, \theta^{(j)})$.
So the EM algorithm can be summarized as follows:
- Given the data set $D = \{x^{(1)}, x^{(2)}, \cdots, x^{(N)}\}$, randomly initialize the model parameter $\theta$ as $\theta^{(0)}$.
- For each iteration $j = 0, 1, 2, 3, \cdots, M$:
  - E-step: under the current model parameter $\theta^{(j)}$, compute the conditional distribution of the latent variables: $Q_i(z^{(i)}) = P(z^{(i)}|x^{(i)};\theta^{(j)})$
  - M-step: using this conditional distribution, maximize the likelihood to obtain the new parameter value $\theta^{(j+1)}$: $\theta^{(j+1)} = \mathop{\arg\max}_{\theta}\sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta)$
  - If $\theta^{(j+1)}$ has converged, exit the loop.
- Output the final model parameter $\theta$.
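The loop above can be sketched concretely. The example below is my own, not from the original post: it runs EM on a two-component one-dimensional Gaussian mixture (a standard instance of the algorithm) with synthetic data and variances fixed at 1 to keep the M-step short.

```python
import math
import random

random.seed(0)

# Synthetic 1-D data drawn from two Gaussians (made up for illustration).
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Initial guesses for (mixing weight, mu1, mu2); variances fixed at 1.
w, mu1, mu2 = 0.5, -1.0, 6.0

for _ in range(50):
    # E-step: posterior responsibility of component 1 for each point.
    gamma = [w * normal_pdf(x, mu1, 1.0) /
             (w * normal_pdf(x, mu1, 1.0) + (1 - w) * normal_pdf(x, mu2, 1.0))
             for x in data]
    # M-step: closed-form maximizers of the expected complete-data log-likelihood.
    w = sum(gamma) / len(data)
    mu1 = sum(g * x for g, x in zip(gamma, data)) / sum(gamma)
    mu2 = sum((1 - g) * x for g, x in zip(gamma, data)) / sum(1 - g for g in gamma)

print(round(mu1, 1), round(mu2, 1))  # close to the true means 0 and 5
```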
III. Solving the Three-Coin Model with EM
The three-coin model is a simple application of the EM algorithm; see 《统计学习方法》 (Statistical Learning Methods) for the original problem statement.
Suppose we have three biased coins A, B, and C, whose probabilities of landing heads are $\pi$, $p$, and $q$ respectively. The experiment runs as follows: first toss A; if A shows heads, toss B; if A shows tails, toss C. Record the result of B or C as 1 for heads and 0 for tails. Repeat this independently N times. We can only observe the final results, not the tossing process, and want to estimate the model parameters $\theta=(\pi,p,q)$.
Since the experiment consists of N independent repetitions, we record the outcomes as follows; each outcome is known:
$$X = \{x^{(1)}, x^{(2)}, \cdots, x^{(N)}\} \quad x^{(i)} \in \{0, 1\}$$
Each trial also produces a latent variable $z^{(i)}$ (the unobserved outcome of coin A), which we record as follows; these values are unknown:
$$Z = \{z^{(1)}, z^{(2)}, \cdots, z^{(N)}\} \quad z^{(i)} \in \{0, 1\}$$
For the $i$-th trial, we have:
$$P(x^{(i)} = 0;\theta) = \pi(1-p)^{1-x^{(i)}} + (1-\pi)(1-q)^{1-x^{(i)}}$$
$$P(x^{(i)}=1;\theta) = \pi p^{x^{(i)}} + (1-\pi)q^{x^{(i)}}$$
Combining both cases, we have:
$$P(x^{(i)};\theta) = \pi p^{x^{(i)}} (1-p)^{1-x^{(i)}} + (1-\pi)q^{x^{(i)}}(1-q)^{1-x^{(i)}}$$
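This observation probability transcribes directly into a small helper (my own, for illustration; the parameter values are made up):

```python
# Observation probability P(x; theta) of the three-coin model:
# pi * p^x * (1-p)^(1-x) + (1-pi) * q^x * (1-q)^(1-x)
def p_obs(x, pi, p, q):
    return pi * p**x * (1 - p)**(1 - x) + (1 - pi) * q**x * (1 - q)**(1 - x)

# With pi = 0.5, p = 0.8, q = 0.2, heads and tails are equally likely overall.
print(round(p_obs(1, 0.5, 0.8, 0.2), 6))  # 0.5
print(round(p_obs(0, 0.5, 0.8, 0.2), 6))  # 0.5
```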
Constructing the likelihood
We can construct the likelihood function as follows:
$$\begin{aligned} L(\theta) &= \prod_i P(x^{(i)};\theta) \\ &= \prod_i [\pi p^{x^{(i)}} (1-p)^{1-x^{(i)}} + (1-\pi)q^{x^{(i)}}(1-q)^{1-x^{(i)}}] \end{aligned}$$
Taking logarithms of both sides:
$$\ln L(\theta) = \sum_i \ln[\pi p^{x^{(i)}} (1-p)^{1-x^{(i)}} + (1-\pi)q^{x^{(i)}}(1-q)^{1-x^{(i)}}]$$
Constructing the Q-function
Unless stated otherwise, subscripts denote the iteration number and superscripts the sample index; the superscript in $\theta^{(j)}$ denotes the $j$-th iteration.
For the three-coin problem, the Q-function can be constructed as:
$$\begin{aligned} Q(\theta, \theta^{(j)}) &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta) \\ &= \sum_i \{P(z^{(i)} =1|x^{(i)};\theta^{(j)})\cdot \ln P(x^{(i)}, z^{(i)}=1;\theta) + P(z^{(i)} =0|x^{(i)};\theta^{(j)})\cdot \ln P(x^{(i)}, z^{(i)}=0;\theta)\} \end{aligned}$$
So we need to work out the four probabilities $P(z^{(i)} =1|x^{(i)};\theta^{(j)})$, $P(x^{(i)}, z^{(i)}=1;\theta)$, $P(z^{(i)} =0|x^{(i)};\theta^{(j)})$, and $P(x^{(i)}, z^{(i)}=0;\theta)$.
Solving for the maximum
$$\begin{aligned} P(z^{(i)}=1|x^{(i)};\theta^{(j)}) &= \frac{P(x^{(i)}, z^{(i)}=1;\theta^{(j)})}{P(x^{(i)};\theta^{(j)})} \\ &= \frac{\pi_j \cdot p_j^{x^{(i)}} \cdot (1 - p_j)^{1-x^{(i)}}}{\pi_j \cdot p_j^{x^{(i)}} \cdot (1 - p_j)^{1-x^{(i)}} + (1-\pi_j) \cdot q_j^{x^{(i)}} \cdot (1-q_j)^{1-x^{(i)}}} \\ &= \mu_j^{(i)} \end{aligned}$$
This quantity is fixed within one iteration; we denote it $\mu_j^{(i)}$, where the superscript $(i)$ is the sample index and the subscript $j$ the iteration index.
Then clearly:
$$\begin{aligned} P(z^{(i)}=0|x^{(i)};\theta^{(j)}) &= \frac{P(x^{(i)}, z^{(i)}=0;\theta^{(j)})}{P(x^{(i)};\theta^{(j)})} \\ &= \frac{(1-\pi_j) \cdot q_j^{x^{(i)}} \cdot (1-q_j)^{1-x^{(i)}}}{\pi_j \cdot p_j^{x^{(i)}} \cdot (1 - p_j)^{1-x^{(i)}} + (1-\pi_j) \cdot q_j^{x^{(i)}} \cdot (1-q_j)^{1-x^{(i)}}} \\ &= 1 - \mu_j^{(i)} \end{aligned}$$
Next, we compute $P(x^{(i)}, z^{(i)}=1;\theta)$ and $P(x^{(i)}, z^{(i)}=0;\theta)$:
$$P(x^{(i)}, z^{(i)}=1;\theta) = \pi \cdot p^{x^{(i)}} \cdot (1-p)^{1-x^{(i)}}$$
$$P(x^{(i)}, z^{(i)}=0;\theta)=(1-\pi) \cdot q^{x^{(i)}}\cdot (1-q)^{1-x^{(i)}}$$
Substituting all these results into the Q-function:
$$\begin{aligned} Q(\theta, \theta^{(j)}) &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta) \\ &= \sum_i \{P(z^{(i)} =1|x^{(i)};\theta^{(j)})\cdot \ln P(x^{(i)}, z^{(i)}=1;\theta) + P(z^{(i)} =0|x^{(i)};\theta^{(j)})\cdot \ln P(x^{(i)}, z^{(i)}=0;\theta)\} \\ &= \sum_i \{\mu_j^{(i)} \cdot \ln[\pi \cdot p^{x^{(i)}} \cdot (1-p)^{1-x^{(i)}}] + (1-\mu_j^{(i)})\cdot \ln[(1-\pi) \cdot q^{x^{(i)}}\cdot (1-q)^{1-x^{(i)}}]\} \end{aligned}$$
The next step is to take partial derivatives with respect to each of the unknowns:
$$\begin{aligned} \frac{\partial Q}{\partial \pi} &= \sum_i \left\{\mu_j^{(i)} \cdot \frac{p^{x^{(i)}} \cdot (1-p)^{1-x^{(i)}}}{\pi \cdot p^{x^{(i)}} \cdot (1-p)^{1-x^{(i)}}} + (1-\mu_j^{(i)})\cdot \frac{-1 \cdot q^{x^{(i)}}\cdot (1-q)^{1-x^{(i)}}}{(1-\pi) \cdot q^{x^{(i)}}\cdot (1-q)^{1-x^{(i)}}}\right\} \\ &= \sum_i \left\{\mu_j^{(i)} \cdot \frac{1}{\pi} + (\mu_j^{(i)}-1)\cdot \frac{1}{1-\pi}\right\} \\ &= \sum_i \left\{\mu_j^{(i)}\left(\frac{1}{\pi} + \frac{1}{1-\pi}\right) - \frac{1}{1-\pi} \right\} \\ &= \sum_i \left\{\mu_j^{(i)} \cdot \frac{1}{\pi(1-\pi)}\right\} - \frac{N\pi}{\pi(1-\pi)} \\ &= \frac{1}{\pi(1-\pi)}\left\{\sum_i \mu_j^{(i)} - N\pi\right\} \end{aligned}$$
Setting this to zero, we have:
$$\sum_i \mu_j^{(i)} - N\pi = 0$$
that is:
$$\pi = \frac{1}{N} \sum_i \mu_j^{(i)}$$
In the same way, we can compute the partial derivative of the Q-function with respect to $p$:
$$\begin{aligned} \frac{\partial Q}{\partial p} &= \sum_i \mu_j^{(i)} \frac{x^{(i)} \cdot p^{x^{(i)}-1} \cdot (1-p)^{1-x^{(i)}} + p^{x^{(i)}}\cdot (1-x^{(i)})\cdot (1-p)^{-x^{(i)}}\cdot (-1)}{p^{x^{(i)}}\cdot (1-p)^{1-x^{(i)}}} + 0 \\ &= \sum_i \mu_j^{(i)} \frac{\frac{x^{(i)}}{p}\cdot p^{x^{(i)}}\cdot (1-p)^{1-x^{(i)}} + p^{x^{(i)}}\cdot (1-p)^{1-x^{(i)}} \cdot \frac{1}{1-p}\cdot (1-x^{(i)})\cdot (-1)}{p^{x^{(i)}}\cdot (1-p)^{1-x^{(i)}}} \\ &= \sum_i \mu_j^{(i)} \left\{\frac{x^{(i)}}{p} + \frac{1-x^{(i)}}{p-1}\right\} \\ &= \sum_i \mu_j^{(i)} \cdot \frac{(p-1)\cdot x^{(i)} + p(1-x^{(i)})}{p(p-1)} \\ &= \frac{1}{p(p-1)} \sum_i \mu_j^{(i)} \{p-x^{(i)}\} \\ &= \frac{1}{p(p-1)}\left\{p \cdot \sum_i \mu_j^{(i)} - \sum_i \mu_j^{(i)} \cdot x^{(i)}\right\} \end{aligned}$$
Setting this to zero, we get:
$$p \cdot \sum_i \mu_j^{(i)} - \sum_i \mu_j^{(i)} \cdot x^{(i)} = 0$$
that is:
$$p = \frac{\sum_i \mu_j^{(i)} \cdot x^{(i)}}{\sum_i \mu_j^{(i)}}$$
Likewise, taking the partial derivative with respect to $q$:
$$\begin{aligned} \frac{\partial Q}{\partial q} &= \sum_i (1-\mu_j^{(i)})\left(\frac{x^{(i)}}{q}+\frac{1-x^{(i)}}{q-1}\right) \\ &= \sum_i (1-\mu_j^{(i)})\frac{q-x^{(i)}}{q(q-1)} \\ &= \frac{1}{q(q-1)}\left\{q\cdot \sum_i (1-\mu_j^{(i)}) - \sum_i (1-\mu_j^{(i)})x^{(i)} \right\} \end{aligned}$$
Setting this to zero, we have:
$$q\cdot \sum_i (1-\mu_j^{(i)}) - \sum_i (1-\mu_j^{(i)})x^{(i)} = 0$$
that is:
$$q = \frac{\sum_i (1-\mu_j^{(i)})x^{(i)}}{\sum_i (1-\mu_j^{(i)})}$$
This completes the derivation of the iteration formulas for the three-coin model. They are summarized below, with subscripts marking the new iteration:
$$\mu_j^{(i)} = \frac{\pi_j \cdot p_j^{x^{(i)}} \cdot (1 - p_j)^{1-x^{(i)}}}{\pi_j \cdot p_j^{x^{(i)}} \cdot (1 - p_j)^{1-x^{(i)}} + (1-\pi_j) \cdot q_j^{x^{(i)}} \cdot (1-q_j)^{1-x^{(i)}}}$$
$$\pi_{j+1} = \frac{1}{N} \sum_i \mu_j^{(i)}$$
$$p_{j+1} = \frac{\sum_i \mu_j^{(i)} \cdot x^{(i)}}{\sum_i \mu_j^{(i)}}$$
$$q_{j+1} = \frac{\sum_i (1-\mu_j^{(i)})x^{(i)}}{\sum_i (1-\mu_j^{(i)})}$$
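These update formulas translate directly into code. Below is a minimal sketch (mine, not from the book); the 10-observation data set and the initial values $\pi_0=p_0=q_0=0.5$ follow the worked example in 《统计学习方法》, which converges to $(\pi, p, q) = (0.5, 0.6, 0.6)$.

```python
# Observed results of the three-coin experiment (example data from 《统计学习方法》;
# any 0/1 sequence works).
X = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]

def em_three_coins(X, pi, p, q, iters=100):
    N = len(X)
    for _ in range(iters):
        # E-step: mu[i] = P(z = 1 | x_i; theta_j), the posterior of coin A = heads.
        mu = [pi * p**x * (1 - p)**(1 - x) /
              (pi * p**x * (1 - p)**(1 - x) + (1 - pi) * q**x * (1 - q)**(1 - x))
              for x in X]
        # M-step: the closed-form updates derived above.
        pi = sum(mu) / N
        p = sum(m * x for m, x in zip(mu, X)) / sum(mu)
        q = sum((1 - m) * x for m, x in zip(mu, X)) / sum(1 - m for m in mu)
    return pi, p, q

print(em_three_coins(X, 0.5, 0.5, 0.5))  # → (0.5, 0.6, 0.6)
```

Note that the symmetric start $\pi_0=p_0=q_0=0.5$ is a fixed point after one update, which also illustrates EM's sensitivity to initialization discussed later.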
IV. Convergence of the EM Algorithm
So far we have taken for granted that EM converges to (a neighborhood of) some local maximum, without a rigorous proof, so we should verify its convergence.
Since we estimate the parameters by maximum likelihood, it suffices to guarantee that the likelihood is non-decreasing at every iteration, i.e. that the following inequality holds:
$$\ln L(\theta^{(j+1)}) \geq \ln L(\theta^{(j)})$$
Since:
$$P(x^{(i)};\theta) = \frac{P(x^{(i)}, z^{(i)};\theta)}{P(z^{(i)}|x^{(i)};\theta)}$$
taking logarithms of both sides gives:
$$\ln P(x^{(i)};\theta) = \ln P(x^{(i)}, z^{(i)};\theta) - \ln P(z^{(i)}|x^{(i)};\theta)$$
Summing over all samples:
$$\sum_i \ln P(x^{(i)};\theta) = \sum_i (\ln P(x^{(i)}, z^{(i)};\theta) - \ln P(z^{(i)}|x^{(i)};\theta))$$
From the Q-function we constructed, we have:
$$Q(\theta, \theta^{(j)}) = \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta)$$
In addition, we can construct a function $H(\theta, \theta^{(j)})$ as follows:
$$H(\theta, \theta^{(j)}) = \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(z^{(i)}| x^{(i)};\theta)$$
Subtracting the two functions:
$$\begin{aligned} Q(\theta, \theta^{(j)}) - H(\theta, \theta^{(j)}) &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(x^{(i)}, z^{(i)};\theta) - \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(z^{(i)}| x^{(i)};\theta) \\ &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) (\ln P(x^{(i)}, z^{(i)};\theta) - \ln P(z^{(i)}| x^{(i)};\theta)) \\ &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)})\ln\frac{P(x^{(i)}, z^{(i)};\theta)}{P(z^{(i)}| x^{(i)};\theta)} \\ &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)})\ln P(x^{(i)};\theta) \\ &= \sum_i \ln P(x^{(i)};\theta) \Big(\sum_{z^{(i)}}P(z^{(i)}|x^{(i)};\theta^{(j)}) \Big) \\ &= \sum_i \ln P(x^{(i)};\theta) \\ &= \ln L(\theta) \end{aligned}$$
In the derivation above, the step that collapses the ratio uses the conditional probability identity $P(x^{(i)}, z^{(i)};\theta)/P(z^{(i)}|x^{(i)};\theta) = P(x^{(i)};\theta)$, and the last simplification uses $\sum_{z^{(i)}}P(z^{(i)}|x^{(i)};\theta^{(j)}) = 1$.
So the difference of the two constructed functions is exactly the log of our likelihood.
Now subtract $\ln L(\theta^{(j)})$ from $\ln L(\theta^{(j+1)})$:
$$\begin{aligned} \ln L(\theta^{(j+1)}) - \ln L(\theta^{(j)}) &= (Q(\theta^{(j+1)}, \theta^{(j)}) - H(\theta^{(j+1)}, \theta^{(j)})) - (Q(\theta^{(j)}, \theta^{(j)}) - H(\theta^{(j)}, \theta^{(j)})) \\ &= (Q(\theta^{(j+1)}, \theta^{(j)})-Q(\theta^{(j)}, \theta^{(j)})) - (H(\theta^{(j+1)}, \theta^{(j)}) - H(\theta^{(j)}, \theta^{(j)})) \end{aligned}$$
For the first bracket, $Q(\theta^{(j+1)}, \theta^{(j)})-Q(\theta^{(j)}, \theta^{(j)})$: since we update the parameters by maximizing the Q-function, $Q(\theta^{(j+1)}, \theta^{(j)}) \geq Q(\theta^{(j)}, \theta^{(j)})$, so this part is necessarily non-negative:
$$Q(\theta^{(j+1)}, \theta^{(j)})-Q(\theta^{(j)}, \theta^{(j)}) \geq 0$$
For the second bracket, we have:
$$\begin{aligned} H(\theta^{(j+1)}, \theta^{(j)}) - H(\theta^{(j)}, \theta^{(j)}) &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(z^{(i)}| x^{(i)};\theta^{(j+1)}) - \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln P(z^{(i)}| x^{(i)};\theta^{(j)}) \\ &= \sum_i \sum_{z^{(i)}} P(z^{(i)}|x^{(i)};\theta^{(j)}) \ln \frac{P(z^{(i)}| x^{(i)};\theta^{(j+1)})}{P(z^{(i)}| x^{(i)};\theta^{(j)})} \\ &\leq \sum_i \ln\Big(\sum_{z^{(i)}} \frac{P(z^{(i)}| x^{(i)};\theta^{(j+1)})}{P(z^{(i)}| x^{(i)};\theta^{(j)})} P(z^{(i)}|x^{(i)};\theta^{(j)})\Big) \\ &= \sum_i \ln \Big(\sum_{z^{(i)}} P(z^{(i)}| x^{(i)};\theta^{(j+1)})\Big) \\ &= \sum_i \ln(1) \\ &= 0 \end{aligned}$$
The inequality step above uses Jensen's inequality ($E[\ln X] \leq \ln E[X]$), and the following step uses the condition $\sum_{z^{(i)}} P(z^{(i)}| x^{(i)};\theta^{(j+1)}) = 1$.
Therefore we have:
$$Q(\theta^{(j+1)}, \theta^{(j)})-Q(\theta^{(j)}, \theta^{(j)}) \geq 0$$
$$H(\theta^{(j+1)}, \theta^{(j)}) - H(\theta^{(j)}, \theta^{(j)}) \leq 0$$
Hence:
$$(Q(\theta^{(j+1)}, \theta^{(j)})-Q(\theta^{(j)}, \theta^{(j)})) - (H(\theta^{(j+1)}, \theta^{(j)}) - H(\theta^{(j)}, \theta^{(j)})) \geq 0$$
that is:
$$\ln L(\theta^{(j+1)}) - \ln L(\theta^{(j)}) \geq 0$$
So the EM algorithm converges step by step to a neighborhood of some local maximum. This completes the proof.
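This monotonicity can also be observed numerically. The check below is my own: it reuses the three-coin update formulas derived earlier, with a made-up data set and a deliberately asymmetric starting point, and asserts that the log-likelihood never decreases across iterations.

```python
import math

X = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # made-up observed three-coin results
pi, p, q = 0.4, 0.6, 0.7            # an arbitrary asymmetric starting point

def log_likelihood(pi, p, q):
    # ln L(theta) = sum_i ln P(x_i; theta) for the three-coin model.
    return sum(math.log(pi * p**x * (1 - p)**(1 - x) +
                        (1 - pi) * q**x * (1 - q)**(1 - x)) for x in X)

prev = log_likelihood(pi, p, q)
for _ in range(30):
    # E-step followed by M-step, exactly as in the three-coin derivation.
    mu = [pi * p**x * (1 - p)**(1 - x) /
          (pi * p**x * (1 - p)**(1 - x) + (1 - pi) * q**x * (1 - q)**(1 - x))
          for x in X]
    pi = sum(mu) / len(X)
    p = sum(m * x for m, x in zip(mu, X)) / sum(mu)
    q = sum((1 - m) * x for m, x in zip(mu, X)) / sum(1 - m for m in mu)
    cur = log_likelihood(pi, p, q)
    assert cur >= prev - 1e-12  # ln L never decreases across EM iterations
    prev = cur

print("log-likelihood is monotonically non-decreasing")
```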
V. Limitations of the EM Algorithm
EM is an important algorithm for handling models with latent variables, but it has its drawbacks. Most notably, it is sensitive to initialization: different initial values may lead to different results. This is determined by the shape of the likelihood function. If the likelihood is concave, the algorithm converges to a neighborhood of its single maximum, which is then the global maximum; but if the function has several local maxima, the initial value determines which one the algorithm ends up near.