What exactly is EM?
- The EM algorithm (Expectation-Maximization Algorithm) is an iterative algorithm for finding maximum likelihood or maximum a posteriori estimates of the parameters of a probabilistic model, where the model depends on unobserved latent variables.
- EM algorithm flow:
  - Initialize the distribution/model parameters.
  - Repeat the following two steps until convergence:
    - E step: estimate the expected log-likelihood under the distribution of the latent variables;
    - M step: re-estimate the distribution parameters by maximizing that expectation.
Jensen's Inequality
- If f is a convex function, then:
  $$f(\theta x+(1-\theta)y)\leq\theta f(x)+(1-\theta)f(y)$$
- If $\theta_1,\dots,\theta_k\geq 0$ and $\theta_1+\dots+\theta_k=1$, then:
  $$f(\theta_1x_1+\dots+\theta_kx_k)\leq\theta_1f(x_1)+\dots+\theta_kf(x_k)$$
  and in particular, for a random variable x:
  $$f(E(x))\leq E(f(x))$$
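The inequality is easy to check numerically. Below is a minimal sketch (the choice of function and sample count are arbitrary assumptions, not part of the original notes) that verifies $f(E(x))\leq E(f(x))$ for the convex function $f(x)=e^x$ on uniform samples:

```python
import math
import random

# Numeric check of Jensen's inequality for the convex f(x) = exp(x):
# f(E[x]) <= E[f(x)].
random.seed(0)
xs = [random.uniform(-2.0, 2.0) for _ in range(10_000)]

f = math.exp
mean_x = sum(xs) / len(xs)                   # E[x]
f_of_mean = f(mean_x)                        # f(E[x])
mean_of_f = sum(f(x) for x in xs) / len(xs)  # E[f(x)]

print(f_of_mean <= mean_of_f)  # True: Jensen holds for convex f
```

For a concave function such as log, the inequality flips ($\log E(x)\geq E(\log x)$), which is the direction used in the EM derivation below.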
Algorithm Derivation
- Given m independent training samples $\{x^{(1)},x^{(2)},\dots,x^{(m)}\}$, we seek the model parameter $\theta$ that maximizes the log-likelihood of the model distribution:
  $$\theta=\argmax_\theta \sum^m_{i=1}\log P(x^{(i)};\theta)$$
- Suppose the data involve hidden variables $\{z^{(1)},z^{(2)},\dots,z^{(k)}\}$. The log-likelihood to maximize then becomes:
  $$\begin{aligned}\theta &=\argmax_\theta\sum^m_{i=1}\log P(x^{(i)};\theta)\\ &=\argmax_\theta\sum^m_{i=1}\log\left(\sum_{z^{(i)}}P(z^{(i)})P(x^{(i)}|z^{(i)};\theta)\right)\\ &=\argmax_\theta\sum^m_{i=1}\log\left(\sum_{z^{(i)}}P(x^{(i)},z^{(i)};\theta)\right) \end{aligned}$$
- Let $Q(z;\theta)$ be a distribution over z, with $Q(z;\theta)\geq 0$ and
  $$\sum_z Q(z;\theta)=1$$
  Then:
  $$\begin{aligned}l(\theta) &=\sum_{i=1}^m\log\sum_zp(x,z;\theta)\\ &=\sum^m_{i=1}\log\sum_zQ(z;\theta^{old})\cdot\frac{p(x,z;\theta)}{Q(z;\theta^{old})}\\ &=\sum_{i=1}^m\log\left(E_Q\left(\frac{p(x,z;\theta)}{Q(z;\theta^{old})}\right)\right)\geq\sum_{i=1}^mE_Q\left(\log\left(\frac{p(x,z;\theta)}{Q(z;\theta^{old})}\right)\right)\\ & = \sum^m_{i=1}\sum_zQ(z;\theta^{old})\log\left(\frac{p(x,z;\theta)}{Q(z;\theta^{old})}\right) \end{aligned}$$
  where the inequality is Jensen's inequality applied to the concave log function.
- By the equality condition of Jensen's inequality, $l(\theta)$ attains the following bound with equality only when the ratio inside the log is a constant:
  $$l(\theta)\geq\sum^m_{i=1}\sum_zQ(z;\theta^{old})\log\left(\frac{p(x,z;\theta)}{Q(z;\theta^{old})}\right)$$
  $$\frac{p(x,z;\theta^{old})}{Q(z;\theta^{old})}=c,\quad\forall x,\forall z$$
  Solving for Q and using $\sum_{z^{i}}Q(z^i;\theta^{old})=1$:
  $$\begin{aligned}Q(z;\theta^{old})&=\frac{p(x,z;\theta^{old})}{c}=\frac{p(x,z;\theta^{old})}{c\cdot\sum_{z^{i}}Q(z^i;\theta^{old})}\\& =\frac{p(x,z;\theta^{old})}{\sum_{z^{i}}c\cdot Q(z^i;\theta^{old})}=\frac{p(x,z;\theta^{old})}{\sum_{z^{i}}p(x,z^i;\theta^{old})}\\&=\frac{p(x,z;\theta^{old})}{p(x;\theta^{old})}=p(z|x;\theta^{old})\end{aligned}$$
  That is, the Q that makes the bound tight is exactly the posterior of z given x under the current parameters.
- Substituting this Q back into the bound and maximizing over θ:
  $$\begin{aligned}\theta&=\argmax_{\theta}l(\theta)=\argmax_\theta \sum^m_{i=1}\sum_zQ(z;\theta^{old})\log\left(\frac{p(x,z;\theta)}{Q(z;\theta^{old})}\right)\\&=\argmax_\theta\sum^m_{i=1}\sum_zp(z|x;\theta^{old})\log\left(\frac{p(x,z;\theta)}{p(z|x;\theta^{old})}\right)\\&=\argmax_\theta\sum^m_{i=1}\sum_zp(z|x;\theta^{old})\log p(x,z;\theta) \end{aligned}$$
  The last step drops the denominator $p(z|x;\theta^{old})$, which does not depend on θ and therefore does not affect the argmax.
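The relationship between $l(\theta)$, the lower bound, and the posterior choice of Q can be checked on a toy model. The sketch below (the model and every number in it are hypothetical, chosen only for illustration) verifies that with $Q(z)=p(z|x;\theta^{old})$ the bound equals $l(\theta^{old})$ exactly, and stays below $l(\theta)$ at other parameter values:

```python
import math

# Toy latent-variable model (all numbers hypothetical):
# z ~ Bernoulli(0.5), x | z=k ~ Bernoulli(theta[k]); we observe x = 1.
# l(theta)     = log sum_z p(x, z; theta)
# bound(theta) = sum_z Q(z) * log(p(x, z; theta) / Q(z)),
#                with Q(z) = p(z | x; theta_old) fixed at theta_old.

def joint(x, z, theta):
    """p(x, z; theta) for the toy model."""
    px_given_z = theta[z] if x == 1 else 1.0 - theta[z]
    return 0.5 * px_given_z

def log_lik(x, theta):
    return math.log(sum(joint(x, z, theta) for z in (0, 1)))

def lower_bound(x, theta, theta_old):
    evidence = sum(joint(x, z, theta_old) for z in (0, 1))
    q = [joint(x, z, theta_old) / evidence for z in (0, 1)]  # posterior
    return sum(q[z] * math.log(joint(x, z, theta) / q[z]) for z in (0, 1))

x, theta_old = 1, (0.3, 0.8)
# The bound is tight at theta = theta_old ...
tight = abs(lower_bound(x, theta_old, theta_old) - log_lik(x, theta_old)) < 1e-12
# ... and stays at or below l(theta) elsewhere.
below = lower_bound(x, (0.5, 0.6), theta_old) <= log_lik(x, (0.5, 0.6))
print(tight, below)  # True True
```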
EM Algorithm Procedure
- Input: sample data $x=\{x^{(1)},x^{(2)},\dots,x^{(m)}\}$, the joint distribution $p(x,z;\theta)$, the conditional distribution $p(z|x;\theta)$, and a maximum number of iterations J.
- Randomly initialize the model parameter $\theta$ to an initial value $\theta^0$.
- Run the EM iterations:
  - E step: compute the expectation of the joint distribution under the conditional probability of the latent variables:
    $$Q^j=p(z|x;\theta^j)$$
    $$l(\theta)=\sum_{i=1}^m\sum_zQ^j\log p(x,z;\theta)$$
  - M step: maximize $l(\theta)$ to obtain $\theta^{j+1}$:
    $$\theta^{j+1}=\argmax_\theta l(\theta)$$
  - If $\theta^{j+1}$ has converged, stop and output the final model parameter θ; otherwise return to the E step.
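As an illustration of the flow above, here is a minimal EM loop for a mixture of two biased coins (a binomial mixture; the data, initial values, and iteration count are all arbitrary assumptions, not part of the original notes):

```python
import math
import random

# Each sample x^(i) is the number of heads in n = 10 flips of one of
# two coins; the latent z^(i) records which coin was used.
random.seed(1)
n = 10
true_p = (0.35, 0.8)

def draw_sample():
    z = random.randrange(2)  # latent coin choice, one per sample
    return sum(random.random() < true_p[z] for _ in range(n))

data = [draw_sample() for _ in range(200)]

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

pi, p = [0.5, 0.5], [0.4, 0.6]  # theta^0: mixing weights and coin biases
for _ in range(50):             # J = 50 iterations
    # E step: Q^j(z) = p(z | x; theta^j) for every sample (responsibilities)
    resp = []
    for x in data:
        w = [pi[z] * binom_pmf(x, n, p[z]) for z in (0, 1)]
        s = sum(w)
        resp.append([wz / s for wz in w])
    # M step: theta^{j+1} maximizes the expected complete log-likelihood
    for z in (0, 1):
        nz = sum(r[z] for r in resp)
        pi[z] = nz / len(data)
        p[z] = sum(r[z] * x for r, x in zip(resp, data)) / (nz * n)

print(sorted(round(pz, 2) for pz in p))  # biases near the true 0.35 and 0.8
```

The E step never observes z directly; it only weights each sample by how likely each coin is to have produced it, and the M step re-estimates the parameters from those weights.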
Proof of EM Convergence
- Sketch: with $Q=p(z|x;\theta^j)$ the lower bound is tight at $\theta^j$, and the M step can only increase the bound, so $l(\theta^{j+1})\geq l(\theta^j)$. The log-likelihood is therefore monotonically non-decreasing across iterations and, when bounded above, converges.
GMM
- A GMM (Gaussian Mixture Model) is a model formed by a linear superposition of several Gaussian distributions; each Gaussian is called a component. A GMM describes a distribution inherent in the data itself.
- GMMs are commonly used for clustering, where the number of components can be taken as the number of clusters.
- Assuming the GMM is a linear superposition of K Gaussian distributions, its probability density function is:
  $$p(x)=\sum^K_{k=1}p(k)\,p(x|k)=\sum^K_{k=1}\pi_k\,p(x;\mu_k,\Sigma_k)$$
  The log-likelihood function is:
  $$l(\pi,\mu,\Sigma)=\sum^N_{i=1}\log\left(\sum_{k=1}^K\pi_k\,p(x^{(i)};\mu_k,\Sigma_k)\right)$$
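Maximizing this log-likelihood is exactly what EM does for a GMM: the E step computes the responsibilities $p(k|x^{(i)})$, and the M step re-estimates $\pi_k,\mu_k,\Sigma_k$ from them. A minimal 1-D sketch (synthetic data; all values hypothetical):

```python
import math
import random

# Two well-separated 1-D Gaussian clusters as synthetic data.
random.seed(2)
data = ([random.gauss(-2.0, 0.5) for _ in range(150)] +
        [random.gauss(3.0, 1.0) for _ in range(150)])

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

K = 2
pi = [0.5, 0.5]
mu = [-1.0, 1.0]
var = [1.0, 1.0]

for _ in range(100):
    # E step: responsibilities p(k | x^(i)) for each sample
    resp = []
    for x in data:
        w = [pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(K)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M step: re-estimate pi_k, mu_k, Sigma_k from the responsibilities
    for k in range(K):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk

print(sorted(round(m, 1) for m in mu))  # means near the true -2.0 and 3.0
```

In higher dimensions the scalar variance becomes the covariance matrix $\Sigma_k$, but the E/M structure is identical.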