1. Overview
Take one-dimensional data as an example. The figure below shows that superimposing several weighted single-Gaussian models yields a Gaussian mixture model, which clearly has stronger fitting capacity than any single Gaussian:
As a two-dimensional example, the figure below shows two dense regions of data, so the corresponding probability distribution has two peaks. A Gaussian mixture model can be viewed as a generative model: to generate a data point, first pick one of the Gaussian distributions, then draw a sample from the chosen Gaussian:
Combining the two descriptions above, we can characterize the Gaussian mixture model from two perspectives:
- Geometric perspective: weighted average
The Gaussian mixture model can be seen as a weighted average of several Gaussian distributions:
$$p(x)=\sum_{k=1}^{K}\alpha_{k}N(\mu_{k},\Sigma_{k}),\quad\sum_{k=1}^{K}\alpha_{k}=1$$
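To make the weighted-average view concrete, here is a minimal one-dimensional sketch. The helper names `normal_pdf` and `gmm_pdf` and all component parameters are made up for illustration:

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    """Density of a univariate Gaussian N(mu, sigma2)."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def gmm_pdf(x, alphas, mus, sigma2s):
    """p(x) = sum_k alpha_k * N(x | mu_k, sigma2_k) -- the weighted average."""
    return sum(a * normal_pdf(x, m, s2) for a, m, s2 in zip(alphas, mus, sigma2s))

# Illustrative two-component mixture; the weights must sum to 1.
alphas, mus, sigma2s = [0.3, 0.7], [-2.0, 3.0], [1.0, 2.0]

xs = np.linspace(-10.0, 10.0, 2001)
density = gmm_pdf(xs, alphas, mus, sigma2s)
# The mixture is still a valid density: it integrates to (approximately) 1.
print(np.sum(density) * (xs[1] - xs[0]))
```

Because each component integrates to 1 and the weights sum to 1, the mixture itself is automatically a valid density.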
- Mixture-model (generative-model) perspective
The Gaussian mixture model can be seen as a generative model with a latent variable:
$x$: observed variable
$z$: latent variable
The latent variable $z$ indicates which Gaussian the sample $x$ belongs to; its distribution is given in the table below:
| $z$ | $C_{1}$ | $C_{2}$ | $\cdots$ | $C_{K}$ |
|---|---|---|---|---|
| $p$ | $p_{1}$ | $p_{2}$ | $\cdots$ | $p_{K}$ |
The probability $p_{k}$ here is exactly the weight in the geometric perspective's weighted average, so the two views describe the same model.
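The two-step generative story, first draw $z$ from the table above and then draw $x$ from the chosen Gaussian, can be sketched as follows. The helper name `sample_gmm` and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, ps, mus, sigmas):
    """Draw n samples: first z ~ Categorical(p), then x ~ N(mu_z, sigma_z^2)."""
    z = rng.choice(len(ps), size=n, p=ps)                 # pick a component
    x = rng.normal(np.take(mus, z), np.take(sigmas, z))   # sample from it
    return x, z

# Illustrative parameters for two components.
ps, mus, sigmas = [0.4, 0.6], [-3.0, 2.0], [1.0, 1.5]
x, z = sample_gmm(10000, ps, mus, sigmas)
# The empirical fraction of draws from component 0 is close to p_1 = 0.4.
print(np.mean(z == 0))
```

Marginalizing the latent `z` out of this two-step process is precisely what produces the weighted-average density of the geometric view.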
We can draw the probabilistic graphical model of the Gaussian mixture model:
The solid dots denote the model parameters, and the $N$ in the bottom-right corner of the plate denotes the number of samples.
2. Attempting to Solve by Maximum Likelihood Estimation
$X$: observed data, $X=(x_{1},x_{2},\cdots,x_{N})$
$(X,Z)$: complete data
$\theta$: parameters, $\theta=\{p_{1},\cdots,p_{K},\mu_{1},\cdots,\mu_{K},\Sigma_{1},\cdots,\Sigma_{K}\}$, with $\sum_{k=1}^{K}p_{k}=1$
These are our data and the parameters to be estimated. Next we write out the probability $p(x)$:
$$p(x)=\sum_{z}p(x,z)=\sum_{k=1}^{K}p(x,z=C_{k})=\sum_{k=1}^{K}p(z=C_{k})\cdot p(x|z=C_{k})=\sum_{k=1}^{K}p_{k}\cdot N(x|\mu_{k},\Sigma_{k})$$
Now let us try maximum likelihood estimation on this parameter-estimation problem. The conclusion up front: maximum likelihood cannot solve a parameter-estimation problem with latent variables; in other words, it yields no closed-form solution. Here is why:
$$\begin{aligned}\hat{\theta}_{MLE}&=\underset{\theta}{\arg\max}\;\log p(X)=\underset{\theta}{\arg\max}\;\log\prod_{i=1}^{N}p(x_{i})\\&=\underset{\theta}{\arg\max}\sum_{i=1}^{N}\log p(x_{i})=\underset{\theta}{\arg\max}\sum_{i=1}^{N}{\color{Red}{\log\sum_{k=1}^{K}}}p_{k}\cdot N(x_{i}|\mu_{k},\Sigma_{k})\end{aligned}$$
The reason maximum likelihood yields no closed-form solution is the summation that appears inside the $\log$. We could of course fall back on gradient descent, but for models with latent variables the EM algorithm is the better fit.
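The troublesome $\log\sum$ structure is easy to inspect numerically. A minimal sketch, using a log-sum-exp trick for stability; the helper name `log_likelihood`, the data, and the parameter values are all illustrative assumptions:

```python
import numpy as np

def log_likelihood(x, ps, mus, sigma2s):
    """sum_i log sum_k p_k N(x_i | mu_k, sigma2_k).
    The sum sits inside the log, which is what blocks a closed-form MLE."""
    x = np.asarray(x)[:, None]                 # shape (N, 1), broadcasts over K
    log_comp = (np.log(ps)
                - 0.5 * np.log(2 * np.pi * np.asarray(sigma2s))
                - (x - np.asarray(mus)) ** 2 / (2 * np.asarray(sigma2s)))
    m = log_comp.max(axis=1, keepdims=True)    # log-sum-exp, for numerical stability
    return float(np.sum(m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))))

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 50), rng.normal(3, 1, 50)])
# Parameters matching the data score higher than deliberately wrong ones.
print(log_likelihood(data, [0.5, 0.5], [-2.0, 3.0], [1.0, 1.0]))
print(log_likelihood(data, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0]))
```

The function can be evaluated and even differentiated numerically, which is why gradient descent remains an option; it is only the closed-form solution that is lost.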
3. Solving with the EM Algorithm
Since the EM algorithm requires the joint probability $p(x,z)$ and the posterior $p(z|x)$, we first write out these two probabilities:
$$p(x,z)=p(z)p(x|z)=p_{z}\cdot N(x|\mu_{z},\Sigma_{z})$$
$$p(z|x)=\frac{p(x,z)}{p(x)}=\frac{p_{z}\cdot N(x|\mu_{z},\Sigma_{z})}{\sum_{k=1}^{K}p_{k}\cdot N(x|\mu_{k},\Sigma_{k})}$$
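The posterior $p(z|x)$ is exactly the "responsibility" each component takes for a data point. A minimal sketch of the formula above; the helper name `responsibilities` and the parameter values are illustrative assumptions:

```python
import numpy as np

def responsibilities(x, ps, mus, sigma2s):
    """p(z = C_k | x_i) = p_k N(x_i|mu_k, sigma2_k) / sum_j p_j N(x_i|mu_j, sigma2_j)."""
    x = np.asarray(x)[:, None]                                        # (N, 1)
    pdf = np.exp(-(x - mus) ** 2 / (2 * sigma2s)) / np.sqrt(2 * np.pi * sigma2s)
    joint = ps * pdf                                 # (N, K): p(x_i, z = C_k)
    return joint / joint.sum(axis=1, keepdims=True)  # normalize: rows sum to 1

ps = np.array([0.5, 0.5])
mus = np.array([-2.0, 3.0])
sigma2s = np.array([1.0, 1.0])
gamma = responsibilities([-2.0, 3.0, 0.5], ps, mus, sigma2s)
print(gamma.sum(axis=1))   # each row is a distribution over the K components
```

A point sitting exactly on a component mean is assigned almost entirely to that component, while a point midway between the means splits its responsibility.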
- E step
$$\begin{aligned} Q(\theta,\theta^{t})&=\sum_{Z}\log p(X,Z|\theta)\cdot p(Z|X,\theta^{t})\\ &=\sum_{z_{1},\cdots,z_{N}}\log\prod_{i=1}^{N}p(x_{i},z_{i}|\theta)\cdot\prod_{i=1}^{N}p(z_{i}|x_{i},\theta^{t})\\ &=\sum_{z_{1},\cdots,z_{N}}\sum_{i=1}^{N}\log p(x_{i},z_{i}|\theta)\cdot\prod_{i=1}^{N}p(z_{i}|x_{i},\theta^{t})\\ &=\sum_{z_{1},\cdots,z_{N}}\left[\log p(x_{1},z_{1}|\theta)+\log p(x_{2},z_{2}|\theta)+\cdots+\log p(x_{N},z_{N}|\theta)\right]\cdot\prod_{i=1}^{N}p(z_{i}|x_{i},\theta^{t}) \end{aligned}$$
Each term of this expansion can be simplified:
$$\begin{aligned} &\sum_{z_{1},\cdots,z_{N}}\log p(x_{1},z_{1}|\theta)\prod_{i=1}^{N}p(z_{i}|x_{i},\theta^{t})\\ =&\sum_{z_{1},\cdots,z_{N}}\log p(x_{1},z_{1}|\theta)\cdot p(z_{1}|x_{1},\theta^{t})\prod_{i=2}^{N}p(z_{i}|x_{i},\theta^{t})\\ =&\sum_{z_{1}}\log p(x_{1},z_{1}|\theta)\cdot p(z_{1}|x_{1},\theta^{t})\sum_{z_{2},\cdots,z_{N}}\prod_{i=2}^{N}p(z_{i}|x_{i},\theta^{t})\\ =&\sum_{z_{1}}\log p(x_{1},z_{1}|\theta)\cdot p(z_{1}|x_{1},\theta^{t})\underbrace{\sum_{z_{2}}p(z_{2}|x_{2},\theta^{t})}_{=1}\underbrace{\sum_{z_{3}}p(z_{3}|x_{3},\theta^{t})}_{=1}\cdots\underbrace{\sum_{z_{N}}p(z_{N}|x_{N},\theta^{t})}_{=1}\\ =&\sum_{z_{1}}\log p(x_{1},z_{1}|\theta)\cdot p(z_{1}|x_{1},\theta^{t}) \end{aligned}$$
In the same way, for every $i$ we obtain
$$\sum_{z_{1},\cdots,z_{N}}\log p(x_{i},z_{i}|\theta)\prod_{j=1}^{N}p(z_{j}|x_{j},\theta^{t})=\sum_{z_{i}}\log p(x_{i},z_{i}|\theta)\,p(z_{i}|x_{i},\theta^{t})$$
Continuing to simplify $Q(\theta,\theta^{t})$, we get:
$$\begin{aligned} Q(\theta,\theta^{t})&=\sum_{z_{1}}\log p(x_{1},z_{1}|\theta)\cdot p(z_{1}|x_{1},\theta^{t})+\cdots+\sum_{z_{N}}\log p(x_{N},z_{N}|\theta)\cdot p(z_{N}|x_{N},\theta^{t})\\ &=\sum_{i=1}^{N}\sum_{z_{i}}\log p(x_{i},z_{i}|\theta)\cdot p(z_{i}|x_{i},\theta^{t})\\ &=\sum_{i=1}^{N}\sum_{z_{i}}\log\left[p_{z_{i}}\cdot N(x_{i}|\mu_{z_{i}},\Sigma_{z_{i}})\right]\cdot\frac{p_{z_{i}}^{t}\cdot N(x_{i}|\mu_{z_{i}}^{t},\Sigma_{z_{i}}^{t})}{\sum_{k=1}^{K}p_{k}^{t}\cdot N(x_{i}|\mu_{k}^{t},\Sigma_{k}^{t})}\\ &=\sum_{i=1}^{N}\sum_{z_{i}}\log\left[p_{z_{i}}\cdot N(x_{i}|\mu_{z_{i}},\Sigma_{z_{i}})\right]\cdot p(z_{i}|x_{i},\theta^{t})\\ &=\sum_{k=1}^{K}\sum_{i=1}^{N}\log\left[p_{k}\cdot N(x_{i}|\mu_{k},\Sigma_{k})\right]\cdot p(z_{i}=C_{k}|x_{i},\theta^{t})\\ &=\sum_{k=1}^{K}\sum_{i=1}^{N}\left[\log p_{k}+\log N(x_{i}|\mu_{k},\Sigma_{k})\right]\cdot p(z_{i}=C_{k}|x_{i},\theta^{t}) \end{aligned}$$
(The ratio $\frac{p_{z_{i}}^{t}N(x_{i}|\mu_{z_{i}}^{t},\Sigma_{z_{i}}^{t})}{\sum_{k=1}^{K}p_{k}^{t}N(x_{i}|\mu_{k}^{t},\Sigma_{k}^{t})}$ does not depend on $\theta$, so we abbreviate it as $p(z_{i}|x_{i},\theta^{t})$. In the second-to-last step, summing over the values of $z_{i}$ means enumerating $z_{i}=C_{k}$ for $k=1,\cdots,K$.)
- M step
The EM iteration rule is:
$$\theta^{t+1}=\underset{\theta}{\arg\max}\;Q(\theta,\theta^{t})$$
Taking the solution of $p^{t+1}=(p_{1}^{t+1},p_{2}^{t+1},\cdots,p_{K}^{t+1})^{T}$ as an example, let us see how one iteration is solved. Only the $\log p_{k}$ part of $Q$ involves $p_{k}$ (the $\log N(x_{i}|\mu_{k},\Sigma_{k})$ term does not and can be dropped), so the update is:
$$p_{k}^{t+1}=\underset{p_{k}}{\arg\max}\sum_{k=1}^{K}\sum_{i=1}^{N}\log p_{k}\cdot p(z_{i}=C_{k}|x_{i},\theta^{t}),\quad s.t.\;\sum_{k=1}^{K}p_{k}=1$$
This is equivalent to the following constrained optimization problem:
$$\left\{\begin{matrix}\underset{p}{\min}\;-\sum_{k=1}^{K}\sum_{i=1}^{N}\log p_{k}\cdot p(z_{i}=C_{k}|x_{i},\theta^{t})\\ s.t.\;\sum_{k=1}^{K}p_{k}=1\end{matrix}\right.$$
We solve it with the method of Lagrange multipliers:
$$\begin{aligned} L(p,\lambda)&=-\sum_{k=1}^{K}\sum_{i=1}^{N}\log p_{k}\cdot p(z_{i}=C_{k}|x_{i},\theta^{t})+\lambda\left(\sum_{k=1}^{K}p_{k}-1\right)\\ \frac{\partial L}{\partial p_{k}}&=-\sum_{i=1}^{N}\frac{1}{p_{k}}\cdot p(z_{i}=C_{k}|x_{i},\theta^{t})+\lambda\overset{\triangle}{=}0\\ &\Rightarrow-\sum_{i=1}^{N}p(z_{i}=C_{k}|x_{i},\theta^{t})+p_{k}^{t+1}\lambda=0\\ &\overset{\text{sum over }k=1,\cdots,K}{\Longrightarrow}-\underbrace{\sum_{i=1}^{N}\underbrace{\sum_{k=1}^{K}p(z_{i}=C_{k}|x_{i},\theta^{t})}_{=1}}_{=N}+\underbrace{\sum_{k=1}^{K}p_{k}^{t+1}}_{=1}\lambda=0\\ &\Rightarrow-N+\lambda=0\Rightarrow\lambda=N \end{aligned}$$
Substituting $\lambda=N$ back into $-\sum_{i=1}^{N}p(z_{i}=C_{k}|x_{i},\theta^{t})+p_{k}^{t+1}\lambda=0$ gives
$$p_{k}^{t+1}=\frac{\sum_{i=1}^{N}p(z_{i}=C_{k}|x_{i},\theta^{t})}{N}$$
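A quick numerical check of the derived update: averaging the posterior probabilities over the samples automatically satisfies the constraint $\sum_{k}p_{k}^{t+1}=1$. The matrix `gamma` below is a made-up stand-in for $p(z_{i}=C_{k}|x_{i},\theta^{t})$:

```python
import numpy as np

# gamma[i, k] stands in for p(z_i = C_k | x_i, theta^t); any row-stochastic
# matrix will do for checking the update p_k^{t+1} = (1/N) sum_i gamma[i, k].
rng = np.random.default_rng(2)
gamma = rng.random((100, 3))
gamma /= gamma.sum(axis=1, keepdims=True)   # each row sums to 1, like a posterior

p_next = gamma.mean(axis=0)                 # the closed-form update derived above
print(p_next.sum())                         # equals 1, so the constraint holds
```

This is the intuitive reading of the result: the new weight of component $k$ is the average responsibility it takes across the data set.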
We have demonstrated the M step using $p^{t+1}$ as an example; the other parameters are likewise obtained by maximizing the Q function, and once a full set of parameters is found the iterations continue until convergence.
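Putting the E step and M step together, a one-dimensional EM sketch. The $p_{k}$ update is the one derived above; the $\mu_{k}$ and $\Sigma_{k}$ updates are the standard weighted-average formulas obtained by maximizing $Q$ in the same way, which this note does not derive. The function name `em_gmm_1d`, the quantile initialization, and the synthetic data are all illustrative assumptions:

```python
import numpy as np

def em_gmm_1d(x, K, iters=50):
    """Fit a 1-D Gaussian mixture by EM. The p_k update is the one derived
    above; the mu_k and sigma2_k updates are the standard weighted averages
    obtained by maximizing Q in the same way."""
    N = len(x)
    ps = np.full(K, 1.0 / K)
    mus = np.quantile(x, np.linspace(0.1, 0.9, K))  # simple deterministic init
    sigma2s = np.full(K, np.var(x))
    for _ in range(iters):
        # E step: responsibilities p(z_i = C_k | x_i, theta^t)
        pdf = np.exp(-(x[:, None] - mus) ** 2 / (2 * sigma2s)) \
              / np.sqrt(2 * np.pi * sigma2s)
        gamma = ps * pdf
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M step: closed-form maximizers of Q(theta, theta^t)
        Nk = gamma.sum(axis=0)
        ps = Nk / N
        mus = (gamma * x[:, None]).sum(axis=0) / Nk
        sigma2s = (gamma * (x[:, None] - mus) ** 2).sum(axis=0) / Nk
    return ps, mus, sigma2s

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(2, 0.5, 700)])
ps, mus, sigma2s = em_gmm_1d(x, K=2)
print(np.sort(mus))   # recovered component means, close to -3 and 2
```

Note that each iteration only requires evaluating Gaussian densities and weighted averages; there is no inner optimization loop, which is exactly what the closed-form M-step updates buy us over plain gradient descent.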