The EM Algorithm (Week 15 Report, Dec. 6 – Dec. 12)

This post covers the theoretical foundations of the EM algorithm for estimating the parameters of Gaussian mixture models: maximum likelihood estimation in the single-Gaussian case, the E-step and M-step for solving the mixture case, and a proof that EM converges. A worked mixture-of-Gaussians example is given, with an extension to the GEM algorithm.

I. Motivating Examples

1. Example 1¹

Suppose four experimental outcomes occur with probabilities $\frac{1}{2}-\frac{\theta}{4},\ \frac{1}{4}-\frac{\theta}{4},\ \frac{1}{4}+\frac{\theta}{4},\ \frac{\theta}{4}$ and are observed $y_1, y_2, y_3, y_4$ times, respectively. Find $\hat\theta$.

  • Maximum likelihood estimation:
    $$\begin{aligned}L(\theta) &= \left(\tfrac{1}{2}-\tfrac{\theta}{4}\right)^{y_1}\left(\tfrac{1}{4}-\tfrac{\theta}{4}\right)^{y_2}\left(\tfrac{1}{4}+\tfrac{\theta}{4}\right)^{y_3}\left(\tfrac{\theta}{4}\right)^{y_4}\\ \ln L(\theta) &= y_1\ln\left(\tfrac{1}{2}-\tfrac{\theta}{4}\right)+y_2\ln\left(\tfrac{1}{4}-\tfrac{\theta}{4}\right)+y_3\ln\left(\tfrac{1}{4}+\tfrac{\theta}{4}\right)+y_4\ln\tfrac{\theta}{4}\\ \frac{\mathrm{d}\ln L(\theta)}{\mathrm{d}\theta} &= -\frac{y_1}{2-\theta}-\frac{y_2}{1-\theta}+\frac{y_3}{1+\theta}+\frac{y_4}{\theta}=0\end{aligned}$$
    Setting the derivative to zero yields a cubic equation in $\theta$, which is tedious to solve directly. A numerical root-finder would work, but the approach becomes unwieldy once there are more than four outcomes, so for a more general method we turn to the EM algorithm.
  • The EM algorithm
    1) Split two of the outcomes into unions of sub-events:
    Split the outcome with probability $\frac{1}{2}-\frac{\theta}{4}$ into two sub-events with probabilities $\frac{1}{4}-\frac{\theta}{4}$ and $\frac{1}{4}$, occurring $z_1$ and $y_1-z_1$ times, respectively.
    Split the outcome with probability $\frac{1}{4}+\frac{\theta}{4}$ into two sub-events with probabilities $\frac{\theta}{4}$ and $\frac{1}{4}$, occurring $z_2$ and $y_3-z_2$ times, respectively.
    2) By maximum likelihood estimation on the augmented data:
    $$\begin{aligned}L(\theta) &= \left(\tfrac{1}{4}\right)^{y_1-z_1}\left(\tfrac{1}{4}-\tfrac{\theta}{4}\right)^{y_2+z_1}\left(\tfrac{1}{4}\right)^{y_3-z_2}\left(\tfrac{\theta}{4}\right)^{y_4+z_2}\\ \ln L(\theta) &= (y_1-z_1)\ln\tfrac{1}{4}+(y_2+z_1)\ln\left(\tfrac{1}{4}-\tfrac{\theta}{4}\right)+(y_3-z_2)\ln\tfrac{1}{4}+(y_4+z_2)\ln\tfrac{\theta}{4}\\ \frac{\mathrm{d}\ln L(\theta)}{\mathrm{d}\theta} &= -\frac{y_2+z_1}{1-\theta}+\frac{y_4+z_2}{\theta}=0\end{aligned}$$
    $$\hat\theta=\frac{z_2+y_4}{z_1+z_2+y_2+y_4}\tag{1}$$
    3) $z_1$ and $z_2$ are unobserved, but each count follows a binomial distribution: $z_1\sim B\!\left(y_1,\ \frac{1/4-\theta/4}{1/2-\theta/4}=\frac{1-\theta}{2-\theta}\right)$, and likewise $z_2\sim B\!\left(y_3,\ \frac{\theta}{1+\theta}\right)$.
    4) The EM algorithm (a numerical sketch follows after these steps):
    • Step 1 (E-step): take expectations, in order to eliminate the latent variables $z_1, z_2$:
      $$E(z_1)=y_1\frac{1-\theta}{2-\theta},\qquad E(z_2)=y_3\frac{\theta}{1+\theta}$$
      Take expectations on both sides of (1) and substitute these in.
    • Step 2 (M-step): maximize.
      Substitution gives:
      $$\hat\theta=\frac{y_3\frac{\theta}{1+\theta}+y_4}{y_3\frac{\theta}{1+\theta}+y_1\frac{1-\theta}{2-\theta}+y_2+y_4}$$
      which is solved by fixed-point iteration:
      $$\theta^{(i+1)}=\frac{y_3\frac{\theta^{(i)}}{1+\theta^{(i)}}+y_4}{y_3\frac{\theta^{(i)}}{1+\theta^{(i)}}+y_1\frac{1-\theta^{(i)}}{2-\theta^{(i)}}+y_2+y_4}$$
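The iteration above is easy to run numerically. A minimal Python sketch (the observed counts and starting value below are hypothetical, chosen only for illustration):

```python
def em_example1(y1, y2, y3, y4, theta=0.5, tol=1e-10, max_iter=1000):
    """Fixed-point EM iteration for Example 1."""
    for _ in range(max_iter):
        e_z1 = y1 * (1 - theta) / (2 - theta)  # E-step: E[z_1]
        e_z2 = y3 * theta / (1 + theta)        # E-step: E[z_2]
        # M-step: plug the expectations into (1)
        new_theta = (e_z2 + y4) / (e_z1 + e_z2 + y2 + y4)
        if abs(new_theta - theta) < tol:
            break
        theta = new_theta
    return theta

# hypothetical counts, for illustration only
print(em_example1(y1=125, y2=18, y3=20, y4=34))
```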

2. Example 2²

Three coins A, B, C land heads with probabilities $\pi, p, q$, respectively. Run the following experiment: first toss coin A; if it lands heads, pick coin B, otherwise pick coin C; then toss the chosen coin and record 1 for heads, 0 for tails. Repeat the experiment independently $n$ times. If we can only observe the final outcome of each trial, not the intermediate toss of A, how do we estimate $\pi, p, q$?
$$\begin{aligned}P(y\mid\theta)&=\sum_z P(y,z\mid\theta)=\sum_z P(z\mid\theta)P(y\mid z,\theta)\\&=\pi p^y(1-p)^{1-y}+(1-\pi)q^y(1-q)^{1-y}\end{aligned}$$
Here $y$ is the outcome of one trial, $z$ is the hidden variable (the unobserved result of tossing A), and $\theta=(\pi,p,q)$.
Writing the observed data as $Y=(y_1,y_2,\dots,y_n)^\mathrm{T}$ and the hidden variables as $Z=(z_1,z_2,\dots,z_n)^\mathrm{T}$:
$$P(Y\mid\theta)=\prod_{j=1}^n\left[\pi p^{y_j}(1-p)^{1-y_j}+(1-\pi)q^{y_j}(1-q)^{1-y_j}\right]$$
By maximum likelihood estimation:
$$\hat\theta=\arg\max_\theta \log P(Y\mid\theta)$$
This problem has no closed-form solution, so it is solved numerically by iteration, i.e., by the EM algorithm.
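For a sense of what that iteration looks like here, below is a minimal sketch of EM for the three-coin model: the E-step computes the posterior probability that coin B produced each observation, and the M-step re-estimates $(\pi,p,q)$ in closed form. The update formulas follow from the same $Q$-function maximization framework developed in the sections below; the observations and initial values are hypothetical.

```python
import numpy as np

def em_three_coins(y, pi, p, q, n_iter=100):
    """EM for the three-coin model; y is a 0/1 array of observed outcomes."""
    y = np.asarray(y, dtype=float)
    for _ in range(n_iter):
        # E-step: posterior probability that coin B (i.e., A landed heads)
        # produced each observation y_j
        b = pi * p**y * (1 - p)**(1 - y)
        c = (1 - pi) * q**y * (1 - q)**(1 - y)
        mu = b / (b + c)
        # M-step: closed-form maximizers of the Q function
        pi = mu.mean()
        p = (mu * y).sum() / mu.sum()
        q = ((1 - mu) * y).sum() / (1 - mu).sum()
    return pi, p, q

# hypothetical observations and starting point; note that the fixed point
# EM reaches depends on the initialization
obs = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
print(em_three_coins(obs, pi=0.5, p=0.5, q=0.5))  # -> (0.5, 0.6, 0.6)
```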
The following sections develop the theory behind the EM algorithm.

II. Convergence of the EM Algorithm³

1. A Single Gaussian

Let the population be $X\sim N(\mu,\sigma^2)$ with samples $x_i\overset{\mathrm{iid}}{\sim}X,\ i=1,2,\dots,n$, and let $\theta=(\mu,\sigma^2)$:
$$\hat\theta=\arg\max_\theta\sum_{i=1}^n\log N(x_i\mid\mu,\sigma^2)\tag{2}$$
When the population follows a single Gaussian, maximum likelihood estimation readily gives $\hat\mu=\bar x,\ \hat{\sigma^2}=S^2$.
Here $L(\theta\mid x_1,x_2,\dots,x_n)=\sum_{i=1}^n\log N(x_i\mid\mu,\sigma^2)$ is called the log-likelihood function.
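A quick numerical check of this fact (the true parameters below are arbitrary; $S^2$ here is the MLE variance, which divides by $n$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)  # arbitrary true parameters

mu_hat = x.mean()                        # mu-hat = sample mean x-bar
sigma2_hat = ((x - mu_hat) ** 2).mean()  # sigma^2-hat = S^2 (divides by n)
print(mu_hat, sigma2_hat)                # close to 2.0 and 1.5**2 = 2.25
```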

2. Gaussian Mixture Models

Now suppose the population follows a mixture of $k$ Gaussians, with samples $x_i,\ i=1,2,\dots,n$ and $\theta=(\mu_1,\mu_2,\dots,\mu_k,\sigma^2_1,\sigma^2_2,\dots,\sigma^2_k,\lambda_1,\lambda_2,\dots,\lambda_{k-1})$. Each $x_i$ is then drawn from a superposition of the $k$ Gaussians:
$$P(x_i\mid\theta)=\sum_{j=1}^k\lambda_j N(x_i\mid\mu_j,\sigma^2_j),\qquad\text{s.t.}\ \sum_{j=1}^k\lambda_j=1$$
Maximum likelihood estimation (i.e., replacing $N(x_i\mid\mu,\sigma^2)$ in (2) with $P(x_i\mid\theta)$) gives:
$$\hat\theta=\arg\max_\theta\sum_{i=1}^n\log\sum_{j=1}^k\lambda_j N(x_i\mid\mu_j,\sigma^2_j)$$
Setting the derivative with respect to each parameter to zero is intractable here, so $\hat\theta$ is found by iteration (the EM algorithm), with update rule
$$\theta^{(g+1)}=\arg\max_\theta\int_Z\log P(X,Z\mid\theta)\,P(Z\mid X,\theta^{(g)})\,\mathrm{d}Z\tag{4}$$
which guarantees $\log P(X\mid\theta^{(g+1)})\ge\log P(X\mid\theta^{(g)})$.
Here $Z$ is the set of hidden variables and $X$ the data set.

3. Proof of Convergence

We must show that $\log P(X\mid\theta^{(g+1)})\ge\log P(X\mid\theta^{(g)})$.
Proof:
Start from
$$\log P(X\mid\theta)=\log P(X,Z\mid\theta)-\log P(Z\mid X,\theta)\tag{3}$$

This holds because $P(AB)=P(A)P(B\mid A)$ implies $\log P(A)=\log P(AB)-\log P(B\mid A)$; conditioning every term on $\theta$ gives (3).

Take the expectation of both sides of (3) with respect to the distribution $P(Z\mid X,\theta^{(g)})$:
$$\int_Z\log P(X\mid\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z=\int_Z\log P(X,Z\mid\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z-\int_Z\log P(Z\mid X,\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z$$
The left-hand side equals $\log P(X\mid\theta)$, since the integrand does not depend on $Z$ and the density integrates to 1. Define
$$Q(\theta,\theta^{(g)})=\int_Z\log P(X,Z\mid\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z$$
$$H(\theta,\theta^{(g)})=\int_Z\log P(Z\mid X,\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z$$
so the right-hand side equals $Q(\theta,\theta^{(g)})-H(\theta,\theta^{(g)})$.
Claim: $\forall\theta$, $H(\theta^{(g)},\theta^{(g)})\ge H(\theta,\theta^{(g)})$; in particular, $H(\theta^{(g)},\theta^{(g)})\ge H(\theta^{(g+1)},\theta^{(g)})$. Moreover, since $\theta^{(g+1)}$ maximizes $Q(\cdot,\theta^{(g)})$ in (4), $Q(\theta^{(g)},\theta^{(g)})\le Q(\theta^{(g+1)},\theta^{(g)})$. Therefore:
$$Q(\theta^{(g)},\theta^{(g)})-H(\theta^{(g)},\theta^{(g)})\le Q(\theta^{(g+1)},\theta^{(g)})-H(\theta^{(g+1)},\theta^{(g)})$$
and hence $\log P(X\mid\theta^{(g+1)})\ge\log P(X\mid\theta^{(g)})$.
It remains to prove the claim: $\forall\theta$, $H(\theta^{(g)},\theta^{(g)})\ge H(\theta,\theta^{(g)})$.
Proof:
$$\begin{aligned}H(\theta^{(g)},\theta^{(g)})-H(\theta,\theta^{(g)})&=\int_Z\log P(Z\mid X,\theta^{(g)})P(Z\mid X,\theta^{(g)})\mathrm{d}Z-\int_Z\log P(Z\mid X,\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z\\&=\int_Z-\log\frac{P(Z\mid X,\theta)}{P(Z\mid X,\theta^{(g)})}P(Z\mid X,\theta^{(g)})\mathrm{d}Z\\&\overset{*}{\ge}0\end{aligned}$$

Where step (*) comes from:
$f(x)=-\log x$ is a convex function, i.e., for all $x,y$ in its domain and $\lambda\in[0,1]$, $\lambda f(x)+(1-\lambda)f(y)\ge f(\lambda x+(1-\lambda)y)$:
the chord between any two points lies above the graph.
Viewing both sides as expectations, this is Jensen's inequality: the expectation of a convex function is at least the function of the expectation.
Hence the expectation of the function $-\log\frac{P(Z\mid X,\theta)}{P(Z\mid X,\theta^{(g)})}$ is bounded below by the function of its expectation:
$$\begin{aligned}\int_Z-\log\frac{P(Z\mid X,\theta)}{P(Z\mid X,\theta^{(g)})}P(Z\mid X,\theta^{(g)})\mathrm{d}Z&\ge-\log\left\{\int_Z\frac{P(Z\mid X,\theta)}{P(Z\mid X,\theta^{(g)})}P(Z\mid X,\theta^{(g)})\mathrm{d}Z\right\}\\&=-\log 1\\&=0\end{aligned}$$
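The quantity in (*) is exactly the KL divergence $\mathrm{KL}\big(P(Z\mid X,\theta^{(g)})\,\|\,P(Z\mid X,\theta)\big)$, which is non-negative. A small numeric sanity check, with arbitrary discrete distributions standing in for the two posteriors:

```python
import numpy as np

rng = np.random.default_rng(1)
# arbitrary discrete distributions over the same support, standing in for
# P(Z|X, theta^g) and P(Z|X, theta)
pg = rng.random(5); pg /= pg.sum()
pt = rng.random(5); pt /= pt.sum()

# H(theta^g, theta^g) - H(theta, theta^g) = sum_z pg(z) * (-log(pt(z)/pg(z)))
gap = np.sum(pg * -np.log(pt / pg))
print(gap)  # always >= 0, and 0 exactly when pt == pg
```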

III. Steps of the EM Algorithm

From
$$\theta^{(g+1)}=\arg\max_\theta\int_Z\log P(X,Z\mid\theta)\,P(Z\mid X,\theta^{(g)})\,\mathrm{d}Z$$
it follows that, for any given model, we only need $P(X,Z\mid\theta)$ and $P(Z\mid X,\theta^{(g)})$ in order to iterate toward $\hat\theta$.
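Schematically, every EM procedure then has the shape below (a sketch; `e_step` and `m_step` are model-specific callbacks supplying exactly those two quantities):

```python
def em(theta, e_step, m_step, n_iter=100):
    """Generic EM loop: theta^(g+1) = argmax_theta Q(theta, theta^g).

    e_step(theta) must return the posterior P(Z | X, theta^g);
    m_step(posterior) must return the maximizer of
    the integral of log P(X, Z | theta) * posterior dZ.
    """
    for _ in range(n_iter):
        posterior = e_step(theta)  # E-step: compute P(Z | X, theta^g)
        theta = m_step(posterior)  # M-step: maximize the Q function
    return theta
```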

IV. Worked Example: Gaussian Mixtures

  • $P(X,Z\mid\theta)$:
    $$P(X,Z\mid\theta)=\prod_{i=1}^nP(x_i,z_i\mid\theta)=\prod_{i=1}^nP(x_i\mid z_i,\theta)P(z_i\mid\theta)=\prod_{i=1}^n\lambda_{z_i}N(x_i\mid\mu_{z_i},\sigma^2_{z_i})$$
  • $P(Z\mid X,\theta^{(g)})$:
    $$P(Z\mid X,\theta^{(g)})=\prod_{i=1}^nP(z_i\mid x_i,\theta^{(g)}),\qquad P(z_i\mid x_i,\theta^{(g)})\overset{**}{=}\frac{\lambda_{z_i}N(x_i\mid\mu_{z_i},\sigma^2_{z_i})}{\sum_{j=1}^k\lambda_j N(x_i\mid\mu_j,\sigma^2_j)}$$

(**) follows from Bayes' theorem, $P(A_i\mid B)=\frac{P(B\mid A_i)P(A_i)}{\sum_j P(B\mid A_j)P(A_j)}$, whose denominator is the law of total probability.

  • Substituting into (4):

    • E-step (taking the expectation):
      $$\begin{aligned}\int_Z\log P(X,Z\mid\theta)P(Z\mid X,\theta^{(g)})\mathrm{d}Z&=\sum_{z_1=1}^k\sum_{z_2=1}^k\cdots\sum_{z_n=1}^k\left[\sum_{i=1}^n\left(\log\lambda_{z_i}+\log N(x_i\mid\mu_{z_i},\sigma^2_{z_i})\right)\prod_{i=1}^nP(z_i\mid x_i,\theta^{(g)})\right]\\&\overset{***}{=}\sum_{i=1}^n\sum_{z_i=1}^k\left(\log\lambda_{z_i}+\log N(x_i\mid\mu_{z_i},\sigma^2_{z_i})\right)P(z_i\mid x_i,\theta^{(g)})\end{aligned}$$

    Where (***) comes from: let $f_i(z_i)=\log\lambda_{z_i}+\log N(x_i\mid\mu_{z_i},\sigma^2_{z_i})$ and $P(z_1,z_2,\dots,z_n)=\prod_{i=1}^nP(z_i\mid x_i,\theta^{(g)})$. Then
    $$\begin{aligned}\text{LHS}&=\sum_{z_1=1}^k\sum_{z_2=1}^k\cdots\sum_{z_n=1}^k\left(f_1(z_1)+f_2(z_2)+\cdots+f_n(z_n)\right)P(z_1,z_2,\dots,z_n)\\&=\sum_{z_1=1}^k\sum_{z_2=1}^k\cdots\sum_{z_n=1}^kf_1(z_1)P(z_1,z_2,\dots,z_n)+\cdots\\&=\sum_{z_1=1}^kf_1(z_1)\sum_{z_2=1}^k\cdots\sum_{z_n=1}^kP(z_1,z_2,\dots,z_n)+\cdots\\&=\sum_{z_1=1}^kf_1(z_1)P(z_1)+\cdots\end{aligned}$$
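The (***) identity is easy to verify by brute force for small $n$ and $k$; a sketch with randomly generated $f$ and a factorized $P$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 4
f = rng.random((n, k))                                     # f[i, z] = f_i(z_i = z)
P = rng.random((n, k)); P /= P.sum(axis=1, keepdims=True)  # P[i, z] = P(z_i = z)

# left side: brute-force sum over all k^n joint assignments (z_1, ..., z_n)
lhs = sum(
    sum(f[i, z[i]] for i in range(n)) * np.prod([P[i, z[i]] for i in range(n)])
    for z in itertools.product(range(k), repeat=n)
)
# right side: n separate one-dimensional sums, as in (***)
rhs = (f * P).sum()
print(np.isclose(lhs, rhs))  # True
```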

    • M-step (the arg max):
      • For $\lambda_j$: set
        $$\frac{\partial}{\partial\lambda_j}\sum_{i=1}^n\sum_{j=1}^k\log\lambda_j\,P(z_i=j\mid x_i,\theta^{(g)})\overset{\text{set}}{=}0\qquad\text{s.t.}\ \sum_{j=1}^k\lambda_j=1$$
        and solve with a Lagrange multiplier, which yields:
        $$\lambda_j^{(g+1)}=\frac{1}{n}\sum_{i=1}^nP(z_i=j\mid x_i,\theta^{(g)})$$
        Interpretation: the average, over all samples, of the posterior responsibility of Gaussian $j$.
      • For $\mu_j,\sigma^2_j$: obtained by setting the (matrix) derivatives to zero.
        In summary, writing $P(j\mid x_i,\theta^{(g)})$ for the responsibility of component $j$ for sample $x_i$:
        $$\begin{aligned}\lambda_j^{(g+1)}&=\frac{1}{n}\sum_{i=1}^nP(j\mid x_i,\theta^{(g)})\\ \mu_j^{(g+1)}&=\frac{\sum_{i=1}^nx_i\,P(j\mid x_i,\theta^{(g)})}{\sum_{i=1}^nP(j\mid x_i,\theta^{(g)})}\\ {\sigma^2_j}^{(g+1)}&=\frac{\sum_{i=1}^n(x_i-\mu_j^{(g+1)})(x_i-\mu_j^{(g+1)})^\mathrm{T}P(j\mid x_i,\theta^{(g)})}{\sum_{i=1}^nP(j\mid x_i,\theta^{(g)})}\end{aligned}$$
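Putting the E-step (**) and these M-step updates together gives a complete EM loop for a one-dimensional mixture. A minimal sketch (the initialization choices and the synthetic data are my own, for illustration only):

```python
import numpy as np

def em_gmm(x, k, n_iter=200, seed=0):
    """EM for a 1-D k-component Gaussian mixture, following Section IV's updates."""
    rng = np.random.default_rng(seed)
    n = len(x)
    lam = np.full(k, 1.0 / k)                  # mixing weights lambda_j
    mu = rng.choice(x, size=k, replace=False)  # initialize means from the data
    var = np.full(k, x.var())                  # initialize variances
    for _ in range(n_iter):
        # E-step: responsibilities gamma[i, j] = P(z_i = j | x_i, theta^g), as in (**)
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        gamma = lam * dens
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: the closed-form updates derived above
        nj = gamma.sum(axis=0)
        lam = nj / n
        mu = (gamma * x[:, None]).sum(axis=0) / nj
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nj
    return lam, mu, var

# hypothetical data drawn from a known two-component mixture
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)])
print(em_gmm(x, k=2))  # recovers weights ~(0.3, 0.7), means ~(-2, 3)
```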

V. Extension of EM: The GEM Algorithm


  1. This example is due to mathtsing. ↩︎

  2. This example is from Li Hang, *Statistical Learning Methods*. ↩︎

  3. The following material is based on Xu Yida's machine learning lectures. ↩︎
